pax_global_header00006660000000000000000000000064132155776510014526gustar00rootroot0000000000000052 comment=d98af95b322d2f7a60d03a87508f0e47420e71ac hypothesis-python-3.44.1/000077500000000000000000000000001321557765100153355ustar00rootroot00000000000000hypothesis-python-3.44.1/.coveragerc000066400000000000000000000007271321557765100174640ustar00rootroot00000000000000[run] branch = True include = **/.tox/*/lib/*/site-packages/hypothesis/*.py **/.tox/*/lib/*/site-packages/hypothesis/**/*.py omit = **/pytestplugin.py **/strategytests.py **/compat*.py **/extra/__init__.py **/.tox/*/lib/*/site-packages/hypothesis/internal/coverage.py [report] exclude_lines = @abc.abstractmethod @abc.abstractproperty NotImplementedError pragma: no cover __repr__ __ne__ __copy__ __deepcopy__ hypothesis-python-3.44.1/.gitattributes000066400000000000000000000000201321557765100202200ustar00rootroot00000000000000* text eol=lfhypothesis-python-3.44.1/.gitignore000066400000000000000000000002421321557765100173230ustar00rootroot00000000000000*.swo *.swp *.pyc venv* .cache .hypothesis docs/_build *.egg-info _build .tox .coverage .runtimes .idea .vagrant .DS_Store deploy_key .pypirc secrets.tar htmlcov hypothesis-python-3.44.1/.isort.cfg000066400000000000000000000001251321557765100172320ustar00rootroot00000000000000[settings] known_third_party = attr, click, django, faker, flaky, numpy, pytz, scipy hypothesis-python-3.44.1/.pyup.yml000066400000000000000000000004621321557765100171350ustar00rootroot00000000000000requirements: - requirements/tools.txt: updates: all pin: True - requirements/test.txt: updates: all pin: True - requirements/benchmark.txt: updates: all pin: True - requirements/coverage.txt: updates: all pin: True schedule: "every week on monday" hypothesis-python-3.44.1/.travis.yml000066400000000000000000000033521321557765100174510ustar00rootroot00000000000000language: c sudo: false env: PYTHONDONTWRITEBYTECODE=x os: - linux branches: only: - "master" cache: apt: true directories: - 
$HOME/.runtimes - $HOME/.venv - $HOME/.cache/pip - $HOME/wheelhouse - $HOME/.stack - $HOME/.local env: global: - BUILD_RUNTIMES=$HOME/.runtimes - FORMAT_ALL=true matrix: # Core tests that we want to run first. - TASK=check-pyup-yml - TASK=check-release-file - TASK=check-shellcheck - TASK=documentation - TASK=lint - TASK=doctest - TASK=check-rst - TASK=check-format - TASK=check-benchmark - TASK=check-coverage - TASK=check-requirements - TASK=check-pypy - TASK=check-py27 - TASK=check-py36 - TASK=check-quality # Less important tests that will probably # pass whenever the above do but are still # worth testing. - TASK=check-unicode - TASK=check-ancient-pip - TASK=check-pure-tracer - TASK=check-py273 - TASK=check-py27-typing - TASK=check-py34 - TASK=check-py35 - TASK=check-nose - TASK=check-pytest28 - TASK=check-faker070 - TASK=check-faker-latest - TASK=check-django18 - TASK=check-django110 - TASK=check-django111 - TASK=check-pandas19 - TASK=check-pandas20 - TASK=check-pandas21 - TASK=deploy script: - python scripts/run_travis_make_task.py matrix: fast_finish: true notifications: email: recipients: - david@drmaciver.com on_success: never on_failure: change addons: apt: packages: - libgmp-dev hypothesis-python-3.44.1/CITATION000066400000000000000000000011251321557765100164710ustar00rootroot00000000000000Please use one of the following samples to cite the hypothesis version (change x.y) from this installation Text: [Hypothesis] Hypothesis x.y, 2016 David R. MacIver, https://github.com/HypothesisWorks/hypothesis-python BibTeX: @misc{Hypothesisx.y, title = {{H}ypothesis x.y}, author = {David R. MacIver}, year = {2016}, howpublished = {\href{https://github.com/HypothesisWorks/hypothesis-python}{\texttt{https://github.com/HypothesisWorks/hypothesis-python}}}, } If you are unsure about which version of hypothesis you are using run: `pip show hypothesis`. 
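The CITATION file above says to run `pip show hypothesis` if you are unsure which version you have installed. If you would rather query it from Python (for example, to fill in the "x.y" in the citation templates programmatically), a small sketch — the helper name is ours, not part of Hypothesis, and it uses `importlib.metadata`, which is standard library on Python 3.8+:

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version string for a distribution, or None.

    dist_name is the name you would pass to ``pip show``, e.g. "hypothesis".
    """
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# e.g. installed_version("hypothesis") -> "3.44.1" in this tree,
# or None if Hypothesis is not installed in the current environment.
print(installed_version("hypothesis"))
```
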
hypothesis-python-3.44.1/CONTRIBUTING.rst000066400000000000000000000231501321557765100177770ustar00rootroot00000000000000============= Contributing ============= First off: It's great that you want to contribute to Hypothesis! Thanks! ------------------ Ways to Contribute ------------------ Hypothesis is a mature yet active project. This means that there are many ways in which you can contribute. For example, it's super useful and highly appreciated if you do any of: * Submit bug reports * Submit feature requests * Write about Hypothesis * Give a talk about Hypothesis * Build libraries and tools on top of Hypothesis outside the main repo * Submit PRs If you build a Hypothesis strategy that you would like to be more widely known please add it to the list of external strategies by preparing a PR against the docs/strategies.rst file. If you find an error in the documentation, please feel free to submit a PR that fixes the error. Spot a tyop? Fix it up and send us a PR! You can read more about how we document Hypothesis in ``guides/documentation.rst`` The process for submitting source code PRs is generally more involved (don't worry, we'll help you through it), so do read the rest of this document first. ----------------------- Copyright and Licensing ----------------------- It's important to make sure that you own the rights to the work you are submitting. If it is done on work time, or you have a particularly onerous contract, make sure you've checked with your employer. All work in Hypothesis is licensed under the terms of the `Mozilla Public License, version 2.0 `_. By submitting a contribution you are agreeing to licence your work under those terms. Finally, if it is not there already, add your name (and a link to your GitHub and email address if you want) to the list of contributors found at the end of this document, in alphabetical order. It doesn't have to be your "real" name (whatever that means), any sort of public identifier is fine. 
In particular a GitHub account is sufficient. ----------------------- The actual contribution ----------------------- OK, so you want to make a contribution and have sorted out the legalese. What now? First off: If you're planning on implementing a new feature, talk to us first! Come `join us on IRC `_, or open an issue. If it's really small feel free to open a work in progress pull request sketching out the idea, but it's best to get feedback from the Hypothesis maintainers before sinking a bunch of work into it. In general work-in-progress pull requests are totally welcome if you want early feedback or help with some of the tricky details. Don't be afraid to ask for help. In order to get merged, a pull request will have to have a green build (naturally) and to be approved by a Hypothesis maintainer (and, depending on what it is, possibly specifically by DRMacIver). The review process is the same one that all changes to Hypothesis go through, regardless of whether you're an established maintainer or entirely new to the project. It's very much intended to be a collaborative one: It's not us telling you what we think is wrong with your code, it's us working with you to produce something better together. We have `a lengthy check list `_ of things we look for in a review. Feel free to have a read of it in advance and go through it yourself if you'd like to. It's not required, but it might speed up the process. Once your pull request has a green build and has passed review, it will be merged to master fairly promptly. This will immediately trigger a release! Don't be scared. If that breaks things, that's our fault not yours - the whole point of this process is to ensure that problems get caught before we merge rather than after. ~~~~~~~~~~~~~~~~ The Release File ~~~~~~~~~~~~~~~~ All changes to Hypothesis get released automatically when they are merged to master. In order to update the version and change log entry, you have to create a release file. 
This is a normal restructured text file called RELEASE.rst that lives in the root of the repository and will be used as the change log entry. It should start with following lines: * RELEASE_TYPE: major * RELEASE_TYPE: minor * RELEASE_TYPE: patch This specifies the component of the version number that should be updated, with the meaning of each component following `semver `_. As a rule of thumb if it's a bug fix it's probably a patch version update, if it's a new feature it's definitely a minor version update, and you probably shouldn't ever need to use a major version update unless you're part of the core team and we've discussed it a lot. This line will be removed from the final change log entry. ~~~~~~~~~ The build ~~~~~~~~~ The build is orchestrated by a giant Makefile which handles installation of the relevant pythons. Actually running the tests is managed by `tox `_, but the Makefile will call out to the relevant tox environments so you mostly don't have to know anything about that unless you want to make changes to the test config. You also mostly don't need to know anything about make except to type 'make' followed by the name of the task you want to run. All of it will be checked on CI so you don't *have* to run anything locally, but you might find it useful to do so: A full Travis run takes about twenty minutes, and there's often a queue, so running a smaller set of tests locally can be helpful. The makefile should be "fairly" portable, but is currently only known to work on Linux or OS X. It *might* work on a BSD or on Windows with cygwin installed, but it hasn't been tried. If you try it and find it doesn't work, please do submit patches to fix that. Some notable commands: 'make format' will reformat your code according to the Hypothesis coding style. You should use this before each commit ideally, but you only really have to use it when you want your code to be ready to merge. 
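Putting the release-file rules above together: a release file for a simple bug fix might look like the following sketch (the wording of the entry is purely illustrative — write whatever describes your actual change; the RELEASE_TYPE line is stripped before the rest becomes the change log entry):

```rst
RELEASE_TYPE: patch

This release fixes a bug where strategy validation could raise an
unhelpful error message. Describe the user-visible change here, in
normal restructured text prose.
```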
You can also use 'make check-format', which will run format and some linting and will then error if you have a git diff. Note: This will error even if you started with a git diff, so if you've got any uncommitted changes this will necessarily report an error. 'make check' will run check-format and all of the tests. Warning: This will take a *very* long time. On Travis the build currently takes more than an hour of total time (it runs in parallel on Travis so you don't have to wait quite that long). If you've got a multi-core machine you can run 'make -j 2' (or any higher number if you want more) to run 2 jobs in parallel, but to be honest you're probably better off letting Travis run this step. You can also run a number of finer grained make tasks - check ``.travis.yml`` for a short list and the Makefile for details. Note: The build requires a lot of different versions of python, so rather than have you install them yourself, the makefile will install them itself in a local directory. This means that the first time you run a task you may have to wait a while as the build downloads and installs the right version of python for you. -------------------- List of Contributors -------------------- The primary author for most of Hypothesis is David R. MacIver (me). However the following people have also contributed work. As well as my thanks, they also have copyright over their individual contributions. 
* `Adam Johnson `_ * `Adam Sven Johnson `_ * `Alex Stapleton `_ * `Alex Willmer `_ (alex@moreati.org.uk) * `Ben Peterson `_ (killthrush@hotmail.com_) * `Charles O'Farrell `_ * `Charlie Tanksley `_ * `Chris Down `_ * `Christopher Martin `_ (ch.martin@gmail.com) * `Cory Benfield `_ * `Cristi Cobzarenco `_ (cristi@reinfer.io) * `David Bonner `_ (dbonner@gmail.com) * `Derek Gustafson `_ * `Dion Misic `_ (dion.misic@gmail.com) * `Florian Bruhin `_ * `follower `_ * `Jeremy Thurgood `_ * `JP Viljoen `_ (froztbyte@froztbyte.net) * `Jonty Wareing `_ (jonty@jonty.co.uk) * `jwg4 `_ * `kbara `_ * `Lee Begg `_ * `marekventur `_ * `Marius Gedminas `_ (marius@gedmin.as) * `Markus Unterwaditzer `_ (markus@unterwaditzer.net) * `Matt Bachmann `_ (bachmann.matt@gmail.com) * `Max Nordlund `_ (max.nordlund@gmail.com) * `Maxim Kulkin `_ (maxim.kulkin@gmail.com) * `mulkieran `_ * `Nicholas Chammas `_ * `Peadar Coyle `_ (peadarcoyle@gmail.com) * `Richard Boulton `_ (richard@tartarus.org) * `Sam Hames `_ * `Saul Shanabrook `_ (s.shanabrook@gmail.com) * `Tariq Khokhar `_ (tariq@khokhar.net) * `Will Hall `_ (wrsh07@gmail.com) * `Will Thompson `_ (will@willthompson.co.uk) * `Zac Hatfield-Dodds `_ (zac.hatfield.dodds@gmail.com) hypothesis-python-3.44.1/LICENSE.txt000066400000000000000000000006351321557765100171640ustar00rootroot00000000000000Copyright (c) 2013, David R. MacIver All code in this repository except where explicitly noted otherwise is released under the Mozilla Public License v 2.0. You can obtain a copy at http://mozilla.org/MPL/2.0/. Some code in this repository comes from other projects. Where applicable, the original copyright and license are noted and any modifications made are released dual licensed with the original license. 
hypothesis-python-3.44.1/Makefile000066400000000000000000000204471321557765100170040ustar00rootroot00000000000000.PHONY: clean documentation DEVELOPMENT_DATABASE?=postgres://whereshouldilive@localhost/whereshouldilive_dev SPHINXBUILD = $(DEV_PYTHON) -m sphinx SPHINX_BUILDDIR = docs/_build ALLSPHINXOPTS = -d $(SPHINX_BUILDDIR)/doctrees docs -W export BUILD_RUNTIMES?=$(HOME)/.cache/hypothesis-build-runtimes export TOX_WORK_DIR=$(BUILD_RUNTIMES)/.tox export COVERAGE_FILE=$(BUILD_RUNTIMES)/.coverage PY27=$(BUILD_RUNTIMES)/snakepit/python2.7 PY273=$(BUILD_RUNTIMES)/snakepit/python2.7.3 PY34=$(BUILD_RUNTIMES)/snakepit/python3.4 PY35=$(BUILD_RUNTIMES)/snakepit/python3.5 PY36=$(BUILD_RUNTIMES)/snakepit/python3.6 PYPY=$(BUILD_RUNTIMES)/snakepit/pypy BEST_PY3=$(PY36) TOOLS=$(BUILD_RUNTIMES)/tools TOX=$(TOOLS)/tox SPHINX_BUILD=$(TOOLS)/sphinx-build ISORT=$(TOOLS)/isort FLAKE8=$(TOOLS)/flake8 PYFORMAT=$(TOOLS)/pyformat RSTLINT=$(TOOLS)/rst-lint PIPCOMPILE=$(TOOLS)/pip-compile TOOL_VIRTUALENV:=$(BUILD_RUNTIMES)/virtualenvs/tools-$(shell scripts/tool-hash.py tools) TOOL_PYTHON=$(TOOL_VIRTUALENV)/bin/python TOOL_PIP=$(TOOL_VIRTUALENV)/bin/pip BENCHMARK_VIRTUALENV:=$(BUILD_RUNTIMES)/virtualenvs/benchmark-$(shell scripts/tool-hash.py benchmark) BENCHMARK_PYTHON=$(BENCHMARK_VIRTUALENV)/bin/python FILES_TO_FORMAT=$(BEST_PY3) scripts/files-to-format.py export PATH:=$(BUILD_RUNTIMES)/snakepit:$(TOOLS):$(PATH) export LC_ALL=en_US.UTF-8 $(PY27): scripts/retry.sh scripts/install.sh 2.7 $(PY273): scripts/retry.sh scripts/install.sh 2.7.3 $(PY34): scripts/retry.sh scripts/install.sh 3.4 $(PY35): scripts/retry.sh scripts/install.sh 3.5 $(PY36): scripts/retry.sh scripts/install.sh 3.6 $(PYPY): scripts/retry.sh scripts/install.sh pypy $(TOOL_VIRTUALENV): $(BEST_PY3) $(BEST_PY3) -m virtualenv $(TOOL_VIRTUALENV) $(TOOL_PIP) install -r requirements/tools.txt $(BENCHMARK_VIRTUALENV): $(BEST_PY3) rm -rf $(BUILD_RUNTIMES)/virtualenvs/benchmark-* $(BEST_PY3) -m virtualenv $(BENCHMARK_VIRTUALENV) 
$(BENCHMARK_PYTHON) -m pip install -r requirements/benchmark.txt $(TOOLS): $(TOOL_VIRTUALENV) mkdir -p $(TOOLS) install-tools: $(TOOLS) format: $(PYFORMAT) $(ISORT) $(FILES_TO_FORMAT) | xargs $(TOOL_PYTHON) scripts/enforce_header.py # isort will sort packages differently depending on whether they're installed $(FILES_TO_FORMAT) | xargs env -i PATH="$(PATH)" $(ISORT) -p hypothesis -ls -m 2 -w 75 \ -a "from __future__ import absolute_import, print_function, division" \ -rc src tests examples $(FILES_TO_FORMAT) | xargs $(PYFORMAT) -i lint: $(FLAKE8) $(FLAKE8) src tests --exclude=compat.py,test_reflection.py,test_imports.py,tests/py2,test_lambda_formatting.py check-pyup-yml: $(TOOL_VIRTUALENV) $(TOOL_PYTHON) scripts/validate_pyup.py check-release-file: $(BEST_PY3) $(BEST_PY3) scripts/check-release-file.py deploy: $(TOOL_VIRTUALENV) $(TOOL_PYTHON) scripts/deploy.py check-format: format find src tests -name "*.py" | xargs $(TOOL_PYTHON) scripts/check_encoding_header.py git diff --exit-code install-core: $(PY27) $(PYPY) $(BEST_PY3) $(TOX) STACK=$(HOME)/.local/bin/stack GHC=$(HOME)/.local/bin/ghc SHELLCHECK=$(HOME)/.local/bin/shellcheck $(STACK): mkdir -p ~/.local/bin curl -L https://www.stackage.org/stack/linux-x86_64 | tar xz --wildcards --strip-components=1 -C $(HOME)/.local/bin '*/stack' $(GHC): $(STACK) $(STACK) setup $(SHELLCHECK): $(GHC) $(STACK) install ShellCheck check-shellcheck: $(SHELLCHECK) shellcheck scripts/*.sh check-py27: $(PY27) $(TOX) $(TOX) --recreate -e py27-full check-py273: $(PY273) $(TOX) $(TOX) --recreate -e oldpy27 check-py27-typing: $(PY27) $(TOX) $(TOX) --recreate -e py27typing check-py34: $(PY34) $(TOX) $(TOX) --recreate -e py34-full check-py35: $(PY35) $(TOX) $(TOX) --recreate -e py35-full check-py36: $(BEST_PY3) $(TOX) $(TOX) --recreate -e py36-full check-pypy: $(PYPY) $(TOX) $(TOX) --recreate -e pypy-full check-pypy-with-tracer: $(PYPY) $(TOX) $(TOX) --recreate -e pypy-with-tracer check-nose: $(TOX) $(TOX) --recreate -e nose check-pytest30: 
$(TOX) $(TOX) --recreate -e pytest30 check-pytest28: $(TOX) $(TOX) --recreate -e pytest28 check-quality: $(TOX) $(TOX) --recreate -e quality check-ancient-pip: $(PY273) scripts/check-ancient-pip.sh $(PY273) check-pytest: check-pytest28 check-pytest30 check-faker070: $(TOX) $(TOX) --recreate -e faker070 check-faker-latest: $(TOX) $(TOX) --recreate -e faker-latest check-django18: $(TOX) $(TOX) --recreate -e django18 check-django110: $(TOX) $(TOX) --recreate -e django110 check-django111: $(TOX) $(TOX) --recreate -e django111 check-django: check-django18 check-django110 check-django111 check-pandas18: $(TOX) $(TOX) --recreate -e pandas18 check-pandas19: $(TOX) $(TOX) --recreate -e pandas19 check-pandas20: $(TOX) $(TOX) --recreate -e pandas20 check-pandas21: $(TOX) $(TOX) --recreate -e pandas21 check-examples2: $(TOX) $(PY27) $(TOX) --recreate -e examples2 check-examples3: $(TOX) $(TOX) --recreate -e examples3 check-coverage: $(TOX) $(TOX) --recreate -e coverage check-pure-tracer: $(TOX) $(TOX) --recreate -e pure-tracer check-unicode: $(TOX) $(PY27) $(TOX) --recreate -e unicode check-noformat: check-coverage check-py26 check-py27 check-py34 check-py35 check-pypy check-django check-pytest check: check-format check-noformat check-fast: lint $(PYPY) $(PY36) $(TOX) $(TOX) --recreate -e pypy-brief $(TOX) --recreate -e py36-prettyquick check-rst: $(RSTLINT) $(FLAKE8) $(RSTLINT) CONTRIBUTING.rst README.rst $(RSTLINT) guides/*.rst $(FLAKE8) --select=W191,W291,W292,W293,W391 *.rst docs/*.rst compile-requirements: $(PIPCOMPILE) $(PIPCOMPILE) requirements/benchmark.in --output-file requirements/benchmark.txt $(PIPCOMPILE) requirements/test.in --output-file requirements/test.txt $(PIPCOMPILE) requirements/tools.in --output-file requirements/tools.txt $(PIPCOMPILE) requirements/typing.in --output-file requirements/typing.txt $(PIPCOMPILE) requirements/coverage.in --output-file requirements/coverage.txt upgrade-requirements: $(PIPCOMPILE) --upgrade requirements/benchmark.in 
--output-file requirements/benchmark.txt $(PIPCOMPILE) --upgrade requirements/test.in --output-file requirements/test.txt $(PIPCOMPILE) --upgrade requirements/tools.in --output-file requirements/tools.txt $(PIPCOMPILE) --upgrade requirements/typing.in --output-file requirements/typing.txt $(PIPCOMPILE) --upgrade requirements/coverage.in --output-file requirements/coverage.txt check-requirements: compile-requirements git diff --exit-code secrets.tar.enc: deploy_key .pypirc rm -f secrets.tar secrets.tar.enc tar -cf secrets.tar deploy_key .pypirc travis encrypt-file secrets.tar rm secrets.tar check-benchmark: $(BENCHMARK_VIRTUALENV) PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --check --nruns=100 build-new-benchmark-data: $(BENCHMARK_VIRTUALENV) PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --skip-existing --nruns=1000 update-improved-benchmark-data: $(BENCHMARK_VIRTUALENV) PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --update=improved --nruns=1000 update-all-benchmark-data: $(BENCHMARK_VIRTUALENV) PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --update=all --nruns=1000 update-benchmark-headers: $(BENCHMARK_VIRTUALENV) PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --only-update-headers $(TOX): $(BEST_PY3) tox.ini $(TOOLS) rm -f $(TOX) ln -sf $(TOOL_VIRTUALENV)/bin/tox $(TOX) touch $(TOOL_VIRTUALENV)/bin/tox $(TOX) $(SPHINX_BUILD): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/sphinx-build $(SPHINX_BUILD) $(PYFORMAT): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/pyformat $(PYFORMAT) $(ISORT): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/isort $(ISORT) $(RSTLINT): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/rst-lint $(RSTLINT) $(FLAKE8): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/flake8 $(FLAKE8) $(PIPCOMPILE): $(TOOLS) ln -sf $(TOOL_VIRTUALENV)/bin/pip-compile $(PIPCOMPILE) clean: rm -rf .tox rm -rf .hypothesis rm -rf docs/_build rm -rf $(TOOLS) rm -rf $(BUILD_RUNTIMES)/snakepit rm -rf $(BUILD_RUNTIMES)/virtualenvs find src tests -name 
"*.pyc" -delete find src tests -name "__pycache__" -delete .PHONY: RELEASE.rst RELEASE.rst: documentation: $(SPHINX_BUILD) docs/*.rst RELEASE.rst scripts/build-documentation.sh $(SPHINX_BUILD) $(PY36) doctest: $(SPHINX_BUILD) docs/*.rst PYTHONPATH=src $(SPHINX_BUILD) -W -b doctest -d docs/_build/doctrees docs docs/_build/html fix_doctests: $(TOOL_VIRTUALENV) PYTHONPATH=src $(TOOL_PYTHON) scripts/fix_doctests.py hypothesis-python-3.44.1/README.rst000066400000000000000000000041241321557765100170250ustar00rootroot00000000000000========== Hypothesis ========== Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work. e.g. .. code-block:: python @given(st.lists( st.floats(allow_nan=False, allow_infinity=False), min_size=1)) def test_mean(xs): assert min(xs) <= mean(xs) <= max(xs) .. code-block:: Falsifying example: test_mean( xs=[1.7976321109618856e+308, 6.102390043022755e+303] ) Hypothesis is extremely practical and advances the state of the art of unit testing by some way. It's easy to use, stable, and powerful. If you're not using Hypothesis to test your project then you're missing out. ------------------------ Quick Start/Installation ------------------------ If you just want to get started: .. code-block:: pip install hypothesis ----------------- Links of interest ----------------- The main Hypothesis site is at `hypothesis.works `_, and contains a lot of good introductory and explanatory material. Extensive documentation and examples of usage are `available at readthedocs `_. If you want to talk to people about using Hypothesis, `we have both an IRC channel and a mailing list `_. If you want to receive occasional updates about Hypothesis, including useful tips and tricks, there's a `TinyLetter mailing list to sign up for them `_. 
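As an aside on the README's quick-start example above: the falsifying example Hypothesis reports really does break ``test_mean``, because the intermediate sum of the two large floats overflows to infinity. You can check this by hand with the obvious naive definition of ``mean`` (which is presumably what the README's test is exercising):

```python
import math

def mean(xs):
    # Naive mean: the intermediate sum can overflow for very large floats.
    return sum(xs) / len(xs)

# The falsifying example from the README above:
xs = [1.7976321109618856e+308, 6.102390043022755e+303]

# The sum exceeds the largest finite double, so it overflows to inf,
# and inf <= max(xs) is False -- hence the property fails.
print(mean(xs))
print(min(xs) <= mean(xs) <= max(xs))
```
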
If you want to contribute to Hypothesis, `instructions are here `_. If you want to hear from people who are already using Hypothesis, some of them `have written about it `_. If you want to create a downstream package of Hypothesis, please read `these guidelines for packagers `_. hypothesis-python-3.44.1/Vagrantfile000066400000000000000000000016221321557765100175230ustar00rootroot00000000000000# -*- mode: ruby -*- # vi: set ft=ruby : # This is a trivial Vagrantfile designed to simplify development of Hypothesis on Windows, # where the normal make based build system doesn't work, or anywhere else where you would # prefer a clean environment for Hypothesis development. It doesn't do anything more than spin # up a suitable local VM for use with vagrant ssh. You should then use the Makefile from within # that VM. PROVISION = <> $HOME/.bashrc fi cd /vagrant/ make install-tools PROVISION Vagrant.configure(2) do |config| config.vm.provider "virtualbox" do |v| v.memory = 1024 end config.vm.box = "ubuntu/trusty64" config.vm.provision "shell", inline: PROVISION, privileged: false end hypothesis-python-3.44.1/appveyor.yml000066400000000000000000000046641321557765100177370ustar00rootroot00000000000000environment: global: # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the # /E:ON and /V:ON options are not enabled in the batch script interpreter # See: http://stackoverflow.com/a/13751649/163740 CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\scripts\\run_with_env.cmd" TWINE_USERNAME: DRMacIver TWINE_PASSWORD: secure: TpmpMHwgS4xxcbbzROle2xyb3i+VPP8cT5ZL4dF/UrA= matrix: - PYTHON: "C:\\Python27-x64" PYTHON_VERSION: "2.7.13" PYTHON_ARCH: "64" - PYTHON: "C:\\Python35-x64" PYTHON_VERSION: "3.5.3" PYTHON_ARCH: "64" - PYTHON: "C:\\Python36-x64" PYTHON_VERSION: "3.6.1" PYTHON_ARCH: "64" # This matches both branches and tags (no, I don't know why either). # We need a match both for pushes to master, and our release tags which # trigger wheel builds. 
branches: only: - master - /^\d+\.\d+\.\d+$/ artifacts: - path: 'dist\*.whl' name: wheel install: - ECHO "Filesystem root:" - ps: "ls \"C:/\"" - ECHO "Installed SDKs:" - ps: "ls \"C:/Program Files/Microsoft SDKs/Windows\"" # Install Python (from the official .msi of http://python.org) and pip when # not already installed. - "powershell ./scripts/install.ps1" # Prepend newly installed Python to the PATH of this build (this cannot be # done from inside the powershell script as it would require to restart # the parent CMD process). - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%" # Check that we have the expected version and architecture for Python - "python --version" - "python -c \"import struct; print(struct.calcsize('P') * 8)\"" - "%CMD_IN_ENV% python -m pip.__main__ install --upgrade setuptools pip wheel twine" - "%CMD_IN_ENV% python -m pip.__main__ install setuptools -rrequirements/test.txt" - "%CMD_IN_ENV% python -m pip.__main__ install .[all]" - "%CMD_IN_ENV% python setup.py bdist_wheel --dist-dir dist" deploy_script: - ps: "if ($env:APPVEYOR_REPO_TAG -eq $TRUE) { python -m twine upload dist/* }" build: false # Not a C# project, build stuff at the test step instead. test_script: # Build the compiled extension and run the project tests - "%CMD_IN_ENV% python -m pytest -n 0 tests/cover" - "%CMD_IN_ENV% python -m pytest -n 0 tests/datetime" - "%CMD_IN_ENV% python -m pytest -n 0 tests/fakefactory" - "%CMD_IN_ENV% python -m pip.__main__ uninstall flaky -y" - "%CMD_IN_ENV% python -m pytest -n 0 tests/pytest -p pytester --runpytest subprocess" hypothesis-python-3.44.1/benchmark-data/000077500000000000000000000000001321557765100201765ustar00rootroot00000000000000hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=always000066400000000000000000000021231321557765100300430ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "always". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 402 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 6.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 6 [1000 times] to 6 [1000 times] bytes. # * Median size: 6 # * 99% of examples had at least 6 bytes # * 99% of examples had at most 6 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWTKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTAZWGAY2UBAAR2KANAA====END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=array_average000066400000000000000000000102501321557765100313530ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "array_average". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 416 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 718.32 bytes, standard deviation: 529.53 bytes # # Additional interesting statistics: # # * Ranging from 6 [6 times] to 3226 [once] bytes. 
# * Median size: 824 # * 99% of examples had at least 42 bytes # * 99% of examples had at most 1723 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3240: STARTPCOELGB3SISDODCEV7JDC5Q3AT75ZKZI4RUQZWPDNFR66LXGJOZNVGEK5ZUX5ACEEIA5J35PT57776767H5PL6RLKZ537IXGX2PTTV75JRC47PQ7FPG72KPG2CZ6YZ4O7R7NKGPPK7XUWS7XV2XP2KPWWMCQWWNDX5K5THPUZP7B5BNLKLKUCWO3OPRLWPXZBJ6ZUPG37ZX5K4TNR5OPPM7IRYUFHFQW7W36SUNQRNK3E66JXBKTCLXW67E4PNX5TBWMY3L7Z5U56UJSC5K5N42TMDMF4WSSMOURNJEULG3J6ZO2RN6ZNTE5HK4RM7WWKVIXYQB2KYEMDXD5WKLAH4OA7JUZ65MLF2IWK4TWRZ6WIB5WGT2WHDBBVAEVABRL2BMPUT3AOQJGQBNEIKLW5K43UCNBOSQ3DAK5G5MU6EORZRHGZTTAAMSOVYMBMWR4S3GRA7X36TJLFQGW4GBD24BWI2OO6WMQ62SJGAQIKPNT6ZIZTL2STOXP335ZP36AFMM3AFFCVDALYFAS7AA6ATO6AHLMEVFYIPWEGXQAKTUXA3K7WQEALGBVYOXPQDRAYXC4U77XH5LMCRGFEIH53C3VEUITCWAZSATTUYECITLENELNEUY3FGP3R3QIFTIOWBOG5ISSGU4DUILFTAZY54KJPSIITBECJTVUISO3IQCIO4YNDWBESCQ676WPGJAHDYN6UJULGENRGK4ESRL7RUYXGHZ7TYZSVPFK4JIALMNJA3G5ZTDOOFNW3N2EQUDDN5QIDWKW6O6E4V2QJLU5E2FOXBDYMKOCMLQND2KAXHG5MNQZCNB6TW5EFDIBJT6IOM6N7B3KSM7FMM5MOOU3YJVQKKGILG2VGJLJNISZDPISZD2GIUDQFCKAEQ6WBBRYQLVBKKYH5RF34RTGO2CJD5GRWHMWUCPTVKIUHVEUDVHBXGKNSI526KWRIJYFVZ6GD2DTAKE5IWHUZRB6WMN2M4R4FSHGC3KSBLFIUOMO2ANP7BODS2BPU4NYLGPURB3NJJKXUZCJZSCNBHNQ6EEG2XNFHSALLKEXTEQGNDYQY4VKAMJKRNHFMF3BXFIBBRABPGPSYSTQEHIDSUEIKHCLZ5DIIBJBTSMNPXRBAJTOH4KG2QIFZ6RJGTBYRLZKAPZ2VLACBMQJ4YTE4KX2BF5ZITSSTBWGJ4NGVR6NG4ADESGQERTCI5HATX5DF2MK7LZCZYTZQVKJIFMZXDJYFN5EWODLN6RXJOEWDRLULTCEAXRPXAXSJNICQFVELNKQ4U6VXCTSJK3NH6EZ3AIFMY5DWEKKJRO25B634VAK6EL6QFK3ZKAIZCIJZFBHDUZJSGMKW6VWNEKNURARLE3TSFFDLF2KAVPB52YDBKFQMOC3SCW33BXIRIJRTQIIRGCSBTLIUCQCVRPVYLTFCIB6CWRZX43MUXHFBVNXV6RAPIUW3K5JVEELEXCEFKI7TOE3RLVGAQ6EK2KYMC3IN6YOS7AE7FQOKPJX2QTVP3RODEKZ6F5RQNJCLMQMOQBLLT455J2TODLWDTTCQGWLO3DSO6VU6VOIXIQSWQFKOBKW4WNL3PQ25GIWCZLBFOKICMLLYDLSBZCRBMVNT5FOTZWHXWMFHZOUGKKYJZUK4M6BZ54UVXW4XBIHCCR3KS6GNWFSRK6QCQHM7MXQKRC3OAOAKGH7BZOZD5GK372J3PYJJDDU4I24VKHIIV36OSXC3YBHEDKYJNTGPZFCHU24YZQMARYMAGR76EOIB532UQPTTTSOF4MUQB2LWZHRV
I4CQMZKAJMCQQ725HSKRKWFK3GHGXK2HY26HN62IKMPV3HKA46RR2GMZUQKXJYZH6FYVRMODTJDRBKVZEKR7JONN3PYRYF4K7TASJRERNJ3NS3V7N7NRWR5LTRP255LEWKWXOQ2EQ6VAHUPIGGE2DVEMBPDTDQXU2ZIITO2TQTU7LDHMOQG7FDLXMIYAB6AQZOSVU62SSSHIXC5JRPFWRA7RT5KRLWHULBA4OOP5OQKEIUE3CWWJ7Z7A5AX3CFUGYEZHR33QMVTRV5UKH3C2HVDLSCBRTJ3JPDPUGCBJYMJIZICTZUZKSVYAUAJEQHONJ6XXIE6KXD22QIHKSQ6JXNETTXJN3VBS6JYAVJFNBLVQAQPURY33FHNS5AXR77FSELFUENWOZVBP2AZ5ZIJPYQX6SI4E723ZTQPKSP5CA2IROVGL5MMYOB5JJCZDTDVUKWGJYHA5BVKANSV7VCJFWROE25F3QPENZFUGXMYFOGWZGA2SYLDQYNTV65QJ4T2OU2FTKPTSZCZSXJI2LJNYADCETQDXDDIC7ZI73AEOGK3J6L6MZTGP6TALWP7QYIQ44NHNJGB3XHORFLQRXLHEWLZE6NKC3OZBGY5O6HZOHXIP6B2S2S6FFD5HAGO3ONE2U6AMKXZZJS5OD7BU6DK7OIYK6T2RTWXOPTMBRH4RRLVQZTGRZBTNMCZDEDJD4PZEC3LXJTFQCJ4CYKVOQSZ5RVEJN5GV7VQ5JYDX2QJUZ5IOWB6CPQ2J5EFP5IZVMN3I5UE37YCKVYWUUZISE55SVVS6QK3RRAGJCB3ANCYC67J4YUWOZWIDAD65E7UGTVIHSMLJCNJ5TDS5Y4D23EACQ25OGJEM226DUGN4NMC5VY6SRGMAMEWYDFB2KBUYZ54NM75ZQRT5CJZCMZZOWYZASGMCM45W4P725A2D3NLUY63YCD4DB7XDLXOHYUMJMH4F6SOOVNIGHSJRP6N4QNGY3WQ73Y3QX3W2QRIFTFDLQR6NZ5HPZNPHDTLULM6UEFOSDWXCVAPUYGK6KEOQNQROY5P52P64YBGZR3YNVB6GUQYRTHBYDAJ7TZACRSL5YUKTAEC6LXZVI3WASDBLFZ7JOLTWPYOT2KFOJC4HU2PLZ6DLUC5TA3HON76YVYG3GWSIJ4GU55ALGU4OM3NE5ADOYS6A5EV3WKUKXVVJ4TPEGVB22DQ7VXQI2XWLIFEOUKNIJL62TU5CQAI44R2TRLVM3MIDTLG3HGXWEI6XJL554K2M3KXYXQRMTTGAGR3DJO5OMG5THO7H5VHV4E5O3HB4GJEL6ZQFZA6N2LDWG75ZUNKXFT7P2ACZRF7URANQBIJFT7G6LGBWEYFWW43MQNHUBENBBNYA3L5XJWD27Z5FR65TZUXGGRWPDWRDH2K67GERDB3Q2JI42KY3L3IR4HX7P27L4757XH5OL4R47H77HVLZCNA====END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=lower_bound000066400000000000000000000074351321557765100310750ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. 
# # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 404 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 269.65 bytes, standard deviation: 101.83 bytes # # Additional interesting statistics: # # * Ranging from 29 [3 times] to 727 [once] bytes. # * Median size: 267 # * 99% of examples had at least 47 bytes # * 99% of examples had at most 551 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 2848: STARTPCOFLGBZWISDODCEV7ZGH3DPCQEXBU2VCTZDJBXM6GSNBXKV7EALAQ2OI4WSYECLEIJ6Y7367T5OX347X57PZ5XVPM57X65L5XT7XKZP777PFNW6VN454Z5H5PTX2YNOM76NRUFNPZ5NH3P3NV76M6635S65V34PH7XWONTVHP25N5LWXTH5YTO7NIRXHPOVZ6Z7IBPW32V4GMXNQML55NKWS3WP22TPVUNHVKLFLXLJ5V2PSPNZGC6ZD3X5LYDF24DFWNUW7TBOXJ3WGZ45HZKX35DVP34GO2LCE7QD6P4ZUOSTZ4K36WLNRVGSLIJ7XP3D63BBQ52WLDYS2OKVJF7B5VFK6TBJORXXVCZLJVXY62OHKVYQ5LF4F74GADD5UAWR4OOQWYOOXGIJHUNEUX6E3HXTMKUM5HJU4EPWZIVSOADLLKT4F2GWRSSJFN7XJLC7N7KJVRLEZQWWBK43LSXCVEKPEXQYAI2PMPAVT5GPIJP3Q5FHXBTL627HA625TVFJLIVFX2CDWCKX3O44WU3BVNHW2OHUXWZNOQAP3YFQRVT3S2FG3QDLW6WLZG3KUMKU2VMOB5EDZEFINANFQXHVCIH7WHSV5LS6KBBJYRRX3O4E3G7CRK4WFAPS56DAXVV67JVQI5F5OTIFRK2PTLVVFIV44BT64WHSXPLEVSUTPZCYUSMFFM6XFGKNCQSNLPMNG46XXZNCM6YADR4322YFQ2KCWI6CIPVLLDAID55PMZETTL5KB3OKOCTT6TZBTVWBA2RUEPAJXVPER2D6A2H6V7Z6QCCFBXXPWTDII5OPFJBJ5NR2VVQYDEKSNTONRZVJIOJT7PLOJJJ2UOIZDPVLFPLKSOS4ECMEYRQN5DCWDRJUZ7EMAOT5USVEOOEAWBBRVJ5YKLH4AFAVQLS3VCZ7UPKUOESJL5CX7MKR4CE53ZAVFCKPPPPRIV22QQ5MQ6CFNEZVLLFVRZ6JHCGRV24FH53DXZKJHIMQ4XHEQGW6UMWYPKOGJXGAHOKBLOAQ7LCBMAUUT7FS4PIZX4CUKPXJM62DM2AFKFSF4URQYK4G5SPNX244AHTQKXYHBDCDCIR7XSYFLZFEUOERY3PBHUEUWQIVRVEL5CNJQI53RIC7BF2EEQVL3KVBAPSKCLS3DHJNN2EUPK7ECV5UQCZDWEBGSKRXNJ4DCRBQ7EWDPXSYXAGJX7RA7TRZS43ICMDEVBWQLXBRLILJWSQOHV32ALKLRX4J63G6MADORHDQ5DSSRQSOHDMLG4T7B7DWMNI6TSBU3XLCRY7LMXWPT5I7UEY
2P7XPOXJWGP6RAIUERZZ24GR2BUM3XLUCMMX7MDSWQM64Q7MU73SDHSEAYSHBQPDGRUGLD573QOPLMXW5J53OJJYNKLWCKNOIUZQBHBJZ2UYVFMJWFZ5WXGMD77K2XBPBB7INSLQHA2443XARU3L3E5LSHIYMMGLEEREPFURQ7BBZSEIEUYQJYA57AR3S5DZRLLWUW75RVM3LILB5BKQJEBDPGJWKR5CIRHBIF4IDMDIJRPFUEDKBVKLA4ULJENYKQOTWIMHIS2VK6IW6N2ZRI72VJJBIWDVV47C2K2QECZPSAYIE62UZSCPBYM7QIOQGXVK5SAVSDVJPFC6AR7VNE4W2UN4XKVNQCVCSNUYQFHIAQVVQZRFVIJG6LKKTN5AIDTAIQIYLDHPRUAS2UWFCIQU2RQITDVNMJRHIYO3YZLCHL3UI3KFA4YRRRHKWECZNSZSSTFAH72X46GEYE5N2SKOACYANVWOANCQDHC4NL3TMSKPVYAPGCMOCPJGSRI2SCVVLD2I5OWOBBZ6ALY7EMVT2JZSDCQ4RYL2SCBOL5J5IYREWQNCJIXLT3HHPDEFNCBMJJLNJSFDMZAJ5LWSRNY2O66VRJSR72PYX4SVBH2RKQXPMSNTYRTLOHOQSJJGENMDKJHUAF3CNE3WFT3XRU32PZVALEFHITZSZQQVACOZIISZQQ5I2KVK5X5V4RWJVOHAXPULQVMDHNRB6HDQCT7XAVMLJUNALQW6T4C5O5RK3SFWYHP44FQBXWKCXNKGXQIJ22GGL2ZSZYUMSA73KBC3INLFAMKPOEKCJRRCIIVGPOTZGVKIXOK66R2CAU3C6DIM2BDXT2ZE7NAKFIM4EICW3VO4WTPBYQJ4QUOAVXNDY43ERSETIH3EDQGP4V656IMA3PMYYWFCYG47FEXGXA3D7F2GHVJUWVICDYW7OAMKKYIKFIUZ4GS3JUFDKB3NC7MLBYUTCDSTQR4QRBEAFAKONSE727OM6MMDQGDBMWXX3CGR5NRGJGFZYHDGNI6JOUWU3EHHGWCCKAOAMFSAC4JCIBBYPE7UH6IDBVA3YRIASJF3Y6LKESLWYOROJ4DWNMAVHYMSZYLH7Y5G5ZEASAIBV57A2PDE5DKABT7SDN6EY76OV5YGEVEOQ4BOWQRWUBS6CEC6QFEKWDWC5GWHYMAN7M5QUBSZ2GUFTWMT2YRQWRZ4KOBYX6OWTRLOPVPGNOH4Y7IIWETDHCIUH4UPIPQKWSXR5T5XCAKMC2ZXESJEON3B4TRVIB4JURZSMGAQR462GEBR4K2KUPK57GCND5Y5MMTZZ4CYF3DIAJBR42KUMUMCPQ52DYONIGCUGN24OAQEN3IB35LVNOX75FRSAUTUOH4BTQU5MCSBC5VIZESA6JRGRM2SV7AG34KXZVHKZHMSQL43W2A6AL7AYK22TKIYAE66HMRJZJZ4CKWEBNKFEBK4BDDRYOMBLKYAKXXUOP5IWIMSOIM55GH2CDRZVFAIDVS2PBCMATHHNQ4H76H67LY6XZ6PX77XYJVCPZ337IP3YCCA=END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=never000066400000000000000000000103221321557765100276620ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 401 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1195.67 bytes, standard deviation: 221.57 bytes # # Additional interesting statistics: # # * Ranging from 694 [once] to 2424 [once] bytes. # * Median size: 1178 # * 99% of examples had at least 786 bytes # * 99% of examples had at most 1936 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3288: STARTPCOF3GBZVYSDSESEV7ZFC4QJ3SL34SVDWUXGDZGSMYYHP37U64MIZD2WAKE4QIHJRO43SOP767R7O7767PV7PDZ7X37MZKPXT5PTTD6PONXX5FQZZ4KM6NCKHTLYQN5L7X6FU3L5XZV7DX3IHHPON5DZSZLOGO667PGJWNJZ23L7K6POXLR6O3XRRRW7PCEVFHC6XUXDOWX7YOJ77OXLKYVZI7GPRHJGP6TNRNVFQZMRZV7D5BLEPWEFCUR6YHE7NXLWM34W2SDD3PHZ5TBMFBKBV4L2TOBB362BMNSHIB5OHNA5P6LNY6OGTPLNHM3U4FZSTMKXVIZUZQP6TJM47NRJP23YZCJTK33NIOIUAMFIP5XSDC6MHAJ5GC5MMXV5CBFNFEQ3L6XVQWNQ2FCLKQMMR3NQ3EZYXPQGVZV5C3SEYYRWWGFTOZLA6ETNMJ4Q57HAKT3KMH7QRUFOV5UUFKEMZIQ62RRYDOZAEQWC3BO5Q4L7H3OF2I6NI4CXD4K5FNCG7XZFTCKXQGV4KI4LTR66RM2TLRP3D2EFXM4DB5ZRLC2WFVHVFZZMTTLG7WHSEAACONM7P4T3CS3YLPAFOZP5A6CHCBHZJFETH6TDYYPBYN7XBV4ON56ZLUIEO3JN6O56VTK6LLQ4ASE4IGQM5WN6EGSLSG2A6LM3K2N6RZ2OAHVBULMDEQFP4H5Y4IW2TPK2KRTQY3GQ2JPY3GNGYFKBHUXCIFTZNW4F5OQGAFMSAJAICNICTZCOATKIS4KE2NZM3W7AME5DLVV2QHODOYSNOYIU6UISTMCBPUCXRFMKOVGTSKGKEMHVERQLERBSZDHQAJDPJQRBC5NRBNCV5TFNMCE6PNXIRK4CGVXRUKPYVORFLUNLQL5S2Z4CGQ3IWOST5ZJ6BLGQIS2M6NS3SNXLDO6SBBMS4KUH2PUP27G3JNDWBMTUDCWWMKEQY33IMMTPCS2BMYBVNH2EP4LO2WYXU7ECGYQD63GVCBQFNQK2HENN6HYTFBVAFWA7UTIVK4UKCFMBRKLZC4YTGFYDF453RI3VBGHEEBJFXLGJIHOEDCG5IZBDW7KC72GEZYDV3PZWCA
HJRGZNQRSHRDDBPK43AFNQNO2S7K5FFJ7FQNN6LTOSQ3FPC24DUGGIXPOJORZHXPSAG4BY2UFXCZTJQHRKUJSXWHUVMWQVF66SM2NPUGFUNF2IDP25UAEAJCVADB2VEWGYPC2XEY6XRUQOLWCAOQTRDBL5EC343C443IJUC2WGM2NKMWIDD6KEQLUWZ2N5GRSOTN2UNKBI2GSF33NNUUWB4OYZ25KYWH4IW6ANYJGK3XTILUJZXEMSAXASWWWY2CRUYNTBDXVIXU6SB7DMDH6KTKPJDPGZY7UYUKRRAKS2I7N62VSEIJTWQ4DKIBPRMW5X3F7BESXKB4JQKVL6CM3RMWIJZZOZMKXETSCDQDQTBDBLY3IU2S2X3HYOCHVTNQSGFIBHKQOEU6NA35SAJJQMJVHVN5IFFIMFNWQKOSR3XSAJ5VBA3EULFNNN3BWJF5QZAHO5C4RMIGNXQV2NVBUY46BDZZJWVR6N7JJLRNESJSHIZMT4SJPHW7HMAO7Z4BTOBGYHFZDXO6NXSCQHOYR5QVCMGYJ2ENFQWOAUKT5WFOBQKCUG4ESBN7IFZXY7VMBGOWTAPZDVQQOVGWLZUIDIAQBTRXHJVF4HVWA2MJRZCLXQVWXDV2NYGAW4HD6VWJPWBLFLKJIR2CDKDMG545HYQLV6KDTFUHNEEFGVZRG5MZCAEISL2E27GBLBGURLCCG7SRMNSRUBN4N2XOTZUWCTKQJMM4JV2CXNFEFWSWIVGLZVG2JHGHVN6ZCJNAVGKVH4AYVISGEZXOQQYNC74NKTJ2XKGTFVHHV63F7ARAXKKW7JWY4GJUV5QZBJGRW4OCVQ4CEHL2T73RUNOVE6H5HUREXFWHZAK4ACIARRZXMHK54BMOKNEWWQFPEUJG6RW2MPVDVGXZQRSWJZAANUIXHHDTRZFW3J6SSVYMI6WYOHORLKE4XNABGYUU4RI2PCNFDM6NENTGSR3VZFVPIS6WJX6S3E3URKQIFXIEE34PRKQ63H4BOPSG7GOHTE34KALNKPUHXIWANJXOLPZBD6SFKGK6CSVSJBCJ5MERU6R2X266UGCXJ5NUDGXZR4ESYWBVYNRUMQVH4ABJXJPNGQWUV2VQV2KNNJHJQJVEWNKE773GN3I6F7ARZSFNTNZFFEPMHSBZXVWI2VVV2KLTXH2WTHPJRUYJH2F6XYGZLSIR26HMNP2NPHM43IN7THVTM7WRKMHVVGPX7M3EP34VMNAOORUWNZQCT43ITWIQRFVDEKLP2WN3ORSAHVRJ7UPNT34T4EHF2BRADX7ILHJQYEVQ57SLQFJN54LO4CUHSX3SZHUKZ3VYUHNTAST5QMZTWVTVPD5UAUEAMWVGHICRJKTX3V6WFABCXC4NYBCJ2N3FE4LFVKHGDWTJYGTJ634PCNKGMAEEATL5SRJTUZ2HIFXUWRUBAKWVNLS73Q2QPM6BKB6VR2OFD4VTIVK5T7V4Z5JL7KLSHM7FVSNIT5NLGYMTI4IGMQ5EBQPFHVAXUGCDGMHYLTRFURKOWQ4AQ77IMPJPXIIDU3LY2MOEO6TU47DMLAN46TN2SCVLY4SLCH6JQQMQHVWSNOIPVFAV65G33JJGOXIBBEKPFOPXGVBMAN33KCVBMUTONDZX4OLPHFLBBOKE7PHANVZKVEZ6CN3C5NLPCB6BE3DNNAALJWMZTEKOTCTKH2MO7IC2QP5KLSGQSST2GWWVCL362O4HBX4HGVCESFCKTCRR4SR3JUAHHMXRT6EBHYI2Z5KUX5D2YBYW45NER6C4ZJVC762MPTVWOOOZWIQGEPPXQZFB3MTIKIFBLVCLU3EWJUNEZ7CCKGVHL4NP3CZFHT5RPV2OBRFW3FOVY6XVUY3J7UTCXHK6ZCMX4M4FSHL25SHARVV6H3ID7TSYR4ZZ2GYZ5Y35TKFY5ILZFKRI2STZM3ZZX554E5P74W2GM7HMCMPYFQM2JIP2IO35XCISZJC4XET7A2CUNY3NZZMXOXQNBPD4SXJ5
TUZRLNLDPR42XSSJNSTFRWPUPMBWJWSSCPDOIPUZLRNLDQXISU5IGVY53IMQTXW6HANEVWC5FDPMIQLTF4QBYVYUFQTUFSVBGPPYSSHOBPRYX44X3H4KAZHXV3S5ZAMBYVTJR3NAWWTO5PJMDS52XUZW2SESPJP5GXZ6XR7P377X5PXRY7L4W6L777AO2DEMFKEND hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=size_lower_bound000066400000000000000000000123631321557765100321230ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 405 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 17180.64 bytes, standard deviation: 19739.17 bytes # # Additional interesting statistics: # # * Ranging from 6 [56 times] to 128851 [once] bytes. # * Median size: 9886 # * 99% of examples had at least 6 bytes # * 99% of examples had at most 84297 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 4328: STARTPCOE3GJRSIS3SDKEV4ZDD5Q2AQEYFZC6IUQU626ILZHQVXK5TEHTKKZZ3U676V4RECIMQTDQ7367HT3773VY6P37737OG3YRW7TG6P7WDG7726H6FP33RP75TB5Y5UR7Y3LM6WBX6TGYYUNJL42Z47W5CND67DK4J6P6KCR5PBBX73356TZ6ICH5TVNS7LSZVHSSGXVEC54OG6ELVTZOFN4W72QY75FL6JW32FJVXVP65UF7OLLNI473QZ3X6DWYNOKVPHX37GCRL6J2GJ73HLZDXLV6UIZ5XKPKHCR5FMOMV4OHWHB37NSC4EMR35TJ7ORWPPOMKZ4LL7FTJ5YRZ36O4UEH32WQGLN36FUHB673XRSYPL3HSBPPLL27IXO66F5APJPA6CWKGWR676N4HIMPDPHPVVDT24XUOXIXEP2IMF27TTR6IOVMSM5HR4OXLVTXXXZ4Z3P7NKOWDYFGW2TF45V7P6JP46P67LLUGL4KTWBOL7DDTVHZOGWOO45OZEVNVZZZ5WXZEB6K3E4I44S5P673JPTL3OI6LSGSK5X7535UZ3B7K3RZ7GSYHOFKWGKD3H7LBXCR3QUPWO2445LWJZ5M66IWEGXXF4VUVGFSE4AOUB4JZI3BXA6WO75NOCQW4AVGOSMOWGFOFMHPXDJTQ3ATJSSH4SVVAGM4ORTS4FFTP7OUDOSL2AYA5IYABWCUQFZWKR76P6HN5PWC5Q3PZ42NMYW65W6I6RNXMTBXXJ4BHL3XK7PLXFLMSELKAKKDQHMCRMW6I6XX4XLBHFQXCDZ3C23PNLQYMW33SEY76VDDHFH2ZKSPKJDTWRJFMI5WQNJHUGDTO72FKZ2SMJ5STPF4SML2UYF2757XV6KZ3V5Y5QKKVLKL57IG5BKLTHBART6U2X5OU7I74U7O6H6T2W5F5SE3VI2556J72JEFOKMOWF2NKPRE26VQXC43CVHYMXY3WNUMQ2MQ2US4MU45EJR4FFHQJQR7X6LZ2XJ4LCLQJOAWK5K6SGKTEDZN56PGGBGUCDDOOJGLEMNKV5PRBFUTMGYM4CMQVRC7SLT3VRDUG5KHLDMQHPU35MY5QSIGDASYODEL3P62L3E7KWTLQGS7WEMPK32XT5VEJ3BQZJCWOIK44BV56MJIZXXCYPV7R2RYPVVMZ2JENBZZOUDN4XG3K4ATSNRVJ6MPLHEPXCAPNDIXXIG5NFKAVK7LAONPVKUILLHUHPR2L2JB4J2O7U7ZVTIT52D2A2U7MYXPOQE2SYCZVWP4KMPUCFJMXQQIBWN5G6HDXTMV6ZTSE3MHNCYL4QDRRQDCO6QLL42BEW6CEDBWAPP3LCNKBTJNAIMDBPYBFAHE3CQJJHIXAMI4K5TPUAOB3PIJN4DY46YV3J3FZDVQIZW7TJG5OYQHTL6BZITO3GFPJGJTWLN4ECPB3ATPECEEZ7VNTEE24AQAY2PRQAPVRLBVNHCBGLXNXSX6DCAWBCQYMUEJDCW2I33O2CF6RGNOALW5VCGEISJHU5NUOHOQWFODBEDWTENSHVZIC2WW3WSN4TZPZPXZ3COQG6WTRNXTGHG2G2VTLGD6MWAAGHO2NIRB27XON3X4YC522LEQRLH6GUHKYEHYIETPWDNATQYMMVIDWZJX3QDVW4PFGGWSA6IDHSGYK7AJSGTZSCNCXOTASTQ3U4CXETHXNZXMQTZM6NEKSW7CDJKMWIK4OQS7NQZFTRCGJESGPI2L37ZGCUDRV2RGCL63WJ2KOB3GKTUAUEIQ3ERJ4H54WZUIGKR7KPHBH3NL4SK5BQ7NAL2YCWPWFA7BHK7F72PB3VYHI4Q3OPNU7UL3VMNFYLWPNFEQAWJSEVPRMRETWTLD3N2NTGJXTQ7UXQG3Y6LJE5VAPRNOY4JEVNNXLDGWFWM4CTXE6DX3G2SOYKU6AV2E4NHXFHCDV3RJ2UG4AIAFHCVYDQXZSQHRSH45VBWTRGNO3JHULOIH2Q7JRDCY
WSWKROAJIN24M4NCRIBC3UUAKGSPDXTRIWCPO2T7GBEMZBNI5WYJMQXTKFV6CJ2ES63CGBYOSPZ7RXGYTYFT2REFOFDPVUY4LL2O6C4OPW5ZGQP5HWWRETXZRZ3I2RKJDSO5HXGO3BG2LQ7MG2JHW4WS5NVESYW77PKSQZ4P62M3GWYAWUYWI2YYEAVMIAVB5X47VHIY6D3YGCXKILLJ3YVXKHJUY3G7HRUB6QDYOPJEUIDMZNWCKGXCVQ7JIJHLOPXZIHYJDMMEGHOOD4SZIXBNCMJ4QBD4UZLREIOAZP57GMQR574OXDCJSEFC7LUFTQBTSC7SGSMS3VEOTDVPPC2RNEETEUXT5MRUZA6EYMCLJ4HHWLMQMTTCKKCLTDICEWWPI3U3Q6GT6VLD2FTNTXDXCHSEUHEYLDB4SFAU4MVRLCKQUEYY5L2ZMPBTTS5NFWYRVWD2WR76CTRGFQU4RYKPZNSC42BZMP54CUHRLFOFNRVEGK5CIFHVE2E75LFX3JQ3I5JBGU2LRKBPHO5DV2NGLGLQTY3YOMYRJWIYVOIZSYDMIAKQHX5VAEW5DZ5O55CRQACCH7W2ZVTGUAELF2MQ6ISGMDSHQNGVR5QR6ZM4JIFSGI5DCDG2PRAB2TB2ZLU4IDPWJEYMGBLGMS7YZXMWKDLIYB53PPC5N6LFEJO2P4CTNEBLKO4BQ33YANJR2QNS24OWKIYXVIUBEPZYGVBLBYC475RFUOR4CNQKP7PGSO3H542MOAUDWN4CG77VLJAX2YXYH3O3RH22XXLOU6NRPUNGLGLRLOWJSOIWOZR4HBLW2UCV6LVGAPL75YLSNWG67AM6A7LBSP44CPPGNAUR2D4XW5NWPO2QHXWMH64OXDEBNU73XGPXZ3LE3F2XL3WTGX2CTLU2YYDDCRNWX7BSXL4CM3G4HAC233OEEYZHBSVIXTHZW7TZX44GQOMYNKEPDRXNDOAI7RA6MSE7KO5YCPI6VIA4VDIGCMUZGSTXQHSTAGSCMY53U6BK65QQZLBB5C3GTXU6RKMMZMAZZKSYV6OQ7OD6FB5H37JPXBG4YN2YJEF2UWUW2GLBOTHTOYXMPJEPKIWYME4QX4DBHA3EHANO3LVMGMVDM4ULBJIPIK5O6SQWCLBKB6AKTQ7LGOCO57L2DCYZUBCMH5NMSAUTBNQUBVFXTXC62KPTEMFESM22NK3ZNM252B3BXSIR2E6FEFHHOBWEPCSURINZGIEY5LVMGUT4DR6DTBLGZDRFNROJHJOHT5FICU7P3PUMHJJEE3EOPDIMGJHTQ3GDL7IW3RZ6QZ4CCNJJ3MZZLQP3NIXBJV6DJVFZBUNXGRK4I3JARXETLKN5WFEATM67DTWG6YTGGD23NAKPUDELSPMMFOGQPB7J4UA3PCY4F2U6Z3ITH5DU3DLZEHHNMQGTDNMQDF6TEJJGNZIKXGDKVYDFTKARXFLXUOIAAPSKCWWB3BCI5E33R44XVEKLU7X4LCBVH4UKBS6VFJ7NVS2I44LWLQMQBIZ66IYOWEK3GWT3Y3KRJXTZO67F5F7FOI3I3ZQDTLS6LRGOD2TMBCLD3R4AIKGMZSFXDA3UXCINRAEG6LBOIZ5XI6CJBVE4JQYASRX272V66JB6SMYXSULOWB5CA4JBOZHT7BL6VBIBRT4BW7DCGFTOA6WKSBK6DMDR6ABEX3B2JQ54E2USCJQ3KGM3PV3OMH2GKEREE7PTEOZTETWVQ5ZWZFVA65XBGQIWUMKKOMP5QUYXBXNSHBXJDX2WJKVQEGV4CIYDUFRSEX64Z6WPMFUNI2GWG7IPHJXL7VVXO2Z3HTN5ZJRUBKWODPOMJZKMBIZUM2DSEHQPSGCMZXWJVHV72AJL4PXSQ562LUQZ2URMZEBTP4YQHKMVMNCJMF264HBEL3Z5XKMAWKLOENXGZ3AJKYUVKCR2XYETNWWU7MY645BHNRRZBUUBWTV3YTBZVPSZY5RK7CYJWQPTQMUUELQY4MGWFKZN
X3F6YVTBRQXHDHTLH3NDUF3UQYV4BOTIUBWCGJGH7L44CQD3SMKULQ353X23RPA2NC34AQXP5JK2ZQ5D5EJG64G6NGF7T2LWL5XPIGNEBZYE55QFDWJXGMJGA5LODYAYJYZ7ZPD5I44K3FZSGCALO7PJ5I4UAUZQQL3JD6WIS34OQCPP4KCKT3EYBTPF3ZMEMEQJ6OMZJQ2HUH5KO26ZLJACP24YYZWM42FXSR66CN53HP7LQW3AKSIQ3QW23A4VSL3UQ6UDV3Q6FQ2GPF6DZTNNTMGKKL2UPJ4HZFAGPWCGMJQGKUGVO7775XD46P777DRR6P37677QD7HHX7XDNAY7END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=usually000066400000000000000000000030441321557765100302440ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "always" and the interestingness condition "usually". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 403 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 12.25 bytes, standard deviation: 23.06 bytes # # Additional interesting statistics: # # * Ranging from 6 [905 times] to 183 [once] bytes. # * Median size: 6 # * 99% of examples had at least 6 bytes # * 99% of examples had at most 123 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 560: STARTPCONKV6LN3BSAEH4CXF6OHDWMGE2DP2S6VLB66XWVVI77PNAXAEWNAIFHOEVI2NEJCAPPROM3C4YZS6X666IY36D7P4TJFFAGG3UU7SVMV3TE4EN44FEA3NPX5UHVH2E76SLDURRINWGTGQ2YIJR7QDUZ5G6SIBB5UBJKZQBDSW2OIDJAXCDNDUDYSN5EYJOA7X3G6C3EADDN3NJ2WJ2VXGMVOCSKFNYTBABSMTUZQXC7UYEXLHCD3NZ464MM3FLZZ3ROD55YKXI227CJXVEITDJI4CG2OJXIWRXURHPQGTEZ3HHBZZY3ULFCGUPEDLQGOGGDYSSRNCSEXDXRLPIWKAVY4UUCZW5NILYUAZZKUGXYQTS6KGITKTEU2AY6IMQOTETO62ZB5JIOTV2VXU5SKX7R5APIY44RYMPGX44HUHGGS4JQNUTZIFKFYCAUZTSMIYJ2DEP3UHHORHAUQ2XCHBUSUYRNRLUS4WMELM2KXAUSWQMWPGYSX3TDOZPXBAT3NXULFA3YD2QLWHT2L4TQDPDGLHZ77XPAOA756IF4IHRWYY=END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=usually-interesting=array_average000066400000000000000000000102611321557765100315530ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "usually" and the interestingness condition "array_average". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 418 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 795.10 bytes, standard deviation: 578.38 bytes # # Additional interesting statistics: # # * Ranging from 6 [4 times] to 2684 [once] bytes. # * Median size: 997 # * 99% of examples had at least 27 bytes # * 99% of examples had at most 1920 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 3248: STARTPCOE3GB3SIRTOECEV7BBRGYGQDBHPL5CSCTTCZEPE6C64LXEJOQLSBQZMR32GUE7VSWEJ7736XZ7OP67H5P36XT7YR5L64RK7O57L4L6YWEP3PYNLXGWW76PWR7ZLGO7V7PPPTZW64Z5C5NNS33WX6GXX2QXLEMK7Y3BNV5WX5RU5GMXRXIB4ZPNE7TF4VDL22R3DNY4PKYPL7LALNKQ7LT5UHKOUB5HJTLLFH5LAOZ7RB2XNUDF44VZ6NJ3XZNU27F6CEX3ZNXOBZWYRNFNCFS574J56OLPEMS63H5LNFPZXSFST734LIS7MVGEPHTFGHLZYCMKBDYEICPIHA5XQSG6ZJSHRN7N4HJANOMSDYM4H26QXI6LBID3SGTD3FZHHEGW2MEUWAUROKYR5Y5S3HX4C2N6Z5O2YG24AYRHOLS34P376PYCU3XRLWKN7RUKEBLO2U5UUM3HAPBO5ARF43K6JKOHECADHO5QEDS6KW6KYAIOSQQOFVUHZQ2TW3KLLQQUWSQQS3JITAPQ3EQATH3TBAIBIRUQBDO4DJ5E5TQCHSFJ4RGQ6TUCYWGR5N5AIVMFU4QWHRXUALAZAPQEWBKUAUE3KGPUVJNQU7ARCSFBFQX7UYCJBUBCDSLTMDQI6Z7WOXJJN7F3SVIFOFH3RSANTA2TSMAVATMY3YEBIVY7MX2FVQH52GMKSLRLQULLXJYNYCKQQQADXY3IEF2SKX5ANVWMIA5XRAT67NNQGUSXBV2N2JGXWJIVVAVNLA25PAQGP2GBDXNG6YUDPNC7RIMCTF7HLL535OBNZVBHBZ3IAZIGDWKDGWICD2XPQ6BQKBFDV52CXW7OYSCRHOWHKIKWLOTZOMFTK4G3ETBCEYMYPKJIJHDZOBJXNTF4WECTZEVDQU4OTPWCZKRYWSS6BIRG6EL44JUFX3QL4CUS6JQ2E7QZKHEKUYEJEL2ACETFLX37DF6SUXBG6POUHMN5GRIE27KMXHK7UIIUDDSP4KJMAGWAUOBGR44WKIB6U2RGM2A4L2IQYAY4RNULXYA3AMECGHIG62LKQEVC6RAKSWTNABJAWXRRRPRMFDCEYXZJFEK4QYEHXRTFUZ6HB6NWASCZU3F7EAWC4LWDBQYLMLC6AUXLCRVDDVKBMY5RLMRKMPJ2BJWVLUDLKZEQT7JNHNMVCXGLTD2L3JNZIJD6YVWFXNYITEXAAUJFJQPQIJ7XBSFOETRFLJXQ22UDYLS6JXFTDHWVBL6H3OUDF2PRZLQJDA4LWEMXOBERSGDDWVKZRXHZIKG2BRVRGZYSKUL7DO5C6XMFTDOWUPJXDRQYT7HPK7R2HRHXND6SDPIDMFYF3F6PXWAMEZMAHHLFENKI4FOFYNJUIYZWE5OB4EB3CWGLPLEN6R6JLUJFMYXKUENZACTZVMMDY7FGIR3CE7Z6D6KL2IMRGTEAIKHFNB4LLEEBWGLMCR65FMWNYUJHKQC4WBPGPJBWOP5PBKD2D53NFIEUJNQ74GOXDFH2XJG3IWUYNYATDZLKAVTJ7EQH7HOKFM5OVZ3ZKXUR6D4VYQQDEE2XNC5CHKLEOXFGOVZ7D5Q4L7TYA5TIUAYWKJMNW4RZPWNCTHKBSZG26TAB5MWNZWHTGBHE2YP7BK2JTK6LFTCNBEQ57ZJZ7I3SYIEK4MWIJ4MRRJQSFXJIESK7TGYYTOEZIU2ZRFXVDC5AR6EAI62VEVMQQEVUTUKUK7CEITM3SC6O4TUQCUQQ67K24E6A7GDAWZ5UIXWNAXKHM65W64ZYAAGQTW74KU42QYHPHGPVQ4CRMCDHM3WXREUAOHS5DBTYKNGPQJ3A3EIUJR4AMMMFZ5GAUARAZU45FPFCBB2UKTL5Q6CGW3YUL64RLKHYR3RR5YJFK5ABOTQ24NZWPDBNUSUMYAEFGYIYYZXCCAHMTJLFLE2WBXLOGB74YVKI36FUMOBXDVKWXU4UDT5NKGGMR5OJSW2YAS4LEXUFBJMP
IK7E237THJJHI7YNZOKHNOYOOE3JSUA5PJ3COQKBJZKPMFCCA7SHIPNKCY3FVWFNQIO2YRMGOITCE5M2TL2YM2O5Q4OWRVHSJ27FRYUQXKVMPFV6BIUETOCCDDEYKMWZYDJ5B6DQ5JHZDZWTTALHDJ5PTEPP2UFGWJC5GHE5R73IM6E2LEOPZSEPWD3GW2UZ4EBJHFGWVAUDZXB5ZYWYVWON4MNZU4N53UGO2I7H5LARTKNFYI7AUKHE5ELIE46L4O5VGEECGNB33D5HCZMSXEA6RWWPAP5M65CDRH6DIIW55VXAYU3ADN2H7VL6IDPRCMN732YJMEYW5EM23VGWBAHLAO655RZJNB3FGFJAYXKZY7ACRGVSZYHQKAD232JKLBPLPW4RZFPFRGZ4XCZKFGOWQBSIGMQPB4BXGA6KKL2MPEMDC3T3VQKBYTO2OLELWJ5TNOQ5U55VRASD3QFSZGGVIBS23G6KOIASPNO2NQLK4FPLPRFKOZNNMYOKOS7LLWWH3ZCBHFFMAVXIFAV4M6JKMKBLOOJJFXGBR2HC6ICUG5BUZZ2DAOFIVZXHCVOG52E6O6HPOTKMWX25XZKEHEDTQXSZLDZUM5TAS3DCDBAAAMLMIY346PEH7YMIYH7XQFS7MHP6S2NT36O4RR572KMMMUVVVIZZADVV36QWKHXIZUMES3Q656D6B3ET4X75O2WPDUHMHY5DEX74S3BUBBWBHUPO7O4FSWIOEPRRUAFYDZ34LZEQRPWNDTI23ZBVWAIFAPTBXBP4ADPKBKKJXAEP7BHZMMPUJ7ZUXFSAE433JM3AM6GWDNWTEPZHO6JQTVZZCCGK464XIXY57GMGYFXDDDIEIAGVXCJZH3LENQMMRHHVQG7ZUYCYRY2G7ZVWQ6JNM43KWSWFTTBI7LPFIK57HQBJZJ5K4AW3BULADRKNEBMKL5PXIBBKVZNPLIJEV5VCVP3LWD5IRJZ42BBFOYHP7TZLUZU6CODJGZR43VJYTVNZRNQS24AEOXIPRVIT3DC3CXXSJEAH6XDMTJTPR5MP7F53Q7O7Z4IRU3XJ3JDDWVBXKP7RJRNPSCQOGDY57LKKOLVBWLZWRVYHCX3T3THH4WLU7INA3RZ4CE46UQ3ZLM3VX4ZVSB27JUFXQGLTUYIEVL5I7YSDX6BL54RZX4WSHUEKIVAOOYMP5VUG6UHIT2UXRVKC5L63VEE3RYKNNFDSJORTYKRKZ7XZPZ6727D77727K22XNP7POP4TRLMOS===END hypothesis-python-3.44.1/benchmark-data/arrays10-valid=usually-interesting=size_lower_bound000066400000000000000000000123441321557765100323200ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity # condition "usually" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 406 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 18011.85 bytes, standard deviation: 22290.09 bytes # # Additional interesting statistics: # # * Ranging from 6 [44 times] to 168836 [once] bytes. # * Median size: 8860 # * 99% of examples had at least 6 bytes # * 99% of examples had at most 98792 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 4312: STARTPCOELGB5SIWLSDMEV7ZGF3LH6ADQICK5IUQU626ILZHQVXK5TEPWUQSOZ52FOFMLAAJJSCP646H377XV547776XDN6774PWPH337PLZV37XV662O7V37P4PXJL6TZT5J7D66XXR523XV6J3L5DNVYR7X5VMKTX7GVDQZ75VN3677ELJPOD6ORC7QQW4VGN6PRDULOTBNKBV2PPXM5VL7VLRV57JEW44WC7I63KO5ZTFY7J4GT4EV4M7WP26G5FRPYXY24T5GD2VVVYMV23RXH5CCPPWOOG4H56X2XH4OEOOZWPZPR6P3YLEZ7KWDZORPB23ZHEZT53IJ6O553EUALMJLSIWJJAVN44323CD5X4LXVANF2T4IX552XSQ2ZZJXFFMH33VVTXDHP5NR64MLUKPGOQK2NE5NTPJBPNQXEAIXWTZ75Q5TF5HTKNDFGC7KW25YHN4GO55LY2R6K7YZ53IMU6V2EVK4VEGP37WG6D7I4V4PHWPP5XEK4OZK32YEYY5CPPNBLDPEATVHZSY27Z53ULHPKASOSG7A2R2357LFO5Z7VEFBSRQCJN45L6K4SZWRBZHQDPTDQSLQKXS6RXXV533HNARSVZQRFV5BZQN6IZUV6OMGSVEFJJ5TX3CDYL3UWQDYLBY3DR552LQY3YLNHVVHTBVXGEAZM2VGM4PPQS4W6TVDKXPZGW6PSQ7MPYXPEGLDNM7WGR33GCOJNL5IIEGIHHO3FDOY3A2TZ2UOB2KGCDDAZ46F5XFEVOETQYPVEKSZSTNM24FQVO3FOODX6LQCDYXFKVVNCBVONKKUXO4K65JAQLKQ5YQL3V6XJEYF2OATJDIHXIBI3YDL36M674UB5BZ5TTNGG4F7M56ABMUXAGECOLPHDLGDFPSDWV2L22N36UAEETLHISMS76WWK5TMGIX5ZONQSXHOUNF4BNZTDEQNBKLRFTRRRFLOKYX6E5S45IRTOBGPGV3ANSJSJRM5AG42MPY2W34NOSDAIZJHKUAIHSZKEEQ3USF3E2DHHYIJNDXRVAKCLMITVSJ72MYZ4Q3G2TLMMWHYTUC4G6OVO5MM5T66DTLXOAYP2FJ3ZL4GCNJHE6OAYIOO6UTT6HEQJU7AK4MHIMQGD6DQU6RLELJ2FPO4BWZ6X5TNIWWMRPTAAQS4YWL4N4NSRT6PRBEEBZZYIX7NEL6I5MRIVMA43F5O5SYARDTVFBKLR6FKXSPDQJ6GHU56BZZC7ADY57WY4ENPNSZQE2CYZUKASP7HEDVII7VFFRKH2GZIQKHJH6EKGYET66PBXECSO2G32ECKUOQTZRYZMRKRAOFA66DMCGP7GNFXL65MMM2HE4YF7N32PO4WDDMOGPR52ZSF5X4BDJ4ABOFN76Z2MSX2F2NMGMRCHV5ZOLYATOKOWI6JNW2EKIJABMEMPMZHV2OAGNUPAGETJ2EAPO67277UHXJUM7INGFYZCN6L3B7US7TVWRNBRUHPESVBLUTAC5F37OOVU5HEHWCOJGHXK
7PAJJAI2GB3VRAGFWDHJF4XKOZG6E5FYOLA5OI2JKVVHIJ3BYVG62FOPHIC5RHELEKHG4QPJCRYG6E55LCFWH255Y4JNYHPBTI26F7VPCVWX5KIIOMI656SAGV2UVO5QCM3P7CHRS3NSQO2QJEMSXM3XQAZ2CPES2REH7EXS7HY43727KTOU2Q4QAPJ77FD6I34VSFZEYG4ODFXJ55AIHRTTB2Q3MUG6KREYGCS4DM2E7RGN4CREUTSO6XJVQNJCENA54XOI2QDHZZUEHEEJ3IPS5LZW3LYHH5JX4PBDTRFA625KZFTICSI3VXCDIR2S2JZBDRYWZOY5JR66ECILQURLHWHS63LW532TT734QZZABJVQNCOV2S3I27XYN2LJIQ6CM66EKILXOG6T2JA2LMJVSGBL24CMLZ6FYKNINCGOAPANGZWAO2GDPQRQB76MHABDZQMC5CFTQRSJ2UGAGTA4JOIRJTV5TO6DYTMAQ75BFFTXSUNU3TDFYBPST745B7JG5JMLMFMFGNCM4TJN2RU5UWNUE4T7FUGJEJNKK63C6XOMC3B5TI2C5LSXPPQPOGG5HBBHCI6GIYPGGIQY23T6QG7QHMY6JVTQUQOKCLDD7BWTQK5QH3PFY7B54VH3XCGPS4AT34UUXWARF5LWM7PEUCHYISPIAYBNV555GG4ISDBYHZFVPXNBZPQSGNSNQWXAWPS7Z2MHBPSTLJB6ZT7TGNGJF7JKC26D7SEO5JMMJUCNBVQ6QY7QRTWQTMI7RCYOTKT5XAY6WR2X7KBI5X7XKVQIUTIVYGCCVO5MDIZPBKN43W6QFVG4LQV4ZBIPXHZV3O4XQP2IBBDHK5565Z3H2PMOBWIFNLLWAOXIBWI5CK3LCB6ECJUL4IV6PDCAJOD6D3JVBEZDHZTZSRTF6UJUFSPZEMAN7MODACOURZX47RHXLJ6RWL6P3AMG6EFZCGTJXPGUEYKRVHYZA6QMJY5K3G3NC4IWFEA3ZC4DSC3FTFFTWWEMR7VCADOUGNNNTE33R75WV4ACCZVWLQXZXFSFIQN3KRE42HZ6NTZQ7FSHNWABHBNCCJTHUEHVYQHMFXEXBTD7ETJ3CZLPDYNKHGRUX3ZUGQADDVMENNYEYZISE75WH47MGAXGK7AVYHV3PS3YQVNTEHUGXI3JUIA6POHYWWQPTJ7ZAJ4GU43SFTEAH43Q2KRO6HE5MBP72QRUGINHRZTE6U2CSJMJD7MENSJ5J7MMAHMZQ7D6L2D2ABOPV44ZEO5TCL5ND7D2QH3MELOAG44RJZ2CMUY5TSE5ELHPBSBQKAHGOXZEBWUTH2LWP3OQCBKES7RWK7NQMY42YEO2W6MVPGBSJTBPOCORKVRM4CRATD3RGWEK5EOEDQWKSELLO62DA3MCICMRWAQF3MU2TC6VONJVYTCTRGDXUL4ZO7MGG2UR6MOMNS6OSMJXVIKWDISQXYQ355JXVMJUS64BXGWBTIPE3ESRBXJX7XTDCRRMI3BHGHWVR2ARBI5IHWNULM4HITFKSRBG4XH2BSPN2ICWISXWR5PZVVKHYB4ZFHDQAEOGUXMELSY4JATM2CA5JP6C5TPKPVOZ7WQMAMN6P6TSWI22AHIQ2QXIYWBZQOBGDFLYGAWNWZA6B7DT53C7RD6JBBRCNCF4PFLGKE5HPJAYRYMG6BT4WNRCKAI7Q3HC43L2T4KTG7GTM3O4DCIZZPRNZCWXCXPCWLYHBWEFM2MCV65A4H2FBRAEWYESCPF6PTWOEQWBCUYRTS7G7CGA4VOQ6IXMOWX4BTV54OWINUZZ3XIMJOSIQTJK4JXCTDQ4GWMXUUD36UIPNSTW3EUNYYQMSQQB6YPMFKSUXLY6ZSL26XVVB7B5JL6743KDKZJU67QMGGL3X6MVRWY6BVD57APVSOVGKTBBPFMHRMFSB66TTOWATCUWKEIDQT6JNUTY4CWZZDBTFKHMIMP2ACPOBGBB5HPV3BNUMWHDOUCOGKUCVL5AYEQP
66Z5JO3PT3MDXH65HWRKHE27C6EH7N4P72EJBTLGRUJR2FDQ5AQSIAULIUONBBIWMDUJZWDTYJ4JYML7G5SXDW5L6OOWJJ4PKYUCPW4NC7TZWABVYLHAXIEX5E7XDUHAR2JNVHG5UQH2PLGY62RQGYHO4YFETRUO7Q2JSOG5JPEMCHFRJQUXJQOWWI4HTZ6RORY6OPECHZWB3MKPXQOZ5B7GA5FBULKLPPBIFLHKMFU4XGESTP6QSNSM43W37X7VBICKQWWY5OD7FSKQDXRHSXXZWWD3LJHGDOJZI64DZWDZEHVHHCGIFFWC3ELSFEO24YQ4WJVI2SXLWT6GB43JP7F3RFWTZ3MW2W3NATL2VHA2C35WD5DSS2FAYAGIBD222DYP22BQQOON3W3OLCHLDTFOWMQCQ2FY72DNIJ6SOIMYGVTDRJ6ZQV6JC3CDVKZDQLYCGEG7LMTX6P44JVE4HOVKYXDQGC3PW6DM2BS4HQXZTJTLRHHK547SRZM5AOPHWVRTE4Y6EOSECWEMBSXX2NMTYKK3B6WUJF2MNIXDEXKPDYOD5XACLPE6TDIPGGUYESHJ33XULSFTJAPYPLLOKPC3QGHLYQOB2C4QE6DW7OSAXNRDBLM5JHBTVW4J6BRXRUIBAG5OKGR7GWAIPGYLG2WSSODQQL3HE5UBTR6SWQN3PPCMT2ACLSZ4XME5EFIH3MGKGHWUZVH34F5SMJGROHJFLBON364H2PWNOHJUNTFZJEIYJX5GQXVR7P377XY5PHT777ZY7PVZIH77YH5DIL6XQ======END hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=always000066400000000000000000000021651321557765100304210ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity # condition "always" and the interestingness condition "always". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 408 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 2.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 2 [1000 times] to 2 [1000 times] bytes. # * Median size: 2 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 2 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTAZWGCY2QBABUOT6OLQ====END hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=array_average000066400000000000000000000101401321557765100317210ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity # condition "always" and the interestingness condition "array_average". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 417 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 553.41 bytes, standard deviation: 372.98 bytes # # Additional interesting statistics: # # * Ranging from 23 [once] to 2354 [once] bytes. # * Median size: 661 # * 99% of examples had at least 65 bytes # * 99% of examples had at most 1271 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 3136: STARTPCOELGB3SKODODEEV6ZKKWAB34AHIFMXGMVXBLGMF3P533AP4AUNRLMZ75AQFDKGUM4767X36X3T6P37PX57D5KH57V7WV3P327X7WJY67P3476HX574LOGXW7PU6PXNPZO7VXSGXP7U7XDHSY7K3642Q5LDR54ZGHXI2PXPRUO7G7UN7PZB452BX6LWFXSLLVN3AXXH7L6TXZ66HSQSP5DJZS5WJ36247MN3UUPQLJMHZTXAHX7PBWNUNTZVNVUPMYTMUO5J5NCD3G6DVN4X2ENPW5DM4KFU2S5VUT2RJNTWRN376LO5DXARW52KCIDB7W3C5H3MZ6564N3D7DTUWUMCAC4UGF2MYQEQBWKKH57O7PPIQL2VBCE5LNL4K4QYEN72PTSS33FFIQBPBB6T2HWCD7KJYXN32MEOWAI3LPPI7IQLBBKOYSNLXV4AIUIN7ECVAECLASHZOFYJ7KZLYQJASZGMB63L3C4WTHHCY5EW27LSOJYFPTONJ7K73JN7RZSLSVFO4KYGQDUUFRPXPRQE6ES2YJI2BOOTWOKH5LW5IWWKGWYMIHBPENYUBTGBXQDA7O7MSQ43YEQCYMI2DK3AVV2DAFVNNTEYAPOUPT4NOUEZQQMHKII4GHRF2NKR4PYVIW4EHABIPBH7A5LLCWWKMK3UXUOAUDZJGKWGBWUJCYY2UL2BULOZBJY3YBN5NDG2V4VACPDOWCGBZB5DANVFGDMS7W25IIJFKQME5U45LMQQROXMJAUKAYA5MI53A3I3XLX6UKEZJYFLQUA6JZ2SGMJ2ACDCF3FUULVKBLU3AJ5SPYMUARQS4LLRQVBBWQTRNKRBRNHJCC4I5FY2JTQ3MUWTAO34K7YSOI3FRU5DIMVI6T33VIOULPSDX4A2CR2QEFKDXFQ76FOWQG2LUN3S652WOWBAFKIE4ZANFVFZUONUQ5FCLPIIW5ED3NSSP276CHKVGOJZPQQTB5VXSIVXQRP6VBBUP5UIISAC72XZILZVKDEUOV5KRWH5WG324CELQIWTE23FBYMTL4BKARWUC2J4NJ2B4KB37KSQQU2KYXXJZSUDJNU3NM5LE4LNZLS2QVJ2GWDNHQGA5CWPHMGHLJVKUVE3GJRRBDZKKWVC4KYENVA4SRD6CD5222TMEC2AUJJIEJYFZKIHU3MXKK3EC6XU6JCXSFFNZNFNIQWQJVKVCOF2OHMM3CVAJ2YURL6RH2MIUCBG237O62G2NFEYR4RHJM2NJICGPJAEFD4LMXSKRIZSB4C5DK5Y5Q6HX6ULJDSXBB5RSGJIYQV3IUXBXKLGMJBRJSAYRARPI2NM2LMZPE5A45AFJ5YLXGXPFRL4SEN6XKEKAXJYR6EVABGRNKNJVNTGQFEA5J3WO3MVIJ5KTOITMWE4PJJIRTGKUMY37VKITS2D63QFNEAGEKR74OLAHSPYWJ4FUU43E7YHMU7272W6FRJCKZFGIIAQ4YUWBKXR3ZE6VMVBCO4A3ETOCYRYLASJSUPYUHGEHAU5IWRBYU72T6JEOIAIRGVD7VRDJASROUPT2PR6YIMIM63LMYOEGK4HCG7GGHOWQAU4SHVTEHJ57TUTU6RXXOMHKNCG2QWRLH37LQKVTSL4FOJQIVA6WQA5NAULQBETOGKLFETWUOHUPO5SPBAVGNRBAALYD2DGBO7CU32PHU2GLGZDQJDGWKV2GBXPURTAYIEYLTGCMKVG37DDMRJRZ4C3XFN5ICYPXHWZFMEI3N6AODQK4ZZXDXW2A4FNI3MMMFKXPHUUYIZHUDCSSSG7AJ47V24BZORPLJ3NLJX3T3GANHN5L56IJ24ITSMLXXIR52DTR3H7E3IXJ4FQRDNGPUV3XKULGAYINDZAK2GZL3HFG7YCEON2WRJLQZ4OSVJQSIYQI6BZ7MWFVTYNW77RVEP6PJT4WQTWYJ5E3KMDRUZI3QF4YEYKEKTM7FX5QPHJJEBR6AEVBEA4FOCBJ5REUUR
CFRZJ6MAOY4UJCVCPUBTZHTMPKMUTQQKSSEDKOSILFY42R4O2X6QIW5BQJTAUGR5ROM3HMSECP35XYQ5DVJO5RVIKHCJIDHY2FJQHHXNW6EZDO47SYOAYEZSFSOYPUOKVCPXBQBTMOQW4NTIZWUXNRY44FRGQCKEZ3HNW3PFHDTLIQDD4RQ2MV7YGNVSDSFBEGSGMIADLSXCKAOB6NQFYG3XED4ZZM546ZIZSNQIFDXUHVZDQLEERDW4HRVPLYTTT42PKOYZUACCZQGQOL2DU2STY4YGFLVRAOVYAGRZ7EWESCPZHISTA2EJHUAQBDQMWAQPXKJDQLOMDOSJHUTC5B4SQOFN5GMYMT2J5N7F52UAYDJQG7GNXVL3HUD6KQAWPXL3T2XWXK4N4W7BSCFIFU2FJUM7MPNNFAXZMNOPSK5JKTXE5QLRL6EU2MMAOSBLQLGSQZ4BO5Y3RVZ6YG24YXCCJVTWBKIOCVW23YHNWROMGYXGBFHY7MZELFRHD2C7DIDQYGA6TSRRJDNFKH6ZB67MHLGKBPRWH7LGZYXTCTT747KXMIWELXXASTGN2JOVTTIAF6LCZ3LFIKQZRG7EUDKSZBLI3TXU3GF7UMS5GCBXQHHGM45EJZMPLCBRIPYMET7TSDNXAI7I6FC2HJKZT4ZTOMTZ3XG4V6BRRAX4DORY5ZBJNHEBIRU2AGI423TVJRNARNESBMHDI2BGEQSTMVC55WJS5OSORUZT2NSHBRRGHOFFLFEQCPJFX4EGBZ2UWOPKA44A4TAMCYJFH3QLOMWENAB4U3CF4YTVFJNK54CEDHY5XJNYLYXADOOOSB33EDPF5MYHDNSP2HJM6GOGKK5MP2CITFYYAJZN4CJM63T4RACYQ7LYNFQTYU72AIE57TJAA6ZCTYMRLIUGEKGVDL7AW44GAQ7MZYAQUL357AQQGL4JOCSWX7OEZTUWQPJ5K3YKWRN5Q3AHFWXEQRCNWNHWIWWORKWJR4G33ODF6UJ4KKXLR2PUSQYY2U6E7XTA7N42FQIIONVE575UWTIIEWYRDIGS2GAGJWC7LURIL5MV26BGGYUEOICML7NUVSMT5GIJBX22SL6VICHPRM7GNLC6LEYJG7P67H66XW5PT47776XW4PXGL775B6H4XFXHEND hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=lower_bound000066400000000000000000000101201321557765100314260ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 410 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 845.64 bytes, standard deviation: 247.26 bytes # # Additional interesting statistics: # # * Ranging from 2 [5 times] to 2546 [once] bytes. # * Median size: 852 # * 99% of examples had at least 32 bytes # * 99% of examples had at most 1482 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3120: STARTPCOELGB3SLWTMDCEW4ZDL4IE4IL4JW4KZOMSO4H4GK53Y55TB4AE3IV2K4JES7A2RWDP57P47X3T7X57H57X37BR7LLYPH7T6VQT45FZP7NTH56XY5U6WXXW7VXXM75Z7PL4PWTW546VH3ZMV3Z5DNLLSNMNVW55W2XLLJVF6743NO7UTSQVXWXY2GXX333VO3J2X47I27LNE5VUZVXFU54PHBOYXB7WX636YLAZVGOR7V63WLKLP3C3LUXZXFUZTYWB3VKH3LVQ7XTRJSJXHXGTQGHW43S77JUTUQVPY7I7ZFMHU3DS4S5XR3UQ6S3U4BOBLLZ53BGXXOXDIN7XJEG6DI5HFTRXACTN3SRKZYPMUAAH6DDJFNO23MYWJH6MQI64VBYTJ4XHE5OIUOTHOWWHFDS4BFWWFAY6OIJR5PPPINXKRGJTRQ5FXOJUPJGKUVBXAW6DEO42CEDPYHCKOLCYOK5Q34TQOKG3WLG7KOTLVYVBAOT3NMDS6UWDFBJFFNO5E5QCQMUW2J36FHSGYBBQZ7HBH57BUJRYRF4SPADDNS2GTXCTDLLRPX2BEWOTRFT2Y5HTY3RK5WPA4A6MFD7A3PSUO7YSH745O7ZQV26OZOO7PUC4EM4I2PICMR7WG3SWGTJ5TKM4CI5A5I6AGKSCRE33TICAN2ZDQKRUTE4YKBU3WCIKU6DW53KON5HUZKNRMC7HHZABJRQACMMWWSW4LZMLY6YB2KOQVETVTWFVEJNAT5HLK2JBKVHETTU6QERYOJUMXFPN4SYHW3MFLJQ4YIXIGTAXO5OAFIRLPV34ERXCUK5CVQOXXO5ZC4XCXOIME64UV4FUPG57RKXJDRWTFKANVZLOECF3KQ4AG3QACHXADBUEDVWYBR3JCRYDLDFFPCRBLTUJQIIXDPERGPSOM4IVSVTKLOXVFARD3NMLZLXQHNY6DJB6RMHNVU5KOAXJRVQLKJHIKH7AXA6ZE2EOLY75IUDEQKQPB4UTRGY74ZSMCZTKTNQHBYMOTVRRFTLLNVRT5HL7PWT45FRFCAAKR6FA4FRYVXUETXTIEXQYNHMCIF5SCOPCHQZ5QZAEIQSUCQDDZKPYTT4TRXHEWKXAPY7ULEH5US4R4NUCR7BND5NNU5WUPCFDWYLB4BSNZEOFHVLNIKWBFO3UXCJCNIS7QW3TJERRSNUK4LPQFSEV3IZKNLV2FP2OEIN24UP6BOFZ7IEG3LKLJSEUD4NT25Z6RMDDBKXFZEHLV3DJ2E2X6EBSTTUFGXR2SP5I3NVWRNQZVHSEIUDORTZKVIRYR52XGHNBNYFPOW2FZ4QP5U6YTLLIPRRRM7KSQUXC4O3WGKBROUJ2URDROH2ZD5KPGUOADW7IFZPAPVNOE6JB4JWJC3CLSOZSZZJI4AAI5MSDZAK2IPLDJXSEJB7DB4YYKMB4MBIK2FCRCBXSW4BYIIMGSCZ7RR327I7QAMI6CZEQSP4ONIBBTYACZGTHE72K6MPWV3DQ4KX5KGJYFKSDKXUH325QP
PP5SRCAPXEIBFNGXPUCWRAAPND2T3ZM426WXZ6LE72AQLK6MRVHPDVZS5MUIBWBTK5IOPMWGSHWV2GZPE53ODIKQT4DVFCHJ6POS6PUEEX24RFLYCILAUUQGYI2K3EIBBSYMAZ4UE3ZYBYKHIUMSXV5NO6OF3XADZX37YIGIKVKCE2CIB24XBV3KQRQVAMTGZ2OAGC4KWFWISERI7JW3LIE32SVWT2VMXFLIR3RTT6ZNCHPC6LHBHLQXDUXPRBCPKMVFRTDZI4KIUUWGB27HKUV2FP4L2Z5EWDCX6QUGDKEXFGWFEMXD4BUMDRG6Y6ECUQD44DIAQ63IET4HCELIWEWFSCSPAMUI2AOTTP5Q5YOSCVTTIEJ5EFXKTF2QDPRAVWY4JQB74BAJVPWLWV4LDUKKPHFAODITWTNRETUYPDMQKCYLQKQBGARXK26AYBQFUTHM4P7DZSJEOGANVPON5NC3CUNAH4G56QWPHU6NTJ5NK7BRGGZHLFI3O7F4NHYHLMUHGJSFQDJYJ3B4IYTNTCUBJDBKW7JYFWD66YU2QPBQDEQ4MN7R2XACOFIMDYPCRC4XWSJVHI2VYNBSTPV45ARDTB6SEHLMY7OSNNJYTCEBV7IDPWWRGPMTNO53XNERVMAYJG4FIHKJNIGPJCQC2AT2QASCWB5MENIK3GV73UAQYG4MKFCD22P6L4PIZGJIPRQBCOMJIENNF6VYEAOIQDHXNM2CJ5DXMUXRUOX2CXIXEJECR72XWGKGFTSEQOYHUFPHRVFLUFDQWEXL3BKBSS7URVEYRSMUMLOALRWGGVOW2JV4UCZRXHKZABS753ID7CYEOGBDANTFSNTUIWMNZ2WQ2D7LUNHMQPZVEMZNRM6VZ4YC36TS6NRCATDE2U6D6M4M6HBGV26CARZRWYXMF5NMMOZVAA2AVLP33FCM7Q4JIU2PNFJT4CAQ3PPNDCUQN6VNEKNNIZZILKRMKTZXPCZHC535UM5DGIWQQMAM6CBFHZP7BLXNEYT7VAHS5S2YEXATAWE2M64E7YC54272BD5MIUMIYLPI645J6BSEZTHLJRKERDPIYPQVI65XJENGRFX4Q6SYW3QRVCR5MM4TXXQFT4LLUWC4C6VGPENAI6QLSQHAOCABXIJKEXKN2JEMPJICUD6RBNC77UKUH4TRMYG25DWLXKHPZIIR2KWNCEACHZRSV5CC6OAFQVDU2GDWKGVGTH6SY6WVTRQ4MKPZYTLAPKR5QGH3EQJ56G7L3FJG522RSBI6YRO6V4VW3NPQ3BJPGCQZFLB27EOTTL2EBSCAVDXPG6VR3UP7PAZGUIJ6LM6FSCB7ZVHOZAMGXPWCT6OGGYD3GXW6N6EEDDBZ6VPFJSMI55HTVBASY2RWEY6M6AN3Q3THM7KGIDLIDMKPLGH3O2GT3QEGJCFRSLK32EVG24CKDX3ZJHQI7AR6KJCZ65FUDYV5ZTZH4FNAASJMDEMAAGGCNPUOGGFFBRARVMP45L2EGCHPLIC7TAXBX6722SGXGB4OVQDCVG4JA7WWUW36YTIEW2WYHVTBW6652X7H67T4PT7P57775PZ5NR3NZ5774AU2JF3IM======END hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=never000066400000000000000000000100621321557765100302330ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 407 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 894.57 bytes, standard deviation: 162.49 bytes # # Additional interesting statistics: # # * Ranging from 568 [once] to 2325 [once] bytes. # * Median size: 870 # * 99% of examples had at least 619 bytes # * 99% of examples had at most 1405 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3096: STARTPCOFLGBZSISTODCEV7JNC5Q3LSFAJINLFDSGSDGZ4NE2DO4LPAEVI244D67RM3UJIQRFD734736OX3Y7H47X767Y7X4PH5OR3PZK7D7TZSXQ63ZTV2W4L5N55GRX67OYXQXZTX33C2H23R4DX3XIXPJT7Q47CO7O5U665DLDGGEOTNW5PNZ5Y72P3SFQPR3XJR2IPHB54PS7QZ7XHV346S43WGLMPS7LXZ6PVSM56Q3HPLWNEKT57KOPNDY2I4LNGETGNDAFVTC3JXFTE73XAPAG544OBCBDRQPJXCBDGHMV5RRSQKAUO54QT4K4HY5QMPDOYMMTW4OS7RJIHP7C47T7NTCDWPOWACYMA36HCQL2UXS3R4PY6M7WOHRJA46XSJ6U5L7FT23GEPWPJH3Y4SYDN5TU74L6R7SS2DXCGP3NAAUVADE5SSCEKP7SFTW3I4GGD5QGJLTT4XZHGALCYAG5HR7DOHEIRFLQB2FROAD57OAG7HJ5S4UDHWRVNB2RUPRKMEYTL4EHFQAMK3C3JEYV4NUJWQZIX2Y7JDHDHIIMI7QXA54GF5DAKXQGUC2QIZXJAQMJOTYNGBYLBU4BVIAESN36PGXA62AD6VXXTDPIQ3LMXBPRIMUOWZ4H3TO4W5DZQCMZJU3AXWE6SAMEGGA4CD2ACWCTKGZUVANAWYMQPNEM247JAYATTPMLDELA64YDDKGVWF5RBADU3EII6HIFDMC6YJQ7RD7SDCV4WQ6KCUCCNBQGANBCUW2CJGOUQNB55WDNCDIQ5PEGLWY64E7POM4GP5QYX5E2BB7BDS4QKH2OSDWAJPH5HSYPPLVXG2JZA2Z53HMQOHBVZBFSBJBIJ5FM5GP4QBRMXTJIA7RQIBSLXVRHGJVRTIR5OJQ2C5KKIK5SWN4ULTBNYMX4UKJTELKCQB7CVCUD5RBFZ3VWBP5NSIKTQDQMT2IDOZHTMYSJIZUCLDHBMRD5T2FDMOXGDSNN3MFSEI5DEEPNKQNPBT2AK4BDCAVEEOWUNYY2ZHBAWB72YYSP3VPS5XQGFKNUA3WRLQWYPVIIO3
PZ3GEQVQXLESE3EIGWGXPI56A5CQEQSE52YGISWKRPCZKAHTPO6YFHPY3HUUGIQOXX2FJRCRQOJFFEGR75RXFGQPLXW3HUZKTNERRCXELHBI5KIRFPWM4KYZEJMRWEEWPKROJHEF7PFUENIH5CD7L25KTGWFSSM7QUOPUPAQWQ3FY4A7IY34QZEVEZYDD56E33USPIVFRXI5NAEZKMRUKC4JEHZIZUJD4IHKBD6ZVNUK6Z46CXAXJIWGSSZD7FITUBCODVJUKQMGB6Z4OYZDG5KSY7IXMCQBKE2R5B24LLY2JULVBGIMROPT5LCYLKONUS5VUVSTRP3NETCFOJSNCVERK66U6V4F7BDAMFZEOMGIS3ERHFS6IFVSRZKELU5PS2KV5MA34RGYXOKIICMKLNO4JSAIEALQKNOW6SOGTTVTNMIYITBLOKU2X6XRPAKS7USCSZQGGSXMVJBBBDAU6BNWVHKLPAV6X5SCZD2CZG6LCWBJMWCWKU2SOFO35I5ULI3JOBSAFWQNULFPTMTSWZMBEWN3XZ4J4Q6YWQF7FECBELDCMOISABBLMRSHQPRCFNFGGEX6O22PTVZ2HMBXPVGCSNUQCAHAEEA3FLNFBEOAHJH52Y5EQCZ3SR3IEZRKTFA7TDMMQZZC5VX7LW33OBJIABK3AZTPN7JMWHQYJ2EMH4I4OOJGESAQADKNEALELFQBMVILT4SU7ZNUT4AJFIW5JXVGOOL2LVA6ZXZ3LG4LPOX5MUC3JIG2RXX2L6YWIQTXZUAXNK5BU6LHWJG5DKBRQIBYKNDZKM63O3RHD4DOCJAK5ZQDPMN2YF7J3TVTUX5PHJZMD6XZMTM4YXWDWFFL5HXNKMFKBYJOULBEJWQCUOQLO5HXS4ZUKFNKJ5WSXMKYYHON7WLJJOYRZJJRAXUSSJFDJ333FNBJFKYDGQYTZCV3WMNGHRI4JGSEGXUIGDRCQ4W3BRFW7NXCH2HRCYBANRY2U4XJ5ONIJVZ2WJKE4FI27JCQEQR5KWHEVUSOKVSRG3MXYSWZFZRBQEPVNVM4J2EF4W4NEOA2ENIRJKTFU5GCVNKI2PTEHTXL4GZX5YA3O7HWSPZLO2NWVAHTSJPUKYLFIRE5YYFZQZ334ONPTAZWA7HIRXIL6FADA5ANC6EWMIZ5G6FP3DZ2PKK7EFQ6SBF7NXSTSFZXIAUGSN7RSRI7PRDCQ56FCIV5JAORXWJKGIUOS2YTN3FHNTVQ66QQI23AFFBUJIMREZJUB7NHDDHPASCK2EUOMA32LTBHPRC36ENAZGMM2MUFJAKYVNIJ2RI6ILJNRI55SX3JFCEZRWOMQZSBTMZJFLKSCBMTUAFNLXWVQMS7ZZROYYNPSR4UT6XDTJRPL2ZARRPT4M7TELU73X54R2XL5MYZJ7L64GDH5VURP7FJ2FKGLUIFNQG5T5LKQKLZEH6CU7JRCEPNKIDFJHHAFINFK66WY4WETOVWGGVIJQ5NWYVYIO32YZOS7QA62Z2GMMASHTZ3E4AO2VCZXQPWRPLKTK6K7WJQEWOZ6LPDWULF4VX6CGNWZFQ25DNORDV7UK5QDX7BGAERJYAQQME7UVZLJJXLXKUFX5EWHLRBZ2Z42PBVFPVN3OXDPBJTDMWYFABW2IU2ZFGVGL2724OI7MYMS5GONMETOINSEE3747LTWCZ7JEVKU5ZLTCLAYPMEROMKZ5VC3GVAODFBJS2LE4C6B3N75SJHKGOLESNWSKSSCFBDFXKZWEXS2CS6CN4XWIFWO2UUQV3XV7LQJFSNTUAKJVBE57TG3JGGVW4HIZAXMUKPBF4M35C6CFLGPRQCDZP7TMWAMEOWHZIEYZEF6UW2SUJL6NXFTKLCLAK6SSPXJHRM7Y5JRPKET46ZILP3HE3QUJOQETZN3ZIQYY7QLZAWT55FJDHDDKZPSFRRSVAW77P4QUKQNLYZ47HBCZVQS6D7F3FSIEAMEYRPUEWEQXRAQ34SN5P7VFKDUYXHQ2HA7X67
D5PT7PZ4PDZ7H56PU6M3774B5CBVXDM4===END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=size_lower_bound000066400000000000000000000120751321557765100324730ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 411
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 12079.59 bytes, standard deviation: 17201.65 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [20 times] to 181025 [once] bytes.
# * Median size: 5220
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 78262 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4112: STARTPCOD3GBR2IS3ODMEV6ZLL4IGAQARBJFO4JZOMDI4FOZ4W5ZX7LBURKWS7P333GJBIGQLXUMY777PZ46777473Z6PH57PZQ3NTVPT63A7LO7X4HG757LQ756K73W257PLY73LLLP7XSJX3WNOV2XNOV7XGYV74RD7ZPW55V5DH46PNVPVWKS6XK4XVX7K2P7VWON7G27IS6UGGXGYXXXLMKR527PWPA6QM5D2PZKLIHMDTPKYLZWWEK25MFMWQ4ZN7FLEPGCR2FP45RH3OLU7FDP6GTZZXCTXWGD533OEPW67PNR4V327WI3MJU2PN6SOPMFOFHBMFVPL25DEP5SK6RUIXWYOWNULEIM2L7EEELYSFTSQTC3PGXFFEH5RJJLHJZ75Z7SPVGUFZ3FH5ATIZUQ6HO3B5CVXCT37G7XU6THS6TELFM4H3NO6JSTUWX6KSAWS3MFSRS2HJDDRCZAV3DXNXJQIWXHN5I224UW2AXJZ24MUTQNXOUZBYVUC6VKTOMZHRMWRMFCVRKNT7B6VVL3LFWGBZHOVLH3C3O2PAVPPU5VYXJKLNLNDSWYNMJJKFW32RS2ZJP327UTNSZNM7X3WIXZEEUPJT7TT2VPBNT6ZYXTEX2CLZRUHRJVENLVUL5OEDI6TUNGU4J24NH3VJMSAS4I3SO4QU2LDUZ6XLPYVEEKKJZCRPCCTW3LVF7CWU2CJ3JLYTIQ2VGWKPIXBA3WKBBEMXWFTIKK63UBCQUXFOXRWKHC5G4225FXLNPIK5HPKPDMBUXFHIU3X5RL3B3D4YXLC43C7YMT5OXT3FGY3NAOCKLKHTKMYRMGVKV6V2XRKAY3R4J4CAVCJ5SB6QNXR7FOPDNSPV6OUVC44FW4J2J3QTRNHHOYK5WIEUKIMHPA6DXSTSSNSI5OKFGPZMODTOQR3Z6B6XIJJIDSCAO63GRLEEWRSKQDBFSK2FN6WWUVAOIHCS34HGS7F4QILW72GYRCJD4FFH4N6AS6JFZDRDSTWYNJPR53HKIME6EN62G2HL4D2QUPPFLMTVBNZ5A6U424IRHOCMPU7OQIVEE76FTE4NTWA7YYFGI3ASBRZ66QKBAANGVV5DU4BR3SMH3QSZIZSK5RNSQDJO32ASYJDP4254QSHJNAWXHATI4SDXBBFUAN3S3KJYY3QBWSGY5DPTFFHCPN7NDJPTPMTUA23K4EZCAMAXXG7U2EB4WAEVCT6ZNSEBYYJL3GSBYWSAOUFHMRKMOS7KMBLZ5MDNCCDQ53UABQYS4YT7KJIJ66Y7SJPEQA5GZROXDY43AMQLV7RRCZVHAH2OVQRYHIGEEOGIXPQQQVDY5CHHE6R77X5R2XJLQWKA4T3OZENHAGSU5ZTHRSEDKGIGMSS5RMDFZP5GOK6NQO6Y7NNI5DVF4K3POKKFNIHMEO43UCDKJ5DTPUPEKVND3W4DQCQIKGEPUKUJZ42H2QRGDBJORX2XIPKDTFHOPGUQHXPS2FTZJDCXNZEK72QQU5YGUNKAVFUYTVZTBVXHAXBI5LHQ3N5BLN3UCUNXADABESVLWDIH6ILJ35EEMWO2XDIGL6WZZCC5MTLEGB2J6IMQKHJEXG5Q2UDVID27DPO3KRK6P5N4UOHR6SL2IR6ECQRM6NO2OEDKWANTBBC4JCDJH3WXBUGJ47B4FMCQX26YRM2DN54YEWHV6MCIJ7YCAT2RSDBLLCD54WWRLFKHEAWRQPJ2MT4JY4SIL4TSFGKSHYYMCZBI7SHRC4QET3QSJB5E4YFCL2YMKV3BXIYG2QN7CLG5OJF3IAL6D5EJJGYYFY4WEKJLX23SFXJ3QBO2Z4EI6TABUNG42LEDF66DCKW5OGTQ625MHRRFSJRQTZT3563MACVZJILQDWS5ICFEZZT6EKJSPIYGCLKDYSSOF4L3RY2NCM644QPASEDANBXEUOJCX5Q2WEBTR3J6VAHSCOWQHTPKUIP4TPNV6IJ63C5HCEPKOWA4ULM
2W2ZQRFUD3PY6KLWB667ZDSF5CCCWUJZQKBNQH224H7MVO4RKPF2CU62452LYXHEB24GINL26LIHESZIO56IYGSXWT47JMEX4NQD22EHR6BWGYR2X5HUOERU3LJKCOA44KLMRVM4D473X2EWMO6QLNGEOCV3OSDURRSOKM22NDFLWCFPN6FWOQBTYYD3ME6RZ45CVRNUCATUU22E7VC525SLDNO2LN3QUJBM52LTDQ6XBLEAYPLXINFCUUEO2IZRK6EPXI5ZE24ZBEAUWZ46CAOHTKRJLF5NGZUUYRULV23BE5MVGIHRVXHY5CDXCCRFVEVUJR6HPK3IB2Q4NWTMO3JYA6ZCCEEC6MHLFYTILUJYCQUMG5VBJALXUVBSQ3NPE3SQRNCRWJHO7WZDQQCXFIQDYHXNADQPKCUTCCJYSA3DROXM3BFMD5KK5ZPNDRC7EFQFOK7DEAJ7HNURYQDZGOTNNJ7AVMRJEFQJYKJVAZ3LIB3ZU7HTPUUK4LSDJC3XSHPL22NHOUYQ3CIDWXUUW7KFR46GNEGWUYOI6NB4XEUA3DNX4HVV7AJPHX2IHEQXY5BS4KAPYOQWVDRZUIYRUG6OTVNNRD4EXNVS5KKJSS5GPQBTBMWWMTGQ3EIBFUIAM264PMLIVTWUVAFYILMDJ2YWKEFK37YRIY2H2J2AHYGFLS5QKV2CDB6TWWUOTHJZKCY7QYZQYTLDCBQF5YLLHBP2OTP2PAJ5A2CHJDB27SAC2PRD2SO2CE5BUKRQIY4TYVTMQDKBWO6354GTJBQKBVDZ3NJOCD46KNW2SXGOHQ4FH4LUV7YYLISL3ADYASBBL4ZUMH4DCTIGJWTX2MIMNNRQ7W2MRK2AJUP2MMKHR3BGOTVD5KCW42L6A6IDHLJCGUZ7Q6FOJ5254AUZRAF5MLQIE62XOEXSG57KAGD5II2ZQUET5DZDF3X3TA4D3G2EL7RVMNTSGRVJ3PBJSA2AHWWE47IMUXKQFFQQ3BFURLRPVDFZJP6JSCLQZRO2B6EYQ7FPX6MDTYVZSSUNFGVDX5Q5KMG3PWVIMIZIZM2OR6AYBBPIJBSNWWJQ3LCEZ43PB6VPJZZ66WRVXU24HBWTFLZY42CMCCYB6GIZCHJTPROLVDEOUB7XVTXUVAZT7UXYTCI7PX5WW26D572IEPEYWOJPFVYJ3VJW3BRWI26LNGPTDDTBNIZ5CZ7SQDAMBIV3WW5DPUXUPAJ3GLRUVBUX62F6PYCZED3NVEY7VVEG72IIXOM6K4FYDOUGRR7SOYOR24STHKWMDKUZWHG47TNRLGRIS5MLWJUVDZW764KB6WMTTCUXKEN7NNDWNTDBMXUJBRS77MSXZFFR2OI5FQ6LBR5JF3KP6UXDBMGMN5HAG2ORDALOG7XRPGGSJAF2NAXPGP3LI3MBEMCI5B6B64O7UZWYRTCIMDBQWEJZLTPLD2NEC5GDQ4TVWSL4IT3GYTCCMGVVLINEQWCMP4TJAV4H6NNWU3CNBDEE2P3TW6U5FDHEOKG4TCMMORVDC7O7TYFF2GO7OUHIFQMDYTNGBMHFTJ4PYNLGA5GHMGFPZSKU2UZILPBZVUCBNRN4LKNDOKLBGGLMKYCFEWBYGSCBX6ZJDGMHKU5TYNM4WMURNQULD6SO7LRTUXFQ7JAH2WPXCFEM6ODOJTJIHVJQAJASEXDJT4CWFDR6AIEYYXYQIB2FOZ3V5ORQS3KEMJV66FZWGVVCUEO57LDRJU2SG24XG2SSWZZM73QGLK6OPLVUD6TUZO6IASNMQLGZJTV23WQ4MIGTAIXW2ZV2Q2ZNPM7D3DUUOVZMC3WKSJ3HH3PV65W2ZGQMVPE6YL6FY5Q2NOBSHDKBWYOZBVRDAHFGH2B2SZ4DRP6ZF5KPZBUAUSSEAWBSKTHRWBFAHLZ5OWPDDI342C77IAPBGHZWV5IO6UTGGAZWPT6U4XZQ3Z42XTBIJBE3S7BP4BTOURVB2TA4SHGFXZVOZED2XA3HS3HDUYZX
VIY66TGXDA53DYWA2QPOELQOSH3ZVAEMCH6M5FBV6TCHN56MJZRTYMIDMLJKRAIBIYO7TBLHDMDSL4WOUCY7D7JTHTS7XZ5PD47777H57NPT772QZP7N77YBEODVPGQ=END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=usually000066400000000000000000000031251321557765100306140ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 409
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7.44 bytes, standard deviation: 18.30 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [900 times] to 138 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 100 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 576: STARTPCOLKV2JN3CCAEH4BLZNSB625T4UUNFX7CILG34J6L3RRYJQFRW5G6EIKSJBCSZPIV3QS72P3PT5POSNN7WB3AGPNSMRSLHBDMQOLX6IRJ6Y3O3ERUBCTYWIZRGHJHINKBAN4KSMDTMTVEXH4JQ5REANT5A67WPJ2J3RTFDY3FGUNB4OT2XZR4F3TJ32KVL2ATOX5MBWQAVSJFNLTNKYHHAEYDAPR5IVV72GE3NDNHFVBOSR2VGAESGCYGAKZLQ3KXL2MLT43R7VIUZPVQT7AXRAWJLJOJMVZLA6BWFMIJT47AHIEH26JDKGQTGN2MHNYBU37E5PJRBQFX2FIS52U2EAXQ57JICECFDDTBKECEKEGEMNNLXA3BZ2GJMTSBLHPB3YR3DZ4A62E2JLWQNUJILJWBCEFW3PANL4XHPXMKFTX2LYGDOIN27KXYTBIKXE7WX22AXWOUTA44RFKH4JDR4OV2BVBDKGUGTY2EZXAQTSEX24XS6E5RLLD7YGQSIWDV2YQ6FJFKSZC6VZJ762SSGJGSYX75MTRDD4ZJTNLU77N7YFZHO46LZ2QYFPQ===END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=usually-interesting=array_average000066400000000000000000000102111321557765100321160ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "usually" and the interestingness condition "array_average".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 419
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 593.62 bytes, standard deviation: 408.75 bytes
#
# Additional interesting statistics:
#
# * Ranging from 25 [once] to 1786 [once] bytes.
# * Median size: 639
# * 99% of examples had at least 55 bytes
# * 99% of examples had at most 1410 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3176: STARTPCOD3GBZWIS3WDKEW7JND5RTJBYAF72WCTZNJBXM55E2DPMLPGJCZI3PKS76FABBSFENI737776677747H57OX577ZIWP6PZKVD776MVNPUCTT2PZH4PTNOG7L45LWTZWOZZ3H7TV7LNBH5UMPLLHI47S5VZ7H5I6P6K3HYXPNXK223HMG2K3PL2XXHVCOVTJ3OVDJ6663ZZG3GWDXRZYF7LNTEHNDTKKV35IY2LUZHNP6WU23JFE544T7M27NLNJZGNMXXZKHFZV3YR47VD4Z7POIPLWLVIIVXWVV7DYYAV2IYPOVSAY7HFV2VBLZ5MVW7VLXOPUVCVXI3I3VY2XLWFDUC6XTU2TBZLM2MLDRV3UOHFD3762W3BFCFG3F2ZJE6M4Q6IWERMW253F2X45PO2IDGBLMP5YWOGX4FHMDRVEJFRIXTC4OL5ILA263SPPQPVUKR3NIT5CMPE66Y3FM2Q24GDSOPFPTTS5DBIISJOHWQYZ5TOOOWLBPLFITDUIXO7W6MZLKUORLFRWVMVSL6K35KDBZBJH72UKERAZCBSJ5GRCRBNSLYAMZKYUQWAYRU6LRCYBGCTTGW52U63EGXPQ5B2PWGFAPUJNX2TEGXYIEHIAKSJBTKLR2TQEP3UYMPA6NRBEM3NKILDCM4IZ34ECNHZZJ32ASCXA2A4IE42ZJER3MSASASRSDNQCEQKBYAOQUFJUTRLEFD3KCYZLJM47M5BTHPMQFK5KSPDEJQXAK3CZJATQJLSLZDW2ZY5UAVWBKZ77O3C4AMECWOBTUKJV3M6INHLKUIFD2SEYEF733CFRH3APIWNIZBLHJ4TYKQV6QVIB4ORZIME465BCTTQJQZFOEZNAC3KLYW5ZKMQFIHEQVYLEQNB6KPX3XOETT5TTWQMKLFCWX45LOB5AVPSFZNQGUWTMXCZCCABSMVQWSZXME6VOTLKHVEZ7TSUATCTHEGUFEGK2DU6LUXTENXVTDFAAKI5KS3VYJSFILEYMGAJBUCSMCXO6OORFYWJARJEKSCJ4F4ZXWC4XURE5PK7SAAVLHQQBUQU6CAGJJIJAT2MEBMR32X5FPPLEYZQZ3EK6KUQG4EE2F3W7W35WCTHNHOOEUA5KJE2TGTXYYJ26MCRWGRCKCDFVU5MBYQ2Q4ZG4UUCHLX7FAQ6KO2HYNZIL2X53G7EDZ4TUT2KKDANRZLCVIIWJATUUKRJZQMEUXZCW42G3HTOWUHBYKRFWNHERIAIJVONN6WBXNKOLZKEKSKFAQKK32QDPQ7D3FGUKNZVYNDPL44PCVSPCE7FVTWL27INWEUSANXIBCX3K6J6HENYXNW2KE4IQENILU5A7VFXSRFBKL6TE2K4XQ2OPBYWLHR2GAV2FBUJOG2TLV4QUWMU7SBFBZ5NJB4AGF4AQZLBTWP3QIZUJWPOVKW36WSYV5XBWTOVQBJC2QFYRWTYNN3SJ4IYATEWZRMYV5H2O3QEPJ4FJQ4KAXVI4SAJ6ZAYMTLAYCTRATQCNI6UOMRXJSY5DTWMBCTIBNPG23IDFGSJL5MHML3OL27I6UZBATLNGUBAAVVAMBVEMF2NKLOVCPQ3LPZKG7FEKPPFQNTZTO2CKTEFRHTYK5SVMROGQQNG5GSL3XKIQEONASX3JZ26L2QSJBYSHIIDTHG7KUK6DWQZPMR5VKV4QCUO4MUYSZ55V63RWNX4W7VBG27542WBKO5NI5JSNLZG2QBAWCQCXU6P2VFPOSB6NRVNPDIKWXBMSEQ3JZSOBBMQLLMCBBYGYIONGJFGCPHXRWFVKVYRYKF5FESRYSYOPUYGE55PE4C45YRPEW6QGREHY2UBLOU2QVHOVSOAIEV3IVKZLDTQYESXSTBXUJJ4VKSVDWLPU5KOXIXD3TURAUFUBCPT3UP7T23NWLQD4UBWK4VZTPZKOIL42Q4X3QEOU6EIUJHJWO37LG3SK3S2C343AGARWBISYAK3LN4UEJ5DHX5F7ZGT
JHCXZWOY5CYIJMRIJZNTKQB7H2HOSUUKBGKYG3UCBUFUVPK5F56IL6MXWZSX7V6PSMJQV7OKVO5LOCIVYAUNBLKZEXDWWRAFVIAATH3OMVVFS6IZYYIW56MIUQEBHUZUIADFAVM33S66DISBJCSEUVBZOVF4Y6R3TBGXNM26J47FULZ2X2TFXM6TFOFAURAUALL3U3GO54DAQ2BST6YA5ITDOJI5ZQWK4FKRS65XYC54Q4PGQVQ3X2L3QXDXQZKAEFIERHANIGDWYT2IJROYSTGGFAIPLKVMUCLZAWGOD7MAM6UW6YLEL7LKYK2ORFLDGC4LTIBYK7R7EG4FK7O5IOKEOZ5QFAACXZXKAVENFNAKZKJOJ4TZVJVMFMJ33GSGAR7QZUQJREQOTYSNJ7RQ3ESL5W5NL27FOYOESE452NYJST4VSL3PIPGBULSWSWADCGVNKCC4W5EZRKZ5NBA3JJAA6VSLKAYFRU32PTOXSJHZZKWLUQ33UUS63LILFLUTUY5UMWHC2DZZP7Q6NYHOKG3LMF7VSOKIAAOAWU5BVSSIVASS3FLG54YO5XHNYJQBFJN6FTKAFZSIHV3GLHJL64NVGJ6FBQMWESGBN43RUNTU7E435B3VR4CT3VMM2UK3H227V4N2Y4V7JZT4DFCIUX6V6NKTP6WIURWRSVDPEZO345TSXBPS5ZQ5R7MK6J2SYYEBLJD3ACXPMJTTFIMUDJJUPMMGCQDD22JI7HX4ONPM3EYUJVIJX3NC2PBIOEUHV2U56TUL6F2EC2GKK4BGRQSXC3DO46ESQ4EKDKERD5C5ZWYFNBTL4W5NUNPRDVYEQXLL5GL3GMJ5ZSLEXXXOITPGGM7C7ZA5KNG7FDAOIGT3PGTGYKN577DVKDF2JDUGP3BQCRIGDVPMLZAJTQHTMQOG3Z3ZBLSUO645TIMWNGLEDUH4PTNL34UN4CUWLBTRKZRMPBCYUA6WPR2NTOXR32356LYOOEQJUVSYZUCNBXRXTUMFP3DIY7SRZKNE7QZTW7IZ6PAJ65LTYDQPD5BTYAYQNWYIB5FFQTPHZZOB33JTUX47JCVL7SBYJTSZKBMT4O3JR3MP6W3366ARFVH3NCGQILRXV2RD7SG6K7KMXCBG6XFO6JL2L2GPATDOIYPRJPI256WPT5P377766PTV7P77SKA776Z7JF7KVII=END hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=usually-interesting=size_lower_bound000066400000000000000000000122261321557765100326670ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity # condition "usually" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 412 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 14057.69 bytes, standard deviation: 19622.91 bytes # # Additional interesting statistics: # # * Ranging from 2 [14 times] to 138211 [once] bytes. # * Median size: 5749 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 89609 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 4200: STARTPCOELGBROKSLWDMEV6RNUWABAGJCB6NO4J5JSN3QXSMV3PV3XM74ZLQVJCRZS72ICBUHIN7YT4P377XV56P367X46X2Y6HBT2335PDP272VJTX276O6RK6V547575CTTR774Y6H2PRLO37P5OX6NNXBT6U5N4KT5CLR6JVV366LL4O3T5KRVU636P5SV272D77TYRVH2PHX5CW4ZOOHO7LZVT57EOTL7ZXV53YLAVZBAVVV72PN2PX4YO55I6GNRWTHJ3U272EZ3PTRZLLYQYR2QFS7I6FT4Y6QZ33KTTXZHE23PH44VS63JDU73AMV42R6Q455FP6H6KDJTYPD332PA5Z6T4RAOFUYNP5H2M44MYOXP7AQDVHVWP7BU64LOIWZ46GR635ET6M7DQYPSDCFVS475T53JSERLUTE7MOHO3HLC5L2XTV4LJQD6H5O6JW7TYRL6C35WTM7HGQ6EPNF5RMLM64N5YYUC75JYOT5LA24ES2U7IAST2HUYHUEPO6WUXOMKM6EWOJUZLHLWTV22W7ELQE37LVNZLDJVD74PDW3HSFEQPHMOEOOMBYHF2JIQ2II7674G3H5PKT77OKKUY32RPFSBILMLCB3LVB2CBVCPD2D6ETNO5M2M6IGZY3IF3DROYNEVKKI524PAJLDVCTHQLVX3SDNOW45JAADP2CDHG33PF4LOBWPSFDKZ5IBBYQ5B53ZFHK4OG43TVXTYQMWPV25ZU2LVIH5YDUVN2TVKEAVZPX224VZAWDIGZHNJ2CND7HWUX2YZFVMZWPFYEMKWJORQDNMPU22GIG6NT72SIBA4KR2LOA7O64UCNGKTK55FT57BI5POER3Q3DLQR3HLY3XSIFQ5QITIMQ25Z4ABYB2K4DLAJGW5O5FR2N4TYQCQKHV7PHS64YMQV445OREQ4Z5LODEG73MOHC6MSMWHWCCA524XY3RX45FNVPVJKBVEUFJLVRSLQAMC45JYU42BANI4MEMKJFRSADPKKRAOK6N7VLLQ4PSXEFNHPM3NIXCG3EMG4XJVJ65BYLVUKMMH5RODZLHGG44D3RQFES54R3LAWRWCF7SRF3PKSI77VUCZ6ORD5BVRSB53WJMVKDLULPERWBV3FR3TFEVLMN4HAF3G5LY4M4LDB5DZAQISONPHPCHHGNVRDAKCVGIXKWUK4KGQP6TAVNNNMF3JIHOD3CIUFCG6CAPL4BVPL7A4LYVFZ253VUBZC255BDIU3XMYFOHPJFJVDGQZOTNE2RB6SGYTGOXFV5JSPZHS6FZAF7DEOUIDBGRD32QPGPKKX6LFX6TOENR333JOCP7TOFYAQGQF3ZVIJ7UE2DXK3ZY4DLI7ERP3TQKG6YTMXME34GZNKMPSQMHOLOIJHK6TIFCK6ACWXUOHGMNCWA5MV5VZEEUDDADWEOA7BCEAULWOYADHMSSPILG75ZJ4DYTVBO2A5FS2ZT4JF3XGW6ITCNB5GSFLDZZXNV4AUPDXA66HBYT7RJD3UVDLPDGXV
TQ7PJ2IPSATMSDYIDM6XIYMS4DHQFHXUK3QFD57ASK5M5S7S6MLYDC73DGVI2O6HEHP2TJEYRIGUNE2N4H22K3F6N3IMWNWRQEO57PHDFJHL2USZ2BUJRSHGSIOMQHGYXCESQ5Z2JXY3SJ7F3V323JCTT2KB3PSHFCNTD36F4YR4EDRW34OUQEUWAPUNQYSRIENQROAJMSKUXF466IPKAU6N7I4RUHOYJ2GRRMJLBNVZLUVKPP3VQD5ULT6ATVLDPWNTUSWQODHB7AS5D2OSFWSRHHIJVWCFCQLUE5WZVTZ5VMCKZAGH6QOE7Z62ZUEIUG5YAQFOWE3ULL35D4LC5SWECA2VADXXCV2DXVKPHIYHZFZRRXFWI7KTHGKIBMGWOVQ6LAKOF6QDLO6AEEPI2GBROZV2AAETUC7A4KLMWICXKKAAF5FUOGDWNURRQYEG26Q2QITZOSEBK2QY3LIOYDFYHNQJJLKNTYOHRBVIDLXDCNP3XTGK7LBMM3KR3SH4OXXFBZTPTHNMHWQU6N3QRIQXFB7ZID5FEMNIBEFJ2NDDXZEJURJPQVYXFZYLOSNHKVSSPF2R2DYZNUKICGROSL2GD54AGFW6EK5AA5LMA6YJQRXDD4ZQG2RXDJS2UBU4FFAY7A6ZATGAPXVC6VODT2VSNBRFE4DZBQQFGPDFIVFXHKWD6UBWUBBIQN2D6GUY5GY5JRKPUTLPYAEQBQG7FBDRIFERB5QGWRVDADOFZVNPNFVWFF4GSG4QBPVS665OD4XT24RMZS257MNLFKFDEGGCFW4YW4P47QE6GB3BWQLZ6H6KZ4HI2MKHPQXNH5HSYVTH54CNQK4J3W4XMD4774NL7RDMT5H7R6YNA3OATVWEU4GON3HMG5WA7XTZFFIWEKG2VYOSQYREKVP2VHFGX5F7MDUWPQOWYEFNGBBMQUBIK3KGKVNGJHEOUF5ISL2ZTL66MMOSDSMAAYGUPKN36KRO2KPPILLANGT73J747ARGAEGQE235LPT4VZHX4PXLKLDU4IPJP5OVUEE4QET3Q6TXZVEOWIUOIYEUSGOTLMQY3Z2KH6DRULDWJYDOTXUSK7XXUAT43B75A4PBK732TURSNFSVGMMARR3HS36XWI2WRJPATZFOZ7Q5GAH6YJAKKT5DDABAE2FB47THKXQ65X74NHCPTTGUJLJ6PCKMYB4PZPADFE443U3QKJHNM7M7UUQU42X634IXK7L7EZC6I7CVALAWA2ZAKDZW2IXELLNUFEHNZUWMRKAC3KC6APBHDKRHYGCGJIXXCHWAQPNOB7JCF4OHYK4EAU3TQYYHVFVTGBGOMCHUIESIU5CAHL4YQXMO4IGFCNMYPC3WJ3H4W7SYXAHMMBZ6KYI2AFSSS3TGKK2GQSMRNKGLYMADQEIUTQCJ6MN6WM7XI7H5F2TUVG4YI7AJMTOPNRX7ZPWJBPA34H3GFYYCZQZLHHC3ZSTUMQM47O2JMLZHHSCQAS4OZHAWQ6YUFOVSOHES47DWFUKZDNADKHBRCMDBNVQWP7RHLREE4BDS334SRNVWTJIOPTQ43EQ5QP4HEKVJNNJALPHQHW3K4GHQWYBLDK2FPJY6ZQRVLF4T44TZYGSPIPSO5Z7IKUQGMAGPY3GHTDMI463CWEOPD2DYV7Y27K73YXMWRRWDZWOX2P5R5L3ELK7562TI34FXPIIZILIUJ2PAUERN7Q3TGLE7WIIE7BL27I6FIXSAQICTAL26OO32YIYPTHMY6NPIROD6ODNL47ZAUVPCYHAT24OGN27BIXED352ENF3JDFPIB45ZLLSN5MAYUQWN4GIBTYEEDJOZRTSVJSNYMAXKS63QM7QCQYBQPR7KMMLBPD5SFYGS4AXHCQS64TWOZ5XBYEJMHCBUMC3RO43TTPSQTJJALZ2VBGSAVCFX7ROY63ES6BXFUHK3ZUPQ4H56DT2CFWT6MTVSXGU5AVU5CQ2MYASLY24GBCSUNSDAGAGW67C7DF6YKDLY
BMIYWOCSBEDEH5XLGEDWNMUL7AXML3OH35DU2LQKNBVOM7J6TUO63SH6SMDFZDY3L6AFKN5SCL6VG4VQVMIT34NGCZVDNAREBG4374PVUHQHFETOIYZK7MLE24QDJTH6V2V5II3RREHE6FDU4YHQYNWNBPMMKU442KBHPM7RJZ5LNZ2HUAH2NHML2X4A5GUC5LKXCPZW4BUOBIGW5I5YYPM66T6SP7OTZUVZAFEOHU5G6VNVRO7VTKYOXLMMAO6HMU5RPO4EJTPGE5D47ATOL52OYY5MB2JYDLIX5YYTMKD6LIMFDYTJDE3PUIXIAWG2A7T34ZTE3ADQD2G4YNDFW7LLKGJZBG37YGPKVXJVCTSSZHGNN24COG6JZ2UGAA5UYCWOG2JWNHJ7VA4IGAPPQ36ROEM43PDHDG22FFKCVFL7KIQWNBA6IBKTW223R3MBWE2PGIX73F4OE3U7LTDTLFJJDU7OTO646RHAG2DGVD46EKAIIZQ6PUDVO2JIBVWKK67MUHAVL6T3PNLTEPRPBBOBQ2JS2YLMULPNXL56O2GQUIN4MA66M2BJ2DBJOBMDFFLV7NQ6BWVEDZVILAK3MHTGMWUG4FTHUWHPNQ5FGZ3X7BZQ6QH7KL6ZA7SGDG45QNCF5L2474PX67LY6XZ6P777HRS7D55776R623UM4KQ====END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=always000066400000000000000000000021171321557765100302550ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 378
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [1000 times] to 2 [1000 times] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARXG232QBABUPE6OOQ====END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=has_duplicates000066400000000000000000000116011321557765100317430ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "has_duplicates".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 414
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2946.87 bytes, standard deviation: 1606.30 bytes
#
# Additional interesting statistics:
#
# * Ranging from 526 [once] to 14321 [once] bytes.
# * Median size: 2554
# * 99% of examples had at least 786 bytes
# * 99% of examples had at most 8342 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3968: STARTPCOD3GB3SISDODCEV4ZLD5QYYSD37PMKIKPNNEF5TYKLVO4QB5K625DUKSZVSBEQTFEPBXY7X7775Z6X54PT7P766AZ4754XLXZ66L57EPV7XMKR37BW47NOLXHXHWNO467NP52XDSX6PNSWVX2BD6T67HGK274NKPVWH2C76MOD3WOJ6O2FUMZN6SCDLMNPKZDO3GLR6TVRSQ77DJNGH57ILWTLO3FPE63PPVQJWVB65E22XGT7M5GWN7JT4UOPYO6Z5O27IW2JP7TFUH65PVVP7QV5PUX6OHTHWEULPG23GVT2HCP2W5U6UZHY6DE5OYXO3PBYQMP62332K6CVVP2OTXM4NVFJQYH32JNVP2NHNS6FNGHTEOMM77PDVT46PLRZ5ZGHVBBIZ5PGFKHSVEUZKQTXRO4RHV645HMT7QIJ2PTHV3F44OWJX3KOF4G6UGUGHH3XA5HWK2NGNN7XE22TOHM536XUUW5FBR25I5KAKVMXWVB7TDFLL6LSE5IDE5SN4YYS5FYNM23FQIS37HBK46F2LLUOVVBYZO3DHFSETMOH5KJQWD2KKLAVAINGL25QRKDSHIEXC22LEAYFY72NUUZZMQHNA4QFZETEHJNGDGTE5DEJHZ3WR3FD5L36HVFHGP7RYLZ7OV37LX5FHRNM56TWHU4P3E6CCX66LC72CLMXLZKYAVA6R77FKZZ2EBKWQ626VO6HX2XYAVHGL3MIKEY2PAZHGB3ZL4JZVEGWYOORDCBJ6QSYNQ6Z7CZ5V4MGRNXUHSCKOGMNE7YJ3KMVQOOSPCLU2VXYMMQ6ZZN7ZTHX5WLDB7CODAC35SMD7ESD2STWSVSSXKOKRFO6URZMPFTXY5Q3IJZVBLSCFWMFAXAVRORINK2VSA5PGAINYVSLEJFEHYMR5CTL3JSAOGD6KWXFUTVZRQUHIQUEFBA4TEKCCQT7NC3UHMYOG3CAELCPSAVB7S7XAAJ4YQYGGTJJCOVCH6FNMQPVG3KKBCSIQN4UNNLV3C6SVCQBQVV75I2BWZSMOSIWUQO5UKY3MXAKFIN64Q7SSR5WUCKZUTHGT42TZFRKKKTDQL4EERKLM5VM2XVDWUF6RK6OK6JANZA3BGSVNKBKTJ2VZO4HMIVE4ER5YA3RIYIGVANWBSOQMUYH7UO43CWWQSXLQCEKBPMQBR5PL2ILHVBAWKT22UXBIT5G2ZLS4NZEKP4AKRHNIBXJNE6F2V56U6IKHEKZLPS7SKJZGGQGZKS3ZLRUWIYIISMHGUHNVF2QKEFH4XWVFOPMGRHU5MVJWBKJMFURE5PIRV44B5XQZYKSYGFIDM4XDMJG7TBHCMJ25BGKKZNIDCP2SGSMKCLHJQKKVL7HLVOPNB3A6ZIRWSRA4QDMJDU3BZWZAVATAJD5SXWAWOJIDLDZYQAKSGYOQ5SMRM4QWGVRZ5IYRFI5UQEPUPXQTVSBAVVDDIOQSG4XCIR2CQZZBVQ2LZCCGEWMKCMQJKZIGGAU4CALNA3X2CXCY2OYMEWSGCFXSF3DL5NAAEPZAOAE75GAFPEXBKP7KPUPVPOFOHIQNNKWG35RVMSCZYNN2BSKIKHIXHUF6MKZRFBCGJ7UT7MBHGT4XTLOM7HMANYE6YA5EH5NO4QDIZWIGPJRQEKX3DQ4DUKB2TRYOG5GZUHZBKA7IY3CLMLRYNW6LIKYJDKKXVHD6E66FHNNTDTHVCCNWDBVOJZNDPKAA44SJZZIOPYVRJSALSNBYKWMAPQW5MSISVB2HQGIHUIAMM5V6F4Q62GR5HWV6LZSI67NUJQ5MR5Q5AVYWQF3VLZDGJNQMJXWEAGVJWEUDBBTJ7KC35XCGKCRUWKFMA4CO3LUBIQSNPCKONS3FEUYDMNXAWK7P4OXMQLKLKDFKBXKZIDDAUGWUDQKGJT4WCJNCTIVV7ELUY3X2ADFETPGBX4RZGUUF4O6QE4XTBG6SFMXIWV3EU5F37GHM6UFYJ7QA4VVW6AO
3BD6EB5NKKSUWJF2EW3PUCYASFI2ZPCBX4B2AFHPXXOUFIOALGEAPAW2RJYYBUVY5ADNBIDJSW2PPL2Q7JQTO6LOCYFA7ZMONE763BZYM7N5JDL2DCDDAWADL5F4YHAL2PVSLHCLU3APOQDHZC6GCFY3OUDYIIBI3XARZJE53WGEZ7CHWOOOGL744YTESJHY5EYY27TZ4SZNUPKJUADUYEUJZC2TDW6DL3DBPFZI7BAMPAD4ML3PN7FEVWIKIISTVABIKPQNL7U3AEOHSMAWH3JRQ6OBBWE6YQWTKYCRJDCK5MAIRWGLO25G5FZZN35ZV5EBPL7UHF426AO3VU32MMMLAA2KPADFELHLWMSR3IBAEXQNS7EZIUBKZUNORQRMSLLJ4LD2JJBR3H7TJDFLFQOODJP62IZP72GHZYRDSZNEUCMAADXGHPUOUP6BMJDO4HXJSTBLAMQ6TRSSY43UBBCTTI6W6YLICRR3C6T6MYZDUTQJM3QDQ2IY5U55O6IVOPTO5OXHA7TAA7ZBYVG7D2VAROUNSB4XJSNWTVKYHWXGBFISBOI53FAFSMFXUQTNOENC57XAJUMECBWKQ7C2Z4FVS6O3MKPZ3JM4OQG4YGJB4NJAGY7LCPAZM44TJEGJMTS5JB3ZR5E2SZU36YDKUMAT25UYOMCA3RRNNPPNWLYELOEF6LGHIOHQOUPU6IXHQ5LV2CMOZBMQN35CZWLZPVUTGHXJXY5WCESSQVEJCSGWJEG5KLDZENUO4J2BRPADQ23NWUXUEBY6SLMMF2E443IKDWNTCBULLL2OTLN35J4E7R536WWM2BBQNEOKKHGIGX7KHY664BXXO57N3TGEMHVIYWRTW4WHRMGX72VYZNBMMJZXYG2OR7O6JNEI5GRALQAQOCXKPAHSU42UTH53Y7GRTXUJ7DAGCGGE7QF3WUG3IGIDJT3VRYYCYAPKXR6LDEAUU4G2YL3KRA7K563PA6B2VXMYU77B42TUWC2R5HSFVIYG463PVCAUUIAIYHKEMZFWIAU72ERR35YUYFMNDD46LON6YOKB7EFPJI26RY2K2NVGJAA4E2EIQDRQ4HDDWVGWITD4KSHXHDTFFGHLIENDJMR6M5A3UNL5Q3WWWREJ53XBLAS2ZHGAKK7L232PNCYUNY2UYLFKXIYERONQ4WXJW354FM63MHGMJWSGLM4Y5TENRFPW4MI2QYSG5DWW4Y3NHMEJNGCQSJMYGIJD6X5FIPK3Q64KKZGTIGCL2EBFA4YQJJ4ZX62YOCEUYFNWJOGQDXW5MJ47X3WJR5KKSB2OHRSI5A72KDENX52WQIPL5LV4I6VROUVBUBVSRBZM2YBSV6C3HAFINSO36XTHZCVTQYR44QJPYDONU2BQOMBUIL5O5ZB6PROOUAWIC3NJ73KXO4XW7WM55MOKFM7NSKM7RII6BFTBGZA446L3I72WE2FCGUJSWWQH5FPANZEEH7GG2MDE6TSGGNE4XYSCTQOMC7ST2DGUMB2RANAG53NSBH7PEBLAGDTGQH3NRQ3X373UZWQIMJV2P6YCS5VQJELI77B53WPO6ZYI3A4S6VSQKPLBZVXOGLPVZVUSZSOFHVJSPY2SWKTG7PPFSAVMPB3B4AG3DK4R4YZIJUQUGYLY46CTU64WCVLGL3LOPR4TTW7D4C64MHBM527HDWOCVTN2XJQ7JWCYE6563SJIBCRUIX5KHJBXPNBF5HVSYZ3CZ7UHUILR7OIQFUSKLM6H2MIB3GM5KH2ZIOHRH7JKXNCHOYJ4DXRNW6TOOYNIOCVABUCSV6GYSC7BYRHO3OH24YAG5D3NJ6J74N6PVUTNLXQ4RTAGXG5L757FZXVFSCFZHN6VC4C62KKQCYLAU5CXM3RRGUKSDNUUJPGJ4EIDAZRHYEQK45WU7TYVGCDDDTTDKSYWTEVAYRQD5I7MJ2XPROG6YUBJOJIYP57H77PY7XV5OXZ7P2FA3775B6AOOWE4END 
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=lower_bound000066400000000000000000000102021321557765100312700ustar00rootroot00000000000000
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 380
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 771.53 bytes, standard deviation: 922.53 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [6 times] to 8880 [once] bytes.
# * Median size: 529
# * 99% of examples had at least 10 bytes
# * 99% of examples had at most 6150 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3208: STARTPCOF3GB5WISLSDMEV4ZPD3BRJAIPZW5LFDSO2GDLR63RXOV3TAPYA7QKHEK5LXKVESIEQJES7XH5P357737PL67LR4P772TD7DHR63X3LX3LWG3LX3N35XWFVS63W47GXO3P3PQYN3XXM5G5RWPT7VR257XL5D655X6WF3JOFX2S5O277X2B4GVGRNLLN4I6H55HZH5FD7PLUU73JTLNDNVOAXWLZD5W47NXYNHNNV3PLUF3LXDRLQGQS6XJ7OV65WXG6VT7EEWL76X4C24PPPLP6M6X7GT6DXX5OPPV425AURQB7PLMNOT3LG7MPF6NV2LRJWHJ3JJ3C6ZDP662UJNX2I7VWG56V5OACD7LTHAAHJZHJ27LIYYXNTWW2OEV5YMUWQJLQCU7RR6V3ZFU6OQBFBANWSTE5PNNWVB6LXLQAUU35S7PFDSFACWW73NZWXJBSIHZLU2323PGHXYSJA2X4CRRQK7QVHOBZOCH2PHFXOOPWW2J4IQVC4ABCPCYYOKNPXMUJCCZSVYFK3OVYW6CVEWUJJ2XQHTTYWVVVMVOZJNHZGRM7JLVAWWDB5KNKARYXNF2BLPRJBDPVUW74UO5NHNFI5DT2HWZBXCQBHL5KF3OQXZRNELX25CXXL2FXVCEKI7DGFHHCVUPL5YRMLFNNWCZMSEO2FNN5OSAPPKVCDVX5DVU2KNKYS7BWXKBJBOWDL7IDV635YBLMZXZCYRPCV5J5EJBF3O4DCNZIRL6CRTOHMJBZRNQUWTY6WDH3QU7HJFCKGMFGNLS5VRSABKJJUQR7UPGAT5VSU3ZNVA3GSK557IAMJWMABZ2WNCL5NHFLKU3ZI4SCIKLXY6U2L2ZAJBIHOILPUNQLPBXUA6FTSVBGW5WUQKSPUE4JMLYIXWFKLFMCMJNJNODNA5TSNZHXUXEBRMTPPEJZ7BPYJDK7OJFUFWQTDY3DMEXMC2SRBRIRC6WUSGNB7AKJW4ENUHUZTXIUT57J2RZS3YKUPJROMCRJJDS6U635AMEUIBJIFPSR2CSJAWEBBYHU2Z6PR2AS6BMMZ3UMSYPLVNNZWKX2TNLBUIUL4LXX7UBNNM36VZQP2CAPPAEAUJBOTBHNSZ2JZE2TI5HIKP64ITUPLHUFJBKFRNWVXCOXO6ME6ASDTK5QQGCSJSGTJNMLMI5C5VUQREUAIP7OJWXMEF7EMMK7KX4CRJVT3UZ4MCQ2WQHDWFSN5Y2ANIIYBTWFJX2SL5JEBM4SHKC5RC47PIDFSXKENWCBW3LCYKHVZXD7WLRO7UFCSCCZMCZDDV2A5CQUEJ7FKV2UQIFTC2GYP6QXUDXX2LCJL5EPZRB2BQZMCJYO4X5YBPQLWPSMJ4OTHVDQCRDCV2Q7UBFRNHFOSS3GQFS7QPQUAEE2PWRO2II7UWGMTYZTW2CVNQOCZSESPWS32QZ4J6QEWMX6TXKZ3EENLG2TJAE2VFF7MYXIIN3LTVJJEQBS6GZPAFHLJ7DR32XVXGPZZ6WT7FHTJFVQ75GTCKWN5KVWMAQHQWAZ2XNFVKFVUYRARCPOFBU45XVCDBYTCOU6LEUXCJULST7BHLDKTKEHT3M43QBVGL5O7YWUQ6JGGZ7QXOVIH26M2CJBUAN3YFL42QSW2MKKPFPMCBIDZMXBRET3KQBA3NICREG4MMG6NVYUO2ET5B6CYTYLP7HUPVTPQY4KEBEZALELCFOUAA3JMTVHL2OJ5AOVKBN7SRBP7ABS66XKYA2RMNEORPSCRPDIALI25UNKIITY6UKT2GJSKWZEYI6JIRXKGCMEPRJMCYJBWZJV42W45TY5AM4AMQACFLOWNNM6ZOVL46IRPZM5U2S6DYSYRZPH2SMLXFJQ3MWCOZRMSTSYSJUDVJ6MKF2527AW5FRQJXHQU4TZ653ZXF6DIF5ABUFTQGW7NAL4LYHD4GNYUCGVCKLDQFGFKJC4HWRN5UK7G6NDLPBMUFVDRSB6UXI3C7DNBU23Q3U
B3NE7TF2JQYKWGIFDCN67TWIAXUEBPY7567FMSA6WO4PKTCAAMR4LL54ZXCFBD3KVKHAXADA7D4WO3BNTH6PI2PFJMPKMF7R5EVHI7ALQU5HGRHZVVI5F7T5YQ4JJNGTRYLSHA7FMNBJYJAQTJ3Y2IPOZFSDVAVJO444RKB4ZS754YCSY66OAPSXPLQRAQ2M2ESDH3DEE76QNWPBR4UYZPU4QGV2FKZNVARF475EPOXKNHXHPNJQ4QGC6WRMLXOLCEYZ6FHH4FWZKZSUISQRLPMOGXW4HCNU2HZPH7P72VWVPCRP2AHQNH3JIO37ZD3JINRTRN7VTYZMQ6DALKZAZTWMMWA3CPV4WFA4MWVW4RQ3JTTEZDAB3P3IHW5ZVHJYDQ5DIXQIKP4TBRX7BVAZ2SICBZ2PPFTZULPHAYS2ZWRIXNEEI5LZIIT2PANV3NUX4GRRE4MG4DWGQOI3YKSHTOJMJQUTDJXV2TVJQKGMGEIVQVV4YOYOB6ARTOHX3DGJUASYELC2XIHIH6BP4BCK4DXNXRT2TBJTRKMK7O5VFRYLOWNHMAVQWOWLKDCOSUQUXC3XZDCKXDTPYECHFM675TXCEPRD2SWY7KOJNCC2CDEGDUXWIKA54XVDUKTFLIV7K6T4J5CDTTXIZI6JFHIUOZB2Y5MLZJQKOU4YJBFJDTUT7V5UWQFZ36OPPML6P5PRHVQ2KOOV5DSSRK2DNMQYVIG4OZJZWORKCMBXLXHKNVO65LXZPZU6UOG6GMF7S7YBEWEMGZIJDEYKYVUNQNMEGTU6ROZVJD6ZKHZ4OPRHZIW4O4VV734QYXDPUXKEMYJRY4ABXPT63X5GPAYTBBSOBZEMD3J2RSAD44N3VBXOBJPNBGVYCIF44IL5DANIMP37AQYYSASDVA4O5SLDFLFILN4PTKNCMYJUZUXINTEPJWUWOWIFUZ7BXK7YNZ775JGAMHBKR34TWV4B7FUFOR36U4MIIUH74VXZOJ423R2BZCRW6C2KJ3KU62B7QOO5HUFD75FJLDBG2NDTZ4546HL4QCE3GLR66EMRDWZY3GL5A4II42JLXIRRZGLYFIM3UH6M6H5LHYPLTF2PYE4GHFS25RSMKO4CXZHMGMTVUSXLKPFZDWA5VQOXVM7L344KYMH7JUWEHZD37WV7KPAPPI7QXGUWCLYP3D6BXVMNI7P77XZ4PV7PL67757H2INFPTH77AII4NMOLEND hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=minsum000066400000000000000000000103701321557765100302650ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "always" and the interestingness condition "minsum". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 413 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1357.65 bytes, standard deviation: 694.08 bytes # # Additional interesting statistics: # # * Ranging from 721 [once] to 10775 [once] bytes. # * Median size: 1156 # * 99% of examples had at least 775 bytes # * 99% of examples had at most 4018 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3328: STARTPCOE3GB3CLS3UDKEW4ZDL4IEETAO7W4KZOMSO4H4GK53Y5YDU5EWUEUVVYXELYWTNA2PJX47P777WP57777PTV4PP7KFV7GXR7NGVDLL64JFO23ZH75PR5LDRW6PLNT77WYF4ZS4ZPFOKY7FXUL4W745VPCZGWSWR5V626LNGOPK6HMPO7HTWTTIFX77EEQ323WTKH7HP4RTNXJ3D63JD67QUDHNFYUU5UFXGMFN46NMVOJW4P263MO7LI223KP35S2FVGF6K4XCVHUGZ3E722AYPLGQVJ3TZ5JHFH3UGE6C62LNL3VEQA327ZK7DKLAK6XK3JNXONJLZUOO7HPPFKLWVM6S3XZNBWX3SJ26QYAWTYJKYQQU3RGXRZMTLQ25O7FSO5PHSKAT6GC2ZUE65SAE4RLDXY57CSGOS36ZEKWM2OJW5SMLXX2TMR73L6ZUKKMMCXJDNXWLR2HZULJFIMTDWF23WC2X6VGELF3Z6HHJZGKYE6RIHRNY34TGDWWE43CDWQ2BRTMYRRM6PYXJYNJWLLMAKEQ5L4Z7E6FFZMRNPGJZADKDEPP4RZINXAY4AGXIHZ4V3V6AY5V3ZPSXXHPVPJNNGJM5JTVPI5C3J4LTGNR6OODWD7OBTOACV7MOGUTDWY5VZEHD2VR2HAVZ4AC7AKVF3TUVETHZ3J5VO65LEWO6APQ5P4W4M2Z3A2Z7AGOJG7LWA2EPPXAG523VPJ55SIHGLFUAGRJN3HQ7UR3WLRNY7HHHZ5DTG3QSPWKYFGTXNO2FPPUYH722AK7JLOUIBFFUBGUB7D5CU3DFUWZ3WKJ7LW27JIP4MQJGYV4HIMLGAUXRKDNHWPGMVQJOPGBFAJSTTAMQYOVL67FPTEMCMGIMR5XKDZNZ2YOFWKMTIFKVALHBJCC7NMFKIJZKLHDIFSVB7URMXF4IGWTM4TMKRCLHZWBD5DLFBX3MZQUYTTQ6GIRPPRUI7HWGBU6WZJEHIAAUUWQ7JIRNT3OG4JZELP4WBLZHG7OFLXAKEAEMCGSJO3CTM6EWJRN3MRY7D4GUO734PHREI7F2ZXIMWFLGSZ7JAP53V6FYIHGHAT2AVKFAOWABAE6F7D3XMFDRD7G4KRAYNVBK7GADX2QAPQ6YC6CCGDWV25ZN243XCTV4G6JRWF6VNV4Q3ZHQLKZMIJD4HONUIRGIV4MYKKVODNCBREOA6OLGAD7X3OW6LM2SJVC324VMOO5S3D44F7MWC46J2PU3PDNAIK6MWKEV3JPNQSIT542KPYBKJXIQULVN7SXTYCZK6V3K5YYSVX5ODMFP3HMCNAPDCFLB7NH4OHEEQTRYOULK7WDJMM5ROREV26DQZAR6QUHREPMUTUSKEQ5NWADIBWFMFCFHQWJXRNDPXH33TTFWDG2CK4OOWUJDZJA3FJGDCEVQCFS5SFZKHTKCI2VCNFLQU7XCXUHS2PJ7EJIXFKOQET42R7ZLIW5EIIXKYHS64EHVDG7VWZPLHDCPDAXGVICZDB7EEUVFDTSRHE
EYMCLPNHZGZYUS4HEHOQB5H7IGZTKCDMPQKAMBT4FKK3TB7VC2X5OTLIWP6FCEOUVM4XBGQO2PAXC3725QOZAGWXO2HFW4UF2LESBRFJ3YP3NKLULIOPC2NNDLXMUHKLGBQ3W272NDBNGKX5T26XLR7HSXKG2MBXQU3GW5X26FU6KNDYHEIFXMNFUGUJU43BZB2VQ7D4OIIZ4DIMRHCLUKYJ372KN6EQL3HSUNNV6VYKEUJ532CWOND64XRXJ3SZA3MVYDVOW2D4IDBFCLQTJW7BZMMIR2E3HC6D7GSZNEZFGVAG4PMSFFUU5GN5UD2NHBJ2527ZA7NNCDE5J4WW7IWDADXAM2FUHJCS632DRQ3JKR3IJSZEVFL66VLU5ZELVL2FP6STVQ26SYDU5SHCSD47BYVVJXKIA2QCPEYYFPOUOVUEPDBTSI5SQFQZVR6GIBR2A2HIVOHIXCU5NLZTZ2AOXCMZUTLGI7TV3VAPSPAUNUANPHJXSZEUAYJJUH6KUMEEDQT3KSUIBJYQCG5T6SKW35VHKPCIBOLYWZOGSBRJXI3NBUHTWFBOY5PU4W64I7QGGMNXE7QY2RKKYMW65D3VCJXUAB2ZQWNHMVQ6D3QWDJVRRRBBJXKZ7FBQMSTK2VBVJGMV3SIT6MXLJOFVMGODWDYWRE4DUXG2VEMYS7QRHKCPPJEOUUJKAIRUKWQ2RO4TWLUJDRIIX6WQFMYRI6W57AVGTWOM5FVVH42E5LR2JLN7I43R3URM4VIFLTBABN3C4PESOKD7ODBKNITRGEEHIMRVZJCKSOQYBRY7YVFJBYZD5XVHMBS5QTFE5DVJ4WE7OTIC5DWPEJYFSQAN7GNX3REUFVW7HK5SHCTSEU23XPDEKQSJ7TUI7GIOUZ6BUJPBPPCJR3XX6SCW3ADXNMRGOLUQUMJMLZD6QHXHBG36UELTMEPP6ACNO7Q5I2GA4UC5PBLQGECKY7XVGJGTFCUGN333BD7JRRWX2TT7J2IWQXQGP5SSOMVEYKBHF35LW2ER4SJYZZG7WRCEUQC2ANING4VEZSD5HVYFBE3WU76YMC65KCNVUFKP657Z6HD2EPVMMMVVZDDCG57X64OM5MNNQ3VFUP2DYXNLBNLX4MEVOVBSST5HRU36DH6A2Y3FZQSO3ROOGES5H7IPRHYZ6MV2I27LQA6SSNZWWBHKPKO4BLCOU7H2JQNA67ECMNHIE26QPLCVFXGOSLV4PBAF2KUOQT42QTD7EWQQNDM4GVHSGWG7SLJUR4L4PIFCDTI5EO7O26HEBF2OT4TCBUPFIK2YSJAUC6Y5TFOBAD73LLX6N5VUKHE7VMR6X6CTJSJ6KNRXUBXHI4FRWY3EHCKW5U3VHLQVKRKO2JGH27KTIRVPPYRY7UMQ35OBGHGQ3M45ASQFDF2WF3EUL5VYGNXM5I6FVBGPFNIDFADKOIO25IFTGQR25J4MV27VX5MQDA2IKTV6PJIZFIT4RERNPHLWCNX324MRFYQJO65Q7ZCB7NKPV6FYEOCOJXTMW5HE6YFV7PAG2IHAB2XWT7LZ4JXYFHA3MGSSJGAHDIZTLORQHDYYRVMPEVVIGFX2MPJWSKBEYF3XKY4FOOQSO3WBV26RZ6XK2MGKGXIJI77TCAOWZN73554BX272OKYEK2IYZUNW6AG4A4YG3G5GVMGKLQY25JHNFUQQIYX44BIQRJL6PPOVER6BNJJGYHZPMN7ZKKN6UCULXUF7OJN6UVHZC4FPTZWFFCONO42FGT2KNZQ3P56PLY7H37777P3L47P76YQWH5577Z6MQZ3E======END hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=never000066400000000000000000000120431321557765100300730ustar00rootroot00000000000000# This is an automatically 
generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 377 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 8729.52 bytes, standard deviation: 1998.71 bytes # # Additional interesting statistics: # # * Ranging from 5148 [once] to 22379 [once] bytes. # * Median size: 8354 # * 99% of examples had at least 5762 bytes # * 99% of examples had at most 15022 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 4136: STARTPCOE3GF32JOMSDMDL5C2KWCBL5E7NPUKZOMTO4F4TFOX45YTD5TGIJ5DL4Z6P5EFARAJB7767HLT777V45PT777Y6G3VQOL67WEJH56XR4OLP7464OGK277WLC7X4VGF2LXXD6LK6YBVS37KKM76OXGVT64V77Z4MOSN7VZLCW6UV3UI7ZVM757UWDLH7Y6V672TSYXW57KQ753752NFU4U4P23M7VM7GXT5CU2TQ7GOIEDLT7PL5TWTK5NZ73443Y3VOHT5V3PPNXOZPO6NZXDHOCIXSOK4UQHB6T2XPTV7IP3VB6T46PPVAAX5LUGGKF23J7GUKOYXX562XJBGKEMOXF4KGFFPH2OH5GKH265HW43ZXUMEG537JOPVKHHCR2S2P5WVQVPB7NU7XV5DC6HDNZO3OD2E2BFGPPCPA7HFPWTDFGTDLF2OTDOC3AWAVXKMTNKQCKSLPFRWXC5HFZGRLP47V4YS55OIG3Z6NMVGBZ3RNOSTCHNIHX2O4VI452NSJJQI7HIFWOYU6JGCFE523IPFWIN3VOL65ZYJMVWTMWF2NM6BBIXEPL73QEZA5SR2G3T6L4RXLSTUBZ2I4OLLS6ZB4ICIEGESBIVDPBHQC7PZXG2BAMAXQ6NKSDC4EHXAUH5V6YTXLRCTGGOCO5GC7ZD4S2MUSHKATCCD2DEKXQBWCSKZMTVB53OMRVAPYZAZUMDZPCGLWN2J6TOSMPZTR4BDQGZPALWWHBK4EYLFYII6YGX2TEZVYWAPIBIBTMACUAFPST54PPRRQXKLYWYIADYDK24BCJ6XHBVOHPEOKVEM34INPZS27KVEDLILBTQ62Q7I3CFFKAEKEVSHNNGJYYA5TYHQUVJNUG6DXBFQMNQSOCZHIEMB547KJIODUYRSTFNNFR5N4WUS4J776WO7CWNOX237ZSNQUXBDSF2OPZQAQQSBZZLVBHSXQCXWXLPFMCZXAI4TVCVSEHNHYBIDAXLHTHS65G5WVYZJCTTYAN3TA4NGN
FIY37H3RQDXRMYU24WFVPXCNBBTRUPJ3EOLGY5LKUJ2F6YAEGCUFJVRO6N5IWHJ45OINL2656CGMEHHQWUIOBTVR5Z2B52AQBLP3RQLORDFQSBMK6XDLXKEO7ZPTWUDTJS7SMABWWKJEJN7YUFFPLFNDPK7PL32FEC4YB72TQ6HAWRDDWI6KYK3THS2VZCWOWHODUMUJVAMLSI2ITPLHWUWGRY3KENBONJP6SIAIJAKQF2JJRFHOVNJCCPTV6MXGYHREPIERVBCHRJW7ZCFYAKNYZHH73OJB32TWQWEEPEKQ27QDZYOS5LJMWQICEFYLJDPBCTZDXJVKCEF42ODG5HLGPV2EWFLYDJWKNBFPRF2BLGJ2A3JLS4CIVFRBOIOXYCJSFVPVQXOPSXEKWXCTVBH2TECMU4ZYAP6CXULPLGHVKYUKILOOZODB3FCFQI2WQWDIYGGVZETDC3SYWLTKNREUMJRE4PYAKSAQQIPMST2OR3E4HL4DUUSNR3DV4YEOUAQN7KRKVDHPOKGZSUWGY3SOB4WAKSRRZX72VPK43SNJH6R3EAEKVXWKRO3EBL4LNJENEGR65B7CJRKZEGOCZQYLZYSYZDBLNLLYAMMV2BUVTI3XMOT7542IS2JFAXDNPASYV7W3Z4OXWLT4HZJX6EUFLGFIY3AVA773WURR5IWUTVYAU63UMKIZ2YOS4I25OOMSSFQUQU3A2QX5D2LDUBEKUITHC4EADIJTOFXADL3WAEZBKJEEPHBBCNBMPMLNTMJRQQRSFBWAMSKGPRH4EGMPEQXRBIAU425F5MQNURUZVGROQMWFPDMDHZMDRGWFRAIOVPQPVZWYZASYC6B3VRERCYBTA4YOEUMZFARFKHFN6RVWIIZNKGZ2OBB5J2OBZWK6VAYROPYQWXMXVGSXWISM6HXPB5VFWAPZ3GCKBTJGID3X2GOV5RWX5AXE2GFH2C2NFJYAK3MOG4FYAL5W2AUIWTOMAYSYLMURTLHAQ7BSGDIL5I3RJ3B6FJNEPTGSFQ7UBYMUVXE7CTC54QU7QCFZJD5KJ7MMZODI7QAYQ7IO5COWVWOBUOQRWO2NGJUR6MHEJRTT6SJWWT75T3BJ4EJXDRLZP4GUYAXSNU7IBSSE5U2VBIIJ652JDKPSNUHILHUZH2QUSBUR44Q55LYKOVAUZ7WEHFKHIFUNPBLV3TFPOEBUU7QV2PYFGDCVHCUYV3PTFDBZHRLVAHHP4CAQGVBVUHTQ3OT2BOGQJLQQDGKT24KHMVGB7WFCH36PSZ5LZQRC2PGWZ5XIEASYHEYDJXNKPMQEB7ONJ2AC7M4M7RPJB3VBKZOOL2BCNRISORVZF7ZXHYOP3TSG2PHDWX35YX7V67IRD2XL2A6EVTOM32KO7OXM2OA4W74BMYOPPOYTVBCDF6TSQ2SVQWHYUI37FHUHZS2Z4DIOYV72GWT4KTFQYZILPWEIUEJOOZHVYPTILO5HYIDAOYJCQRYSRYHZRNJZFG4ANTF2JTNJ6IXM4RVJXSAD5KV723CLI3IFMBA4APKQCH6ZFPOLP3HLF2ABNJNVK6Q3H6NAFPNQONUT3F2BMYDVQHIYKC3OZ24DBCNUIJSHM75KGSFHDOHMYXUMBGFCCMC22WPOLTK2WPPQRFMCFEOZG5T4CO3NJ7ZDMCG6EP7CYHDPLVPVTAG4KVJYESCXXHPNVGPOUTC3MMG5KQP6DCCOJDF5ZJ4DBVW54XGA4YULLWGHCYFVRVZGMME7NWBOBFWQPOE72U4PDCXJ7LWG7T7UO2QX3QLFDKSQUSCCH3WUCZWDT7FS2MU5AZ2CBPWXDCASDQ7JW472ZK3C73INBT54LU3WYRYT3UG7CXSNEJITLEOL3Q3OM4LDTKFR7JAY7V7ZNUWACF3UWREPMWZMD23Q5EPJBNXUWFAHN2463PQHVIDP3L4F7S5ONTLOI2JULOKCVPLQBII6GSBEP5MYYOU4DSWGR2NN5Q5F2PEQPFEFNGKX66MGGIWD
JQG6EY475D4J3Q5XSAT7L6OEIFHHD43P2LY5K5GC7CWA2ZPJOPDOT5WBEFT4K2WWW7OJ7OWLOIPC7LJ5SZUKJJIB5BZ5WU6BOCRXGLRVZJCRZTBM7POXMPAGPEMGOPR4NQE3LP62C7SK72EB6A2DKAT5CQ3FTHWVUHAMOE34TU2JOIRMDG27RSOVU4LH4FKAJ2PRLCDE54GHIMA6QDCPMCMCZFI7APIP6GGU7HNERIMKJ2B3EXL6EI54BDLWNIP6Z5THDWH7Y6LYRLIL7B76ME5F54637TCZ2LYCJTDPDOL2XXX257IM4IU6W2USDLQEU3IUQCYMU26UCFAKPOTPUQ3DITRTBMJDNAML2XMERBKDSGGJYHVGC3NPYUMQHBJQ6QQK3LXNUHTEFHRMHWLBXHVNMCUZGYGXMWPDLZ6GMJWBMZMDGXYDODSZMABNS5F3CIWCJFASPIFVZEOFQ232I7ALU4U73KVSN2J3QCRZH4VDGDF3HR3E55I7T7UQXIQEA5SULVKUOXCUQPULDFRMFJXM75346DBV5LDPH2OAQIODQ5IZAS2TYXHIJA6FYFBO763Z7I7QMRCWG53YWBEFMHN2RIA7KXCKTAF5HLBBZLODNWANENLOSUTCGJD5N34PUDVT2PWZORHADQG5SOMWJQUQJ5V5735CCCGRJILHUMY6SMIZRFY4XE7HPLFHMROJYDA3ZQG4UM5AVDEMP6AC5P5LGSBVY6DRQGW6WKQRVPUJSIZZOB37J5DWBVM2D7TFBZ25ZU5243OSX3NRHQH572MBN56GOUOOYI642S6QI6LDVT5VJQBSYYNI4OST7E55J7BMUXJRQ6RIJHKAYQGNBO3JSHB7ZUSLGYCPT2DVMMQVWMNBURT5JRHU7MCZHBLLC62ZO6SGCKMW6Q7GN5I4LB4G5Z4T6QTOZRI5G2M5TIJUGGU2Z2CREGI6BSBZP3NQBA3PFGE372AF72LBPOMD7N3QM7PIS4WTK6DGO6DQHUFLW5EAMHEHKCKC66MMVL2WGGWTCRGRXZWGZ3BOYYQJDAIIMO5KXQRXEBDLKXXV3UIMFHXBJCPD7EXVL3P673V6P367326PH777D44PD457GP76AUV42J72===END hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=size_lower_bound000066400000000000000000000101771321557765100323330ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "always" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 381 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 859.48 bytes, standard deviation: 578.94 bytes # # Additional interesting statistics: # # * Ranging from 2 [28 times] to 3870 [once] bytes. # * Median size: 750 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 2469 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3200: STARTPCOE3GB3SJS3SEKDW5JNC5QZMTZD6W2RZBJRXMW32PCOYXOEAHEVPTUN67XIOTBCSFAJE7777LZ5777573Z6XL5PP7G5VPV7OLCNZV2S233VOG72XFDN3VZ67P35PPR3X27XVX7P2W277D54D7KURX27WPX7O6ZO7J2O375ZR36WN3BR7K7OJRVANXXN7M7JIZHPWRVU3AR3LWE7V22X6DINZQQDL5XIKFRY6MOTXZMWNUITYU3SH37HIOZZOPL3FN53NUGAHGK2ZUN2QZPYUJDJVLIAW7PI2NJBGC24DKTDCDNXWWUNN5TASFMHDTZM4CDJT46UIQJVZKWWKTNV2ZZM3UGQLHWXXOSD5XT7AGM7HCBT53EJ3QPXWOK7JVW4G6L4M3T5I62IV3WFJD37XPEEJZRD4EBHVSTAQG7DOXOW32J4FJK5GVINIWNFMAWSZRCEILPE3JMCOOOCKKM6KVZOJVM6DOTBRINQHT323JZRJTLBLISFALHT4RQ7737XVNE7RI3LWBGEBPG4STHKQ6V3NMEHENCETRCWFGCGCPHDSLQPCSW4HWTTRWZ5ZYB3ZCVIL36EU4B6IWOG4HODV6KZXJ5ZPW7V6JNPKTQYJE7SXCMOW3HBJAHHWAFWTUV3WAYEPWOXCLV55HNCDUWFWDIITRYVDQCA4QGMWP5AK27FLAPEEJGDF6OIXEVLZ3WU7TAD7CXIKFCUQ4FDUSDANJMSHLMAU7MSN6DCJVUWZGEXTJDMQAUZG5ITSIMGUTVX4K6WF2FNTMST3CUTS7SNUORTE4ROZOQJ22BQUMJ3BIC2YL43GQRDWKQXU7R5D3KQQZIYXNTOHVIKIEKF36TUKIWTKGAZC3QSLHMDBV2THFR5M7ADKW67BMRDEBFZKJEAVSVGOHMVTID63JTUAW6NLKFD3X6IXJKR7MBSPGCVF6MZYTIY2VNM7VFDSSB7GUUMLWKADHN65YZLES3A33DUL4TCSSQQANE3F4O73COLTKSEW6V424YE6BDIMUXXCR3PM2URMMCQGZOSEZKFZQZPVOXKIRGZW7BN37ZGSWJIJNS7HKFPFIZCJ3A7MEV4EN2PBUQEJKST3PETLYBAKQRO3GQXSKQS7INPWEZ6HKPB5ALJK4KWZV4JHXCGX4BNLFN7IU4NHGVIVIFESJHSBS73NZGUTPIWYQMH7TI3TLIZVJVB5OTV3AMUQCYUBVVLRT3WFPOVUTN4RYQ3OW667O2XVQZ5FKRI7J7QTBEBVDAZIIMSCPWQD7NYD2GSL5GZ53WCW7P4NEILRBITAMMMC3GCM2ABOHFNP6WMBRFQNU457JNJOREYBM2Z2MM3TQZEU5CKV3FJP26NWJSS23FB6WVMWMPWBT6E3QJ3ECKMCL23OQACRMTFAAKEAHU2QSZYBQRMOAQS4GQHASOFFGKPPSNXEX7SOOM2CNM3WGGECAIXBARCXGUCZXRJ5SHMWEDAYAKXWZ6PRCSSDOXNV34MDWZSTHUH7Q75KWMIIJK4RLNHLM2KYKA6H
ZIHY7CMJ432W4WYIC7WXFM542RFWKO3SOJKVIULATAMXK336W2CBGAVRGVCQBATQKVJP244XX3VOMFJLSBV7T7AMGEG3TC2BQEZXRVGCLBOVHK7OIGUSPSV3ML2VIGHZMVEG32YJBRIRGKYKDKEMX6BW5RHRVDD5KYDPSXN4C4CYS7PXT5BWET5GB3NZ2PI2Z5D4TCU3B2LDYYK32XJTXMZSEV4KV63SUV2IEJ3VZYKCVB5KZTHDEMS26O5TEBMLPEDQEV5C6L66VYMJ4DBSD64JDYOSU4OXXI5YIGSNYOPW3M7GCTJG7DXK22JX5LGS2FB5MCCSDG4EEQXZWGHX2SLBZS43DPVWBONCQ4PE6LMZAC6OQC55BABMVBMZRO5JGUBDRVVOY3IKPBCSIALMIPPMYHJS3TU5AKZPIHCLWE4FLJLFEZ3SLHGVWY2LYTAGE6IVO2JYDKI7S4GHN5K3FKGM5OA7NFBHPD3TTWLLTDYMBEKE3JRBIG6APHJK7DF7EFJY3KXGLTSMJKIERGUJP3ZYCA2P6WA73U7P2IOENM55BGGBRV5TDJRB2G5BIC55XWQAFSXDAWLUUG3JFCABM3XFQLJY7AH34WICZSOIF6X3EWBBZWR2DEMFUR56WVXM5MORFMRAUIZHZFLBK35UTI4NMUCLVPMWRAAF7WRVZJK6SGXX7M65ODLEQ7FIIGKFHILYHSL3WAEDKEEKUXEQ7B5PFBBTVADI4PBBPOQCGWSGOBTPNRZRLWGXKE5YMO5D2E5PYVG423KCH5OCNOXWYVYAGRQPONZOV3TD3AOPLKUEGUKVXTKPWDLPUDBRN2VOGXXTPXKG6PLBLYUYKU7K5SB5EEDW6UV2LFSR7BZV2H4DMYXUZHTYSWQXXUXYPTFS5TJOQY37CUZ6H65IXOGE4ZG7BKTAFMF4GOOGAJWUNOSJN6XMDPXVWWY6F3HVDOA5FUBQ6XVIOHXA26JKBXCL6XNCSB74AWKA2NUFJ55SRHY5WCMDRJUIM27Z5GNZGEQSXO7EINUQYHQXVE6JUK2VR6IDUYYP6HEZW2VH3VOOFEUDFCFHISO7OGT7KMUTRVBSTGJSHQ7RZOPWHVOZO56D5HBIE523SPYVGSLS2PKUCL5P2YLOJYMFABBXEG6W4PE4XJPQR5LMC3JK25WU6PBOKUV5YFCPGK6Z6LJCREWJAX3IKDDCRNWMNJIY3XA4EEGEI6ITFV6A3HPOTTCTSF2Z6MAYGMNSDZHASBMCPWMDHSQC5IOLMYE7OALUOOKL5NUTU3GTT7MJV7HUSZ5P3F5NBR4PJY3MUIU2INWC2G44IWBNZLSUGK2B6ZZPO4IJJRWVNPD7HI53TNB3R4T3P2MZQP3IIP3LJD36KKNSZ3H6TAC2UMKZP7U2RFY33OFTRRZJSEFXM5HLS6QTAUTHZLFQ3DT2TMR5HFBFYGQXYKVRAIG73PBAOBOSTIIQ27PVXPYODQEKUODMTONF5NFSDO4K662ULHAC4Z245MOYZAKUM3BEUDEMZGOZFDZN7J5QS6GHWJBTXSZ3FSL2XA7AO45W347GOJPIOIYQKING5XQTL2Q626XEX3DTYBJG4OWOH2YYT7T5XQBKH4GJUAHLB4PS7F3P23NTLEPJRIWOUXGP57X77P26P57O7367W25PHH77E7R3YWUYI======END hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=usually000066400000000000000000000031521321557765100304530ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "always" and the interestingness condition "usually". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 379 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 34.57 bytes, standard deviation: 151.52 bytes # # Additional interesting statistics: # # * Ranging from 2 [898 times] to 2029 [once] bytes. # * Median size: 2 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 731 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 632: STARTPCOMKV53OLCCADH4CWH6WKYQBACPTFKMXK4ER3JOTH6HWTGOG3DIQVZ4O6M5TAXBWAIKXVPDX3DPTY3TTLDZPYKV32DBIIG4WZCHHWBHRHOPCAUMJK3JB3T3VCLYXFGY552PVZEEUT6V5YTMAPOK64FC4CBO25FYBYYPSH5B4TFADLXN3C7QBVPDNQB6UJ57Y3A7VKC6CZJKKK7DXQQ522LJ7M4DL2CYLHTWAZITYO73TC7QDR7IDLIHLXVI4IQDWKIFYKJIDQDUYMY5WDFVUZAZR5Q5NAL2N7JFR4EL5AJXCZH4IQAJKMSKLFLJUVIO5OSQNW4V3J5BJHELXIBX7CAJIUM63ASRSHRAZJO4AEV73OQSNKXSVMKH5GOWNFIHZLPMVM2DBOZ35KBJ6OXQBMF5CQAXTN5IEXDECVS2FRFRIIIK6YMJBVVWTGBZ6ACKGZEOJAQAPQ32PXRZ44RJIOBJJDNYZO4NNJBMZUORSTVNNJQE6CE4LOZJV4GDRW4UZG2OAB6BIQK2XNLLVDJCJSGTVTSYTRF7O6UEA5XIFZOMHK6VGVJLWS5A7W2LJUJOPQINDHA4JZBEWYME25ETS7V3BXRTYTPPZM7Q2RPO7MD3LCIWPA======END hypothesis-python-3.44.1/benchmark-data/intlists-valid=has_duplicates-interesting=minsum000066400000000000000000000123241321557765100317560ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "has_duplicates" and the interestingness condition "minsum". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 415 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 7914.53 bytes, standard deviation: 3847.96 bytes # # Additional interesting statistics: # # * Ranging from 2836 [once] to 29591 [once] bytes. # * Median size: 6846 # * 99% of examples had at least 3461 bytes # * 99% of examples had at most 21917 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 4304: STARTPCOELGB5SITLSDKEV7JDD5QY7QAQQ4VPUJME626ILZHQVXK5TEHV7N44T2PK4KQSAQJJSCP647L5777265PX77724PVR727FT2736XV57X56X3FTPZ7WKXD6L727W267HN7H7VM57L52TUGPS4TX676N7PROUP7BS7XCQ7KQOX2OUHHFA5Y33UUJU7P3LNF6RDZEW4M7L3FLT5HXTH5YKXU2PYYJOIWRQLDU7YWU6GB2HH3ZDTPVY67IYF4VHH7P447532MNZZ432XP25HIHVMOTQHD5LTY4JN74SO5UKCYMDOS3X7VSIWTHRUM46257UFQBKRXK2FKFQPTSBR627DI4ZBQN2W36JPPXXKZV7RSWZDVWPXZFY7S6PEK5WZVP2SMZ3BGF7FTLB6WK44XRY5MO4GKMZRMDPO2USDOHWDTHXZNXLQEP25EB33V7IOPZLPHZ3RN5PW4DTPMIAM47OOBFNMWX6RQ5R232WSQ7MA4JAN443PEO35C4P6SYFOUUAUHENZ3AN56XNFZTCF33RLTXQR3UX35AANH6HQJ7MOCOHDVDU43S64DQRXTZZYNMHTT6N2DUJYLWO7HF4TDTWZNYTQOEBRD7HIXHRVL4IZLKNFZN77SB2ILKCPNOJHYRKTOIPTG45OTSIE4ITSOPYZ3OD56HTLOIOI6NOB34E6OF7BPIZLOPFO4K2EU2TRUT5O7DW3JVDJWU5ZBQSJ2G4H47MPTDVPNL6NZBW2HF5JUATH3PU735RDZSLTRWZU2G7M4JUWSEUBGBDGUDXIVTUL3OXRJS4TJ4A54K4APU2LO2F4E5SEG4VZRGQLAX4OTQWG5HCW7XCG76CUAUH6FS2HSPW5ZRPTU5YICPI25RMABLK6N7SGZZ7VZOLLWQE5TB3WG7VHS7KWPU4F5DTSIJVXXUSBQOLTFZTMI6NDZZYYBPWXSMD5PDUK6BBD2OELWLXIDOBXUOMTLQNSOG2ZGKQSFZNJ3AFJ4CUM5WYX3YYTNWXY7BXBOP5HMGNSOEORW4MXVEQR2A2RTXRCZY5YXLDDXTOL6VHQG46QTJCD5SACM5Y5FQ54XCAWMQ3VWD
4ATMILU2MZBMMK5LVZPN5LITTQZAHRR7GNK3YB2U2RPXPGPWJVJTHG56RVXJODIU6AHT7UUCDZIWVZ6PFY75APGFMLS6OX5HP46EQGMPGDCSPJO7WN657BXVLUYMRNWBE53BQSKRY6Z3VC4ZPWZWTP64H7CQSFWEXQ65UCVOSHQTPNGD5JY7HO74I5W5BRZWZN6PJOX7PJWPVI5FSTNGJM3QOJFGZYPVBPJ3HTFK2IKG3NIKW7YPARTDLYKUJQCMNPIUE2RIOHWENBG2D36DRMV3ZY62IJMVCLJPNY3XNK3R5DADXSAEHBHG7IG4XHTIAQM2QDBK2JEDQXZJSIIENEPVLNDQXOGZ3ECVO56VHF7KQP3HSRNBASMDOAQW3OJOPXFESOSQFZQVGMO2R6WBNYC2EHP7AGEPHI55PZZUZ5GMVDXRVVDUST4QQP4AZ56QJZDSNH3QCVQXJSOE3UPG6FCGZGXACANZ32QRLSDXQ2JUJD4QC6USFPQLUPU4MZQRSIQNS7GWORI5SQ3YU7DGH4DPKD3UDOZIUMJZB2COOA4KEKO3KMCIYXWY4YWAFLH67X6FBLNVTDRGH6MQHAEFEYSA3GR6VUNBVNVHBCZRTFMWAA6QSMZGPIRU43AP4O766BTFAAIHFYKOJ5FZWEZALF3ERT7SVNNAPDMLCSZ6P5T2XHAP2GO5PMBRRKPZXU6JAUZLJTHOET6GQ3TAOGS2JXT6M2GRX7UXPPGGA6OG7DMUZ3IFLDJIDJPV5Q45FNPH42G5I4F6SG6EG6QRKPPQTEIYWMZI7HY57ZZLQPTW4J2G57NMWZEDS6VJ4CZRMQGFRTCVS7K4GEOJ3IOCRBJNRMHJXVR3ZFSJSZOQ7M5INQRDQGBB3HLQ3FSEDVJ6QOENJINMQCJV6QFRVPGRKQOKGALDQ5YOBBMMCA2GCPCFEPM3SFQKC5NBFUPURVB5LIOLXDL7HSGIF57JKRSJOSKSCEQNIO6CGYI65XG3JHCWGLMZBLRSKF5JY5ZRJGVNC2QTSEMRHCYJO72XL6QFFJCOTQ2DBV3QR7ZSHNFBJHIG7DUOMMQGLFYID7TCBDNHINLQKBFFBY3EUC6VXFADME45TB4PTZ2MD6KKW5PAILKO6HBU2WSXXAIHHPWPV4BTA4QZVG2M3AFAHT4E33PG2I53GGOPZ3JBUIL6WVNLWBM22EZY7QGYTPXV2HTYSDZJXQ3GSD4D5ISSIOBNXBXRDSQFA5BN5ZSBWV6ALKKZIBXWMIYQW377B3TIYDSADFLU55M4A7JZI5FAWM6EOFROIAIOLQHOTANB4MGL73P2ZCX6B5JB3WLRH6WIDYUHZXRTYGI4SNJRKPROEQLHDNMRQK4R276VSSCTAP6IZDGBKLRAES5UECAOHGXRRU2QOTXH7OAJUNMML36K6EXPGTFI3UZ3JFQ6R7ZT3KRTBF7BN6CRM7ZBJ3AEWNVKOQAJ6Y24DAPMMFXHA4B4HJPZY23UUHES67AJT4TKPNYWUMRX7A5X2NN6QVEYITNTXF7WXYD4B4MUTWZZRM245HQJRKU6RNHTXPIOUGQFFH2RZFSBRLIVV4GV2IQDFLBWRDMEORBRUBQIVZL6U7MQM4YDNKPTJ3ZIUD4HABZOI5BJFB7RJGUOYQMC6FFWTIKQMX4N7GHT3MZMWYTMTQWCFWKRB7VFY3J3JDK2KGMFNIVHRYCVAQ2YSLPQB7A6RWH2VCQRBDKZLBDWKEWIGBZS5P5FNLWO7OKWZYF7MGQMMVTRBGZ7CCWNWIJ4Z3USUZ4XQNLRLGRT7XUYPYNFQRUCM22ADYHE2NLZRW2AJQCER32XZGADXIRGDWML4SFTOQKYKJLUEAUS2JHE4CWF43R6Z576BT7IBTY6QCXHACZ4FS3JUYT4J4N5TAVBQWNLULEWBHF2GRA3WMQUNAYVLSXQO27KEF3RLCEDNLNTJVJQQO4T2LLCENI7ONV5GOY5UBYNN6GPFFAKW6FZZNVCTU5NPBNG54IV
PZQH7WGX5PUYL4HKYHA7JBXIZV2BG4IN45STLBRWKBERPZEJGAJLHLZDCAMLY2SYC53KEXDRK4XDWGMYBX3MLWB4W4HTYJZU7CPTTMMNJDQEYZ6UBERQOEY3Q6YMOTD2XPV6DTMNYNJM6SDXSRQKWG7P3DLGEJVOX2W52Z3JA4VPGN2JT326J6UYHTBRDPDW2OJ7JW7C5S2FYEFV3NDRQF3XUPLMPUZGTNSIFCGMSRKOSIYRVV7ZBQSEUMDA2DEYNB63DVRN3QH52FPKOL2VGSOMHO2MYDXY4H6GRGHT37VYLKBTQ3T7GHEM6YJO5U4T6W3D42HWDX24OVURA3IHRLOYNZ33QYGQJ3JRB3F4PNML3UGFJSFR2HA33GCLWUTDYMO3J5LQ7OOKJ56576K43DSECFCV4PTSUTL5NXFIKR5EGVODTNGPYYVSXLJYS6G4PQTLD2Z6ZKMGAZ3W653P7QXANERRJ6KQGGIMIITZVR32IYDWHD5RMEZ6FY37IFRTEEBGQOSAZV3WRL5BFZETOM7FRXCO5XVMC2LQBE2NXJLD3N5KMXVISFVOTDTFLIESA4EZRHKEXVPV6WD6P7II5KNB7F4CGYKSKUAH6XQMZTO45XU2J6R2CGXPREH7APMEVD6T6BZR3SM4QD7KQFPASKLFD5BZTABNL3CQDB3SRE7E5S7IAWI3LTRG6JWXPT6B4XFJVEYXC7W4RJS5EEWKHAQEKX4DUZV24GAY3CK62I5YEIVENIWXK4BYTXTR7HJ5PEKFGGFJSTHEQKZF4V7SOIAZRIZPGCO23NP3QKTBSTA6ZNQPULBWVPZQ3FIEDHIJYMQR6OCSPBWVDYRY7C3VDFEBHRYVQDZSBOVCJFXFWN4XPSZBNUMMNSDOAUSU6TAYR2KQJ4NUPUIYVP353YXIQF56CONQZ72THK2EL4BS7XCPTWVO62PA4L7RXMOFUQ3U3CAGFYRDPAAGV7XNHDXN7XP5QHQIUJZPNDHH5XRLJMZYBSPUKDUTTOEPOPKRL3XF2BFR4LVAYC64GP3NQWCMU5MZVRLNTENTV6U2DF75UM4OBK3LQECFRR3L7Y7ASOAD7LSFSMSK525M55M3U3S7FRBTAE4KZSH3IQF4EA4RTUS6ZPWFUPIPAS3H6MYC5TH2ZWZ2LNYA36L33R7BE7TALHDGU3J347H56X57737OXL37735MHSP3577QAUW3R5DQ====END hypothesis-python-3.44.1/benchmark-data/intlists-valid=usually-interesting=size_lower_bound000066400000000000000000000104601321557765100325240ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for intlists [lists(elements=integers())], with the validity # condition "usually" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 382 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 959.35 bytes, standard deviation: 668.30 bytes # # Additional interesting statistics: # # * Ranging from 2 [17 times] to 5024 [once] bytes. # * Median size: 817 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 2788 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3376: STARTPCOELGF3SIS3SDKEP6SWH3BWJCIOAY37IUQU6Y6IDZHRX67PMKPCI3Z3G65KX6AACKEQIUD7775PHX777X7PH25PV577LGG7L6V6535PTI537P5ST6PTV2XX27WPD7IVPWOPPNNO6N2I44P6FRT6SXHHV5O3U5XSNSXPGG7ZWZLB5RZ2A7WDRK5SSXX26VWOTMGG3LZ2GN2OMOKZJ7L5TP3UJBNWO6O53IOWPT4445TY3TPJPLJYU3S23K2HKU3K2PXE3FU43NLD726HHQVWHSE3WP7FZVM272OOPFGROVENLIWWOVCORZG5MTWJZT73S3TXD2ZBE4T64Y3QAZPM4CYCA7LN5TTPKS66UU3GB54RR674V6ZWKREUPRUPJSH3WJ3E4K5L3QZ3XYAZCVHXIRMXC76B6UMF5HTD5XMRGY55PNNEFFDNYXTDXQMEIELWPZFGBZOOAQSMCO22OCE6YVC4LPTXIXK2JMMHBXM2DXMBUQONZ4GW3JTQH7D6VMHNG4ZFXJSFGGPRI5KCRPNVQUNHMKYUUYX2PJNXOTVBVBNXUAVQAIYXQNZWOY2NRM364BMA2T5X52UNQKZIXOBBXXL6MDVNMZLT5PY7CTH3267MZKQJHRRTZKEB6PRN7PKAEDLUNKBBJ5VQFW4I4J2RCRIVTOAMUXME2OZVV5B366R4DIKV3ZVRYFMSBRVIR57H6FIK4LFQB3Y4FMDLDG4XGARAG2OJJEIBM5U4HUBWCXWZID3TIELYKNU24BMIFEPMRZHDIFFNFG5REZZGKGXYUUI5QLCZNTLSZ3M5UUHLMXWIEHO2VC2D7EGBFNSWXG4EZVCOGVDZ3OF6FGFSZWXYJS2JTFDDUXJBHC7C3VRLVS636BBYBGVL24V4JNELMBEAXRFG33OJQUUE7CRCKEC3E4C6RIPTC25QC6Y2UT5WB2KQJPSO5M4NJZCLRISQXGNTR3A33WQLFRAM5JS4ZTKGCAIHMTVEDGEDFVBQITO6ZQ7EI4HBDRMLRHNG6ODGPGA3LXOCOWUFKP5JGABE5CCNATBNH6FCZJ4ARPWO44D6ILGIUIOZ3LFSZWCZMTVJIF4O6JPPATPW45GW2HPELR3WJYPTKJARQRPGLEDAYLHPSQJRAS22AYYVSA557O57TKFBQRPTWMUNZO7FIPBHIA5ZTAV6YZTL7LR3X5JZY53NPCHFMUB6UVCWSJMDB55VALRLOUTZX2R3QONGLBUK5HHEHC2BRCVSDS5SOGCUZG766OJFTGAQGMZDOJXSMOC24AY4FU6PUU4453BWT7RYT4K37WYTHQYWC2QNKQ3AK3LWWT3RAOMNDBHSUZYSUVZIXJXBVBX7XFURIOGHD7WAVCW3XYOSLTEPTOSKPHJI3DJ2ZCZF3EB6NJUEPI7B3HQTAGI2D4MJZXRXZEU5GFIMMEHTIP4MNA3XOTUIIRPD2XKULILWAUEHJCLHIJ4U2ECDPNCIOUCO3SSB4237FEHCM4WWZV5KEBCQC3PU6X6KVZQVLFEADMLCG
54FMNUVTZSL2ZLLLQITS6PH3LHUTZPVB3LCU5DAUMNRINCXT4XDJXN3WSTHHBGGCSGZNYTJ5FVESIASWLUU4U5BZGJCW57RXGUKXOZYQW5N74S42GMJAX2TKHOPWU2M5DJYMMKRDWM45E5AIIAEU32ISOKCQGQ5KTPY62Z5FHF3APLPFDTYXQOJ73YSJKS2T26JLW2SGZTVXVZOXKPC2YVA24YKZ33VPFBPCWUL72BHJIQ55C5336AWTMCLWMXPQPOIRH3KRF6ZBHATGY5JWRQNRDL62QH7FM4ZKGQTS5JKOWI5XA4S5J5IZAHE444FEAXXVQ34DCBCULHCD6PSFH3SOKHHYSKKVO4S5MXSKCFEGG4F5GXPDTUWE27VDSYNALAO7I6H2O5TTLB2MBIR4CWDDYHPUJPFX6CA7X27Z5U3V75UUKS357UACUUS2NQLQUHK25FYURK4LEYMOXVAQLPENXI4PTLZGOIT45VWR4ZBMHVKOMSWTKG65ZOUI44YITI2EV3K4PY3UVIFGNGDJJB4PXINKEQROJP7VK4RGF26S32XOOZ3BH3HSOEYALFHQKOZBIMBYBDHKG5X25PE4IIUQXHGTTSPCVYV2QXYJC36KSSPBNG36F3PUVKZUMS7UTTSINDX3DZXJQ2OY3U2J3NFV44TSD7TC272KKGSNDCE2ZKEPPX7C46NPTLRZM5AAP23JCI2KSBLUP2MOI45FKUVN6FDDXNM4K3QLKR7NWOD3JZX5IEA2J42TSFZXR5L3FNSODHSJYAVSUUBVU6G5SGXDX6TBLWUJ4VVAEPGMPM6WSBHIMKEEUQZGAWCDYQ4AMLVHBSHBW3UYCO55G5GCRDARNJKMYILQOGHAXDSVMWQ6BBIBHJCOAADHS2BZQTBWED2OO3SQIPKGA25MFTWJMJ6THA6UZXGDPCFBDXD6GTTZEUNO4SGMGCSCBSR4MTRFOEKHV7JTZIUHYAAVJDKGNC7JL74ZCTHGTI2DOXJRCLHJQ5NQXOK3H6646S4C6S5NNEFIB7NGVKGEOSBAVEW4XHXZCNTFXYVVMHY6BSPYIWQTFMSHGA2TORHARNSAGGLAXL6FYYCSB35MOLCAYUKQLHYAGO2OWMEHXEKVQSEYLTGFMSDMCH75Y7TKFQ6IEZRR5524VXAEJHLMQJ3XWPR4TV2YZZ4KQ4HS3HZDICTUTG6BJABXLP3OQ3QGV3QFRWDO2442QNDEGFTKFEVTYCSGVDSYJ7K6TZ375TCLY7YTF4J3VDUUY7GQICQNDGZSUOLNUSF5CV2C5RWF4JAQCMXA6Z43ZKWUB3L7PHUIRPNAXZP36JICE3XE6PFS5PCCGWTOTTXXDPHTY2CQSLJ2VHVEA6GYPCEH54W4WTOF5RXDN2M6E2VMKPASE46IMMIKP5W2DH3VE6QCEPHCPOCEL5NQQ3SF2IZ6KM65LVDNX2AFT36HUSDR4HIYW6NENEDFSWFXCUYQI34MDYBOEIKWQNUGBRYZ5XOQ6L5DXF626S764UX4J62K5XETZEH5IIAY5X5BHNHJ4XVOV2Z2QSY64WWTCG4OG2L4BXPOT7ON6DLJEHUKPRE6K2P76MGKQKIZIPENWVMKIHHUQOHBPV3PLWPFBQ7IAM66KYPO4HVLFNI5XTZXXGKBPLHFP5Y34DQCFANNWRT3MLVSRQOWROBYYVDDOY5FPGYDVINWZ4JVFMYJHSYPK2NMPVBW7FSUCZCFVUD7BCL6WKJC7PHIET5V27ZFDKPUPBLPHSYT6IWVMU6F6K23RA5IYVVHOLLJN6UGQUEQ6CEFBIQZX4IANQWQN25E7OXP57X77P26P57O7367V2UBDT77YH5ZB666E======END 
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=always000066400000000000000000000021121321557765100273540ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "always" and the interestingness condition "always". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 372 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 32.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 32 [1000 times] to 32 [1000 times] bytes. # * Median size: 32 # * 99% of examples had at least 32 bytes # * 99% of examples had at most 32 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 104: STARTPCOO3V5BBUACADAAYFKZU2QUCUSKYQTQKSQOWIHMBZNWAXW4CC3TLZXS2AVM24QSAAAAAAAA6BJU7IXBH3PNJLPEOMAQKEF23Y======END hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=lower_bound000066400000000000000000000067021321557765100304040ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 374 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 5697.52 bytes, standard deviation: 241.36 bytes # # Additional interesting statistics: # # * Ranging from 4768 [once] to 6640 [once] bytes. # * Median size: 5712 # * 99% of examples had at least 5136 bytes # * 99% of examples had at most 6256 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 2520: STARTPCOGLGB5SLWDMDEEV6ZLL4IL7AX6UXOF4XGBWOG6ZQXN7XJDPQ6UFT3MUJJHCNBA3BUDIAH67P4735Z7L7PZ7PZ772MKXLS7D7Z5ZY7RHTNH4PHHXKPOXDHVCLVSHHWHWUZON5PMWXO57LDRPV7ZZV6KDDSVV2OX7MXNP6WZV2KRD35M5PLSGLNQN6FXOLHUWB3IMJ33Y6ZV2PWULBU3XJDP4HEYY57W3LQSCPVPCL7KZ6FVU633ZVZYGUEPADNHYABB54TM6EVSPUSJ4MN7MXPBZ5WHS5W6YODYOC3RUI4OHYBMS6O5LYVE3DKO5K64U4UC37S3COJZ3X5PL3RFPEAPARDOAA3C5TBBS6PKK7HHXGXH6EPARL3RLFNQLLDGPPBZB2EDCXLEBCJIJSYCLPX3H255BM2TNPP3SM3WPIPBSHP6UXEJUZLRMYT45DQ35OK66YLGVQICSMEUK7QQ6OYW4MNXX5NQA3P7BAMTCZ4HL7P2QI6HYABRXSBJ6YHFTTA464NUWRR4Y5GM7FPFSFTWQTMHVTZPROQPESB3DKL44H2ZGMWDPRPTO3T4F2RLVUE4T4ABYRAAGC6KGW4G2DCEIPBP6VKMEU4CS5W2YWVVK246IXNLFETPSFD7URVRGBXIBJDHF2UNNQ7MVTGB2MCUMSHZWV5LYVFPQI4UO3DPVCJN6ZNWFCYFFFDETXQKZGJP7RKPMWGWOPIMNHEUS6HIZIR5KIROCGXWNXGD3M7JEG3T3Y7IYEZ6UDE2PKM4OGRR7N6WVVVN6S7NZVTSYS7LFSF5TXKZ4724FMX6WF4WEAQ7RBI3ARJ4MFDWK3SJ7WD3PLLGUZIC7FCTEO4S36ZFEPXFTT2ZUSCGIRBA3R5CVADLEMYUEVOM3IVD244THIRD5VHERFREDPXMHMRV4OCAUYUJVZUDK5N6ZRHOVPGZWM5KB3RCJONWSPZ5GMSRXEGKUIA4POAT6OOLBI5VV3DUKKIMOE7CAKVHYT5VNIS46IU4K5RD5LP66LGWTPWJLLYDA5WUXZJDZ2XYJATJAHNQ25M2CCQSLYUQPQLK35YZOPQEM7DGSPAX4ZLK3BSVAF4WQ6SOGRK5SSZCRQH43XVFGZKKRFIVD6HHLK7ALDBT4PCUITHHNVY5FEMLTVQSCPM6ZWMJAU7MU27HT2WJ3WJBUCCNDKZ4WGZOVIA2K7PJKNQY7CRLPUWPVQTM63QEB2X5I6JRQTNNPXV3H3PMLB3NDSYO4BKG4GQ3EJTCYQXMIXM22YZHPBHNXLR2EPESN3WNIKGLIR2XCIHMXVV3HCEAPQ2VKPBMTJJHK2U4OWBWCU45YDHN4AFUT45LHLDGLFX6GWWXVV5QOVLFAHOYF5J4YNHHUWQPVRJZZGJ3FCPOVZWO7ZGZ3RTR5PJDGT5P75FAOXVSXO7PXIG6OLOFTJRKXTW37KRVKEOYRY4CT76WN4RYFSN3ZWYRSJLKHJIPVESLEQZ6PI3ULXEDUXSMQA46VSZ4FBC2LSJY2A5VDRMQP
JNOGPKWYWZ5RUMCT3VOGDBHAZFP5CKWRWRJXTROQXFGTGXJYWAGQFQ5VCFV2MRKRWWU3ZLXYBRBU3JN5HX5DBOQVGJCHRDWVO6PE2WLG65TP2PJV3TTTR52E32W4L7XHATQUVJI5DDXGS3NOL47G57XJYBPSNO427FHRZYFF4FEPN2JKFW2I7HALCT6OJQA3KYCZLLOXP3DA4DUU777V4V3ZYSCQ443FVWANU2NT6YQTUCW5B5OZZ5E6Z3CXBKWUVCXLGUXY5Z5TGMSSKZZ7TGOEHZ5EQODXLMPKJSE7K5NEOZUMUNQHZPZLJNEQ2EFDB6FZZVDDO4INHKIW2I56WHWS53EX4WTJCGLWA7M3GNAV222P76TXKOW3U4QGOPWPM22U37HAZQNAIVVLKERPCVGPNTKUJ5GIU4YMJKJHE7MBCJN37HUXOLWK5ZPMSPW4OKKPLV5KFWN2SW4XNHJE27K6VRPOTJZF5KIJKXBZN2ZJHHXV42THZEGD2BRPQBDKJPRDHAWOK3VYL2BQPQ52TSI6VZ5YV5GM34ME5QO552X7DPYVHJCE4BPCCTL6LZUZ5GEL252TUC45SNEHVNIU3YMNNBTE4KHD5L454KZ6DHMG5LV6VFXPPPJLLEAMSM5O2ZJYY3CMU7XM7HAL2X5PJLCXO2FPGGH3KMTTIWGYFXTU75KEI3HFPLDL26ZPWKPVK4R3WFRBNXIX72KW5GC6G67U3OPXD4FE464TE25C2X5OMVBTWOBBLHZ3PPIBPVUY3PM5ZBMV3CKFAHEZ25URUHT52JIL3LIT6PNKAZTW7B7BUA36AMQSJ4E5FMMJC37XNDFW4RNE3KT24L5NLOUFDY23WBUBJN2QTLW3PTHOEJ3E6QOPHMP6XVH54YDRPVLISOENKRON3U5M7KHV5X4J6HFSTIU3NTWMN6P4WKT367QM523N6ZXZUBCKJ4OXCT42G5NH57X27D6P56735PZ7ORR7Y5774ASNGTJ4Y======END hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=never000066400000000000000000000021351321557765100272000ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 371 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1600.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 1600 [1000 times] to 1600 [1000 times] bytes. 
# * Median size: 1600 # * 99% of examples had at least 1600 bytes # * 99% of examples had at most 1600 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 112: STARTPCOO3RFRBGADAEAAYBKZ5L2TEQEAUWKF5RGGDHKOOF3XPMF6FPXMS6O5MNTI7PNNWWLLA3O3WZW5XNTN3P7T6SXEDTR4YHWL23PA7D6RHHFQ====END hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=size_lower_bound000066400000000000000000000030041321557765100314260ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "always" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 375 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1063.74 bytes, standard deviation: 743.83 bytes # # Additional interesting statistics: # # * Ranging from 32 [342 times] to 1600 [658 times] bytes. # * Median size: 1600 # * 99% of examples had at least 32 bytes # * 99% of examples had at most 1600 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 528: STARTPCOLKVN3B3BDADH4SUUHHB2CAVEP2FOEI2D45XKA7Q5WEQCAJHWLW4Z2GQBFCB73L27OOZN6JVFR5U3Z3B36T5PMRZ676PR737L377E77QNVMBLWTN34H5YT5GZFJXYGVPZLW5QPDRTGKHEQ6O3UXLQW5OFUGOGKTA3HGWKDBINPLOUYCPKA4VD4UN65LEVGR24KMKKUDW3JGEOW7SXNKNGSLCGSRNGWGXDX3W3N4BYRZ53FRVIVIQZNDXFH3MVVYYY2H3NU7CXNN7R5T3ARUQV6YN2TMFDV7P55GSQ6YZ3RVU4YO3KHSYYJQA55TSQGMRNUKUZCGLLRNN4VGWCW4SYULHOES4JNOLHCU33UY3Q3YNQQMQIZYW7VYPP452GCMYO47ONI33ID46BP5JOKC3XCWXXOLHFCROMAX7I2PXR2RWUIXFAETMNWTOKZXAW5NW4JVPMEIX4PG7OLJ7F2QTOTXE62DLTAZY4TMYQIBNN7VWHGPF6V5OUUS5U3VZRRBWT4HYYJZ2NLNVA=END hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=usually000066400000000000000000000024761321557765100275670ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "always" and the interestingness condition "usually". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 373 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 35.44 bytes, standard deviation: 12.91 bytes # # Additional interesting statistics: # # * Ranging from 32 [910 times] to 208 [once] bytes. # * Median size: 32 # * 99% of examples had at least 32 bytes # * 99% of examples had at most 80 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 352: STARTPCOKWVRKZ2WEULKWWJJIQNRW2JIYAYJTCMCJWEEGVHC2NBY4ONENKR5MTH2MB5FUN6UIMEJZOZNBRYGWI6XPWDKNJRIDKGYZLBIM7RME2QHUJKET4JXFZ3RDI2OJIWBCG3L5GKV4ZA2ZOGD56THKD6GCOKYKIN2C52AOK5DCZQMYR5ACOOR2DGIBOUWKIZVVQ6OHTKMRABEC2BAGEIRGQZPWIDDSXNJKDRNIMP5BJQ4BYMBF7GEJFMAB5GSWL6RGY5PYHLISDZGG4JBVX4HZLZR3HXZTPOCJRZWDDNK3M74MZJDG36BJAXJIDYS55MSYDUC2LYWU2QKGA5653DOLQFQAQW2L3AI=END hypothesis-python-3.44.1/benchmark-data/ints-valid=usually-interesting=size_lower_bound000066400000000000000000000051711321557765100316330ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for ints [integers()], with the validity # condition "usually" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 376 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1227.23 bytes, standard deviation: 810.66 bytes # # Additional interesting statistics: # # * Ranging from 32 [280 times] to 1984 [once] bytes. # * Median size: 1744 # * 99% of examples had at least 32 bytes # * 99% of examples had at most 1888 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 1672: STARTPCOHLGFRSLNTADCEP7C6GOUFITJRI5N7SJEZOK2SL6LUZ7R5WFEAE32BXI4I3DUSIBQACLEW7ZZ776XV7PZ6X7TR7O7B6OL7XPWUPWK626R7Q7NKV7V6X2JD7W72WW7NO7623V5V3DJWI56H6XHHTXPHNXMOQVWLLFUG73GV33MM3EXPYY73LV74HOJXQ5PN337M6ZVH3PSC5FY7IV3CVZXFYPPGBVBHXR5RND6AHWYDWYUA46HCRX5YT2O5K7N3V3WYWCM5FNFD425ZWAWF3KYFHNMDZ7P3M3F37WTLGWCNU5ZJF53GRSDXGSH2KJ7QCWUJSZYNZ7UE6XZ5PWC3T2V3U3WOWF4GDX73ZIJWQBV4TU6TTYVUMT37F7N53F7WZ3UZDGNFKEBU7K7BRLVBU6GEHV3255K5O5MDXET6Y4PDWBODQKFA5NMHZTCB5ZLTEJ44FB6X2OBNLFXTM4QZNCF34PRPCAND32RWOWWUWDKRN4LJLXI7ZNOKF5KTDZ4O46LGLL7LLXLKHXVROPDJ43J5XH7WAE6HHNMHHHYMBFKB5YRAD7SUL7ZJLOABXUBDMOSW5EUZL2U5L4RRPC24J3QIKESMPGRMBITHHL6SQHCBCKTNA63HLKQ4A22454HYCCEKXMGTKNMQJ2ARDQBD2E35XD7BXTOIWHBJNXA5PFKEF34FAJZYDS43FGXTTME5GIOFSKGN2WHC7SVGUG7ODUU65O6GBEJTLBSA66MSWTDNNU6UPHC6OLSZQHJYOHUCE7AAAY27SYHPX53JVTCM54GMUBKEUS7NGM4QMKPYIMHSKMRQ7E6CWFZ4YHMQFLY5N7PS2MRUMLWLX3RJ3JP7GITHGCMJBKCLRWFDKKAFM7XRE4LQL5TM4DZVCRVAXXRWL5FFI2BSIWDRMHCVKA6Z5DSKZN44V33KCGJUTWOV6XEDIE5SJ7W2MSMMPBDMLTHTSPZYW4KESKBFEFJ5DRKBQ5HPGNGOY5YZL4UTN3M3ELPZLLPKBNWBLGHU4DPQ2MAJSKB74MGIGQU47IVOMNEHN4F4YC3IT42ZCAOLWEFCEVEE5MHXJTCQU6BMAQTUF5LHXSBQCCCEKKG3NY5E6KTYOXKXEOBAREY4DGZUNMSVRUNAOA3YJ7MZ4RFKMPSEXQJJPIU6ZYREUPDZ7ACFT643CWG5TJFHBRVMOTJMV65NGBOQXVMY25YFZU6R5DKEFLZZDBA2Z7I34YUWREGFKAMUOHQKHUTYKMGEPA4FIJOUT74BDRMPLI7HA5JU5O4N3JYVV4EI3ESHDGW33GL22CNU46S5T2WA7ZQNM467YR6EVK6UPFBYYLOV73IJZEUPKHCQOGQTWM6WVIFG2QFNMSY6Z2RZ2DBQUB3CSBZPRXVNCYNU4S5GQLYBI4NU7SMULJH6YRJKG3RJYCAKA6YTPYZJTY22DTJEZ6EGLGBNTQFTYHS6P6PQKGEMGMUVXFK2H2D67WU2J3PPT4XDLQRAZQWO3GFUVKWG5AGUCACEHLCQHQNVXCLMMKXNQZKHZCG2FNJTXGXGLKC2HSLOR2HBVV4LSCIKLQIG5JCRL7DRBEPDTOATB7OS4TS7WDX57G3SLHQGLS2LZ54AAQ4A6XJNFSEKB3QV6RLFBSHLOOHSVLM4P56LXXN7HY7X7XR7N2H2H7H5A6GK7XHIEND hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=always000066400000000000000000000022401321557765100313110ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "always" and the interestingness condition "always". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 384 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 2.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 2 [1000 times] to 2 [1000 times] bytes. # * Median size: 2 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 2 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARWW4Y2QBABUO76ONA====END hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=lower_bound000066400000000000000000000116701321557765100323370ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. 
# # This benchmark was generated with seed 386 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 6083.48 bytes, standard deviation: 2640.32 bytes # # Additional interesting statistics: # # * Ranging from 2 [2 times] to 62752 [once] bytes. # * Median size: 6097 # * 99% of examples had at least 385 bytes # * 99% of examples had at most 9697 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3944: STARTPCOE3GF3SISTODSEP6SWH3BW7DAIHVFPFDSGSDGZ4PW4N7X3EIH2V3WIXHIVCXKFQJACEM6B777PR5OPP57P56XRY7LZ6MPNPZ74LDHZ7W2SZP57ZYO6P66L2KRR532VR7B4PUVP257WGHNP36I6H66LGPVVHN2ZZ463NXXXKO76447LZWUF7O5MGB3232J33BVGTNLDWWHMLUB3X225UCP5LJUP3HFB765P5HQ5ZETKUTJ4JPHU3T3T36WURJRJVKWXR3V4YRPW7WTS5U2XILS7PIZVZEOHDSXDO6ISXEKAHMJNGPRNO2RXKLHXS77MPI74PUXJKIE4W7PU43FD5C76B36Z4GOEUCQAMOPCXVFQNN566R2O3ZEXNRNL3KHNINUZ3LSUU3UWK555O2LI52TLFFIFG4VT5LZSTSKKHU77Y53I7O2DKFHUERICPRCOTEC4LKZHO2KPGTHWE4FJX6TIPCPVVOJ5HJ27VKKPONM73PLTFDVKG3LTI66PL5LDI5P23JCSZL6LSOWQWH46FR4XCBKYC25VJHEAHDAFCWP6Y2BA4FWI5HXLVWBF6ZJIV7LPJPLLS5524IQBT7M3TPCH5NWKVHCB4TBGVPOPTKBYU67NLCFYCAJBYSD7MOVOOTTAFQZGBT55DKNLQL2WJKZKWXYNJC2GPICKWCB5FHEPAX2BVSJKJZ2D2CXJAKMWZIYW5LJY26SGI5EC6H5JRPBYQJPOEDFAOB2BYOSJLTPLCJEPAAVOQLWQK2SXPEVEWTSV42RPG2XDRFPMUHCMYANMPRY2ESSWTUUJIN5HY6XT25KM4C3KTOBSDF5DGJ3ZLOFXRR5EIOKT3MC4OXKDVNX26D23WXDHRW6K4LRA2LK6OUGOWNMWE2W72P4HCBL5UYAH4ARMAXICN43WDVXFDND3O6RGTCZA3JFWZ5AAQRIVTIXYADT2R6E3GD62V3D6STCKFCXT62ANHHEWORKDNKAJH4UBEB3SIKZRW6T7RKFHFK4LHS5JYFBPIV2QK3PHPWVKB7YWBJUS6PNJSLVTORLSAZ7CUTQIAPFCSVRAFK6R7A65G4ISVQOXKS5DHVB2QUIH2G2Q3LO6WXO7MLJJU4J67PHCGMC5GVXKIGWOQKTBINE4JYFWAERTAG2SRIUOVAABOHDX5BT3QIENT7QJYUHCUKI27SHWKTWRLU6K65WELBSJOHD2PUMCVX3KD2NI2DKN77JECJTIF6MK24JWVVLKNFKEKGDRSRSJF4AZFCEYFCQNY654F4O6ODIHLXJHWRDIBUGYWAOSIWCIQQIODDYBJ4KET27X6C7P523UVFP3UJZYK2SUPBMYPR23XWIBADZBT5ABLGVTNIDDWJSRK5MCTE5BT7BXLIS4ZGW4RWCEDTITCCVB45WRZQOHYSYPFUYUECM3BI6XLXQZDVT7HKF6IS47RLKTDEGIY4SRXB66D5UFBKEVPZ4G2URRAFUVUBAMNGPHU65W2NNB5QHRWXHU3WAKYU5M5KXUT37L54C5
IFVMX3C5KFJFMA4KQA2AL4BM7PVL7HZJCHGSJHNU6KDUVJPUWDLBGUO2MECWN63O5JFAP4B72QSBBJTM3MS24Y2AEHDZUQJAOTUDWIJWTLOX4GLPW2TKEVGIWDNI6ZPJFJEUDS4CETGHFKXJAMKZURT674O2COB3443FIMKFG4GOBNBCODN6XOQOVXEJG3DVFUIWMQYKDGNLYYWWNCKXFFFEM2DCHZWSI3U5HEOQECHB4KBJOYSVUNZSDCOJW7QHIKILDMRIYNM27XQ7OQEPZFKKRWLWKOLTPPJFFDJ6D3SQ36KPIT2ZSQJW62IMHUUCBYNRLLLUD6QADGHNWQIB5OE6GNQ47UACXDPF2FUWR2SVWLD2DD633ZIPVBNC3MCAB3N25AAIG5BPPDNNV7KY4MADWNGR3HXDAPC4TFKUYGPO4HUQD6T6QAJMOYME2BZXNW5EYP56YCGTSQ4WSJ4EJ46TQTQU2HQIBDL3GBLIHYHHIYOCMFLZXQTLSDME5AEAFCJCPLZTBLRIYBY3GVDWB4X3DROZANZEJGQ2UQJ46PM3MQO5TAKAN5VWJO6NO3YITWYW5Y3RLCKN42N7HKC2GFCREFOMARRQO7CO5GTJ4VGGEELEUBBFOXIRFG355CLFV6FXIV7YYVQPBN333IHWUHCDQZX3TR3P3B3FCDMZQH5WITKIUFLRBXNNBOYNOFZQKWSQYVLYRTEEDOTRSG22WOS6H5OO5M4AUE4EQAHZVXKFVAAAEXMHDYX2MEZ3M42HF5TEMOQCP7RV7XINIGNAPQSTUOXJWQTBB7G7XCDWCJH3MPNL3TI5GMDYTONHU4MUK6ZAHZDYTLS6J55MBWWQL524FRHUC5JVAHFGUVV6XVPVEKXA67MVEYXPYPVLBG2XHUVFQZVGBDJKGPW64HL7EAXLQWE5CVQBIZDMEH54B2YHENKGMGBGK2G5MRTBGAYN5J5AVPN7WV32PYJ6M3TSY65MAZIGDIXG4CAZG5DBT3CR4GCXLO22EJUOX4LIW7AYA5JQQIDAGJITMIOVNYYU5LNZQPMLYORDO6AZNOVYXIQ4DLKVTDDK7BRSUTZ3BPKEE64XESBUD6HNGI2TTY6VPLTB7QZE3XP7NDNYANPDVFG4G62YUZQDXD5NC2KGWYGRELYTGVXHQPIKE4HGPX6D4RMN4APWU3HBRW7W64SQKYCWTE7NRLHCO2TAZNEAJGUQNZCGW77MW6HIMC6AIHIRYNYQIWDIFUMPJU2KXJYGFKGRUCK3ZA7N3WGKWLLRQYBTEBDQ4546TQA7UO5ODKABGXUETQTDIXDEYF2BLV5RNFIRVYIXXMN32CDQD3NO5LU65MNE3QTFJSWR7ZTPW5Q3RXUGOCR3WARPHHWFHMILI7ZMAMPDAZ5LGDWEEZNNU4KDV4VZBVI4YZZEJJDW5LJNYNRY557LVCFKKD5GDNDO3AIFMDTXTBN5M2KZ55KJE6WTE6REY6PMNZRSYQMW7GYQN5HZ4TAUXCGUBDW2VKZ6NIK5B5SFCRBWNDQEWQ5REP6HYRH7GUXACZZ4HGWVXAZJYMVYKWRNNYFL6FVD5F5EBY6SAOATEAG3ENQWTBF7ARKISB247LVLIUK6HUOUCALO7EMQT64CYSSDEB6IKRWPSOY6VZP5VKMAPBT7Z2M3VJSGMHKG6P3K3FPVNUF2TYUIPCUYJNOKAZYCHJOWTZOQ6APMGYOZFF3MSBZTKUD5UPC5VENGJDFRRQ63WV2QH3R675BWTHJ32ZTUATLMIDAZXWMO626D2FR44PKX3LKOCEW3XH2WAUNPZIQTMTWS52UKMUP3RB7SBXW3QGJKSETKDYPUDVIRRQYKBXXND52LRAGE5FQ3ROCXINGZC5ARTTLYOC4E3UX55X6RRILM2HR34IJ34XWZVCMTVOMWEXOMB7XPQYCOQ3RDTDUEL3IIZJCEVJM5VEUZFMTYSRXWNRO2JJHSJZGAWR6MTZBGGN5VEKDGJGABF2UI4WYXL45R
NJFBKOSWVUTPPXJLNBMOCL3RKVNRC5ZB23MPIFPBXSDOLM5CDFP7HTMYAUWME2RJWMYZQ5YDB6FJ5AVRQNUZSS5BGNRZ3TQK4Y3USFQ4UY7LKVMWHFEOOZTUNARPYN6AGAM76RGLZLKL5XLRMRJQJTCVHLXM7EV6TDI6325YT5VL64E4BHNZUGOE243HZ7E6WLLAIANHAZSCJ56SWNKWL4VOR6BLLL2GPYY3UR3L7YQZXGFXYBTW3N6YXAWF4T3KZ3IKINXB2FZ3YGAB6YECUH4MZZUAIUFNWM4KQJF6UMIYYLFACYL5XNJHK2QUHKTA54TNSRRRPG5SXGYQT3DSWUGCXEZE3DL3ZL4N7MPKML323PTGSR7GTS7H6CEIRXJTHSTGDO2B2735P767R5PT47PT77XY3IT75537VSYWGUI=END hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=never000066400000000000000000000116231321557765100311350ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 383 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 6323.27 bytes, standard deviation: 1171.34 bytes # # Additional interesting statistics: # # * Ranging from 3554 [once] to 15672 [once] bytes. # * Median size: 6224 # * 99% of examples had at least 4224 bytes # * 99% of examples had at most 9688 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 3912: STARTPCOE3GB52I2LSDMDV7ZNLRQ35CDREKK7YXS4YGZY33GC5X65YSBZS545ORG7K5FLFEJAIQH5463T777V547757HNN6775HXTT3PX7XJ5XBLPN5PF5JVZ54P3V5LVHP7HVO7N5XP7T257U5443X3U7TGZK7PMVIP347G2G73VZVFH5R6QTJ6OL3O367T6X27NQX5PSOT5RVKUWK7E2ZLZ5L2PFOD25GN2LZOZ7YZ4TNTY5YQJV2SQQ44HKZ53H3YU6HKBTCV7OZUBPPCVLMV5JVHPOHMALXZ22FIUKLGFTS6PL5H3IT2ZHGN4CX3SVHRVWRJXNVG35DVXFHBHCQZZSWKT3GUDX5MVVPWO3IZPJ2W5NX2SPY3DXLKSPYDFSCTW5IWA37O3D7KAUV7ZHGR3JTTY3LXVGDC5WX7PPTNLGXPSQ2VBJSTOVEQVPJ7FDNI5VQZZ6YWIJFC7I3VSY642SHIVPGVHXPTRVFQBBIOWFF3D2UQIDH24RYJFVPFNKOO22GNYV2RGHMNKT6FVZ3MZHKXJLN47ZFVKOPT6LN3SSVOTNZMETVPHEUWKLDKNDP3VSSM5ECQV6GZ44W4KE7LQLMIRPKZPJTCQGWP2GPLLCPQSGG32M2IJHJP44QIX5KUEZG6EFV2ZUU5J4BPKQSV75LNIKLULU25RAIO3DF3WG5ANOFUW5ZLO2ORTAV7MRWAL7KBDXF4VKXSXRZDHNCTRTKSFF35O4CSX7VOULPQ6L7OGBI4MDEIHLVDXUCSWH3W52VKY36UGK2NWA6YR77LDUWPDWHBCI3ZAWD4PPUDXZJ47SGP2XXTOO6PNEUHCRUSXN2BK47M5RWBACFBHPXLUWKVLMPISSSLXFDNHH2MFS72JOOZXZQPNZZ7XLSJYUJG4IKLQTJN5EFSYAJ43ACULFPW6BRRTJKS3W57POHTBFI5PCCBZXHCSO6PJAL5DODTMWPE4YIY3OMGCUAXS2MECCJ6TZQAD4QJDKVELVT4Y5GOW5MA2ZKPCU6476CJUTL3N6JYCP7NYZR2YPTHHC6V6PATKOMNDSP7PKHJYIVOUPWYW5QRT6AT5OU3EL3GNGF25PX7WTBJOSFRDWYUGNC4IA7HFME4A7F6MJT4IUKU7EE6MOBS54KIFZYE4XU3UFVAXJGP4ECKNXL34XJJH2ABCGP7BJEZPGZJTOZ6S4TALEY34QZ5VMOLIBHISYY56Z5G4AFJ7DVUDBWJL33FE4JSH5K3KKHAR3MKB27RMCS5ASHXQNFO5OAOXCNUG3HU2CW6VJUWWL3YHQHTIV226CZ56IXWE63AKOOVOSFE4MQGCMPL2DHAUJTKTXLPPJYHUZWGYUMJ3K4VDSSXVA4JQ2YBCQBCGKGXW23GAT7BEMV6WHRYL6XE5A47DDVYN7CMWRO5NOV62ZMDZFOSB3B22H6INSKSWFBR3DJT2FW27OJTVWKT3OMWS7X7OA4FKLFSA7JPTO4L5STTKWGUALXVA67SXLXVXTONLWAXN2M5IW6AKLWBZ66YFOD5WAQVKQA3DKAWBX67NEDXEPV43XOLRFDHQT2VMQ3YPNEYKWTFBWG3576CZR2RNPIJRN2W7IL2RQP422VHQM2VCGVL3UB2RHJ5AJFLPRUMDQCRQUB6RL7VET5GLETIPKKIUJUPQDA4UD7VCNIS5YJSRLXRWCT52B5C2ACTXKO5UGH25KBCELCR4LBARFDTNQTJTLLB754H36NOECKDNY6E67WB6OFS5WHD7UC7T4SSDPJ4Y4G2PYMHNUNJTMTHYYBEZVWYX3JFP7PXTVXTNPJK4ST35HILQAUHBAZKMTD6KEYVO7PCVSONKMZU7BAEKTUHYXIQU6IPALN3U3Y3NLUI5LTINTVRV6Y5ZNO7PF5TXFQEPPHWFYMBMCTCUHZNHO3OYCKHTVGO35THY6J7662YKJNVX2VLY3POYDUVOGLMKAXVJ34H543RTQJNUBKONNEQLHAZFJ2ONWAPN
Z7VSQJ5VZHAHBDIOBJEA7ZRAG3S3SJIOK4GE4JROCGSN2ZHFWPQGC6REXQHOVE6WKZ6FJQK2J6GPIJRO7WEFL5Q5UAC5O6TBBT5P3IGL53OJYZOP634DVPIJVPR62G254LPDIV4AS2CHWD74N4TMCBJF6HHSM6RNPIHYKRYUB7JHKMMOEQSS7BUSP4RWN7AYBZNQPWWDYVXJPDSIIM5IXNU7QCXSQWB55RWPSGJWM3ID7FYZDGYP2FPAZJDNM5RZF6W4XWFQCZUUE2EHQUFBALAIDXB5UYWAGPNYGB2WKXZNAL3ZNCQ4TWMLBVJ7D3RDSW3DEF2DZQAHM2EJITJC5NR3RHA4E4ZQLBZ7QY3QLDA73PLZXKJW5M4SD4VLZLHERGV7F5DDHRHNQK5JERPGNKCW4V2YUGUWE4MAVZRPB3ZRYALN2CAUG33LJAQXDXDWKJ3O4FBVZTRNGEZEJHDY7NDTQRX7I35NREV2IMEJB55PKMZPC3362MLDBLR52456INKJJZ4ZTHWDXXG7TY3EB54AU475GGHO7DBG3TFGY5VJ73V7NXIOR26HP3DS4UX56TEY6AMU6CFJ7OLYONOHY47H7XESV7G2O2Q673AD7MDAUM7BRRRO6G7VUPRQ6WXRZB3BGNFAYFWTC3NXSB54FW7YCHZQJ2RBZADNSPU6CHHUKZYAZUB5BQ3TIDSLPKMSJRTJU7MZPMBMY3HYMMWOXNTXQG75AW654X32XRW5KPJBNK6K76ORB5CGIHOWCNWFRGHU36YVZWGLN37S52MNFYGC3I4HCVSDUVHXRUUCBPYRI3LRRALG6RKUT64XHYPYYIZTCHPRW3RUCT6UPB2ROE34UCIBOX7MYZ7HMUTENFJ6LAEG2BBZ6K7Q2ZS663KD6O5XFYGLCKGALHG2CXI4I2KVUQPR6DVC2GRF3E4ZID4T6LTD4H7ORBO2HZJTD32CEI4NHFXWQWNFCFZGVIBCDMFMJGQ72PQ2BDZO2PL5HYI6R6MDYU6W7INUHUJQQ2OMRM6URQAJXRN5PZYT4P347JGSJSBAFWGRFCJHF7BPHVQ7AFUTKOEUBO255SZ2RFD3GM6O3YLZF5HMP3UYTU7NRCNJHLYUDPJGPYSEBCHHE6P65KDHYOPKLWGLQ5AFDNU42Z7RZOXHD3W2KWJFMBCTVHP4LM6KA5J3SSRB7W6FYBXUZNHGWEA4NIHPWSX5TFQKM3CJRY5LTGOXTRJ4MHI2IW3RDTDF5QAGYGMUUCZBRRNNMVP2CQKR3H5TC25MBXCSZS4OIYPBKUBRDMWS3JOUNWDROBXIPH3STRA2ODB3O6VTPKDGPKUMGHKP3OOKNJH5N5GLXYELHBI4D3JTSWF2IMMP4I5H2FVY2PBLNP5Z6W7YPGPJOJTNGAZNIFGKYHPWUZWYGTOLEYU4QFNSATCXYCDIGISM426ZUBWGVCOEQJXI4O7OWSET34P53X3AGKUHLYYWSF7F35HYT2YW6WISNEQJDVQOHW4B2I3LMZY4LYORORQJ5KHCBIKAPHFLGHAODZNYAGI4HS6GZLDAWHNMCRTSLQDOCPQMNVYJA6D5UTANCUBSSJP4XMB5JBGB7RDB3MDFPDVBWZC5IRYHRFP6B4AK4IXANXQGULE64Z3YWRUFT6D52DIY6M3QFWHY2JSWR7SAWOWAQUQRKS27B6HF6PXEJA4Q6YTTHZBRTLJYHANHVSG6AAIBZLD52ZE4XAAPTLMPK6ZGPIOBBWGAUYCKH5CLW4SWWESP6MPC3B5TAO4JOA44J4DBYYKN5AZD2OWZ4IMJV5MTSM4X2OBD5O7CZCSQTPS57QWJCDQ2H2OA5B6FY2RO3PX7QHVBGSCSDSNURUHTHL3NBIGXSOCGYA6PTEFQZGFSNB5YGPVR7P777LW46P767DTW6773WVV775374ROGVCI=END 
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=size_lower_bound000066400000000000000000000107121321557765100333650ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "always" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 387 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1285.08 bytes, standard deviation: 1151.25 bytes # # Additional interesting statistics: # # * Ranging from 2 [17 times] to 5662 [once] bytes. # * Median size: 878 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 5322 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. 
Data 3448: STARTPCOE3GBZSJSDSDSEV6JFM4QKYTBK3L6S2ZNJPUDSNEZTM5Y77IBRTVKCUZC7ZYBHWE4BYDX667D267766PZ5PDZ7X37MYHG7L5TDHPV7PLF7HT7N7RP627EP2337OVYO777PF3LHLFNP77U24M6HI7KZ6MWLI5J6GZXXUJ5VKPJYGPH7PKZO53D532ZMRKITW3LJ3J3LT4BXZGAWVTICWAZ3HI5WXIYWM6W3T7UHXGO46O6OR6RS6PK6HJAHW4FK64GJ3GXWC2WM5YO567M5NY74TNOY7USCPPF6TDFO26TVNN7THVRMRJR6U7PGRT5DYT6NDZ63WXFX553VZ2FL5T7D2BXD3CCD3TMJSK5CKMX2IPFX2BHWNWHJV3JZM26BGF3OVLHIUZS35W65YGD3ZSM5ITQSY44TMXLMMJAWPKJK5BBI7DLOKWHWYZVGYXLFSSX6TT3UTUPCINBWOFKUZDH74Q3FVRKR7ZGKLCW7KM4Z4U7ABCRKO3DQ2RKHMXBZWMUXPOSMH27S7AFAYRQINRJOSDE24OMWRTUFVXPZVY6JX7BWHYXOKQTIOAC45E7IZBIXZ2CE26DD2F4VYDO63RVZN43CMQ43GAVVYBNOIURILTYTVVARBR4D36IC3XUE3FCU5FK6TOW6FLWP32PUSU76XMQJ4XOL3QUBCDSJPTVIMNZAU6G7G6JXFJEWPLGKSGFEPJY5O62ZDAVHMCXLKKJNWDBEFQESEOYTO3UQMIUUOPR2TEBF6UMIH4KSI22N6UNEXBSZCM2CN6A3XHPJ5NWLBXVD7NU634YFCDFSWKB4TIJXXPBORVS6QKWE43LF5Z4IA55ZR6ZQJME3LVIOZ6NJWM2R4PEDKKS2ZAL4R6LWLI76BD76ITAWJX635I4BC645GBPQAO6Y2QP3UYTAQFSOQJ4IKOPT4NCSQBXH276T42RZ4EU4FG7SF7A232PLNQQGGX73BZWYQTPCLHNBPNCHA3CQY6HPNMV3R7AEYBBQQPLEARASSZUCT3HWML63LKPRO3ZKEWD2RKVKKOXB2QV4SAM6ZLLXDIGCMDSQ7F4AOWVGJVQROMVFCITBTLS7BEEMCLWGD4OBRFZLBY57MVYX33EGB6KMWSZXFUBOZZQMINQVVR3C2SUZ5BPDYOUITO4BYSRO3JE5DSHBMN4RXWS74QKLJEXDFS6A6UB34SX2UD5AOI3JVLZGTTRN7ZBKECD7NYPGFIXK2WX5SMRO2PXJJ3E5FWQF6ASZJOBREXA7RXE7VKGCDFRMHKWN3ZFMTZGR27UA6OMDLG4S5C37HJOCCF2LYIXKIMPGP7NIZ2RB6F7QUKIGBSYIHIS4B3OUNAIVEEN7KLKCTSSZIUGC6CSVOPAYMGAH6V2HTMUMTYUPVSIBIEIQC2F6PJKVADDEYQKJE5B5D7ZTMMX55U3F4KCH3GSTEWZ2W2BSE7FHHCFAWJSWPSNOVMBCFO6Z6HZSWWZHQKYIOITITCKRFBHA34H3RJYW2PKIAAK2RDRGWSGNUYN56QTO3JSJJKONFFENQUXAC7IXFFHEQ5CWUPEUOJ5AXIMEI6ARRMS3QA3QJECQLCCOUKWDMAG44UMRSA2DZAK5UYMRP4GGQX4O66AWE6NKFGYP7QFG2EGGA2F4ACN23JAJIVID266KYDBBWFB3NBCNVDV6RQ66KYFYTA3OMR5EIAPZGFIL7KMQLJCL2M54I6L2M5SGDBYE4RBXA5HVR5JRDZ65NUHRCVDJWTYYTTUQVPF7OZIP4XH5REEIIHDGXLS3QGHLRID7IP6PY7C2U5NC5O3ZVGEXMMJILHA5XLXQQQCYNN2PDUNCFKM2INEY34IIM4DH2ZR4Q2TSBLFUD7QTDEKX2CSZWOWN5HYXEAWW466TFJUEK6IQDEBNMB5LCAL2UY7SYQH6QGBPAVQ42KPUYKCTPHLJ4DV3E5AAI3NCL4OB2BTNPCSRU34TCCU3AN3VKLIIPYKX63TX6FIZ
PET2NMPEMNJKA4KJJC3EDGKVLJW4DWLUHK2TVRBSXUBGU4CUDJ5GGYIOPBBC5UOYB6FRPDO3EMFFBA2GIJ4WN433KRGB6L32BBR54JBMM2P5VXBIAMCF736ICIMBF5BECIJMLJ3MBTKXRJZYZH4MSPRKHBUMM5CN2DKNELU4QRW6NDJIZZIXXSX6UQDQNFNSK6UVKMDDCQDOQPFFS3EDWYQUQ4U44U5TVRR4B6GMERTRQIMGWOVE5ABPMGUZFXOSU3U6GCKUHQ4QJOGRSHWSLC46LE4CFVUHBSETKSDH2LVL36W25VD3JYG7PI5OX4FKLPVGPWV6RQCWG3DZHQ7QHQSTQYPWUDPRDWMD27MK2FGDVJSUS2YFRIBOGUHTNUJZBCYKGAU6TBNMYFU4DJUPRWFFQFKW3WNO3KCJGK7FKVPGQEIIPD5IPNGPMN5LFSQV2ZAMUM4YTQCKZG7XCMIHWJYMGJW73DKURZTDCJRL5CQQEPGVWRVGAM2IEPKNFB3CGNQ7JHCIN2YZ3O3EBQEZ4FPO62TJ7M33PMRSYRRIR4TUTEHYP2OX5PK6GMCT4KTVO2YK22DIGPTJJIKPI533CI5SUOQS2N4BWGHRIYUONE27NSNS5XUFZTPHEXJ4K5AOSHNUM3VUG7DUJZBVS6XQZEZFNSLFNLU2JLU34WOMXEM47KVIVNVBLF7VBD77MH3YE7Y32XLZLXTXKPKERMKK4TE3YBMFKURV6WWSAO72BGLKMQPVPQ4SIJDHUZ7LNF4CURSOACLGS2VGT65A5KIPHOCUKPBTOYLTKXSW5D4LJ5UMI7ZSP6WLYU4Z26AAO3LELZHEWBFQCUEC5MTLJJPBAGCDRFRJMEGDQD55VPEBBQTNLKR23KCTH4YSTAVPA2XXWTRZGW3VEDU6V3UPMDHABN5PXIUY2B6BSYFKPPHASH224Y3QOFOBTKNWH2PBZLAJYVV3RL2ZVZO7QMDOBCEUOY37SQ37HR7JPYCTVMJFAPFYNNY3VV4T4CQ2YDG2QH2ABFGUYFKG6XTYRG6CCLMLGLQYDCM5XBKIOBV6EXCMLFGTC5GLFNUFNELGX2NRAQU2KDSBVQFOCDN24YIVKFIEWY7P3BE4X4GBJSAGVBG2KGZB5WCW4N6H5ZGNWL6RC47ZPNQC6FSI5YH4VB75XHK6BSTJBKVMYASVOYZW7RF3H2G4SIF2P7PWIMICEATY3R3MTKUF5P32RIWY3QNSXIM3YR4N7HXXWK3UTSQZKWSWJ6R4TDWPGGAZTG4KAINKH7OLBZHGGNURRXOC2LGEUIZTQNOUYMWGTR55YMUCKU3XH4VJWBICTAQBGCGDGFLY3V7VEHMGVJMVKATFCLGRFZCJYSYU6O4PO3WVH4LROYM7ZO3ZV6XIBNMMF4GH4WB2HP3WZWROWXQCZYRETLRBIG67PP56X57PZ6XZ6PXR675IMP76777BKXR6M===END hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=usually000066400000000000000000000033141321557765100315120ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. 
# # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "always" and the interestingness condition "usually". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 385 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 80.91 bytes, standard deviation: 339.19 bytes # # Additional interesting statistics: # # * Ranging from 2 [904 times] to 3362 [once] bytes. # * Median size: 2 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 1989 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 648: STARTPCOL2V2BOLBSADH4JLDOOHAQSIIPIK453SNEHT56WXJ36N6UYSHLCMHAU2O5TGJEATEWE5ZFSLHWC7H7XCHMHS7J2WPE6EISU3LTEKHMPTU4TV7CW4ABAWNPWDU424IYQAGE2NLOS7MDZEQFCZK65RGSCIQ235AQHV6GFLHN44P55JPX6C6HUTAGELMQMZEVMBCW63657WW3GB5MUEBTZV7WEA47KG3N3XL7SBORYUW665YUDHKIXSDAEYISJW2MZ2UKTK3H37UHRADDEH26CLIIG3COFDYFV5VDB644G7MOBHYBEFFA6467CSGQLVB4BFTDNJDFJY2VLOOT6TFSN2DT7Q6WU35CVCQFAVIRPBUSJYOZTC4HOE7V4KQ5XBA3PUSUNELEWUKFATOFPGX2OA2B2UKFQ5ABULPHEYJ6C4CO4ME5GL6SP6CKLMLEWD5XNYAEPWIBED4CSH3DEUPEZ3ECERL3PTYS6PKVD72H4BTZU5MGOAZFE5AOQBZDONBVPPNGVTCWOOHU256YBRVVZNGI4HN2GKETHRFTSJ57FQXKFFECRN2PW7HC34DGY3ZDQGZVCYKTCCKYXPZXCBNFJBGQY6GCJPZUETPXCYN2NEXOPUZQL2XW7NZ7EPUPT2Y3OXJR3UA=END hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=usually-interesting=size_lower_bound000066400000000000000000000110241321557765100335600ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. 
# # This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity # condition "usually" and the interestingness condition "size_lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 388 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 1393.57 bytes, standard deviation: 1259.92 bytes # # Additional interesting statistics: # # * Ranging from 2 [18 times] to 6429 [once] bytes. # * Median size: 1016 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 5411 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3520: STARTPCOELGF3RYSTODCEP5S3B4IEEKU2O76FODTA2HDPM3B763WWUF2A2LHO6Z2KWJJ6RLCWF77T4PLV67777T26HN5P37L3F35P2HL7PFZV3PPV7HX7RZD746L4PR74KODZY56LL35P4W47JLFN6XDLE7D66JA2Y7F5Z66BOLPUJ3JZ5OUKSMHY557PV66WOXQ247UO43W2PVM252RLG6NGG2URRYPZNPZ5W3LG73OSOP6ZROOJ5G44Q6WL34YWJBQMRYRWXW6M5PKXJESOL75DCW32GNZKTW4QFFOT4WMUFW45NZWYXIQ3H6WOK65HPLMOEAVHTYWRBMWZ6D7HGK2HM6K2XC2XSN6ZTHX4VBW2M5HJNZINT47BLE46GGZD243ZTCHHMX62DI36LW2SCB5272HH77NFOBUY4M44SUE263QDTRU2BSOLE2ATFDCWT4YS23PGZGGSUZL3XAVQSUO5X6JYUYZODVRQL343BLN5DRXU5E6BV3OTS3P2HPBGM55MHTZHF2PIEFPTFZLSJMNQ2IT2P544OWJWQUR7TUQ3I4V3UVQGAAWFWCSQEJGZ4HDZ4JO675AVAQXHJIF5DTDNQXKQEFUEXWFRIOYJIFB3PKEEOPU3CIM4UFZXLV54JLRX4TTWQB4PMSNFGBKHIBNZL76QKNGFKVXWCJYNVEKBXXF3OWZ6G5FSMEDS2XWG5YFGMI5PEGNHABACEWT3AO7KGEYK2PCGD2PGG36K6PPFWQIT4P5TRTWF4F2ZFOZGQRZ2WKT7EPX2GN3GD3BMNCVVVBFXVHJUFB6ZUBLMWPFNO5J4FN5KNE363XOPINCABDTJ3CXWPNWYMILTNNLEVB2LP6YFOLZIETUPMP3WBX6UKB7SCEDXFSOTKNYVJBZFVFOU7CQK74SYNFL5NN2TJJRCH4VVKVTATAXVOAGQ5B6IYEZNKWXVGKREG533XBMMVTMSOV4YM5R6ILP3NMRCF2IHAZPXXBXYMYQLKFS5YCZVKNATM6GH3KAYEBB2QIQYNS7CSIKTFKF37A2ZCIKYFYA4EHFCQUKWWNIARJK3PZJ4MXRYMHKBKJRN4QUOUZO6DK32DG6AYOFIQEBAXWNLZUQCZOOFVVCK2ARPI22VAUQDXL2FNVFVMG4PC2G73UHXNC3VWWGWUQ7JKN6KOIWWJXKDQAV4AYKDFRPZQRXQA5LP6EVEGKILA4JFJ4TO2D3VIMBJ
VSAEJBHIIMNIGE5IG3EBEYHZA6N3AK7Q5J3MWY47RIDTXA466FQVQFYQC3UJTOU353VBUBQ7ZKR46KRBXHQUV5QQAF4ROCJODHLQLQUYXOQCYE3A7OMDJO3ZIMPKPQWBWITPGERUYVP2OYDCJUQIRJJ4YTJDUX32GYKWLXXLMMIAFGM4PP4HW32WMD2MNBAYUI4PQU5QJGWAURLI5M24UAV3GAEHTGXFQQTVZTWXKU4MPALD3R6ELU4RWQHQONSUIIDAHHSVIK6HVKDVXN5VKWKVCWZSY2ARWSHIZ2HCGQEALG6OVNRUG2Q4KVERQLMRNLTNFFUSWIGA6AKPOGNDQKSVQRPHDAQAGA3M62GBDDPGRZ3APR6YAWH3DLNCL2GTIIVF22DSHJSNBKYXIJ3SNW35SVMDUVJR6D2JKJOETG3RASGDRD6ODV7J3UZHJQ2ONP4SXRDE4JVOSSANE6MEBLSNZJWL4JSRZ4VHH4TLDIOIRWDS5AE4NWJPVFV2YT7GKVHRK5PLETKNMC3FLIV5AHT3QFA52KLAYJ4RPQT367NVLLR3MY2FXRNOTCND4YYVEYPBDNXOV7AKWVT5KQYT7AQBOAICUIVQUQBM2EM2UXYXVKJDJLQJA77Q2Z2DQXNHDAFVUR3QSVYQTPL5EWJZGXUUVKKQQIER6SJCHVC4QBAJ2RZJ5EJ7KWXVECNZAQD3PS5HEBJELJE6I6KFFKS53C4QSSFYEEA3LCD3YRUJHTUDD45ZD2YWNM4VVJVNAUV7ULWWBPGSHGIWQ3MXVBBHZ6TVDHEY3PLUMNRHDWUN5WQD2IJZMZE3XA7ADKK6VCOBIETZRSR2EAYCRMX4YT3CYIT23YQVITAIZDZHKJ4BJQXJEDXAYVR5UGKWPSKQZN6AFGMCVBGENXXHWMQGVAACNRCMPZICFFDO2F7VVAPY6QJNLTA5DC7G5TCXFMOBA3UYAJ4YRWSBDA4IEOXVKORXDUXLSEHKFGFJLTIXC3WWQKPHWRLBJIIUR3E47POKT4N6QDSED45W3NPPOJGURUTDBLTCVD36SH45OYVQB2FUJH62AWD3BUODACCVLTJEIKNB5I3ELBPKE5XO4VVNZJXQJEVCCRHPLBF5KBBGUV736HPFLR4YVDK2WMMDVWLWEVF426WAOJ6AFWWE2ECD3HSKD7D74VTYP5GZ7RIJTQNXWBA3MO4YTQ3ILUAPWJSJUICBCRVA2XCS33CXOBNG3RDWY7TIH2CCLGR53IVJZQ7GBLPGZL745FRAZCOQ7A7KJFBJVEAUSCVDVCXS2KQBWFB42ICGMFEFVHCOU277KVNSJ5A5RSDUYZDRUZBBMUK2MZLFH6GEAB5UGIZ26R262VXWROIR4OR3EQJUGBBWAICMVLI2AJMS7Y2I7ICUI3IASSPAMZZCU2HL56A26TEAWO67ETZGIS3DE6FASJJVQMIVYVWSZFKWFAML4PGVETBW5BBBINMATVJX6TQIB5U2KAECFXIWERCGKEG3RZ5NEUSBGD4EYB7IAP4UXZJG75KQZZ2VPWATMLWFOQECUURLAF4JQQKISXCDXYWGXXKGV5N47GJFSI3KPWQYBVUZVI6KVNJBV5RT4PAZFU5BO25A2CFKGTOHPY2R2BM6FCC3GIHGOEKWBGK7AKB5SHVKVS7HM5QN5V4EYZE2TBTVTQKLB3B5XO6CG7QSVCXR7RPQ37MO6HANCOOYH6RUYYOKIE6RWZROFD5YVL27FH5JJZAUO4BVPPDM47JBYTDNPW7KOOLNUPN4M7ASN2QYVPB6C4EE2662CWKVSWTSJKGOP5CPZKAU6GUYW4KTMDHCZ5KOUUHCFY2JM4UTZQDW3C4YV2UH4XIZMIJPBAWUPKMRIBJRMDIQQ24YVTRAHWW27MJ3UTHPUI4N4FAPBV5KWD2UOJXWQT5G55WMIXU7DAXEHIC6VKXNG5P2RX3YW5FMXVPUXYT5BKUQDI7W6VJNL5LVRHU3VJHQMYKNNM7TU3TGUOSE
BDDEUHKMPJ2O5ZQ3AICRYQYIEBRAIYVCD3UMPRRHE27JYMB3UZDCKNBUA2N6RXDR53KLWYZ7POWUM36YJK46UA7FLZJ3ZXHIWKDXTJB6OTWJU6UTHTTZKAQBEVSAF6WDKF6Y2MB6YOQUWCXQ3EA2RKVCHJ5C7BWRPTG67GFEKAZNZ4SZMBIUMNOM2J5AZJWK5P3ONUEBAYYOMHBW4LFUQAYBIU5QRCX2CYYF4RZP6LSJHYIASN6AFYATEF4OEUKHW6KHVKSYT724P56XR5PT67767HRTPVP3X77QCEAG5D3A====END hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=always000066400000000000000000000020671321557765100273740ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for text [text()], with the validity # condition "always" and the interestingness condition "always". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 390 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 2.00 bytes, standard deviation: 0.00 bytes # # Additional interesting statistics: # # * Ranging from 2 [1000 times] to 2 [1000 times] bytes. # * Median size: 2 # * 99% of examples had at least 2 bytes # * 99% of examples had at most 2 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARWWY22QBABUO26OLQ====END hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=lower_bound000066400000000000000000000074011321557765100304100ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for text [text()], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 392 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 211.02 bytes, standard deviation: 172.69 bytes # # Additional interesting statistics: # # * Ranging from 2 [3 times] to 1741 [once] bytes. # * Median size: 164 # * 99% of examples had at least 11 bytes # * 99% of examples had at most 815 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 2848: STARTPCOFLGB3OITDODEEV6RFFLEAAT4PIVK4Z26IDY6N5TZN2PP5AEMK2A2N7XB6CA3IGQNKB7XZ7T45PXZ7PZ7P55XRPNP6323D6TY7KYPLPZ67J654Z3TU3Q35H4RW4NNKZ7TXTMZ5GSWAT6XIPV7X3HB6GV7WT3XT6B6Z4X25R723YXT7P2YE67Y6VXVTBTNX44Y7P47RBZG5OY7IST4YYW37U3UZWD5NWSRV7T6IZV2X327BHWT7PINT5WMTVGBST2KXW2Y33FU7VWCOZX26PI3JY5XFWB3LW4Q2WZFRDRNVM63LQL3WOOOHNGP5SZE5L3S43OFTHQOTFUIKV3XM6B6XD3PF7L636PQ4UKQQAFI3TKA7DLXKN2LS2OCXC53CQOZRV44CPRUS7QHWLXXMQYMXNGJZ25HQ5FS3FDTMFOITWI2OTQLBAAIMOZZRJC3ELHLRZLMCBPACEYQ4MKS42RWLYUJSZRIDZRLDIRJGQ5LDRLVQBYOTUNDVISUFPEWKT4PSNODCC3HIS2Z5Z6MOBZ4JT4GYLZGITTQJKXXVKWUA63PBJVCOT4OXA7HMWERZXM5OYEZ2XD7IZBY33JMVXGJIRZW54OE3EMKXEBNH5NEIOC6CPLAJC5UTENJ7A4V2SKQOXBDTWVZEKMCM3BMZPVOHANHDMJFJQBEUWJ7B2A5NC4ESQDZK6X365WCDVMWAIZNLNNGPHKKWZT6U4NEUKI4NUC73QBNZEN5SZ7PIP2JQX3FAD5RIPX4ZJEYWJHRWWSBXKTI3IJPXQI7EOFM2CGQIIK4AKXFHMA2WLTGCSBQIBNXRETIZRMCF2ECGTJPSIVZZ5EMZDEKLYJWHWD3KMFYWMRISBFAXKN7PM37V2BASMZZO5SJ2KG57OFYFMCSHNAOKTUPELFGBOPBE2APASCCES7JBQFUNZTIX4RE45EKTEXGXJGZELVGDJHZTPEHT6VRGL7XCB545CLGJSXCAIFL7ZW4WLXJEFZ5ORVZ5HEB4JJZ456223RGYIURVAARP6EDLKJHRCUQAKMVIFGDI6JBKQELTW5QN5DWHQ7HLGWUQV4JUCU7JXFAVY2RMLABK3PH4WIKP6DQDXDHOONK4EIKMEEJ4UEF
75FYERIWC4TA2VDAQ3SCIPJAW7SLZFEODRSF3EYMFUWXLSUTJVU3XDKAWWCBFYRTPMSK6AMYRBIFKBBRDQNUFQDEGPIGQVRERCTIV2ZFDG26NMTVSUYSS4NURP2OQOYZB7IOCSYV6OYTM5QCRCFECQY5EERDC6KSCSTQOAB4ESLAWIWBPRVAD7RTLE4FTH2XW4Y6OJCSXBLIWDIIAHQS5V6BBZA6OE4WTVSWUP2HNHU4RMNMP5YAE4ILYXUIJLK72Q3P32EVVCKJ2UYLCJQIKBMTKFAWJEMR75BBEO5BIRDIYFCCRCQGGQNOGV4UYY3FBFLZ7KBKZAKOZG2OHJSQXWS53EHMADWHKOLELSXIVTTVCTSICGEC5O5MCKJHG3T6U3JZ7GU3HFDZG2JOL5FSZJOOBEUWRYVNVUNDUDX4EI7UKAUZYK6YORPT5SSBL53NNL6LLSSLBIMO5DOSONOTCAMBUVLMUCON5FI5UVQXLUCKIS2GINQ2BMER266RBKWRFXIVONJAOOAAGOMIW36DGL54HRR3KIQWRTNFNTANKT2SDC5NVUMUPE5DA3Z6QLTZKODIQ3YLLZ7YKDTBNRANZ6KBITZE6Z4EMRB5PK5TM2PKTQLNPGYWRI2CXIOFWFVZZSFXJEVUL5ZWKOWBDPK7ALHVYRUSIR3AKDZ7UFCBKGCJEYJFID3HBA3JFAK6QXXXIXNN6SNEVMRCFCSY7NC66APDJ63FFJKARIUNQVXNCNKCH2JZ6WRIZAHPY5FIWKRSU2RVKMZ3IWMLLNWR3TKHJCM4IB6IYVGUHWIVF5N5OC7SKBP65ODF5MJ6YF5QCPY24C5EAMVA2IIVPWFSN5VGZ3FZ5CQ3LES5IKLVQEC72K7M2UXLF27KBSNBIQF4M7XT46XFELKMTCAFANCBJZDJFZGLX36CSLXFTFL4RCDJE2IVB2LB2J6BAMNGF2BTCTBBTLIJKWUETLDJZEFRBYKNDOFHGL5RLUZQAWGYBKWBG3WIREUHLOUGKLVXILCILWVR5FRU2BQDSOGNBJBAQHV2FLRZPHKDMYBYLOUWUEZ732VBNWKCP4ZXVBLI2M4JRODYQYSOSXC5QY66URHVKBWFHRLW4FDFVODZJ2DQUHGG6NLKUYPEYJFIOJNM2ECENU2J6HVMABX2ZQVFXIK5XVKJLJX5DVIIFCT5IXABUK53E2WIZSTFAZWFWYQD2VW5KZ6JWTDI3HNRKGY2IDK5WYQTYUE4K3JIAMTX5RJDKYWPG5HGOXKG7XSESCKG4YGRWV5GMEQUGRBAE2QERQYM6KHGCJSNLVLQNUL77BCHX4DJHI2VVY3Q4JXOPLANKIEIGQYXX3PLVAYEJTQJAS2AQ4HLWKYNED5OEISD5TINFURC6XG5KD7OZSFLEGOSQBR3KGOKU74LHP3C6G6ROKSCR3X3STN3RLHT2AGLFGM3BFBST23OLZ2POBP6TZRMSBRPBDIMVFSEX47F775UCB4OH75NKAUHBSLGKBFCMMBR7RSAFHAJJLATTVN2TYKJK7GXZJFJZRWAO3OTX664NGVOTBBFIQLG5DF6YBTFARTOMEGLN47BPSLKGMLK2BJAHZDSGTC2SJXYULZFY7IIQDEMPD2N2TBEK7PPSYMFDYJU26VP7YZM32GQJJMETLGQDFS7W7T5JIU3SLH2ODL3LPFFNKPG6XDX76COD7PR6HZ7756HR47TW7OWA7T5R7AJEMHWA====END hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=never000066400000000000000000000105601321557765100272100ustar00rootroot00000000000000# This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). 
# # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. # # This benchmark is for text [text()], with the validity # condition "always" and the interestingness condition "never". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 389 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 2344.62 bytes, standard deviation: 386.42 bytes # # Additional interesting statistics: # # * Ranging from 1417 [once] to 5701 [once] bytes. # * Median size: 2294 # * 99% of examples had at least 1694 bytes # * 99% of examples had at most 3421 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3472: STARTPCOE3GB3CISKWEKFW4ZDD5RTJD7LZLJI4RUQZWPDJGQ32C6OUHUXCKR2VIU4RT6NTM37T36P37775T5P347777XRR4MCT7X2SFJTXTYYNM7XW27H5HXZZ6HXHTV6ZZ2Z4Z7I5SKX5GWMS6M4G5V77DRZZ2NNJT57TFT2S5J7244SLVU4TSJM6PTLTY3YHHPTPGJ5OWJ7ZN4RHPPMM4VTYY3CAPHXDXWUNDT5W5CWM3LOPYB5S3LYVTPZ5TLHEPJLOPAZ4Y2JLH45ROIKO25P5PA7DXF4P4JFRZDTDMFBTQOXMHGOX4MWX4ZAK3D563PH3EQ5K7WWT5JWF5N5U24HGL6YKCYCDDUP2HMI3M7SHRVQUUMNZK3ZZUX4W7ZNK2U4JAGI7ZJZFXRFB6EWVS23U74X2LZHSFXD4NFGE3X4APFU6DDDJZZYU535MHN5TM4UIVHAOO5XS6FYYLZRBPH7HJU5WNZE6WE3C5H3BEMOZV6Z2EZLJCUOHHBYRTKTYO5XOS7B2BNP6WS7GLZZ5LCIOSHO2PL4PKTCASRAY4HCXEJWYVDSZW3WBMSWJ3A56QMWHNPRK4E3J6TIUMK446WJRYFK7TQTKJ3OO7ESAGBLPC2KDAHPYYZ552PFIRTO6Q2MRB7Q73B445XKPSPMRHB55DII27Z63QGCALDBFHWW2Y4WW76VVJIW2Z474U7NQU6BJSZMVAXVKTPXIPOZFFSK2E5R5YNWZK2U7IQNV5LAEZTP5LVMKY5DLWEDZCB32OAYFMUXCLWW62B7A4E7EABG2EJ5FNXBEKZLZ25MNLZWRAVDCH6LLLULZCKP6Q3LR4FF4KL5D2CYZCN2VZNHHJ3KL3D5JL2ESTC6ERF4L7UQTW5JRCMHW7FURZVCXTOO2JZLRFSDPTWEGNI66K4DRIWG62AVFASSNGJMGSYJDWLNKMNTXYBJTEK3PE5IGZ7DO4JI33HLTJK2UFEYJ43ML62BPOKC4SADC6BMO6INZSC5QB6DQKQGWVLNG7F6RT2CPTXBMFMJ3PAUOOLYT5XAHUMCWIVNZT26VOXBLVEWJGI6GEXEXH7HUK5ODEEN7GBZZZVZOBNOBY4ENVBPNFQN3L37I3E5QKOCB7VJ
SII5NQ3VW4JO2VUNVQ22FUKYB5V4LLBC2CPJLH5VBFPRM6JREPDKOGWVQAJRF7GT7FFPX5M2XR2PQFQ7JO4ZTETPVELXMK6VXLDGNOBCYQZXCC5YG5APFKD4BNZFVMRW5VJ6LWKGMY36NV7LUFMY53OWFQ2QEGTHCWD5IGTLMJ4FN6VAVAWNMR23Z5BHNPS427MGPEMFOIMVFTXHIR7FHRNXI57RQY54RJFVIXZJKYRJQIPPRE33TWVBYDB6LJJ7OIL7EJZL5GGE3HGAUIZAHO4FKWM4ESS7V7JSLOT7MVSTEO4D3I43TDRKGMPS7CUMK5QW76F4RC5IJ36B55RO6I3C6V55NNUNLZR66ZX7IRGPZVXOT7ZOHK7NKHBZP4RQSNTANB342XT4G4255ZH4N6OKXYLOJJISMH3NSWC236XYU2VKXQJ62KNSQ5K5BTNDJLR6ZOJBPF6JYVZHVBBTQENM42J26EWNF2AJ2C6GI37VI7VP3PO6UQ4UHU6YEPKVPJKZLATOYSDKEXU27GQVIXJXOO6F4WPZCZ3G22WUHTJ3HJCW2XEKTQNMQXOQUFPH6SKESBGMJBEZ7DHM67PUYZNVLG24XAC3OLNIS4CUXONRLCXPU6XRL4AQJNF6BB34ORRUS5XHLPUXQZB4KUZFI3XC6QRCGNJ3GLYQZ5C3I6QVT2NBIZCZDIVVPKLP7WYHYO5SFKDCNIQZHTOOK5LVXCU7WW7PK2DMLVZ7BM27VDCFAGWEMEJG3OZNBS6FGCT2E52UVVIHS2UD2MT3RLVHT5RM5VQKAJDZDZH22HLVIU4EDKC3TK5J2PNRKBKRSYBMMWKRWSETMOUZJJA2I7S5EZZP5PLMWSAZX7EDEWOIOCU52B6FWIZW5VPQAS6Y2LAEXQ2OYPXVNNWUHK5MVPTY3JZI4LN5W3KR7Z2OC6TTUEDNNWWCLSIJ77IJVNDJOOKVCHVF5XUTBP5NCBQXLEDG7ZWUDGTYRBE2LZ6XLFPQVAPU5W2KSX65L5XDVGZFBW6XUMYWUMPDVKBKWRNJITYNTDNN7J72T43BE4ZQWC6BN6PVVNRDVN2YWWXOUFSSHVS6347ZRX7GSBA6Z3GEI7GBI6MGTDL2DRWANN72P5E5FDQ3JCOC5TSL3WR5FN2QIP3AZFK7W2UVUFTINPJTPCTEXXRMS47J453VHAC7B3JZNNZY5SUBFQNU6T3B75NJQT2ZH7L6HBZ3I7OHHVLGBUF5TZ6222ORDPW2V7RXLMZDYN5WLWDH4G66UV7STOFJP3BXPFB4LYX6TS54IESIOCXG6TWQPRC32JD3OTFG6QWP6AJFH7CGMB6BBO5TZGESUECOCTZXN6IEQ6RQC5RSVX5P23Y56FK3TSL2X7JQI2WQZ2HF76J34FXN7UJK5TOINX5GQ4UDTBBOCZ33EM73MRZ2TMQCF6OO434EZOVST5JHKNAJCLIT3CICCZPMEIZVP7JCYLCNHOYI6WSDYLGCON6GW5R23XFR5IOKYXV6ILHJZWBXVD5TLJ5XKSR7FWXDJTSX76VTJUQXPOVHVNLINFX7VWDNDQP23UFOIYBJ75X2VNR4KZFH55JRCVVLCO2FHBUK5DHCT4IQPATQ4T5732UUXCNHNJOOJKFPQL4SF6VLCQ6SUMJ6OHPBM7VHADM54CNXJ3MYSSEKAT7KKU7MTBMALIW5NYBF6ONTTS5AXO2M46G6ZFF5XOIADCCAE2DEH3OHFFK7ZQAMQNNLGOV4B4E662XMSLPB55B752USUOYTM5X473RUBLZOTUPKRLGP3TDGG3PIZL22WA75V27PNWN7H35NJUTSH25RPZVOJIYJN7G4FQFOVRLJT2ESKW2PY64KVJUYQEBH4LSUZ5E5LFKGF7OOO73NB6WX2VZLXAWATOZ22XQ3GHRP7ONJCPZW74QKE7CDN6WTSD3BCUM5XFFRYZWKNHV6ZNXVPS6DWBHHEZZT2FYIQSL6MH7VPMK6A6DXCGP3XNPZPLAX5Y25
223RO67HLGY4JG5UV33L5N236H3DJQVKF6T2URA5RXXE3FDK7FK5BHLORKFN2OYTE4KH2UMLZKV73K6ZU22VTRJW62UFJQMMLSQ54Q7BPABXHJ7YRRKQLHUXZG57OA7IFAKPR3PSQFW6NHKODVJUKVHDVXE6DXIUPQMF6BJFCLIDTO2EVT2PK6C3NR35I34VFVKSN433WNSGLKPKXVWRBGJKT6M6OQ5JTUVVWQJSAZNGR4FGMO7TXS2R7XPCRPS3U2VXLI2YVXW5GQKUC5C42RUNN5DLKL5DRO6FS5OWG356YU7FC2VGTK7XP77PLY7H55P377XV6PXR6ZNO776R63VWK7FQ====END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=size_lower_bound
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 393
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7542.10 bytes, standard deviation: 8988.48 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [13 times] to 52606 [once] bytes.
# * Median size: 4092
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 36889 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4032: STARTPCOD3GB3SJSLWDKEW7JDC5Q3ETYSHX2WCTH5GGFSY6JUE6YX6KQFUTTV6UWV4EU7IQRMD7747LZ2677773Z6XL5PP5KK535P6LGO7L3T53N56XYR656H76NOK7PV7W3OH7NL5IT7777LFG726DVJP3FM2MVWZBPVRIYO735BGVY732MP63KME7L2IGONXX5KS23DWGNBPWXN4XHFXXUJV5PHSEAWTBZD6V723YHR3H67VONV7P37W3FM6F6NVYG42XT53LLX645CDI3C5WT7HDVWLLL7LAWLAIWTV3VF563WJVCZCWWMV7T4J47OKS6XYYXNP2Q6NT3H5FA37N2Z447W5ODX3ISC2XYSAKT5SRGMRVAZ4YSUHL26W267TTFDEDXMESLNW5JUCPSLYHWTLCSNMK4HLCDUSL3OQKEWYI2AMZB5DW6PKRXJYPO7KZ5Z2Y7XSZZ4OB3IULLDNMOQKJLVOQO2LZU4SCRUMIGDT3ZLMTDGTYZ7SX3JPITS4KJ57LCZIC7555WLOQ6O5VKGIAKATN67NRCVLF3LMCZCWUXCCATT6JBSDSWQ4EE4IGQMHKXN3FTRSGFIJHABML5O7GDQW3XZE32PYZDRJJ3IZNHDJAOCMPA474S6NQSK2FICZ2KVKG2X6Y7PROBZF23TAU2LS2OZP4SRXM3BYZNU36INCKN77VWCB2XTG3ANTQBLVZZH6YTB7JJPNNGUTYZFZMXUUTV3WX2B5QA4B5UZ4LEKK53N5GC7J2T7PRIYWK722QWBTZVGLDSATPNBGCDSXMXWWT2VWRGBRGWFNPGSN4KNR3RCPD6KGX6BQYMYKZGULCRZAP2H5GRDZVWGS7FNJOKOJIPSK64BI25KSJQ7DF43OTF3M775FX2V6A7FLWTR4X6D5PWABTQ3OS2JBLJZ3BEL4OXPHXCFATLBLKCSC5EC6DRGIDGBBHETYRCMTXIHDN2IEQIEI6352KUP5W4D6JFR3ZH5CNJ45QD7SHSJCVIIZIPUY4LMKHCS2XYFALHZDKNQHGRYCQNKO6V2XCEIGKOFBESBUJXS7VJESV6ZNUPFGYDC5OQGCHJCXDQBYA3ZJN2XYNQMLBXQ2KOKRSKJDZ3CMSK6T5KN3VPLAEVADL727DHXOAFELWZVP3XBRB5FC5FBEKRDWTZ53BC6NOUI3ZRXQXGGVQKLAXZUC2ALBQPHSEKPKDMDPKPPW3OQB5OFKIC3KI5PRLM4U5DPPMAUBJ27LYDIJVGMVRO3OUMFY4L3TEHHBHTGIXETWSUPBTP6NQUQMDCM3NYAY5XUPNX2BVZAUHHK6KXOOTYZAPIIAUL6IQOH6KUIE7KYKY5ZNYXB5PYZZPN2N225S6XAM4SYKIIHHPD4GGFOXXXCEPIDBH6Z7IUCZIUSXWD6ZY4QFHYTFY3KD6R2S2NY4J7W2MIU5CIVTRJUUONZNOJIGM5B5DDQOD4KXKLQC636BJIKRAXKCJ5Z6XHZGH3NIIL6Q3ZHI6VXIGMOYUBO4ROG56BG6KRIRZR6KXEEWVJKYKQPSXAYL2CFJG6ESK7YODG7PIKASZ3YBFLCOTJ4GFKNN62D5X23JKYEYWDB3SHKXBCVDBMOWOC4DXKKVGK2PJG4WWTGSQ2ACRJYSQU3F7UMYJWQL4O6KTYGW35QJZ7JK45ORI563MOAHEHPRQMCCTS3N472H2VBUN7PTAYQNSLSAGOSVPARHEFTUECCUVKRWIKCTS62IOU2VEPANAZYRQT3SPYDV3NBVOCMILIXSQX2VS2L5YVR7XKCMTTHWNESOSFA6NPEAWJX4WSHD3KIIQDT532HKECBP64EGN6AOE73436IQ6TCRWWSWK4OROUV3KYHXAQLSWZECPT5ZDWZ5IOPUGIVF5S5RXUKS2Q2CAVI4EKQF3RV4GFRL6R3AZH4QZD7U2GFZD2DZFGQF3TRCKHUBGYNS3FMXACMA354HZ6F7KXPOY5H5VGVKBNML7DZOCD3IJSK
5CAIN6QU43UFPLHOXSBB3RFILIK6KDW3IYMCS7477YQNQKBSRF3CD65WQY3UREYNWXCVDCEGK5RNCBVXGDQZQWJDNCQCFN5EVPIMORLUKZVLM57YNIA2JKO5AQGV33R2AY2KGDI6DG5Z7PKU2EL5EWWU75B5Y2FVOVPRACE5RU6WKQ2XAWUZEUKZAZPGQLY3V6PRY256C6CENGBXYWUW7LSD7BPUFIIQEI7C46SNE5WJUDKNZGJIUQECHZ2ZFRW3LI7SJTHZYOYMAIYXVVWGAK33LOE5QSCP7VEIKHJRCRGRUQVA4PJSYML2EFEE6VYJGQZFU2WUCM4WZSQNSQKTU2KB2R5UFMSHDSR4LKON6YNGEMEAF54A4MKEZCI75JCREZGZM2PEXQRAKC4SSZI74ICOIT5UK4FHAJRKGRQQVOIZYUMLE2UEQQMVJU6HUHWMHSE243JFGW6DJKS7VNRMJ5KARRZPMUO2RFRQMN6WGREMVMD3BRB4UFTYNPJH7WJPVFH2Q2N6DILF3BVZQMFDB2HND2AGUQ33R5TYKCXKZ2BBGE2KQEXUWGSYY2AORBAUGQOXJCIATQRNLDBBZWRQIVDLPJJEJ3BSBRWFC6FXE5SNANWCUWQLKAA5L264HEIUW3BX5RZAVZHWGCI4KFG3YGS2IKWM2YHIZQZZZZ4Q5L4MPMEZORKGNO2OFLGRLCTUNJXQ2ZOLVBU6TWVPNBID2X4KJZBLWBUGGPIQ34IWINCEPAH4M5A2OQOSMOFPVDOOO3S4C3AE4TK3UC6NSJW7ALMM4X2F3ESYFK6SEDJVCF6DG7JIPWUBBZXNID6RIXBL443YMI2J62IJTPW65FVQKXVI76LFJNCJQ5H4T3CHZMZWVK3MNABQRRZ6TPMGB3SBH4YFONFQ4R4MAV6N72HTNCITFDIBSKOTDGXGBY5A6VICOPLORDG4BBONXSG65I3AE7PYOXR2AP3PNUKYYYKFT4ZMTDTTMPDUJXQER6SL5NFLE3J7SQKTSOUGOI7VJKOOCML5DWKYMOEPAXLTXWQJQ36E6FGJTVGGM2DKMMOI75CGMIGX3VDF3ORMXKMHPX7QLBYXOIKCYHNRMJNA6WQARHNAL24KISJMMZWH4CO32BPS4A7ETGBN3LCWX2DXNTOREYBYAN45BSSXGXNVS3JQKVUOF3GAP27PCE4IWI2GBFI5DBWMAV7LZZJUVOMSZXYUNBDSO7V256EMXFS7Z2SDMFZN36RTA47MZCPUVOLTVDT72TAMYNK4LILHUPVKRSE63IKCRAW4GZVJXA4GDW4UQLZZT7XFI44LGUVIKW4OSFWYPAGMNVFVH4ZQTHFCHOJSTCTDRTONCFQ73HPSKYZGDMLBVOOE7TC26QLMDYDRFFRDCVUZUI2NSPIUPMP65Y4MKFKZWNWAYRD4HGVLQHSZ6WLRONA5BFIULTGNC4ERNHW6VZ6OMLXGAUDQTMQ33KK73TMEIM5QSWLVZYV5QGONMZEOLLTWGDNXNUAB44ZZYTOMXKFTNFGQRHSXI2P4SO6TGV6TTJ6STG7AY5U6TVCSA23O7M7NYAJZZ52RGMQP6ZEMEEQM7XYFG4VRG3W62AO5QTC5BDE3HOIO5IK2JDTIX64V3DEIC6WTSELKO5OT3TOJ4KMYBT55YV6SN6G7SE4SWSSBWD4OPCGPO63C5GL3CA7F2PNIFSJLWUKMNPJ34F7SUEOARW5DXH3SL77U7SXCDLRDU52FJCFPYNAX4QG5BQEEL2AZRB5QO3BAANXUG2LRKF2IGFZC6ICJJVHEHZ4256UC4YTEYB5DKOTIGOLS6RAGUKLFENVEAKBZSRZSRQCHEXRMYNTB443T7EDHGLEELNEJNQCDPTTF4IHI7XARRQDVFE7VFPSDQMAORDRQQC7A44HZXFVJNYJIXLLCIQ7LM6GWRAVS6C7ON6WC4VLQ6O7LINCV2YZQ4FOCFLY7WFYYY7UAYWMOPI4PXJV25T5NMEH6LXDVEDWASBID
GXBS7IA2GK37P57X26XT7P3777Z5OX2SJD772H4TNNIMA===END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=usually
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 391
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 8.03 bytes, standard deviation: 23.94 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [898 times] to 290 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 129 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 576: STARTPCOL2V2NN7BSADH5FNI44PPATCH3BPZS5W3BY5WONXJ7564CJANGFARRNNNOSVJK2Q3LM6J45BHT6735J5Z77VV5B63U5DQRTOWGD63GQQPOQWXCA6MPE32Q6VXF5LIRUBEL6RRN646QYJEZAJTBHC63HBTIMNSOPHQWMQQ6AEK6HIOWYLOT22JLF753BMDNWDO2RMZERN44FK2RREWZBVUDUQXV4TQTNOLY7UG4OM6UZIDD2JDGRTTD6QSIS5HMDYGB56HHUFVHQGFIJPFKVV55NQGHVRUE2GIVUCMXIKT73M3ZGMFTJMD46X34VAPLIEA2W6HLK5KHZSH2QGQBBA6CVJAYMHDJFWGE2FPVVEAFLKJG4B2KH7P76SK4AEPBHKOBOVZCR3NVWG75WGXAAEPMEVZGCWYHNFBC6RH5NWZNVZP3JW43ZAYXQEJEJ6YFEYHQ4JKHRR2KIF3BRTAARLL3AYQ3MLFZQ7RCTN5J32CXWGGULZZ7D7TWIRODAXO26ALU26HIQVZ2QS6HBWDCMBIHOUXA2J5TCX3HZXF2P2PKNT7FH4BXV6H5AN7NGC4NEND
hypothesis-python-3.44.1/benchmark-data/text-valid=usually-interesting=size_lower_bound
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 394
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 8751.32 bytes, standard deviation: 10378.56 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [24 times] to 67837 [once] bytes.
# * Median size: 4892
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 48101 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4096: STARTPCOD3GCNSIO3SDMEV6RNAWQLSL4CCMKXOHGM5M7QPJ3HN6HOIZ7CQWIRPKX3XOUKASA4YRFC73Z7H3377X527P3773Y6GH635PWF6PZS3R7P3WFZ56XR7NTC5PTZHL575Z2V34WVL7LW77X2CFI7254PCO7X5HCK5ND7NPWZ25HNUWK2VR7X567VRL3LIVHUTXQWWHO5SLNCR4TJ6HWSW5L7764JM3R5WFH6TN3T7IH26XDXIXOX4MXPHN5GJ5YOLZL3XHW5NMOQ6YR3UGC43KSXPNFTC257NXNTPEPLSKP34VCFDRI33EP32JTWNL7QRYPAXVZ2Z5JGDPFLZ7VSZJ5HXQ724DTWZP5U6ZXGXXOZOFTNOSPD552WKZVHHVUOJ5LZMDQRM6CWW56XXELSSOQK4GM4VUFH2Z5IOLOOFHO7UXS6UWBDGVPUEL7HJAQIVW2PX4C2H2ETO5R4DYWAVKGK6WMRKUFHWCPXS45AP27JP3D5WX7LFV5RLDMFF6XTNFI5PDSUFBBLW53OLFI7PWFOXVTHTBP2WRR6M6J2TQUTHGOUMEI3Y5VVJUNHZK4KLOUGVCBZJDFSEW35VN3TE5BKJ3IISFY6TW56F2VTV6VYJT4WYC2RJQJ4PW4N4CJXUHZOIYJNICN4TGKLHPVOB7TWQ4WIB2ERXJHQBUBRXXHQSUFX2IMFXEM4WWXMVM3U3NPKCVPO2WT26ID7V6XGAKLUPNDSYQGG2JPJHLTGQ2AJ224KAFDK5KPOZJDQTYENOMOSPFWKXLP5MEM2O4FPKQB6HB4NMAJOVHGXX2OSEWJCY6WEFXJJWSYR3SBAMRYLNKEXOOFKGLUMKGPPELQOAWFOPCKCYN2FDZXYBNYBPMOSCKC43OPNIYPO2OHCU2M7BIPE7HFRYRDXCQ2MBTA5HFM4I4MLXKBJEHDRGHHDFKVD5O7YMRG6ZBBHEDK47HJ5TLVKME4EXDKPISHRJ4AFGJUPBGT72N7UQNS5MRTW4ZM5RNE7USSQWKLWTMLNWHHTAZIXO2QUWUEUZLLGLKYILCIHYTAS35Q4757ZYF7YM4RGS6YZTFNMUCSYN465DFINZS3QVVWUJOEQH4D6LOKT45IFX6JEYSV6STBU2XH2VCW3QRU7LQMGYWMVDBOTQNDKBSASLU55CNUXWOPUCBLIXTO7JUYN2S5BGNDKSOXVCJBBDUVPAMPVIKSSAHSJCSXZKFDNAMMLBXHLJX2FOA5F6F7AOB7AI6DPQHYSF4BWLUAS27YI6NZEQ4KJFSEOVAVFBIUHD7GU3ST6MBI2V7QPKNS4QUJYPPBCCE5JTM67F2L4NJF7NEOGXFCLJERLM3SDGHBCMRFLND53STX4NBLIFT2NCTSD4WUV3F2BGJNI3YTFEVH3AA7TKCHRUIGM5REHWFFV4LK6EE76X54VLJZSI7NW7YE3CJZBCKETWSKAAUNQJH62URRB2NMIZIFSLBHLN2DENHIQXO3UNIOENNO62XQE3YQBWC45KLEZEHWRFGQITP6RDLU7NUZ5TVEOQXVBEOZ6IIPZU5Z2PE7K74SY3AZVX5FGN3YGYM5QZUD2X5CCXFLIUFDY3WCB2MQF6NY5KBZIRJP6OXEGMF2RPDMYHDEGBTRKNQWT7KGW5D2KT4A3VHSPIDOFEAI6Q64PIMT4JCJWRBEALIJ2E3LESRKB2LEHBYRGNXBBEIUKMSAFUWFUA6RBGYC26D3OPBZK7YTHF5QG26PRDO2JJWD3JBIYBEGC5BVASDPU7VQTHSTC7AOVA5RZYKATIYLEVPR5IUUGKMY62UJDT7JSJOLOQWLNE7ZMDWHRF62TZXQJQQXKWR4QHGXPDK3EMR57TTOVG5NZTCSFBGQONUVLCFVJYAOEMFZUXOQ2RTKGNR3EEXSTMKOL4E6F264TEBI3FTIHGMQE66F3SS5KT6ZVLU2TZPRLNGW4SLZTHSFCN6UGTVQAX4FMPR7W47QG7SHTPO2HVJAJ
NIWQP3C2WGOQQEVXLRQLBDJHXHJVGC7IJSESMQ6LQDMGFD73WEEFPMKJ3M2PPOYGA422A7DDDQLO3WHQGZD2XESLUNL3IYHCAAJSOXKBCREZOF4RQ32QBAYOE2FSI55Q4GJ34CQ5EUGZPPW7O7JIP42DBSVKS4V2BSH4GRH4UDG4TWWSFJQ53A6W445RZ5DPQCD7S2SMCZLTNZGBWMWNAFBE5F6XIOKI3ORKHCLHDHJNPO4IVZKRBXR3ZF4LV27247WETHUP34U4LWSYYGWBMR2UPXRGFAMBY2BJI572F6R4JDA2MPSE7W7FFFO2XPDWLAFWKHUPJMFUX7CYUE3PUHL62R2BE3KDSVNFXR4BAA7URUFTBT2ZSCLFPXQV4VQXQOIBDKRVLYO6DTM6EPKNDFARLTPZHVGTDTL7RXRYUOK5EYCHBSND4XCKSN3NAGYKPPKU47CSMCBPZBT7LUF4SUV4ME2Z3KQTJQOD53PVSODQAKUMC6D4ZVWMAQTJ4O7XXYGWFWV4RVMHIS4KL5RT4WSSCH6S6LOKKGREXGCALR3V6VRND5RY2YL5CDMSHKHNIINAF6PJFIA4R4NVGW6XI7FLKCH672VGFATYQ2JEKCENSEQOVJDCHEZGRIYKN2W4KOMWIHW5XFQI3CKASJ7MMTVAB5GS4HNZDBO7BIGNTYIXFDHI4PFT4GECQ6LTHJCRSHDVRWW6PPFJAY6ONTNC5TNU4FBSPNX4TWTTZFGGF6HDCMMPLL35STAUJASWUJDQ4REPBCHP4QOGTIADSGHDN63YKS4YX62BVJHSSMZ7O6NWI2HD7ISWAOXPMFJGHGADBVXM5CCIVQNAC2IHQGZCHC2RQPGA6DKH5EXNM2OLU6G62YKWYYF3MEIZ43AKRMLJDXWGONGUGDWI3KYYVQUU4XHIOIOLXFRFHNDODV4LZ7FVZH3JL47Y3GKTV7ZU2EWHB7I67XYYLFO7MRYKAA5DNQRQQON3ODSNCZCJENWDAC2L6FVPZDNIVD73AWOGW4JWYOVKG22FRSRVO3MDY7OTCF7E63M4UZWGLKR45U4TNKOBHFQGDA3MAMDZRLOS544AP5ZSSPMO23VSG6WF6ARLKXS7VPPWXXAXRYBXKQIE5XG5A74AAMAIOK7TC3AZJYYOLHZ4QJZPZQBB2576MYFW4P4AJEZYJOIT7GS5GNN7C2TPSQ2SBQC3UKKP4YQVT4CCQNU23ZYLSLWASMHZEV4C3U2WL3AJPKHUHTEARPGHIIKNHQU62FSAMAUMQZKI5KLOQZHUMWYELSI2G7657V2AZG5GSPB7FZPUQHHUIEZHEW62GOVU2NNWILIEHROFHCYXSEM7KSQGDC5EH2Z6HFWI3UN5VDIRQND6RRBZN32YBOW2ZTOH4LTGEWG7GQAZKEATTRGPRBSZ2VCIIDHJOA6ZXLWJFFXSMSFJ73NJAG22RJF4HYGYGPAEEO4WSZR5ADQCIDZNQBXOLV2N5RGUEY2TD2UMJ7BDU3KRXXS674NGQX3UVSXWDWS2P7PRMKTCHRLOKN6CAMMHUYGG7YKWRJ7GWALUBUIRSEZYRZA5SY4OCNHJBTU4GFZEZAADG62GHMLRR7ZOFXN4LRCPEN5PSABKHLKXHKA4KGIGNLFVEGQLW2OP4J5YSHC47J4NTUJURDIIF4WYSF3IMPD4PKSSJODJGGNCTLEKV4IUMGQZGOJ6CTCEJYJ2PACQM2RETSFROTT2TMZRHDEA7ZUAECDAW4J4K6DZWFB2OMKH5IGUXY34GUFQK2S6OHCJXI7ZA63KEHPW57V7OS2R7DQWJOT3R3XDS2SF4VBVJ742QMPOQIKKKQWD6Y7NKB54IBMD2JDPGMXRA2AOXHQACHRXOT7CFXSBR7WHTNBCMQJ7GOMYCZY5UXQ3S6J3QJYVCNSZCV3QJY4PHS4WVN36D56N4TTU4GCRUBXS6O6XC3MC5DUDSMYXQ7YXD33Z7UN4HCVSWR5VFVICAQZ3BRXX7KUNXRLF
JLBBJUTKHMTOVACRFFTZDQ2Z2CBHPPRO2V5ERT7QGH5BHZLI75DGOMU5WIDS7TQS7AR4Q4F57PWJ7P367T6PXL577347GD55ST4677YA3JSFULI=END
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=always
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 421
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 296.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 296 [1000 times] to 296 [1000 times] bytes.
# * Median size: 296
# * 99% of examples had at least 296 bytes
# * 99% of examples had at most 296 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 104: STARTPCOO3RFBBWADAEAAYBKT5L3LNAEASXMF4CUEBV4VWA5VHYHOYQ6TT3WZI63DR2V6SWICISMSEREZF5DDM6ERZPK73FRK3S73AHUZLJKIEND
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=lower_bound
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
# # This benchmark is for text5 [text(min_size=5)], with the validity # condition "always" and the interestingness condition "lower_bound". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed 398 # # Key statistics for this benchmark: # # * 1000 examples # * Mean size: 5284.78 bytes, standard deviation: 917.40 bytes # # Additional interesting statistics: # # * Ranging from 392 [4 times] to 11932 [once] bytes. # * Median size: 5137 # * 99% of examples had at least 3847 bytes # * 99% of examples had at most 8235 bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. Data 3800: STARTPCOE3GB32IODODEEV6RFFLEAB46EQX6F4XGAUHBLWPFXON7Q6XWPYTVWWZTGOSIQNB2DP6GP65PX77P56PL7PX567XXHXLY737GON7HRZXOW47OPKOPTCV6U44MLXHSEM36TVTXOO7LKQJ7X227XFT36T3T7A7VPGDTOTVN432GBZ7L55K72Z3FTHZZUZKZHGOOTK67XXT6WNXD647YGWLKKET6XSTXWR2WJNI76CV3IQ2TXHQRXU7LDWU67W4455D27WXE6HP5L7UIRF2PCYP755PJL72TT377K5GB5K37RVN37TXCONO57N7RDS57WNKFKPX5R3RCLKOZHNEO5LWXE2B5J3RNP25D27KJ46POVQJ5X6VMUNLITPY5PFW7ZWQOWITZRV5NG2H7LDBXK73DT63UU4GBLCMWRTS7IDDLCKESONLEXTSMS7MS4HHINM7S5ZTXX7Z27DZQY7XHDK2VBXJH5M3B4XCG554IESETXT2EM2UJ6VO5JNFTX4NZUORUDWHL6IGWF653ACKSGUDH5JJUL23MMH2RML2SN3Q5BYETP2HD7ESLI23PXHZJGQWOZRGA3TAIA4KOJQZDMZW6PI477DYPVNOOD4SRFXBRVWBRI7BXQAPNHNCBEBEWOIFRKYAKV3DVH63O6W7BUCB6Q5MMJJPCKCXPU2JNCTDAHSJVLRUXNU2XQPLZP5WNUK6LSWRQILTWM5PV3EN2DLRFGWDPPXVU3CSVFXUGXUKGA37DKTJ7JGSAKSFSAHOHFELNM4QE5MUJ3TWURRKYAWTMTMRWOP2HVQNJQY4QYUYTF5ADR6CSKWME3GBZKWCP2ZVMKCBSKIDLRSXXBAAWT762W7NJKHTXQOHVLEGQPWWLQCZHS2D7XED22WPPS7QQAIXO2Y5S4KXXFGF6INKCU324MVO3OQU64A3U4TJAFPNGU72VV7O75KWFURMM5K57W5WZ2RUITJSHEKTMNXSAQ2IJZXOG2U72AAJEHMPVJXDC2CCQDALVZRQZDLAAXFBIIK4RSC3VDXXQE6C4OZ3B6Z3VIT3XXODQY6OHDGTWTOBSRQYRYWS5RAQT7XF263LHR4NT5AKN5K5ZCEV4XQEP2P64KU5APWVIX6S7NDFATTTIUGCBBYPT5FAZ2GXUKECZKZ6GZYCNJVT5SKDQOJ2DAQ3ALN5PTIH25SHME2G7VXJUEHHKSNOPNDJZIYT3ESD6IHSO77GDEUC4AWWQCWF2EJEQTL5KODMQXXUHX47A7EKDQO3VPLSSJFRGW34SXCIRNDNKO2QJVVBMPJLHKTMWVMDQGHQSFNTUJ
5AXU5OUUM7IPFMG4LAP5NKSPGKMS7WP3B2VN7GIJZHFU32W3PP4WDNJZCCUI4OJHOGJP6ORDIISJDAPGEWWYQ4FJBBUCPQ3HSODT42MJGBZN3HCQTMUHWXPRAGNUTNHTFBHOIDR4PBR6BSOPAHK6BRYWWUAJHRNFMRX45DYYC36ZHVMFEQ5LS2RGW6A2Q4QWAPTZDHOV2K4YJZIGWFYRMWDLSGG6QFOOIJWNLSYDV3AJQVFHENGTB42EGXIA5J4DRKGOHA4ERKCB4NBKRB6RR4F4EHIRM2ABJQ7LXPU332WDF5R3G7EVRM4ENWPEJV7L4FYPEI7OFJSPNZ2GSDHQCFJGRQG5QJWJFWRE2NZBYILOI6IK7HKJ3PAZ3RPD2HQ7AYF6EELTKYGHAVHXJZ2ZB4MOBQNOMQSZEVX2P4Y7DPKTXLQVPK2QHCVGUHBLES5QSZ275A7OPN4O7AQ7OBPF2I45JLLMIUA45FBL675QGMBQZEFFZKZ4GS35S2RD6TRK2ZOVQS7QINKBPLKHXOFTQYNNR2W3ALMW5N6G45SJ5WCRMWHHFUYVTDYBOSYCC2YDP3VKJLZLGBY34SY7D4DWEL46YX7UHXMNWJGNSUSWL26G7WBCJRJ5LC37AXXILSQ2HF33BZ6JL2G7SCDYGKNTHC6D7NONUEMLLK5TX7ILB5IJDPOEOX55G7P36Z7YWDJGSWK4LG3PJVHY5F6HB3GKZPOQWIIEPOMMM6VOH4BH7UCLGJULKRKEJY4KJL53RANB7AIFLB2THI5F6KQSFXSQDPEXTFAUELLIHQHPRPRNFMR7BEBFRVZCLIPTFVCZDBD4E5BI2QNYJW6QTEFLUWAO3FPFUFCVI5UKCYBCZ3RZVZCP4S3YW6U334RFU6KRFVMGFLCH7GBLIR5ZEJUHYKFZMMC4JWM4LHXDSUOYIVDNFJFVF3UNEYIBS3EAUAZON7TE5SI5ZBOMDJRDRNJ6AU2ESLQUBCWBESVO6KVJRHHLAEKBASZKEU4FYGIBRCDZVCTAAZZ4RJAFEGPDUO6WQPZTEY2UR4E45HGFTHJSB34JMCVB3BZKIUWS4AGH3IMHKLGJLUXR42CT4VM3RKUSAJ7USANSWO5VCQTG75VG4AIVELFUH5JHOPF2LVWMSYHYKQGJ4WEZKMMATSBDTRVT3MRMDT4RKGJLLZFEGU3GEAI6IPA3TIEEOVS5NCHEJIROMBYP4V5ISR53IPW7HKKF7T4BRIWRKUVQAYVIEHOYXFLGZCUJNYTULCAMDTDZDAHBJA7VHLKUBDDOQO3LD7N23THRZZA2UM2QLZAHFDETIYOGMYL7BQODOKYOET2TNLCERGTOFYXDU65WOLGRP56RFSPOIC72MLF72SOK3U4BHA4HA4D4BV6VASFVNBQUCMVMDNKOS4VZHN7H3AH7HI24WTJ6DDDG7PSXZPUXRQAA5ZQ57EFPDNWJ76VVI6BTY3OEUK6ACLLR2TMOHIIULCSXY7R7CS42VU2HWMW2N3GYSQA7Z7ZXBZXAKJCG2P6JWATAXUMZBMLEHJBAX4A4763ZBGFQCOXQZ4PMYWMTTJZOKEHCO36HHFKCS4LGQMPNO42STCFGUEMF2VFEFKHEHYZSUDUOBYTYL4FFGM4UN3RPJIFBZVPITW27PQ76I2XI7PKNYRE6UV2HRD3LN5F5RVP2UL4LCXW5DEN7PKCLUCRZSFZKQRREJJQQJVCH5KMPYS6IGLOOA66K2BKJKBSV3EK4ORTWAESN64IZISZSBMSBLCADDSJQKYAEBGGFEMECTPUWMZH3Q6FRIAX67CQXSOSBCNCWRMRSHJQ6BEFWFA3PHKZLVAIO363MRAJZED5LJXSPY5DFCTIPSYASSPXRJ3OZI5NDPU2CB2TCHV4FSXTDO2QAFFLLLTRLPJN3ITLVEW4XG22HAQLMIGAV2I5UH4YYAIMPFJRAPY6G7YXWQZ7SFV2FTHEU764Q57DQON43USTEIWBGIT77MTBPWAAOE5WJ
3D7S7EMYJG3KMDTC2Z6Y3S4PLYSPV4M2Z6WQC3S6FYPWCB67R6L7VICZ4JH47Y2WEU7PW754FHPZHT3MU3ER7JDSAUT7PPFZZPIK3TOW6RBJ45ODALXYR35KV7QEOUHOU25Q7ZQBNZKZBX2P73SVSVJVTYZ7GJ3SK6ZIGRISYDES2U35XXEZVDOMCDSJL3I5GIZEJWGMDC5P6YVSFUSDAY7ISK3GCXSIXZX25OG7VHGGAGOIQAXDHROUV37NIDGMLZQOTAH7TVJBCXILGOM4QGE2HS5K4B5RJYYEROBM4GRAL5ZL5THTY4C5O4OLV7PZXLHZVGQNOTBYLCQYPSTXWIUDD63WMWQZHLZKMPR26DX5EDI5TKSLS2BK6OMCBTOYM5EKXSBHLGNNY26AMLK5PICFC6WUIK3GSDMRCKUWU2BC3PYOANHE2ZMY2MNR7ZGLT2KOZHR7PZ6PX7XL47HT7P77K2WHH7X5B7MA6ZJDEND
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=never
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 395
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 5361.16 bytes, standard deviation: 975.56 bytes
#
# Additional interesting statistics:
#
# * Ranging from 3494 [once] to 13144 [once] bytes.
# * Median size: 5186
# * 99% of examples had at least 3950 bytes
# * 99% of examples had at most 9406 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3832: STARTPCOE3GF3SISLWDKEP5SWF3JRJDRED4X6RLRHUWSD6Z5FF2G7AWOKZ2UR2NIVDTPCANEGIJXYT5PX77XV57PX67X56X2Y6GBHX67XZD757WK5GTZ5T6Z6VV6N5LG337PPTX7P2FJ6J7764UN7FNVLZB6XTA6SG6SM2TENYWO7MZ5I7COR573TFZVMG67HZ5VXK2ZWLLGLFKL7T255U72VVEJ5OL3PYML3K57G7TIT2ZV5K3EL3XLFO4N5T7W7NM7VZD5LIR7SV6TTHJ7X7O6XXRLJPW5DRM47UO7TVJ6G27AF4MLTGJ7Z75HMTZT7JHWTPK3IZSJ6J5DWT5P37KLR3LR6UMT6P4W6P53DYI5DX7O4Y7D3Q62S4MPJLCC57XWP3I6XFXJ6YW6B3WZVPKD5TUPW5SJZ5WSWI6H7HDV5HCF45S3YUJ5G5364N7X4OOEXZVPOKRUPDHCHYZGPLFB37P7MMQYH7TK4ZHUWRWFR33N6DDC3VP3RX2YVHVMTO5OS7YVMN2H53NRLP7NMN6JTSKZ3PVJGG36R5PDF3TEZTXGR2PNTI7HXYVUNWHNWRSJRDJX5DP5VK2P6QGV3NJKMJHWM3QZ5IEBSYJNNGFF5IV6PUX3PORKRZ3ENPRGL44XXYYVAQGJAZLE5OHPV5W7LHR44BNLAOSBDEOZ2GUJRC4ZHWFPL3T4QMIOPJAF7E25MGV5JL44JJGOMLCTDSL3R5OP2LQK6DDUPGG7R2KQLQTTVYVATE4RFMC3MHMZQIWE23HRVHVLUZFKBETZD56FQVXNAXVRVFVLUVCLY5G2QBIBIHCUI3N4KQVPMYVTVKYR7N6ROOIGBHPLYLK2LAZVYMOKVV24Q2WBIGSBCTR5ABCRQ6U2D7JPWAV4HEQCEZGUDDGOOKOZUVGYBUUEDJKXPGZXVBTNQLKJFMLNG6EOWGQGZCCJ5ZUTKYAVCCMHNFRRKC5VAXEOCLHXWD5RB2CZ4XIDWIVUKBMGTEGXIF335R2DGGFODEXUBIZEDUMLGC34AOXVXIR5FAL4FI6FEP52VTVU7VS55SM73X7MQICSJ7GPNBKAIKHVV4JM6KM4FABXCBOVM2A2ZBPRB3JJURDR3FVUTJ3RP6EBUCCX2G7WG3S2V6EFEAVCSWSUCOZD3B763QADEYOMIJ42CQ42KCIJIAWUPW6QUZ5KXIS4AFSMKILBG5ISZ22SU5DRKGKWEE7TXC5GAFUGKX267WDBT6I4CA6SOJUM7RAKYAP4EEGNC3WCKUCV56EJGXFMQ23VHLFYEVTRVSBIT5TMT7AVXUAMKC5AFSHWIYJAEKSWEFDRWU4FUM5SRHEIXAENRU7EMLCISUVBJCUZDSGN72Q42TNJBZUB2TEW4SNBI7M3OMG6VQVPNNIXYAMCN47STJI6T6IVBWMWNMDKWIGFGUAWOJBWBHBE7BGDBDPBL5IQZWRSGW3DFBJ6TP7UMGW3COM2BUM5J2DM3STWHGQUL7O2XWQV4Y57NNR5SEYFP4RLQMO7SI4ZNHKFOIBBI4TWWEBLMT4FGCI32WL7I55INCAZECCVS2QS7JY3GHPCITZRSJHTC4BFDUQHJ2ILPZBJVZAFEMEAHZMT4BOUXQJDZSE2XSKWVUW2EW2A3ROMYCWLQCZ2PRAGA6US3YE24QEYBRO6CEGRHODGMARVF75YT4WZP3LVFJ6KGVPXXSV5PKQSYPBZYT2SPMWP4KENXYECKAJCZN2T6HSNK7C5OIIUHS4AHCQ7VI25YTCNGNPNLXBEC3UHZYFMSMIDTJAYEZKSFVECHVCAOUCPXDWP7QIONDSNCMAKPE4J5FIMOEQZB5WGWPAMOF7IRRKENFXKKNS56NRCJPRDGE64IYUS2JAOKVGVMFNNJPLYMBIZMO7BJ5QWOBDTDCJYTZYF3CTGPD6AQ2Z2OTHV7UVNVVB23YGQUBIESRXADWOH3WQFQJLJBNNDZB3O5UZ2D4BSQYY26AXCSN3UIBFRR
KSLJBHJ44PADLYYXW36LQ2ARN44FYVEI7OZY7BEUA5SC3CHAZT5SDL6EWN5XZ4HP624JX6KKKQQYI4XKTDKMADPKAYKEZ4MLRSQCPLJ3HHF4FZIN4WKJTBMHBGGUVNHMEISOIMNGLQVC4AYKG2OIRXHF5DFCUFODFMGVMANRHM7MKX3OCRAQ3DJHX7EPMRIGZTHQPK54VHT7GSTP6EERJLHZEWXJGGEOPTV24AVGV4DNDFXPIMVHFPMI2XM2WURES3GIMBAOTASJYWST2XZQN6W6IL5QC66BK4U56LJEJQ6CYPIBCROIORVIER5TENGEM7JJDZGTJOG3PRXX5CUGNUSLBUER4BGKBYVZXIC2XWLYX4PWLMOS5NQJ4I4SOGOHOYSAYEYIZ7WF2FPWHDIZLW5NG7TSBAITBZMWESBMR2E5T2GPPTHLOP2ITR7L2RQIYV4KAF7ECQ2UFZKHDCRVOUEXABR4G36FSM65XFJ4BBXREP3CJZXVDVEFBWRIFKEXISGCXPLQGLZISCSEEKRCANUXXIM3EM4IZQKMFVL75CSOYRCZYDDIL4RO4H3UGTXR4PUMWHFGU2HWMSR6MUSRBWP3KW3CXXNG4U2DLGYG3HEQXJ4M67JSQAIRJJI6KHODH5UVHA2MCMQ37EFE3E4UBUQ2JDB6K2VLJAUERZQUFEHVVTT3JE3GSCAJTZNG7VS2QLFRIBPTA3DQOKK77EOZOX2QWNYLLVT5IWV4VAYC42JGDSOVSJTBOTKDD62D6CVOAJNCPKQZZK4EAUQKWQNFEYVFOZUXKYLMXFKIS3SQMYIUQJSO2MSTMEQ63TRFAOIRD54T3GVYQAZMUL37PKNIICXHSTJAV2NB2YAEMGMN7JKDXM3BKDRWAC362JXYFU4QA4Q6QK2JCUQAQ2J3TQNBW67D3RFGZ6ZI2I6EEYTP52Y6QVG3USVK56FJ5PK5JV7J53R6AZLAHOR4WLLDR76VMJ4PKMEFQMZRXIMLTV3A4OSA5KJCDRST5445VUMJJLYAGQ2V3WM7RV6SPE4JF6P5QGF2STKPLQZ5BIP5OJYQ7ORLI4MWBYKZPDH6OQU6OEJJPIKZOQOCGBIDDBCDEEXPKICYLQDNAYZ4S5DLDZJ344VR7MSDQVPWINZFNL7JGBVEIRP22LJ3UGVSHGT3FQA2O44RYCS2HQ3ZE5TOTQJY2W74PY7VIEUP2GDUTJDQIQQVEKHAKLBK5RDRU2R35JK6QHDCXSRFHGGNC4XMMVON3N75FY2RA5SWZ3AFRWTWYGFJYKOYO3GGIKEGPKFP2ACQZPZOOP4H6PN4VF3QFYUK3ZKU7EXP3FP7WZZYPNE3B6SSNA42R2MVXLSMBAERIH2XQALE5H2BFJRMAESSGHP7ZUCVQYFBSNVZ5YQSFQP3NE7LJHC6V2NVESU47RK3RCYQB565F5GF2V4PILKEIBHVCWTSQNEOS6NYGC5HVW6IWC6ZP7KUW7XY4NZE2OCTC5RVQJWMEE6ZRK2XGSYQQC4HRHAM6T2J5CB2DDUC5ODYCBTP25WSPYKQSFSL5IJJJOAPN46VLFEATTFUKWUQTDCYCGEDYN4HXJ4RGQDVNUZ2L3V25HW4IT435YP6ZKD2JRDOS4DJ3CY56QWKIRIUEJSFCWGCU4ARE2JCZXISJXUQUMSRDJWYR7SORUCVPIV2RQPE75YKM3A7LQJSBRSQBPXG5ZEK3A7LUJJ7IGHXXR2XSFXVHWUFO5POKMGSEXJQ5ISX4565HDMVKNPIXJYHYVQMLGBVIS7UNEHAVOBS3KJTWHBJNNRGT45F3GVABFES2CDVA77C57X26X37P3775Z5PX4NM5775D5Z4EZ2CEND 
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=size_lower_bound
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 399
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 6396.64 bytes, standard deviation: 8280.42 bytes
#
# Additional interesting statistics:
#
# * Ranging from 392 [333 times] to 66330 [once] bytes.
# * Median size: 2974
# * 99% of examples had at least 392 bytes
# * 99% of examples had at most 38237 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
=========
Changelog
=========

This is a record of all past Hypothesis releases and what went into them, in
reverse chronological order. All previous releases should still be available
on pip.

Hypothesis APIs come in three flavours:

* Public: Hypothesis releases since 1.0 are semantically versioned with
  respect to these parts of the API. These will not break except between
  major version bumps. All APIs mentioned in this documentation are public
  unless explicitly noted otherwise.


* Semi-public: These are APIs that are considered ready to use but are not
  wholly nailed down yet. They will not break in patch releases and will
  *usually* not break in minor releases, but when necessary minor releases
  may break semi-public APIs.

* Internal: These may break at any time and you really should not use them
  at all. You should generally assume that an API is internal unless you
  have specific information to the contrary.

-------------------
3.44.1 - 2017-12-18
-------------------

This release fixes :issue:`997`, in which under some circumstances the body
of tests run under Hypothesis would not show up when run under coverage,
even though the tests were run and the code they called outside of the test
file would show up normally.

-------------------
3.44.0 - 2017-12-17
-------------------

This release adds a new feature: the :ref:`@reproduce_failure` decorator,
designed to make it easy to use Hypothesis's binary format for examples to
reproduce a problem locally without having to share your example database
between machines.

This also changes when seeds are printed:

* They will no longer be printed for normal falsifying examples, as there
  are now adequate ways of reproducing those for all cases, so it just
  contributes noise.

* They will once again be printed when reusing examples from the database,
  as health check failures should now be more reliable in this scenario, so
  it will almost always work in this case.

This work was funded by Smarkets.

-------------------
3.43.1 - 2017-12-17
-------------------

This release fixes a bug with Hypothesis's database management: examples
that were found in the course of shrinking were saved in a way that
indicated that they had distinct causes, and so they would all be retried
on the start of the next test. The intended behaviour, which is now
implemented, is that only a bounded subset of these examples would be
retried.

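As a sketch of how a printed seed can be replayed, fixing the randomness so the same examples are generated on every run (the test name and seed value here are illustrative, not from a real failure):

```python
from hypothesis import given, seed, strategies as st

# @seed fixes Hypothesis's randomness to the given value, so the same
# sequence of examples is generated on every run - this is how a seed
# printed by a failing test can be used to replay it.
@seed(12345)
@given(st.integers())
def test_addition_has_an_identity(x):
    assert x + 0 == x

test_addition_has_an_identity()
```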

-------------------
3.43.0 - 2017-12-17
-------------------

:exc:`~hypothesis.errors.HypothesisDeprecationWarning` now inherits from
:exc:`python:FutureWarning` instead of :exc:`python:DeprecationWarning`, as
recommended by :pep:`565` for user-facing warnings (:issue:`618`). If you
have not changed the default warnings settings, you will now see each
distinct :exc:`~hypothesis.errors.HypothesisDeprecationWarning` instead of
only the first.

-------------------
3.42.2 - 2017-12-12
-------------------

This patch fixes :issue:`1017`, where instances of a list or tuple subtype
used as an argument to a strategy would be coerced to tuple.

-------------------
3.42.1 - 2017-12-10
-------------------

This release has some internal cleanup, which makes reading the code more
pleasant and may shrink large examples slightly faster.

-------------------
3.42.0 - 2017-12-09
-------------------

This release deprecates :ref:`faker-extra`, which was designed as a
transition strategy but does not support example shrinking or
coverage-guided discovery.

-------------------
3.41.0 - 2017-12-06
-------------------

:func:`~hypothesis.strategies.sampled_from` can now sample from
one-dimensional numpy ndarrays. Sampling from multi-dimensional ndarrays
still results in a deprecation warning. Thanks to Charlie Tanksley for this
patch.

-------------------
3.40.1 - 2017-12-04
-------------------

This release makes two changes:

* It makes the calculation of some of the metadata that Hypothesis uses for
  shrinking occur lazily. This should speed up performance of test case
  generation a bit, because it no longer calculates information it doesn't
  need.

* It improves the shrinker for certain classes of nested examples: for
  example, when shrinking lists of lists, the shrinker is now able to
  concatenate two adjacent lists together into a single list. As a result
  of this change, shrinking may get somewhat slower when the minimal
  example found is large.

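The 3.41.0 change above can be sketched as follows (the array contents are purely illustrative):

```python
import numpy as np
from hypothesis import given, strategies as st

# sampled_from accepts a one-dimensional ndarray directly; every drawn
# value is one of the array's elements.
primes = np.array([2, 3, 5, 7, 11])

@given(st.sampled_from(primes))
def test_draws_come_from_the_array(p):
    assert p in primes

test_draws_come_from_the_array()
```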

-------------------
3.40.0 - 2017-12-02
-------------------

This release improves how various ways of seeding Hypothesis interact with
the example database:

* Using the example database with :func:`~hypothesis.seed` is now
  deprecated. You should set ``database=None`` if you are doing that. This
  will only warn if you actually load examples from the database while
  using ``@seed``.

* The :attr:`~hypothesis.settings.derandomize` setting will behave the same
  way as ``@seed``.

* Using ``--hypothesis-seed`` will disable use of the database.

* If a test used examples from the database, it will not suggest using a
  seed to reproduce it, because that won't work.

This work was funded by Smarkets.

-------------------
3.39.0 - 2017-12-01
-------------------

This release adds a new health check that checks if the smallest "natural"
possible example of your test case is very large - this will tend to cause
Hypothesis to generate bad examples and be quite slow.

This work was funded by Smarkets.

-------------------
3.38.9 - 2017-11-29
-------------------

This is a documentation release to improve the documentation of shrinking
behaviour for Hypothesis's strategies.

-------------------
3.38.8 - 2017-11-29
-------------------

This release improves the performance of
:func:`~hypothesis.strategies.characters` when using
``blacklist_characters``, and of :func:`~hypothesis.strategies.from_regex`
when using negative character classes.

The problems this fixes were found in the course of work funded by
Smarkets.

-------------------
3.38.7 - 2017-11-29
-------------------

This is a patch release for :func:`~hypothesis.strategies.from_regex`,
which had a bug in handling of the :obj:`python:re.VERBOSE` flag
(:issue:`992`). Flags are now handled correctly when parsing regex.

-------------------
3.38.6 - 2017-11-28
-------------------

This patch changes a few byte-string literals from double to single quotes,
thanks to an update in :pypi:`unify`. There are no user-visible changes.

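A minimal sketch of generating strings from a verbose pattern, the case fixed in 3.38.7 (the pattern itself is an illustration):

```python
import re
from hypothesis import given, strategies as st

# Under re.VERBOSE, whitespace and comments inside the pattern are
# ignored; from_regex generates strings that the pattern matches.
PHONE = re.compile(r"""
    [0-9]{3}  # area code
    -
    [0-9]{4}  # line number
""", re.VERBOSE)

@given(st.from_regex(PHONE))
def test_generated_strings_match(s):
    assert PHONE.search(s) is not None

test_generated_strings_match()
```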

-------------------
3.38.5 - 2017-11-23
-------------------

This fixes the repr of strategies using lambdas that are defined inside
decorators to include the lambda source.

This would mostly have been visible when using the :ref:`statistics`
functionality - lambdas used for e.g. filtering would have shown up with a
placeholder as their body. This can still happen, but it should happen less
often now.

-------------------
3.38.4 - 2017-11-22
-------------------

This release updates the reported :ref:`statistics` so that they show
approximately what fraction of your test run time is spent in data
generation (as opposed to test execution).

This work was funded by Smarkets.

-------------------
3.38.3 - 2017-11-21
-------------------

This is a documentation release, which ensures code examples are up to date
by running them as doctests in CI (:issue:`711`).

-------------------
3.38.2 - 2017-11-21
-------------------

This release changes the behaviour of the
:attr:`~hypothesis.settings.deadline` setting when used with
:func:`~hypothesis.strategies.data`: time spent inside calls to
``data.draw`` will no longer be counted towards the deadline time.

As a side effect of some refactoring required for this work, the way flaky
tests are handled has changed slightly. You are unlikely to see much
difference from this, but some error messages will have changed.

This work was funded by Smarkets.

-------------------
3.38.1 - 2017-11-21
-------------------

This patch has a variety of non-user-visible refactorings, removing various
minor warts ranging from indirect imports to typos in comments.

-------------------
3.38.0 - 2017-11-18
-------------------

This release overhauls :doc:`the health check system` in a variety of small
ways. It adds no new features, but is nevertheless a minor release because
it changes which tests are likely to fail health checks.


The most noticeable effect is that some tests that used to fail health
checks will now pass, and some that used to pass will fail. These should
all be improvements in accuracy. In particular:

* New failures will usually be because they are now taking into account
  things like use of :func:`~hypothesis.strategies.data` and
  :func:`~hypothesis.assume` inside the test body.

* New failures *may* also be because for some classes of example the way
  data generation performance was measured was artificially faster than
  real data generation (for most examples that are hitting performance
  health checks the opposite should be the case).

* Tests that used to fail health checks and now pass do so because the
  health check system used to run in a way that was subtly different from
  the main Hypothesis data generation, and lacked some of its support for
  e.g. large examples.

If your data generation is especially slow, you may also see your tests get
somewhat faster, as there is no longer a separate health check phase. This
will be particularly noticeable when rerunning test failures.

This work was funded by Smarkets.

-------------------
3.37.0 - 2017-11-12
-------------------

This is a deprecation release for some health check related features.

The following are now deprecated:

* Passing :attr:`~hypothesis.HealthCheck.exception_in_generation` to
  :attr:`~hypothesis.settings.suppress_health_check`. This no longer does
  anything even when passed: all errors that occur during data generation
  will now be immediately reraised rather than going through the health
  check mechanism.

* Passing :attr:`~hypothesis.HealthCheck.random_module` to
  :attr:`~hypothesis.settings.suppress_health_check`. This hasn't done
  anything for a long time, but was never explicitly deprecated. Hypothesis
  always seeds the random module when running @given tests, so this is no
  longer an error and suppressing it doesn't do anything.


* Passing non-:class:`~hypothesis.HealthCheck` values in
  :attr:`~hypothesis.settings.suppress_health_check`. This was previously
  allowed but never did anything useful.

In addition, passing a non-iterable value as
:attr:`~hypothesis.settings.suppress_health_check` will now raise an error
immediately (it would never have worked correctly, but it would previously
have failed later). Some validation error messages have also been updated.

This work was funded by Smarkets.

-------------------
3.36.1 - 2017-11-10
-------------------

This is a yak shaving release, mostly concerned with our own tests.

While :func:`~python:inspect.getfullargspec` was documented as deprecated
in Python 3.5, it never actually emitted a warning. Our code to silence
this (nonexistent) warning has therefore been removed.

We now run our tests with ``DeprecationWarning`` as an error, and made some
minor changes to our own tests as a result. This required similar upstream
updates to :pypi:`coverage` and :pypi:`execnet` (a test-time dependency via
:pypi:`pytest-xdist`).

There is no user-visible change in Hypothesis itself, but we encourage you
to consider enabling deprecations as errors in your own tests.

-------------------
3.36.0 - 2017-11-06
-------------------

This release adds a setting to the public API, and does some internal
cleanup:

- The :attr:`~hypothesis.settings.derandomize` setting is now documented
  (:issue:`890`)
- Removed - and disallowed - all 'bare excepts' in Hypothesis
  (:issue:`953`)
- Documented the :attr:`~hypothesis.settings.strict` setting as deprecated,
  and updated the build so our docs always match deprecations in the code.

-------------------
3.35.0 - 2017-11-06
-------------------

This minor release supports constraining
:func:`~hypothesis.strategies.uuids` to generate a particular version of
:class:`~python:uuid.UUID` (:issue:`721`). Thanks to Dion Misic for this
feature.

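The 3.35.0 feature can be used like this (version 4 is chosen arbitrarily for illustration):

```python
from hypothesis import given, strategies as st

# Constraining uuids() to a single UUID version: every generated
# uuid.UUID reports that version.
@given(st.uuids(version=4))
def test_only_version_four(u):
    assert u.version == 4

test_only_version_four()
```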

-------------------
3.34.1 - 2017-11-02
-------------------

This patch updates the documentation to suggest
:func:`builds(callable)` instead of :func:`just(callable())`.

-------------------
3.34.0 - 2017-11-02
-------------------

Hypothesis now emits deprecation warnings if you apply :func:`@given` more
than once to a target.

Applying :func:`@given` repeatedly wraps the target multiple times. Each
wrapper will search the space of possible parameters separately. This is
equivalent, but much less efficient than doing it with a single call to
:func:`@given`.

For example, instead of ``@given(booleans()) @given(integers())``, you
could write ``@given(booleans(), integers())``.

-------------------
3.33.1 - 2017-11-02
-------------------

This is a bugfix release:

- :func:`~hypothesis.strategies.builds` would try to infer a strategy for
  required positional arguments of the target from type hints, even if they
  had been given to :func:`~hypothesis.strategies.builds` as positional
  arguments (:issue:`946`). Now it only infers missing required arguments.
- An internal introspection function wrongly reported ``self`` as a
  required argument for bound methods, which might also have affected
  :func:`~hypothesis.strategies.builds`. Now it knows better.

-------------------
3.33.0 - 2017-10-16
-------------------

This release supports strategy inference for more field types in Django
:func:`~hypothesis.extra.django.models` - you can now omit an argument for
Date, Time, Duration, Slug, IP Address, and UUID fields (:issue:`642`).

Strategy generation for fields with grouped choices now selects choices
from each group, instead of selecting from the group names.

-------------------
3.32.2 - 2017-10-15
-------------------

This patch removes the ``mergedb`` tool, introduced in Hypothesis 1.7.1 on
an experimental basis. It has never actually worked, and the new
:doc:`Hypothesis example database` is designed to make such a tool
unnecessary.

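The recommendation from 3.34.0 above looks like this in practice (the test name is illustrative):

```python
from hypothesis import given, strategies as st

# One @given call with several strategies explores the combined
# parameter space directly; stacking @given decorators would wrap the
# test repeatedly and search each space separately.
@given(st.booleans(), st.integers())
def test_takes_two_arguments(flag, number):
    assert isinstance(flag, bool)
    assert isinstance(number, int)

test_takes_two_arguments()
```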

-------------------
3.32.1 - 2017-10-13
-------------------

This patch has two improvements for strategies based on enumerations.

- :func:`~hypothesis.strategies.from_type` now handles enumerations
  correctly, delegating to :func:`~hypothesis.strategies.sampled_from`.
  Previously it noted that ``Enum.__init__`` has no required arguments and
  therefore delegated to :func:`~hypothesis.strategies.builds`, which would
  subsequently fail.
- When sampling from an :class:`python:enum.Flag`, we also generate
  combinations of members. E.g. for
  ``Flag('Permissions', 'READ, WRITE, EXECUTE')`` we can now generate
  ``Permissions.READ``, ``Permissions.READ|WRITE``, and so on.

-------------------
3.32.0 - 2017-10-09
-------------------

This changes the default value of the
:attr:`~hypothesis.settings.use_coverage` setting to True when running on
pypy (it was already True on CPython).

It was previously set to False because we expected it to be too slow, but
recent benchmarking shows that performance of the feature on pypy is
actually fairly acceptable - sometimes it's slower than on CPython,
sometimes it's faster, but it's generally within a factor of two either
way.

-------------------
3.31.6 - 2017-10-08
-------------------

This patch improves the quality of strategies inferred from Numpy dtypes:

* Integer dtypes generated examples with the upper half of their (non-sign)
  bits set to zero. The inferred strategies can now produce any
  representable integer.
* Fixed-width unicode- and byte-string dtypes now cap the internal example
  length, which should improve example and shrink quality.
* Numpy arrays can only store fixed-size strings internally, and allow
  shorter strings by right-padding them with null bytes. Inferred string
  strategies no longer generate such values, as they can never be retrieved
  from an array. This improves shrinking performance by skipping useless
  values.


This has already been useful in Hypothesis - we found an overflow bug in
our Pandas support, and as a result
:func:`~hypothesis.extra.pandas.indexes` and
:func:`~hypothesis.extra.pandas.range_indexes` now check that ``min_size``
and ``max_size`` are at least zero.

-------------------
3.31.5 - 2017-10-08
-------------------

This release fixes a performance problem in tests where
:attr:`~hypothesis.settings.use_coverage` is set to True.

Tests experience a slow-down proportionate to the amount of code they
cover. This is still the case, but the factor is now low enough that it
should be unnoticeable. Previously it was large and became much larger in
3.28.4.

-------------------
3.31.4 - 2017-10-08
-------------------

:func:`~hypothesis.strategies.from_type` failed with a very confusing error
if passed a :func:`~python:typing.NewType` (:issue:`901`). These
pseudo-types are now unwrapped correctly, and strategy inference works as
expected.

-------------------
3.31.3 - 2017-10-06
-------------------

This release makes some small optimisations to our use of coverage that
should reduce constant per-example overhead. This is probably only
noticeable on examples where the test itself is quite fast. On no-op tests
that don't test anything you may see up to a fourfold speed increase (which
is still significantly slower than without coverage). On more realistic
tests the speed-up is likely to be less than that.

-------------------
3.31.2 - 2017-09-30
-------------------

This release fixes some formatting and small typos/grammar issues in the
documentation, specifically the page docs/settings.rst, and the inline docs
for the various settings.

-------------------
3.31.1 - 2017-09-30
-------------------

This release improves the handling of deadlines so that they act better
with the shrinking process. This fixes :issue:`892`.

This involves two changes:

1. The deadline is raised during the initial generation and shrinking, and
   then lowered to the set value for final replay.


   This restricts our attention to examples which exceed the deadline by a
   more significant margin, which increases their reliability.

2. When despite the above a test still becomes flaky because it is
   significantly faster on rerun than it was on its first run, the error
   message is now more explicit about the nature of this problem, and
   includes both the initial test run time and the new test run time.

In addition, this release also clarifies the documentation of the deadline
setting slightly, to be more explicit about where it applies.

This work was funded by Smarkets.

-------------------
3.31.0 - 2017-09-29
-------------------

This release blocks installation of Hypothesis on Python 3.3, which
:PEP:`reached its end of life date on 2017-09-29 <398>`.

This should not be of interest to anyone but downstream maintainers - if
you are affected, migrate to a secure version of Python as soon as possible
or at least seek commercial support.

-------------------
3.30.4 - 2017-09-27
-------------------

This release makes several changes:

1. It significantly improves Hypothesis's ability to use coverage
   information to find interesting examples.
2. It reduces the default :attr:`~hypothesis.settings.max_examples` setting
   from 200 to 100. This takes advantage of the improved algorithm, which
   means fewer examples are typically needed to get the same quality of
   testing and which is sufficiently better at covering interesting
   behaviour; it also offsets some of the performance problems of running
   under coverage.
3. Hypothesis will always try to start its testing with an example that is
   near minimized.

The new algorithm for 1 also makes some changes to Hypothesis's low level
data generation which apply even with coverage turned off. They generally
reduce the total amount of data generated, which should improve test
performance somewhat. Between this and 3 you should see a noticeable
reduction in test runtime (how much so depends on your tests and how much
example size affects their performance).


On our benchmarks, where data generation dominates, we saw up to a factor
of two performance improvement, but it's unlikely to be that large.

-------------------
3.30.3 - 2017-09-25
-------------------

This release fixes some formatting and small typos/grammar issues in the
documentation, specifically the page docs/details.rst, and some inline docs
linked from there.

-------------------
3.30.2 - 2017-09-24
-------------------

This release changes Hypothesis's caching approach for functions in
``hypothesis.strategies``. Previously it would have cached extremely
aggressively and cache entries would never be evicted. Now it adopts a
least-frequently used, least recently used key invalidation policy, and is
somewhat more conservative about which strategies it caches.

Workloads which create strategies based on dynamic values, e.g. by using
:ref:`flatmap` or :func:`~hypothesis.strategies.composite`, will use
significantly less memory.

-------------------
3.30.1 - 2017-09-22
-------------------

This release fixes a bug where, when running with
:attr:`~hypothesis.settings.use_coverage` set to True inside an existing
running instance of coverage, Hypothesis would frequently put files that
the coveragerc excluded in the report for the enclosing coverage.

-------------------
3.30.0 - 2017-09-20
-------------------

This release introduces two new features:

* When a test fails, either with a health check failure or a falsifying
  example, Hypothesis will print out a seed that led to that failure, if
  the test is not already running with a fixed seed. You can then recreate
  that failure using either the :func:`@seed` decorator or (if you are
  running pytest) with ``--hypothesis-seed``.
* :pypi:`pytest` users can specify a seed to use for :func:`@given` based
  tests by passing the ``--hypothesis-seed`` command line argument.

This work was funded by Smarkets.

-------------------
3.29.0 - 2017-09-19
-------------------

This release makes Hypothesis coverage-aware.


Hypothesis now runs all test bodies under coverage, and uses this
information to guide its testing.

The :attr:`~hypothesis.settings.use_coverage` setting can be used to
disable this behaviour if you want to test code that is sensitive to
coverage being enabled (either because of performance or interaction with
the trace function).

The main benefits of this feature are:

* Hypothesis now observes when examples it discovers cover particular lines
  or branches and stores them in the database for later.
* Hypothesis will make some use of this information to guide its
  exploration of the search space and improve the examples it finds (this
  is currently used only very lightly and will likely improve significantly
  in future releases).

This also has the following side-effects:

* Hypothesis now has an install-time dependency on the :pypi:`coverage`
  package.
* Tests that are already running Hypothesis under coverage will likely get
  faster.
* Tests that are not running under coverage now run their test bodies under
  coverage by default.

This feature is only partially supported under pypy. It is significantly
slower than on CPython and is turned off by default as a result, but it
should still work correctly if you want to use it.

-------------------
3.28.3 - 2017-09-18
-------------------

This release is an internal change that affects how Hypothesis handles
calculating certain properties of strategies.

The primary effect of this is that it fixes a bug where use of
:func:`~hypothesis.deferred` could sometimes trigger an internal assertion
error.

However the fix for this bug involved some moderately deep changes to how
Hypothesis handles certain constructs, so you may notice some additional
knock-on effects. In particular the way Hypothesis handles drawing data
from strategies that cannot generate any values has changed to bail out
sooner than it previously did.

This may speed up certain tests, but it is unlikely to make much of a difference in practice for tests that were not already failing with Unsatisfiable.

-------------------
3.28.2 - 2017-09-18
-------------------

This is a patch release that fixes a bug in the :mod:`hypothesis.extra.pandas` documentation where it incorrectly referred to :func:`~hypothesis.extra.pandas.column` instead of :func:`~hypothesis.extra.pandas.columns`.

-------------------
3.28.1 - 2017-09-16
-------------------

This is a refactoring release. It moves a number of internal uses of :func:`~python:collections.namedtuple` over to using attrs based classes, and removes a couple of internal namedtuple classes that were no longer in use.

It should have no user visible impact.

-------------------
3.28.0 - 2017-09-15
-------------------

This release adds support for testing :pypi:`pandas` via the :ref:`hypothesis.extra.pandas ` module.

It also adds a dependency on :pypi:`attrs`.

This work was funded by `Stripe `_.

-------------------
3.27.1 - 2017-09-14
-------------------

This release fixes some formatting and broken cross-references in the documentation. Since this involved editing docstrings, it is being released as a patch release.

-------------------
3.27.0 - 2017-09-13
-------------------

This release introduces a :attr:`~hypothesis.settings.deadline` setting to Hypothesis.

When set this turns slow tests into errors. By default it is unset but will warn if you exceed 200ms, which will become the default value in a future release.

This work was funded by `Smarkets `_.

-------------------
3.26.0 - 2017-09-12
-------------------

Hypothesis now emits deprecation warnings if you are using the legacy SQLite example database format, or the tool for merging them. These were already documented as deprecated, so this doesn't change their deprecation status, only that we warn about it.
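The deadline behaviour described in the 3.27.0 entry above amounts to timing each test body and turning an overrun into an error. A minimal pure-Python sketch of the idea (this is not Hypothesis's actual implementation, and the names here are invented):

```python
import time


class DeadlineExceeded(Exception):
    """Raised when a test body runs for longer than its deadline."""


def run_with_deadline(test_body, deadline_ms=200):
    """Run test_body and raise DeadlineExceeded if it took too long."""
    start = time.perf_counter()
    result = test_body()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > deadline_ms:
        raise DeadlineExceeded(
            'test took %.2fms (deadline: %dms)' % (elapsed_ms, deadline_ms)
        )
    return result
```

The real feature also supports the warn-only default described above; this sketch only shows the hard-error case that applies once a deadline is set explicitly.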
-------------------
3.25.1 - 2017-09-12
-------------------

This release fixes a bug with generating :doc:`numpy datetime and timedelta types `: When inferring the strategy from the dtype, datetime and timedelta dtypes with sub-second precision would always produce examples with one second resolution. Inferring a strategy from a time dtype will now always produce examples with the same precision.

-------------------
3.25.0 - 2017-09-12
-------------------

This release changes how Hypothesis shrinks and replays examples to take into account that it can encounter new bugs while shrinking the bug it originally found. Previously it would end up replacing the originally found bug with the new bug and show you only that one. Now it is (often) able to recognise when two bugs are distinct and when it finds more than one will show both.

-------------------
3.24.2 - 2017-09-11
-------------------

This release removes the (purely internal and no longer useful) ``strategy_test_suite`` function and the corresponding strategytests module.

-------------------
3.24.1 - 2017-09-06
-------------------

This release improves the reduction of examples involving floating point numbers to produce more human readable examples.

It also has some general purpose changes to the way the minimizer works internally, which may see some improvement in quality and slow down of test case reduction in cases that have nothing to do with floating point numbers.

-------------------
3.24.0 - 2017-09-05
-------------------

Hypothesis now emits deprecation warnings if you use ``some_strategy.example()`` inside a test function or strategy definition (this was never intended to be supported, but is sufficiently widespread that it warrants a deprecation path).

-------------------
3.23.3 - 2017-09-05
-------------------

This is a bugfix release for :func:`~hypothesis.strategies.decimals` with the ``places`` argument.
- No longer fails health checks (:issue:`725`, due to internal filtering)
- Specifying a ``min_value`` and ``max_value`` without any decimals with ``places`` places between them gives a more useful error message.
- Works for any valid arguments, regardless of the decimal precision context.

-------------------
3.23.2 - 2017-09-01
-------------------

This is a small refactoring release that removes a now-unused parameter to an internal API. It shouldn't have any user visible effect.

-------------------
3.23.1 - 2017-09-01
-------------------

Hypothesis no longer propagates the dynamic scope of settings into strategy definitions.

This release is a small change to something that was never part of the public API and you will almost certainly not notice any effect unless you're doing something surprising, but for example the following code will now give a different answer in some circumstances:

.. code-block:: python

    import hypothesis.strategies as st
    from hypothesis import settings

    CURRENT_SETTINGS = st.builds(lambda: settings.default)

(We don't actually encourage you to write code like this.)

Previously this would have generated the settings that were in effect at the point of definition of ``CURRENT_SETTINGS``. Now it will generate the settings that are used for the current test.

It is very unlikely to be significant enough to be visible, but you may also notice a small performance improvement.

-------------------
3.23.0 - 2017-08-31
-------------------

This release adds a ``unique`` argument to :func:`~hypothesis.extra.numpy.arrays` which behaves the same way as the corresponding one for :func:`~hypothesis.strategies.lists`, requiring all of the elements in the generated array to be distinct.

-------------------
3.22.2 - 2017-08-29
-------------------

This release fixes an issue where Hypothesis would raise a ``TypeError`` when using the datetime-related strategies if running with ``PYTHONOPTIMIZE=2``. This bug was introduced in v3.20.0.
(See :issue:`822`)

-------------------
3.22.1 - 2017-08-28
-------------------

Hypothesis now transparently handles problems with an internal unicode cache, including file truncation or read-only filesystems (:issue:`767`). Thanks to Sam Hames for the patch.

-------------------
3.22.0 - 2017-08-26
-------------------

This release provides what should be a substantial performance improvement to numpy arrays generated using :ref:`provided numpy support `, and adds a new ``fill_value`` argument to :func:`~hypothesis.extra.numpy.arrays` to control this behaviour.

This work was funded by `Stripe `_.

-------------------
3.21.3 - 2017-08-26
-------------------

This release fixes some extremely specific circumstances that probably have never occurred in the wild where users of :func:`~hypothesis.strategies.deferred` might have seen a RuntimeError from too much recursion, usually in cases where no valid example could have been generated anyway.

-------------------
3.21.2 - 2017-08-25
-------------------

This release fixes some minor bugs in argument validation:

* :ref:`hypothesis.extra.numpy ` dtype strategies would raise an internal error instead of an InvalidArgument exception when passed an invalid endianness specification.
* :func:`~hypothesis.strategies.fractions` would raise an internal error instead of an InvalidArgument if passed ``float("nan")`` as one of its bounds.
* The error message for passing ``float("nan")`` as a bound to various strategies has been improved.
* Various bound arguments will now raise InvalidArgument in cases where they would previously have raised an internal TypeError or ValueError from the relevant conversion function.
* :func:`~hypothesis.strategies.streaming` would not have emitted a deprecation warning when called with an invalid argument.
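The validation fixes in the 3.21.2 entry above follow a common pattern: check user-supplied bounds up front and raise a single friendly exception class instead of leaking internal TypeErrors or ValueErrors. A sketch of that pattern with a hypothetical ``check_bound`` helper (not Hypothesis's actual code):

```python
import math
from fractions import Fraction


class InvalidArgument(ValueError):
    """Raised for invalid user-supplied arguments."""


def check_bound(name, value):
    """Validate a user-supplied bound, converting it to a Fraction.

    NaN and unconvertible values are rejected with InvalidArgument
    rather than leaking an internal TypeError or ValueError.
    """
    if value is None:
        return None
    if isinstance(value, float) and math.isnan(value):
        raise InvalidArgument('%s=%r is not a valid bound' % (name, value))
    try:
        return Fraction(value)
    except (TypeError, ValueError) as err:
        raise InvalidArgument('Invalid %s: %r (%s)' % (name, value, err))
```

Converting through :class:`python:fractions.Fraction` is one way to accept ints, floats, and decimal strings uniformly while still catching anything that cannot represent an exact bound.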
-------------------
3.21.1 - 2017-08-24
-------------------

This release fixes a bug where test failures that were the result of an :func:`@example ` would print an extra stack trace before re-raising the exception.

-------------------
3.21.0 - 2017-08-23
-------------------

This release deprecates Hypothesis's strict mode, which turned Hypothesis's deprecation warnings into errors. Similar functionality can be achieved by using :func:`simplefilter('error', HypothesisDeprecationWarning) `.

-------------------
3.20.0 - 2017-08-22
-------------------

This release renames the relevant arguments on the :func:`~hypothesis.strategies.datetimes`, :func:`~hypothesis.strategies.dates`, :func:`~hypothesis.strategies.times`, and :func:`~hypothesis.strategies.timedeltas` strategies to ``min_value`` and ``max_value``, to make them consistent with the other strategies in the module.

The old argument names are still supported but will emit a deprecation warning when used explicitly as keyword arguments. Arguments passed positionally will go to the new argument names and are not deprecated.

-------------------
3.19.3 - 2017-08-22
-------------------

This release provides a major overhaul to the internals of how Hypothesis handles shrinking.

This should mostly be visible in terms of getting better examples for tests which make heavy use of :func:`~hypothesis.strategies.composite`, :func:`~hypothesis.strategies.data` or :ref:`flatmap ` where the data drawn depends a lot on previous choices, especially where size parameters are affected. Previously Hypothesis would have struggled to reliably produce good examples here. Now it should do much better. Performance should also be better for examples with a non-zero ``min_size``.

You may see slight changes to example generation (e.g. improved example diversity) as a result of related changes to internals, but they are unlikely to be significant enough to notice.
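The argument renaming in the 3.20.0 entry above uses a standard keyword-compatibility pattern: accept the old name, warn, and forward to the new one. A sketch of that pattern with invented names (not the actual Hypothesis code; the real deprecated names differ per strategy):

```python
import warnings

# Hypothetical mapping from deprecated keyword names to their replacements.
RENAMED = {'min_datetime': 'min_value', 'max_datetime': 'max_value'}


def accept_legacy_kwargs(kwargs):
    """Return kwargs with deprecated names mapped to their new equivalents.

    Using a deprecated name emits a DeprecationWarning; an explicitly
    passed new name takes precedence over its deprecated alias.
    """
    out = dict(kwargs)
    for old, new in RENAMED.items():
        if old in out:
            warnings.warn(
                '%s is deprecated; use %s instead' % (old, new),
                DeprecationWarning,
            )
            out.setdefault(new, out.pop(old))
    return out
```

Because positional arguments bind directly to the new parameter names, only explicit keyword use of the old names goes through a shim like this, which matches the behaviour described above.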
-------------------
3.19.2 - 2017-08-21
-------------------

This release fixes two bugs in :mod:`hypothesis.extra.numpy`:

* :func:`~hypothesis.extra.numpy.unicode_string_dtypes` didn't work at all due to an incorrect dtype specifier. Now it does.
* Various impossible conditions would have been accepted but would error when they failed to produce any example. Now they raise an explicit InvalidArgument error.

-------------------
3.19.1 - 2017-08-21
-------------------

This is a bugfix release for :issue:`739`, where bounds for :func:`~hypothesis.strategies.fractions` or floating-point :func:`~hypothesis.strategies.decimals` were not properly converted to integers before passing them to the integers strategy. This excluded some values that should have been possible, and could trigger internal errors if the bounds lay between adjacent integers.

You can now bound :func:`~hypothesis.strategies.fractions` with two arbitrarily close fractions.

It is now an explicit error to supply a min_value, max_value, and max_denominator to :func:`~hypothesis.strategies.fractions` where the value bounds do not include a fraction with denominator at most max_denominator.

-------------------
3.19.0 - 2017-08-20
-------------------

This release adds the :func:`~hypothesis.strategies.from_regex` strategy, which generates strings that contain a match of a regular expression.

Thanks to Maxim Kulkin for creating the `hypothesis-regex `_ package and then helping to upstream it! (:issue:`662`)

-------------------
3.18.5 - 2017-08-18
-------------------

This is a bugfix release for :func:`~hypothesis.strategies.integers`. Previously the strategy would hit an internal assertion if passed non-integer bounds for ``min_value`` and ``max_value`` that had no integers between them. The strategy now raises InvalidArgument instead.

-------------------
3.18.4 - 2017-08-18
-------------------

Release to fix a bug where mocks can be used as test runners under certain conditions.
Specifically, if a mock is injected into a test via pytest fixtures or patch decorators, and that mock is the first argument in the list, hypothesis will think it represents self and turn the mock into a test runner. If this happens, the affected test always passes because the mock is executed instead of the test body. Sometimes, it will also fail health checks.

Fixes :issue:`491` and a section of :issue:`198`. Thanks to Ben Peterson for this bug fix.

-------------------
3.18.3 - 2017-08-17
-------------------

This release should improve the performance of some tests which experienced a slow down as a result of the 3.13.0 release.

Tests most likely to benefit from this are ones that make extensive use of ``min_size`` parameters, but others may see some improvement as well.

-------------------
3.18.2 - 2017-08-16
-------------------

This release fixes a bug introduced in 3.18.0. If the arguments ``whitelist_characters`` and ``blacklist_characters`` to :func:`~hypothesis.strategies.characters` both contained elements, then an ``InvalidArgument`` exception would be raised.

Thanks to Zac Hatfield-Dodds for reporting and fixing this.

-------------------
3.18.1 - 2017-08-14
-------------------

This is a bug fix release to fix :issue:`780`, where :func:`~hypothesis.strategies.sets` and similar would trigger health check errors if their element strategy could only produce one element (e.g. if it was :func:`~hypothesis.strategies.just`).

-------------------
3.18.0 - 2017-08-13
-------------------

This is a feature release:

* :func:`~hypothesis.strategies.characters` now accepts ``whitelist_characters``, particular characters which will be added to those it produces. (:issue:`668`)
* A bug fix for the internal function ``_union_interval_lists()``, and a rename to ``_union_intervals()``. It now correctly handles all cases where intervals overlap, and it always returns the result as a tuple for tuples.

Thanks to Alex Willmer for these.
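The ``_union_intervals()`` helper mentioned in the 3.18.0 entry above merges overlapping character-code intervals. A standalone sketch of that operation (a plausible reimplementation for illustration, not the actual internal code):

```python
def union_intervals(x, y):
    """Merge two collections of (lo, hi) integer intervals into a sorted
    tuple of disjoint intervals, joining any that overlap or touch."""
    merged = []
    for lo, hi in sorted(tuple(x) + tuple(y)):
        if merged and lo <= merged[-1][1] + 1:
            # Overlaps (or is adjacent to) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return tuple(merged)
```

Returning a tuple of tuples keeps the result hashable, which matters when interval sets are used as cache keys.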
-------------------
3.17.0 - 2017-08-07
-------------------

This release documents :ref:`the previously undocumented phases feature `, making it part of the public API. It also updates how the example database is used. Principally:

* A ``Phases.reuse`` argument will now correctly control whether examples from the database are run (it previously did exactly the wrong thing and controlled whether examples would be *saved*).
* Hypothesis will no longer try to rerun *all* previously failing examples. Instead it will replay the smallest previously failing example and a selection of other examples that are likely to trigger any other bugs that may be found. This prevents a previous failure from dominating your tests unnecessarily.
* As a result of the previous change, Hypothesis will be slower about clearing out old examples from the database that are no longer failing (because it can only clear out ones that it actually runs).

-------------------
3.16.1 - 2017-08-07
-------------------

This release makes an implementation change to how Hypothesis handles certain internal constructs.

The main effect you should see is improvement to the behaviour and performance of collection types, especially ones with a ``min_size`` parameter. Many cases that would previously fail due to being unable to generate enough valid examples will now succeed, and other cases should run slightly faster.

-------------------
3.16.0 - 2017-08-04
-------------------

This release introduces a deprecation of the timeout feature. This results in the following changes:

* Creating a settings object with an explicit timeout will emit a deprecation warning.
* If your test stops because it hits the timeout (and has not found a bug) then it will emit a deprecation warning.
* There is a new value ``unlimited`` which you can import from hypothesis. ``settings(timeout=unlimited)`` will *not* cause a deprecation warning.
* There is a new health check, ``hung_test``, which will trigger after a test has been running for five minutes if it is not suppressed.

-------------------
3.15.0 - 2017-08-04
-------------------

This release deprecates two strategies, :func:`~hypothesis.strategies.choices` and :func:`~hypothesis.strategies.streaming`.

Both of these are somewhat confusing to use and are entirely redundant since the introduction of the :func:`~hypothesis.strategies.data` strategy for interactive drawing in tests, and their use should be replaced with direct use of :func:`~hypothesis.strategies.data` instead.

-------------------
3.14.2 - 2017-08-03
-------------------

This fixes a bug where Hypothesis would not work correctly on Python 2.7 if you had the :mod:`python:typing` module :pypi:`backport ` installed.

-------------------
3.14.1 - 2017-08-02
-------------------

This raises the maximum depth at which Hypothesis starts cutting off data generation to a more reasonable value which is harder to hit by accident.

This resolves (:issue:`751`), in which some examples which previously worked would start timing out, but it will also likely improve the data generation quality for complex data types.

-------------------
3.14.0 - 2017-07-23
-------------------

Hypothesis now understands inline type annotations (:issue:`293`):

- If the target of :func:`~hypothesis.strategies.builds` has type annotations, a default strategy for missing required arguments is selected based on the type. Type-based strategy selection will only override a default if you pass :const:`hypothesis.infer` as a keyword argument.
- If :func:`@given ` wraps a function with type annotations, you can pass :const:`~hypothesis.infer` as a keyword argument and the appropriate strategy will be substituted.
- You can check what strategy will be inferred for a type with the new :func:`~hypothesis.strategies.from_type` function.
- :func:`~hypothesis.strategies.register_type_strategy` teaches Hypothesis which strategy to infer for custom or unknown types. You can provide a strategy, or for more complex cases a function which takes the type and returns a strategy.

-------------------
3.13.1 - 2017-07-20
-------------------

This is a bug fix release for :issue:`514` - Hypothesis would continue running examples after a :class:`~python:unittest.SkipTest` exception was raised, including printing a falsifying example. Skip exceptions from the standard :mod:`python:unittest` module, and ``pytest``, ``nose``, or ``unittest2`` modules now abort the test immediately without printing output.

-------------------
3.13.0 - 2017-07-16
-------------------

This release has two major aspects to it: The first is the introduction of :func:`~hypothesis.strategies.deferred`, which allows more natural definition of recursive (including mutually recursive) strategies.

The second is a number of engine changes designed to support this sort of strategy better. These should have a knock-on effect of also improving the performance of any existing strategies that currently generate a lot of data or involve heavy nesting by reducing their typical example size.

-------------------
3.12.0 - 2017-07-07
-------------------

This release makes some major internal changes to how Hypothesis represents data internally, as a prelude to some major engine changes that should improve data quality. There are no API changes, but it's a significant enough internal change that a minor version bump seemed warranted. User facing impact should be fairly mild, but includes:

* All existing examples in the database will probably be invalidated. Hypothesis handles this automatically, so you don't need to do anything, but if you see all your examples disappear that's why.
* Almost all data distributions have changed significantly. Possibly for the better, possibly for the worse.
  This may result in new bugs being found, but it may also result in Hypothesis being unable to find bugs it previously did.
* Data generation may be somewhat faster if your existing bottleneck was in draw_bytes (which is often the case for large examples).
* Shrinking will probably be slower, possibly significantly.

If you notice any effects you consider to be a significant regression, please open an issue about them.

-------------------
3.11.6 - 2017-06-19
-------------------

This release involves no functionality changes, but is the first to ship wheels as well as an sdist.

-------------------
3.11.5 - 2017-06-18
-------------------

This release provides a performance improvement to shrinking. For cases where there is some non-trivial "boundary" value (e.g. the bug happens for all values greater than some other value), shrinking should now be substantially faster. Other types of bug will likely see improvements too.

This may also result in some changes to the quality of the final examples - it may sometimes be better, but is more likely to get slightly worse in some edge cases. If you see any examples where this happens in practice, please report them.

-------------------
3.11.4 - 2017-06-17
-------------------

This is a bugfix release: Hypothesis now prints explicit examples when running in verbose mode. (:issue:`313`)

-------------------
3.11.3 - 2017-06-11
-------------------

This is a bugfix release: Hypothesis no longer emits a warning if you try to use :func:`~hypothesis.strategies.sampled_from` with :class:`python:collections.OrderedDict`. (:issue:`688`)

-------------------
3.11.2 - 2017-06-10
-------------------

This is a documentation release. Several outdated snippets have been updated or removed, and many cross-references are now hyperlinks.

-------------------
3.11.1 - 2017-05-28
-------------------

This is a minor ergonomics release. Tracebacks shown by pytest no longer include Hypothesis internals for test functions decorated with :func:`@given `.
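The 3.11.5 entry above speeds up shrinking across a "boundary" value. Conceptually this is a binary search for the smallest failing value, which can be sketched as follows (an illustration of the idea, not the actual shrinker):

```python
def smallest_failing(is_bad, lo, hi):
    """Binary-search [lo, hi] for the smallest value where is_bad holds.

    Assumes is_bad(hi) is True and is_bad is monotonic: once a value
    fails, all larger values fail too.
    """
    assert is_bad(hi)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid  # mid fails, so the boundary is at mid or below
        else:
            lo = mid + 1  # mid passes, so the boundary is above mid
    return hi
```

This takes O(log(hi - lo)) predicate calls instead of the linear number a step-by-step descent would need, which is where the "substantially faster" claim comes from for boundary-style bugs.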
-------------------
3.11.0 - 2017-05-23
-------------------

This is a feature release, adding datetime-related strategies to the core strategies.

:func:`~hypothesis.extra.pytz.timezones` allows you to sample pytz timezones from the Olsen database. Use directly in a recipe for tz-aware datetimes, or compose with :func:`~hypothesis.strategies.none` to allow a mix of aware and naive output.

The new :func:`~hypothesis.strategies.dates`, :func:`~hypothesis.strategies.times`, :func:`~hypothesis.strategies.datetimes`, and :func:`~hypothesis.strategies.timedeltas` strategies are all constrained by objects of their type. This means that you can generate dates bounded by a single day (i.e. a single date), or datetimes constrained to the microsecond.

:func:`~hypothesis.strategies.times` and :func:`~hypothesis.strategies.datetimes` take an optional ``timezones=`` argument, which defaults to :func:`~hypothesis.strategies.none` for naive times. You can use our extra strategy based on pytz, or roll your own timezones strategy with dateutil or even the standard library.

The old ``dates``, ``times``, and ``datetimes`` strategies in ``hypothesis.extra.datetimes`` are deprecated in favor of the new core strategies, which are more flexible and have no dependencies.

-------------------
3.10.0 - 2017-05-22
-------------------

Hypothesis now uses :func:`python:inspect.getfullargspec` internally. On Python 2, there are no visible changes.

On Python 3 :func:`@given ` and :func:`@composite ` now preserve :pep:`3107` annotations on the decorated function. Keyword-only arguments are now either handled correctly (e.g. :func:`@composite `), or caught in validation instead of silently discarded or raising an unrelated error later (e.g. :func:`@given `).

------------------
3.9.1 - 2017-05-22
------------------

This is a bugfix release: the default field mapping for a DateTimeField in the Django extra now respects the ``USE_TZ`` setting when choosing a strategy.
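The switch to :func:`python:inspect.getfullargspec` in the 3.10.0 entry above is what lets Hypothesis see keyword-only arguments at all. A minimal illustration of the relevant stdlib behaviour (not Hypothesis code; the function here is invented):

```python
import inspect


def example_test(a, b, *, strategy=None, label='x'):
    """A function with two keyword-only arguments."""
    return a, b, strategy, label


# getfullargspec (Python 3 only) exposes keyword-only names, which the
# older getargspec could not represent at all.
spec = inspect.getfullargspec(example_test)
```

``spec.args`` is ``['a', 'b']`` and ``spec.kwonlyargs`` is ``['strategy', 'label']``, so a decorator can validate or fill those arguments instead of silently discarding them.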
------------------
3.9.0 - 2017-05-19
------------------

This is a feature release, expanding the capabilities of the :func:`~hypothesis.strategies.decimals` strategy.

* The new (optional) ``places`` argument allows you to generate decimals with a certain number of places (e.g. cents, thousandths, satoshis).
* If allow_infinity is None, setting min_value no longer excludes positive infinity and setting max_value no longer excludes negative infinity.
* All of ``NaN``, ``-Nan``, ``sNaN``, and ``-sNaN`` may now be drawn if allow_nan is True, or if allow_nan is None and min_value or max_value is None.
* min_value and max_value may be given as decimal strings, e.g. ``"1.234"``.

------------------
3.8.5 - 2017-05-16
------------------

Hypothesis now imports :mod:`python:sqlite3` when a SQLite database is used, rather than at module load, improving compatibility with Python implementations compiled without SQLite support (such as BSD or Jython).

------------------
3.8.4 - 2017-05-16
------------------

This is a compatibility bugfix release. ``sampled_from`` no longer raises a deprecation warning when sampling from an ``Enum``, as all enums have a reliable iteration order.

------------------
3.8.3 - 2017-05-09
------------------

This release removes a version check for older versions of pytest when using the Hypothesis pytest plugin. The pytest plugin will now run unconditionally on all versions of pytest. This breaks compatibility with any version of pytest prior to 2.7.0 (which is more than two years old).

The primary reason for this change is that the version check was a frequent source of breakage when pytest changed their versioning scheme. If you are not working on pytest itself and are not running a very old version of it, this release probably doesn't affect you.
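The ``places`` argument in the 3.9.0 entry above effectively produces a scaled integer. The construction can be sketched with the stdlib :mod:`python:decimal` module (an illustration of the idea, not the strategy's implementation):

```python
from decimal import Decimal


def decimal_with_places(unscaled, places):
    """Build a Decimal with exactly `places` digits after the point
    from an integer, e.g. a number of cents into a dollar amount."""
    return Decimal(unscaled).scaleb(-places)
```

Because the value is an exactly scaled integer, it always has the requested number of places: ``decimal_with_places(12345, 2)`` is ``Decimal('123.45')``, and trailing zeros are preserved.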
------------------
3.8.2 - 2017-04-26
------------------

This is a code reorganisation release that moves some internal test helpers out of the main source tree so as to not have changes to them trigger releases in future.

------------------
3.8.1 - 2017-04-26
------------------

This is a documentation release. Almost all code examples are now doctests checked in CI, eliminating stale examples.

------------------
3.8.0 - 2017-04-23
------------------

This is a feature release, adding the :func:`~hypothesis.strategies.iterables` strategy, equivalent to ``lists(...).map(iter)`` but with a much more useful repr. You can use this strategy to check that code doesn't accidentally depend on sequence properties such as indexing support or repeated iteration.

------------------
3.7.4 - 2017-04-22
------------------

This is a bug fix release for a single bug:

* In 3.7.3, using :func:`@example ` and a pytest fixture in the same test could cause the test to fail to fill the arguments, and throw a TypeError.

------------------
3.7.3 - 2017-04-21
------------------

This release should include no user visible changes and is purely a refactoring release. This modularises the behaviour of the core :func:`~hypothesis.given` function, breaking it up into smaller and more accessible parts, but its actual behaviour should remain unchanged.

------------------
3.7.2 - 2017-04-21
------------------

This reverts an undocumented change in 3.7.1 which broke installation on debian stable: The specifier for the hypothesis[django] extra\_requires had introduced a wild card, which was not supported on the default version of pip.

------------------
3.7.1 - 2017-04-21
------------------

This is a bug fix and internal improvements release.

* In particular Hypothesis now tracks a tree of where it has already explored.
  This allows it to avoid some classes of duplicate examples, and significantly improves the performance of shrinking failing examples by allowing it to skip some shrinks that it can determine can't possibly work.
* Hypothesis will no longer seed the global random arbitrarily unless you have asked it to using :func:`~hypothesis.strategies.random_module`.
* Shrinking would previously have not worked correctly in some special cases on Python 2, and would have resulted in suboptimal examples.

------------------
3.7.0 - 2017-03-20
------------------

This is a feature release.

New features:

* Rule based stateful testing now has an :func:`@invariant ` decorator that specifies methods that are run after init and after every step, allowing you to encode properties that should be true at all times. Thanks to Tom Prince for this feature.
* The :func:`~hypothesis.strategies.decimals` strategy now supports ``allow_nan`` and ``allow_infinity`` flags.
* There are :ref:`significantly more strategies available for numpy `, including for generating arbitrary data types. Thanks to Zac Hatfield Dodds for this feature.
* When using the :func:`~hypothesis.strategies.data` strategy you can now add a label as an argument to ``draw()``, which will be printed along with the value when an example fails. Thanks to Peter Inglesby for this feature.

Bug fixes:

* Bug fix: :func:`~hypothesis.strategies.composite` now preserves functions' docstrings.
* The build is now reproducible and doesn't depend on the path you build it from. Thanks to Chris Lamb for this feature.
* numpy strategies for the void data type did not work correctly. Thanks to Zac Hatfield Dodds for this fix.

There have also been a number of performance optimizations:

* The :func:`~hypothesis.strategies.permutations` strategy is now significantly faster to use for large lists (the underlying algorithm has gone from O(n^2) to O(n)).
* Shrinking of failing test cases should have got significantly faster in some circumstances where it was previously struggling for a long time.
* Example generation now involves less indirection, which results in a small speedup in some cases (small enough that you won't really notice it except in pathological cases).

------------------
3.6.1 - 2016-12-20
------------------

This release fixes a dependency problem and makes some small behind the scenes improvements.

* The fake-factory dependency was renamed to faker. If you were depending on it through hypothesis[django] or hypothesis[fake-factory] without pinning it yourself then it would have failed to install properly. This release changes it so that hypothesis[fakefactory] (which can now also be installed as hypothesis[faker]) will install the renamed faker package instead.
* This release also removed the dependency of hypothesis[django] on hypothesis[fakefactory] - it was only being used for emails. These now use a custom strategy that isn't from fakefactory. As a result you should also see performance improvements of tests which generated User objects or other things with email fields, as well as better shrinking of email addresses.
* The distribution of code using nested calls to :func:`~hypothesis.strategies.one_of` or the ``|`` operator for combining strategies has been improved, as branches are now flattened to give a more uniform distribution.
* Examples using :func:`~hypothesis.strategies.composite` or ``.flatmap`` should now shrink better. In particular this will affect things which work by first generating a length and then generating that many items, which have historically not shrunk very well.

------------------
3.6.0 - 2016-10-31
------------------

This release reverts Hypothesis to its old pretty printing of lambda functions based on attempting to extract the source code rather than decompile the bytecode.
This is unfortunately slightly inferior in some cases and may result in you occasionally seeing things like ``lambda x: `` in statistics reports and strategy reprs.

This removes the dependencies on uncompyle6, xdis and spark-parser.

The reason for this is that the new functionality was based on uncompyle6, which turns out to introduce a hidden GPLed dependency - it in turn depended on xdis, and although the library was licensed under the MIT license, it contained some GPL licensed source code and thus should have been released under the GPL.

My interpretation is that Hypothesis itself was never in violation of the GPL (because the license it is under, the Mozilla Public License v2, is fully compatible with being included in a GPL licensed work), but I have not consulted a lawyer on the subject. Regardless of the answer to this question, adding a GPLed dependency will likely cause a lot of users of Hypothesis to inadvertently be in violation of the GPL.

As a result, if you are running Hypothesis 3.5.x you really should upgrade to this release immediately.

------------------
3.5.3 - 2016-10-05
------------------

This is a bug fix release.

Bugs fixed:

* If the same test was running concurrently in two processes and there were examples already in the test database which no longer failed, Hypothesis would sometimes fail with a FileNotFoundError (IOError on Python 2) because an example it was trying to read was deleted before it was read. (:issue:`372`).
* Drawing from an :func:`~hypothesis.strategies.integers` strategy with both a min_value and a max_value would reject too many examples needlessly. Now it repeatedly redraws until satisfied. (:pull:`366`. Thanks to Calen Pennington for the contribution).

------------------
3.5.2 - 2016-09-24
------------------

This is a bug fix release.

* The Hypothesis pytest plugin broke pytest support for doctests. Now it doesn't.

------------------
3.5.1 - 2016-09-23
------------------

This is a bug fix release.
* Hypothesis now runs cleanly in -B and -BB modes, avoiding mixing bytes and unicode.
* :class:`python:unittest.TestCase` tests would not have shown up in the new statistics mode. Now they do.
* Similarly, stateful tests would not have shown up in statistics and now they do.
* Statistics now print with pytest node IDs (the names you'd get in pytest verbose mode).

------------------
3.5.0 - 2016-09-22
------------------

This is a feature release.

* :func:`~hypothesis.strategies.fractions` and :func:`~hypothesis.strategies.decimals` strategies now support min_value and max_value parameters. Thanks go to Anne Mulhern for the development of this feature.
* The Hypothesis pytest plugin now supports a --hypothesis-show-statistics parameter that gives detailed statistics about the tests that were run. Huge thanks to Jean-Louis Fuchs and Adfinis-SyGroup for funding the development of this feature.
* There is a new :func:`~hypothesis.event` function that can be used to add custom statistics.

Additionally there have been some minor bug fixes:

* In some cases Hypothesis should produce fewer duplicate examples (this will mostly only affect cases with a single parameter).
* py.test command line parameters are now under an option group for Hypothesis (thanks to David Keijser for fixing this).
* Hypothesis would previously error if you used :pep:`3107` function annotations on your tests under Python 3.4.
* The repr of many strategies using lambdas has been improved to include the lambda body (this was previously supported in many but not all cases).

------------------
3.4.2 - 2016-07-13
------------------

This is a bug fix release, fixing a number of problems with the settings system:

* Test functions defined using :func:`@given ` can now be called from other threads (:issue:`337`)
* Attempting to delete a settings property would previously have silently done the wrong thing. Now it raises an AttributeError.
* Creating a settings object with a custom database_file parameter was silently getting ignored and the default was being used instead. Now it's not.

------------------
3.4.1 - 2016-07-07
------------------

This is a bug fix release for a single bug:

* On Windows when running two Hypothesis processes in parallel (e.g. using pytest-xdist) they could race with each other and one would raise an exception due to the non-atomic nature of file renaming on Windows and the fact that you can't rename over an existing file. This is now fixed.

------------------
3.4.0 - 2016-05-27
------------------

This release is entirely provided by `Lucas Wiman `_:

Strategies constructed by :func:`~hypothesis.extra.django.models` will now respect much more of Django's validations out of the box. Wherever possible full_clean() should succeed.

In particular:

* The max_length, blank and choices kwargs are now respected.
* Add support for DecimalField.
* If a field includes validators, the list of validators is used to filter the field strategy.

------------------
3.3.0 - 2016-05-27
------------------

This release went wrong and is functionally equivalent to 3.2.0. Ignore it.

------------------
3.2.0 - 2016-05-19
------------------

This is a small single-feature release:

* All tests using :func:`@given ` now fix the global random seed. This removes the health check for that. If a non-zero seed is required for the final falsifying example, it will be reported. Otherwise Hypothesis will assume randomization was not a significant factor for the test and be silent on the subject. If you use :func:`~hypothesis.strategies.random_module` this will continue to work and will always display the seed.

------------------
3.1.3 - 2016-05-01
------------------

Single bug fix release:

* Another charmap problem.
In 3.1.2 :func:`~hypothesis.strategies.text` and :func:`~hypothesis.strategies.characters` would break on systems which had ``/tmp`` mounted on a different partition than the Hypothesis storage directory (usually in home). This fixes that.

------------------
3.1.2 - 2016-04-30
------------------

Single bug fix release:

* Anything which used a :func:`~hypothesis.strategies.text` or :func:`~hypothesis.strategies.characters` strategy was broken on Windows and I hadn't updated appveyor to use the new repository location so I didn't notice. This is now fixed and Windows support should work correctly.

------------------
3.1.1 - 2016-04-29
------------------

Minor bug fix release.

* Fix concurrency issue when running tests that use :func:`~hypothesis.strategies.text` from multiple processes at once (:issue:`302`, thanks to Alex Chan).
* Improve performance of code using :func:`~hypothesis.strategies.lists` with max_size (thanks to Cristi Cobzarenco).
* Fix install on Python 2 with ancient versions of pip so that it installs the enum34 backport (thanks to Donald Stufft for telling me how to do this).
* Remove duplicated __all__ exports from hypothesis.strategies (thanks to Piët Delport).
* Update headers to point to new repository location.
* Allow use of strategies that can't be used in :func:`~hypothesis.find` (e.g. :func:`~hypothesis.strategies.choices`) in stateful testing.

------------------
3.1.0 - 2016-03-06
------------------

* Add a :func:`~hypothesis.strategies.nothing` strategy that never successfully generates values.
* :func:`~hypothesis.strategies.sampled_from` and :func:`~hypothesis.strategies.one_of` can both now be called with an empty argument list, in which case they also never generate any values.
* :func:`~hypothesis.strategies.one_of` may now be called with a single argument that is a collection of strategies as well as with varargs.
* Add a :func:`~hypothesis.strategies.runner` strategy which returns the instance of the current test object if there is one.
* 'Bundle' for RuleBasedStateMachine is now a normal(ish) strategy and can be used as such.
* Tests using RuleBasedStateMachine should now shrink significantly better.
* Hypothesis now uses a pretty-printing library internally, compatible with IPython's pretty printing protocol (actually using the same code). This may improve the quality of output in some cases.
* Add a 'phases' setting that allows more fine-grained control over which parts of the process Hypothesis runs.
* Add a suppress_health_check setting which allows you to turn off specific health checks in a fine grained manner.
* Fix a bug where lists of non fixed size would always draw one more element than they included. This mostly didn't matter, but it would cause problems with empty strategies or ones with side effects.
* Add a mechanism to the Django model generator to allow you to explicitly request the default value (thanks to Jeremy Thurgood for this one).

------------------
3.0.5 - 2016-02-25
------------------

* Fix a bug where Hypothesis would error on py.test development versions.

------------------
3.0.4 - 2016-02-24
------------------

* Fix a bug where Hypothesis would error when running on Python 2.7.3 or earlier because it was trying to pass a :class:`python:bytearray` object to :func:`python:struct.unpack` (which is only supported since 2.7.4).

------------------
3.0.3 - 2016-02-23
------------------

* Fix version parsing of py.test to work with py.test release candidates.
* More general handling of the health check problem where things could fail because of a cache miss - now one "free" example is generated before the start of the health check run.

------------------
3.0.2 - 2016-02-18
------------------

* Under certain circumstances, strategies involving :func:`~hypothesis.strategies.text` buried inside some other strategy (e.g.
``text().filter(...)`` or ``recursive(text(), ...)`` would cause a test to fail its health checks the first time it ran. This was caused by having to compute some related data and cache it to disk. On Travis or anywhere else where the ``.hypothesis`` directory was recreated this would have caused the tests to fail their health check on every run. This is now fixed for all the known cases, although there could be others lurking.

------------------
3.0.1 - 2016-02-18
------------------

* Fix a case where it was possible to trigger an "Unreachable" assertion when running certain flaky stateful tests.
* Improve shrinking of large stateful tests by eliminating a case where it was hard to delete early steps.
* Improve efficiency of drawing :func:`binary(min_size=n, max_size=n) ` significantly by providing a custom implementation for fixed size blocks that can bypass a lot of machinery.
* Set default home directory based on the current working directory at the point Hypothesis is imported, not whenever the function first happens to be called.

------------------
3.0.0 - 2016-02-17
------------------

Codename: This really should have been 2.1.

Externally this looks like a very small release. It has one small breaking change that probably doesn't affect anyone at all (some behaviour that never really worked correctly is now outright forbidden) but necessitated a major version bump and one visible new feature.

Internally this is a complete rewrite. Almost nothing other than the public API is the same.

New features:

* Addition of :func:`~hypothesis.strategies.data` strategy which allows you to draw arbitrary data interactively within the test.
* New "exploded" database format which allows you to more easily check the example database into a source repository while supporting merging.
* Better management of how examples are saved in the database.
* Health checks will now raise as errors when they fail. It was too easy to have the warnings be swallowed entirely.
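The new data strategy is the headline feature of this release; a minimal sketch of interactive drawing (the test name and property are illustrative, not from the release notes) might look like:

```python
from hypothesis import given, strategies as st

@given(st.data())
def test_can_draw_dependent_values(data):
    # Draw a non-empty list first, then draw an element that depends on it -
    # something ordinary @given arguments cannot express.
    xs = data.draw(st.lists(st.integers(), min_size=1))
    x = data.draw(st.sampled_from(xs))
    assert x in xs
```

Because each `draw` happens inside the test body, later draws can depend on the values of earlier ones.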
New limitations:

* :func:`~hypothesis.strategies.choices` and :func:`~hypothesis.strategies.streaming` strategies may no longer be used with :func:`~hypothesis.find`. Neither may :func:`~hypothesis.strategies.data` (this is the change that necessitated a major version bump).

Feature removal:

* The ForkingTestCase executor has gone away. It may return in some more working form at a later date.

Performance improvements:

* A new model which allows flatmap, composite strategies and stateful testing to perform *much* better. They should also be more reliable.
* Filtering may in some circumstances have improved significantly. This will help especially in cases where you have lots of values with individual filters on them, such as lists(x.filter(...)).
* Modest performance improvements to the general test runner by avoiding expensive operations.

In general your tests should have got faster. If they've instead got significantly slower, I'm interested in hearing about it.

Data distribution:

The data distribution should have changed significantly. This may uncover bugs the previous version missed. It may also miss bugs the previous version could have uncovered. Hypothesis is now producing less strongly correlated data than it used to, but the correlations are extended over more of the structure.

Shrinking:

Shrinking quality should have improved. In particular Hypothesis can now perform simultaneous shrinking of separate examples within a single test (previously it was only able to do this for elements of a single collection). In some cases performance will have improved, in some cases it will have got worse but generally shouldn't have by much.

------------------
2.0.0 - 2016-01-10
------------------

Codename: A new beginning

This release cleans up all of the legacy that accrued in the course of Hypothesis 1.0. These are mostly things that were emitting deprecation warnings in 1.19.0, but there were a few additional changes.
In particular:

* non-strategy values will no longer be converted to strategies when used in given or find.
* FailedHealthCheck is now an error and not a warning.
* Handling of non-ascii reprs in user types has been simplified by using raw strings in more places in Python 2.
* given no longer allows mixing positional and keyword arguments.
* given no longer works with functions with defaults.
* given no longer turns provided arguments into defaults - they will not appear in the argspec at all.
* the basic() strategy no longer exists.
* the n_ary_tree strategy no longer exists.
* the average_list_length setting no longer exists. Note: If you're using recursive() this will cause you a significant slowdown. You should pass explicit average_size parameters to collections in recursive calls.
* @rule can no longer be applied to the same method twice.
* Python 2.6 and 3.3 are no longer officially supported, although in practice they still work fine.

This also includes two non-deprecation changes:

* given's keyword arguments no longer have to be the rightmost arguments and can appear anywhere in the method signature.
* The max_shrinks setting would sometimes not have been respected.

-------------------
1.19.0 - 2016-01-09
-------------------

Codename: IT COMES

This release heralds the beginning of a new and terrible age of Hypothesis 2.0. Its primary purpose is some final deprecations prior to said release. The goal is that if your code emits no warnings under this release then it will probably run unchanged under Hypothesis 2.0 (there are some caveats to this: 2.0 will drop support for some Python versions, and if you're using internal APIs then as usual that may break without warning).

It does have two new features:

* New @seed() decorator which allows you to manually seed a test. This may be harmlessly combined with and overrides the derandomize setting.
* settings objects may now be used as a decorator to fix those settings to a particular @given test.
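Combined, the two new features can be sketched as follows (the property itself is just an illustration, and the max_examples value is an arbitrary choice):

```python
from hypothesis import given, seed, settings, strategies as st

@seed(12345)                # manually fix the seed; overrides derandomize
@settings(max_examples=25)  # a settings object used directly as a decorator
@given(st.integers())
def test_negation_roundtrips(n):
    # Trivial property, just to demonstrate the decorator stack.
    assert -(-n) == n
```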
API changes (old usage still works but is deprecated):

* Settings has been renamed to settings (lower casing) in order to make the decorator usage more natural.
* Functions for the storage directory that were in hypothesis.settings are now in a new hypothesis.configuration module.

Additional deprecations:

* the average_list_length setting has been deprecated in favour of being explicit.
* the basic() strategy has been deprecated as it is impossible to support it under a Conjecture based model, which will hopefully be implemented at some point in the 2.x series.
* the n_ary_tree strategy (which was never actually part of the public API) has been deprecated.
* Passing settings or random as keyword arguments to given is deprecated (use the new functionality instead).

Bug fixes:

* No longer emit PendingDeprecationWarning for __iter__ and StopIteration in streaming() values.
* When running in health check mode with non strict, don't print quite so many errors for an exception in reify.
* When an assumption made in a test or a filter is flaky, tests will now raise Flaky instead of UnsatisfiedAssumption.

-----------------------------------------------------------------------
`1.18.1 `_ - 2015-12-22
-----------------------------------------------------------------------

Two behind the scenes changes:

* Hypothesis will no longer write generated code to the file system. This will improve performance on some systems (e.g. if you're using `PythonAnywhere `_ which is running your code from NFS) and prevent some annoying interactions with auto-restarting systems.
* Hypothesis will cache the creation of some strategies. This can significantly improve performance for code that uses flatmap or composite and thus has to instantiate strategies a lot.
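Code of the kind that benefits from this caching typically builds a derived strategy per draw via composite; a hypothetical sketch (names are mine, not from the release notes):

```python
from hypothesis import given, strategies as st

@st.composite
def list_with_index(draw):
    # Each invocation instantiates fresh strategies - exactly the pattern
    # that the new strategy-creation cache speeds up.
    xs = draw(st.lists(st.integers(), min_size=1))
    i = draw(st.integers(min_value=0, max_value=len(xs) - 1))
    return xs, i

@given(list_with_index())
def test_index_is_valid(pair):
    xs, i = pair
    assert 0 <= i < len(xs)
```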
-----------------------------------------------------------------------
`1.18.0 `_ - 2015-12-21
-----------------------------------------------------------------------

Features:

* Tests and find are now explicitly seeded off the global random module. This means that if you nest one inside the other you will now get a health check error. It also means that you can control global randomization by seeding random.
* There is a new random_module() strategy which seeds the global random module for you and handles things so that you don't get a health check warning if you use it inside your tests.
* floats() now accepts two new arguments: allow\_nan and allow\_infinity. These default to the old behaviour, but when set to False will do what the names suggest.

Bug fixes:

* Fix a bug where tests that used text() on Python 3.4+ would not actually be deterministic even when explicitly seeded or using the derandomize mode, because generation depended on dictionary iteration order which was affected by hash randomization.
* Fix a bug where with complicated strategies the timing of the initial health check could affect the seeding of the subsequent test, which would also render supposedly deterministic tests non-deterministic in some scenarios.
* In some circumstances flatmap() could get confused by two structurally similar things it could generate and would produce a flaky test where the first time it produced an error but the second time it produced the other value, which was not an error. The same bug was presumably also possible in composite().
* flatmap() and composite() initial generation should now be moderately faster. This will be particularly noticeable when you have many values drawn from the same strategy in a single run, e.g. constructs like lists(s.flatmap(f)). Shrinking performance *may* have suffered, but this didn't actually produce an interestingly worse result in any of the standard scenarios tested.
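The two new strategy features above can be used roughly like this (test names and bodies are illustrative sketches, not from the release notes):

```python
import math
import random

from hypothesis import given, strategies as st

@given(st.floats(allow_nan=False, allow_infinity=False))
def test_only_finite_floats(x):
    # With both flags set to False, every drawn value is an ordinary float.
    assert not math.isnan(x) and not math.isinf(x)

@given(st.random_module(), st.integers(min_value=1, max_value=100))
def test_global_random_is_seeded(seeder, n):
    # random_module() seeds the global random module for the duration of
    # the example, so using random here does not trip the health check.
    assert 1 <= random.randint(1, n) <= n
```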
-----------------------------------------------------------------------
`1.17.1 `_ - 2015-12-16
-----------------------------------------------------------------------

A small bug fix release, which fixes the fact that the 'note' function could not be used on tests which used the @example decorator to provide explicit examples.

-----------------------------------------------------------------------
`1.17.0 `_ - 2015-12-15
-----------------------------------------------------------------------

This is actually the same release as 1.16.1, but 1.16.1 has been pulled because it contains the following additional change that was not intended to be in a patch release (it's perfectly stable, but is a larger change that should have required a minor version bump):

* Hypothesis will now perform a series of "health checks" as part of running your tests. These detect and warn about some common error conditions that people often run into which wouldn't necessarily have caused the test to fail but would cause e.g. degraded performance or confusing results.

-----------------------------------------------------------------------
`1.16.1 `_ - 2015-12-14
-----------------------------------------------------------------------

Note: This release has been removed.

A small bugfix release that allows bdists for Hypothesis to be built under 2.7 - the compat3.py file which had Python 3 syntax wasn't intended to be loaded under Python 2, but when building a bdist it was. In particular this would break running setup.py test.

-----------------------------------------------------------------------
`1.16.0 `_ - 2015-12-08
-----------------------------------------------------------------------

There are no public API changes in this release but it includes a behaviour change that I wasn't comfortable putting in a patch release.

* Functions from hypothesis.strategies will no longer raise InvalidArgument on bad arguments.
Instead the same errors will be raised when a test using such a strategy is run. This may improve startup time in some cases, but the main reason for it is so that errors in strategies won't cause errors in loading, and it can interact correctly with things like pytest.mark.skipif.

* Errors caused by accidentally invoking the legacy API are now much less confusing, although they still throw NotImplementedError.
* hypothesis.extra.django is 1.9 compatible.
* When tests are run with max_shrinks=0 this will now still rerun the test on failure and will no longer print "Trying example:" before each run. Additionally note() will now work correctly when used with max_shrinks=0.

-----------------------------------------------------------------------
`1.15.0 `_ - 2015-11-24
-----------------------------------------------------------------------

A release with two new features.

* A 'characters' strategy for more flexible generation of text with particular character ranges and types, kindly contributed by `Alexander Shorin `_.
* Add support for preconditions to the rule based stateful testing. Kindly contributed by `Christopher Armstrong `_.

-----------------------------------------------------------------------
`1.14.0 `_ - 2015-11-01
-----------------------------------------------------------------------

New features:

* Add 'note' function which lets you include additional information in the final test run's output.
* Add 'choices' strategy which gives you a choice function that emulates random.choice.
* Add 'uuid' strategy that generates UUIDs.
* Add 'shared' strategy that lets you create a strategy that just generates a single shared value for each test run.

Bugs:

* Using strategies of the form streaming(x.flatmap(f)) with find or in stateful testing would have caused InvalidArgument errors when the resulting values were used (because code that expected to only be called within a test context would be invoked).
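A few of the new 1.14.0 features can be sketched together (test names and properties are illustrative assumptions, not from the release notes):

```python
from hypothesis import given, note, strategies as st

@given(st.uuids())
def test_uuid_string_form(u):
    # note() attaches extra information to the final example's output.
    note("drew uuid %s" % u)
    assert len(str(u)) == 36  # canonical 8-4-4-4-12 form

@given(st.shared(st.integers(), key="k"), st.shared(st.integers(), key="k"))
def test_shared_draws_agree(a, b):
    # Both arguments come from the same shared value within one example.
    assert a == b
```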
-----------------------------------------------------------------------
`1.13.0 `_ - 2015-10-29
-----------------------------------------------------------------------

This is quite a small release, but deprecates some public API functions and removes some internal API functionality so gets a minor version bump.

* All calls to the 'strategy' function are now deprecated, even ones which pass just a SearchStrategy instance (which is still a no-op).
* The never documented hypothesis.extra entry_points mechanism has now been removed (it was previously how hypothesis.extra packages were loaded, and has been deprecated and unused for some time).
* Some corner cases that could previously have produced an OverflowError when simplifying failing cases using hypothesis.extra.datetimes (or dates or times) have now been fixed.
* Hypothesis load time for first import has been significantly reduced - it used to be around 250ms (on my SSD laptop) and now is around 100-150ms. This almost never matters but was slightly annoying when using it in the console.
* hypothesis.strategies.randoms was previously missing from \_\_all\_\_.

-----------------------------------------------------------------------
`1.12.0 `_ - 2015-10-18
-----------------------------------------------------------------------

* Significantly improved performance of creating strategies using the functions from the hypothesis.strategies module by deferring the calculation of their repr until it was needed. This is unlikely to have been a performance issue for you unless you were using flatmap, composite or stateful testing, but for some cases it could be quite a significant impact.
* A number of cases where the repr of strategies built from lambdas is improved.
* Add dates() and times() strategies to hypothesis.extra.datetimes.
* Add new 'profiles' mechanism to the settings system.
* Deprecates mutability of Settings, both the Settings.default top level property and individual settings.
* A Settings object may now be directly initialized from a parent Settings.
* @given should now give a better error message if you attempt to use it with a function that uses destructuring arguments (it still won't work, but it will error more clearly).
* A number of spelling corrections in error messages.
* py.test should no longer display the intermediate modules Hypothesis generates when running in verbose mode.
* Hypothesis should now correctly handle printing objects with non-ascii reprs on Python 3 when running in a locale that cannot handle ascii printing to stdout.
* Add a unique=True argument to lists(). This is equivalent to unique_by=lambda x: x, but offers a more convenient syntax.

-----------------------------------------------------------------------
`1.11.4 `_ - 2015-09-27
-----------------------------------------------------------------------

* Hide modifications Hypothesis needs to make to sys.path by undoing them after we've imported the relevant modules. This is a workaround for issues cryptography experienced on Windows.
* Slightly improved performance of drawing from sampled_from on large lists of alternatives.
* Significantly improved performance of drawing from one_of or strategies using \| (note this includes a lot of strategies internally - floats() and integers() both fall into this category). There turned out to be a massive performance regression introduced in 1.10.0 affecting these which probably would have made tests using Hypothesis significantly slower than they should have been.

-----------------------------------------------------------------------
`1.11.3 `_ - 2015-09-23
-----------------------------------------------------------------------

* Better argument validation for datetimes() strategy - previously setting max_year < datetime.MIN_YEAR or min_year > datetime.MAX_YEAR would not have raised an InvalidArgument error and instead would have behaved confusingly.
* Compatibility with being run on pytest < 2.7 (achieved by disabling the plugin).

-----------------------------------------------------------------------
`1.11.2 `_ - 2015-09-23
-----------------------------------------------------------------------

Bug fixes:

* Settings(database=my_db) would not be correctly inherited when used as a default setting, so that newly created settings would use the database_file setting and create an SQLite example database.
* Settings.default.database = my_db would previously have raised an error and now works.
* Timeout could sometimes be significantly exceeded if during simplification there were a lot of examples tried that didn't trigger the bug.
* When loading a heavily simplified example using a basic() strategy from the database this could cause Python to trigger a recursion error.
* Remove use of deprecated API in pytest plugin so as to not emit warning.

Misc:

* hypothesis-pytest is now part of hypothesis core. This should have no externally visible consequences, but you should update your dependencies to remove hypothesis-pytest and depend on only Hypothesis.
* Better repr for hypothesis.extra.datetimes() strategies.
* Add .close() method to abstract base class for Backend (it was already present in the main implementation).

-----------------------------------------------------------------------
`1.11.1 `_ - 2015-09-16
-----------------------------------------------------------------------

Bug fixes:

* When running Hypothesis tests in parallel (e.g. using pytest-xdist) there was a race condition caused by code generation.
* Example databases are now cached per thread so as to not use sqlite connections from multiple threads. This should make Hypothesis now entirely thread safe.
* floats() with only min_value or max_value set would have had a very bad distribution.
* Running on 3.5, Hypothesis would have emitted deprecation warnings because of use of inspect.getargspec.

-----------------------------------------------------------------------
`1.11.0 `_ - 2015-08-31
-----------------------------------------------------------------------

* text() with a non-string alphabet would have used the repr() of the alphabet instead of its contents. This is obviously silly. It now works with any sequence of things convertible to unicode strings.
* @given will now work on methods whose definitions contain no explicit positional arguments, only varargs (`bug #118 `_). This may have some knock-on effects because it means that @given no longer changes the argspec of functions other than by adding defaults.
* Introduction of new @composite feature for more natural definition of strategies you'd previously have used flatmap for.

-----------------------------------------------------------------------
`1.10.6 `_ - 2015-08-26
-----------------------------------------------------------------------

Fix support for fixtures on Django 1.7.

-------------------
1.10.4 - 2015-08-21
-------------------

Tiny bug fix release:

* If the database_file setting is set to None, this would have resulted in an error when running tests. Now it does the same as setting database to None.

-----------------------------------------------------------------------
`1.10.3 `_ - 2015-08-19
-----------------------------------------------------------------------

Another small bug fix release.

* lists(elements, unique_by=some_function, min_size=n) would have raised a ValidationError if n > Settings.default.average_list_length because it would have wanted to use an average list length shorter than the minimum size of the list, which is impossible. Now it instead defaults to twice the minimum size in these circumstances.
* basic() strategy would have only ever produced at most ten distinct values per run of the test (which is bad if you e.g. have it inside a list).
This was obviously silly. It will now produce a much better distribution of data, both duplicated and non-duplicated.

-----------------------------------------------------------------------
`1.10.2 `_ - 2015-08-19
-----------------------------------------------------------------------

This is a small bug fix release:

* star imports from hypothesis should now work correctly.
* example quality for examples using flatmap will be better, as the way it had previously been implemented was causing problems where Hypothesis was erroneously labelling some examples as being duplicates.

-----------------------------------------------------------------------
`1.10.0 `_ - 2015-08-04
-----------------------------------------------------------------------

This is just a bugfix and performance release, but it changes some semi-public APIs, hence the minor version bump.

* Significant performance improvements for strategies which are one\_of() many branches. In particular this included recursive() strategies. This should take the case where you use one recursive() strategy as the base strategy of another from unusably slow (tens of seconds per generated example) to reasonably fast.
* Better handling of just() and sampled_from() for values which have an incorrect \_\_repr\_\_ implementation that returns non-ASCII unicode on Python 2.
* Better performance for flatmap from changing the internal morpher API to be significantly less general purpose.
* Introduce a new semi-public BuildContext/cleanup API. This allows strategies to register cleanup activities that should run once the example is complete. Note that this will interact somewhat weirdly with find.
* Better simplification behaviour for streaming strategies.
* Don't error on lambdas which use destructuring arguments in Python 2.
* Add some better reprs for a few strategies that were missing good ones.
* The Random instances provided by randoms() are now copyable.
* Slightly more debugging information about simplify when using a debug verbosity level.
* Support using given for functions with varargs, but not passing arguments to it as positional.

---------------------------------------------------------------------
`1.9.0 `_ - 2015-07-27
---------------------------------------------------------------------

Codename: The great bundling.

This release contains two fairly major changes.

The first is the deprecation of the hypothesis-extra mechanism. From now on all the packages that were previously bundled under it, other than hypothesis-pytest (which is a different beast and will remain separate), are shipped as part of Hypothesis itself. The functionality remains unchanged and you can still import them from exactly the same location; they just are no longer separate packages.

The second is that this introduces a new way of building strategies which lets you build up strategies recursively from other strategies.

It also contains the minor change that calling .example() on a strategy object will give you examples that are more representative of the actual data you'll get. There used to be some logic in there to make the examples artificially simple but this proved to be a bad idea.

---------------------------------------------------------------------
`1.8.5 `_ - 2015-07-24
---------------------------------------------------------------------

This contains no functionality changes but fixes a mistake made with building the previous package that would have broken installation on Windows.

---------------------------------------------------------------------
`1.8.4 `_ - 2015-07-20
---------------------------------------------------------------------

Bugs fixed:

* When a call to floats() had endpoints which were not floats but merely convertible to one (e.g. integers), these would be included in the generated data which would cause it to generate non-floats.
* Splitting lambdas used in the definition of flatmap, map or filter
  over multiple lines would break the repr, which would in turn break
  their usage.

---------------------------------------------------------------------
`1.8.3 `_ - 2015-07-20
---------------------------------------------------------------------

"Falsifying example" would not have been printed when the failure came
from an explicit example.

---------------------------------------------------------------------
`1.8.2 `_ - 2015-07-18
---------------------------------------------------------------------

Another small bugfix release:

* When using ForkingTestCase you would usually not get the falsifying
  example printed if the process exited abnormally (e.g. due to
  os._exit).
* Improvements to the distribution of characters when using text() with
  a default alphabet. In particular produces a better distribution of
  ascii and whitespace in the alphabet.

------------------
1.8.1 - 2015-07-17
------------------

This is a small release that contains a workaround for people who have
bad reprs returning non-ascii text on Python 2.7. This is not a bug fix
for Hypothesis per se because that's not a thing that is actually
supposed to work, but Hypothesis leans more heavily on repr than is
typical so it's worth having a workaround for.

---------------------------------------------------------------------
`1.8.0 `_ - 2015-07-16
---------------------------------------------------------------------

New features:

* Much more sensible reprs for strategies, especially ones that come
  from hypothesis.strategies. These should now have as reprs python code
  that would produce the same strategy.
* lists() accepts a unique_by argument which forces the generated lists
  to only contain elements unique according to some function key (which
  must return a hashable value).
* Better error messages from flaky tests to help you debug things.
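The unique_by argument above can be sketched with current Hypothesis. Note this uses the modern import paths rather than the 1.8.0-era ones, so treat it as a hedged sketch rather than a faithful period example:

```python
# Sketch of lists(..., unique_by=...): elements are deduplicated by the
# key function, which must return a hashable value. Here we force
# uniqueness by absolute value.
from hypothesis import given
import hypothesis.strategies as st

unique_lists = st.lists(st.integers(), unique_by=abs)

@given(unique_lists)
def test_keys_are_unique(xs):
    # No two elements share an absolute value.
    keys = [abs(x) for x in xs]
    assert len(keys) == len(set(keys))

test_keys_are_unique()
```

Because the key function is `abs`, a generated list may contain `3` or `-3`, but never both.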
Mostly invisible implementation details that may result in finding new
bugs in your code:

* Sets and dictionary generation should now produce a better range of
  results.
* floats with bounds now focus more on 'critical values', trying to
  produce values at edge cases.
* flatmap should now have better simplification for complicated cases,
  as well as generally being (I hope) more reliable.

Bug fixes:

* You could not previously use assume() if you were using the forking
  executor.

---------------------------------------------------------------------
`1.7.2 `_ - 2015-07-10
---------------------------------------------------------------------

This is purely a bug fix release:

* When using floats() with stale data in the database you could
  sometimes get values in your tests that did not respect min_value or
  max_value.
* When getting a Flaky error from an unreliable test it would have
  incorrectly displayed the example that caused it.
* 2.6 dependency on backports was incorrectly specified. This would only
  have caused you problems if you were building a universal wheel from
  Hypothesis, which is not how Hypothesis ships, so unless you're
  explicitly building wheels for your dependencies and support Python
  2.6 plus a later version of Python this probably would never have
  affected you.
* If you use flatmap in a way that the strategy on the right hand side
  depends sensitively on the left hand side you may have occasionally
  seen Flaky errors caused by producing unreliable examples when
  minimizing a bug. This use case may still be somewhat fraught to be
  honest. This code is due a major rearchitecture for 1.8, but in the
  meantime this release fixes the only source of this error that I'm
  aware of.

---------------------------------------------------------------------
`1.7.1 `_ - 2015-06-29
---------------------------------------------------------------------

Codename: There is no 1.7.0.

A slight technical hitch with a premature upload means there was a
yanked 1.7.0 release. Oops.
The major feature of this release is Python 2.6 support. Thanks to Jeff
Meadows for doing most of the work there.

Other minor features:

* strategies now has a permutations() function which returns a strategy
  yielding permutations of values from a given collection.
* if you have a flaky test it will print the exception that it last saw
  before failing with Flaky, even if you do not have verbose reporting
  on.
* Slightly experimental git merge script available as "python -m
  hypothesis.tools.mergedbs". Instructions on how to use it in the
  docstring of that file.

Bug fixes:

* Better performance from use of filter. In particular tests which
  involve large numbers of heavily filtered strategies should perform a
  lot better.
* floats() with a negative min_value would not have worked correctly
  (worryingly, it would have just silently failed to run any examples).
  This is now fixed.
* tests using sampled\_from would error if the number of sampled
  elements was smaller than min\_satisfying\_examples.

------------------
1.6.2 - 2015-06-08
------------------

This is just a few small bug fixes:

* Size bounds were not validated for values for a binary() strategy when
  reading examples from the database.
* sampled\_from is now in __all__ in hypothesis.strategies
* floats no longer consider negative integers to be simpler than
  positive non-integers
* Small floating point intervals now correctly count members, so if you
  have a floating point interval so narrow there are only a handful of
  values in it, this will no longer cause an error when Hypothesis runs
  out of values.

------------------
1.6.1 - 2015-05-21
------------------

This is a small patch release that fixes a bug where 1.6.0 broke the use
of flatmap with the deprecated API and assumed the passed in function
returned a SearchStrategy instance rather than converting it to a
strategy.
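Since flatmap recurs throughout these entries, here is a minimal sketch of what it does, using modern Hypothesis imports (not the 1.6.x-era API): the left-hand strategy draws a value which then parametrizes the right-hand strategy.

```python
# flatmap sketch: draw a length first, then draw a list of exactly that
# length. The right-hand strategy depends on the left-hand draw.
from hypothesis import given
import hypothesis.strategies as st

matched_lists = st.integers(min_value=0, max_value=5).flatmap(
    lambda n: st.lists(st.booleans(), min_size=n, max_size=n)
)

@given(matched_lists)
def test_length_is_bounded(xs):
    # Every generated list respects the bound chosen on the left.
    assert 0 <= len(xs) <= 5

test_length_is_bounded()
```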
---------------------------------------------------------------------
`1.6.0 `_ - 2015-05-21
---------------------------------------------------------------------

This is a smallish release designed to fix a number of bugs and smooth
out some weird behaviours.

* Fix a critical bug in flatmap where it would reuse old strategies. If
  all your flatmap code was pure you're fine. If it's not, I'm surprised
  it's working at all. In particular if you want to use flatmap with
  django models, you desperately need to upgrade to this version.
* flatmap simplification performance should now be better in some cases
  where it previously had to redo work.
* Fix for a bug where invalid unicode data with surrogates could be
  generated during simplification (it was already filtered out during
  actual generation).
* The Hypothesis database is now keyed off the name of the test instead
  of the type of data. This makes much more sense now with the new
  strategies API and is generally more robust. This means you will lose
  old examples on upgrade.
* The database will now not delete values which fail to deserialize
  correctly, just skip them. This is to handle cases where multiple
  incompatible strategies share the same key.
* find now also saves and loads values from the database, keyed off a
  hash of the function you're finding from.
* Stateful tests now serialize and load values from the database. They
  should have before, really. This was a bug.
* Passing a different verbosity level into a test would not have worked
  entirely correctly, leaving off some messages. This is now fixed.
* Fix a bug where derandomized tests with unicode characters in the
  function body would error on Python 2.7.

---------------------------------------------------------------------
`1.5.0 `_ - 2015-05-14
---------------------------------------------------------------------

Codename: Strategic withdrawal.

The purpose of this release is a radical simplification of the API for
building strategies.
Instead of the old approach of @strategy.extend and things that get
converted to strategies, you just build strategies directly.

The old method of defining strategies will still work until Hypothesis
2.0, because it's a major breaking change, but will now emit deprecation
warnings.

The new API is also a lot more powerful as the functions for defining
strategies give you a lot of dials to turn. See
:doc:`the updated data section ` for details.

Other changes:

* Mixing keyword and positional arguments in a call to @given is
  deprecated as well.
* There is a new setting called 'strict'. When set to True, Hypothesis
  will raise warnings instead of merely printing them. Turning it on by
  default is inadvisable because it means that Hypothesis minor releases
  can break your code, but it may be useful for making sure you catch
  all uses of deprecated APIs.
* max_examples in settings is now interpreted as meaning the maximum
  number of unique (ish) examples satisfying assumptions. A new setting
  max_iterations which defaults to a larger value has the old
  interpretation.
* Example generation should be significantly faster due to a new faster
  parameter selection algorithm. This will mostly show up for simple
  data types - for complex ones the parameter selection is almost
  certainly dominated.
* Simplification has some new heuristics that will tend to cut down on
  cases where it could previously take a very long time.
* timeout would previously not have been respected in cases where there
  were a lot of duplicate examples. You probably wouldn't have
  previously noticed this because max_examples counted duplicates, so
  this was very hard to hit in a way that mattered.
* A number of internal simplifications to the SearchStrategy API.
* You can now access the current Hypothesis version as
  hypothesis.__version__.
* A top level function is provided for running the stateful tests
  without the TestCase infrastructure.
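The max_examples setting discussed above is still central to Hypothesis today. A hedged sketch in its modern decorator form (the 1.5.0-era spelling differed):

```python
# Bound the number of generated examples for one test via settings.
from hypothesis import given, settings
import hypothesis.strategies as st

calls = []

@settings(max_examples=20)
@given(st.integers())
def test_runs_a_bounded_number_of_examples(n):
    calls.append(n)

test_runs_a_bounded_number_of_examples()
```

After running, `calls` holds every integer the test was invoked with; with no failures there is no shrinking, so the call count stays small.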
---------------------------------------------------------------------
`1.4.0 `_ - 2015-05-04
---------------------------------------------------------------------

Codename: What a state.

The *big* feature of this release is the new and slightly experimental
stateful testing API. You can read more about that in
:doc:`the appropriate section `.

Two minor features that were driven out in the course of developing
this:

* You can now set settings.max_shrinks to limit the number of times
  Hypothesis will try to shrink arguments to your test. If this is set
  to <= 0 then Hypothesis will not rerun your test and will just raise
  the failure directly. Note that due to technical limitations if
  max_shrinks is <= 0 then Hypothesis will print *every* example it
  calls your test with rather than just the failing one. Note also that
  I don't consider setting max_shrinks to zero a sensible way to run
  your tests and it should really be considered a debug feature.
* There is a new debug level of verbosity which is even *more* verbose
  than verbose. You probably don't want this.

Breakage of semi-public SearchStrategy API:

* It is now a required invariant of SearchStrategy that if u simplifies
  to v then it is not the case that strictly_simpler(u, v). i.e.
  simplifying should not *increase* the complexity even though it is not
  required to decrease it. Enforcing this invariant led to finding some
  bugs where simplifying of integers, floats and sets was suboptimal.
* Integers in basic data are now required to fit into 64 bits. As a
  result python integer types are now serialized as strings, and some
  types have stopped using quite so needlessly large random seeds.

Hypothesis Stateful testing was then turned upon Hypothesis itself,
which led to an amazing number of minor bugs being found in Hypothesis
itself.
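The stateful testing API introduced here still exists, in evolved form. A hedged sketch against current Hypothesis, using the rule-based flavour and the TestCase-free entry point mentioned in the 1.5.0 notes (the 1.4.0-era class names differed):

```python
# A trivial state machine: push integers onto a list and check an
# invariant after every step. Hypothesis generates random programs of
# rule invocations and shrinks any failing program.
from hypothesis import settings
from hypothesis.stateful import RuleBasedStateMachine, rule, run_state_machine_as_test
import hypothesis.strategies as st

class ListModel(RuleBasedStateMachine):
    def __init__(self):
        super().__init__()
        self.items = []

    @rule(x=st.integers())
    def push(self, x):
        self.items.append(x)
        # Invariant checked after every step of every generated program.
        assert self.items[-1] == x

# Run without any TestCase infrastructure; max_examples kept low here
# purely to keep the sketch fast.
run_state_machine_as_test(ListModel, settings=settings(max_examples=10))
```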
Bugs fixed (most but not all from the result of stateful testing)
include:

* Serialization of streaming examples was flaky in a way that you would
  probably never notice: If you generate a template, simplify it,
  serialize it, deserialize it, serialize it again and then deserialize
  it you would get the original stream instead of the simplified one.
* If you reduced max_examples below the number of examples already saved
  in the database, you would have got a ValueError. Additionally, if you
  had more than max_examples in the database all of them would have been
  considered.
* @given will no longer count duplicate examples (which it never called
  your function with) towards max_examples. This may result in your
  tests running slower, but that's probably just because they're trying
  more examples.
* General improvements to example search which should result in better
  performance and higher quality examples. In particular parameters
  which have a history of producing useless results will be more
  aggressively culled. This is useful both because it decreases the
  chance of useless examples and also because it's much faster to not
  check parameters which we were unlikely to ever pick!
* integers_from and lists of types with only one value (e.g. [None])
  would previously have had a very high duplication rate so you were
  probably only getting a handful of examples. They now have a much
  lower duplication rate, as well as the improvements to search making
  this less of a problem in the first place.
* You would sometimes see simplification taking significantly longer
  than your defined timeout. This would happen because timeout was only
  being checked after each *successful* simplification, so if Hypothesis
  was spending a lot of time unsuccessfully simplifying things it
  wouldn't stop in time. The timeout is now applied for unsuccessful
  simplifications too.
* In Python 2.7, integers_from strategies would have failed during
  simplification with an OverflowError if their starting point was at or
  near to the maximum size of a 64-bit integer.
* flatmap and map would have failed if called with a function without a
  __name__ attribute.
* If max_examples was less than min_satisfying_examples this would
  always error. Now min_satisfying_examples is capped to max_examples.
  Note that if you have assumptions to satisfy here this will still
  cause an error.

Some minor quality improvements:

* Lists of streams, flatmapped strategies and basic strategies should
  now have slightly better simplification.

---------------------------------------------------------------------
`1.3.0 `_ - 2015-04-22
---------------------------------------------------------------------

New features:

* New verbosity level API for printing intermediate results and
  exceptions.
* New specifier for strings generated from a specified alphabet.
* Better error messages for tests that are failing because of a lack of
  enough examples.

Bug fixes:

* Fix error where use of ForkingTestCase would sometimes result in too
  many open files.
* Fix error where saving a failing example that used flatmap could
  error.
* Implement simplification for sampled_from, which apparently never
  supported it previously. Oops.

General improvements:

* Better range of examples when using one_of or sampled_from.
* Fix some pathological performance issues when simplifying lists of
  complex values.
* Fix some pathological performance issues when simplifying examples
  that require unicode strings with high codepoints.
* Random will now simplify to more readable examples.

---------------------------------------------------------------------
`1.2.1 `_ - 2015-04-16
---------------------------------------------------------------------

A small patch release for a bug in the new executors feature. Tests
which require doing something to their result in order to fail would
have instead reported as flaky.
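The one_of and sampled_from strategies, whose example ranges the 1.3.0 notes above discuss, can be sketched with the modern API:

```python
# one_of draws from one of several branch strategies; sampled_from
# draws from a fixed collection of values.
from hypothesis import given
import hypothesis.strategies as st

mixed = st.one_of(st.sampled_from(["red", "green"]), st.integers())

@given(mixed)
def test_values_come_from_one_branch(value):
    # Each draw is either a sampled colour or an integer.
    assert value in ("red", "green") or isinstance(value, int)

test_values_come_from_one_branch()
```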
---------------------------------------------------------------------
`1.2.0 `_ - 2015-04-15
---------------------------------------------------------------------

Codename: Finders keepers.

A bunch of new features and improvements.

* Provide a mechanism for customizing how your tests are executed.
* Provide a test runner that forks before running each example. This
  allows better support for testing native code which might trigger a
  segfault or a C level assertion failure.
* Support for using Hypothesis to find examples directly rather than
  just as a test runner.
* New streaming type which lets you generate infinite lazily loaded
  streams of data - perfect for if you need a number of examples but
  don't know how many.
* Better support for large integer ranges. You can now use
  integers_in_range with ranges of basically any size. Previously large
  ranges would have eaten up all your memory and taken forever.
* Integers produce a wider range of data than before - previously they
  would only rarely produce integers which didn't fit into a machine
  word. Now it's much more common. This percolates to other numeric
  types which build on integers.
* Better validation of arguments to @given. Some situations that would
  previously have caused silently wrong behaviour will now raise an
  error.
* Include +/- sys.float_info.max in the set of floating point edge cases
  that Hypothesis specifically tries.
* Fix some bugs in floating point ranges which happen when given +/-
  sys.float_info.max as one of the endpoints... (really any two floats
  that are sufficiently far apart so that x, y are finite but y - x is
  infinite). This would have resulted in generating infinite values
  instead of ones inside the range.
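The "find examples directly" feature above survives today as hypothesis.find(), which searches for an example satisfying a predicate and shrinks it. A hedged sketch against the current API:

```python
# find() returns a shrunk example satisfying the predicate.
from hypothesis import find
import hypothesis.strategies as st

smallest = find(st.lists(st.integers()), lambda xs: len(xs) >= 3)
# Because find() shrinks its result, this is typically [0, 0, 0].
```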
---------------------------------------------------------------------
`1.1.1 `_ - 2015-04-07
---------------------------------------------------------------------

Codename: Nothing to see here

This is just a patch release put out because it fixed some internal bugs
that would block the Django integration release but did not actually
affect anything anyone could previously have been using. It also
contained a minor quality fix for floats that I'd happened to have
finished in time.

* Fix some internal bugs with object lifecycle management that were
  impossible to hit with the previously released versions but broke
  hypothesis-django.
* Bias floating point numbers somewhat less aggressively towards very
  small numbers

---------------------------------------------------------------------
`1.1.0 `_ - 2015-04-06
---------------------------------------------------------------------

Codename: No-one mention the M word.

* Unicode strings are more strongly biased towards ascii characters.
  Previously they would generate all over the space. This is mostly so
  that people who try to shape their unicode strings with assume() have
  less of a bad time.
* A number of fixes to data deserialization code that could
  theoretically have caused mysterious bugs when using an old version of
  a Hypothesis example database with a newer version. To the best of my
  knowledge a change that could have triggered this bug has never
  actually been seen in the wild. Certainly no-one ever reported a bug
  of this nature.
* Out of the box support for Decimal and Fraction.
* new dictionary specifier for dictionaries with variable keys.
* Significantly faster and higher quality simplification, especially for
  collections of data.
* New filter() and flatmap() methods on Strategy for better ways of
  building strategies out of other strategies.
* New BasicStrategy class which allows you to define your own strategies
  from scratch without needing an existing matching strategy or being
  exposed to the full horror or non-public nature of the SearchStrategy
  interface.

---------------------------------------------------------------------
`1.0.0 `_ - 2015-03-27
---------------------------------------------------------------------

Codename: Blast-off!

There are no code changes in this release. This is precisely the 0.9.2
release with some updated documentation.

------------------
0.9.2 - 2015-03-26
------------------

Codename: T-1 days.

* floats_in_range would not actually have produced floats_in_range
  unless that range happened to be (0, 1). Fix this.

------------------
0.9.1 - 2015-03-25
------------------

Codename: T-2 days.

* Fix a bug where if you defined a strategy using map on a lambda then
  the results would not be saved in the database.
* Significant performance improvements when simplifying examples using
  lists, strings or bounded integer ranges.

------------------
0.9.0 - 2015-03-23
------------------

Codename: The final countdown

This release could also be called 1.0-RC1.

It contains a teeny tiny bugfix, but the real point of this release is
to declare feature freeze. There will be zero functionality changes
between 0.9.0 and 1.0 unless something goes really really wrong. No new
features will be added, no breaking API changes will occur, etc. This is
the final shakedown before I declare Hypothesis stable and ready to use
and throw a party to celebrate.

Bug bounty for any bugs found between now and 1.0: I will buy you a
drink (alcoholic, caffeinated, or otherwise) and shake your hand should
we ever find ourselves in the same city at the same time.

The one tiny bugfix:

* Under pypy, databases would fail to close correctly when garbage
  collected, leading to a memory leak and a confusing error message if
  you were repeatedly creating databases and not closing them.
  It is very unlikely you were doing this and the chances of you ever
  having noticed this bug are very low.

------------------
0.7.2 - 2015-03-22
------------------

Codename: Hygienic macros or bust

* You can now name an argument to @given 'f' and it won't break (issue
  #38)
* strategy_test_suite is now named strategy_test_suite as the
  documentation claims and not in fact strategy_test_suitee
* Settings objects can now be used as a context manager to temporarily
  override the default values inside their context.

------------------
0.7.1 - 2015-03-21
------------------

Codename: Point releases go faster

* Better string generation by parametrizing by a limited alphabet
* Faster string simplification - previously if simplifying a string with
  high range unicode characters it would try every unicode character
  smaller than that. This was pretty pointless. Now it stops after it's
  a short range (it can still reach smaller ones through recursive calls
  because of other simplifying operations).
* Faster list simplification by first trying a binary chop down the
  middle
* Simultaneous simplification of identical elements in a list. So if a
  bug only triggers when you have duplicates but you drew e.g.
  [-17, -17], this will now simplify to [0, 0].

------------------
0.7.0 - 2015-03-20
------------------

Codename: Starting to look suspiciously real

This is probably the last minor release prior to 1.0. It consists of
stability improvements, a few usability things designed to make
Hypothesis easier to try out, and filing off some final rough edges from
the API.

* Significant speed and memory usage improvements
* Add an example() method to strategy objects to give an example of the
  sort of data that the strategy generates.
* Remove .descriptor attribute of strategies
* Rename descriptor_test_suite to strategy_test_suite
* Rename the few remaining uses of descriptor to specifier (descriptor
  already has a defined meaning in Python)

---------------------------------------------------------
0.6.0 - 2015-03-13
---------------------------------------------------------

Codename: I'm sorry, were you using that API?

This is primarily a "simplify all the weird bits of the API" release. As
a result there are a lot of breaking changes. If you just use @given
with core types then you're probably fine.

In particular:

* Stateful testing has been removed from the API
* The way the database is used has been rendered less useful (sorry).
  The feature for reassembling values saved from other tests doesn't
  currently work. This will probably be brought back in post 1.0.
* SpecificationMapper is no longer a thing. Instead there is an
  ExtMethod called strategy which you extend to specify how to convert
  other types to strategies.
* Settings are now extensible so you can add your own for configuring a
  strategy
* MappedSearchStrategy no longer needs an unpack method
* Basically all the SearchStrategy internals have changed massively. If
  you implemented SearchStrategy directly rather than using
  MappedSearchStrategy talk to me about fixing it.
* Change to the way extra packages work. You now specify the package.
  This must have a load() method. Additionally any modules in the
  package will be loaded in under hypothesis.extra

Bug fixes:

* Fix for a bug where calling falsify on a lambda with a non-ascii
  character in its body would error.

Hypothesis Extra:

hypothesis-fakefactory\: An extension for using faker data in
hypothesis. Depends on fake-factory.

------------------
0.5.0 - 2015-02-10
------------------

Codename: Read all about it.

Core hypothesis:

* Add support back in for pypy and python 3.2
* @given functions can now be invoked with some arguments explicitly
  provided.
  If all arguments that hypothesis would have provided are passed in
  then no falsification is run.
* Related to the above, this means that you can now use pytest fixtures
  and mark.parametrize with Hypothesis without either interfering with
  the other.
* Breaking change: @given no longer works for functions with varargs
  (varkwargs are fine). This might be added back in at a later date.
* Windows is now fully supported. A limited version (just the tests with
  none of the extras) of the test suite is run on windows with each
  commit so it is now a first class citizen of the Hypothesis world.
* Fix a bug for fuzzy equality of equal complex numbers with different
  reprs (this can happen when one coordinate is zero). This shouldn't
  affect users - that feature isn't used anywhere public facing.
* Fix generation of floats on windows and 32-bit builds of python. I was
  using some struct.pack logic that only worked on certain word sizes.
* When a test times out and hasn't produced enough examples this now
  raises a Timeout subclass of Unfalsifiable.
* Small search spaces are better supported. Previously something like a
  @given(bool, bool) would have failed because it couldn't find enough
  examples. Hypothesis is now aware of the fact that these are small
  search spaces and will not error in this case.
* Improvements to parameter search in the case of hard to satisfy
  assume. Hypothesis will now spend less time exploring parameters that
  are unlikely to provide anything useful.
* Increase chance of generating "nasty" floats
* Fix a bug that would have caused unicode warnings if you had a
  sampled_from that was mixing unicode and byte strings.
* Added a standard test suite that you can use to validate a custom
  strategy you've defined is working correctly.

Hypothesis extra:

First off, introducing Hypothesis extra packages! These are packages
that are separated out from core Hypothesis because they have one or
more dependencies.
Every hypothesis-extra package is pinned to a specific point release of
Hypothesis and will have some version requirements on its dependency.
They use entry_points so you will usually not need to explicitly import
them, just have them installed on the path.

This release introduces two of them:

hypothesis-datetime:

Does what it says on the tin: Generates datetimes for Hypothesis. Just
install the package and datetime support will start working.

Depends on pytz for timezone support

hypothesis-pytest:

A very rudimentary pytest plugin. All it does right now is hook the
display of falsifying examples into pytest reporting.

Depends on pytest.

------------------
0.4.3 - 2015-02-05
------------------

Codename: TIL narrow Python builds are a thing

This just fixes the one bug.

* Apparently there is such a thing as a "narrow python build" and OS X
  ships with these by default for python 2.7. These are builds where you
  only have two bytes worth of unicode. As a result, generating unicode
  was completely broken on OS X. Fix this by only generating unicode
  codepoints in the range supported by the system.

------------------
0.4.2 - 2015-02-04
------------------

Codename: O(dear)

This is purely a bugfix release:

* Provide sensible external hashing for all core types. This will
  significantly improve performance of tracking seen examples which
  happens in literally every falsification run. For Hypothesis fixing
  this cut 40% off the runtime of the test suite. The behaviour is
  quadratic in the number of examples so if you're running the default
  configuration this will be less extreme (Hypothesis's test suite runs
  at a higher number of examples than default), but you should still see
  a significant improvement.
* Fix a bug in formatting of complex numbers where the string could get
  incorrectly truncated.

------------------
0.4.1 - 2015-02-03
------------------

Codename: Cruel and unusual edge cases

This release is mostly about better test case generation.
Enhancements:

* Has a cool release name
* text_type (str in python 3, unicode in python 2) example generation
  now actually produces interesting unicode instead of boring ascii
  strings.
* floating point numbers are generated over a much wider range, with
  particular attention paid to generating nasty numbers - nan, infinity,
  large and small values, etc.
* examples can be generated using pieces of examples previously saved in
  the database. This allows interesting behaviour that has previously
  been discovered to be propagated to other examples.
* improved parameter exploration algorithm which should allow it to more
  reliably hit interesting edge cases.
* Timeout can now be disabled entirely by setting it to any value <= 0.

Bug fixes:

* The descriptor on a OneOfStrategy could be wrong if you had
  descriptors which were equal but should not be coalesced. e.g. a
  strategy for one_of((frozenset({int}), {int})) would have reported its
  descriptor as {int}. This is unlikely to have caused you any problems
* If you had strategies that could produce NaN (which float previously
  couldn't but e.g. a Just(float('nan')) could) then this would have
  sent hypothesis into an infinite loop that would have only been
  terminated when it hit the timeout.
* Given elements that can take a long time to minimize, minimization of
  floats or tuples could be quadratic or worse in that value. You should
  now see much better performance for simplification, albeit at some
  cost in quality.

Other:

* A lot of internals have been rewritten. This shouldn't affect you at
  all, but it opens the way for certain of hypothesis's oddities to be a
  lot more extensible by users. Whether this is a good thing may be up
  for debate...

------------------
0.4.0 - 2015-01-21
------------------

FLAGSHIP FEATURE: Hypothesis now persists examples for later use. It
stores data in a local SQLite database and will reuse it for all tests
of the same type.
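The persistence feature announced above is still how Hypothesis replays failing examples. The 0.4.0 backend was SQLite; current releases expose a directory-based store with a save/fetch interface, sketched here as a hedged modern equivalent:

```python
# The example database maps byte-string keys to sets of byte-string
# values (serialized shrunk examples).
import tempfile
from hypothesis.database import DirectoryBasedExampleDatabase

with tempfile.TemporaryDirectory() as path:
    db = DirectoryBasedExampleDatabase(path)
    db.save(b"my-test-key", b"shrunk-example-bytes")
    # fetch() yields every value saved under the key.
    assert b"shrunk-example-bytes" in set(db.fetch(b"my-test-key"))
```

In normal use you never call save/fetch yourself; @given does so via the database configured in settings.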
LICENSING CHANGE: Hypothesis is now released under the Mozilla Public
License 2.0. This applies to all versions from 0.4.0 onwards until
further notice. The previous license remains applicable to all code
prior to 0.4.0.

Enhancements:

* Printing of failing examples. I was finding that the pytest runner was
  not doing a good job of displaying these, and that Hypothesis itself
  could do much better.
* Drop dependency on six for cross-version compatibility. It was easy
  enough to write the shim for the small set of features that we care
  about and this lets us avoid a moderately complex dependency.
* Some improvements to statistical distribution of selecting from small
  (<= 3 elements)
* Improvements to parameter selection for finding examples.

Bugs fixed:

* could_have_produced for lists, dicts and other collections would not
  have examined the elements and thus when using a union of different
  types of list this could result in Hypothesis getting confused and
  passing a value to the wrong strategy. This could potentially result
  in exceptions being thrown from within simplification.
* sampled_from would not work correctly on a single element list.
* Hypothesis could get *very* confused by values which are equal despite
  having different types being used in descriptors. Hypothesis now has
  its own more specific version of equality it uses for descriptors and
  tracking. It is always more fine grained than Python equality: Things
  considered != are not considered equal by hypothesis, but some things
  that are considered == are distinguished. If your test suite uses both
  frozenset and set tests this bug is probably affecting you.

------------------
0.3.2 - 2015-01-16
------------------

* Fix a bug where if you specified floats_in_range with integer
  arguments Hypothesis would error in example simplification.
* Improve the statistical distribution of the floats you get for the
  floats_in_range strategy.
I'm not sure whether this will affect users in practice but it took my tests for various conditions from flaky to rock solid so it at the very least improves discovery of the artificial cases I'm looking for. * Improved repr() for strategies and RandomWithSeed instances. * Add detection for flaky test cases where hypothesis managed to find an example which breaks it but on the final invocation of the test it does not raise an error. This will typically happen with too-much-recursion errors but could conceivably happen in other circumstances too. * Provide a "derandomized" mode. This allows you to run hypothesis with zero real randomization, making your build nice and deterministic. The tests run with a seed calculated from the function they're testing so you should still get a good distribution of test cases. * Add a mechanism for more conveniently defining tests which just sample from some collection. * Fix for a really subtle bug deep in the internals of the strategy table. In some circumstances, if you were to define instance strategies for both a parent class and one or more of its subclasses you would get the strategy for the wrong superclass of an instance. It is very unlikely anyone has ever encountered this in the wild, but it is conceivably possible given that a mix of namedtuple and tuple is used fairly extensively inside hypothesis, which does exhibit this pattern of strategy. ------------------ 0.3.1 - 2015-01-13 ------------------ * Support for generation of frozenset and Random values * Correct handling of the case where a called function mutates its argument. This involved introducing a notion of strategies knowing how to copy their argument. The default method should be entirely acceptable and the worst case is that it will continue to have the old behaviour if you don't mark your strategy as mutable, so this shouldn't break anything. * Fix for a bug where some strategies did not correctly implement could_have_produced.
It is very unlikely that any of these would have been seen in the wild, and the consequences if they had been would have been minor. * Re-export the @given decorator from the main hypothesis namespace. It's still available at the old location too. * Minor performance optimisation for simplifying long lists. ------------------ 0.3.0 - 2015-01-12 ------------------ * Complete redesign of the data generation system. Extreme breaking change for anyone who was previously writing their own SearchStrategy implementations. These will not work any more and you'll need to modify them. * New settings system allowing more global and modular control of Verifier behaviour. * Decouple SearchStrategy from the StrategyTable. This leads to much more composable code which is a lot easier to understand. * A significant amount of internal API renaming and moving. This may also break your code. * Expanded available descriptors, allowing for generating integers or floats in a specific range. * Significantly more robust. A very large number of small bug fixes, none of which anyone is likely to have ever noticed. * Deprecation of support for pypy and for python 3 prior to 3.3. Supported versions are 2.7.x, 3.3.x and 3.4.x. I expect all of these to remain officially supported for a very long time. I would not be surprised to add pypy support back in later but I'm not going to do so until I know someone cares about it. In the meantime it will probably still work. ------------------ 0.2.2 - 2015-01-08 ------------------ * Fix an embarrassing complete failure of the installer caused by my being bad at version control. ------------------ 0.2.1 - 2015-01-07 ------------------ * Fix a bug in the new stateful testing feature where you could make __init__ a @requires method. Simplification would not always work if the prune method was able to successfully shrink the test. ------------------ 0.2.0 - 2015-01-07 ------------------ * It's aliiive. * Improve python 3 support using six.
* Distinguish between byte and unicode types. * Fix issues where FloatStrategy could raise. * Allow stateful testing to request constructor args. * Fix for issue where test annotations would timeout based on when the module was loaded instead of when the test started ------------------ 0.1.4 - 2013-12-14 ------------------ * Make verification runs time bounded with a configurable timeout ------------------ 0.1.3 - 2013-05-03 ------------------ * Bugfix: Stateful testing behaved incorrectly with subclassing. * Complex number support * support for recursive strategies * different error for hypotheses with unsatisfiable assumptions ------------------ 0.1.2 - 2013-03-24 ------------------ * Bugfix: Stateful testing was not minimizing correctly and could throw exceptions. * Better support for recursive strategies. * Support for named tuples. * Much faster integer generation. ------------------ 0.1.1 - 2013-03-24 ------------------ * Python 3.x support via 2to3. * Use new style classes (oops). ------------------ 0.1.0 - 2013-03-23 ------------------ * Introduce stateful testing. * Massive rewrite of internals to add flags and strategies. ------------------ 0.0.5 - 2013-03-13 ------------------ * No changes except trying to fix packaging ------------------ 0.0.4 - 2013-03-13 ------------------ * No changes except that I checked in a failing test case for 0.0.3 so had to replace the release. Doh ------------------ 0.0.3 - 2013-03-13 ------------------ * Improved a few internals. * Opened up creating generators from instances as a general API. * Test integration. ------------------ 0.0.2 - 2013-03-12 ------------------ * Starting to tighten up on the internals. * Change API to allow more flexibility in configuration. * More testing. ------------------ 0.0.1 - 2013-03-10 ------------------ * Initial release. * Basic working prototype. Demonstrates idea, probably shouldn't be used. 
hypothesis-python-3.44.1/docs/community.rst000066400000000000000000000044021321557765100210430ustar00rootroot00000000000000========= Community ========= The Hypothesis community is small for the moment but is full of excellent people who can answer your questions and help you out. Please do join us. The two major places for community discussion are: * `The mailing list `_. * An IRC channel, #hypothesis on freenode, which is more active than the mailing list. Feel free to use these to ask for help, provide feedback, or discuss anything remotely Hypothesis related at all. --------------- Code of conduct --------------- Hypothesis's community is an inclusive space, and everyone in it is expected to abide by a code of conduct. At the high level the code of conduct goes like this: 1. Be kind 2. Be respectful 3. Be helpful While it is impossible to enumerate everything that is unkind, disrespectful or unhelpful, here are some specific things that are definitely against the code of conduct: 1. -isms and -phobias (e.g. racism, sexism, transphobia and homophobia) are unkind, disrespectful *and* unhelpful. Just don't. 2. All software is broken. This is not a moral failing on the part of the authors. Don't give people a hard time for bad code. 3. It's OK not to know things. Everybody was a beginner once, nobody should be made to feel bad for it. 4. It's OK not to *want* to know something. If you think someone's question is fundamentally flawed, you should still ask permission before explaining what they should actually be asking. 5. Note that "I was just joking" is not a valid defence. What happens when this goes wrong? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For minor infractions, I'll just call people on it and ask them to apologise and not do it again. You should feel free to do this too if you're comfortable doing so. Major infractions and repeat offenders will be banned from the community. 
Also, people who have a track record of bad behaviour outside of the Hypothesis community may be banned even if they obey all these rules if their presence is making people uncomfortable. At the current volume level it's not hard for me to pay attention to the whole community, but if you think I've missed something please feel free to alert me. You can either message me as DRMacIver on freenode or send a me an email at david@drmaciver.com. hypothesis-python-3.44.1/docs/conf.py000066400000000000000000000074051321557765100175720ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER # -*- coding: utf-8 -*- from __future__ import division, print_function, absolute_import import os import sys import datetime sys.path.append( os.path.join(os.path.dirname(__file__), '..', 'src') ) autodoc_member_order = 'bysource' extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.extlinks', 'sphinx.ext.viewcode', 'sphinx.ext.intersphinx', ] templates_path = ['_templates'] source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Hypothesis' copyright = u'2013-%s, David R. MacIver' % datetime.datetime.utcnow().year author = u'David R. 
MacIver' _d = {} with open(os.path.join(os.path.dirname(__file__), '..', 'src', 'hypothesis', 'version.py')) as f: exec(f.read(), _d) version = _d['__version__'] release = _d['__version__'] language = None exclude_patterns = ['_build'] pygments_style = 'sphinx' todo_include_todos = False intersphinx_mapping = { 'python': ('https://docs.python.org/3/', None), 'numpy': ('https://docs.scipy.org/doc/numpy/', None), 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None), 'pytest': ('https://docs.pytest.org/en/stable/', None), } autodoc_mock_imports = ['numpy', 'pandas'] doctest_global_setup = ''' # Some standard imports from hypothesis import * from hypothesis.strategies import * # Run deterministically, and don't save examples import random random.seed(0) doctest_settings = settings(database=None, derandomize=True) settings.register_profile('doctests', doctest_settings) settings.load_profile('doctests') # Never show deprecated behaviour in code examples import warnings warnings.filterwarnings('error', category=DeprecationWarning) ''' # This config value must be a dictionary of external sites, mapping unique # short alias names to a base URL and a prefix. 
# See http://sphinx-doc.org/ext/extlinks.html _repo = 'https://github.com/HypothesisWorks/hypothesis-python/' extlinks = { 'commit': (_repo + 'commit/%s', 'commit '), 'gh-file': (_repo + 'blob/master/%s', ''), 'gh-link': (_repo + '%s', ''), 'issue': (_repo + 'issues/%s', 'issue #'), 'pull': (_repo + 'pulls/%s', 'pull request #'), 'pypi': ('https://pypi.python.org/pypi/%s', ''), } # -- Options for HTML output ---------------------------------------------- if os.environ.get('READTHEDOCS', None) != 'True': # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] html_static_path = ['_static'] htmlhelp_basename = 'Hypothesisdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { } latex_documents = [ (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation', u'David R. MacIver', 'manual'), ] man_pages = [ (master_doc, 'hypothesis', u'Hypothesis Documentation', [author], 1) ] texinfo_documents = [ (master_doc, 'Hypothesis', u'Hypothesis Documentation', author, 'Hypothesis', 'Advanced property-based testing for Python.', 'Miscellaneous'), ] hypothesis-python-3.44.1/docs/data.rst000066400000000000000000000272601321557765100177370ustar00rootroot00000000000000============================= What you can generate and how ============================= *Most things should be easy to generate and everything should be possible.* To support this principle Hypothesis provides strategies for most built-in types with arguments to constrain or adjust the output, as well as higher-order strategies that can be composed to generate more complex types. This document is a guide to what strategies are available for generating data and how to build them. Strategies have a variety of other important internal features, such as how they simplify, but the data they can generate is the only public part of their API. 
Functions for building strategies are all available in the hypothesis.strategies module. The salient functions from it are as follows: .. automodule:: hypothesis.strategies :members: .. _shrinking: ~~~~~~~~~ Shrinking ~~~~~~~~~ When using strategies it is worth thinking about how the data *shrinks*. Shrinking is the process by which Hypothesis tries to produce human readable examples when it finds a failure - it takes a complex example and turns it into a simpler one. Each strategy defines an order in which it shrinks - you won't usually need to care about this much, but it can be worth being aware of as it can affect what the best way to write your own strategies is. The exact shrinking behaviour is not a guaranteed part of the API, but it doesn't change that often and when it does it's usually because we think the new way produces nicer examples. Possibly the most important one to be aware of is :func:`~hypothesis.strategies.one_of`, which has a preference for values produced by strategies earlier in its argument list. Most of the others should largely "do the right thing" without you having to think about it. ~~~~~~~~~~~~~~~~~~~ Adapting strategies ~~~~~~~~~~~~~~~~~~~ Often it is the case that a strategy doesn't produce exactly what you want it to and you need to adapt it. Sometimes you can do this in the test, but this hurts reuse because you then have to repeat the adaption in every test. Hypothesis gives you ways to build strategies from other strategies given functions for transforming the data. ------- Mapping ------- ``map`` is probably the easiest and most useful of these to use. If you have a strategy ``s`` and a function ``f``, then an example ``s.map(f).example()`` is ``f(s.example())``, i.e. we draw an example from ``s`` and then apply ``f`` to it. e.g.: .. 
doctest:: >>> lists(integers()).map(sorted).example() [-158104205405429173199472404790070005365, -131418136966037518992825706738877085689, -49279168042092131242764306881569217089, 2564476464308589627769617001898573635] Note that many things that you might use mapping for can also be done with :func:`~hypothesis.strategies.builds`. .. _filtering: --------- Filtering --------- ``filter`` lets you reject some examples. ``s.filter(f).example()`` is some example of ``s`` such that ``f(example)`` is truthy. .. doctest:: >>> integers().filter(lambda x: x > 11).example() 87034457550488036879331335314643907276 >>> integers().filter(lambda x: x > 11).example() 145321388071838806577381808280858991039 It's important to note that ``filter`` isn't magic and if your condition is too hard to satisfy then this can fail: .. doctest:: >>> integers().filter(lambda x: False).example() Traceback (most recent call last): ... hypothesis.errors.NoExamples: Could not find any valid examples in 20 tries In general you should try to use ``filter`` only to avoid corner cases that you don't want rather than attempting to cut out a large chunk of the search space. A technique that often works well here is to use map to first transform the data and then use ``filter`` to remove things that didn't work out. So for example if you wanted pairs of integers (x,y) such that x < y you could do the following: .. doctest:: >>> tuples(integers(), integers()).map(sorted).filter(lambda x: x[0] < x[1]).example() [-145066798798423346485767563193971626126, -19139012562996970506504843426153630262] .. _flatmap: ---------------------------- Chaining strategies together ---------------------------- Finally there is ``flatmap``. ``flatmap`` draws an example, then turns that example into a strategy, then draws an example from *that* strategy. It may not be obvious why you want this at first, but it turns out to be quite useful because it lets you generate different types of data with relationships to each other. 
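The underlying idea can be sketched in plain Python, independent of Hypothesis. This is a toy illustration only (the names ``flatmap`` aside, nothing here is Hypothesis's actual implementation): draw a value, use it to build a *new* generator, then draw from that.

```python
import random

random.seed(0)  # deterministic, purely for the sake of the example

def flatmap(draw_a, make_draw_b):
    # Draw from the first "generator", build a second one from the
    # result, and draw from that.
    a = draw_a()
    return make_draw_b(a)()

# First draw a length n, then draw a list of exactly n booleans, so the
# two pieces of generated data are related to each other.
fixed_length_list = flatmap(
    lambda: random.randint(0, 5),
    lambda n: lambda: [random.choice([True, False]) for _ in range(n)],
)

assert isinstance(fixed_length_list, list)
assert len(fixed_length_list) <= 5
```

Hypothesis's ``flatmap`` does the same thing, except that the drawn values are recorded so that failing examples can be replayed and shrunk.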
For example suppose we wanted to generate a list of lists of the same length: .. code-block:: pycon >>> rectangle_lists = integers(min_value=0, max_value=10).flatmap( ... lambda n: lists(lists(integers(), min_size=n, max_size=n))) >>> find(rectangle_lists, lambda x: True) [] >>> find(rectangle_lists, lambda x: len(x) >= 10) [[], [], [], [], [], [], [], [], [], []] >>> find(rectangle_lists, lambda t: len(t) >= 3 and len(t[0]) >= 3) [[0, 0, 0], [0, 0, 0], [0, 0, 0]] >>> find(rectangle_lists, lambda t: sum(len(s) for s in t) >= 10) [[0], [0], [0], [0], [0], [0], [0], [0], [0], [0]] In this example we first choose a length for our inner lists, then we build a strategy which generates lists containing lists precisely of that length. The ``find`` calls show what simple examples for this look like. Most of the time you probably don't want ``flatmap``, but unlike ``filter`` and ``map`` which are just conveniences for things you could just do in your tests, ``flatmap`` allows genuinely new data generation that you wouldn't otherwise be able to easily do. (If you know Haskell: Yes, this is more or less a monadic bind. If you don't know Haskell, ignore everything in these parentheses. You do not need to understand anything about monads to use this, or anything else in Hypothesis). -------------- Recursive data -------------- Sometimes the data you want to generate has a recursive definition. e.g. if you wanted to generate JSON data, valid JSON is: 1. Any float, any boolean, any unicode string. 2. Any list of valid JSON data. 3. Any dictionary mapping unicode strings to valid JSON data. The problem is that you cannot call a strategy recursively and expect it to not just blow up and eat all your memory. The other problem here is that not all unicode strings display consistently on different machines, so we'll restrict them in our doctest.
The way Hypothesis handles this is with the :py:func:`recursive` function which you pass in a base case and a function that, given a strategy for your data type, returns a new strategy for it. So for example: .. doctest:: >>> from string import printable; from pprint import pprint >>> json = recursive(none() | booleans() | floats() | text(printable), ... lambda children: lists(children) | dictionaries(text(printable), children)) >>> pprint(json.example()) {'': 'wkP!4', '\nLdy': None, '"uHuds:8a{h\\:694K~{mY>a1yA:#CmDYb': {}, '#1J1': [')gnP', inf, ['6', 11881275561.716116, "v'A?qyp_sB\n$62g", ''], -1e-05, 'aF\rl', [-2.599459969184803e+250, True, True, None], [True, '9qP\x0bnUJH5', 3.0741121405774857e-131, None, '', -inf, 'L&', 1.5, False, None]], 'cx.': None} >>> pprint(json.example()) [5.321430774293539e+16, [], 1.1045114769709281e-125] >>> pprint(json.example()) {'a': []} That is, we start with our leaf data and then we augment it by allowing lists and dictionaries of anything we can generate as JSON data. The size control of this works by limiting the maximum number of values that can be drawn from the base strategy. So for example if we wanted to only generate really small JSON we could do this as: .. doctest:: >>> small_lists = recursive(booleans(), lists, max_leaves=5) >>> small_lists.example() True >>> small_lists.example() [False, False, True, True, True] >>> small_lists.example() True .. _composite-strategies: ~~~~~~~~~~~~~~~~~~~~ Composite strategies ~~~~~~~~~~~~~~~~~~~~ The :func:`@composite ` decorator lets you combine other strategies in more or less arbitrary ways. It's probably the main thing you'll want to use for complicated custom strategies. The composite decorator works by giving you a function as the first argument that you can use to draw examples from other strategies. For example, the following gives you a list and an index into it: .. doctest:: >>> @composite ... def list_and_index(draw, elements=integers()): ... 
xs = draw(lists(elements, min_size=1)) ... i = draw(integers(min_value=0, max_value=len(xs) - 1)) ... return (xs, i) ``draw(s)`` is a function that should be thought of as returning ``s.example()``, except that the result is reproducible and will minimize correctly. The decorated function has the initial argument removed from the list, but will accept all the others in the expected order. Defaults are preserved. .. doctest:: >>> list_and_index() list_and_index() >>> list_and_index().example() ([-57328788235238539257894870261848707608], 0) >>> list_and_index(booleans()) list_and_index(elements=booleans()) >>> list_and_index(booleans()).example() ([True], 0) Note that the repr will work exactly like it does for all the built-in strategies: it will be a function that you can call to get the strategy in question, with values provided only if they do not match the defaults. You can use :func:`assume ` inside composite functions: .. code-block:: python @composite def distinct_strings_with_common_characters(draw): x = draw(text(), min_size=1) y = draw(text(alphabet=x)) assume(x != y) return (x, y) This works as :func:`assume ` normally would, filtering out any examples for which the passed in argument is falsey. .. _interactive-draw: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Drawing interactively in tests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There is also the :func:`~hypothesis.strategies.data` strategy, which gives you a means of using strategies interactively. Rather than having to specify everything up front in :func:`@given ` you can draw from strategies in the body of your test: .. code-block:: python @given(data()) def test_draw_sequentially(data): x = data.draw(integers()) y = data.draw(integers(min_value=x)) assert x < y If the test fails, each draw will be printed with the falsifying example. e.g. the above is wrong (it has a boundary condition error), so will print: .. 
code-block:: pycon Falsifying example: test_draw_sequentially(data=data(...)) Draw 1: 0 Draw 2: 0 As you can see, data drawn this way is simplified as usual. Test functions using the :func:`~hypothesis.strategies.data` strategy do not support explicit :func:`@example(...) `\ s. In this case, the best option is usually to construct your data with :func:`@composite ` or the explicit example, and unpack this within the body of the test. Optionally, you can provide a label to identify values generated by each call to ``data.draw()``. These labels can be used to identify values in the output of a falsifying example. For instance: .. code-block:: python @given(data()) def test_draw_sequentially(data): x = data.draw(integers(), label='First number') y = data.draw(integers(min_value=x), label='Second number') assert x < y will produce the output: .. code-block:: pycon Falsifying example: test_draw_sequentially(data=data(...)) Draw 1 (First number): 0 Draw 2 (Second number): 0 hypothesis-python-3.44.1/docs/database.rst000066400000000000000000000062711321557765100205710ustar00rootroot00000000000000=============================== The Hypothesis Example Database =============================== When Hypothesis finds a bug it stores enough information in its database to reproduce it. This enables you to have a classic testing workflow of find a bug, fix a bug, and be confident that this is actually doing the right thing because Hypothesis will start by retrying the examples that broke things last time. 
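The directory-based storage format described below can be illustrated with a toy sketch: one directory per test, one file per saved example. This is *not* Hypothesis's actual code (the class name and hashing scheme here are made up for illustration), but it conveys the shape of the on-disk layout.

```python
import hashlib
import os
import tempfile

class DirectoryExampleDatabase(object):
    """Toy sketch: map each test key to a directory, each example to a file."""

    def __init__(self, root):
        self.root = root

    def _directory_for(self, key):
        # Hash the test key to get a stable, filesystem-safe directory name.
        path = os.path.join(self.root, hashlib.sha384(key).hexdigest()[:16])
        if not os.path.isdir(path):
            os.makedirs(path)
        return path

    def save(self, key, value):
        # Content-address each example so saving the same one twice is a no-op.
        name = hashlib.sha384(value).hexdigest()[:16]
        with open(os.path.join(self._directory_for(key), name), 'wb') as f:
            f.write(value)

    def fetch(self, key):
        directory = self._directory_for(key)
        for name in sorted(os.listdir(directory)):
            with open(os.path.join(directory, name), 'rb') as f:
                yield f.read()

db = DirectoryExampleDatabase(tempfile.mkdtemp())
db.save(b'test_a_thing', b'serialized failing example')
assert list(db.fetch(b'test_a_thing')) == [b'serialized failing example']
```

Because each entry is just an opaque file, arbitrary data can sit in the database without breaking anything - which is exactly the property discussed under "Upgrading Hypothesis and changing your tests" below.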
----------- Limitations ----------- The database is best thought of as a cache that you never need to invalidate: Information may be lost when you upgrade a Hypothesis version or change your test, so you shouldn't rely on it for correctness - if there's an example you want to ensure occurs each time then :ref:`there's a feature for including them in your source code ` - but it helps the development workflow considerably by making sure that the examples you've just found are reproduced. -------------- File locations -------------- The default storage format is as a fairly opaque directory structure. Each test corresponds to a directory, and each example to a file within that directory. The standard location for it is .hypothesis/examples in your current working directory. You can override this, either by setting the database\_file property on a settings object (you probably want to specify it on settings.default) or by setting the HYPOTHESIS\_DATABASE\_FILE environment variable. There is also a legacy sqlite3 based format. This is mostly still supported for compatibility reasons, and support will be dropped in some future version of Hypothesis. If you use a database file name ending in .db, .sqlite or .sqlite3 that format will be used instead. -------------------------------------------- Upgrading Hypothesis and changing your tests -------------------------------------------- The design of the Hypothesis database is such that you can put arbitrary data in the database and not get wrong behaviour. When you upgrade Hypothesis, old data *might* be invalidated, but this should happen transparently. It should never be the case that e.g. changing the strategy that generates an argument sometimes gives you data from the old strategy. ----------------------------- Sharing your example database ----------------------------- ..
note:: If specific examples are important for correctness you should use the :func:`@example ` decorator, as the example database may discard entries due to changes in your code or dependencies. For most users, we therefore recommend using the example database locally and possibly persisting it between CI builds, but not tracking it under version control. The examples database can be shared simply by checking the directory into version control, for example with the following ``.gitignore``:: # Ignore files cached by Hypothesis... .hypothesis/ # except for the examples directory !.hypothesis/examples/ Like everything under ``.hypothesis/``, the examples directory will be transparently created on demand. Unlike the other subdirectories, ``examples/`` is designed to handle merges, deletes, etc if you just add the directory into git, mercurial, or any similar version control system. hypothesis-python-3.44.1/docs/details.rst000066400000000000000000000412001321557765100204410ustar00rootroot00000000000000============================= Details and advanced features ============================= This is an account of slightly less common Hypothesis features that you don't need to get started but will nevertheless make your life easier. ---------------------- Additional test output ---------------------- Normally the output of a failing test will look something like: .. code:: Falsifying example: test_a_thing(x=1, y="foo") With the ``repr`` of each keyword argument being printed. Sometimes this isn't enough, either because you have values with a ``repr`` that isn't very descriptive or because you need to see the output of some intermediate steps of your test. That's where the ``note`` function comes in: .. doctest:: >>> from hypothesis import given, note, strategies as st >>> @given(st.lists(st.integers()), st.randoms()) ... def test_shuffle_is_noop(ls, r): ... ls2 = list(ls) ... r.shuffle(ls2) ... note("Shuffle: %r" % (ls2)) ... assert ls == ls2 ... >>> try: ... 
test_shuffle_is_noop() ... except AssertionError: ... print('ls != ls2') Falsifying example: test_shuffle_is_noop(ls=[0, 0, 1], r=RandomWithSeed(0)) Shuffle: [0, 1, 0] ls != ls2 The note is printed in the final run of the test in order to include any additional information you might need in your test. .. _statistics: --------------- Test Statistics --------------- If you are using py.test you can see a number of statistics about the executed tests by passing the command line argument ``--hypothesis-show-statistics``. This will include some general statistics about the test: For example if you ran the following with ``--hypothesis-show-statistics``: .. code-block:: python from hypothesis import given, strategies as st @given(st.integers()) def test_integers(i): pass You would see: .. code-block:: none test_integers: - 100 passing examples, 0 failing examples, 0 invalid examples - Typical runtimes: ~ 1ms - Fraction of time spent in data generation: ~ 12% - Stopped because settings.max_examples=100 The final "Stopped because" line is particularly important to note: It tells you the setting value that determined when the test should stop trying new examples. This can be useful for understanding the behaviour of your tests. Ideally you'd always want this to be ``max_examples``. In some cases (such as filtered and recursive strategies) you will see events mentioned which describe some aspect of the data generation: .. code-block:: python from hypothesis import given, strategies as st @given(st.integers().filter(lambda x: x % 2 == 0)) def test_even_integers(i): pass You would see something like: .. 
code-block:: none test_even_integers: - 100 passing examples, 0 failing examples, 36 invalid examples - Typical runtimes: 0-1 ms - Fraction of time spent in data generation: ~ 16% - Stopped because settings.max_examples=100 - Events: * 80.88%, Retried draw from integers().filter(lambda x: ) to satisfy filter * 26.47%, Aborted test because unable to satisfy integers().filter(lambda x: ) You can also mark custom events in a test using the ``event`` function: .. autofunction:: hypothesis.event .. code:: python from hypothesis import given, event, strategies as st @given(st.integers().filter(lambda x: x % 2 == 0)) def test_even_integers(i): event("i mod 3 = %d" % (i % 3,)) You will then see output like: .. code-block:: none test_even_integers: - 100 passing examples, 0 failing examples, 38 invalid examples - Typical runtimes: 0-1 ms - Fraction of time spent in data generation: ~ 16% - Stopped because settings.max_examples=100 - Events: * 80.43%, Retried draw from integers().filter(lambda x: ) to satisfy filter * 31.88%, i mod 3 = 0 * 27.54%, Aborted test because unable to satisfy integers().filter(lambda x: ) * 21.74%, i mod 3 = 1 * 18.84%, i mod 3 = 2 Arguments to ``event`` can be any hashable type, but two events will be considered the same if they are the same when converted to a string with ``str``. ------------------ Making assumptions ------------------ Sometimes Hypothesis doesn't give you exactly the right sort of data you want - it's mostly of the right shape, but some examples won't work and you don't want to care about them. You *can* just ignore these by aborting the test early, but this runs the risk of accidentally testing a lot less than you think you are. Also it would be nice to spend less time on bad examples - if you're running 200 examples per test (the default) and it turns out 150 of those examples don't match your needs, that's a lot of wasted time. .. autofunction:: hypothesis.assume For example suppose you had the following test: .. 
code:: python @given(floats()) def test_negation_is_self_inverse(x): assert x == -(-x) Running this gives us: .. code:: Falsifying example: test_negation_is_self_inverse(x=float('nan')) AssertionError This is annoying. We know about NaN and don't really care about it, but as soon as Hypothesis finds a NaN example it will get distracted by that and tell us about it. Also the test will fail and we want it to pass. So lets block off this particular example: .. code:: python from math import isnan @given(floats()) def test_negation_is_self_inverse_for_non_nan(x): assume(not isnan(x)) assert x == -(-x) And this passes without a problem. In order to avoid the easy trap where you assume a lot more than you intended, Hypothesis will fail a test when it can't find enough examples passing the assumption. If we'd written: .. code:: python @given(floats()) def test_negation_is_self_inverse_for_non_nan(x): assume(False) assert x == -(-x) Then on running we'd have got the exception: .. code:: Unsatisfiable: Unable to satisfy assumptions of hypothesis test_negation_is_self_inverse_for_non_nan. Only 0 examples considered satisfied assumptions ~~~~~~~~~~~~~~~~~~~ How good is assume? ~~~~~~~~~~~~~~~~~~~ Hypothesis has an adaptive exploration strategy to try to avoid things which falsify assumptions, which should generally result in it still being able to find examples in hard to find situations. Suppose we had the following: .. code:: python @given(lists(integers())) def test_sum_is_positive(xs): assert sum(xs) > 0 Unsurprisingly this fails and gives the falsifying example ``[]``. Adding ``assume(xs)`` to this removes the trivial empty example and gives us ``[0]``. Adding ``assume(all(x > 0 for x in xs))`` and it passes: the sum of a list of positive integers is positive. The reason that this should be surprising is not that it doesn't find a counter-example, but that it finds enough examples at all. 
In order to make sure something interesting is happening, suppose we wanted to try this for long lists. e.g. suppose we added an ``assume(len(xs) > 10)`` to it. This should basically never find an example: a naive strategy would find fewer than one in a thousand examples, because if each element of the list is negative with probability one-half, you'd have to have ten of these go the right way by chance. In the default configuration Hypothesis gives up long before it's tried 1000 examples (by default it tries 200). Here's what happens if we try to run this: .. code:: python @given(lists(integers())) def test_sum_is_positive(xs): assume(len(xs) > 10) assume(all(x > 0 for x in xs)) print(xs) assert sum(xs) > 0 In: test_sum_is_positive() [17, 12, 7, 13, 11, 3, 6, 9, 8, 11, 47, 27, 1, 31, 1] [6, 2, 29, 30, 25, 34, 19, 15, 50, 16, 10, 3, 16] [25, 17, 9, 19, 15, 2, 2, 4, 22, 10, 10, 27, 3, 1, 14, 17, 13, 8, 16, 9, 2... [17, 65, 78, 1, 8, 29, 2, 79, 28, 18, 39] [13, 26, 8, 3, 4, 76, 6, 14, 20, 27, 21, 32, 14, 42, 9, 24, 33, 9, 5, 15, ... [2, 1, 2, 2, 3, 10, 12, 11, 21, 11, 1, 16] As you can see, Hypothesis doesn't find *many* examples here, but it finds some - enough to keep it happy. In general if you *can* shape your strategies better to your tests you should - for example :py:func:`integers(1, 1000) ` is a lot better than ``assume(1 <= x <= 1000)``, but ``assume`` will take you a long way if you can't. --------------------- Defining strategies --------------------- The type of object that is used to explore the examples given to your test function is called a :class:`~hypothesis.SearchStrategy`. These are created using the functions exposed in the :mod:`hypothesis.strategies` module. Many of these strategies expose a variety of arguments you can use to customize generation. For example for integers you can specify ``min`` and ``max`` values of integers you want. If you want to see exactly what a strategy produces you can ask for an example: .. 
doctest:: >>> integers(min_value=0, max_value=10).example() 9 Many strategies are built out of other strategies. For example, if you want to define a tuple you need to say what goes in each element: .. doctest:: >>> from hypothesis.strategies import tuples >>> tuples(integers(), integers()).example() (-85296636193678268231691518597782489127, 68871684356256783618296489618877951982) Further details are :doc:`available in a separate document `. ------------------------------------ The gory details of given parameters ------------------------------------ .. autofunction:: hypothesis.given The :func:`@given ` decorator may be used to specify which arguments of a function should be parametrized over. You can use either positional or keyword arguments or a mixture of the two. For example all of the following are valid uses: .. code:: python @given(integers(), integers()) def a(x, y): pass @given(integers()) def b(x, y): pass @given(y=integers()) def c(x, y): pass @given(x=integers()) def d(x, y): pass @given(x=integers(), y=integers()) def e(x, **kwargs): pass @given(x=integers(), y=integers()) def f(x, *args, **kwargs): pass class SomeTest(TestCase): @given(integers()) def test_a_thing(self, x): pass The following are not: .. code:: python @given(integers(), integers(), integers()) def g(x, y): pass @given(integers()) def h(x, *args): pass @given(integers(), x=integers()) def i(x, y): pass @given() def j(x, y): pass The rules for determining what are valid uses of ``given`` are as follows: 1. You may pass any keyword argument to ``given``. 2. Positional arguments to ``given`` are equivalent to the rightmost named arguments for the test function. 3. Positional arguments may not be used if the underlying test function has varargs, arbitrary keywords, or keyword-only arguments. 4. Functions tested with ``given`` may not have any defaults. 
The reason for the "rightmost named arguments" behaviour is so that using :func:`@given ` with instance methods works: ``self`` will be passed to the function as normal and not be parametrized over. The function returned by given has all the same arguments as the original test, minus those that are filled in by ``given``. ------------------------- Custom function execution ------------------------- Hypothesis provides you with a hook that lets you control how it runs examples. This lets you do things like set up and tear down around each example, run examples in a subprocess, transform coroutine tests into normal tests, etc. The way this works is by introducing the concept of an executor. An executor is essentially a function that takes a block of code and runs it. The default executor is: .. code:: python def default_executor(function): return function() You define an executor by adding an ``execute_example`` method to a class. Any test methods on that class with :func:`@given ` used on them will use ``self.execute_example`` as an executor with which to run tests. For example, the following executor runs all its code twice: .. code:: python from unittest import TestCase class TestTryReallyHard(TestCase): @given(integers()) def test_something(self, i): perform_some_unreliable_operation(i) def execute_example(self, f): f() return f() Note: The functions you use in map, etc. will run *inside* the executor. i.e. they will not be called until you invoke the function passed to ``execute_example``. An executor must be able to handle being passed a function which returns None, otherwise it won't be able to run normal test cases. So for example the following executor is invalid: .. code:: python from unittest import TestCase class TestRunTwice(TestCase): def execute_example(self, f): return f()() and should be rewritten as: ..
code:: python from unittest import TestCase import inspect class TestRunTwice(TestCase): def execute_example(self, f): result = f() if inspect.isfunction(result): result = result() return result ------------------------------- Using Hypothesis to find values ------------------------------- You can use Hypothesis's data exploration features to find values satisfying some predicate. This is generally useful for exploring custom strategies defined with :func:`@composite `, or experimenting with conditions for filtering data. .. autofunction:: hypothesis.find .. doctest:: >>> from hypothesis import find >>> from hypothesis.strategies import sets, lists, integers >>> find(lists(integers()), lambda x: sum(x) >= 10) [10] >>> find(lists(integers()), lambda x: sum(x) >= 10 and len(x) >= 3) [0, 0, 10] >>> find(sets(integers()), lambda x: sum(x) >= 10 and len(x) >= 3) {0, 1, 9} The first argument to :func:`~hypothesis.find` describes data in the usual way for an argument to :func:`~hypothesis.given`, and supports :doc:`all the same data types `. The second is a predicate it must satisfy. Of course not all conditions are satisfiable. If you ask Hypothesis for an example of a condition that is always false it will raise an error: .. doctest:: >>> find(integers(), lambda x: False) Traceback (most recent call last): ... hypothesis.errors.NoSuchExample: No examples of condition lambda x: <unknown> (The ``lambda x: <unknown>`` is because Hypothesis can't retrieve the source code of lambdas from the interactive python console. It gives a better error message most of the time which contains the actual condition) .. _type-inference: ------------------- Inferred Strategies ------------------- In some cases, Hypothesis can work out what to do when you omit arguments. This is based on introspection, *not* magic, and therefore has well-defined limits. :func:`~hypothesis.strategies.builds` will check the signature of the ``target`` (using :func:`~python:inspect.getfullargspec`).
If there are required arguments with type annotations and no strategy was passed to :func:`~hypothesis.strategies.builds`, :func:`~hypothesis.strategies.from_type` is used to fill them in. You can also pass the special value :const:`hypothesis.infer` as a keyword argument, to force this inference for arguments with a default value. .. doctest:: >>> def func(a: int, b: str): ... return [a, b] >>> builds(func).example() [72627971792323936471739212691379790782, ''] :func:`@given ` does not perform any implicit inference for required arguments, as this would break compatibility with pytest fixtures. :const:`~hypothesis.infer` can be used as a keyword argument to explicitly fill in an argument from its type annotation. .. code:: python @given(a=infer) def test(a: int): pass # is equivalent to @given(a=integers()) def test(a): pass ~~~~~~~~~~~ Limitations ~~~~~~~~~~~ :pep:`3107` type annotations are not supported on Python 2, and Hypothesis does not inspect :pep:`484` type comments at runtime. While :func:`~hypothesis.strategies.from_type` will work as usual, inference in :func:`~hypothesis.strategies.builds` and :func:`@given ` will only work if you manually create the ``__annotations__`` attribute (e.g. by using ``@annotations(...)`` and ``@returns(...)`` decorators). The :mod:`python:typing` module is fully supported on Python 2 if you have the backport installed. The :mod:`python:typing` module is provisional and has a number of internal changes between Python 3.5.0 and 3.6.1, including at minor versions. These are all supported on a best-effort basis, but you may encounter problems with an old version of the module. Please report them to us, and consider updating to a newer version of Python as a workaround. 
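The introspection behind this inference can be sketched with stdlib tools alone. Here ``SAMPLE_VALUES`` and ``infer_required_kwargs`` are hypothetical stand-ins for :func:`~hypothesis.strategies.from_type` and the resolution logic inside :func:`~hypothesis.strategies.builds` - the real versions are far richer - but the signature-and-annotation inspection is the same kind of thing:

```python
import inspect
from typing import get_type_hints

# Hypothetical stand-in for from_type(): one sample value per annotation.
SAMPLE_VALUES = {int: 0, str: "", bool: False, float: 0.0}

def infer_required_kwargs(target):
    """Return a value for each required, annotated parameter of target."""
    hints = get_type_hints(target)
    sig = inspect.signature(target)
    return {
        name: SAMPLE_VALUES[hints[name]]
        for name, param in sig.parameters.items()
        if param.default is inspect.Parameter.empty and name in hints
    }

def func(a: int, b: str):
    return [a, b]

assert infer_required_kwargs(func) == {"a": 0, "b": ""}
```

Parameters with defaults are skipped, mirroring how only required arguments are filled in unless you explicitly ask for inference.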
============================== Ongoing Hypothesis Development ============================== Hypothesis development is managed by me, `David R. MacIver `_. I am the primary author of Hypothesis. *However*, I no longer do unpaid feature development on Hypothesis. My roles as leader of the project are: 1. Helping other people do feature development on Hypothesis 2. Fixing bugs and other code health issues 3. Improving documentation 4. General release management work 5. Planning the general roadmap of the project 6. Doing sponsored development on tasks that are too large or in depth for other people to take on So all new features must either be sponsored or implemented by someone else. That being said, the maintenance team takes an active role in shepherding pull requests and helping people write a new feature (see :gh-file:`CONTRIBUTING.rst` for details and :pull:`154` for an example of how the process goes). This isn't "patches welcome", it's "we will help you write a patch". .. _release-policy: Release Policy ============== Hypothesis releases follow `semantic versioning `_. We maintain backwards-compatibility wherever possible, and use deprecation warnings to mark features that have been superseded by a newer alternative. If you want to detect this, you can :mod:`upgrade warnings to errors in the usual ways `. We use continuous deployment to ensure that you can always use our newest and shiniest features - every change to the source tree is automatically built and published on PyPI as soon as it's merged onto master, after code review and passing our extensive test suite. Project Roadmap =============== Hypothesis does not have a long-term release plan. However some visibility into our plans for future :doc:`compatibility ` may be useful: - We value compatibility, and maintain it as far as practical.
This generally excludes things which are end-of-life upstream, or have an unstable API. - We would like to drop Python 2 support when it reaches end of life in 2020. Ongoing support is likely to depend on commercial funding. - We intend to support PyPy3 as soon as it supports a recent enough version of Python 3. See :issue:`602`. .. _hypothesis-django: =========================== Hypothesis for Django users =========================== Hypothesis offers a number of features specific to Django testing, available in the :mod:`hypothesis[django]` :doc:`extra `. This is tested against each supported series with mainstream or extended support - if you're still getting security patches, you can test with Hypothesis. Using it is quite straightforward: All you need to do is subclass :class:`hypothesis.extra.django.TestCase` or :class:`hypothesis.extra.django.TransactionTestCase` and you can use :func:`@given ` as normal, and the transactions will be per example rather than per test function as they would be if you used :func:`@given ` with a normal django test suite (this is important because your test function will be called multiple times and you don't want them to interfere with each other). Test cases on these classes that do not use :func:`@given ` will be run as normal. I strongly recommend not using :class:`~hypothesis.extra.django.TransactionTestCase` unless you really have to. Because Hypothesis runs this in a loop the performance problems it normally has are significantly exacerbated and your tests will be really slow. If you are using :class:`~hypothesis.extra.django.TransactionTestCase`, you may need to use ``@settings(suppress_health_check=[HealthCheck.too_slow])`` to avoid :doc:`errors due to slow example generation `.
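The per-example transaction behaviour can be pictured with a stdlib-only sketch: each example runs inside its own begin/rollback pair, so one example's writes never leak into the next. This is a toy illustration (``FakeDB`` and ``run_examples`` are invented names), not the real ``TestCase`` machinery:

```python
class FakeDB:
    """Minimal stand-in for a database supporting transaction rollback."""
    def __init__(self):
        self.rows = []
        self._saved = []
    def begin(self):
        self._saved = list(self.rows)
    def rollback(self):
        self.rows = self._saved

def run_examples(db, test, examples):
    """Run each example in its own transaction and roll it back afterwards,
    the way each example from @given is wrapped in its own transaction."""
    for example in examples:
        db.begin()
        try:
            test(example)
        finally:
            db.rollback()

db = FakeDB()
run_examples(db, lambda n: db.rows.append(n), [1, 2, 3])
assert db.rows == []  # no example's writes leaked out of its transaction
```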
In addition to the above, Hypothesis has some support for automatically deriving strategies for your model types, which you can then customize further. .. warning:: Hypothesis creates saved models. This will run inside your testing transaction when using the test runner, but if you use the dev console this will leave debris in your database. For example, using the trivial django project I have for testing: .. code-block:: python >>> from hypothesis.extra.django.models import models >>> from toystore.models import Customer >>> c = models(Customer).example() >>> c >>> c.email 'jaime.urbina@gmail.com' >>> c.name '\U00109d3d\U000e07be\U000165f8\U0003fabf\U000c12cd\U000f1910\U00059f12\U000519b0\U0003fabf\U000f1910\U000423fb\U000423fb\U00059f12\U000e07be\U000c12cd\U000e07be\U000519b0\U000165f8\U0003fabf\U0007bc31' >>> c.age -873375803 Hypothesis has just created this with whatever the relevant type of data is. Obviously the customer's age is implausible, so let's fix that: .. code-block:: python >>> from hypothesis.strategies import integers >>> c = models(Customer, age=integers(min_value=0, max_value=120)).example() >>> c >>> c.age 5 You can use this to override any fields you like. Sometimes this will be mandatory: If you have a non-nullable field of a type Hypothesis doesn't know how to create (e.g. a foreign key) then the models function will error unless you explicitly pass a strategy to use there. Foreign keys are not automatically derived. If they're nullable they will default to always being null, otherwise you always have to specify them. e.g. suppose we had a Shop type with a foreign key to company, we would define a strategy for it as: .. code:: python shop_strategy = models(Shop, company=models(Company)) --------------- Tips and tricks --------------- Custom field types ================== If you have a custom Django field type you can register it with Hypothesis's model deriving functionality by registering a default strategy for it: ..
code-block:: python >>> from toystore.models import CustomishField, Customish >>> models(Customish).example() hypothesis.errors.InvalidArgument: Missing arguments for mandatory field customish for model Customish >>> from hypothesis.extra.django.models import add_default_field_mapping >>> from hypothesis.strategies import just >>> add_default_field_mapping(CustomishField, just("hi")) >>> x = models(Customish).example() >>> x.customish 'hi' Note that this mapping is on exact type. Subtypes will not inherit it. Generating child models ======================= For the moment there's no explicit support in hypothesis-django for generating dependent models. i.e. a Company model will generate no Shops. However if you want to generate some dependent models as well, you can emulate this by using the *flatmap* function as follows: .. code:: python from hypothesis.strategies import lists, just def generate_with_shops(company): return lists(models(Shop, company=just(company))).map(lambda _: company) company_with_shops_strategy = models(Company).flatmap(generate_with_shops) Let's unpack what this is doing: The way flatmap works is that we draw a value from the original strategy, then apply a function to it which gives us a new strategy. We then draw a value from *that* strategy. So in this case we're first drawing a company, and then we're drawing a list of shops belonging to that company: The *just* strategy is a strategy such that drawing it always produces the individual value, so ``models(Shop, company=just(company))`` is a strategy that generates a Shop belonging to the original company. So the following code would give us a list of shops all belonging to the same company: .. code:: python models(Company).flatmap(lambda c: lists(models(Shop, company=just(c)))) The only difference between this and the above is that we want the company, not the shops. This is where the inner map comes in.
We build the list of shops and then throw it away, instead returning the company we started with. This works because the models that Hypothesis generates are saved in the database, so we're essentially running the inner strategy purely for the side effect of creating those children in the database. Using default field values ========================== Hypothesis ignores field defaults and always tries to generate values, even if it doesn't know how to. You can tell it to use the default value for a field instead of generating one by passing ``fieldname=default_value`` to ``models()``: .. code:: python >>> from toystore.models import DefaultCustomish >>> models(DefaultCustomish).example() hypothesis.errors.InvalidArgument: Missing arguments for mandatory field customish for model DefaultCustomish >>> from hypothesis.extra.django.models import default_value >>> x = models(DefaultCustomish, customish=default_value).example() >>> x.customish 'b' ============ Testimonials ============ This is a page for listing people who are using Hypothesis and how excited they are about that. If that's you and your name is not on the list, `this file is in Git `_ and I'd love it if you sent me a pull request to fix that. --------------------------------------------------------------------------------------- `Stripe `_ --------------------------------------------------------------------------------------- At Stripe we use Hypothesis to test every piece of our machine learning model training pipeline (powered by scikit). Before we migrated, our tests were filled with hand-crafted pandas Dataframes that weren't representative at all of our actual very complex data. Because we needed to craft examples for each test, we took the easy way out and lived with extremely low test coverage. Hypothesis changed all that.
Once we had our strategies for generating Dataframes of features it became trivial to slightly customize each strategy for new tests. Our coverage is now close to 90%. Full-stop, property-based testing is profoundly more powerful - and has caught or prevented far more bugs - than our old style of example-based testing. --------------------------------------------------------------------------------------- Kristian Glass - Director of Technology at `LaterPay GmbH `_ --------------------------------------------------------------------------------------- Hypothesis has been brilliant for expanding the coverage of our test cases, and also for making them much easier to read and understand, so we're sure we're testing the things we want in the way we want. ----------------------------------------------- `Seth Morton `_ ----------------------------------------------- When I first heard about Hypothesis, I knew I had to include it in my two open-source Python libraries, `natsort `_ and `fastnumbers `_ . Quite frankly, I was a little appalled at the number of bugs and "holes" I found in the code. I can now say with confidence that my libraries are more robust to "the wild." In addition, Hypothesis gave me the confidence to expand these libraries to fully support Unicode input, which I never would have had the stomach for without such thorough testing capabilities. Thanks! ------------------------------------------- `Sixty North `_ ------------------------------------------- At Sixty North we use Hypothesis for testing `Segpy `_ an open source Python library for shifting data between Python data structures and SEG Y files which contain geophysical data from the seismic reflection surveys used in oil and gas exploration. This is our first experience of property-based testing – as opposed to example-based testing. Not only are our tests more powerful, they are also much better explanations of what we expect of the production code. 
In fact, the tests are much closer to being specifications. Hypothesis has located real defects in our code which went undetected by traditional test cases, simply because Hypothesis is more relentlessly devious about test case generation than us mere humans! We found Hypothesis particularly beneficial for Segpy because SEG Y is an antiquated format that uses legacy text encodings (EBCDIC) and even a legacy floating point format we implemented from scratch in Python. Hypothesis is sure to find a place in most of our future Python codebases and many existing ones too. ------------------------------------------- `mulkieran `_ ------------------------------------------- Just found out about this excellent QuickCheck for Python implementation and ran up a few tests for my `bytesize `_ package last night. Refuted a few hypotheses in the process. Looking forward to using it with a bunch of other projects as well. ----------------------------------------------- `Adam Johnson `_ ----------------------------------------------- I have written a small library to serialize ``dict``\s to MariaDB's dynamic columns binary format, `mariadb-dyncol `_. When I first developed it, I thought I had tested it really well - there were hundreds of test cases, some of them even taken from MariaDB's test suite itself. I was ready to release. Lucky for me, I tried Hypothesis with David at the PyCon UK sprints. Wow! It found bug after bug after bug. Even after a first release, I thought of a way to make the tests do more validation, which revealed a further round of bugs! Most impressively, Hypothesis found a complicated off-by-one error in a condition with 4095 versus 4096 bytes of data - something that I would never have found. Long live Hypothesis! (Or at least, property-based testing). 
------------------------------------------- `Josh Bronson `_ ------------------------------------------- Adopting Hypothesis improved `bidict `_'s test coverage and significantly increased our ability to make changes to the code with confidence that correct behavior would be preserved. Thank you, David, for the great testing tool. -------------------------------------------- `Cory Benfield `_ -------------------------------------------- Hypothesis is the single most powerful tool in my toolbox for working with algorithmic code, or any software that produces predictable output from a wide range of sources. When using it with `Priority `_, Hypothesis consistently found errors in my assumptions and extremely subtle bugs that would have taken months of real-world use to locate. In some cases, Hypothesis found subtle deviations from the correct output of the algorithm that may never have been noticed at all. When it comes to validating the correctness of your tools, nothing comes close to the thoroughness and power of Hypothesis. ------------------------------------------ `Jon Moore `_ ------------------------------------------ One extremely satisfied user here. Hypothesis is a really solid implementation of property-based testing, adapted well to Python, and with good features such as failure-case shrinkers. I first used it on a project where we needed to verify that a vendor's Python and non-Python implementations of an algorithm matched, and it found about a dozen cases that previous example-based testing and code inspections had not. Since then I've been evangelizing for it at our firm. -------------------------------------------- `Russel Winder `_ -------------------------------------------- I am using Hypothesis as an integral part of my Python workshops. 
Testing is an integral part of Python programming and whilst unittest and, better, py.test can handle example-based testing, property-based testing is increasingly far more important than example-based testing, and Hypothesis fits the bill. --------------------------------------------- `Wellfire Interactive `_ --------------------------------------------- We've been using Hypothesis in a variety of client projects, from testing Django-related functionality to domain-specific calculations. It both speeds up and simplifies the testing process since there's so much less tedious and error-prone work to do in identifying edge cases. Test coverage is nice but test depth is even nicer, and it's much easier to get meaningful test depth using Hypothesis. -------------------------------------------------- `Cody Kochmann `_ -------------------------------------------------- Hypothesis is being used as the engine for random object generation with my open source function fuzzer `battle_tested `_ which maps all behaviors of a function allowing you to minimize the chance of unexpected crashes when running code in production. With how efficient Hypothesis is at generating the edge cases that cause unexpected behavior to occur, `battle_tested `_ is able to map out the entire behavior of most functions in less than a few seconds. Hypothesis truly is a masterpiece. I can't thank you enough for building it. --------------------------------------------------- `Merchise Autrement `_ --------------------------------------------------- Just minutes after our first use of hypothesis `we uncovered a subtle bug`__ in one of our most used libraries. Since then, we have increasingly used hypothesis to improve the quality of our testing in libraries and applications as well.
__ https://github.com/merchise/xoutil/commit/0a4a0f529812fed363efb653f3ade2d2bc203945 ------------------------------------------- `Your name goes here `_ ------------------------------------------- I know there are many more, because I keep finding out about new people I'd never even heard of using Hypothesis. If you're looking for a way to give back to a tool you love, adding your name here only takes a moment and would really help a lot. As per instructions at the top, just send me a pull request and I'll add you to the list. ================== Some more examples ================== This is a collection of examples of how to use Hypothesis in interesting ways. It's small for now but will grow over time. All of these examples are designed to be run under `py.test`_ (`nose`_ should probably work too). ---------------------------------- How not to sort by a partial order ---------------------------------- The following is an example that's been extracted and simplified from a real bug that occurred in an earlier version of Hypothesis. The real bug was a lot harder to find. Suppose we've got the following type: .. code:: python class Node(object): def __init__(self, label, value): self.label = label self.value = tuple(value) def __repr__(self): return "Node(%r, %r)" % (self.label, self.value) def sorts_before(self, other): if len(self.value) >= len(other.value): return False return other.value[:len(self.value)] == self.value Each node is a label and a sequence of some data, and we have the relationship sorts_before meaning the data of the left is an initial segment of the right. So e.g. a node with value ``[1, 2]`` will sort before a node with value ``[1, 2, 3]``, but neither of ``[1, 2]`` nor ``[1, 3]`` will sort before the other. We have a list of nodes, and we want to topologically sort them with respect to this ordering.
That is, we want to arrange the list so that if ``x.sorts_before(y)`` then x appears earlier in the list than y. We naively think that the easiest way to do this is to extend the partial order defined here to a total order by breaking ties arbitrarily and then using a normal sorting algorithm. So we define the following code: .. code:: python from functools import total_ordering @total_ordering class TopoKey(object): def __init__(self, node): self.value = node def __lt__(self, other): if self.value.sorts_before(other.value): return True if other.value.sorts_before(self.value): return False return self.value.label < other.value.label def sort_nodes(xs): xs.sort(key=TopoKey) This takes the order defined by ``sorts_before`` and extends it by breaking ties by comparing the node labels. But now we want to test that it works. First we write a function to verify that our desired outcome holds: .. code:: python def is_prefix_sorted(xs): for i in range(len(xs)): for j in range(i+1, len(xs)): if xs[j].sorts_before(xs[i]): return False return True This will return false if it ever finds a pair in the wrong order and return true otherwise. Given this function, what we want to do with Hypothesis is assert that for all sequences of nodes, the result of calling ``sort_nodes`` on it is sorted. First we need to define a strategy for Node: .. code:: python import hypothesis.strategies as s NodeStrategy = s.builds( Node, s.integers(), s.lists(s.booleans(), average_size=5, max_size=10)) We want to generate *short* lists of values so that there's a decent chance of one being a prefix of the other (this is also why we chose bool for the elements). We then define a strategy which builds a node out of an integer and one of those short lists of booleans. We can now write a test: ..
code:: python from hypothesis import given @given(s.lists(NodeStrategy)) def test_sorting_nodes_is_prefix_sorted(xs): sort_nodes(xs) assert is_prefix_sorted(xs) This immediately fails with the following example: .. code:: python [Node(0, (False, True)), Node(0, (True,)), Node(0, (False,))] The reason for this is that because ``(False, True)`` is not a prefix of ``(True,)`` nor vice versa, when sorting these the first two nodes compare equal because they have equal labels. This makes the whole order non-transitive and produces basically nonsense results. But this is pretty unsatisfying. It only works because they have the same label. Perhaps we actually wanted our labels to be unique. Let's change the test to do that. .. code:: python def deduplicate_nodes_by_label(nodes): table = {} for node in nodes: table[node.label] = node return list(table.values()) NodeSet = s.lists(NodeStrategy).map(deduplicate_nodes_by_label) We define a function to deduplicate nodes by labels, and then map that over a strategy for lists of nodes to give us a strategy for lists of nodes with unique labels. We can now rewrite the test to use that: .. code:: python @given(NodeSet) def test_sorting_nodes_is_prefix_sorted(xs): sort_nodes(xs) assert is_prefix_sorted(xs) Hypothesis quickly gives us an example of this *still* being wrong: .. code:: python [Node(0, (False,)), Node(-1, (True,)), Node(-2, (False, False))] Now this is a more interesting example. None of the nodes will sort equal. What is happening here is that the first node is strictly less than the last node because (False,) is a prefix of (False, False). This is in turn strictly less than the middle node because neither is a prefix of the other and -2 < -1. The middle node is then less than the first node because -1 < 0. So, convinced that our implementation is broken, we write a better one: ..
code:: python

    def sort_nodes(xs):
        for i in range(1, len(xs)):
            j = i - 1
            while j >= 0:
                if xs[j].sorts_before(xs[j+1]):
                    break
                xs[j], xs[j+1] = xs[j+1], xs[j]
                j -= 1

This is just insertion sort, slightly modified: we swap a node backwards until swapping it further would violate the order constraints. The reason this works is that our order is already a partial order (this wouldn't produce a valid result for general topological sorting - for that you need transitivity).

We now run our test again and it passes, telling us that this time we've successfully managed to sort some nodes without getting it completely wrong. Go us.

--------------------
Time zone arithmetic
--------------------

This is an example of some tests for pytz which check that various timezone conversions behave as you would expect them to. These tests should all pass, and are mostly a demonstration of some useful sorts of thing to test with Hypothesis, and how the hypothesis-datetime extra package works.

.. doctest::

    >>> from datetime import timedelta
    >>> from hypothesis import given
    >>> from hypothesis.strategies import datetimes
    >>> from hypothesis.extra.pytz import timezones
    >>> # The datetimes strategy is naive by default, so tell it to use timezones
    >>> aware_datetimes = datetimes(timezones=timezones())
    >>> @given(aware_datetimes, timezones(), timezones())
    ... def test_convert_via_intermediary(dt, tz1, tz2):
    ...     """Test that converting between timezones is not affected
    ...     by a detour via another timezone.
    ...     """
    ...     assert dt.astimezone(tz1).astimezone(tz2) == dt.astimezone(tz2)
    >>> @given(aware_datetimes, timezones())
    ... def test_convert_to_and_fro(dt, tz2):
    ...     """If we convert to a new timezone and back to the old one
    ...     this should leave the result unchanged.
    ...     """
    ...     tz1 = dt.tzinfo
    ...     assert dt == dt.astimezone(tz2).astimezone(tz1)
    >>> @given(aware_datetimes, timezones())
    ... def test_adding_an_hour_commutes(dt, tz):
    ...     """When converting between timezones it shouldn't matter
    ...     if we add an hour here or add an hour there.
    ...     """
    ...
an_hour = timedelta(hours=1) ... assert (dt + an_hour).astimezone(tz) == dt.astimezone(tz) + an_hour >>> @given(aware_datetimes, timezones()) ... def test_adding_a_day_commutes(dt, tz): ... """When converting between timezones it shouldn't matter ... if we add a day here or add a day there. ... """ ... a_day = timedelta(days=1) ... assert (dt + a_day).astimezone(tz) == dt.astimezone(tz) + a_day >>> # And we can check that our tests pass >>> test_convert_via_intermediary() >>> test_convert_to_and_fro() >>> test_adding_an_hour_commutes() >>> test_adding_a_day_commutes() ------------------- Condorcet's Paradox ------------------- A classic paradox in voting theory, called Condorcet's paradox, is that majority preferences are not transitive. That is, there is a population and a set of three candidates A, B and C such that the majority of the population prefer A to B, B to C and C to A. Wouldn't it be neat if we could use Hypothesis to provide an example of this? Well as you can probably guess from the presence of this section, we can! This is slightly surprising because it's not really obvious how we would generate an election given the types that Hypothesis knows about. The trick here turns out to be twofold: 1. We can generate a type that is *much larger* than an election, extract an election out of that, and rely on minimization to throw away all the extraneous detail. 2. We can use assume and rely on Hypothesis's adaptive exploration to focus on the examples that turn out to generate interesting elections Without further ado, here is the code: .. code:: python from hypothesis import given, assume from hypothesis.strategies import integers, lists from collections import Counter def candidates(votes): return {candidate for vote in votes for candidate in vote} def build_election(votes): """ Given a list of lists we extract an election out of this. We do this in two phases: 1. 
First of all we work out the full set of candidates present in all votes and throw away any votes that do not have that whole set. 2. We then take each vote and make it unique, keeping only the first instance of any candidate. This gives us a list of total orderings of some set. It will usually be a lot smaller than the starting list, but that's OK. """ all_candidates = candidates(votes) votes = list(filter(lambda v: set(v) == all_candidates, votes)) if not votes: return [] rebuilt_votes = [] for vote in votes: rv = [] for v in vote: if v not in rv: rv.append(v) assert len(rv) == len(all_candidates) rebuilt_votes.append(rv) return rebuilt_votes @given(lists(lists(integers(min_value=1, max_value=5)))) def test_elections_are_transitive(election): election = build_election(election) # Small elections are unlikely to be interesting assume(len(election) >= 3) all_candidates = candidates(election) # Elections with fewer than three candidates certainly can't exhibit # intransitivity assume(len(all_candidates) >= 3) # Now we check if the election is transitive # First calculate the pairwise counts of how many prefer each candidate # to the other counts = Counter() for vote in election: for i in range(len(vote)): for j in range(i+1, len(vote)): counts[(vote[i], vote[j])] += 1 # Now look at which pairs of candidates one has a majority over the # other and store that. graph = {} all_candidates = candidates(election) for i in all_candidates: for j in all_candidates: if counts[(i, j)] > counts[(j, i)]: graph.setdefault(i, set()).add(j) # Now for each triple assert that it is transitive. for x in all_candidates: for y in graph.get(x, ()): for z in graph.get(y, ()): assert x not in graph.get(z, ()) The example Hypothesis gives me on my first run (your mileage may of course vary) is: .. 
code:: python [[3, 1, 4], [4, 3, 1], [1, 4, 3]] Which does indeed do the job: The majority (votes 0 and 1) prefer 3 to 1, the majority (votes 0 and 2) prefer 1 to 4 and the majority (votes 1 and 2) prefer 4 to 3. This is in fact basically the canonical example of the voting paradox, modulo variations on the names of candidates. ------------------- Fuzzing an HTTP API ------------------- Hypothesis's support for testing HTTP services is somewhat nascent. There are plans for some fully featured things around this, but right now they're probably quite far down the line. But you can do a lot yourself without any explicit support! Here's a script I wrote to throw random data against the API for an entirely fictitious service called Waspfinder (this is only lightly obfuscated and you can easily figure out who I'm actually talking about, but I don't want you to run this code and hammer their API without their permission). All this does is use Hypothesis to generate random JSON data matching the format their API asks for and check for 500 errors. More advanced tests which then use the result and go on to do other things are definitely also possible. .. code:: python import unittest from hypothesis import given, assume, settings, strategies as st from collections import namedtuple import requests import os import random import time import math # These tests will be quite slow because we have to talk to an external # service. Also we'll put in a sleep between calls so as to not hammer it. # As a result we reduce the number of test cases and turn off the timeout. settings.default.max_examples = 100 settings.default.timeout = -1 Goal = namedtuple("Goal", ("slug",)) # We just pass in our API credentials via environment variables. 
waspfinder_token = os.getenv('WASPFINDER_TOKEN') waspfinder_user = os.getenv('WASPFINDER_USER') assert waspfinder_token is not None assert waspfinder_user is not None GoalData = st.fixed_dictionaries({ 'title': st.text(), 'goal_type': st.sampled_from([ "hustler", "biker", "gainer", "fatloser", "inboxer", "drinker", "custom"]), 'goaldate': st.one_of(st.none(), st.floats()), 'goalval': st.one_of(st.none(), st.floats()), 'rate': st.one_of(st.none(), st.floats()), 'initval': st.floats(), 'panic': st.floats(), 'secret': st.booleans(), 'datapublic': st.booleans(), }) needs2 = ['goaldate', 'goalval', 'rate'] class WaspfinderTest(unittest.TestCase): @given(GoalData) def test_create_goal_dry_run(self, data): # We want slug to be unique for each run so that multiple test runs # don't interfere with each other. If for some reason some slugs trigger # an error and others don't we'll get a Flaky error, but that's OK. slug = hex(random.getrandbits(32))[2:] # Use assume to guide us through validation we know about, otherwise # we'll spend a lot of time generating boring examples. # Title must not be empty assume(data["title"]) # Exactly two of these values should be not None. The other will be # inferred by the API. assume(len([1 for k in needs2 if data[k] is not None]) == 2) for v in data.values(): if isinstance(v, float): assume(not math.isnan(v)) data["slug"] = slug # The API nicely supports a dry run option, which means we don't have # to worry about the user account being spammed with lots of fake goals # Otherwise we would have to make sure we cleaned up after ourselves # in this test. data["dryrun"] = True data["auth_token"] = waspfinder_token for d, v in data.items(): if v is None: data[d] = "null" else: data[d] = str(v) result = requests.post( "https://waspfinder.example.com/api/v1/users/" "%s/goals.json" % (waspfinder_user,), data=data) # Lets not hammer the API too badly. 
        # This will of course make the tests even slower than they otherwise
        # would have been, but that's life.
        time.sleep(1.0)
        # For the moment all we're testing is that this doesn't generate an
        # internal error. If we didn't use the dry run option we could have
        # then tried doing more with the result, but this is a good start.
        self.assertNotEqual(result.status_code, 500)

if __name__ == '__main__':
    unittest.main()

.. _py.test: https://docs.pytest.org/en/latest/
.. _nose: https://nose.readthedocs.io/en/latest/

hypothesis-python-3.44.1/docs/extras.rst

===================
Additional packages
===================

Hypothesis itself does not have any dependencies, but there are some packages that need additional things installed in order to work. You can install these dependencies using the setuptools extra feature as e.g. ``pip install hypothesis[django]``. This will check installation of compatible versions.

You can also just install hypothesis into a project using them, ignore the version constraints, and hope for the best.

In general "Which version is Hypothesis compatible with?" is a hard question to answer and even harder to regularly test. Hypothesis is always tested against the latest compatible version and each package will note the expected compatibility range. If you run into a bug with any of these please specify the dependency version.

There are separate pages for :doc:`django` and :doc:`numpy`.

--------------------
hypothesis[pytz]
--------------------

.. automodule:: hypothesis.extra.pytz
   :members:

--------------------
hypothesis[datetime]
--------------------

.. automodule:: hypothesis.extra.datetime
   :members:

.. _faker-extra:

-----------------------
hypothesis[fakefactory]
-----------------------

.. note::
   This extra package is deprecated. We strongly recommend using native
   Hypothesis strategies, which are more effective at both finding and
   shrinking failing examples for your tests.
   The :func:`~hypothesis.strategies.from_regex`, :func:`~hypothesis.strategies.text` (with some specific alphabet), and :func:`~hypothesis.strategies.sampled_from` strategies may be particularly useful.

:pypi:`Faker` (previously :pypi:`fake-factory`) is a Python package that generates fake data for you. It's great for bootstrapping your database, creating good-looking XML documents, stress-testing a database, or anonymizing production data. However, it's not designed for automated testing - data from Hypothesis looks less realistic, but produces minimal bug-triggering examples and uses coverage information to check more cases.

``hypothesis.extra.fakefactory`` lets you use Faker generators to parametrize Hypothesis tests. This was only ever meant to ease your transition to Hypothesis, but we've improved Hypothesis enough since then that we no longer recommend using Faker for automated tests under any circumstances.

hypothesis.extra.fakefactory defines a function fake_factory which returns a strategy for producing text data from any Faker provider. So for example the following will parametrize a test by an email address:

.. code-block:: pycon

    >>> fake_factory('email').example()
    'tnader@prosacco.info'

    >>> fake_factory('name').example()
    'Zbyněk Černý CSc.'

You can explicitly specify the locale (otherwise it uses any of the available locales), either as a single locale or as several:

.. code-block:: pycon

    >>> fake_factory('name', locale='en_GB').example()
    'Antione Gerlach'

    >>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
    'Miloš Šťastný'

    >>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
    'Harm Sanford'

You can use custom Faker providers via the ``providers`` argument:

.. code-block:: pycon

    >>> from faker.providers import BaseProvider
    >>> class KittenProvider(BaseProvider):
    ...     def meows(self):
    ...
return 'meow %d' % (self.random_number(digits=10),) >>> fake_factory('meows', providers=[KittenProvider]).example() 'meow 9139348419' hypothesis-python-3.44.1/docs/healthchecks.rst000066400000000000000000000020741321557765100214500ustar00rootroot00000000000000============= Health checks ============= Hypothesis tries to detect common mistakes and things that will cause difficulty at run time in the form of a number of 'health checks'. These include detecting and warning about: * Strategies with very slow data generation * Strategies which filter out too much * Recursive strategies which branch too much * Tests that are unlikely to complete in a reasonable amount of time. If any of these scenarios are detected, Hypothesis will emit a warning about them. The general goal of these health checks is to warn you about things that you are doing that might appear to work but will either cause Hypothesis to not work correctly or to perform badly. To selectively disable health checks, use the suppress_health_check setting. The argument for this parameter is a list with elements drawn from any of the class-level attributes of the HealthCheck class. To disable all health checks, set the perform_health_check settings parameter to False. .. module:: hypothesis .. autoclass:: HealthCheck :undoc-members: :inherited-members: hypothesis-python-3.44.1/docs/index.rst000066400000000000000000000053441321557765100201340ustar00rootroot00000000000000====================== Welcome to Hypothesis! ====================== `Hypothesis `_ is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn't have thought to look for. It is stable, powerful and easy to add to any existing test suite. It works by letting you write tests that assert that something should be true for every case, not just the ones you happen to think of. Think of a normal unit test as being something like the following: 1. Set up some data. 2. 
Perform some operations on the data. 3. Assert something about the result. Hypothesis lets you write tests which instead look like this: 1. For all data matching some specification. 2. Perform some operations on the data. 3. Assert something about the result. This is often called property based testing, and was popularised by the Haskell library `Quickcheck `_. It works by generating random data matching your specification and checking that your guarantee still holds in that case. If it finds an example where it doesn't, it takes that example and cuts it down to size, simplifying it until it finds a much smaller example that still causes the problem. It then saves that example for later, so that once it has found a problem with your code it will not forget it in the future. Writing tests of this form usually consists of deciding on guarantees that your code should make - properties that should always hold true, regardless of what the world throws at you. Examples of such guarantees might be: * Your code shouldn't throw an exception, or should only throw a particular type of exception (this works particularly well if you have a lot of internal assertions). * If you delete an object, it is no longer visible. * If you serialize and then deserialize a value, then you get the same value back. Now you know the basics of what Hypothesis does, the rest of this documentation will take you through how and why. It's divided into a number of sections, which you can see in the sidebar (or the menu at the top if you're on mobile), but you probably want to begin with the :doc:`Quick start guide `, which will give you a worked example of how to use Hypothesis and a detailed outline of the things you need to know to begin testing your code with it, or check out some of the `introductory articles `_. .. 
toctree:: :maxdepth: 1 :hidden: quickstart details settings data extras django numpy healthchecks database stateful supported examples community manifesto endorsements usage strategies changes development support packaging reproducing hypothesis-python-3.44.1/docs/manifesto.rst000066400000000000000000000064351321557765100210140ustar00rootroot00000000000000========================= The Purpose of Hypothesis ========================= What is Hypothesis for? From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests. From my perspective as the author, that is of course also a purpose of Hypothesis, but (if you will permit me to indulge in a touch of megalomania for a moment), the larger purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software. Software is, as they say, eating the world. Software is also `terrible`_. It's buggy, insecure and generally poorly thought out. This combination is clearly a recipe for disaster. And the state of software testing is even worse. Although it's fairly uncontroversial at this point that you *should* be testing your code, can you really say with a straight face that most projects you've worked on are adequately tested? A lot of the problem here is that it's too hard to write good tests. Your tests encode exactly the same assumptions and fallacies that you had when you wrote the code, so they miss exactly the same bugs that you missed when you wrote the code. Meanwhile, there are all sorts of tools for making testing better that are basically unused. The original Quickcheck is from *1999* and the majority of developers have not even heard of it, let alone used it. There are a bunch of half-baked implementations for most languages, but very few of them are worth using. 
The goal of Hypothesis is to bring advanced testing techniques to the masses, and to provide an implementation that is so high quality that it is easier to use them than it is not to use them. Where I can, I will beg, borrow and steal every good idea I can find that someone has had to make software testing better. Where I can't, I will invent new ones. Quickcheck is the start, but I also plan to integrate ideas from fuzz testing (a planned future feature is to use coverage information to drive example selection, and the example saving database is already inspired by the workflows people use for fuzz testing), and am open to and actively seeking out other suggestions and ideas. The plan is to treat the social problem of people not using these ideas as a bug to which there is a technical solution: Does property-based testing not match your workflow? That's a bug, let's fix it by figuring out how to integrate Hypothesis into it. Too hard to generate custom data for your application? That's a bug. Let's fix it by figuring out how to make it easier, or how to take something you're already using to specify your data and derive a generator from that automatically. Find the explanations of these advanced ideas hopelessly obtuse and hard to follow? That's a bug. Let's provide you with an easy API that lets you test your code better without a PhD in software verification. Grand ambitions, I know, and I expect ultimately the reality will be somewhat less grand, but so far in about three months of development, Hypothesis has become the most solid implementation of Quickcheck ever seen in a mainstream language (as long as we don't count Scala as mainstream yet), and at the same time managed to significantly push forward the state of the art, so I think there's reason to be optimistic. .. 
_terrible: https://www.youtube.com/watch?v=csyL9EC0S0c hypothesis-python-3.44.1/docs/numpy.rst000066400000000000000000000034561321557765100201770ustar00rootroot00000000000000=================================== Hypothesis for the Scientific Stack =================================== .. _hypothesis-numpy: ----- numpy ----- Hypothesis offers a number of strategies for `NumPy `_ testing, available in the :mod:`hypothesis[numpy]` :doc:`extra `. It lives in the ``hypothesis.extra.numpy`` package. The centerpiece is the :func:`~hypothesis.extra.numpy.arrays` strategy, which generates arrays with any dtype, shape, and contents you can specify or give a strategy for. To make this as useful as possible, strategies are provided to generate array shapes and generate all kinds of fixed-size or compound dtypes. .. automodule:: hypothesis.extra.numpy :members: .. _hypothesis-pandas: ------ pandas ------ Hypothesis provides strategies for several of the core pandas data types: :class:`pandas.Index`, :class:`pandas.Series` and :class:`pandas.DataFrame`. The general approach taken by the pandas module is that there are multiple strategies for generating indexes, and all of the other strategies take the number of entries they contain from their index strategy (with sensible defaults). So e.g. a Series is specified by specifying its :class:`numpy.dtype` (and/or a strategy for generating elements for it). .. automodule:: hypothesis.extra.pandas :members: ~~~~~~~~~~~~~~~~~~ Supported Versions ~~~~~~~~~~~~~~~~~~ There is quite a lot of variation between pandas versions. We only commit to supporting the latest version of pandas, but older minor versions are supported on a "best effort" basis. Hypothesis is currently tested against and confirmed working with Pandas 0.19, 0.20, and 0.21. Releases that are not the latest patch release of their minor version are not tested or officially supported, but will probably also work unless you hit a pandas bug. 
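The strategies described above can be combined into short property-based tests. Here is a minimal sketch using :func:`~hypothesis.extra.numpy.arrays`; the test name and the transpose property are our own illustration, not part of the Hypothesis API:

```python
import numpy as np

from hypothesis import given
from hypothesis.extra.numpy import arrays

# Generate 3x4 arrays of int64; Hypothesis fills in the elements.
@given(arrays(np.int64, (3, 4)))
def test_transpose_is_involution(a):
    # Transposing twice should give back an identical array.
    assert a.shape == (3, 4)
    assert (a.T.T == a).all()

test_transpose_is_involution()
```

The same pattern applies to the pandas strategies, e.g. drawing a ``Series`` or ``DataFrame`` from ``hypothesis.extra.pandas`` and asserting a round-trip or invariant on it.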
hypothesis-python-3.44.1/docs/packaging.rst

====================
Packaging Guidelines
====================

Downstream packagers often want to package Hypothesis. Here are some guidelines.

The primary guideline is this: If you are not prepared to keep up with the Hypothesis release schedule, don't. You will annoy me and are doing your users a disservice.

Hypothesis has a very frequent release schedule. It's rare that it goes a week without a release, and there are often multiple releases in a given week.

If you *are* prepared to keep up with this schedule, you might find the rest of this document useful.

----------------
Release tarballs
----------------

These are available from :gh-link:`the GitHub releases page `. The tarballs on pypi are intended for installation from a Python tool such as pip and should not be considered complete releases. Requests to include additional files in them will not be granted. Their absence is not a bug.

------------
Dependencies
------------

~~~~~~~~~~~~~~~
Python versions
~~~~~~~~~~~~~~~

Hypothesis is designed to work with a range of Python versions. Currently supported are:

* pypy-2.6.1 (earlier versions of pypy *may* work)
* CPython 2.7.x
* CPython 3.4.x
* CPython 3.5.x
* CPython 3.6.x

If you feel the need to have separate Python 3 and Python 2 packages you can, but Hypothesis works unmodified on either.

~~~~~~~~~~~~~~~~~~~~~~
Other Python libraries
~~~~~~~~~~~~~~~~~~~~~~

Hypothesis has *mandatory* dependencies on the following libraries:

* :pypi:`attrs`
* :pypi:`coverage`
* :pypi:`enum34` is required on Python 2.7

Hypothesis has *optional* dependencies on the following libraries:

* :pypi:`pytz` (almost any version should work)
* :pypi:`Faker`, version 0.7 or later
* `Django `_, all supported versions
* :pypi:`numpy`, 1.10 or later (earlier versions will probably work fine)
* :pypi:`pandas`, 0.19 or later
* :pypi:`py.test ` (2.8.0 or greater).
This is a mandatory dependency for testing Hypothesis itself but optional for users. The way this works when installing Hypothesis normally is that these features become available if the relevant library is installed. ------------------ Testing Hypothesis ------------------ If you want to test Hypothesis as part of your packaging you will probably not want to use the mechanisms Hypothesis itself uses for running its tests, because it has a lot of logic for installing and testing against different versions of Python. The tests must be run with py.test. A version more recent than 2.8.0 is strongly encouraged, but it may work with earlier versions (however py.test specific logic is disabled before 2.8.0). Tests are organised into a number of top level subdirectories of the tests/ directory. * cover: This is a small, reasonably fast, collection of tests designed to give 100% coverage of all but a select subset of the files when run under Python 3. * nocover: This is a much slower collection of tests that should not be run under coverage for performance reasons. * py2: Tests that can only be run under Python 2 * py3: Tests that can only be run under Python 3 * datetime: This tests the subset of Hypothesis that depends on pytz * fakefactory: This tests the subset of Hypothesis that depends on fakefactory. * django: This tests the subset of Hypothesis that depends on django (this also depends on fakefactory). An example invocation for running the coverage subset of these tests: .. code-block:: bash pip install -e . pip install pytest # you will probably want to use your own packaging here python -m pytest tests/cover -------- Examples -------- * `arch linux `_ * `fedora `_ * `gentoo `_ hypothesis-python-3.44.1/docs/quickstart.rst000066400000000000000000000221341321557765100212130ustar00rootroot00000000000000================= Quick start guide ================= This document should talk you through everything you need to get started with Hypothesis. 
----------
An example
----------

Suppose we've written a `run length encoding `_ system and we want to test it out.

We have the following code which I took straight from the `Rosetta Code `_ wiki (OK, I removed some commented out code and fixed the formatting, but there are no functional modifications):

.. code:: python

    def encode(input_string):
        count = 1
        prev = ''
        lst = []
        for character in input_string:
            if character != prev:
                if prev:
                    entry = (prev, count)
                    lst.append(entry)
                count = 1
                prev = character
            else:
                count += 1
        else:
            entry = (character, count)
            lst.append(entry)
        return lst

    def decode(lst):
        q = ''
        for character, count in lst:
            q += character * count
        return q

We want to write a test for this that will check some invariant of these functions. The invariant one tends to try when you've got this sort of encoding / decoding is that if you encode something and then decode it then you get the same value back.

Let's see how you'd do that with Hypothesis:

.. code:: python

    from hypothesis import given
    from hypothesis.strategies import text

    @given(text())
    def test_decode_inverts_encode(s):
        assert decode(encode(s)) == s

(For this example we'll just let pytest discover and run the test. We'll cover other ways you could have run it later.)

The ``text`` function returns what Hypothesis calls a search strategy: an object with methods that describe how to generate and simplify certain kinds of values. The ``@given`` decorator then takes our test function and turns it into a parametrized one which, when called, will run the test function over a wide range of matching data from that strategy.

Anyway, this test immediately finds a bug in the code:

.. code::

    Falsifying example: test_decode_inverts_encode(s='')

    UnboundLocalError: local variable 'character' referenced before assignment

Hypothesis correctly points out that this code is simply wrong if called on an empty string.
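The shrunk example is small enough to check by hand: running the buggy ``encode`` on the empty string reproduces the crash directly, without Hypothesis. A standalone sketch (repeating the buggy definition from above so the snippet is self-contained):

```python
# The buggy encode() from above, reproduced verbatim.
def encode(input_string):
    count = 1
    prev = ''
    lst = []
    for character in input_string:
        if character != prev:
            if prev:
                entry = (prev, count)
                lst.append(entry)
            count = 1
            prev = character
        else:
            count += 1
    else:
        entry = (character, count)
        lst.append(entry)
    return lst

# On the empty string the for loop never runs, so its else: clause
# reads `character` before it has ever been assigned.
crashed = False
try:
    encode('')
except UnboundLocalError:
    crashed = True
assert crashed
```

This is exactly the failure Hypothesis reported, minus the work of finding and shrinking the input for us.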
If we fix that by just adding the following code to the beginning of the function then Hypothesis tells us the code is correct (by doing nothing as you'd expect a passing test to). .. code:: python if not input_string: return [] If we wanted to make sure this example was always checked we could add it in explicitly: .. code:: python from hypothesis import given, example from hypothesis.strategies import text @given(text()) @example('') def test_decode_inverts_encode(s): assert decode(encode(s)) == s You don't have to do this, but it can be useful both for clarity purposes and for reliably hitting hard to find examples. Also in local development Hypothesis will just remember and reuse the examples anyway, but there's not currently a very good workflow for sharing those in your CI. It's also worth noting that both example and given support keyword arguments as well as positional. The following would have worked just as well: .. code:: python @given(s=text()) @example(s='') def test_decode_inverts_encode(s): assert decode(encode(s)) == s Suppose we had a more interesting bug and forgot to reset the count each time. Say we missed a line in our ``encode`` method: .. code:: python def encode(input_string): count = 1 prev = '' lst = [] for character in input_string: if character != prev: if prev: entry = (prev, count) lst.append(entry) # count = 1 # Missing reset operation prev = character else: count += 1 else: entry = (character, count) lst.append(entry) return lst Hypothesis quickly informs us of the following example: .. code:: Falsifying example: test_decode_inverts_encode(s='001') Note that the example provided is really quite simple. Hypothesis doesn't just find *any* counter-example to your tests, it knows how to simplify the examples it finds to produce small easy to understand ones. 
In this case, two identical values are enough to set the count to a number different from one, followed by another distinct value which should have reset the count but in this case didn't. The examples Hypothesis provides are valid Python code you can run. Any arguments that you explicitly provide when calling the function are not generated by Hypothesis, and if you explicitly provide *all* the arguments Hypothesis will just call the underlying function the once rather than running it multiple times. ---------- Installing ---------- Hypothesis is :pypi:`available on pypi as "hypothesis" `. You can install it with: .. code:: bash pip install hypothesis If you want to install directly from the source code (e.g. because you want to make changes and install the changed version) you can do this with: .. code:: bash pip install -e . You should probably run the tests first to make sure nothing is broken. You can do this with: .. code:: bash python setup.py test Note that if they're not already installed this will try to install the test dependencies. You may wish to do all of this in a `virtualenv `_. For example: .. code:: bash virtualenv venv source venv/bin/activate pip install hypothesis Will create an isolated environment for you to try hypothesis out in without affecting your system installed packages. ------------- Running tests ------------- In our example above we just let pytest discover and run our tests, but we could also have run it explicitly ourselves: .. code:: python if __name__ == '__main__': test_decode_inverts_encode() We could also have done this as a unittest TestCase: .. 
code:: python

    import unittest

    class TestEncoding(unittest.TestCase):
        @given(text())
        def test_decode_inverts_encode(self, s):
            self.assertEqual(decode(encode(s)), s)

    if __name__ == '__main__':
        unittest.main()

A detail: This works because Hypothesis ignores any arguments it hasn't been told to provide (positional arguments start from the right), so the self argument to the test is simply ignored and works as normal. This also means that Hypothesis will play nicely with other ways of parameterizing tests. e.g. it works fine if you use pytest fixtures for some arguments and Hypothesis for others.

-------------
Writing tests
-------------

A test in Hypothesis consists of two parts: A function that looks like a normal test in your test framework of choice but with some additional arguments, and a :func:`@given ` decorator that specifies how to provide those arguments. Here are some other examples of how you could use that:

.. code:: python

    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.integers(), st.integers())
    def test_ints_are_commutative(x, y):
        assert x + y == y + x

    @given(x=st.integers(), y=st.integers())
    def test_ints_cancel(x, y):
        assert (x + y) - y == x

    @given(st.lists(st.integers()))
    def test_reversing_twice_gives_same_list(xs):
        # This will generate lists of arbitrary length (usually between 0 and
        # 100 elements) whose elements are integers.
        ys = list(xs)
        ys.reverse()
        ys.reverse()
        assert xs == ys

    @given(st.tuples(st.booleans(), st.text()))
    def test_look_tuples_work_too(t):
        # A tuple is generated as the one you provided, with the corresponding
        # types in those positions.
        assert len(t) == 2
        assert isinstance(t[0], bool)
        assert isinstance(t[1], str)

Note that as we saw in the above example you can pass arguments to :func:`@given ` either as positional or as keywords.

--------------
Where to start
--------------

You should now know enough of the basics to write some tests for your code using Hypothesis.
The best way to learn is by doing, so go have a try. If you're stuck for ideas for how to use this sort of test for your code, here are some good starting points: 1. Try just calling functions with appropriate random data and see if they crash. You may be surprised how often this works. e.g. note that the first bug we found in the encoding example didn't even get as far as our assertion: It crashed because it couldn't handle the data we gave it, not because it did the wrong thing. 2. Look for duplication in your tests. Are there any cases where you're testing the same thing with multiple different examples? Can you generalise that to a single test using Hypothesis? 3. `This piece is designed for an F# implementation `_, but is still very good advice which you may find helps give you good ideas for using Hypothesis. If you have any trouble getting started, don't feel shy about :doc:`asking for help `. ==================== Reproducing Failures ==================== One of the things that is often concerning for people using randomized testing like Hypothesis is the question of how to reproduce failing test cases. Fortunately Hypothesis has a number of features in support of this. The one you will use most commonly when developing locally is `the example database `, which means that you shouldn't have to think about the problem at all for local use - test failures will just automatically reproduce without you having to do anything. The example database is perfectly suitable for sharing between machines, but there currently aren't very good workflows for that, so Hypothesis provides a number of ways to make examples reproducible by adding them to the source code of your tests. This is particularly useful when e.g. you are trying to run an example that has failed on your CI, or otherwise share them between machines. .. 
_providing-explicit-examples: --------------------------- Providing explicit examples --------------------------- You can explicitly ask Hypothesis to try a particular example, using .. autofunction:: hypothesis.example Hypothesis will run all examples you've asked for first. If any of them fail it will not go on to look for more examples. It doesn't matter whether you put the example decorator before or after given. Any permutation of the decorators in the above will do the same thing. Note that examples can be positional or keyword based. If they're positional then they will be filled in from the right when calling, so either of the following styles will work as expected: .. code:: python @given(text()) @example("Hello world") @example(x="Some very long string") def test_some_code(x): assert True from unittest import TestCase class TestThings(TestCase): @given(text()) @example("Hello world") @example(x="Some very long string") def test_some_code(self, x): assert True As with ``@given``, it is not permitted for a single example to be a mix of positional and keyword arguments. Either are fine, and you can use one in one example and the other in another example if for some reason you really want to, but a single example must be consistent. ------------------------------------- Reproducing a test run with ``@seed`` ------------------------------------- .. autofunction:: hypothesis.seed When a test fails unexpectedly, usually due to a health check failure, Hypothesis will print out a seed that led to that failure, if the test is not already running with a fixed seed. You can then recreate that failure using either the ``@seed`` decorator or (if you are running :pypi:`pytest`) with ``--hypothesis-seed``. .. 
_reproduce_failure: ------------------------------------------------------- Reproducing an example with ``@reproduce_failure`` ------------------------------------------------------- Hypothesis has an opaque binary representation that it uses for all examples it generates. This representation is not intended to be stable across versions or with respect to changes in the test, but can be used to reproduce failures with the ``@reproduce_failure`` decorator. .. autofunction:: hypothesis.reproduce_failure The intent is that you should never write this decorator by hand, but it is instead provided by Hypothesis. When a test fails with a falsifying example, Hypothesis may print out a suggestion to use ``@reproduce_failure`` on the test to recreate the problem as follows: .. doctest:: >>> from hypothesis import settings, given, PrintSettings >>> import hypothesis.strategies as st >>> @given(st.floats()) ... @settings(print_blob=PrintSettings.ALWAYS) ... def test(f): ... assert f == f ... >>> try: ... test() ... except AssertionError: ... pass Falsifying example: test(f=nan) You can reproduce this example by temporarily adding @reproduce_failure(..., b'AAD/8AAAAAAAAQA=') as a decorator on your test case Adding the suggested decorator to the test should reproduce the failure (as long as everything else is the same - changing the version of Python or anything else involved might of course affect the behaviour of the test! Note that changing the version of Hypothesis will result in a different error - each ``@reproduce_failure`` invocation is specific to a Hypothesis version). When to do this is controlled by the :attr:`~hypothesis.settings.print_blob` setting, which may be one of the following values: .. 
autoclass:: hypothesis.PrintSettings ======== Settings ======== Hypothesis tries to have good defaults for its behaviour, but sometimes that's not enough and you need to tweak it. The mechanism for doing this is the :class:`~hypothesis.settings` object. You can set up a :func:`@given ` based test to use this by applying a settings decorator to the :func:`@given ` invocation, as follows: .. code:: python from hypothesis import given, settings @given(integers()) @settings(max_examples=500) def test_this_thoroughly(x): pass This uses a :class:`~hypothesis.settings` object which causes the test to receive a much larger set of examples than normal. This may be applied either before or after the given and the results are the same. The following is exactly equivalent: .. code:: python from hypothesis import given, settings @settings(max_examples=500) @given(integers()) def test_this_thoroughly(x): pass ------------------ Available settings ------------------ .. module:: hypothesis .. autoclass:: settings :members: max_examples, max_iterations, min_satisfying_examples, max_shrinks, timeout, strict, database_file, stateful_step_count, database, perform_health_check, suppress_health_check, buffer_size, phases, deadline, use_coverage, derandomize .. _phases: ~~~~~~~~~~~~~~~~~~~~~ Controlling What Runs ~~~~~~~~~~~~~~~~~~~~~ Hypothesis divides tests into four logically distinct phases: 1. Running explicit examples :ref:`provided with the @example decorator `. 2. Rerunning a selection of previously failing examples to reproduce a previously seen error. 3. Generating new examples. 4. Attempting to shrink an example found in phases 2 or 3 to a more manageable one (explicit examples cannot be shrunk). 
The phases setting provides you with fine-grained control over which of these run, with each phase corresponding to a value on the :class:`~hypothesis._settings.Phase` enum: 1. ``Phase.explicit`` controls whether explicit examples are run. 2. ``Phase.reuse`` controls whether previous examples will be reused. 3. ``Phase.generate`` controls whether new examples will be generated. 4. ``Phase.shrink`` controls whether examples will be shrunk. The phases argument accepts a collection with any subset of these. e.g. ``settings(phases=[Phase.generate, Phase.shrink])`` will generate new examples and shrink them, but will not run explicit examples or reuse previous failures, while ``settings(phases=[Phase.explicit])`` will only run the explicit examples. .. _verbose-output: ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Seeing intermediate results ~~~~~~~~~~~~~~~~~~~~~~~~~~~ To see what's going on while Hypothesis runs your tests, you can turn up the verbosity setting. This works with both :func:`~hypothesis.core.find` and :func:`@given `. .. 
doctest:: >>> from hypothesis import find, settings, Verbosity >>> from hypothesis.strategies import lists, integers >>> find(lists(integers()), any, settings=settings(verbosity=Verbosity.verbose)) Trying example [] Found satisfying example [-106641080167757791735701986170810016341, -129665482689688858331316879188241401294, -17902751879921353864928802351902980929, 86547910278013668694989468221154862503, 99789676068743906931733548810810835946, -56833685188912180644827795048092269385, -12891126493032945632804716628985598019, 57797823215504994933565345605235342532, 98214819714866425575119206029702237685] Shrunk example to [-106641080167757791735701986170810016341, -129665482689688858331316879188241401294, -17902751879921353864928802351902980929, 86547910278013668694989468221154862503, 99789676068743906931733548810810835946, -56833685188912180644827795048092269385, -12891126493032945632804716628985598019, 57797823215504994933565345605235342532, 98214819714866425575119206029702237685] Shrunk example to [-106641080167757791735701986170810016341, -129665482689688858331316879188241401294, -17902751879921353864928802351902980929, 86547910278013668694989468221154862503] Shrunk example to [-106641080167757791735701986170810016341, 164695784672172929935660921670478470673] Shrunk example to [164695784672172929935660921670478470673] Shrunk example to [164695784672172929935660921670478470673] Shrunk example to [164695784672172929935660921670478470673] Shrunk example to [1] [1] The four levels are quiet, normal, verbose and debug. normal is the default, while in quiet Hypothesis will not print anything out, even the final falsifying example. debug is basically verbose but a bit more so. You probably don't want it. You can also override the default by setting the environment variable :envvar:`HYPOTHESIS_VERBOSITY_LEVEL` to the name of the level you want. So e.g. setting ``HYPOTHESIS_VERBOSITY_LEVEL=verbose`` will run all your tests printing intermediate results and errors. 
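The environment variable affects every test; to turn up verbosity for just one test you can instead pass ``verbosity`` through the settings decorator. A minimal sketch (the test name and the trivial property are illustrative, not from the documentation above):

```python
from hypothesis import given, settings, Verbosity
import hypothesis.strategies as st


# Run only this test with verbose intermediate output, leaving the
# global verbosity level untouched.
@settings(verbosity=Verbosity.verbose)
@given(st.integers())
def test_runs_verbosely(x):
    assert x - x == 0
```

Calling ``test_runs_verbosely()`` will print each example tried before reporting the result.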
If you are using ``pytest``, you may also need to :doc:`disable output capturing for passing tests `. ------------------------- Building settings objects ------------------------- Settings can be created by calling :class:`~hypothesis.settings` with any of the available settings values. Any absent ones will be set to defaults: .. doctest:: >>> from hypothesis import settings >>> settings().max_examples 100 >>> settings(max_examples=10).max_examples 10 You can also copy settings from other settings: .. doctest:: >>> s = settings(max_examples=10) >>> t = settings(s, max_iterations=20) >>> s.max_examples 10 >>> t.max_iterations 20 >>> s.max_iterations 1000 >>> s.max_shrinks 500 >>> t.max_shrinks 500 ---------------- Default settings ---------------- At any given point in your program there is a current default settings, available as ``settings.default``. As well as being a settings object in its own right, all newly created settings objects which are not explicitly based off another settings are based off the default, so will inherit any values that are not explicitly set from it. You can change the defaults by using profiles (see next section), but you can also override them locally by using a settings object as a :ref:`context manager ` .. doctest:: >>> with settings(max_examples=150): ... print(settings.default.max_examples) ... print(settings().max_examples) 150 150 >>> settings().max_examples 100 Note that after the block exits the default is returned to normal. You can use this by nesting test definitions inside the context: .. code:: python from hypothesis import given, settings with settings(max_examples=500): @given(integers()) def test_this_thoroughly(x): pass All settings objects created or tests defined inside the block will inherit their defaults from the settings object used as the context. You can still override them with custom defined settings of course. 
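The copying rules above can be summed up as: values you set explicitly are kept, and everything else is inherited from the parent (or from the default, if no parent is given). A small sketch of this (variable names are illustrative; it uses only the ``max_examples`` and ``deadline`` settings):

```python
from hypothesis import settings

base = settings(max_examples=10)          # inherits everything else from the default
derived = settings(base, deadline=None)   # copies base, overriding only deadline

# Explicitly set values win; unset values fall through to the parent.
assert derived.max_examples == 10
assert derived.deadline is None
assert base.deadline == settings.default.deadline
```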
Warning: If you define test functions which don't use :func:`@given ` inside a context block, these will not use the enclosing settings. This is because the context manager only affects the definition, not the execution of the function. .. _settings_profiles: ~~~~~~~~~~~~~~~~~ settings Profiles ~~~~~~~~~~~~~~~~~ Depending on your environment you may want different default settings. For example: during development you may want to lower the number of examples to speed up the tests. However, in a CI environment you may want more examples so you are more likely to find bugs. Hypothesis allows you to define different settings profiles. These profiles can be loaded at any time. Loading a profile changes the default settings but will not change the behavior of tests that explicitly change the settings. .. doctest:: >>> from hypothesis import settings >>> settings.register_profile("ci", settings(max_examples=1000)) >>> settings().max_examples 100 >>> settings.load_profile("ci") >>> settings().max_examples 1000 Instead of loading the profile and overriding the defaults you can retrieve profiles for specific tests. .. doctest:: >>> with settings.get_profile("ci"): ... print(settings().max_examples) ... 1000 Optionally, you may define the environment variable to load a profile for you. This is the suggested pattern for running your tests on CI. The code below should run in a `conftest.py` or any setup/initialization section of your test suite. If this variable is not defined the Hypothesis defined defaults will be loaded. .. 
doctest:: >>> import os >>> from hypothesis import settings, Verbosity >>> settings.register_profile("ci", settings(max_examples=1000)) >>> settings.register_profile("dev", settings(max_examples=10)) >>> settings.register_profile("debug", settings(max_examples=10, verbosity=Verbosity.verbose)) >>> settings.load_profile(os.getenv(u'HYPOTHESIS_PROFILE', 'default')) If you are using the hypothesis pytest plugin and your profiles are registered by your conftest you can load one with the command line option ``--hypothesis-profile``. .. code:: bash $ py.test tests --hypothesis-profile ~~~~~~~~ Timeouts ~~~~~~~~ The `timeout` functionality of Hypothesis is being deprecated, and will eventually be removed. For the moment, the timeout setting can still be set and the old default timeout of one minute remains. If you want to future proof your code you can get the future behaviour by setting it to the value `unlimited`, which you can import from the main Hypothesis package: .. code:: python from hypothesis import given, settings, unlimited from hypothesis import strategies as st @settings(timeout=unlimited) @given(st.integers()) def test_something_slow(i): ... This will cause your code to run until it hits the normal Hypothesis example limits, regardless of how long it takes. `timeout=unlimited` will remain a valid setting after the timeout functionality has been deprecated (but will then have its own deprecation cycle). There is however now a timing related health check which is designed to catch tests that run for ages by accident. If you really want your test to run forever, the following code will enable that: .. code:: python from hypothesis import given, settings, unlimited, HealthCheck from hypothesis import strategies as st @settings(timeout=unlimited, suppress_health_check=[ HealthCheck.hung_test ]) @given(st.integers()) def test_something_slow(i): ... 
================ Stateful testing ================ Hypothesis offers support for a stateful style of test, where instead of trying to produce a single data value that causes a specific test to fail, it tries to generate a program that errors. In many ways, this sort of testing is to classical property based testing as property based testing is to normal example based testing. The idea doesn't originate with Hypothesis, though Hypothesis's implementation and approach is mostly not based on an existing implementation and should be considered some mix of novel and independent reinventions. This style of testing is useful both for programs which involve some sort of mutable state and for complex APIs where there's no state per se but the actions you perform involve e.g. taking data from one function and feeding it into another. The idea is that you teach Hypothesis how to interact with your program: Be it a server, a python API, whatever. All you need is to be able to answer the question "Given what I've done so far, what could I do now?". After that, Hypothesis takes over and tries to find sequences of actions which cause a test failure. Right now the stateful testing is a bit new and experimental and should be considered as a semi-public API: It may break between minor versions but won't break between patch releases, and there are still some rough edges in the API that will need to be filed off. This shouldn't discourage you from using it. Although it's not as robust as the rest of Hypothesis, it's still pretty robust and more importantly is extremely powerful. I found a number of really subtle bugs in Hypothesis by turning the stateful testing onto a subset of the Hypothesis API, and you likely will find the same. Enough preamble, let's see how to use it. 
The first thing to note is that there are two levels of API: The low level but more flexible API and the higher level rule based API which is both easier to use and also produces a much better display of data due to its greater structure. We'll start with the more structured one. ------------------------- Rule based state machines ------------------------- Rule based state machines are the ones you're most likely to want to use. They're significantly more user friendly and should be good enough for most things you'd want to do. A rule based state machine is a collection of functions (possibly with side effects) which may depend on both values that Hypothesis can generate and also on values that have resulted from previous function calls. You define a rule based state machine as follows: .. code:: python import unittest from collections import namedtuple from hypothesis import strategies as st from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule Leaf = namedtuple('Leaf', ('label',)) Split = namedtuple('Split', ('left', 'right')) class BalancedTrees(RuleBasedStateMachine): trees = Bundle('BinaryTree') @rule(target=trees, x=st.integers()) def leaf(self, x): return Leaf(x) @rule(target=trees, left=trees, right=trees) def split(self, left, right): return Split(left, right) @rule(tree=trees) def check_balanced(self, tree): if isinstance(tree, Leaf): return else: assert abs(self.size(tree.left) - self.size(tree.right)) <= 1 self.check_balanced(tree.left) self.check_balanced(tree.right) def size(self, tree): if isinstance(tree, Leaf): return 1 else: return 1 + self.size(tree.left) + self.size(tree.right) In this we declare a Bundle, which is a named collection of previously generated values. We define two rules which put data onto this bundle - one which just generates leaves with integer labels, the other of which takes two previously generated values and returns a new one. 
We can then integrate this into our test suite by getting a unittest TestCase from it: .. code:: python TestTrees = BalancedTrees.TestCase if __name__ == '__main__': unittest.main() (these will also be picked up by py.test if you prefer to use that). Running this we get: .. code:: bash Step #1: v1 = leaf(x=0) Step #2: v2 = split(left=v1, right=v1) Step #3: v3 = split(left=v2, right=v1) Step #4: check_balanced(tree=v3) F ====================================================================== FAIL: runTest (hypothesis.stateful.BalancedTrees.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): (...) assert abs(self.size(tree.left) - self.size(tree.right)) <= 1 AssertionError Note how it's printed out a very short program that will demonstrate the problem. ...the problem of course being that we've not actually written any code to balance this tree at *all*, so of course it's not balanced. So let's balance some trees. .. code:: python from collections import namedtuple from hypothesis import strategies as st from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule Leaf = namedtuple('Leaf', ('label',)) Split = namedtuple('Split', ('left', 'right')) class BalancedTrees(RuleBasedStateMachine): trees = Bundle('BinaryTree') balanced_trees = Bundle('balanced BinaryTree') @rule(target=trees, x=st.integers()) def leaf(self, x): return Leaf(x) @rule(target=trees, left=trees, right=trees) def split(self, left, right): return Split(left, right) @rule(tree=balanced_trees) def check_balanced(self, tree): if isinstance(tree, Leaf): return else: assert abs(self.size(tree.left) - self.size(tree.right)) <= 1, \ repr(tree) self.check_balanced(tree.left) self.check_balanced(tree.right) @rule(target=balanced_trees, tree=trees) def balance_tree(self, tree): return self.split_leaves(self.flatten(tree)) def size(self, tree): if isinstance(tree, Leaf): return 1 else: return self.size(tree.left) + self.size(tree.right) def flatten(self, tree): if isinstance(tree, Leaf): return (tree.label,) else: return self.flatten(tree.left) + self.flatten(tree.right) def split_leaves(self, leaves): assert leaves if len(leaves) == 1: return Leaf(leaves[0]) else: mid = len(leaves) // 2 return Split( self.split_leaves(leaves[:mid]), self.split_leaves(leaves[mid:]), ) We've now written a really noddy tree balancing implementation. This takes trees and puts them into a new bundle of data, and we only assert that things in the balanced_trees bundle are actually balanced. If you run this it will sit there silently for a while (you can turn on :ref:`verbose output ` to get slightly more information about what's happening. debug will give you all the intermediate programs being run) and then run, telling you your test has passed! Our balancing algorithm worked. Now let's break it to make sure the test is still valid: If we change the split to ``mid = max(len(leaves) // 3, 1)``, the trees should no longer balance, which gives us the following counter-example: .. code:: python v1 = leaf(x=0) v2 = split(left=v1, right=v1) v3 = balance_tree(tree=v1) v4 = split(left=v2, right=v2) v5 = balance_tree(tree=v4) check_balanced(tree=v5) Note that the example could be shrunk further by deleting v3. Due to some technical limitations, Hypothesis was unable to find that particular shrink. In general it's rare for examples produced to be long, but they won't always be minimal. You can control the detailed behaviour with a settings object on the TestCase (this is a normal hypothesis settings object using the defaults at the time the TestCase class was first referenced). For example if you wanted to run fewer examples with larger programs you could change the settings to: .. code:: python TestTrees.settings = settings(max_examples=100, stateful_step_count=100) This doubles the number of steps each program runs and halves the number of runs relative to the example. settings.timeout will also be respected as usual. 
Preconditions ------------- While it's possible to use :func:`~hypothesis.assume` in RuleBasedStateMachine rules, if you use it in only a few rules you can quickly run into a situation where few or none of your rules pass their assumptions. Thus, Hypothesis provides a :func:`~hypothesis.stateful.precondition` decorator to avoid this problem. The :func:`~hypothesis.stateful.precondition` decorator is used on ``rule``-decorated functions, and must be given a function that returns True or False based on the RuleBasedStateMachine instance. .. autofunction:: hypothesis.stateful.precondition .. code:: python from hypothesis.stateful import RuleBasedStateMachine, rule, precondition class NumberModifier(RuleBasedStateMachine): num = 0 @rule() def add_one(self): self.num += 1 @precondition(lambda self: self.num != 0) @rule() def divide_with_one(self): self.num = 1 / self.num By using :func:`~hypothesis.stateful.precondition` here instead of :func:`~hypothesis.assume`, Hypothesis can filter the inapplicable rules before running them. This makes it much more likely that a useful sequence of steps will be generated. Note that currently preconditions can't access bundles; if you need to use preconditions, you should store relevant data on the instance instead. Invariant --------- Often there are invariants that you want to ensure are met after every step in a process. It would be possible to add these as rules that are run, but they would be run zero or multiple times between other rules. Hypothesis provides a decorator that marks a function to be run after every step. .. autofunction:: hypothesis.stateful.invariant .. 
code:: python from hypothesis.stateful import RuleBasedStateMachine, rule, invariant class NumberModifier(RuleBasedStateMachine): num = 0 @rule() def add_two(self): self.num += 2 if self.num > 50: self.num += 1 @invariant() def divide_with_one(self): assert self.num % 2 == 0 NumberTest = NumberModifier.TestCase Invariants can also have :func:`~hypothesis.stateful.precondition`\ s applied to them, in which case they will only be run if the precondition function returns true. Note that currently invariants can't access bundles; if you need to use invariants, you should store relevant data on the instance instead. ---------------------- Generic state machines ---------------------- The class GenericStateMachine is the underlying machinery of stateful testing in Hypothesis. In execution it looks much like the RuleBasedStateMachine but it allows the set of steps available to depend in essentially arbitrary ways on what has happened so far. For example, if you wanted to use Hypothesis to test a game, it could choose each step in the machine based on the game to date and the set of actions the game program is telling it it has available. It essentially executes the following loop: .. code:: python machine = MyStateMachine() try: machine.check_invariants() for _ in range(n_steps): step = machine.steps().example() machine.execute_step(step) machine.check_invariants() finally: machine.teardown() Where ``steps`` and ``execute_step`` are methods you must implement, and ``teardown`` and ``check_invariants`` are methods you can implement if required. ``steps`` returns a strategy, which is allowed to depend arbitrarily on the current state of the test execution. *Ideally* a good steps implementation should be robust against minor changes in the state. Steps that change a lot between slightly different executions will tend to produce worse quality examples because they're hard to simplify. 
The steps method *may* depend on external state, but this is not advisable and may produce flaky tests. If any of ``execute_step``, ``check_invariants`` or ``teardown`` produces an exception, Hypothesis will try to find a minimal sequence of steps such that the following throws an exception: .. code:: python machine = MyStateMachine() try: machine.check_invariants() for step in steps: machine.execute_step(step) machine.check_invariants() finally: machine.teardown() and such that at every point, the step executed is one that could plausibly have come from a call to ``steps`` in the current state. Here's an example of using stateful testing to test a broken implementation of a set in terms of a list (note that you could easily do something close to this example with the rule based testing instead, and probably should. This is mostly for illustration purposes): .. code:: python import unittest from hypothesis.stateful import GenericStateMachine from hypothesis.strategies import tuples, sampled_from, just, integers class BrokenSet(GenericStateMachine): def __init__(self): self.data = [] def steps(self): add_strategy = tuples(just("add"), integers()) if not self.data: return add_strategy else: return ( add_strategy | tuples(just("delete"), sampled_from(self.data))) def execute_step(self, step): action, value = step if action == 'delete': try: self.data.remove(value) except ValueError: pass assert value not in self.data else: assert action == 'add' self.data.append(value) assert value in self.data TestSet = BrokenSet.TestCase if __name__ == '__main__': unittest.main() Note that the strategy changes each time based on the data that's currently in the state machine. Running this gives us the following: .. 
code:: bash Step #1: ('add', 0) Step #2: ('add', 0) Step #3: ('delete', 0) F ====================================================================== FAIL: runTest (hypothesis.stateful.BrokenSet.TestCase) ---------------------------------------------------------------------- Traceback (most recent call last): (...) assert value not in self.data AssertionError So it adds two elements, then deletes one, and raises an assertion error when it finds out that this only deleted one of the copies of the element. ------------------------- More fine-grained control ------------------------- If you want to bypass the TestCase infrastructure you can invoke these manually. The stateful module exposes the function run_state_machine_as_test, which takes an arbitrary function returning a GenericStateMachine and an optional settings parameter and does the same as the class based runTest provided. In particular this may be useful if you wish to pass parameters to a custom __init__ in your subclass. ============================= Projects extending Hypothesis ============================= The following is a non-exhaustive list of open source projects that make Hypothesis strategies available. If you're aware of any others please add them to the list! The only inclusion criterion right now is that if it's a Python library then it should be available on pypi. * `hs-dbus-signature `_ - strategy to generate arbitrary D-Bus signatures * `hypothesis-regex `_ - merged into Hypothesis as the :func:`~hypothesis.strategies.from_regex` strategy. * `lollipop-hypothesis `_ - strategy to generate data based on `Lollipop `_ schema definitions. * `hypothesis-fspaths `_ - strategy to generate filesystem paths. * `hypothesis-protobuf `_ - strategy to generate data based on `Protocol Buffer `_ schema definitions. 
If you're thinking about writing an extension, consider naming it ``hypothesis-{something}`` - a standard prefix makes the community more visible and searching for extensions easier. ================ Help and Support ================ For questions you are happy to ask in public, the :doc:`Hypothesis community ` is a friendly place where I or others will be more than happy to help you out. You're also welcome to ask questions on Stack Overflow. If you do, please tag them with 'python-hypothesis' so someone sees them. For bugs and enhancements, please file an issue on the :issue:`GitHub issue tracker <>`. Note that as per the :doc:`development policy `, enhancements will probably not get implemented unless you're willing to pay for development or implement them yourself (with assistance from me). Bugs will tend to get fixed reasonably promptly, though it is of course on a best effort basis. To see the versions of Python, optional dependencies, test runners, and operating systems Hypothesis supports (meaning incompatibility is treated as a bug), see :doc:`supported`. If you need to ask questions privately or want more of a guarantee of bugs being fixed promptly, please contact me on hypothesis-support@drmaciver.com to talk about availability of support contracts. ============= Compatibility ============= Hypothesis does its level best to be compatible with everything you could possibly need it to be compatible with. Generally you should just try it and expect it to work. If it doesn't, you can be surprised and check this document for the details. --------------- Python versions --------------- Hypothesis is supported and tested on CPython 2.7 and CPython 3.4+. 
Hypothesis also supports PyPy2, and will support PyPy3 when there is a stable release supporting Python 3.4+. Hypothesis does not currently work on Jython, though could feasibly be made to do so. IronPython might work but hasn't been tested. 32-bit and narrow builds should work, though this is currently only tested on Windows. In general Hypothesis does not officially support anything except the latest patch release of any version of Python it supports. Earlier releases should work and bugs in them will get fixed if reported, but they're not tested in CI and no guarantees are made. ----------------- Operating systems ----------------- In theory Hypothesis should work anywhere that Python does. In practice it is only known to work and regularly tested on OS X, Windows and Linux, and you may experience issues running it elsewhere. If you're using something else and it doesn't work, do get in touch and I'll try to help, but unless you can come up with a way for me to run a CI server on that operating system it probably won't stay fixed due to the inevitable march of time. ------------------ Testing frameworks ------------------ In general Hypothesis goes to quite a lot of effort to generate things that look like normal Python test functions that behave as closely to the originals as possible, so it should work sensibly out of the box with every test framework. If your testing relies on doing something other than calling a function and seeing if it raises an exception then it probably *won't* work out of the box. In particular things like tests which return generators and expect you to do something with them (e.g. nose's yield based tests) will not work. Use a decorator or similar to wrap the test to take this form. In terms of what's actually *known* to work: * Hypothesis integrates as smoothly with py.test and unittest as I can make it, and this is verified as part of the CI. 
* py.test fixtures work correctly with Hypothesis based functions, but note
  that function based fixtures will only run once for the whole function, not
  once per example.

* Nose works fine with hypothesis, and this is tested as part of the CI. yield
  based tests simply won't work.

* Integration with Django's testing requires use of the
  :ref:`hypothesis-django` package. The issue is that in Django's tests'
  normal mode of execution it will reset the database once per test rather
  than once per example, which is not what you want.

Coverage works out of the box with Hypothesis (and Hypothesis has 100% branch
coverage in its own tests). However you should probably not use Coverage,
Hypothesis and PyPy together. Because Hypothesis does quite a lot of CPU heavy
work compared to normal tests, it really exacerbates the performance problems
the two normally have working together.

-----------------
Optional Packages
-----------------

The supported versions of optional packages, for strategies in
``hypothesis.extra``, are listed in the documentation for that extra. Our
general goal is to support all versions that are supported upstream.

------------------------
Regularly verifying this
------------------------

Everything mentioned above as explicitly supported is checked on every commit
with `Travis `_ and `Appveyor `_ and goes green before a release happens, so
when I say they're supported I really mean it.

-------------------
Hypothesis versions
-------------------

Backwards compatibility is better than backporting fixes, so we use
:ref:`semantic versioning ` and only support the most recent version of
Hypothesis. See :doc:`support` for more information.

hypothesis-python-3.44.1/docs/usage.rst000066400000000000000000000051351321557765100201270ustar00rootroot00000000000000
=====================================
Open Source Projects using Hypothesis
=====================================

The following is a non-exhaustive list of open source projects I know are
using Hypothesis.
If you're aware of any others please add them to the list! The only inclusion criterion right now is that if it's a Python library then it should be available on pypi. * `aur `_ * `argon2_cffi `_ * `attrs `_ * `axelrod `_ * `bidict `_ * `binaryornot `_ * `brotlipy `_ * :pypi:`chardet` * `cmph-cffi `_ * `cryptography `_ * `dbus-signature-pyparsing `_ * `fastnumbers `_ * `flocker `_ * `flownetpy `_ * `funsize `_ * `fusion-index `_ * `hyper-h2 `_ * `into-dbus-python `_ * `justbases `_ * `justbytes `_ * `loris `_ * `mariadb-dyncol `_ * `mercurial `_ * `natsort `_ * `pretext `_ * `priority `_ * `PyCEbox `_ * `PyPy `_ * `pyrsistent `_ * `python-humble-utils `_ * `pyudev `_ * `qutebrowser `_ * `RubyMarshal `_ * `Segpy `_ * `simoa `_ * `srt `_ * `tchannel `_ * `vdirsyncer `_ * `wcag-contrast-ratio `_ * `yacluster `_ * `yturl `_ hypothesis-python-3.44.1/examples/000077500000000000000000000000001321557765100171535ustar00rootroot00000000000000hypothesis-python-3.44.1/examples/README.rst000066400000000000000000000006621321557765100206460ustar00rootroot00000000000000============================ Examples of Hypothesis usage ============================ This is a directory for examples of using Hypothesis that show case its features or demonstrate a useful way of testing something. Right now it's a bit small and fairly algorithmically focused. Pull requests to add more examples would be *greatly* appreciated, especially ones using e.g. the Django integration or testing something "Businessy". hypothesis-python-3.44.1/examples/test_binary_search.py000066400000000000000000000115241321557765100234000ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

"""This file demonstrates testing a binary search.

It's a useful example because the result of the binary search is so clearly
determined by the invariants it must satisfy, so we can simply test for those
invariants.

It also demonstrates the useful testing technique of testing how the answer
should change (or not) in response to movements in the underlying data.
"""

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import given


def binary_search(ls, v):
    """Take a list ls and a value v such that ls is sorted and v is comparable
    with the elements of ls.

    Return an index i such that 0 <= i <= len(ls) with the properties:

    1. ls.insert(i, v) is sorted
    2. ls.insert(j, v) is not sorted for j < i
    """
    # Without this check we will get an index error on the next line when the
    # list is empty.
    if not ls:
        return 0

    # Without this check we will miss the case where the insertion point
    # should be zero: The invariant we maintain in the next section is that lo
    # is always strictly lower than the insertion point.
    if v <= ls[0]:
        return 0

    # Invariant: There is no insertion point i with i <= lo
    lo = 0

    # Invariant: There is an insertion point i with i <= hi
    hi = len(ls)

    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if v > ls[mid]:
            # Inserting v anywhere below mid would result in an unsorted list
            # because it's > the value at mid. Therefore mid is a valid new lo
            lo = mid
        # Uncommenting the following lines will cause this to return a valid
        # insertion point which is not always minimal.
# elif v == ls[mid]: # return mid else: # Either v == ls[mid] in which case mid is a valid insertion point # or v < ls[mid], in which case all valid insertion points must be # < hi. Either way, mid is a valid new hi. hi = mid assert lo + 1 == hi # We now know that there is a valid insertion point <= hi and there is no # valid insertion point < hi because hi - 1 is lo. Therefore hi is the # answer we were seeking return hi def is_sorted(ls): """Is this list sorted?""" for i in range(len(ls) - 1): if ls[i] > ls[i + 1]: return False return True Values = st.integers() # We generate arbitrary lists and turn this into generating sorting lists # by just sorting them. SortedLists = st.lists(Values).map(sorted) # We could also do it this way, but that would be a bad idea: # SortedLists = st.lists(Values).filter(is_sorted) # The problem is that Hypothesis will only generate long sorted lists with very # low probability, so we are much better off post-processing values into the # form we want than filtering them out. @given(ls=SortedLists, v=Values) def test_insert_is_sorted(ls, v): """We test the first invariant: binary_search should return an index such that inserting the value provided at that index would result in a sorted set.""" ls.insert(binary_search(ls, v), v) assert is_sorted(ls) @given(ls=SortedLists, v=Values) def test_is_minimal(ls, v): """We test the second invariant: binary_search should return an index such that no smaller index is a valid insertion point for v.""" for i in range(binary_search(ls, v)): ls2 = list(ls) ls2.insert(i, v) assert not is_sorted(ls2) @given(ls=SortedLists, v=Values) def test_inserts_into_same_place_twice(ls, v): """In this we test a *consequence* of the second invariant: When we insert a value into a list twice, the insertion point should be the same both times. This is because we know that v is > the previous element and == the next element. In theory if the former passes, this should always pass. 
In practice, failures are detected by this test with much higher probability because it deliberately puts the data into a shape that is likely to trigger a failure. This is an instance of a good general category of test: Testing how the function moves in responses to changes in the underlying data. """ i = binary_search(ls, v) ls.insert(i, v) assert binary_search(ls, v) == i hypothesis-python-3.44.1/examples/test_rle.py000066400000000000000000000067751321557765100213650ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """This example demonstrates testing a run length encoding scheme. That is, we take a sequence and represent it by a shorter sequence where each 'run' of consecutive equal elements is represented as a single element plus a count. So e.g. [1, 1, 1, 1, 2, 1] is represented as [[1, 4], [2, 1], [1, 1]] This demonstrates the useful decode(encode(x)) == x invariant that is often a fruitful source of testing with Hypothesis. It also has an example of testing invariants in response to changes in the underlying data. """ from __future__ import division, print_function, absolute_import import hypothesis.strategies as st from hypothesis import given, assume def run_length_encode(seq): """Encode a sequence as a new run-length encoded sequence.""" if not seq: return [] # By starting off the count at zero we simplify the iteration logic # slightly. 
result = [[seq[0], 0]] for s in seq: if ( # If you uncomment this line this branch will be skipped and we'll # always append a new run of length 1. Note which tests fail. # False and s == result[-1][0] # Try uncommenting this line and see what problems occur: # and result[-1][-1] < 2 ): result[-1][1] += 1 else: result.append([s, 1]) return result def run_length_decode(seq): """Take a previously encoded sequence and reconstruct the original from it.""" result = [] for s, i in seq: for _ in range(i): result.append(s) return result # We use lists of a type that should have a relatively high duplication rate, # otherwise we'd almost never get any runs. Lists = st.lists(st.integers(0, 10)) @given(Lists) def test_decodes_to_starting_sequence(ls): """If we encode a sequence and then decode the result, we should get the original sequence back. Otherwise we've done something very wrong. """ assert run_length_decode(run_length_encode(ls)) == ls @given(Lists, st.integers(0, 100)) def test_duplicating_an_element_does_not_increase_length(ls, i): """The previous test could be passed by simply returning the input sequence so we need something that tests the compression property of our encoding. In this test we deliberately introduce or extend a run and assert that this does not increase the length of our encoding, because they should be part of the same run in the final result. """ # We use assume to get a valid index into the list. We could also have used # e.g. flatmap, but this is relatively straightforward and will tend to # perform better. assume(i < len(ls)) ls2 = list(ls) # duplicating the value at i right next to it guarantees they are part of # the same run in the resulting compression. 
ls2.insert(i, ls2[i]) assert len(run_length_encode(ls2)) == len(run_length_encode(ls)) hypothesis-python-3.44.1/guides/000077500000000000000000000000001321557765100166155ustar00rootroot00000000000000hypothesis-python-3.44.1/guides/README.rst000066400000000000000000000005121321557765100203020ustar00rootroot00000000000000================================= Guides for Hypothesis Development ================================= This is a general collection of useful documentation for people working on Hypothesis. It is separate from the main documentation because it is not much use if you are merely *using* Hypothesis. It's purely for working on it. hypothesis-python-3.44.1/guides/api-style.rst000066400000000000000000000132411321557765100212570ustar00rootroot00000000000000=============== House API Style =============== Here are some guidelines for how to write APIs so that they "feel" like a Hypothesis API. This is particularly focused on writing new strategies, as that's the major place where we add APIs, but also applies more generally. Note that it is not a guide to *code* style, only API design. The Hypothesis style evolves over time, and earlier strategies in particular may not be consistent with this style, and we've tried some experiments that didn't work out, so this style guide is more normative than descriptive and existing APIs may not match it. Where relevant, backwards compatibility is much more important than conformance to the style. ~~~~~~~~~~~~~~~~~~ General Guidelines ~~~~~~~~~~~~~~~~~~ * When writing extras modules, consistency with Hypothesis trumps consistency with the library you're integrating with. * *Absolutely no subclassing as part of the public API* * We should not strive too hard to be pythonic, but if an API seems weird to a normal Python user we should see if we can come up with an API we like as much but is less weird. * Code which adds a dependency on a third party package should be put in a hypothesis.extra module. 
* Complexity should not be pushed onto the user. An easy to use API is more important than a simple implementation. ~~~~~~~~~~~~~~~~~~~~~~~~~ Guidelines for strategies ~~~~~~~~~~~~~~~~~~~~~~~~~ * A strategy function should be somewhere between a recipe for how to build a value and a range of valid values. * It should not include distribution hints. The arguments should only specify how to produce a valid value, not statistical properties of values. * Strategies should try to paper over non-uniformity in the underlying types as much as possible (e.g. ``hypothesis.extra.numpy`` has a number of workarounds for numpy's odd behaviour around object arrays). ~~~~~~~~~~~~~~~~~ Argument handling ~~~~~~~~~~~~~~~~~ We have a reasonably distinctive style when it comes to handling arguments: * Arguments must be validated to the greatest extent possible. Hypothesis should reject bad arguments with an InvalidArgument error, not fail with an internal exception. * We make extensive use of default arguments. If an argument could reasonably have a default, it should. * Exception to the above: Strategies for collection types should *not* have a default argument for element strategies. * Interacting arguments (e.g. arguments that must be in a particular order, or where at most one is valid, or where one argument restricts the valid range of the other) are fine, but when this happens the behaviour of defaults should automatically be adjusted. e.g. if the normal default of an argument would become invalid, the function should still do the right thing if that default is used. * Where the actual default used depends on other arguments, the default parameter should be None. * It's worth thinking about the order of arguments: the first one or two arguments are likely to be passed positionally, so try to put values there where this is useful and not too confusing. * When adding arguments to strategies, think carefully about whether the user is likely to want that value to vary often. 
If so, make it a strategy instead of a value. In particular if it's likely to be common that they would want to write ``some_strategy.flatmap(lambda x: my_new_strategy(argument=x))`` then it should be a strategy. * Arguments should not be "a value or a strategy for generating that value". If you find yourself inclined to write something like that, instead make it take a strategy. If a user wants to pass a value they can wrap it in a call to ``just``. ~~~~~~~~~~~~~~ Function Names ~~~~~~~~~~~~~~ We don't have any real consistency here. The rough approach we follow is: * Names are `snake_case` as is standard in Python. * Strategies for a particular type are typically named as a plural name for that type. Where that type has some truncated form (e.g. int, str) we use a longer form name. * Other strategies have no particular common naming convention. ~~~~~~~~~~~~~~ Argument Names ~~~~~~~~~~~~~~ We should try to use the same argument names and orders across different strategies wherever possible. In particular: * For collection types, the element strategy (or strategies) should always be the first arguments. Where there is only one element strategy it should be called ``elements`` (but e.g. ``dictionaries`` has element strategies named ``keys`` and ``values`` and that's fine). * For ordered types, the first two arguments should be a lower and an upper bound. They should be called ``min_value`` and ``max_value``. * Collection types should have a ``min_size`` and a ``max_size`` parameter that controls the range of their size. ``min_size`` should default to zero and ``max_size`` to ``None`` (even if internally it is bounded). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A catalogue of current violations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following are places where we currently deviate from this style. Some of these should be considered targets for deprecation and/or improvement. * most of the collections in ``hypothesis.strategies`` have an ``average_size`` distribution hint. 
* many of the collections in ``hypothesis.strategies`` allow a default of ``None`` for their elements strategy (meaning only generate empty collections). * ``hypothesis.extra.numpy`` has some arguments which can be either strategies or values. * ``hypothesis.extra.numpy`` assumes arrays are fixed size and doesn't have ``min_size`` and ``max_size`` arguments (but this is probably OK because of more complicated shapes of array). * ``hypothesis.stateful`` is a great big subclassing based train wreck. hypothesis-python-3.44.1/guides/documentation.rst000066400000000000000000000065651321557765100222340ustar00rootroot00000000000000===================================== The Hypothesis Documentation Handbook ===================================== Good documentation can make the difference between good code and useful code - and Hypothesis is written to be used, as widely as possible. This is a working document-in-progress with some tips for how we try to write our docs, with a little of the what and a bigger chunk of the how. If you have ideas about how to improve these suggestions, meta issues or pull requests are just as welcome as for docs or code :D ---------------------------- What docs should be written? ---------------------------- All public APIs should be comprehensively described. If the docs are confusing to new users, incorrect or out of date, or simply incomplete - we consider all of those to be bugs; if you see them please raise an issue and perhaps submit a pull request. That's not much advice, but it's what we have so far. ------------ Using Sphinx ------------ We use `the Sphinx documentation system `_ to run doctests and convert the .rst files into html with formatting and cross-references. Without repeating the docs for Sphinx, here are some tips: - When documenting a Python object (function, class, module, etc.), you can use autodoc to insert and interpret the docstring. 
- When referencing a function, you can insert a reference to a function as
  (eg) ``:func:`hypothesis.given`\ ``, which will appear as
  ``hypothesis.given()`` with a hyperlink to the appropriate docs. You can
  show only the last part (unqualified name) by adding a tilde at the start,
  like ``:func:`~hypothesis.given`\ `` -> ``given()``. Finally, you can give
  it alternative link text in the usual way:
  ``:func:`other text `\ `` -> ``other text``.
- For the formatting and also hyperlinks, all cross-references should use the
  Sphinx cross-referencing syntax rather than plain text.
- Wherever possible, example code should be written as a doctest. This
  ensures that if the example raises deprecation warnings, or simply breaks,
  it will be flagged in CI and can be fixed immediately.

-----------------
Changelog Entries
-----------------

`Hypothesis does continuous deployment `_, where every pull request that
touches ``./src`` results in a new release. That means every contributor gets
to write their changelog!

A changelog entry should be written in a new ``RELEASE.rst`` file in the
repository root, and:

- concisely describe what changed and why
- use Sphinx cross-references to any functions or classes mentioned
- if closing an issue, mention it with the issue role to generate a link
- finish with a note of thanks from the maintainers: "Thanks to for this bug
  fix / feature / contribution" (depending on which it is). If this is your
  first contribution, don't forget to add yourself to contributors.rst!

-----------------
Updating Doctests
-----------------

We use the Sphinx `doctest` builder to ensure that all example code snippets
are kept up to date. To make this less tedious, you can run
``scripts/fix_doctests.py`` (under Python 3) to... fix failing doctests.

The script is pretty good, but doesn't handle ``+ELLIPSIS`` or
``+NORMALIZE_WHITESPACE`` options. Check that output is stable (running it
again should give "All doctests are OK"), then review the diff before
committing.
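The mechanism that makes doctests checkable can be exercised with nothing but
the standard library. Here ``double`` is a made-up function for illustration,
not part of Hypothesis; the ``doctest`` finder/runner pair below does
essentially what the Sphinx ``doctest`` builder does to the documentation's
snippets:

.. code:: python

    import doctest

    def double(x):
        """Return x doubled.

        >>> double(2)
        4
        >>> double(21)
        42
        """
        return 2 * x

    # Find the examples embedded in the docstring and execute them, comparing
    # actual output against the output shown after each >>> line.
    finder = doctest.DocTestFinder()
    runner = doctest.DocTestRunner()
    for test in finder.find(double, "double", module=False,
                            globs={"double": double}):
        runner.run(test)
    assert runner.failures == 0

A snippet whose shown output has drifted from reality would make
``runner.failures`` non-zero, which is exactly the signal CI uses to flag a
stale example.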
hypothesis-python-3.44.1/guides/review.rst000066400000000000000000000231261321557765100206540ustar00rootroot00000000000000=================================== The Hypothesis Code Review Handbook =================================== Hypothesis has a process of reviewing every change, internal or external. This is a document outlining that process. It's partly descriptive, partly prescriptive, and entirely prone to change in response to circumstance and need. We're still figuring this thing out! ---------------- How Review Works ---------------- All changes to Hypothesis must be signed off by at least one person with write access to the repo other than the author of the change. Once the build is green and a reviewer has approved the change, anyone on the maintainer team may merge the request. More than one maintainer *may* review a change if they wish to, but it's not required. Any maintainer may block a pull request by requesting changes. Consensus on a review is best but not required. If some reviewers have approved a pull request and some have requested changes, ideally you would try to address all of the changes, but it is OK to dismiss dissenting reviews if you feel it appropriate. We've not tested the case of differing opinions much in practice yet, so we may grow firmer guidelines on what to do there over time. ------------ Review Goals ------------ At a high level, the two things we're looking for in review are answers to the following questions: 1. Is this change going to make users' lives worse? 2. Is this change going to make the maintainers' lives worse? Code review is a collaborative process between the author and the reviewer to try to ensure that the answer to both of those questions is no. Ideally of course the change should also make one or both of the users' and our lives *better*, but it's OK for changes to be mostly neutral. The author should be presumed to have a good reason for submitting the change in the first place, so neutral is good enough! 
--------------
Social Factors
--------------

* Always thank external contributors. Thank maintainers too, ideally!
* Remember that the `Code of Conduct `_ applies to pull requests and issues
  too. Feel free to throw your weight around to enforce this if necessary.
* Anyone, maintainer or not, is welcome to do a code review. Only official
  maintainers have the ability to actually approve and merge a pull request,
  but outside review is also welcome.

------------
Requirements
------------

The rest of this document outlines specific things reviewers should focus on
in aid of this, broken up by sections according to their area of
applicability.

All of these conditions must be satisfied for merge. Where the reviewer
thinks this conflicts with the above higher level goals, they may make an
exception if both the author and another maintainer agree.

~~~~~~~~~~~~~~~~~~~~
General Requirements
~~~~~~~~~~~~~~~~~~~~

The following are required for almost every change:

1. Changes must be of reasonable size. If a change could logically be broken
   up into several smaller changes that could be reviewed separately on their
   own merits, it should be.
2. The motivation for each change should be clearly explained (this doesn't
   have to be an essay, especially for small changes, but at least a sentence
   of explanation is usually required).
3. The likely consequences of a change should be outlined (again, this
   doesn't have to be an essay, and it may be sufficiently self-explanatory
   that the motivation section is sufficient).

~~~~~~~~~~~~~~~~~~~~~
Functionality Changes
~~~~~~~~~~~~~~~~~~~~~

This section applies to any changes in Hypothesis's behaviour, regardless of
their nature. A good rule of thumb is that if it touches a file in src then
it counts.

1. The code should be clear in its intent and behaviour.
2. Behaviour changes should come with appropriate tests to demonstrate the
   new behaviour.
3. Hypothesis must never be *flaky*.
Flakiness here is defined as anything where a test fails and this does not indicate a bug in Hypothesis or in the way the user wrote the code or the test. 4. The version number must be kept up to date, following `Semantic Versioning `_ conventions: The third (patch) number increases for things that don't change public facing functionality, the second (minor) for things that do but are backwards compatible, and the first (major) changes for things that aren't backwards compatible. See the section on API changes for the latter two. 5. The changelog should be kept up to date by creating a RELEASE.rst file in the root of the repository. Make sure you build the documentation and manually inspect the resulting changelog to see that it looks good - there are a lot of syntax mistakes possible in RST that don't result in a compilation error. ~~~~~~~~~~~ API Changes ~~~~~~~~~~~ Public API changes require the most careful scrutiny of all reviews, because they are the ones we are stuck with for the longest: Hypothesis follows semantic versioning, and we don't release new major versions very often. Public API changes must satisfy the following: 1. All public API changes must be well documented. If it's not documented, it doesn't count as public API! 2. Changes must be backwards compatible. Where this is not possible, they must first introduce a deprecation warning, then once the major version is bumped the deprecation warning and the functionality may be removed. 3. If an API is deprecated, the deprecation warning must make it clear how the user should modify their code to adapt to this change ( possibly by referring to documentation). 4. If it is likely that we will want to make backwards incompatible changes to an API later, to whatever extent possible these should be made immediately when it is introduced instead. 5. APIs should give clear and helpful error messages in response to invalid inputs. 
In particular error messages should always display the value that triggered the error, and ideally be specific about the relevant feature of it that caused this failure (e.g. the type). 6. Incorrect usage should never "fail silently" - when a user accidentally misuses an API this should result in an explicit error. 7. Functionality should be limited to that which is easy to support in the long-term. In particular functionality which is very tied to the current Hypothesis internals should be avoided. 8. `DRMacIver `_ must approve the changes though other maintainers are welcome and likely to chip in to review as well. 9. We have a separate guide for `house API style `_ which should be followed. ~~~~~~~~~ Bug Fixes ~~~~~~~~~ 1. All bug fixes must come with a test that demonstrates the bug on master and which is fixed in this branch. An exception *may* be made here if the submitter can convincingly argue that testing this would be prohibitively difficult. 2. Where possible, a fix that makes it impossible for similar bugs to occur is better. 3. Where possible, a test that will catch both this bug and a more general class of bug that contains it is better. ~~~~~~~~~~~~~~~~ Settings Changes ~~~~~~~~~~~~~~~~ It is tempting to use the Hypothesis settings object as a dumping ground for anything and everything that you can think of to control Hypothesis. This rapidly gets confusing for users and should be carefully avoided. New settings should: 1. Be something that the user can meaningfully have an opinion on. Many of the settings that have been added to Hypothesis are just cases where Hypothesis is abdicating responsibility to do the right thing to the user. 2. Make sense without reference to Hypothesis internals. 3. Correspond to behaviour which can meaningfully differ between tests - either between two different tests or between two different runs of the same test (e.g. 
one use case is the profile system, where you might want to run Hypothesis differently in CI and development). If you would never expect a test suite to have more than one value for a setting across any of its runs, it should be some sort of global configuration, not a setting. Removing settings is not something we have done so far, so the exact process is still up in the air, but it should involve a careful deprecation path where the default behaviour does not change without first introducing warnings. ~~~~~~~~~~~~~~ Engine Changes ~~~~~~~~~~~~~~ Engine changes are anything that change a "fundamental" of how Hypothesis works. A good rule of thumb is that an engine change is anything that touches a file in hypothesis.internal.conjecture. All such changes should: 1. Be approved (or authored) by DRMacIver. 2. Be approved (or authored) by someone who *isn't* DRMacIver (a major problem with this section of the code is that there is too much that only DRMacIver understands properly and we want to fix this). 3. If appropriate, come with a test in test_discovery_ability.py showing new examples that were previously hard to discover. 4. If appropriate, come with a test in test_shrink_quality.py showing how they improve the shrinker. ~~~~~~~~~~~~~~~~~~~~~~ Non-Blocking Questions ~~~~~~~~~~~~~~~~~~~~~~ These questions should *not* block merge, but may result in additional issues or changes being opened, either by the original author or by the reviewer. 1. Is this change well covered by the review items and is there anything that could usefully be added to the guidelines to improve that? 2. Were any of the review items confusing or annoying when reviewing this change? Could they be improved? 3. Are there any more general changes suggested by this, and do they have appropriate issues and/or pull requests associated with them? 
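The deprecation path described above (warn first, change behaviour only after
a major version bump) can be sketched with the stdlib ``warnings`` module. The
function and parameter names here are invented for the sketch, not real
Hypothesis settings:

.. code:: python

    import warnings

    def configure(deadline=None, timeout=None):
        """Hypothetical function where ``timeout`` is being renamed to
        ``deadline``."""
        if timeout is not None:
            # First release of the change: behaviour is unchanged, but the old
            # spelling now emits a DeprecationWarning telling users what to do.
            warnings.warn(
                "The timeout argument is deprecated; use deadline instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            deadline = timeout
        return deadline

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = configure(timeout=5)

    assert result == 5  # the old spelling still works...
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)

Only once users have had a release or two to see the warning would the old
argument actually be removed, which keeps the "default behaviour does not
change without first introducing warnings" property.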
hypothesis-python-3.44.1/notebooks/Designing a better simplifier.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Designing a better simplifier\n", "\n", "This is a notebook talking through some of the considerations in the design of Hypothesis's approach to simplification.\n", "\n", "It doesn't perfectly mirror what actually happens in Hypothesis, but it should give some consideration to the sort of things that Hypothesis does and why it takes a particular approach.\n", "\n", "In order to simplify the scope of this document we are only going to\n", "concern ourselves with lists of integers. There are a number of API considerations involved in expanding beyond that point, however most of the algorithmic considerations are the same.\n", "\n", "The big difference between lists of integers and the general case is that integers can never be too complex. In particular we will rapidly get to the point where individual elements can be simplified in usually only log(n) calls. When dealing with e.g. lists of lists this is a much more complicated proposition. That may be covered in another notebook.\n", "\n", "Our objective here is to minimize the number of times we check the condition. We won't be looking at actual timing performance, because usually the speed of the condition is the bottleneck there (and where it's not, everything is fast enough that we need not worry)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def greedy_shrink(ls, constraint, shrink):\n", " \"\"\"\n", " This is the \"classic\" QuickCheck algorithm which takes a shrink function\n", " which will iterate over simpler versions of an example.
We are trying\n", " to find a local minimum: That is an example ls such that constraint(ls)\n", " is True but that constraint(t) is False for each t in shrink(ls).\n", " \"\"\"\n", " while True:\n", " for s in shrink(ls):\n", " if constraint(s):\n", " ls = s\n", " break\n", " else:\n", " return ls" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def shrink1(ls):\n", " \"\"\"\n", " This is our prototype shrink function. It is very bad. It makes the\n", " mistake of only making very small changes to an example each time.\n", " \n", " Most people write something like this the first time they come to\n", " implement example shrinking. In particular early Hypothesis very much\n", " made this mistake.\n", " \n", " What this does:\n", " \n", " For each index, if the value of the index is non-zero we try\n", " decrementing it by 1.\n", " \n", " We then (regardless of whether it's zero) try the list with the value at\n", " that index deleted.\n", " \"\"\"\n", " for i in range(len(ls)):\n", " s = list(ls)\n", " if s[i] > 0:\n", " s[i] -= 1\n", " yield list(s)\n", " del s[i]\n", " yield list(s)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def show_trace(start, constraint, simplifier):\n", " \"\"\"\n", " This is a debug function.
You shouldn't concern yourself with\n", " its implementation too much.\n", " \n", " What it does is print out every intermediate step in applying a\n", " simplifier (a function of the form (list, constraint) -> list)\n", " along with whether it is a successful shrink or not.\n", " \"\"\"\n", " if start is None:\n", " while True:\n", " start = gen_list()\n", " if constraint(start):\n", " break\n", "\n", " shrinks = [0]\n", " tests = [0]\n", "\n", " def print_shrink(ls):\n", " tests[0] += 1\n", " if constraint(ls):\n", " shrinks[0] += 1\n", " print(\"✓\", ls)\n", " return True\n", " else:\n", " print(\"✗\", ls)\n", " return False\n", " print(\"✓\", start)\n", " simplifier(start, print_shrink)\n", " print()\n", " print(\"%d shrinks with %d function calls\" % (\n", " shrinks[0], tests[0]))" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from functools import partial" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [5, 5]\n", "✓ [4, 5]\n", "✓ [3, 5]\n", "✓ [2, 5]\n", "✓ [1, 5]\n", "✓ [0, 5]\n", "✗ [5]\n", "✓ [0, 4]\n", "✗ [4]\n", "✓ [0, 3]\n", "✗ [3]\n", "✓ [0, 2]\n", "✗ [2]\n", "✓ [0, 1]\n", "✗ [1]\n", "✓ [0, 0]\n", "✗ [0]\n", "✗ [0]\n", "\n", "10 shrinks with 17 function calls\n" ] } ], "source": [ "show_trace([5, 5], lambda x: len(x) >= 2, partial(greedy_shrink, shrink=shrink1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That worked reasonably well, but it sure was a lot of function calls for such a small amount of shrinking. What would have happened if we'd started with [100, 100]?" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def shrink2(ls):\n", " \"\"\"\n", " Here is an improved shrink function. 
We first try deleting each element\n", " and then we try making each element smaller, but we do so from the left\n", " hand side instead of the right. This means we will always find the\n", " smallest value that can go in there, but we will do so much sooner.\n", " \"\"\"\n", " for i in range(len(ls)):\n", " s = list(ls)\n", " del s[i]\n", " yield list(s)\n", " \n", " for i in range(len(ls)):\n", " for x in range(ls[i]):\n", " s = list(ls)\n", " s[i] = x\n", " yield s" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [5, 5]\n", "✗ [5]\n", "✗ [5]\n", "✓ [0, 5]\n", "✗ [5]\n", "✗ [0]\n", "✓ [0, 0]\n", "✗ [0]\n", "✗ [0]\n", "\n", "2 shrinks with 8 function calls\n" ] } ], "source": [ "show_trace([5, 5], lambda x: len(x) >= 2, partial(\n", " greedy_shrink, shrink=shrink2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This did indeed reduce the number of function calls significantly - we immediately determine that the value in the cell doesn't matter and we can just put zero there. \n", "\n", "But what would have happened if the value *did* matter?" 
] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [1000]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [2]\n", "✗ [3]\n", "✗ [4]\n", "✗ [5]\n", "✗ [6]\n", "✗ [7]\n", "✗ [8]\n", "✗ [9]\n", "✗ [10]\n", "✗ [11]\n", "✗ [12]\n", "✗ [13]\n", "✗ [14]\n", "✗ [15]\n", "✗ [16]\n", "✗ [17]\n", "✗ [18]\n", "✗ [19]\n", "✗ [20]\n", "✗ [21]\n", "✗ [22]\n", "✗ [23]\n", "✗ [24]\n", "✗ [25]\n", "✗ [26]\n", "✗ [27]\n", "✗ [28]\n", "✗ [29]\n", "✗ [30]\n", "✗ [31]\n", "✗ [32]\n", "✗ [33]\n", "✗ [34]\n", "✗ [35]\n", "✗ [36]\n", "✗ [37]\n", "✗ [38]\n", "✗ [39]\n", "✗ [40]\n", "✗ [41]\n", "✗ [42]\n", "✗ [43]\n", "✗ [44]\n", "✗ [45]\n", "✗ [46]\n", "✗ [47]\n", "✗ [48]\n", "✗ [49]\n", "✗ [50]\n", "✗ [51]\n", "✗ [52]\n", "✗ [53]\n", "✗ [54]\n", "✗ [55]\n", "✗ [56]\n", "✗ [57]\n", "✗ [58]\n", "✗ [59]\n", "✗ [60]\n", "✗ [61]\n", "✗ [62]\n", "✗ [63]\n", "✗ [64]\n", "✗ [65]\n", "✗ [66]\n", "✗ [67]\n", "✗ [68]\n", "✗ [69]\n", "✗ [70]\n", "✗ [71]\n", "✗ [72]\n", "✗ [73]\n", "✗ [74]\n", "✗ [75]\n", "✗ [76]\n", "✗ [77]\n", "✗ [78]\n", "✗ [79]\n", "✗ [80]\n", "✗ [81]\n", "✗ [82]\n", "✗ [83]\n", "✗ [84]\n", "✗ [85]\n", "✗ [86]\n", "✗ [87]\n", "✗ [88]\n", "✗ [89]\n", "✗ [90]\n", "✗ [91]\n", "✗ [92]\n", "✗ [93]\n", "✗ [94]\n", "✗ [95]\n", "✗ [96]\n", "✗ [97]\n", "✗ [98]\n", "✗ [99]\n", "✗ [100]\n", "✗ [101]\n", "✗ [102]\n", "✗ [103]\n", "✗ [104]\n", "✗ [105]\n", "✗ [106]\n", "✗ [107]\n", "✗ [108]\n", "✗ [109]\n", "✗ [110]\n", "✗ [111]\n", "✗ [112]\n", "✗ [113]\n", "✗ [114]\n", "✗ [115]\n", "✗ [116]\n", "✗ [117]\n", "✗ [118]\n", "✗ [119]\n", "✗ [120]\n", "✗ [121]\n", "✗ [122]\n", "✗ [123]\n", "✗ [124]\n", "✗ [125]\n", "✗ [126]\n", "✗ [127]\n", "✗ [128]\n", "✗ [129]\n", "✗ [130]\n", "✗ [131]\n", "✗ [132]\n", "✗ [133]\n", "✗ [134]\n", "✗ [135]\n", "✗ [136]\n", "✗ [137]\n", "✗ [138]\n", "✗ [139]\n", "✗ [140]\n", "✗ [141]\n", "✗ [142]\n", "✗ [143]\n", "✗ [144]\n", "✗ [145]\n", "✗ [146]\n", "✗ [147]\n", "✗ 
[148]\n", "✗ [149]\n", "✗ [150]\n", "✗ [151]\n", "✗ [152]\n", "✗ [153]\n", "✗ [154]\n", "✗ [155]\n", "✗ [156]\n", "✗ [157]\n", "✗ [158]\n", "✗ [159]\n", "✗ [160]\n", "✗ [161]\n", "✗ [162]\n", "✗ [163]\n", "✗ [164]\n", "✗ [165]\n", "✗ [166]\n", "✗ [167]\n", "✗ [168]\n", "✗ [169]\n", "✗ [170]\n", "✗ [171]\n", "✗ [172]\n", "✗ [173]\n", "✗ [174]\n", "✗ [175]\n", "✗ [176]\n", "✗ [177]\n", "✗ [178]\n", "✗ [179]\n", "✗ [180]\n", "✗ [181]\n", "✗ [182]\n", "✗ [183]\n", "✗ [184]\n", "✗ [185]\n", "✗ [186]\n", "✗ [187]\n", "✗ [188]\n", "✗ [189]\n", "✗ [190]\n", "✗ [191]\n", "✗ [192]\n", "✗ [193]\n", "✗ [194]\n", "✗ [195]\n", "✗ [196]\n", "✗ [197]\n", "✗ [198]\n", "✗ [199]\n", "✗ [200]\n", "✗ [201]\n", "✗ [202]\n", "✗ [203]\n", "✗ [204]\n", "✗ [205]\n", "✗ [206]\n", "✗ [207]\n", "✗ [208]\n", "✗ [209]\n", "✗ [210]\n", "✗ [211]\n", "✗ [212]\n", "✗ [213]\n", "✗ [214]\n", "✗ [215]\n", "✗ [216]\n", "✗ [217]\n", "✗ [218]\n", "✗ [219]\n", "✗ [220]\n", "✗ [221]\n", "✗ [222]\n", "✗ [223]\n", "✗ [224]\n", "✗ [225]\n", "✗ [226]\n", "✗ [227]\n", "✗ [228]\n", "✗ [229]\n", "✗ [230]\n", "✗ [231]\n", "✗ [232]\n", "✗ [233]\n", "✗ [234]\n", "✗ [235]\n", "✗ [236]\n", "✗ [237]\n", "✗ [238]\n", "✗ [239]\n", "✗ [240]\n", "✗ [241]\n", "✗ [242]\n", "✗ [243]\n", "✗ [244]\n", "✗ [245]\n", "✗ [246]\n", "✗ [247]\n", "✗ [248]\n", "✗ [249]\n", "✗ [250]\n", "✗ [251]\n", "✗ [252]\n", "✗ [253]\n", "✗ [254]\n", "✗ [255]\n", "✗ [256]\n", "✗ [257]\n", "✗ [258]\n", "✗ [259]\n", "✗ [260]\n", "✗ [261]\n", "✗ [262]\n", "✗ [263]\n", "✗ [264]\n", "✗ [265]\n", "✗ [266]\n", "✗ [267]\n", "✗ [268]\n", "✗ [269]\n", "✗ [270]\n", "✗ [271]\n", "✗ [272]\n", "✗ [273]\n", "✗ [274]\n", "✗ [275]\n", "✗ [276]\n", "✗ [277]\n", "✗ [278]\n", "✗ [279]\n", "✗ [280]\n", "✗ [281]\n", "✗ [282]\n", "✗ [283]\n", "✗ [284]\n", "✗ [285]\n", "✗ [286]\n", "✗ [287]\n", "✗ [288]\n", "✗ [289]\n", "✗ [290]\n", "✗ [291]\n", "✗ [292]\n", "✗ [293]\n", "✗ [294]\n", "✗ [295]\n", "✗ [296]\n", "✗ [297]\n", "✗ [298]\n", "✗ [299]\n", "✗ [300]\n", "✗ [301]\n", 
"✗ [302]\n", "✗ [303]\n", "✗ [304]\n", "✗ [305]\n", "✗ [306]\n", "✗ [307]\n", "✗ [308]\n", "✗ [309]\n", "✗ [310]\n", "✗ [311]\n", "✗ [312]\n", "✗ [313]\n", "✗ [314]\n", "✗ [315]\n", "✗ [316]\n", "✗ [317]\n", "✗ [318]\n", "✗ [319]\n", "✗ [320]\n", "✗ [321]\n", "✗ [322]\n", "✗ [323]\n", "✗ [324]\n", "✗ [325]\n", "✗ [326]\n", "✗ [327]\n", "✗ [328]\n", "✗ [329]\n", "✗ [330]\n", "✗ [331]\n", "✗ [332]\n", "✗ [333]\n", "✗ [334]\n", "✗ [335]\n", "✗ [336]\n", "✗ [337]\n", "✗ [338]\n", "✗ [339]\n", "✗ [340]\n", "✗ [341]\n", "✗ [342]\n", "✗ [343]\n", "✗ [344]\n", "✗ [345]\n", "✗ [346]\n", "✗ [347]\n", "✗ [348]\n", "✗ [349]\n", "✗ [350]\n", "✗ [351]\n", "✗ [352]\n", "✗ [353]\n", "✗ [354]\n", "✗ [355]\n", "✗ [356]\n", "✗ [357]\n", "✗ [358]\n", "✗ [359]\n", "✗ [360]\n", "✗ [361]\n", "✗ [362]\n", "✗ [363]\n", "✗ [364]\n", "✗ [365]\n", "✗ [366]\n", "✗ [367]\n", "✗ [368]\n", "✗ [369]\n", "✗ [370]\n", "✗ [371]\n", "✗ [372]\n", "✗ [373]\n", "✗ [374]\n", "✗ [375]\n", "✗ [376]\n", "✗ [377]\n", "✗ [378]\n", "✗ [379]\n", "✗ [380]\n", "✗ [381]\n", "✗ [382]\n", "✗ [383]\n", "✗ [384]\n", "✗ [385]\n", "✗ [386]\n", "✗ [387]\n", "✗ [388]\n", "✗ [389]\n", "✗ [390]\n", "✗ [391]\n", "✗ [392]\n", "✗ [393]\n", "✗ [394]\n", "✗ [395]\n", "✗ [396]\n", "✗ [397]\n", "✗ [398]\n", "✗ [399]\n", "✗ [400]\n", "✗ [401]\n", "✗ [402]\n", "✗ [403]\n", "✗ [404]\n", "✗ [405]\n", "✗ [406]\n", "✗ [407]\n", "✗ [408]\n", "✗ [409]\n", "✗ [410]\n", "✗ [411]\n", "✗ [412]\n", "✗ [413]\n", "✗ [414]\n", "✗ [415]\n", "✗ [416]\n", "✗ [417]\n", "✗ [418]\n", "✗ [419]\n", "✗ [420]\n", "✗ [421]\n", "✗ [422]\n", "✗ [423]\n", "✗ [424]\n", "✗ [425]\n", "✗ [426]\n", "✗ [427]\n", "✗ [428]\n", "✗ [429]\n", "✗ [430]\n", "✗ [431]\n", "✗ [432]\n", "✗ [433]\n", "✗ [434]\n", "✗ [435]\n", "✗ [436]\n", "✗ [437]\n", "✗ [438]\n", "✗ [439]\n", "✗ [440]\n", "✗ [441]\n", "✗ [442]\n", "✗ [443]\n", "✗ [444]\n", "✗ [445]\n", "✗ [446]\n", "✗ [447]\n", "✗ [448]\n", "✗ [449]\n", "✗ [450]\n", "✗ [451]\n", "✗ [452]\n", "✗ [453]\n", "✗ [454]\n", "✗ 
[455]\n", "✗ [456]\n", "✗ [457]\n", "✗ [458]\n", "✗ [459]\n", "✗ [460]\n", "✗ [461]\n", "✗ [462]\n", "✗ [463]\n", "✗ [464]\n", "✗ [465]\n", "✗ [466]\n", "✗ [467]\n", "✗ [468]\n", "✗ [469]\n", "✗ [470]\n", "✗ [471]\n", "✗ [472]\n", "✗ [473]\n", "✗ [474]\n", "✗ [475]\n", "✗ [476]\n", "✗ [477]\n", "✗ [478]\n", "✗ [479]\n", "✗ [480]\n", "✗ [481]\n", "✗ [482]\n", "✗ [483]\n", "✗ [484]\n", "✗ [485]\n", "✗ [486]\n", "✗ [487]\n", "✗ [488]\n", "✗ [489]\n", "✗ [490]\n", "✗ [491]\n", "✗ [492]\n", "✗ [493]\n", "✗ [494]\n", "✗ [495]\n", "✗ [496]\n", "✗ [497]\n", "✗ [498]\n", "✗ [499]\n", "✓ [500]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [2]\n", "✗ [3]\n", "✗ [4]\n", "✗ [5]\n", "✗ [6]\n", "✗ [7]\n", "✗ [8]\n", "✗ [9]\n", "✗ [10]\n", "✗ [11]\n", "✗ [12]\n", "✗ [13]\n", "✗ [14]\n", "✗ [15]\n", "✗ [16]\n", "✗ [17]\n", "✗ [18]\n", "✗ [19]\n", "✗ [20]\n", "✗ [21]\n", "✗ [22]\n", "✗ [23]\n", "✗ [24]\n", "✗ [25]\n", "✗ [26]\n", "✗ [27]\n", "✗ [28]\n", "✗ [29]\n", "✗ [30]\n", "✗ [31]\n", "✗ [32]\n", "✗ [33]\n", "✗ [34]\n", "✗ [35]\n", "✗ [36]\n", "✗ [37]\n", "✗ [38]\n", "✗ [39]\n", "✗ [40]\n", "✗ [41]\n", "✗ [42]\n", "✗ [43]\n", "✗ [44]\n", "✗ [45]\n", "✗ [46]\n", "✗ [47]\n", "✗ [48]\n", "✗ [49]\n", "✗ [50]\n", "✗ [51]\n", "✗ [52]\n", "✗ [53]\n", "✗ [54]\n", "✗ [55]\n", "✗ [56]\n", "✗ [57]\n", "✗ [58]\n", "✗ [59]\n", "✗ [60]\n", "✗ [61]\n", "✗ [62]\n", "✗ [63]\n", "✗ [64]\n", "✗ [65]\n", "✗ [66]\n", "✗ [67]\n", "✗ [68]\n", "✗ [69]\n", "✗ [70]\n", "✗ [71]\n", "✗ [72]\n", "✗ [73]\n", "✗ [74]\n", "✗ [75]\n", "✗ [76]\n", "✗ [77]\n", "✗ [78]\n", "✗ [79]\n", "✗ [80]\n", "✗ [81]\n", "✗ [82]\n", "✗ [83]\n", "✗ [84]\n", "✗ [85]\n", "✗ [86]\n", "✗ [87]\n", "✗ [88]\n", "✗ [89]\n", "✗ [90]\n", "✗ [91]\n", "✗ [92]\n", "✗ [93]\n", "✗ [94]\n", "✗ [95]\n", "✗ [96]\n", "✗ [97]\n", "✗ [98]\n", "✗ [99]\n", "✗ [100]\n", "✗ [101]\n", "✗ [102]\n", "✗ [103]\n", "✗ [104]\n", "✗ [105]\n", "✗ [106]\n", "✗ [107]\n", "✗ [108]\n", "✗ [109]\n", "✗ [110]\n", "✗ [111]\n", "✗ [112]\n", "✗ [113]\n", "✗ [114]\n", "✗ 
[115]\n", "✗ [116]\n", "✗ [117]\n", "✗ [118]\n", "✗ [119]\n", "✗ [120]\n", "✗ [121]\n", "✗ [122]\n", "✗ [123]\n", "✗ [124]\n", "✗ [125]\n", "✗ [126]\n", "✗ [127]\n", "✗ [128]\n", "✗ [129]\n", "✗ [130]\n", "✗ [131]\n", "✗ [132]\n", "✗ [133]\n", "✗ [134]\n", "✗ [135]\n", "✗ [136]\n", "✗ [137]\n", "✗ [138]\n", "✗ [139]\n", "✗ [140]\n", "✗ [141]\n", "✗ [142]\n", "✗ [143]\n", "✗ [144]\n", "✗ [145]\n", "✗ [146]\n", "✗ [147]\n", "✗ [148]\n", "✗ [149]\n", "✗ [150]\n", "✗ [151]\n", "✗ [152]\n", "✗ [153]\n", "✗ [154]\n", "✗ [155]\n", "✗ [156]\n", "✗ [157]\n", "✗ [158]\n", "✗ [159]\n", "✗ [160]\n", "✗ [161]\n", "✗ [162]\n", "✗ [163]\n", "✗ [164]\n", "✗ [165]\n", "✗ [166]\n", "✗ [167]\n", "✗ [168]\n", "✗ [169]\n", "✗ [170]\n", "✗ [171]\n", "✗ [172]\n", "✗ [173]\n", "✗ [174]\n", "✗ [175]\n", "✗ [176]\n", "✗ [177]\n", "✗ [178]\n", "✗ [179]\n", "✗ [180]\n", "✗ [181]\n", "✗ [182]\n", "✗ [183]\n", "✗ [184]\n", "✗ [185]\n", "✗ [186]\n", "✗ [187]\n", "✗ [188]\n", "✗ [189]\n", "✗ [190]\n", "✗ [191]\n", "✗ [192]\n", "✗ [193]\n", "✗ [194]\n", "✗ [195]\n", "✗ [196]\n", "✗ [197]\n", "✗ [198]\n", "✗ [199]\n", "✗ [200]\n", "✗ [201]\n", "✗ [202]\n", "✗ [203]\n", "✗ [204]\n", "✗ [205]\n", "✗ [206]\n", "✗ [207]\n", "✗ [208]\n", "✗ [209]\n", "✗ [210]\n", "✗ [211]\n", "✗ [212]\n", "✗ [213]\n", "✗ [214]\n", "✗ [215]\n", "✗ [216]\n", "✗ [217]\n", "✗ [218]\n", "✗ [219]\n", "✗ [220]\n", "✗ [221]\n", "✗ [222]\n", "✗ [223]\n", "✗ [224]\n", "✗ [225]\n", "✗ [226]\n", "✗ [227]\n", "✗ [228]\n", "✗ [229]\n", "✗ [230]\n", "✗ [231]\n", "✗ [232]\n", "✗ [233]\n", "✗ [234]\n", "✗ [235]\n", "✗ [236]\n", "✗ [237]\n", "✗ [238]\n", "✗ [239]\n", "✗ [240]\n", "✗ [241]\n", "✗ [242]\n", "✗ [243]\n", "✗ [244]\n", "✗ [245]\n", "✗ [246]\n", "✗ [247]\n", "✗ [248]\n", "✗ [249]\n", "✗ [250]\n", "✗ [251]\n", "✗ [252]\n", "✗ [253]\n", "✗ [254]\n", "✗ [255]\n", "✗ [256]\n", "✗ [257]\n", "✗ [258]\n", "✗ [259]\n", "✗ [260]\n", "✗ [261]\n", "✗ [262]\n", "✗ [263]\n", "✗ [264]\n", "✗ [265]\n", "✗ [266]\n", "✗ [267]\n", "✗ [268]\n", 
"✗ [269]\n", "✗ [270]\n", "✗ [271]\n", "✗ [272]\n", "✗ [273]\n", "✗ [274]\n", "✗ [275]\n", "✗ [276]\n", "✗ [277]\n", "✗ [278]\n", "✗ [279]\n", "✗ [280]\n", "✗ [281]\n", "✗ [282]\n", "✗ [283]\n", "✗ [284]\n", "✗ [285]\n", "✗ [286]\n", "✗ [287]\n", "✗ [288]\n", "✗ [289]\n", "✗ [290]\n", "✗ [291]\n", "✗ [292]\n", "✗ [293]\n", "✗ [294]\n", "✗ [295]\n", "✗ [296]\n", "✗ [297]\n", "✗ [298]\n", "✗ [299]\n", "✗ [300]\n", "✗ [301]\n", "✗ [302]\n", "✗ [303]\n", "✗ [304]\n", "✗ [305]\n", "✗ [306]\n", "✗ [307]\n", "✗ [308]\n", "✗ [309]\n", "✗ [310]\n", "✗ [311]\n", "✗ [312]\n", "✗ [313]\n", "✗ [314]\n", "✗ [315]\n", "✗ [316]\n", "✗ [317]\n", "✗ [318]\n", "✗ [319]\n", "✗ [320]\n", "✗ [321]\n", "✗ [322]\n", "✗ [323]\n", "✗ [324]\n", "✗ [325]\n", "✗ [326]\n", "✗ [327]\n", "✗ [328]\n", "✗ [329]\n", "✗ [330]\n", "✗ [331]\n", "✗ [332]\n", "✗ [333]\n", "✗ [334]\n", "✗ [335]\n", "✗ [336]\n", "✗ [337]\n", "✗ [338]\n", "✗ [339]\n", "✗ [340]\n", "✗ [341]\n", "✗ [342]\n", "✗ [343]\n", "✗ [344]\n", "✗ [345]\n", "✗ [346]\n", "✗ [347]\n", "✗ [348]\n", "✗ [349]\n", "✗ [350]\n", "✗ [351]\n", "✗ [352]\n", "✗ [353]\n", "✗ [354]\n", "✗ [355]\n", "✗ [356]\n", "✗ [357]\n", "✗ [358]\n", "✗ [359]\n", "✗ [360]\n", "✗ [361]\n", "✗ [362]\n", "✗ [363]\n", "✗ [364]\n", "✗ [365]\n", "✗ [366]\n", "✗ [367]\n", "✗ [368]\n", "✗ [369]\n", "✗ [370]\n", "✗ [371]\n", "✗ [372]\n", "✗ [373]\n", "✗ [374]\n", "✗ [375]\n", "✗ [376]\n", "✗ [377]\n", "✗ [378]\n", "✗ [379]\n", "✗ [380]\n", "✗ [381]\n", "✗ [382]\n", "✗ [383]\n", "✗ [384]\n", "✗ [385]\n", "✗ [386]\n", "✗ [387]\n", "✗ [388]\n", "✗ [389]\n", "✗ [390]\n", "✗ [391]\n", "✗ [392]\n", "✗ [393]\n", "✗ [394]\n", "✗ [395]\n", "✗ [396]\n", "✗ [397]\n", "✗ [398]\n", "✗ [399]\n", "✗ [400]\n", "✗ [401]\n", "✗ [402]\n", "✗ [403]\n", "✗ [404]\n", "✗ [405]\n", "✗ [406]\n", "✗ [407]\n", "✗ [408]\n", "✗ [409]\n", "✗ [410]\n", "✗ [411]\n", "✗ [412]\n", "✗ [413]\n", "✗ [414]\n", "✗ [415]\n", "✗ [416]\n", "✗ [417]\n", "✗ [418]\n", "✗ [419]\n", "✗ [420]\n", "✗ [421]\n", "✗ 
[422]\n", "✗ [423]\n", "✗ [424]\n", "✗ [425]\n", "✗ [426]\n", "✗ [427]\n", "✗ [428]\n", "✗ [429]\n", "✗ [430]\n", "✗ [431]\n", "✗ [432]\n", "✗ [433]\n", "✗ [434]\n", "✗ [435]\n", "✗ [436]\n", "✗ [437]\n", "✗ [438]\n", "✗ [439]\n", "✗ [440]\n", "✗ [441]\n", "✗ [442]\n", "✗ [443]\n", "✗ [444]\n", "✗ [445]\n", "✗ [446]\n", "✗ [447]\n", "✗ [448]\n", "✗ [449]\n", "✗ [450]\n", "✗ [451]\n", "✗ [452]\n", "✗ [453]\n", "✗ [454]\n", "✗ [455]\n", "✗ [456]\n", "✗ [457]\n", "✗ [458]\n", "✗ [459]\n", "✗ [460]\n", "✗ [461]\n", "✗ [462]\n", "✗ [463]\n", "✗ [464]\n", "✗ [465]\n", "✗ [466]\n", "✗ [467]\n", "✗ [468]\n", "✗ [469]\n", "✗ [470]\n", "✗ [471]\n", "✗ [472]\n", "✗ [473]\n", "✗ [474]\n", "✗ [475]\n", "✗ [476]\n", "✗ [477]\n", "✗ [478]\n", "✗ [479]\n", "✗ [480]\n", "✗ [481]\n", "✗ [482]\n", "✗ [483]\n", "✗ [484]\n", "✗ [485]\n", "✗ [486]\n", "✗ [487]\n", "✗ [488]\n", "✗ [489]\n", "✗ [490]\n", "✗ [491]\n", "✗ [492]\n", "✗ [493]\n", "✗ [494]\n", "✗ [495]\n", "✗ [496]\n", "✗ [497]\n", "✗ [498]\n", "✗ [499]\n", "\n", "1 shrinks with 1003 function calls\n" ] } ], "source": [ "show_trace([1000], lambda x: sum(x) >= 500,\n", " partial(greedy_shrink, shrink=shrink2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because we're trying every intermediate value, what we have amounts to a linear probe up to the smallest value that will work. If that smallest value is large, this will take a long time. Our shrinking is still O(n), but n is now the size of the smallest value that will work rather than the starting value. This is still pretty suboptimal.\n", "\n", "What we want to do is try to replace our linear probe with a binary search. What we'll get isn't exactly a binary search, but it's close enough." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def shrink_integer(n):\n", " \"\"\"\n", " Shrinker for individual integers.\n", " \n", " What happens is that we start from the left, first probing upwards in powers of two.\n", " \n", " When this would take us past our target value we then binary chop towards it.\n", " \"\"\"\n", " if not n:\n", " return\n", " for k in range(64):\n", " probe = 2 ** k\n", " if probe >= n:\n", " break\n", " yield probe - 1\n", " probe //= 2\n", " while True:\n", " probe = (probe + n) // 2\n", " yield probe\n", " if probe == n - 1:\n", " break\n", "\n", "\n", "def shrink3(ls):\n", " for i in range(len(ls)):\n", " s = list(ls)\n", " del s[i]\n", " yield list(s)\n", " for x in shrink_integer(ls[i]):\n", " s = list(ls)\n", " s[i] = x\n", " yield s" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[0, 1, 3, 7, 15, 31, 63, 127, 255, 378, 439, 469, 484, 492, 496, 498, 499]" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(shrink_integer(500))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This gives us a reasonable distribution of O(log(n)) values in the middle while still making sure we start with 0 and finish with n - 1.\n", "\n", "In Hypothesis's actual implementation we also try random values in the probe region in case there's something special about things near powers of two, but we won't worry about that here." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [1000]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [3]\n", "✗ [7]\n", "✗ [15]\n", "✗ [31]\n", "✗ [63]\n", "✗ [127]\n", "✗ [255]\n", "✓ [511]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [3]\n", "✗ [7]\n", "✗ [15]\n", "✗ [31]\n", "✗ [63]\n", "✗ [127]\n", "✗ [255]\n", "✗ [383]\n", "✗ [447]\n", "✗ [479]\n", "✗ [495]\n", "✓ [503]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [3]\n", "✗ [7]\n", "✗ [15]\n", "✗ [31]\n", "✗ [63]\n", "✗ [127]\n", "✗ [255]\n", "✗ [379]\n", "✗ [441]\n", "✗ [472]\n", "✗ [487]\n", "✗ [495]\n", "✗ [499]\n", "✓ [501]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [3]\n", "✗ [7]\n", "✗ [15]\n", "✗ [31]\n", "✗ [63]\n", "✗ [127]\n", "✗ [255]\n", "✗ [378]\n", "✗ [439]\n", "✗ [470]\n", "✗ [485]\n", "✗ [493]\n", "✗ [497]\n", "✗ [499]\n", "✓ [500]\n", "✗ []\n", "✗ [0]\n", "✗ [1]\n", "✗ [3]\n", "✗ [7]\n", "✗ [15]\n", "✗ [31]\n", "✗ [63]\n", "✗ [127]\n", "✗ [255]\n", "✗ [378]\n", "✗ [439]\n", "✗ [469]\n", "✗ [484]\n", "✗ [492]\n", "✗ [496]\n", "✗ [498]\n", "✗ [499]\n", "\n", "4 shrinks with 79 function calls\n" ] } ], "source": [ "show_trace([1000], lambda x: sum(x) >= 500, partial(\n", " greedy_shrink, shrink=shrink3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This now runs in a much more reasonable number of function calls.\n", "\n", "Now we want to look at how to reduce the number of elements in the list more efficiently. We're currently making the same mistake we did with numbers: only reducing one at a time."
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2, 2]\n", "✓ [2, 2, 2, 2]\n", "✓ [2, 2, 2]\n", "✓ [2, 2]\n", "✗ [2]\n", "✗ [0, 2]\n", "✓ [1, 2]\n", "✗ [2]\n", "✗ [0, 2]\n", "✗ [1]\n", "✗ [1, 0]\n", "✗ [1, 1]\n", "\n", "19 shrinks with 26 function calls\n" ] } ], "source": [ "show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n", " greedy_shrink, shrink=shrink3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We won't try too hard here, because typically our lists are not *that* long. 
We will just attempt to start by finding a shortish initial prefix that demonstrates the behaviour:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def shrink_to_prefix(ls):\n", " i = 1\n", " while i < len(ls):\n", " yield ls[:i]\n", " i *= 2\n", "\n", "\n", "def delete_individual_elements(ls):\n", " for i in range(len(ls)):\n", " s = list(ls)\n", " del s[i]\n", " yield list(s)\n", "\n", "\n", "def shrink_individual_elements(ls):\n", " for i in range(len(ls)):\n", " for x in shrink_integer(ls[i]):\n", " s = list(ls)\n", " s[i] = x\n", " yield s\n", " \n", "def shrink4(ls):\n", " yield from shrink_to_prefix(ls)\n", " yield from delete_individual_elements(ls)\n", " yield from shrink_individual_elements(ls) " ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [2]\n", "✓ [2, 2]\n", "✗ [2]\n", "✗ [2]\n", "✗ [2]\n", "✗ [0, 2]\n", "✓ [1, 2]\n", "✗ [1]\n", "✗ [2]\n", "✗ [1]\n", "✗ [0, 2]\n", "✗ [1, 0]\n", "✗ [1, 1]\n", "\n", "2 shrinks with 13 function calls\n" ] } ], "source": [ "show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n", " greedy_shrink, shrink=shrink4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem we now want to address is the fact that when we're shrinking elements we're only shrinking them one at a time. This means that even though we're only O(log(k)) in each element, we're O(log(k)^n) in the whole list where n is the length of the list. For even very modest k this is bad.\n", "\n", "In general we may not be able to fix this, but in practice for a lot of common structures we can exploit similarity to try to do simultaneous shrinking.\n", "\n", "Here is our starting example: We start and finish with all identical values. 
We would like to be able to shortcut through a lot of the uninteresting intermediate examples somehow." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [20, 20, 20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✓ [20, 20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✓ [20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [0, 20, 20, 20, 20]\n", "✗ [1, 20, 20, 20, 20]\n", "✗ [3, 20, 20, 20, 20]\n", "✓ [7, 20, 20, 20, 20]\n", "✗ [7]\n", "✗ [7, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [0, 20, 20, 20, 20]\n", "✗ [1, 20, 20, 20, 20]\n", "✗ [3, 20, 20, 20, 20]\n", "✓ [5, 20, 20, 20, 20]\n", "✗ [5]\n", "✗ [5, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [0, 20, 20, 20, 20]\n", "✗ [1, 20, 20, 20, 20]\n", "✗ [3, 20, 20, 20, 20]\n", "✗ [4, 20, 20, 20, 20]\n", "✗ [5, 0, 20, 20, 20]\n", "✗ [5, 1, 20, 20, 20]\n", "✗ [5, 3, 20, 20, 20]\n", "✓ [5, 7, 20, 20, 20]\n", "✗ [5]\n", "✗ [5, 7]\n", "✗ [5, 7, 20, 20]\n", "✗ [7, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 7, 20, 20]\n", "✗ [5, 7, 20, 20]\n", "✗ [5, 7, 20, 20]\n", "✗ [0, 7, 20, 20, 20]\n", "✗ [1, 7, 20, 20, 20]\n", "✗ [3, 7, 20, 20, 20]\n", "✗ [4, 7, 20, 20, 20]\n", "✗ [5, 0, 20, 20, 20]\n", "✗ [5, 1, 20, 20, 20]\n", "✗ [5, 3, 20, 20, 20]\n", "✓ [5, 5, 20, 20, 20]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 20, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [0, 5, 20, 20, 20]\n", "✗ [1, 5, 20, 20, 20]\n", "✗ [3, 5, 20, 20, 20]\n", "✗ [4, 
5, 20, 20, 20]\n", "✗ [5, 0, 20, 20, 20]\n", "✗ [5, 1, 20, 20, 20]\n", "✗ [5, 3, 20, 20, 20]\n", "✗ [5, 4, 20, 20, 20]\n", "✗ [5, 5, 0, 20, 20]\n", "✗ [5, 5, 1, 20, 20]\n", "✗ [5, 5, 3, 20, 20]\n", "✓ [5, 5, 7, 20, 20]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 7, 20]\n", "✗ [5, 7, 20, 20]\n", "✗ [5, 7, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 7, 20]\n", "✗ [5, 5, 7, 20]\n", "✗ [0, 5, 7, 20, 20]\n", "✗ [1, 5, 7, 20, 20]\n", "✗ [3, 5, 7, 20, 20]\n", "✗ [4, 5, 7, 20, 20]\n", "✗ [5, 0, 7, 20, 20]\n", "✗ [5, 1, 7, 20, 20]\n", "✗ [5, 3, 7, 20, 20]\n", "✗ [5, 4, 7, 20, 20]\n", "✗ [5, 5, 0, 20, 20]\n", "✗ [5, 5, 1, 20, 20]\n", "✗ [5, 5, 3, 20, 20]\n", "✓ [5, 5, 5, 20, 20]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 20, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [0, 5, 5, 20, 20]\n", "✗ [1, 5, 5, 20, 20]\n", "✗ [3, 5, 5, 20, 20]\n", "✗ [4, 5, 5, 20, 20]\n", "✗ [5, 0, 5, 20, 20]\n", "✗ [5, 1, 5, 20, 20]\n", "✗ [5, 3, 5, 20, 20]\n", "✗ [5, 4, 5, 20, 20]\n", "✗ [5, 5, 0, 20, 20]\n", "✗ [5, 5, 1, 20, 20]\n", "✗ [5, 5, 3, 20, 20]\n", "✗ [5, 5, 4, 20, 20]\n", "✗ [5, 5, 5, 0, 20]\n", "✗ [5, 5, 5, 1, 20]\n", "✗ [5, 5, 5, 3, 20]\n", "✓ [5, 5, 5, 7, 20]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 7, 20]\n", "✗ [5, 5, 7, 20]\n", "✗ [5, 5, 7, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 7]\n", "✗ [0, 5, 5, 7, 20]\n", "✗ [1, 5, 5, 7, 20]\n", "✗ [3, 5, 5, 7, 20]\n", "✗ [4, 5, 5, 7, 20]\n", "✗ [5, 0, 5, 7, 20]\n", "✗ [5, 1, 5, 7, 20]\n", "✗ [5, 3, 5, 7, 20]\n", "✗ [5, 4, 5, 7, 20]\n", "✗ [5, 5, 0, 7, 20]\n", "✗ [5, 5, 1, 7, 20]\n", "✗ [5, 5, 3, 7, 20]\n", "✗ [5, 5, 4, 7, 20]\n", "✗ [5, 5, 5, 0, 20]\n", "✗ [5, 5, 5, 1, 20]\n", "✗ [5, 5, 5, 3, 20]\n", "✓ [5, 5, 5, 5, 20]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 20]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 5, 5, 5, 20]\n", "✗ [1, 5, 5, 5, 20]\n", "✗ [3, 5, 5, 5, 20]\n", "✗ [4, 5, 5, 5, 
20]\n", "✗ [5, 0, 5, 5, 20]\n", "✗ [5, 1, 5, 5, 20]\n", "✗ [5, 3, 5, 5, 20]\n", "✗ [5, 4, 5, 5, 20]\n", "✗ [5, 5, 0, 5, 20]\n", "✗ [5, 5, 1, 5, 20]\n", "✗ [5, 5, 3, 5, 20]\n", "✗ [5, 5, 4, 5, 20]\n", "✗ [5, 5, 5, 0, 20]\n", "✗ [5, 5, 5, 1, 20]\n", "✗ [5, 5, 5, 3, 20]\n", "✗ [5, 5, 5, 4, 20]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✓ [5, 5, 5, 5, 7]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 5, 5, 5, 7]\n", "✗ [1, 5, 5, 5, 7]\n", "✗ [3, 5, 5, 5, 7]\n", "✗ [4, 5, 5, 5, 7]\n", "✗ [5, 0, 5, 5, 7]\n", "✗ [5, 1, 5, 5, 7]\n", "✗ [5, 3, 5, 5, 7]\n", "✗ [5, 4, 5, 5, 7]\n", "✗ [5, 5, 0, 5, 7]\n", "✗ [5, 5, 1, 5, 7]\n", "✗ [5, 5, 3, 5, 7]\n", "✗ [5, 5, 4, 5, 7]\n", "✗ [5, 5, 5, 0, 7]\n", "✗ [5, 5, 5, 1, 7]\n", "✗ [5, 5, 5, 3, 7]\n", "✗ [5, 5, 5, 4, 7]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✓ [5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 5, 5, 5, 5]\n", "✗ [1, 5, 5, 5, 5]\n", "✗ [3, 5, 5, 5, 5]\n", "✗ [4, 5, 5, 5, 5]\n", "✗ [5, 0, 5, 5, 5]\n", "✗ [5, 1, 5, 5, 5]\n", "✗ [5, 3, 5, 5, 5]\n", "✗ [5, 4, 5, 5, 5]\n", "✗ [5, 5, 0, 5, 5]\n", "✗ [5, 5, 1, 5, 5]\n", "✗ [5, 5, 3, 5, 5]\n", "✗ [5, 5, 4, 5, 5]\n", "✗ [5, 5, 5, 0, 5]\n", "✗ [5, 5, 5, 1, 5]\n", "✗ [5, 5, 5, 3, 5]\n", "✗ [5, 5, 5, 4, 5]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✗ [5, 5, 5, 5, 4]\n", "\n", "12 shrinks with 236 function calls\n" ] } ], "source": [ "show_trace([20] * 7,\n", " lambda x: len([t for t in x if t >= 5]) >= 5,\n", " partial(greedy_shrink, shrink=shrink4))" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def shrink_shared(ls):\n", " \"\"\"\n", " Look for all sets of shared indices and try to perform a 
simultaneous shrink on\n", " their value, replacing all of them at once.\n", " \n", " In actual Hypothesis we also try replacing only subsets of the values when there\n", " are more than two shared values, but we won't worry about that here.\n", " \"\"\"\n", " shared_indices = {}\n", " for i in range(len(ls)):\n", " shared_indices.setdefault(ls[i], []).append(i)\n", " for sharing in shared_indices.values():\n", " if len(sharing) > 1:\n", " for v in shrink_integer(ls[sharing[0]]):\n", " s = list(ls)\n", " for i in sharing:\n", " s[i] = v\n", " yield s\n", "\n", "\n", "def shrink5(ls):\n", " yield from shrink_to_prefix(ls)\n", " yield from delete_individual_elements(ls)\n", " yield from shrink_shared(ls)\n", " yield from shrink_individual_elements(ls)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [20, 20, 20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✓ [20, 20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✓ [20, 20, 20, 20, 20]\n", "✗ [20]\n", "✗ [20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [20, 20, 20, 20]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✓ [7, 7, 7, 7, 7]\n", "✗ [7]\n", "✗ [7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✓ [5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✗ [4, 4, 4, 4, 4]\n", "✗ [0, 5, 5, 5, 5]\n", "✗ [1, 5, 5, 5, 5]\n", "✗ [3, 5, 5, 5, 5]\n", "✗ [4, 5, 5, 5, 5]\n", "✗ [5, 0, 5, 5, 5]\n", "✗ [5, 1, 5, 5, 5]\n", "✗ [5, 
3, 5, 5, 5]\n", "✗ [5, 4, 5, 5, 5]\n", "✗ [5, 5, 0, 5, 5]\n", "✗ [5, 5, 1, 5, 5]\n", "✗ [5, 5, 3, 5, 5]\n", "✗ [5, 5, 4, 5, 5]\n", "✗ [5, 5, 5, 0, 5]\n", "✗ [5, 5, 5, 1, 5]\n", "✗ [5, 5, 5, 3, 5]\n", "✗ [5, 5, 5, 4, 5]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✗ [5, 5, 5, 5, 4]\n", "\n", "4 shrinks with 64 function calls\n" ] } ], "source": [ "show_trace([20] * 7,\n", " lambda x: len([t for t in x if t >= 5]) >= 5,\n", " partial(greedy_shrink, shrink=shrink5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This achieves the desired result. We rapidly progress through all of the intermediate stages. We do still have to perform individual shrinks at the end (this is unavoidable), but the size of the elements is much smaller now so it takes less time.\n", "\n", "Unfortunately, while this solves the problem in this case, it's almost useless in general: unless you find yourself in exactly the right starting position it never does anything."
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [20, 21, 22, 23, 24, 25, 26]\n", "✗ [20]\n", "✗ [20, 21]\n", "✗ [20, 21, 22, 23]\n", "✓ [21, 22, 23, 24, 25, 26]\n", "✗ [21]\n", "✗ [21, 22]\n", "✗ [21, 22, 23, 24]\n", "✓ [22, 23, 24, 25, 26]\n", "✗ [22]\n", "✗ [22, 23]\n", "✗ [22, 23, 24, 25]\n", "✗ [23, 24, 25, 26]\n", "✗ [22, 24, 25, 26]\n", "✗ [22, 23, 25, 26]\n", "✗ [22, 23, 24, 26]\n", "✗ [22, 23, 24, 25]\n", "✗ [0, 23, 24, 25, 26]\n", "✗ [1, 23, 24, 25, 26]\n", "✗ [3, 23, 24, 25, 26]\n", "✓ [7, 23, 24, 25, 26]\n", "✗ [7]\n", "✗ [7, 23]\n", "✗ [7, 23, 24, 25]\n", "✗ [23, 24, 25, 26]\n", "✗ [7, 24, 25, 26]\n", "✗ [7, 23, 25, 26]\n", "✗ [7, 23, 24, 26]\n", "✗ [7, 23, 24, 25]\n", "✗ [0, 23, 24, 25, 26]\n", "✗ [1, 23, 24, 25, 26]\n", "✗ [3, 23, 24, 25, 26]\n", "✓ [5, 23, 24, 25, 26]\n", "✗ [5]\n", "✗ [5, 23]\n", "✗ [5, 23, 24, 25]\n", "✗ [23, 24, 25, 26]\n", "✗ [5, 24, 25, 26]\n", "✗ [5, 23, 25, 26]\n", "✗ [5, 23, 24, 26]\n", "✗ [5, 23, 24, 25]\n", "✗ [0, 23, 24, 25, 26]\n", "✗ [1, 23, 24, 25, 26]\n", "✗ [3, 23, 24, 25, 26]\n", "✗ [4, 23, 24, 25, 26]\n", "✗ [5, 0, 24, 25, 26]\n", "✗ [5, 1, 24, 25, 26]\n", "✗ [5, 3, 24, 25, 26]\n", "✓ [5, 7, 24, 25, 26]\n", "✗ [5]\n", "✗ [5, 7]\n", "✗ [5, 7, 24, 25]\n", "✗ [7, 24, 25, 26]\n", "✗ [5, 24, 25, 26]\n", "✗ [5, 7, 25, 26]\n", "✗ [5, 7, 24, 26]\n", "✗ [5, 7, 24, 25]\n", "✗ [0, 7, 24, 25, 26]\n", "✗ [1, 7, 24, 25, 26]\n", "✗ [3, 7, 24, 25, 26]\n", "✗ [4, 7, 24, 25, 26]\n", "✗ [5, 0, 24, 25, 26]\n", "✗ [5, 1, 24, 25, 26]\n", "✗ [5, 3, 24, 25, 26]\n", "✓ [5, 5, 24, 25, 26]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 24, 25]\n", "✗ [5, 24, 25, 26]\n", "✗ [5, 24, 25, 26]\n", "✗ [5, 5, 25, 26]\n", "✗ [5, 5, 24, 26]\n", "✗ [5, 5, 24, 25]\n", "✗ [0, 0, 24, 25, 26]\n", "✗ [1, 1, 24, 25, 26]\n", "✗ [3, 3, 24, 25, 26]\n", "✗ [4, 4, 24, 25, 26]\n", "✗ [0, 5, 24, 25, 26]\n", "✗ [1, 5, 24, 25, 26]\n", "✗ [3, 5, 24, 25, 26]\n", "✗ [4, 
5, 24, 25, 26]\n", "✗ [5, 0, 24, 25, 26]\n", "✗ [5, 1, 24, 25, 26]\n", "✗ [5, 3, 24, 25, 26]\n", "✗ [5, 4, 24, 25, 26]\n", "✗ [5, 5, 0, 25, 26]\n", "✗ [5, 5, 1, 25, 26]\n", "✗ [5, 5, 3, 25, 26]\n", "✓ [5, 5, 7, 25, 26]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 7, 25]\n", "✗ [5, 7, 25, 26]\n", "✗ [5, 7, 25, 26]\n", "✗ [5, 5, 25, 26]\n", "✗ [5, 5, 7, 26]\n", "✗ [5, 5, 7, 25]\n", "✗ [0, 0, 7, 25, 26]\n", "✗ [1, 1, 7, 25, 26]\n", "✗ [3, 3, 7, 25, 26]\n", "✗ [4, 4, 7, 25, 26]\n", "✗ [0, 5, 7, 25, 26]\n", "✗ [1, 5, 7, 25, 26]\n", "✗ [3, 5, 7, 25, 26]\n", "✗ [4, 5, 7, 25, 26]\n", "✗ [5, 0, 7, 25, 26]\n", "✗ [5, 1, 7, 25, 26]\n", "✗ [5, 3, 7, 25, 26]\n", "✗ [5, 4, 7, 25, 26]\n", "✗ [5, 5, 0, 25, 26]\n", "✗ [5, 5, 1, 25, 26]\n", "✗ [5, 5, 3, 25, 26]\n", "✓ [5, 5, 5, 25, 26]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 25]\n", "✗ [5, 5, 25, 26]\n", "✗ [5, 5, 25, 26]\n", "✗ [5, 5, 25, 26]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 25]\n", "✗ [0, 0, 0, 25, 26]\n", "✗ [1, 1, 1, 25, 26]\n", "✗ [3, 3, 3, 25, 26]\n", "✗ [4, 4, 4, 25, 26]\n", "✗ [0, 5, 5, 25, 26]\n", "✗ [1, 5, 5, 25, 26]\n", "✗ [3, 5, 5, 25, 26]\n", "✗ [4, 5, 5, 25, 26]\n", "✗ [5, 0, 5, 25, 26]\n", "✗ [5, 1, 5, 25, 26]\n", "✗ [5, 3, 5, 25, 26]\n", "✗ [5, 4, 5, 25, 26]\n", "✗ [5, 5, 0, 25, 26]\n", "✗ [5, 5, 1, 25, 26]\n", "✗ [5, 5, 3, 25, 26]\n", "✗ [5, 5, 4, 25, 26]\n", "✗ [5, 5, 5, 0, 26]\n", "✗ [5, 5, 5, 1, 26]\n", "✗ [5, 5, 5, 3, 26]\n", "✓ [5, 5, 5, 7, 26]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 7, 26]\n", "✗ [5, 5, 7, 26]\n", "✗ [5, 5, 7, 26]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 7]\n", "✗ [0, 0, 0, 7, 26]\n", "✗ [1, 1, 1, 7, 26]\n", "✗ [3, 3, 3, 7, 26]\n", "✗ [4, 4, 4, 7, 26]\n", "✗ [0, 5, 5, 7, 26]\n", "✗ [1, 5, 5, 7, 26]\n", "✗ [3, 5, 5, 7, 26]\n", "✗ [4, 5, 5, 7, 26]\n", "✗ [5, 0, 5, 7, 26]\n", "✗ [5, 1, 5, 7, 26]\n", "✗ [5, 3, 5, 7, 26]\n", "✗ [5, 4, 5, 7, 26]\n", "✗ [5, 5, 0, 7, 26]\n", "✗ [5, 5, 1, 7, 26]\n", "✗ [5, 5, 3, 7, 26]\n", "✗ [5, 5, 4, 7, 26]\n", "✗ [5, 5, 5, 0, 26]\n", "✗ [5, 5, 5, 1, 
26]\n", "✗ [5, 5, 5, 3, 26]\n", "✓ [5, 5, 5, 5, 26]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 26]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 0, 0, 0, 26]\n", "✗ [1, 1, 1, 1, 26]\n", "✗ [3, 3, 3, 3, 26]\n", "✗ [4, 4, 4, 4, 26]\n", "✗ [0, 5, 5, 5, 26]\n", "✗ [1, 5, 5, 5, 26]\n", "✗ [3, 5, 5, 5, 26]\n", "✗ [4, 5, 5, 5, 26]\n", "✗ [5, 0, 5, 5, 26]\n", "✗ [5, 1, 5, 5, 26]\n", "✗ [5, 3, 5, 5, 26]\n", "✗ [5, 4, 5, 5, 26]\n", "✗ [5, 5, 0, 5, 26]\n", "✗ [5, 5, 1, 5, 26]\n", "✗ [5, 5, 3, 5, 26]\n", "✗ [5, 5, 4, 5, 26]\n", "✗ [5, 5, 5, 0, 26]\n", "✗ [5, 5, 5, 1, 26]\n", "✗ [5, 5, 5, 3, 26]\n", "✗ [5, 5, 5, 4, 26]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✓ [5, 5, 5, 5, 7]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 7]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 0, 0, 0, 7]\n", "✗ [1, 1, 1, 1, 7]\n", "✗ [3, 3, 3, 3, 7]\n", "✗ [4, 4, 4, 4, 7]\n", "✗ [0, 5, 5, 5, 7]\n", "✗ [1, 5, 5, 5, 7]\n", "✗ [3, 5, 5, 5, 7]\n", "✗ [4, 5, 5, 5, 7]\n", "✗ [5, 0, 5, 5, 7]\n", "✗ [5, 1, 5, 5, 7]\n", "✗ [5, 3, 5, 5, 7]\n", "✗ [5, 4, 5, 5, 7]\n", "✗ [5, 5, 0, 5, 7]\n", "✗ [5, 5, 1, 5, 7]\n", "✗ [5, 5, 3, 5, 7]\n", "✗ [5, 5, 4, 5, 7]\n", "✗ [5, 5, 5, 0, 7]\n", "✗ [5, 5, 5, 1, 7]\n", "✗ [5, 5, 5, 3, 7]\n", "✗ [5, 5, 5, 4, 7]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✓ [5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✗ [4, 4, 4, 4, 4]\n", "✗ [0, 5, 5, 5, 5]\n", "✗ [1, 5, 5, 5, 5]\n", "✗ [3, 5, 5, 5, 5]\n", "✗ [4, 5, 5, 5, 5]\n", "✗ [5, 0, 5, 5, 5]\n", "✗ [5, 1, 5, 5, 5]\n", "✗ [5, 3, 5, 5, 5]\n", "✗ [5, 4, 5, 5, 5]\n", "✗ [5, 5, 0, 5, 5]\n", "✗ [5, 5, 1, 5, 5]\n", "✗ [5, 5, 3, 5, 5]\n", "✗ [5, 5, 4, 5, 5]\n", "✗ [5, 5, 5, 
0, 5]\n", "✗ [5, 5, 5, 1, 5]\n", "✗ [5, 5, 5, 3, 5]\n", "✗ [5, 5, 5, 4, 5]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✗ [5, 5, 5, 5, 4]\n", "\n", "12 shrinks with 264 function calls\n" ] } ], "source": [ "show_trace([20 + i for i in range(7)],\n", " lambda x: len([t for t in x if t >= 5]) >= 5,\n", " partial(greedy_shrink, shrink=shrink5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So what we're going to do is try a simplification first which *creates* that exact right starting condition. Further, it's one that will potentially be very useful even when we don't start out with shared shrinks.\n", "\n", "We're going to use values from the list as evidence for how complex things need to be. Starting from the smallest, we'll try capping the array at each individual value and see what happens.\n", "\n", "As well as being a potentially very rapid shrink, this creates lists with lots of duplicates, which lets the simultaneous shrinking shine."
] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def replace_with_simpler(ls):\n", " if not ls:\n", " return\n", " values = set(ls)\n", " values.remove(max(ls))\n", " values = sorted(values)\n", " for v in values:\n", " yield [min(v, l) for l in ls]\n", "\n", "\n", "def shrink6(ls):\n", " yield from shrink_to_prefix(ls)\n", " yield from delete_individual_elements(ls)\n", " yield from replace_with_simpler(ls)\n", " yield from shrink_shared(ls)\n", " yield from shrink_individual_elements(ls)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [20, 21, 22, 23, 24, 25, 26]\n", "✗ [20]\n", "✗ [20, 21]\n", "✗ [20, 21, 22, 23]\n", "✓ [21, 22, 23, 24, 25, 26]\n", "✗ [21]\n", "✗ [21, 22]\n", "✗ [21, 22, 23, 24]\n", "✓ [22, 23, 24, 25, 26]\n", "✗ [22]\n", "✗ [22, 23]\n", "✗ [22, 23, 24, 25]\n", "✗ [23, 24, 25, 26]\n", "✗ [22, 24, 25, 26]\n", "✗ [22, 23, 25, 26]\n", "✗ [22, 23, 24, 26]\n", "✗ [22, 23, 24, 25]\n", "✓ [22, 22, 22, 22, 22]\n", "✗ [22]\n", "✗ [22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [22, 22, 22, 22]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✓ [7, 7, 7, 7, 7]\n", "✗ [7]\n", "✗ [7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [7, 7, 7, 7]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✓ [5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [5, 5, 5, 5]\n", "✗ [0, 0, 0, 0, 0]\n", "✗ [1, 1, 1, 1, 1]\n", "✗ [3, 3, 3, 3, 3]\n", "✗ [4, 4, 4, 4, 4]\n", "✗ [0, 5, 5, 5, 5]\n", "✗ [1, 5, 5, 5, 5]\n", "✗ [3, 5, 5, 5, 5]\n", "✗ [4, 5, 5, 5, 5]\n", "✗ [5, 0, 5, 5, 5]\n", "✗ [5, 1, 5, 5, 5]\n", "✗ [5, 
3, 5, 5, 5]\n", "✗ [5, 4, 5, 5, 5]\n", "✗ [5, 5, 0, 5, 5]\n", "✗ [5, 5, 1, 5, 5]\n", "✗ [5, 5, 3, 5, 5]\n", "✗ [5, 5, 4, 5, 5]\n", "✗ [5, 5, 5, 0, 5]\n", "✗ [5, 5, 5, 1, 5]\n", "✗ [5, 5, 5, 3, 5]\n", "✗ [5, 5, 5, 4, 5]\n", "✗ [5, 5, 5, 5, 0]\n", "✗ [5, 5, 5, 5, 1]\n", "✗ [5, 5, 5, 5, 3]\n", "✗ [5, 5, 5, 5, 4]\n", "\n", "5 shrinks with 73 function calls\n" ] } ], "source": [ "show_trace([20 + i for i in range(7)],\n", " lambda x: len([t for t in x if t >= 5]) >= 5,\n", " partial(greedy_shrink, shrink=shrink6))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're going to start looking at some numbers.\n", "\n", "We'll generate 1000 random lists satisfying each predicate and then simplify them down to the smallest possible examples satisfying that predicate. This lets us verify that these aren't just cherry-picked examples and that our methods help in the general case. We fix the set of examples per predicate so that we're comparing like for like.\n", "\n", "A more rigorous statistical treatment would probably be a good idea."
] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from collections import OrderedDict\n", "\n", "conditions = OrderedDict([\n", " (\"length >= 2\", lambda xs: len(xs) >= 2),\n", " (\"sum >= 500\", lambda xs: sum(xs) >= 500),\n", " (\"sum >= 3\", lambda xs: sum(xs) >= 3),\n", " (\"At least 10 by 5\", lambda xs: len(\n", " [t for t in xs if t >= 5]) >= 10),\n", "])" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[17861213645196285187,\n", " 15609796832515195084,\n", " 8808697621832673046,\n", " 1013319847337885109,\n", " 1252281976438780211,\n", " 15526909770962854196,\n", " 2065337703776048239,\n", " 11654092230944134701,\n", " 5554896851708700201,\n", " 17485190250805381572,\n", " 7700396730246958474,\n", " 402840882133605445,\n", " 5303116940477413125,\n", " 7459257850255946545,\n", " 10349184495871650178,\n", " 4361155591615075311,\n", " 15194020468024244632,\n", " 14428821588688846242,\n", " 5754975712549869618,\n", " 13740966788951413307,\n", " 15209704957418077856,\n", " 12562588328524673262,\n", " 8415556016795311987,\n", " 3993098291779210741,\n", " 16874756914619597640,\n", " 7932421182532982309,\n", " 1080869529149674704,\n", " 13878842261614060122,\n", " 229976195287031921,\n", " 8378461140013520338,\n", " 6189522326946191255,\n", " 16684625600934047114,\n", " 12533448641134015292,\n", " 10459192142175991903,\n", " 15688511015570391481,\n", " 3091340728247101611,\n", " 4034760776171697910,\n", " 6258572097778886531,\n", " 13555449085571665140,\n", " 6727488149749641424,\n", " 7125107819562430884,\n", " 1557872425804423698,\n", " 4810250441100696888,\n", " 10500486959813930693,\n", " 841300069403644975,\n", " 9278626999406014662,\n", " 17219731431761688449,\n", " 15650446646901259126,\n", " 8683172055034528265,\n", " 5138373693056086816,\n", " 4055877702343936882,\n", " 5696765901584750542,\n", " 
7133363948804979946,\n", " 988518370429658551,\n", " 16302597472193523184,\n", " 579078764159525857,\n", " 10678347012503400890,\n", " 8433836779160269996,\n", " 13884258181758870664,\n", " 13594877609651310055]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import random\n", "\n", "N_EXAMPLES = 1000\n", "\n", "datasets = {}\n", "\n", "def gen_list(rnd):\n", " return [\n", " rnd.getrandbits(64)\n", " for _ in range(rnd.randint(0, 100))\n", " ]\n", "\n", "def dataset_for(condition):\n", " if condition in datasets:\n", " return datasets[condition]\n", " constraint = conditions[condition]\n", " dataset = []\n", " rnd = random.Random(condition)\n", " while len(dataset) < N_EXAMPLES:\n", " ls = gen_list(rnd)\n", " if constraint(ls):\n", " dataset.append(ls)\n", " datasets[condition] = dataset\n", " return dataset\n", "\n", "dataset_for(\"sum >= 3\")[1]" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "13" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# In order to avoid run-away cases where things will take basically forever\n", "# we cap at 5000 as \"you've taken too long. Stop it\". 
Because we're only ever\n", "# showing the worst case scenario we'll just display this as > 5000 if we ever\n", "# hit it and it won't distort statistics.\n", "MAX_COUNT = 5000\n", "\n", "class MaximumCountExceeded(Exception):\n", " pass\n", "\n", "def call_counts(condition, simplifier):\n", " constraint = conditions[condition]\n", " dataset = dataset_for(condition)\n", " counts = []\n", "\n", " for ex in dataset:\n", " counter = [0]\n", " \n", " def run_and_count(ls):\n", " counter[0] += 1\n", " if counter[0] > MAX_COUNT:\n", " raise MaximumCountExceeded()\n", " return constraint(ls)\n", " \n", " try:\n", " simplifier(ex, run_and_count)\n", " counts.extend(counter)\n", " except MaximumCountExceeded:\n", " counts.append(MAX_COUNT + 1)\n", " break\n", " return counts\n", " \n", "def worst_case(condition, simplifier):\n", " return max(call_counts(condition, simplifier))\n", "\n", "worst_case(\n", " \"length >= 2\",\n", " partial(greedy_shrink, shrink=shrink6))" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from IPython.display import HTML\n", "\n", "def compare_simplifiers(named_simplifiers):\n", " \"\"\"\n", " Given a list of (name, simplifier) pairs, output a table comparing\n", " the worst case performance of each on our current set of examples.\n", " \"\"\"\n", " html_fragments = []\n", " html_fragments.append(\"<table>\\n<tr>\\n\")\n", " header = [\"Condition\"]\n", " header.extend(name for name, _ in named_simplifiers)\n", " for h in header:\n", " html_fragments.append(\"<th>%s</th>\" % (h,))\n", " html_fragments.append(\"\\n</tr>\\n\")\n", " \n", " for name in conditions:\n", " bits = [name.replace(\">\", \"&gt;\")] \n", " for _, simplifier in named_simplifiers:\n", " value = worst_case(name, simplifier)\n", " if value <= MAX_COUNT:\n", " bits.append(str(value))\n", " else:\n", " bits.append(\" &gt; %d\" % (MAX_COUNT,))\n", " html_fragments.append(\"<tr>\")\n", " html_fragments.append(' '.join(\n", " \"<td>%s</td>\" % (b,) for b in bits))\n", " html_fragments.append(\"</tr>\")\n", " html_fragments.append(\"\\n</table>\")\n", " return HTML('\n'.join(html_fragments))" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition23456
length >= 2 106 105 13 13 13
sum >= 500 1102 178 80 80 80
sum >= 3 108 107 9 9 9
At least 10 by 5 535 690 809 877 144
" ], "text/plain": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (f.__name__[-1], partial(greedy_shrink, shrink=f))\n", " for f in [shrink2, shrink3, shrink4, shrink5, shrink6]\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So you can see from the above table, iterations 2 through 5 were a little ambiguous, in that they helped a lot in the cases they were designed to help with but hurt in other cases. 6, however, is clearly the best of the lot, being no worse than any of the others on any of the cases and often significantly better.\n", "\n", "Rather than continuing to refine our shrink further, we instead look to improvements to how we use shrinking. We'll start by noting a simple optimization: If you look at our traces above, we often checked the same example twice. We're only interested in deterministic conditions, so this isn't useful to do. So we'll start by simply pruning out all duplicates. This should have exactly the same set and order of successful shrinks but will avoid a bunch of redundant work." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def greedy_shrink_with_dedupe(ls, constraint, shrink):\n", " seen = set()\n", " while True:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " ls = s\n", " break\n", " else:\n", " return ls" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
ConditionNormalDeduped
length >= 2 13 6
sum >= 500 80 35
sum >= 3 9 6
At least 10 by 5 144 107
" ], "text/plain": [ "" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Normal\", partial(greedy_shrink, shrink=shrink6)),\n", " (\"Deduped\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", "\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected, this is a significant improvement in some cases. It is logically impossible that it could ever make things worse, but it's nice that it makes it better.\n", "\n", "So far we've only looked at things where the interaction between elements was fairly light - in the sum cases the values of other elements mattered a bit, but shrinking an integer could never enable other shrinks. Let's look at one where this is not the case: where our condition is that we have at least 10 distinct elements." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100]\n", "✗ [100, 101]\n", "✗ [100, 101, 102, 103]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107]\n", "✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n", "✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n", "✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n", "✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n", "✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n", "✗ 
[100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n", "✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 101]\n", "✗ [0, 101, 102, 103]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 107]\n", "✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 101, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 101, 102, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 101, 102, 103, 105, 106, 107, 108, 109]\n", "✗ [0, 101, 102, 103, 104, 106, 107, 108, 109]\n", "✗ [0, 101, 102, 103, 104, 105, 107, 108, 109]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 108, 109]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 107, 109]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n", "✗ [0, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n", "✗ [0, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [0, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [0, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [0, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 102, 103]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 107]\n", "✗ [1, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 102, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 102, 103, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 102, 103, 104, 106, 107, 108, 109]\n", "✗ [0, 1, 102, 103, 104, 105, 107, 108, 109]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 108, 109]\n", "✗ [0, 1, 102, 103, 
104, 105, 106, 107, 109]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 102, 102, 102, 102, 102, 102, 102, 102]\n", "✗ [0, 1, 102, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [0, 1, 102, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [0, 1, 102, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 102, 103, 104, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 3, 103]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 107]\n", "✗ [1, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 103, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 103, 104, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 103, 104, 105, 107, 108, 109]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 108, 109]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 107, 109]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 3, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 3, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [0, 1, 3, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [0, 1, 3, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 3, 103, 104, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 103]\n", "✗ [0, 1, 
2, 103, 104, 105, 106, 107]\n", "✗ [1, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 103, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 103, 104, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 103, 104, 105, 107, 108, 109]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 108, 109]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 107, 109]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [0, 1, 2, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [0, 1, 2, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 103, 104, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 107]\n", "✗ [1, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 104, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 104, 105, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 104, 104, 104, 104, 104, 104]\n", "✗ [0, 1, 2, 
3, 104, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 104, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 107]\n", "✗ [1, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 7, 105, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 7, 7, 7, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 7, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 7, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 
107, 108, 109]\n", "✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 107]\n", "✗ [1, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 105, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 5, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 5, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 5, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 107]\n", "✗ [1, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 105, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 
0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 105, 105, 105, 105, 105]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 4, 105, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 107]\n", "✗ [1, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 7, 7, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 4, 7, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 
7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 107]\n", "✗ [1, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 107, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 106, 106, 106]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 4, 5, 106, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n", "✗ 
[0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 107]\n", "✗ [1, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 107, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 107]\n", "✗ [1, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 107, 108, 109]\n", 
"✗ [0, 1, 2, 3, 4, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 107, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 107, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 107, 107, 107]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 107, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 108]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 
1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 108, 108]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 
3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 15, 15]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 
2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 11, 11]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 9, 9]\n", "✗ 
[0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 0, 
4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 
1, 1, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 0, 3, 4, 5, 
6, 7, 8, 11]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0]\n", "✗ [0, 1]\n", "✗ [0, 1, 2, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7]\n", "✗ [1, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n", "✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n", "✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n", "✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n", "✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 
0, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "\n", "20 shrinks with 848 function calls\n" ] } ], "source": [ "show_trace([100 + i for i in range(10)],\n", " lambda x: len(set(x)) >= 10,\n", " partial(greedy_shrink, shrink=shrink6))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This does not do very well at all.\n", "\n", "The reason it doesn't is that we keep trying useless shrinks. e.g. 
none of the shrinks done by shrink\\_to\\_prefix, replace\\_with\\_simpler or shrink\\_shared will ever do anything useful here.\n", "\n", "So let's switch to an approach where we try each shrink type until it stops working and then move on to the next type:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def multicourse_shrink1(ls, constraint):\n", " seen = set()\n", " for shrink in [\n", " shrink_to_prefix,\n", " replace_with_simpler,\n", " shrink_shared,\n", " shrink_individual_elements,\n", " ]:\n", " while True:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " ls = s\n", " break\n", " else:\n", " break\n", " return ls" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100]\n", "✗ [100, 101]\n", "✗ [100, 101, 102, 103]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107]\n", "✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n", "✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n", "✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n", "✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n", "✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 7, 
106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n", "✗ [0, 1, 
2, 3, 4, 1, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n", "✗ [0, 1, 2, 
3, 4, 5, 6, 0, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 
5, 9, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n", "✗ [0, 1, 2, 
3, 4, 5, 6, 5, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n", 
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "\n", "20 shrinks with 318 function calls\n" ] } ], "source": [ "show_trace([100 + i for i in range(10)],\n", " lambda x: len(set(x)) >= 10,\n", " multicourse_shrink1)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "collapsed": false }, "outputs": [], "source": [ "conditions[\"10 distinct elements\"] = lambda xs: len(set(xs)) >= 10" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition             Single pass   Multi pass
length >= 2                     6            4
sum >= 500                     35           34
sum >= 3                        6            5
At least 10 by 5              107           58
10 distinct elements          623          320
" ], "text/plain": [ "" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Single pass\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", " (\"Multi pass\", multicourse_shrink1)\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So that helped, but not as much as we'd have liked. It's saved us about half the calls, when really we wanted to save 90% of the calls.\n", "\n", "We're on the right track though. The problem is not that our solution isn't good, it's that it didn't go far enough: We're *still* making an awful lot of useless calls. The problem is that each time we shrink the element at index i we try shrinking the elements at indexes 0 through i - 1, and this will never work. So what we want to do is to break shrinking elements into a separate shrinker for each index:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def simplify_index(i):\n", " def accept(ls):\n", " if i >= len(ls):\n", " return\n", " for v in shrink_integer(ls[i]):\n", " s = list(ls)\n", " s[i] = v\n", " yield s\n", " return accept\n", "\n", "def shrinkers_for(ls):\n", " yield shrink_to_prefix\n", " yield delete_individual_elements\n", " yield replace_with_simpler\n", " yield shrink_shared\n", " for i in range(len(ls)):\n", " yield simplify_index(i)\n", "\n", "def multicourse_shrink2(ls, constraint):\n", " seen = set()\n", " for shrink in shrinkers_for(ls):\n", " while True:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " ls = s\n", " break\n", " else:\n", " break\n", " return ls" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100]\n", "✗ [100, 101]\n", "✗ [100, 101, 
102, 103]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107]\n", "✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n", "✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n", "✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n", "✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n", "✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n", "✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n", "✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n", "✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n", "✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 0, 
106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n", "✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", "✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n", "\n", "20 shrinks with 75 function calls\n" ] } ], "source": [ "show_trace([100 + i for i in range(10)],\n", " lambda x: len(set(x)) >= 10,\n", " multicourse_shrink2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This worked great! It saved us a huge number of function calls.\n", "\n", "Unfortunately it's wrong. Actually the previous one was wrong too, but this one is more obviously wrong. 
The problem is that shrinking later elements can unlock more shrinks for earlier elements and we'll never be able to benefit from that here:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [101, 100]\n", "✗ [101]\n", "✗ [100]\n", "✗ [100, 100]\n", "✗ [0, 100]\n", "✗ [1, 100]\n", "✗ [3, 100]\n", "✗ [7, 100]\n", "✗ [15, 100]\n", "✗ [31, 100]\n", "✗ [63, 100]\n", "✗ [82, 100]\n", "✗ [91, 100]\n", "✗ [96, 100]\n", "✗ [98, 100]\n", "✗ [99, 100]\n", "✓ [101, 0]\n", "\n", "1 shrinks with 16 function calls\n" ] } ], "source": [ "show_trace([101, 100],\n", " lambda x: len(x) >= 2 and x[0] > x[1],\n", " multicourse_shrink2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Armed with this example we can also construct a case where the previous shrinker goes wrong in a second way: a later simplification unlocks an earlier one, because shrinking values allows us to delete more elements:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5, 5, 5]\n", "✓ [0, 0, 0, 0, 0, 0, 0, 0]\n", "\n", "2 shrinks with 5 function calls\n" ] } ], "source": [ "show_trace([5] * 10,\n", " lambda x: x and len(x) > max(x),\n", " multicourse_shrink1)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": true }, "outputs": [], "source": [ "conditions[\"First > Second\"] = lambda xs: len(xs) >= 2 and xs[0] > xs[1]" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Note: We modify this to mask off the high bits because otherwise the probability of\n", "# hitting the condition at random is too low.\n", "conditions[\"Size > max & 63\"] = lambda xs: xs and len(xs) > (max(xs) & 63)" ] }, { "cell_type":
"markdown", "metadata": {}, "source": [ "So what we'll try doing is iterating this to a fixed point and see what happens:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def multicourse_shrink3(ls, constraint):\n", " seen = set()\n", " while True:\n", " old_ls = ls\n", " for shrink in shrinkers_for(ls):\n", " while True:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " ls = s\n", " break\n", " else:\n", " break\n", " if ls == old_ls:\n", " return ls" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [101, 100]\n", "✗ [101]\n", "✗ [100]\n", "✗ [100, 100]\n", "✗ [0, 100]\n", "✗ [1, 100]\n", "✗ [3, 100]\n", "✗ [7, 100]\n", "✗ [15, 100]\n", "✗ [31, 100]\n", "✗ [63, 100]\n", "✗ [82, 100]\n", "✗ [91, 100]\n", "✗ [96, 100]\n", "✗ [98, 100]\n", "✗ [99, 100]\n", "✓ [101, 0]\n", "✗ [0]\n", "✗ [0, 0]\n", "✓ [1, 0]\n", "✗ [1]\n", "\n", "2 shrinks with 20 function calls\n" ] } ], "source": [ "show_trace([101, 100],\n", " lambda xs: len(xs) >= 2 and xs[0] > xs[1],\n", " multicourse_shrink3)" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5]\n", "✗ [5, 5, 5, 5, 5]\n", "✓ [0, 0, 0, 0, 0, 0]\n", "✓ [0]\n", "✗ []\n", "\n", "5 shrinks with 10 function calls\n" ] } ], "source": [ "show_trace([5] * 10,\n", " lambda x: x and len(x) > max(x),\n", " multicourse_shrink3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So that worked. Yay!\n", "\n", "Lets compare how this does to our single pass implementation." 
] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition             Single pass   Multi pass
length >= 2                     6            6
sum >= 500                     35           35
sum >= 3                        6            6
At least 10 by 5              107           73
10 distinct elements          623          131
First > Second               1481         1445
Size > max & 63               600       > 5000
" ], "text/plain": [ "" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Single pass\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", " (\"Multi pass\", multicourse_shrink3)\n", " \n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So the answer is generally favourably but *ouch* that last one.\n", "\n", "What's happening there is that because later shrinks are opening up potentially very large improvements accessible to the lower shrinks, the original greedy algorithm can exploit that much better, while the multi pass algorithm spends a lot of time in the later stages with their incremental shrinks.\n", "\n", "Lets see another similar example before we try to fix this:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import hashlib\n", "\n", "conditions[\"Messy\"] = lambda xs: hashlib.md5(repr(xs).encode('utf-8')).hexdigest()[0] == '0'" ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition             Single pass   Multi pass
length >= 2                     6            6
sum >= 500                     35           35
sum >= 3                        6            6
At least 10 by 5              107           73
10 distinct elements          623          131
First > Second               1481         1445
Size > max & 63               600       > 5000
Messy                        1032       > 5000
" ], "text/plain": [ "" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Single pass\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", " (\"Multi pass\", multicourse_shrink3)\n", " \n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This one is a bit different in that the problem is not that the structure is one we're ill suited to exploiting, it's that there is no structure at all so we have no hope of exploiting it. Literally any change at all will unlock earlier shrinks we could have done.\n", "\n", "What we're going to try to do is hybridize the two approaches. If we notice we're performing an awful lot of shrinks we can take that as a hint that we should be trying again from earlier stages.\n", "\n", "Here is our first approach. We simply restart the whole process every five shrinks:" ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "collapsed": true }, "outputs": [], "source": [ "MAX_SHRINKS_PER_RUN = 2\n", "\n", "\n", "def multicourse_shrink4(ls, constraint):\n", " seen = set()\n", " while True:\n", " old_ls = ls\n", " shrinks_this_run = 0\n", " for shrink in shrinkers_for(ls):\n", " while shrinks_this_run < MAX_SHRINKS_PER_RUN:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " shrinks_this_run += 1\n", " ls = s\n", " break\n", " else:\n", " break\n", " if ls == old_ls:\n", " return ls" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition             Single pass   Multi pass   Multi pass with restart
length >= 2                     6            6                         6
sum >= 500                     35           35                        35
sum >= 3                        6            6                         6
At least 10 by 5              107           73                        90
10 distinct elements          623          131                       396
First > Second               1481         1445                      1463
Size > max & 63               600       > 5000                    > 5000
Messy                        1032       > 5000                      1423
" ], "text/plain": [ "" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Single pass\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", " (\"Multi pass\", multicourse_shrink3),\n", " (\"Multi pass with restart\", multicourse_shrink4) \n", " \n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That works OK, but it's pretty unsatisfying as it loses us most of the benefits of the multi pass shrinking - we're now at most twice as good as the greedy one.\n", "\n", "So what we're going to do is bet on the multi pass working and then gradually degrade to the greedy algorithm as it fails to work." ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def multicourse_shrink5(ls, constraint):\n", " seen = set()\n", " max_shrinks_per_run = 10\n", " while True:\n", " shrinks_this_run = 0\n", " for shrink in shrinkers_for(ls):\n", " while shrinks_this_run < max_shrinks_per_run:\n", " for s in shrink(ls):\n", " key = tuple(s)\n", " if key in seen:\n", " continue\n", " seen.add(key)\n", " if constraint(s):\n", " shrinks_this_run += 1\n", " ls = s\n", " break\n", " else:\n", " break\n", " if max_shrinks_per_run > 1:\n", " max_shrinks_per_run -= 2\n", " if not shrinks_this_run:\n", " return ls" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n", "✗ [5]\n", "✗ [5, 5]\n", "✗ [5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5, 5]\n", "✓ [5, 5, 5, 5, 5, 5]\n", "✗ [5, 5, 5, 5, 5]\n", "✓ [0, 0, 0, 0, 0, 0]\n", "✓ [0]\n", "✗ []\n", "\n", "5 shrinks with 10 function calls\n" ] } ], "source": [ "show_trace([5] * 10,\n", " lambda x: x and len(x) > max(x),\n", " multicourse_shrink5)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "collapsed": false }, 
"outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", " \n", "\n", "\n", "\n", "
Condition             Single pass   Multi pass   Multi pass with restart   Multi pass with variable restart
length >= 2                     6            6                         6                                  6
sum >= 500                     35           35                        35                                 35
sum >= 3                        6            6                         6                                  6
At least 10 by 5              107           73                        90                                 73
10 distinct elements          623          131                       396                                212
First > Second               1481         1445                      1463                               1168
Size > max & 63               600       > 5000                    > 5000                              1002
Messy                        1032       > 5000                      1423                                824
" ], "text/plain": [ "" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compare_simplifiers([\n", " (\"Single pass\", partial(greedy_shrink_with_dedupe,\n", " shrink=shrink6)),\n", " (\"Multi pass\", multicourse_shrink3), \n", " (\"Multi pass with restart\", multicourse_shrink4),\n", " (\"Multi pass with variable restart\", multicourse_shrink5) \n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is now more or less the current state of the art (it's actually a bit different from the Hypothesis state of the art at the time of this writing. I'm planning to merge some of the things I figured out in the course of writing this back in). We've got something that is able to adaptively take advantage of structure where it is present, but degrades reasonably gracefully back to the more aggressive version that works better in unstructured examples.\n", "\n", "Surprisingly, on some examples it seems to even be best of all of them. I think that's more coincidence than truth though." 
] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.4.3" } }, "nbformat": 4, "nbformat_minor": 0 } hypothesis-python-3.44.1/requirements/000077500000000000000000000000001321557765100200605ustar00rootroot00000000000000hypothesis-python-3.44.1/requirements/benchmark.in000066400000000000000000000000411321557765100223350ustar00rootroot00000000000000attrs click numpy scipy coverage hypothesis-python-3.44.1/requirements/benchmark.txt000066400000000000000000000003341321557765100225530ustar00rootroot00000000000000# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file requirements/benchmark.txt requirements/benchmark.in # attrs==17.3.0 click==6.7 coverage==4.4.2 numpy==1.13.3 scipy==1.0.0 hypothesis-python-3.44.1/requirements/coverage.in000066400000000000000000000000331321557765100221770ustar00rootroot00000000000000numpy coverage pytz pandas hypothesis-python-3.44.1/requirements/coverage.txt000066400000000000000000000004471321557765100224210ustar00rootroot00000000000000# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file requirements/coverage.txt requirements/coverage.in # coverage==4.4.2 numpy==1.13.3 pandas==0.21.0 python-dateutil==2.6.1 # via pandas pytz==2017.3 six==1.11.0 # via python-dateutil hypothesis-python-3.44.1/requirements/test.in000066400000000000000000000000451321557765100213660ustar00rootroot00000000000000flaky pytest pytest-xdist mock attrs hypothesis-python-3.44.1/requirements/test.txt000066400000000000000000000007721321557765100216060ustar00rootroot00000000000000# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file requirements/test.txt requirements/test.in # 
apipkg==1.4 # via execnet attrs==17.3.0 execnet==1.5.0 # via pytest-xdist flaky==3.4.0 mock==2.0.0 pbr==3.1.1 # via mock pluggy==0.6.0 # via pytest py==1.5.2 # via pytest pytest-forked==0.2 # via pytest-xdist pytest-xdist==1.20.1 pytest==3.3.1 six==1.11.0 # via mock, pytest hypothesis-python-3.44.1/requirements/tools.in000066400000000000000000000001651321557765100215520ustar00rootroot00000000000000flake8 isort pip-tools pyformat pytest restructuredtext-lint Sphinx sphinx-rtd-theme tox twine attrs coverage pyupio hypothesis-python-3.44.1/requirements/tools.txt000066400000000000000000000042711321557765100217650ustar00rootroot00000000000000# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file requirements/tools.txt requirements/tools.in # alabaster==0.7.10 # via sphinx attrs==17.3.0 autoflake==1.0 # via pyformat autopep8==1.3.3 # via pyformat babel==2.5.1 # via sphinx certifi==2017.11.5 # via requests chardet==3.0.4 # via requests click==6.7 # via pip-tools, pyupio, safety coverage==4.4.2 docformatter==0.8 # via pyformat docutils==0.14 # via restructuredtext-lint, sphinx dparse==0.2.1 # via pyupio, safety first==2.0.1 # via pip-tools flake8==3.5.0 hashin-pyup==0.7.2 # via pyupio idna==2.6 # via requests imagesize==0.7.1 # via sphinx isort==4.2.15 jinja2==2.10 # via pyupio, sphinx markupsafe==1.0 # via jinja2 mccabe==0.6.1 # via flake8 packaging==16.8 # via dparse, pyupio, safety pip-tools==1.11.0 pkginfo==1.4.1 # via twine pluggy==0.6.0 # via pytest, tox py==1.5.2 # via pytest, tox pycodestyle==2.3.1 # via autopep8, flake8 pyflakes==1.6.0 # via autoflake, flake8 pyformat==0.7 pygithub==1.35 # via pyupio pygments==2.2.0 # via sphinx pyjwt==1.5.3 # via pygithub pyparsing==2.2.0 # via packaging pytest==3.3.1 python-gitlab==1.1.0 # via pyupio pytz==2017.3 # via babel pyupio==0.8.2 pyyaml==3.12 # via dparse, pyupio requests-toolbelt==0.8.0 # via twine requests==2.18.4 # via python-gitlab, pyupio, requests-toolbelt, safety, sphinx, 
twine restructuredtext-lint==1.1.2 safety==1.6.1 # via pyupio six==1.11.0 # via dparse, packaging, pip-tools, pytest, python-gitlab, pyupio, sphinx, tox snowballstemmer==1.2.1 # via sphinx sphinx-rtd-theme==0.2.4 sphinx==1.6.5 sphinxcontrib-websupport==1.0.1 # via sphinx tox==2.9.1 tqdm==4.19.5 # via pyupio, twine twine==1.9.1 unify==0.4 # via pyformat untokenize==0.1.1 # via docformatter, unify urllib3==1.22 # via requests virtualenv==15.1.0 # via tox hypothesis-python-3.44.1/requirements/typing.in000066400000000000000000000000071321557765100217170ustar00rootroot00000000000000typing hypothesis-python-3.44.1/requirements/typing.txt000066400000000000000000000002401321557765100221270ustar00rootroot00000000000000# # This file is autogenerated by pip-compile # To update, run: # # pip-compile --output-file requirements/typing.txt requirements/typing.in # typing==3.6.2 hypothesis-python-3.44.1/scripts/000077500000000000000000000000001321557765100170245ustar00rootroot00000000000000hypothesis-python-3.44.1/scripts/basic-test.sh000077500000000000000000000032551321557765100214260ustar00rootroot00000000000000#!/bin/bash set -e -o xtrace # We run a reduced set of tests on OSX mostly so the CI runs in vaguely # reasonable time. if [[ "$(uname -s)" == 'Darwin' ]]; then DARWIN=true else DARWIN=false fi python -c ' import os for k, v in sorted(dict(os.environ).items()): print("%s=%s" % (k, v)) ' pip install . 
PYTEST="python -m pytest" $PYTEST tests/cover COVERAGE_TEST_TRACER=timid $PYTEST tests/cover if [ "$(python -c 'import sys; print(sys.version_info[0] == 2)')" = "True" ] ; then $PYTEST tests/py2 else $PYTEST tests/py3 fi $PYTEST --runpytest=subprocess tests/pytest pip install ".[datetime]" $PYTEST tests/datetime/ pip uninstall -y pytz if [ "$DARWIN" = true ]; then exit 0 fi if [ "$(python -c 'import sys; print(sys.version_info[:2] in ((2, 7), (3, 6)))')" = "False" ] ; then exit 0 fi for f in tests/nocover/test_*.py; do $PYTEST "$f" done # fake-factory doesn't have a correct universal wheel pip install --no-binary :all: faker $PYTEST tests/fakefactory/ pip uninstall -y faker if [ "$(python -c 'import platform; print(platform.python_implementation())')" != "PyPy" ]; then if [ "$(python -c 'import sys; print(sys.version_info[0] == 2 or sys.version_info[:2] >= (3, 4))')" == "True" ] ; then pip install .[django] HYPOTHESIS_DJANGO_USETZ=TRUE python -m tests.django.manage test tests.django HYPOTHESIS_DJANGO_USETZ=FALSE python -m tests.django.manage test tests.django pip uninstall -y django fi if [ "$(python -c 'import sys; print(sys.version_info[:2] in ((2, 7), (3, 6)))')" = "True" ] ; then pip install numpy $PYTEST tests/numpy pip install pandas $PYTEST tests/pandas pip uninstall -y numpy pandas fi fi hypothesis-python-3.44.1/scripts/benchmarks.py000066400000000000000000000377641321557765100215340ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER # pylint: skip-file from __future__ import division, print_function, absolute_import import os import sys import json import zlib import base64 import random import hashlib from collections import OrderedDict import attr import click import numpy as np import hypothesis.strategies as st import hypothesis.extra.numpy as npst from hypothesis import settings, unlimited from hypothesis.errors import UnsatisfiedAssumption from hypothesis.internal.conjecture.engine import ConjectureRunner ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) DATA_DIR = os.path.join( ROOT, 'benchmark-data', ) BENCHMARK_SETTINGS = settings( max_examples=100, max_iterations=1000, max_shrinks=1000, database=None, timeout=unlimited, use_coverage=False, perform_health_check=False, ) BENCHMARKS = OrderedDict() @attr.s() class Benchmark(object): name = attr.ib() strategy = attr.ib() valid = attr.ib() interesting = attr.ib() @attr.s() class BenchmarkData(object): sizes = attr.ib() seed = attr.ib(default=0) STRATEGIES = OrderedDict([ ('ints', st.integers()), ('intlists', st.lists(st.integers())), ('sizedintlists', st.integers(0, 10).flatmap( lambda n: st.lists(st.integers(), min_size=n, max_size=n))), ('text', st.text()), ('text5', st.text(min_size=5)), ('arrays10', npst.arrays('int8', 10)), ('arraysvar', npst.arrays('int8', st.integers(0, 10))), ]) def define_benchmark(strategy_name, valid, interesting): name = '%s-valid=%s-interesting=%s' % ( strategy_name, valid.__name__, interesting.__name__) assert name not in BENCHMARKS strategy = STRATEGIES[strategy_name] BENCHMARKS[name] = Benchmark(name, strategy, valid, interesting) def always(seed, testdata, value): return True def never(seed, testdata, value): return False def nontrivial(seed, testdata, value): return sum(testdata.buffer) >= 255 def sometimes(p, name=None): def accept(seed, testdata, value): hasher = 
hashlib.md5() hasher.update(testdata.buffer) hasher.update(seed) return random.Random(hasher.digest()).random() <= p accept.__name__ = name or 'sometimes(%r)' % (p,) return accept def array_average(seed, testdata, value): if np.prod(value.shape) == 0: return False avg = random.Random(seed).randint(0, 255) return value.mean() >= avg def lower_bound(seed, testdata, value): """Benchmarking condition for testing the lexicographic minimization aspect of test case reduction. This lets us test for the sort of behaviour that happens when we e.g. have a lower bound on an integer, but in more generality. """ # We implicitly define an infinite stream of bytes, and compare the buffer # of the testdata object with the prefix of the stream of the same length. # If it is >= that prefix we accept the testdata, if not we reject it. rnd = random.Random(seed) for b in testdata.buffer: c = rnd.randint(0, 255) if c < b: return True if c > b: return False return True def size_lower_bound(seed, testdata, value): rnd = random.Random(seed) return len(testdata.buffer) >= rnd.randint(1, 50) usually = sometimes(0.9, 'usually') def minsum(seed, testdata, value): return sum(value) >= 1000 def has_duplicates(seed, testdata, value): return len(set(value)) < len(value) for k in STRATEGIES: define_benchmark(k, always, never) define_benchmark(k, always, always) define_benchmark(k, always, usually) define_benchmark(k, always, lower_bound) define_benchmark(k, always, size_lower_bound) define_benchmark(k, usually, size_lower_bound) define_benchmark('intlists', always, minsum) define_benchmark('intlists', always, has_duplicates) define_benchmark('intlists', has_duplicates, minsum) for p in [always, usually]: define_benchmark('arrays10', p, array_average) define_benchmark('arraysvar', p, array_average) def run_benchmark_for_sizes(benchmark, n_runs): click.echo('Calculating data for %s' % (benchmark.name,)) total_sizes = [] with click.progressbar(range(n_runs)) as runs: for _ in runs: sizes = [] 
valid_seed = random.getrandbits(64).to_bytes(8, 'big') interesting_seed = random.getrandbits(64).to_bytes(8, 'big') def test_function(data): try: try: value = data.draw(benchmark.strategy) except UnsatisfiedAssumption: data.mark_invalid() if not data.frozen: if not benchmark.valid(valid_seed, data, value): data.mark_invalid() if benchmark.interesting( interesting_seed, data, value ): data.mark_interesting() finally: sizes.append(len(data.buffer)) engine = ConjectureRunner( test_function, settings=BENCHMARK_SETTINGS, random=random ) engine.run() assert len(sizes) > 0 total_sizes.append(sum(sizes)) return total_sizes def benchmark_difference_p_value(existing, recent): """This is a bootstrapped permutation test for the difference of means. Under the null hypothesis that the two sides come from the same distribution, we can randomly reassign values to different populations and see how large a difference in mean we get. This gives us a p-value for our actual observed difference in mean by counting the fraction of times our resampling got a value that large. See https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests for details. 
""" rnd = random.Random(0) threshold = abs(np.mean(existing) - np.mean(recent)) n = len(existing) n_runs = 1000 greater = 0 all_values = existing + recent for _ in range(n_runs): rnd.shuffle(all_values) l = all_values[:n] r = all_values[n:] score = abs(np.mean(l) - np.mean(r)) if score >= threshold: greater += 1 return greater / n_runs def benchmark_file(name): return os.path.join(DATA_DIR, name) def have_existing_data(name): return os.path.exists(benchmark_file(name)) EXISTING_CACHE = {} BLOBSTART = 'START' BLOBEND = 'END' def existing_data(name): try: return EXISTING_CACHE[name] except KeyError: pass fname = benchmark_file(name) result = None with open(fname) as i: for l in i: l = l.strip() if not l: continue if l.startswith('#'): continue key, blob = l.split(': ', 1) magic, n = key.split(' ') assert magic == 'Data' n = int(n) assert blob.startswith(BLOBSTART) assert blob.endswith(BLOBEND), blob[-len(BLOBEND) * 2:] assert len(blob) == n + len(BLOBSTART) + len(BLOBEND) blob = blob[len(BLOBSTART):len(blob) - len(BLOBEND)] assert len(blob) == n result = blob_to_data(blob) break assert result is not None EXISTING_CACHE[name] = result return result def data_to_blob(data): as_json = json.dumps(attr.asdict(data)).encode('utf-8') compressed = zlib.compress(as_json) as_base64 = base64.b32encode(compressed) return as_base64.decode('ascii') def blob_to_data(blob): from_base64 = base64.b32decode(blob.encode('ascii')) decompressed = zlib.decompress(from_base64) parsed = json.loads(decompressed) return BenchmarkData(**parsed) BENCHMARK_HEADER = """ # This is an automatically generated file from Hypothesis's benchmarking # script (scripts/benchmarks.py). # # Lines like this starting with a # are designed to be useful for human # consumption when reviewing, specifically with a goal of producing # useful diffs so that you can get a sense of the impact of a change. 
# # This benchmark is for %(strategy_name)s [%(strategy)r], with the validity # condition "%(valid)s" and the interestingness condition "%(interesting)s". # See the script for the exact definitions of these criteria. # # This benchmark was generated with seed %(seed)d # # Key statistics for this benchmark: # # * %(count)d examples # * Mean size: %(mean).2f bytes, standard deviation: %(sd).2f bytes # # Additional interesting statistics: # # * Ranging from %(min)d [%(nmin)s] to %(max)d [%(nmax)s] bytes. # * Median size: %(median)d # * 99%% of examples had at least %(lo)d bytes # * 99%% of examples had at most %(hi)d bytes # # All data after this point is an opaque binary blob. You are not expected # to understand it. """.strip() def times(n): assert n > 0 if n > 1: return '%d times' % (n,) else: return 'once' def write_data(name, new_data): benchmark = BENCHMARKS[name] strategy_name = [ k for k, v in STRATEGIES.items() if v == benchmark.strategy ][0] sizes = new_data.sizes with open(benchmark_file(name), 'w') as o: o.write(BENCHMARK_HEADER % { 'strategy_name': strategy_name, 'strategy': benchmark.strategy, 'valid': benchmark.valid.__name__, 'interesting': benchmark.interesting.__name__, 'seed': new_data.seed, 'count': len(sizes), 'min': min(sizes), 'nmin': times(sizes.count(min(sizes))), 'nmax': times(sizes.count(max(sizes))), 'max': max(sizes), 'mean': np.mean(sizes), 'sd': np.std(sizes), 'median': int(np.percentile(sizes, 50, interpolation='lower')), 'lo': int(np.percentile(sizes, 1, interpolation='lower')), 'hi': int(np.percentile(sizes, 99, interpolation='higher')), }) o.write('\n') o.write('\n') blob = data_to_blob(new_data) assert '\n' not in blob o.write('Data %d: ' % (len(blob),)) o.write(BLOBSTART) o.write(blob) o.write(BLOBEND) o.write('\n') NONE = 'none' NEW = 'new' ALL = 'all' CHANGED = 'changed' IMPROVED = 'improved' @attr.s class Report(object): name = attr.ib() p = attr.ib() old_mean = attr.ib() new_mean = attr.ib() new_data = attr.ib() new_seed = 
attr.ib() def seed_by_int(i): # Get an actually good seed from an integer, as Random() doesn't guarantee # similar but distinct seeds giving different distributions. as_bytes = i.to_bytes(i.bit_length() // 8 + 1, 'big') digest = hashlib.sha1(as_bytes).digest() seedint = int.from_bytes(digest, 'big') random.seed(seedint) @click.command() @click.option( '--nruns', default=200, type=int, help=""" Specify the number of runs of each benchmark to perform. If this is larger than the number of stored runs then this will result in the existing data treated as if it were non-existing. If it is smaller, the existing data will be sampled. """) @click.argument('benchmarks', nargs=-1) @click.option('--check/--no-check', default=False) @click.option('--skip-existing/--no-skip-existing', default=False) @click.option('--fdr', default=0.0001) @click.option('--update', type=click.Choice([ NONE, NEW, ALL, CHANGED, IMPROVED ]), default=NEW) @click.option('--only-update-headers/--full-run', default=False) def cli( benchmarks, nruns, check, update, fdr, skip_existing, only_update_headers, ): """This is the benchmark runner script for Hypothesis. Rather than running benchmarks by *time* this runs benchmarks by *amount of data*. This is the major determiner of performance in Hypothesis (other than speed of the end user's tests) and has the important property that we can benchmark it without reference to the underlying system's performance. 
""" if check: if update not in [NONE, NEW]: raise click.UsageError('check and update cannot be used together') if skip_existing: raise click.UsageError( 'check and skip-existing cannot be used together') if only_update_headers: raise click.UsageError( 'check and rewrite-only cannot be used together') if only_update_headers: for name in BENCHMARKS: if have_existing_data(name): write_data(name, existing_data(name)) sys.exit(0) for name in benchmarks: if name not in BENCHMARKS: raise click.UsageError('Invalid benchmark name %s' % (name,)) try: os.mkdir(DATA_DIR) except FileExistsError: pass last_seed = 0 for name in BENCHMARKS: if have_existing_data(name): last_seed = max(existing_data(name).seed, last_seed) next_seed = last_seed + 1 reports = [] if check: for name in benchmarks or BENCHMARKS: if not have_existing_data(name): click.echo('No existing data for benchmark %s' % ( name, )) sys.exit(1) for name in benchmarks or BENCHMARKS: new_seed = next_seed next_seed += 1 seed_by_int(new_seed) if have_existing_data(name): if skip_existing: continue old_data = existing_data(name) new_data = run_benchmark_for_sizes(BENCHMARKS[name], nruns) pp = benchmark_difference_p_value(old_data.sizes, new_data) click.echo( '%r -> %r. p-value for difference %.5f' % ( np.mean(old_data.sizes), np.mean(new_data), pp,)) reports.append(Report( name, pp, np.mean(old_data.sizes), np.mean(new_data), new_data, new_seed=new_seed, )) if update == ALL: write_data(name, BenchmarkData(sizes=new_data, seed=next_seed)) elif update != NONE: new_data = run_benchmark_for_sizes(BENCHMARKS[name], nruns) write_data(name, BenchmarkData(sizes=new_data, seed=next_seed)) if not reports: sys.exit(0) click.echo('Checking for different means') # We now perform a Benjamini Hochberg test. This gives us a list of # possibly significant differences while controlling the false discovery # rate. 
https://en.wikipedia.org/wiki/False_discovery_rate reports.sort(key=lambda x: x.p) threshold = 0 n = len(reports) for k, report in enumerate(reports, 1): if report.p <= k * fdr / n: assert report.p <= fdr threshold = k different = reports[:threshold] if threshold > 0: click.echo(( 'Found %d benchmark%s with significant difference ' 'at false discovery rate %r' ) % ( threshold, 's' if threshold > 1 else '', fdr, )) if different: for report in different: click.echo('Different means for %s: %.2f -> %.2f. p=%.5f' % ( report.name, report.old_mean, report.new_mean, report.p )) if check: sys.exit(1) for r in different: if update == CHANGED: write_data(r.name, BenchmarkData(r.new_data, r.new_seed)) elif update == IMPROVED and r.new_mean < r.old_mean: write_data(r.name, BenchmarkData(r.new_data, r.new_seed)) else: click.echo('No significant differences') if __name__ == '__main__': cli() hypothesis-python-3.44.1/scripts/build-documentation.sh000077500000000000000000000005461321557765100233360ustar00rootroot00000000000000#!/usr/bin/env bash set -e set -u set -x SPHINX_BUILD=$1 PYTHON=$2 HERE="$(dirname "$0")" cd "$HERE"/.. 
if [ -e RELEASE.rst ] ; then
    trap "git checkout docs/changes.rst src/hypothesis/version.py" EXIT
    $PYTHON scripts/update-changelog-for-docs.py
fi

export PYTHONPATH=src

$SPHINX_BUILD -W -b html -d docs/_build/doctrees docs docs/_build/html

hypothesis-python-3.44.1/scripts/check-ancient-pip.sh
#!/usr/bin/env bash
set -e
set -x

PYTHON=$1

BROKEN_VIRTUALENV=$($PYTHON -c'import tempfile; print(tempfile.mkdtemp())')
trap 'rm -rf $BROKEN_VIRTUALENV' EXIT

rm -rf tmp-dist-dir
$PYTHON setup.py sdist --dist-dir=tmp-dist-dir

$PYTHON -m pip install virtualenv
$PYTHON -m virtualenv "$BROKEN_VIRTUALENV"
"$BROKEN_VIRTUALENV"/bin/pip install -rrequirements/test.txt

# These are versions from debian stable as of 2017-04-21
# See https://packages.debian.org/stable/python/
"$BROKEN_VIRTUALENV"/bin/python -m pip install --upgrade pip==1.5.6
"$BROKEN_VIRTUALENV"/bin/pip install --upgrade setuptools==5.5.1

"$BROKEN_VIRTUALENV"/bin/pip install tmp-dist-dir/*
"$BROKEN_VIRTUALENV"/bin/python -m pytest tests/cover/test_testdecorators.py

hypothesis-python-3.44.1/scripts/check-release-file.py
#!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import sys

import hypothesistooling as tools

sys.path.append(os.path.dirname(__file__))  # noqa

if __name__ == '__main__':
    if tools.has_source_changes():
        if not tools.has_release():
            print(
                'There are source changes but no RELEASE.rst. Please create '
                'one to describe your changes.'
            )
            sys.exit(1)
        tools.parse_release_file()

hypothesis-python-3.44.1/scripts/check_encoding_header.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

VALID_STARTS = (
    '# coding=utf-8',
    '#!/usr/bin/env python',
)

if __name__ == '__main__':
    import sys
    n = max(map(len, VALID_STARTS))
    bad = False
    for f in sys.argv[1:]:
        with open(f, 'r', encoding='utf-8') as i:
            start = i.read(n)
        if not any(start.startswith(s) for s in VALID_STARTS):
            print(
                '%s has incorrect start %r' % (f, start), file=sys.stderr)
            bad = True
    sys.exit(int(bad))

hypothesis-python-3.44.1/scripts/deploy.py
#!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R.
MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import sys import random import shutil import subprocess from time import time, sleep import hypothesistooling as tools sys.path.append(os.path.dirname(__file__)) # noqa DIST = os.path.join(tools.ROOT, 'dist') PENDING_STATUS = ('started', 'created') if __name__ == '__main__': last_release = tools.latest_version() print('Current version: %s. Latest released version: %s' % ( tools.__version__, last_release )) HEAD = tools.hash_for_name('HEAD') MASTER = tools.hash_for_name('origin/master') print('Current head:', HEAD) print('Current master:', MASTER) on_master = tools.is_ancestor(HEAD, MASTER) has_release = tools.has_release() if has_release: print('Updating changelog and version') tools.update_for_pending_release() print('Building an sdist...') if os.path.exists(DIST): shutil.rmtree(DIST) subprocess.check_output([ sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST, ]) if not on_master: print('Not deploying due to not being on master') sys.exit(0) if not has_release: print('Not deploying due to no release') sys.exit(0) start_time = time() prev_pending = None # We time out after an hour, which is a stupidly long time and it should # never actually take that long: A full Travis run only takes about 20-30 # minutes! This is really just here as a guard in case something goes # wrong and we're not paying attention so as to not be too mean to Travis.. 
while time() <= start_time + 60 * 60: jobs = tools.build_jobs() failed_jobs = [ (k, v) for k, vs in jobs.items() if k not in PENDING_STATUS + ('passed',) for v in vs ] if failed_jobs: print('Failing this due to failure of jobs %s' % ( ', '.join('%s(%s)' % (s, j) for j, s in failed_jobs), )) sys.exit(1) else: pending = [j for s in PENDING_STATUS for j in jobs.get(s, ())] try: # This allows us to test the deploy job for a build locally. pending.remove('deploy') except ValueError: pass if pending: still_pending = set(pending) if prev_pending is None: print('Waiting for the following jobs to complete:') for p in sorted(still_pending): print(' * %s' % (p,)) print() else: completed = prev_pending - still_pending if completed: print('%s completed since last check.' % ( ', '.join(sorted(completed)),)) prev_pending = still_pending naptime = 10.0 * (2 + random.random()) print('Waiting %.2fs for %d more job%s to complete' % ( naptime, len(pending), 's' if len(pending) > 1 else '',)) sleep(naptime) else: break else: print("We've been waiting for an hour. That seems bad. Failing now.") sys.exit(1) print('Looks good to release!') if os.environ.get('TRAVIS_SECURE_ENV_VARS', None) != 'true': print("But we don't have the keys to do it") sys.exit(0) print('Decrypting secrets') # We'd normally avoid the use of shell=True, but this is more or less # intended as an opaque string that was given to us by Travis that happens # to be a shell command that we run, and there are a number of good reasons # this particular instance is harmless and would be high effort to # convert (principally: Lack of programmatic generation of the string and # extensive use of environment variables in it), so we're making an # exception here. subprocess.check_call( 'openssl aes-256-cbc -K $encrypted_39cb4cc39a80_key ' '-iv $encrypted_39cb4cc39a80_iv -in secrets.tar.enc ' '-out secrets.tar -d', shell=True ) subprocess.check_call([ 'tar', '-xvf', 'secrets.tar', ]) print('Release seems good. 
Pushing to github now.') tools.create_tag_and_push() print('Now uploading to pypi.') subprocess.check_call([ sys.executable, '-m', 'twine', 'upload', '--config-file', './.pypirc', os.path.join(DIST, '*'), ]) sys.exit(0) hypothesis-python-3.44.1/scripts/enforce_header.py000066400000000000000000000037161321557765100223360ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import sys from datetime import datetime HEADER_FILE = 'scripts/header.py' CURRENT_YEAR = datetime.utcnow().year HEADER_SOURCE = open(HEADER_FILE).read().strip().format(year=CURRENT_YEAR) def main(): rootdir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) os.chdir(rootdir) files = sys.argv[1:] for f in files: print(f) lines = [] with open(f, encoding='utf-8') as o: shebang = None first = True header_done = False for l in o.readlines(): if first: first = False if l[:2] == '#!': shebang = l continue if 'END HEADER' in l and not header_done: lines = [] header_done = True else: lines.append(l) source = ''.join(lines).strip() with open(f, 'w', encoding='utf-8') as o: if shebang is not None: o.write(shebang) o.write('\n') o.write(HEADER_SOURCE) if source: o.write('\n\n') o.write(source) o.write('\n') if __name__ == '__main__': main() 
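The rewriting loop in enforce_header.py above reduces to a small text transform: remember a leading shebang, discard everything up to the `END HEADER` marker, then emit shebang + canonical header + remaining source. A standalone sketch of that transform follows; the `HEADER` constant and the function name `rewrite_header` are stand-ins of mine, not names from the codebase, which reads the real header from scripts/header.py and interpolates the current year.

```python
# Hypothetical stand-in for the contents of scripts/header.py.
HEADER = ('# coding=utf-8\n'
          '#\n'
          '# (canonical license header goes here)\n'
          '#\n'
          '# END HEADER')


def rewrite_header(text, header=HEADER):
    """Return `text` with any old header (everything up to an 'END HEADER'
    line) replaced by `header`, preserving a leading shebang line."""
    lines = text.splitlines(keepends=True)
    shebang = None
    if lines and lines[0].startswith('#!'):
        shebang = lines.pop(0)
    body = []
    header_done = False
    for l in lines:
        if 'END HEADER' in l and not header_done:
            body = []  # everything seen so far belonged to the old header
            header_done = True
        else:
            body.append(l)
    source = ''.join(body).strip()
    parts = []
    if shebang is not None:
        parts.append(shebang)
    parts.append(header + '\n')
    if source:
        parts.append('\n' + source + '\n')
    return ''.join(parts)
```

Because the canonical header itself ends with `END HEADER`, the transform is idempotent: running it over an already-rewritten file leaves the file unchanged, which is what lets the project run it over the whole tree on every formatting pass.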
hypothesis-python-3.44.1/scripts/files-to-format.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import sys

import hypothesistooling as tools

sys.path.append(os.path.dirname(__file__))  # noqa


def should_format_file(path):
    if os.path.basename(path) in ('header.py', 'test_lambda_formatting.py'):
        return False
    if 'vendor' in path.split(os.path.sep):
        return False
    return path.endswith('.py')


if __name__ == '__main__':
    changed = tools.modified_files()
    format_all = os.environ.get('FORMAT_ALL', '').lower() == 'true'
    if 'scripts/header.py' in changed:
        # We've changed the header, so everything needs its header updated.
        format_all = True
    if 'requirements/tools.txt' in changed:
        # We've changed the tools, which includes a lot of our formatting
        # logic, so we need to rerun formatters.
        format_all = True
    files = tools.all_files() if format_all else changed
    for f in sorted(files):
        if should_format_file(f):
            print(f)

hypothesis-python-3.44.1/scripts/fix_doctests.py
#!/usr/bin/env python3
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R.
MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import re import sys from subprocess import PIPE, run from collections import defaultdict import hypothesistooling as tools class FailingExample(object): def __init__(self, chunk): """Turn a chunk of text into an object representing the test.""" location, *lines = [l + '\n' for l in chunk.split('\n') if l.strip()] self.location = location.strip() pattern = 'File "(.+?)", line (\d+?), in .+' file, line = re.match(pattern, self.location).groups() self.file = os.path.join('docs', file) self.line = int(line) + 1 got = lines.index('Got:\n') self.expected_lines = lines[lines.index('Expected:\n') + 1:got] self.got_lines = lines[got + 1:] self.checked_ok = None self.adjust() @property def indices(self): return slice(self.line, self.line + len(self.expected_lines)) def adjust(self): with open(self.file) as f: lines = f.readlines() # The raw line number is the first line of *input*, so adjust to # first line of output by skipping lines which start with a prompt while self.line < len(lines): if lines[self.line].strip()[:4] not in ('>>> ', '... '): break self.line += 1 # Sadly the filename and line number for doctests in docstrings is # wrong - see https://github.com/sphinx-doc/sphinx/issues/4223 # Luckily, we can just cheat because they're all in one file for now! # (good luck if this changes without an upstream fix...) 
        if lines[self.indices] != self.expected_lines:
            self.file = 'src/hypothesis/strategies.py'
            with open(self.file) as f:
                lines = f.readlines()
            self.line = 0
            while self.expected_lines[0] in lines:
                self.line = lines[self.line:].index(self.expected_lines[0])
                if lines[self.indices] == self.expected_lines:
                    break
        # Finally, set the flag for location quality
        self.checked_ok = lines[self.indices] == self.expected_lines

    def __repr__(self):
        return '{}\nExpected: {!r:.60}\nGot: {!r:.60}'.format(
            self.location, self.expected_lines, self.got_lines)


def get_doctest_output():
    # Return a dict of filename: list of examples, sorted from last to first
    # so that replacing them in sequence works
    command = run(['sphinx-build', '-b', 'doctest', 'docs', 'docs/_build'],
                  stdout=PIPE, stderr=PIPE, encoding='utf-8')
    output = [FailingExample(c) for c in command.stdout.split('*' * 70)
              if c.strip().startswith('File "')]
    if not all(ex.checked_ok for ex in output):
        broken = '\n'.join(ex.location for ex in output if not ex.checked_ok)
        print('Could not find some tests:\n' + broken)
        sys.exit(1)
    tests = defaultdict(set)
    for ex in output:
        tests[ex.file].add(ex)
    return {fname: sorted(examples, key=lambda x: x.line, reverse=True)
            for fname, examples in tests.items()}


def main():
    os.chdir(tools.ROOT)
    failing = get_doctest_output()
    if not failing:
        print('All doctests are OK')
        sys.exit(0)
    if tools.has_uncommitted_changes('.'):
        print('Cannot fix doctests in place with uncommitted changes')
        sys.exit(1)
    for fname, examples in failing.items():
        with open(fname) as f:
            lines = f.readlines()
        for ex in examples:
            lines[ex.indices] = ex.got_lines
        with open(fname, 'w') as f:
            f.writelines(lines)
    still_failing = get_doctest_output()
    if still_failing:
        print('Fixes failed: script broken or flaky tests.\n', still_failing)
        sys.exit(1)
    print('All failing doctests have been fixed.')
    sys.exit(0)


if __name__ == '__main__':
    main()
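The Benjamini-Hochberg selection used at the end of scripts/benchmarks.py earlier in this section (sort the p-values, find the largest rank k with p_(k) <= k * fdr / n, and report everything up to that rank as a discovery) can be sketched independently of the benchmark machinery. The function name below is mine; the codebase inlines this logic in `cli()` over its `Report` objects rather than over raw p-values.

```python
def benjamini_hochberg(p_values, fdr):
    """Return the p-values judged significant while controlling the false
    discovery rate at `fdr` (the Benjamini-Hochberg step-up procedure)."""
    ordered = sorted(p_values)
    n = len(ordered)
    threshold = 0
    # Find the largest 1-based rank k with p_(k) <= k * fdr / n. Note this
    # is a *step-up* test: once such a k is found, every p-value at rank <= k
    # counts as a discovery, even if some intermediate rank failed its own
    # comparison.
    for k, p in enumerate(ordered, 1):
        if p <= k * fdr / n:
            threshold = k
    return ordered[:threshold]
```

This mirrors why benchmarks.py sorts its reports by p-value first and then scans with `report.p <= k * fdr / n`: the scan must see the p-values in ascending order for the rank comparison to be meaningful.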
hypothesis-python-3.44.1/scripts/header.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-{year} David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

hypothesis-python-3.44.1/scripts/hypothesistooling.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import re
import sys
import subprocess
from datetime import datetime, timedelta


def current_branch():
    return subprocess.check_output([
        'git', 'rev-parse', '--abbrev-ref', 'HEAD'
    ]).decode('ascii').strip()


def tags():
    result = [t.decode('ascii') for t in subprocess.check_output([
        'git', 'tag'
    ]).split(b'\n')]
    assert len(set(result)) == len(result)
    return set(result)


ROOT = subprocess.check_output([
    'git', 'rev-parse', '--show-toplevel']).decode('ascii').strip()

SRC = os.path.join(ROOT, 'src')
assert os.path.exists(SRC)

__version__ = None
__version_info__ = None

VERSION_FILE = os.path.join(ROOT, 'src/hypothesis/version.py')

with open(VERSION_FILE) as o:
    exec(o.read())

assert __version__ is not None
assert __version_info__ is not None


def latest_version():
    versions = []
    for t in tags():
        # All versions get tags but not all tags are versions (and there are
        # a large number of historic tags with a different format for versions)
        # so we parse each tag as a triple of ints (MAJOR, MINOR, PATCH)
        # and skip any tag that doesn't match that.
        assert t == t.strip()
        parts = t.split('.')
        if len(parts) != 3:
            continue
        try:
            v = tuple(map(int, parts))
        except ValueError:
            continue
        versions.append((v, t))
    _, latest = max(versions)
    assert latest in tags()
    return latest


def hash_for_name(name):
    return subprocess.check_output([
        'git', 'rev-parse', name
    ]).decode('ascii').strip()


def is_ancestor(a, b):
    check = subprocess.call([
        'git', 'merge-base', '--is-ancestor', a, b
    ])
    assert 0 <= check <= 1
    return check == 0


CHANGELOG_FILE = os.path.join(ROOT, 'docs', 'changes.rst')


def changelog():
    with open(CHANGELOG_FILE) as i:
        return i.read()


def merge_base(a, b):
    return subprocess.check_output([
        'git', 'merge-base', a, b,
    ]).strip()


def has_source_changes(version=None):
    if version is None:
        version = latest_version()
    # Check where we branched off from the version.
We're only interested # in whether *we* introduced any source changes, so we check diff from # there rather than the diff to the other side. point_of_divergence = merge_base('HEAD', version) return subprocess.call([ 'git', 'diff', '--exit-code', point_of_divergence, 'HEAD', '--', SRC, ]) != 0 def has_uncommitted_changes(filename): return subprocess.call([ 'git', 'diff', '--exit-code', filename ]) != 0 def git(*args): subprocess.check_call(('git',) + args) def create_tag_and_push(): assert __version__ not in tags() git('config', 'user.name', 'Travis CI on behalf of David R. MacIver') git('config', 'user.email', 'david@drmaciver.com') git('config', 'core.sshCommand', 'ssh -i deploy_key') git( 'remote', 'add', 'ssh-origin', 'git@github.com:HypothesisWorks/hypothesis-python.git' ) git('tag', __version__) subprocess.check_call([ 'ssh-agent', 'sh', '-c', 'chmod 0600 deploy_key && ' + 'ssh-add deploy_key && ' + 'git push ssh-origin HEAD:master &&' 'git push ssh-origin --tags' ]) def build_jobs(): """Query the Travis API to find out what the state of the other build jobs is. Note: This usage of Travis has been somewhat reverse engineered due to a certain dearth of documentation as to what values what takes when. 
""" import requests build_id = os.environ['TRAVIS_BUILD_ID'] url = 'https://api.travis-ci.org/builds/%s' % (build_id,) data = requests.get(url, headers={ 'Accept': 'application/vnd.travis-ci.2+json' }).json() matrix = data['jobs'] jobs = {} for m in matrix: name = m['config']['env'].replace('TASK=', '') status = m['state'] jobs.setdefault(status, []).append(name) return jobs def modified_files(): files = set() for command in [ ['git', 'diff', '--name-only', '--diff-filter=d', latest_version(), 'HEAD'], ['git', 'diff', '--name-only'] ]: diff_output = subprocess.check_output(command).decode('ascii') for l in diff_output.split('\n'): filepath = l.strip() if filepath: assert os.path.exists(filepath), filepath files.add(filepath) return files def all_files(): return subprocess.check_output(['git', 'ls-files']).decode( 'ascii').splitlines() RELEASE_FILE = os.path.join(ROOT, 'RELEASE.rst') def has_release(): return os.path.exists(RELEASE_FILE) CHANGELOG_BORDER = re.compile(r"^-+$") CHANGELOG_HEADER = re.compile(r"^\d+\.\d+\.\d+ - \d\d\d\d-\d\d-\d\d$") RELEASE_TYPE = re.compile(r"^RELEASE_TYPE: +(major|minor|patch)") MAJOR = 'major' MINOR = 'minor' PATCH = 'patch' VALID_RELEASE_TYPES = (MAJOR, MINOR, PATCH) def parse_release_file(): with open(RELEASE_FILE) as i: release_contents = i.read() release_lines = release_contents.split('\n') m = RELEASE_TYPE.match(release_lines[0]) if m is not None: release_type = m.group(1) if release_type not in VALID_RELEASE_TYPES: print('Unrecognised release type %r' % (release_type,)) sys.exit(1) del release_lines[0] release_contents = '\n'.join(release_lines).strip() else: print( 'RELEASE.rst does not start by specifying release type. The first ' 'line of the file should be RELEASE_TYPE: followed by one of ' 'major, minor, or patch, to specify the type of release that ' 'this is (i.e. which version number to increment). 
Instead the ' 'first line was %r' % (release_lines[0],) ) sys.exit(1) return release_type, release_contents def update_changelog_and_version(): global __version_info__ global __version__ with open(CHANGELOG_FILE) as i: contents = i.read() assert '\r' not in contents lines = contents.split('\n') assert contents == '\n'.join(lines) for i, l in enumerate(lines): if CHANGELOG_BORDER.match(l): assert CHANGELOG_HEADER.match(lines[i + 1]), repr(lines[i + 1]) assert CHANGELOG_BORDER.match(lines[i + 2]), repr(lines[i + 2]) beginning = '\n'.join(lines[:i]) rest = '\n'.join(lines[i:]) assert '\n'.join((beginning, rest)) == contents break release_type, release_contents = parse_release_file() new_version = list(__version_info__) bump = VALID_RELEASE_TYPES.index(release_type) new_version[bump] += 1 for i in range(bump + 1, len(new_version)): new_version[i] = 0 new_version = tuple(new_version) new_version_string = '.'.join(map(str, new_version)) __version_info__ = new_version __version__ = new_version_string with open(VERSION_FILE) as i: version_lines = i.read().split('\n') for i, l in enumerate(version_lines): if 'version_info' in l: version_lines[i] = '__version_info__ = %r' % (new_version,) break with open(VERSION_FILE, 'w') as o: o.write('\n'.join(version_lines)) now = datetime.utcnow() date = max([ d.strftime('%Y-%m-%d') for d in (now, now + timedelta(hours=1)) ]) heading_for_new_version = ' - '.join((new_version_string, date)) border_for_new_version = '-' * len(heading_for_new_version) new_changelog_parts = [ beginning.strip(), '', border_for_new_version, heading_for_new_version, border_for_new_version, '', release_contents, '', rest ] with open(CHANGELOG_FILE, 'w') as o: o.write('\n'.join(new_changelog_parts)) def update_for_pending_release(): update_changelog_and_version() git('rm', RELEASE_FILE) git('add', CHANGELOG_FILE, VERSION_FILE) git( 'commit', '-m', 'Bump version to %s and update changelog\n\n[skip ci]' % (__version__,) ) def could_affect_tests(path): """Does this 
file have any effect on test results?""" # RST files are the input to some tests -- in particular, the # documentation build and doctests. Both of those jobs are always run, # so we can ignore their effect here. # # IPython notebooks aren't currently used in any tests. if path.endswith(('.rst', '.ipynb')): return False # These files exist but have no effect on tests. if path in ('CITATION', 'LICENSE.txt', ): return False # We default to marking a file "interesting" unless we know otherwise -- # it's better to run tests that could have been skipped than skip tests # when they needed to be run. return True def changed_files_from_master(): """Returns a list of files which have changed between a branch and master.""" files = set() command = ['git', 'diff', '--name-only', 'HEAD', 'master'] diff_output = subprocess.check_output(command).decode('ascii') for line in diff_output.splitlines(): filepath = line.strip() if filepath: files.add(filepath) return files def should_run_ci_task(task, is_pull_request): """Given a task name, should we run this task?""" if not is_pull_request: print('We only skip tests if the job is a pull request.') return True # These tests are usually fast; we always run them rather than trying # to keep up-to-date rules of exactly which changed files mean they # should run. if task in [ 'check-pyup-yml', 'check-release-file', 'check-shellcheck', 'documentation', 'lint', ]: print('We always run the %s task.' % task) return True # The remaining tasks are all some sort of test of Hypothesis # functionality. Since it's better to run tests when we don't need to # than skip tests when it was important, we remove any files which we # know are safe to ignore, and run tests if there's anything left. 
changed_files = changed_files_from_master() interesting_changed_files = [ f for f in changed_files if could_affect_tests(f) ] if interesting_changed_files: print( 'Changes to the following files mean we need to run tests: %s' % ', '.join(interesting_changed_files) ) return True else: print('There are no changes which would need a test run.') return False hypothesis-python-3.44.1/scripts/install.ps1000066400000000000000000000135431321557765100211250ustar00rootroot00000000000000# Sample script to install Python and pip under Windows # Authors: Olivier Grisel, Jonathan Helmus and Kyle Kastner # License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/ $MINICONDA_URL = "http://repo.continuum.io/miniconda/" $BASE_URL = "https://www.python.org/ftp/python/" $GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py" $GET_PIP_PATH = "C:\get-pip.py" function DownloadPython ($python_version, $platform_suffix) { $webclient = New-Object System.Net.WebClient $filename = "python-" + $python_version + $platform_suffix + ".msi" $url = $BASE_URL + $python_version + "/" + $filename $basedir = $pwd.Path + "\" $filepath = $basedir + $filename if (Test-Path $filename) { Write-Host "Reusing" $filepath return $filepath } # Download and retry up to 3 times in case of network transient errors. Write-Host "Downloading" $filename "from" $url $retry_attempts = 2 for($i=0; $i -lt $retry_attempts; $i++){ try { $webclient.DownloadFile($url, $filepath) break } Catch [Exception]{ Start-Sleep 1 } } if (Test-Path $filepath) { Write-Host "File saved at" $filepath } else { # Retry once to get the error message if any at the last try $webclient.DownloadFile($url, $filepath) } return $filepath } function InstallPython ($python_version, $architecture, $python_home) { Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home if (Test-Path $python_home) { Write-Host $python_home "already exists, skipping." 
return $false } if ($architecture -eq "32") { $platform_suffix = "" } else { $platform_suffix = ".amd64" } $msipath = DownloadPython $python_version $platform_suffix Write-Host "Installing" $msipath "to" $python_home $install_log = $python_home + ".log" $install_args = "/qn /log $install_log /i $msipath TARGETDIR=$python_home" $uninstall_args = "/qn /x $msipath" RunCommand "msiexec.exe" $install_args if (-not(Test-Path $python_home)) { Write-Host "Python seems to be installed elsewhere, reinstalling." RunCommand "msiexec.exe" $uninstall_args RunCommand "msiexec.exe" $install_args } if (Test-Path $python_home) { Write-Host "Python $python_version ($architecture) installation complete" } else { Write-Host "Failed to install Python in $python_home" Get-Content -Path $install_log Exit 1 } } function RunCommand ($command, $command_args) { Write-Host $command $command_args Start-Process -FilePath $command -ArgumentList $command_args -Wait -Passthru } function InstallPip ($python_home) { $pip_path = $python_home + "\Scripts\pip.exe" $python_path = $python_home + "\python.exe" if (-not(Test-Path $pip_path)) { Write-Host "Installing pip..." $webclient = New-Object System.Net.WebClient $webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH) Write-Host "Executing:" $python_path $GET_PIP_PATH Start-Process -FilePath "$python_path" -ArgumentList "$GET_PIP_PATH" -Wait -Passthru } else { Write-Host "pip already installed." } } function DownloadMiniconda ($python_version, $platform_suffix) { $webclient = New-Object System.Net.WebClient if ($python_version -eq "3.4") { $filename = "Miniconda3-3.5.5-Windows-" + $platform_suffix + ".exe" } else { $filename = "Miniconda-3.5.5-Windows-" + $platform_suffix + ".exe" } $url = $MINICONDA_URL + $filename $basedir = $pwd.Path + "\" $filepath = $basedir + $filename if (Test-Path $filename) { Write-Host "Reusing" $filepath return $filepath } # Download and retry up to 3 times in case of network transient errors.
Write-Host "Downloading" $filename "from" $url $retry_attempts = 2 for($i=0; $i -lt $retry_attempts; $i++){ try { $webclient.DownloadFile($url, $filepath) break } Catch [Exception]{ Start-Sleep 1 } } if (Test-Path $filepath) { Write-Host "File saved at" $filepath } else { # Retry once to get the error message if any at the last try $webclient.DownloadFile($url, $filepath) } return $filepath } function InstallMiniconda ($python_version, $architecture, $python_home) { Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home if (Test-Path $python_home) { Write-Host $python_home "already exists, skipping." return $false } if ($architecture -eq "32") { $platform_suffix = "x86" } else { $platform_suffix = "x86_64" } $filepath = DownloadMiniconda $python_version $platform_suffix Write-Host "Installing" $filepath "to" $python_home $install_log = $python_home + ".log" $args = "/S /D=$python_home" Write-Host $filepath $args Start-Process -FilePath $filepath -ArgumentList $args -Wait -Passthru if (Test-Path $python_home) { Write-Host "Python $python_version ($architecture) installation complete" } else { Write-Host "Failed to install Python in $python_home" Get-Content -Path $install_log Exit 1 } } function InstallMinicondaPip ($python_home) { $pip_path = $python_home + "\Scripts\pip.exe" $conda_path = $python_home + "\Scripts\conda.exe" if (-not(Test-Path $pip_path)) { Write-Host "Installing pip..." $args = "install --yes pip" Write-Host $conda_path $args Start-Process -FilePath "$conda_path" -ArgumentList $args -Wait -Passthru } else { Write-Host "pip already installed." } } function main () { InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON InstallPip $env:PYTHON } main hypothesis-python-3.44.1/scripts/install.sh000077500000000000000000000053701321557765100210360ustar00rootroot00000000000000#!/usr/bin/env bash # Special license: Take literally anything you want out of this file. I don't # care. 
Consider it WTFPL licensed if you like. # Basically there's a lot of suffering encoded here that I don't want you to # have to go through and you should feel free to use this to avoid some of # that suffering in advance. set -e set -x # OS X seems to have some weird locale problems on Travis. This attempts to set # the locale to known good ones during install env | grep UTF # This is to guard against multiple builds in parallel. The various installers will tend # to stomp all over each other if you do this and they haven't previously # succeeded. We use a lock file to block progress so only one install runs at a time. # This script should be pretty fast once files are cached, so the loss of concurrency # is not a major problem. # This should be using the lockfile command, but that's not available on the # containerized Travis and we can't install it without sudo. # It is unclear if this is actually useful. I was seeing behaviour that suggested # concurrent runs of the installer, but I can't seem to find any evidence of this lock # ever not being acquired. BASE=${BUILD_RUNTIMES-$PWD/.runtimes} mkdir -p "$BASE" LOCKFILE="$BASE/.install-lockfile" while true; do if mkdir "$LOCKFILE" 2>/dev/null; then echo "Successfully acquired installer." break else echo "Failed to acquire lock. Is another installer running? Waiting a bit." fi sleep $(( ( RANDOM % 10 ) + 1 )).$(( RANDOM % 100 ))s if (( $(date '+%s') > 300 + $(stat --format=%X "$LOCKFILE") )); then echo "We've waited long enough" rm -rf "$LOCKFILE" fi done trap 'rm -rf $LOCKFILE' EXIT PYENV=$BASE/pyenv if [ ! -d "$PYENV/.git" ]; then rm -rf "$PYENV" git clone https://github.com/yyuu/pyenv.git "$BASE/pyenv" else back=$PWD cd "$PYENV" git fetch || echo "Update failed to complete. Ignoring" git reset --hard origin/master cd "$back" fi SNAKEPIT=$BASE/snakepit install () { VERSION="$1" ALIAS="$2" mkdir -p "$BASE/versions" SOURCE=$BASE/versions/$ALIAS if [ !
-e "$SOURCE" ]; then mkdir -p "$SNAKEPIT" mkdir -p "$BASE/versions" "$BASE/pyenv/plugins/python-build/bin/python-build" "$VERSION" "$SOURCE" fi rm -f "$SNAKEPIT/$ALIAS" mkdir -p "$SNAKEPIT" "$SOURCE/bin/python" -m pip.__main__ install --upgrade pip wheel virtualenv ln -s "$SOURCE/bin/python" "$SNAKEPIT/$ALIAS" } for var in "$@"; do case "${var}" in 2.7) install 2.7.11 python2.7 ;; 2.7.3) install 2.7.3 python2.7.3 ;; 3.4) install 3.4.3 python3.4 ;; 3.5) install 3.5.1 python3.5 ;; 3.6) install 3.6.1 python3.6 ;; pypy) install pypy2.7-5.8.0 pypy ;; esac done hypothesis-python-3.44.1/scripts/pyenv-installer000077500000000000000000000041771321557765100221170ustar00rootroot00000000000000#!/usr/bin/env bash set -e [ -n "$PYENV_DEBUG" ] && set -x if [ -z "$PYENV_ROOT" ]; then PYENV_ROOT="${HOME}/.pyenv" fi shell="$1" if [ -z "$shell" ]; then shell="$(ps c -p "$PPID" -o 'ucomm=' 2>/dev/null || true)" shell="${shell##-}" shell="${shell%% *}" shell="$(basename "${shell:-$SHELL}")" fi colorize() { if [ -t 1 ]; then printf "\e[%sm%s\e[m" "$1" "$2" else echo -n "$2" fi } checkout() { [ -d "$2" ] || git clone "$1" "$2" } if ! command -v git 1>/dev/null 2>&1; then echo "pyenv: Git is not installed, can't continue." >&2 exit 1 fi if [ -n "${USE_HTTPS}" ]; then GITHUB="https://github.com" else GITHUB="git://github.com" fi checkout "${GITHUB}/yyuu/pyenv.git" "${PYENV_ROOT}" checkout "${GITHUB}/yyuu/pyenv-doctor.git" "${PYENV_ROOT}/plugins/pyenv-doctor" checkout "${GITHUB}/yyuu/pyenv-installer.git" "${PYENV_ROOT}/plugins/pyenv-installer" checkout "${GITHUB}/yyuu/pyenv-pip-rehash.git" "${PYENV_ROOT}/plugins/pyenv-pip-rehash" checkout "${GITHUB}/yyuu/pyenv-update.git" "${PYENV_ROOT}/plugins/pyenv-update" checkout "${GITHUB}/yyuu/pyenv-virtualenv.git" "${PYENV_ROOT}/plugins/pyenv-virtualenv" checkout "${GITHUB}/yyuu/pyenv-which-ext.git" "${PYENV_ROOT}/plugins/pyenv-which-ext" if ! 
command -v pyenv 1>/dev/null; then { echo colorize 1 "WARNING" echo ": seems you still have not added 'pyenv' to the load path." echo } >&2 case "$shell" in bash ) profile="~/.bash_profile" ;; zsh ) profile="~/.zshrc" ;; ksh ) profile="~/.profile" ;; fish ) profile="~/.config/fish/config.fish" ;; * ) profile="your profile" ;; esac { echo "# Load pyenv automatically by adding" echo "# the following to ${profile}:" echo case "$shell" in fish ) echo "set -x PATH \"\$HOME/.pyenv/bin\" \$PATH" echo 'status --is-interactive; and . (pyenv init -|psub)' echo 'status --is-interactive; and . (pyenv virtualenv-init -|psub)' ;; * ) echo "export PATH=\"\$HOME/.pyenv/bin:\$PATH\"" echo "eval \"\$(pyenv init -)\"" echo "eval \"\$(pyenv virtualenv-init -)\"" ;; esac } >&2 fi hypothesis-python-3.44.1/scripts/retry.sh000077500000000000000000000003641321557765100205330ustar00rootroot00000000000000#!/usr/bin/env bash for _ in $(seq 5); do if "$@" ; then exit 0 fi echo "Command failed. Retrying..." sleep $(( ( RANDOM % 10 ) + 1 )).$(( RANDOM % 100 ))s done echo "Command failed five times. Giving up now" exit 1 hypothesis-python-3.44.1/scripts/run_circle.py000077500000000000000000000024351321557765100215320ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import subprocess from hypothesistooling import should_run_ci_task if __name__ == '__main__': if ( os.environ['CIRCLE_BRANCH'] != 'master' and os.environ['CI_PULL_REQUESTS'] == '' ): print('We only run CI builds on the master branch or in pull requests') sys.exit(0) is_pull_request = (os.environ['CI_PULL_REQUESTS'] != '') for task in ['check-pypy', 'check-py36', 'check-py27']: if should_run_ci_task(task=task, is_pull_request=is_pull_request): subprocess.check_call(['make', task]) hypothesis-python-3.44.1/scripts/run_travis_make_task.py000077500000000000000000000020461321557765100236160ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import subprocess from hypothesistooling import should_run_ci_task if __name__ == '__main__': is_pull_request = (os.environ.get('TRAVIS_EVENT_TYPE') == 'pull_request') task = os.environ['TASK'] if should_run_ci_task(task=task, is_pull_request=is_pull_request): subprocess.check_call(['make', task]) hypothesis-python-3.44.1/scripts/run_with_env.cmd000066400000000000000000000034621321557765100222250ustar00rootroot00000000000000:: To build extensions for 64 bit Python 3, we need to configure environment :: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1) :: :: To build extensions for 64 bit Python 2, we need to configure environment :: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0) :: :: 32 bit builds do not require specific environment configurations. 
:: :: Note: this script needs to be run with the /E:ON and /V:ON flags for the :: cmd interpreter, at least for (SDK v7.0) :: :: More details at: :: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows :: http://stackoverflow.com/a/13751649/163740 :: :: Author: Olivier Grisel :: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/ @ECHO OFF SET COMMAND_TO_RUN=%* SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows SET MAJOR_PYTHON_VERSION="%PYTHON_VERSION:~0,1%" IF %MAJOR_PYTHON_VERSION% == "2" ( SET WINDOWS_SDK_VERSION="v7.0" ) ELSE IF %MAJOR_PYTHON_VERSION% == "3" ( SET WINDOWS_SDK_VERSION="v7.1" ) ELSE ( ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%" EXIT 1 ) IF "%PYTHON_ARCH%"=="64" ( ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture SET DISTUTILS_USE_SDK=1 SET MSSdk=1 "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION% "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release ECHO Executing: %COMMAND_TO_RUN% call %COMMAND_TO_RUN% || EXIT 1 ) ELSE ( ECHO Using default MSVC build environment for 32 bit architecture ECHO Executing: %COMMAND_TO_RUN% call %COMMAND_TO_RUN% || EXIT 1 ) hypothesis-python-3.44.1/scripts/tool-hash.py000077500000000000000000000022131321557765100212750ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import sys import hashlib SCRIPTS_DIR = os.path.dirname(os.path.abspath(__file__)) ROOT_DIR = os.path.dirname(SCRIPTS_DIR) if __name__ == '__main__': name = sys.argv[1] requirements = os.path.join( ROOT_DIR, 'requirements', '%s.txt' % (name,) ) assert os.path.exists(requirements) with open(requirements, 'rb') as f: tools = f.read() print(hashlib.sha1(tools).hexdigest()[:10]) hypothesis-python-3.44.1/scripts/unicodechecker.py000066400000000000000000000032341321557765100223530ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import inspect import warnings from tempfile import mkdtemp import unicodenazi from hypothesis import settings, unlimited from hypothesis.errors import HypothesisDeprecationWarning from hypothesis.configuration import set_hypothesis_home_dir warnings.filterwarnings('error', category=UnicodeWarning) warnings.filterwarnings('error', category=HypothesisDeprecationWarning) unicodenazi.enable() set_hypothesis_home_dir(mkdtemp()) assert isinstance(settings, type) settings.register_profile( 'default', settings(timeout=unlimited) ) settings.load_profile('default') TESTS = [ 'test_testdecorators', ] sys.path.append(os.path.join( os.path.dirname(__file__), '..', 'tests', 'cover', )) if __name__ == '__main__': for t in TESTS: module = __import__(t) for k, v in sorted(module.__dict__.items(), key=lambda x: x[0]): if k.startswith('test_') and inspect.isfunction(v): print(k) v() hypothesis-python-3.44.1/scripts/update-changelog-for-docs.py000066400000000000000000000023341321557765100243210ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import hypothesistooling as tools sys.path.append(os.path.dirname(__file__)) # noqa if __name__ == '__main__': if not tools.has_release(): sys.exit(0) if tools.has_uncommitted_changes(tools.CHANGELOG_FILE): print( 'Cannot build documentation with uncommitted changes to ' 'changelog and a pending release. Please commit your changes or ' 'delete your release file.') sys.exit(1) tools.update_changelog_and_version() hypothesis-python-3.44.1/scripts/validate_branch_check.py000066400000000000000000000033261321557765100236450ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import sys import json from collections import defaultdict if __name__ == '__main__': with open('branch-check') as i: data = [ json.loads(l) for l in i ] checks = defaultdict(set) for d in data: checks[d['name']].add(d['value']) always_true = [] always_false = [] for c, vs in sorted(checks.items()): if len(vs) < 2: v = list(vs)[0] assert v in (False, True) if v: always_true.append(c) else: always_false.append(c) failure = always_true or always_false if failure: print('Some branches were not properly covered.') print() if always_true: print('The following were always True:') print() for c in always_true: print(' * %s' % (c,)) if always_false: print('The following were always False:') print() for c in always_false: print(' * %s' % (c,)) if failure: sys.exit(1) hypothesis-python-3.44.1/scripts/validate_pyup.py000066400000000000000000000022021321557765100222400ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import yaml from pyup.config import Config from hypothesistooling import ROOT PYUP_FILE = os.path.join(ROOT, '.pyup.yml') if __name__ == '__main__': with open(PYUP_FILE, 'r') as i: data = yaml.safe_load(i.read()) config = Config() config.update_config(data) if not config.is_valid_schedule(): print('Schedule %r is invalid' % (config.schedule,)) sys.exit(1) hypothesis-python-3.44.1/secrets.tar.enc000066400000000000000000000240201321557765100202570ustar00rootroot00000000000000
[binary data: encrypted secrets archive, not reproducible as text]
hypothesis-python-3.44.1/setup.py000066400000000000000000000057321321557765100170560ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import sys from setuptools import setup, find_packages def local_file(name): return os.path.relpath(os.path.join(os.path.dirname(__file__), name)) SOURCE = local_file('src') README = local_file('README.rst') # Assignment to placate pyflakes. The actual version is from the exec that
__version__ = None with open(local_file('src/hypothesis/version.py')) as o: exec(o.read()) assert __version__ is not None extras = { 'datetime': ['pytz'], 'pytz': ['pytz'], 'fakefactory': ['Faker>=0.7'], 'django': ['pytz', 'django>=1.8,<2'], 'numpy': ['numpy>=1.9.0'], 'pytest': ['pytest>=2.8.0'], } extras['faker'] = extras['fakefactory'] extras['all'] = sorted(sum(extras.values(), [])) extras[":python_version == '2.7'"] = ['enum34'] install_requires = ['attrs', 'coverage'] if sys.version_info[0] < 3: install_requires.append('enum34') setup( name='hypothesis', version=__version__, author='David R. MacIver', author_email='david@drmaciver.com', packages=find_packages(SOURCE), package_dir={'': SOURCE}, url='https://github.com/HypothesisWorks/hypothesis-python', license='MPL v2', description='A library for property based testing', zip_safe=False, extras_require=extras, install_requires=install_requires, python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)', 'Operating System :: Unix', 'Operating System :: POSIX', 'Operating System :: Microsoft :: Windows', 'Programming Language :: Python', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy', 'Topic :: Software Development :: Testing', ], entry_points={ 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'], }, long_description=open(README).read(), ) 
hypothesis-python-3.44.1/src/000077500000000000000000000000001321557765100161245ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/000077500000000000000000000000001321557765100203235ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/__init__.py000066400000000000000000000027341321557765100224420ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """Hypothesis is a library for writing unit tests which are parametrized by some source of data. It verifies your code against a wide range of input and minimizes any failing examples it finds. 
""" from hypothesis._settings import settings, Verbosity, Phase, HealthCheck, \ unlimited from hypothesis.version import __version_info__, __version__ from hypothesis.control import assume, note, reject, event from hypothesis.core import given, find, example, seed, reproduce_failure, \ PrintSettings from hypothesis.utils.conventions import infer __all__ = [ 'settings', 'Verbosity', 'HealthCheck', 'Phase', 'PrintSettings', 'assume', 'reject', 'seed', 'given', 'unlimited', 'reproduce_failure', 'find', 'example', 'note', 'event', 'infer', '__version__', '__version_info__', ] hypothesis-python-3.44.1/src/hypothesis/_settings.py000066400000000000000000000545221321557765100227040ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """A module controlling settings for Hypothesis to use in falsification. Either an explicit settings object can be used or the default object on this module can be modified. 
""" from __future__ import division, print_function, absolute_import import os import inspect import warnings import threading from enum import Enum, IntEnum, unique import attr from hypothesis.errors import InvalidArgument, HypothesisDeprecationWarning from hypothesis.configuration import hypothesis_home_dir from hypothesis.utils.conventions import UniqueIdentifier, not_set from hypothesis.internal.validation import try_convert from hypothesis.utils.dynamicvariables import DynamicVariable __all__ = [ 'settings', ] unlimited = UniqueIdentifier('unlimited') all_settings = {} _db_cache = {} class settingsProperty(object): def __init__(self, name, show_default): self.name = name self.show_default = show_default def __get__(self, obj, type=None): if obj is None: return self else: try: return obj.__dict__[self.name] except KeyError: raise AttributeError(self.name) def __set__(self, obj, value): obj.__dict__[self.name] = value def __delete__(self, obj): raise AttributeError('Cannot delete attribute %s' % (self.name,)) @property def __doc__(self): description = all_settings[self.name].description deprecation_message = all_settings[self.name].deprecation_message default = repr(getattr(settings.default, self.name)) if \ self.show_default else '(dynamically calculated)' return '\n\n'.join([description, 'default value: %s' % (default,), (deprecation_message or '').strip()]).strip() default_variable = DynamicVariable(None) class settingsMeta(type): def __init__(self, *args, **kwargs): super(settingsMeta, self).__init__(*args, **kwargs) @property def default(self): v = default_variable.value if v is not None: return v if hasattr(settings, '_current_profile'): settings.load_profile(settings._current_profile) assert default_variable.value is not None return default_variable.value @default.setter def default(self, value): raise AttributeError('Cannot assign settings.default') def _assign_default_internal(self, value): default_variable.value = value class 
settings(settingsMeta('settings', (object,), {})): """A settings object controls a variety of parameters that are used in falsification. These may control both the falsification strategy and the details of the data that is generated. Default values are picked up from the settings.default object and changes made there will be picked up in newly created settings. """ _WHITELISTED_REAL_PROPERTIES = [ '_database', '_construction_complete', 'storage' ] __definitions_are_locked = False _profiles = {} def __getattr__(self, name): if name in all_settings: d = all_settings[name].default if inspect.isfunction(d): d = d() return d else: raise AttributeError('settings has no attribute %s' % (name,)) def __init__( self, parent=None, **kwargs ): self._construction_complete = False self._database = kwargs.pop('database', not_set) database_file = kwargs.get('database_file', not_set) deprecations = [] defaults = parent or settings.default if defaults is not None: for setting in all_settings.values(): if kwargs.get(setting.name, not_set) is not_set: kwargs[setting.name] = getattr(defaults, setting.name) else: if kwargs[setting.name] != setting.future_default: if setting.deprecation_message is not None: deprecations.append(setting) if setting.validator: kwargs[setting.name] = setting.validator( kwargs[setting.name]) if self._database is not_set and database_file is not_set: self._database = defaults.database for name, value in kwargs.items(): if name not in all_settings: raise InvalidArgument( 'Invalid argument %s' % (name,)) setattr(self, name, value) self.storage = threading.local() self._construction_complete = True for d in deprecations: note_deprecation(d.deprecation_message, self) def defaults_stack(self): try: return self.storage.defaults_stack except AttributeError: self.storage.defaults_stack = [] return self.storage.defaults_stack def __call__(self, test): test._hypothesis_internal_use_settings = self return test @classmethod def define_setting( cls, name, description, 
default, options=None, deprecation=None, validator=None, show_default=True, future_default=not_set, deprecation_message=None, ): """Add a new setting. - name is the name of the property that will be used to access the setting. This must be a valid python identifier. - description will appear in the property's docstring - default is the default value. This may be a zero argument function in which case it is evaluated and its result is stored the first time it is accessed on any given settings object. """ if settings.__definitions_are_locked: from hypothesis.errors import InvalidState raise InvalidState( 'settings have been locked and may no longer be defined.' ) if options is not None: options = tuple(options) assert default in options if future_default is not_set: future_default = default all_settings[name] = Setting( name, description.strip(), default, options, validator, future_default, deprecation_message, ) setattr(settings, name, settingsProperty(name, show_default)) @classmethod def lock_further_definitions(cls): settings.__definitions_are_locked = True def __setattr__(self, name, value): if name in settings._WHITELISTED_REAL_PROPERTIES: return object.__setattr__(self, name, value) elif name == 'database': assert self._construction_complete raise AttributeError( 'settings objects are immutable and may not be assigned to' ' after construction.' ) elif name in all_settings: if self._construction_complete: raise AttributeError( 'settings objects are immutable and may not be assigned to' ' after construction.' ) else: setting = all_settings[name] if ( setting.options is not None and value not in setting.options ): raise InvalidArgument( 'Invalid %s, %r. 
Valid options: %r' % ( name, value, setting.options ) ) return object.__setattr__(self, name, value) else: raise AttributeError('No such setting %s' % (name,)) def __repr__(self): bits = [] for name in all_settings: value = getattr(self, name) bits.append('%s=%r' % (name, value)) bits.sort() return 'settings(%s)' % ', '.join(bits) @property def database(self): """An ExampleDatabase instance to use for storage of examples. May be None. If this was explicitly set at settings instantiation then that value will be used (even if it was None). If not and the database_file setting is not None this will be lazily loaded as an ExampleDatabase, using that file the first time that this property is accessed on a particular thread. """ if self._database is not_set and self.database_file is not None: from hypothesis.database import ExampleDatabase if self.database_file not in _db_cache: _db_cache[self.database_file] = ( ExampleDatabase(self.database_file)) return _db_cache[self.database_file] if self._database is not_set: self._database = None return self._database def __enter__(self): default_context_manager = default_variable.with_value(self) self.defaults_stack().append(default_context_manager) default_context_manager.__enter__() return self def __exit__(self, *args, **kwargs): default_context_manager = self.defaults_stack().pop() return default_context_manager.__exit__(*args, **kwargs) @staticmethod def register_profile(name, settings): """registers a collection of values to be used as a settings profile. These settings can be loaded in by name. Enable different defaults for different settings. - settings is a settings object """ settings._profiles[name] = settings @staticmethod def get_profile(name): """Return the profile with the given name. 
- name is a string representing the name of the profile to load A InvalidArgument exception will be thrown if the profile does not exist """ try: return settings._profiles[name] except KeyError: raise InvalidArgument( "Profile '{0}' has not been registered".format( name ) ) @staticmethod def load_profile(name): """Loads in the settings defined in the profile provided If the profile does not exist an InvalidArgument will be thrown. Any setting not defined in the profile will be the library defined default for that setting """ settings._current_profile = name settings._assign_default_internal(settings.get_profile(name)) @attr.s() class Setting(object): name = attr.ib() description = attr.ib() default = attr.ib() options = attr.ib() validator = attr.ib() future_default = attr.ib() deprecation_message = attr.ib() settings.define_setting( 'min_satisfying_examples', default=5, description=""" Raise Unsatisfiable for any tests which do not produce at least this many values that pass all assume() calls and which have not exhaustively covered the search space. """ ) settings.define_setting( 'max_examples', default=100, description=""" Once this many satisfying examples have been considered without finding any counter-example, falsification will terminate. """ ) settings.define_setting( 'max_iterations', default=1000, description=""" Once this many iterations of the example loop have run, including ones which failed to satisfy assumptions and ones which produced duplicates, falsification will terminate. """ ) settings.define_setting( 'buffer_size', default=8 * 1024, description=""" The size of the underlying data used to generate examples. If you need to generate really large examples you may want to increase this, but it will make your tests slower. 
""" ) settings.define_setting( 'max_shrinks', default=500, description=""" Once this many successful shrinks have been performed, Hypothesis will assume something has gone a bit wrong and give up rather than continuing to try to shrink the example. """ ) def _validate_timeout(n): if n is unlimited: return -1 else: return n settings.define_setting( 'timeout', default=60, description=""" Once this many seconds have passed, falsify will terminate even if it has not found many examples. This is a soft rather than a hard limit - Hypothesis won't e.g. interrupt execution of the called function to stop it. If this value is <= 0 then no timeout will be applied. """, deprecation_message=""" The timeout setting is deprecated and will be removed in a future version of Hypothesis. To get the future behaviour set ``timeout=hypothesis.unlimited`` instead (which will remain valid for a further deprecation period after this setting has gone away). """, future_default=unlimited, validator=_validate_timeout ) settings.define_setting( 'derandomize', default=False, description=""" If this is True then hypothesis will run in deterministic mode where each falsification uses a random number generator that is seeded based on the hypothesis to falsify, which will be consistent across multiple runs. This has the advantage that it will eliminate any randomness from your tests, which may be preferable for some situations. It does have the disadvantage of making your tests less likely to find novel breakages. """ ) settings.define_setting( 'strict', default=os.getenv('HYPOTHESIS_STRICT_MODE') == 'true', description=""" If set to True, anything that would cause Hypothesis to issue a warning will instead raise an error. Note that new warnings may be added at any time, so running with strict set to True means that new Hypothesis releases may validly break your code. Note also that, as strict mode is itself deprecated, enabling it is now an error! 
You can enable this setting temporarily by setting the HYPOTHESIS_STRICT_MODE environment variable to the string 'true'. """, deprecation_message=""" Strict mode is deprecated and will go away in a future version of Hypothesis. To get the same behaviour, use warnings.simplefilter('error', HypothesisDeprecationWarning). """, future_default=False, ) settings.define_setting( 'database_file', default=lambda: ( os.getenv('HYPOTHESIS_DATABASE_FILE') or os.path.join(hypothesis_home_dir(), 'examples') ), show_default=False, description=""" An instance of hypothesis.database.ExampleDatabase that will be used to save examples to and load previous examples from. May be None in which case no storage will be used. """ ) @unique class Phase(IntEnum): explicit = 0 reuse = 1 generate = 2 shrink = 3 @unique class HealthCheck(Enum): """Arguments for :attr:`~hypothesis.settings.suppress_health_check`. Each member of this enum is a type of health check to suppress. """ exception_in_generation = 0 """Deprecated and no longer does anything. It used to convert errors in data generation into FailedHealthCheck error.""" data_too_large = 1 """Check for when the typical size of the examples you are generating exceeds the maximum allowed size too often.""" filter_too_much = 2 """Check for when the test is filtering out too many examples, either through use of :func:`~hypothesis.assume()` or :ref:`filter() `, or occasionally for Hypothesis internal reasons.""" too_slow = 3 """Check for when your data generation is extremely slow and likely to hurt testing.""" random_module = 4 """Deprecated and no longer does anything. It used to check for whether your tests used the global random module. 
Now @given tests automatically seed random so this is no longer an error.""" return_value = 5 """Checks if your tests return a non-None value (which will be ignored and is unlikely to do what you want).""" hung_test = 6 """Checks if your tests have been running for a very long time.""" large_base_example = 7 """Checks if the natural example to shrink towards is very large.""" @unique class Statistics(IntEnum): never = 0 interesting = 1 always = 2 class Verbosity(object): def __repr__(self): return 'Verbosity.%s' % (self.name,) def __init__(self, name, level): self.name = name self.level = level def __eq__(self, other): return isinstance(other, Verbosity) and ( self.level == other.level ) def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return self.level def __lt__(self, other): return self.level < other.level def __le__(self, other): return self.level <= other.level def __gt__(self, other): return self.level > other.level def __ge__(self, other): return self.level >= other.level @classmethod def by_name(cls, key): result = getattr(cls, key, None) if isinstance(result, Verbosity): return result raise InvalidArgument('No such verbosity level %r' % (key,)) Verbosity.quiet = Verbosity('quiet', 0) Verbosity.normal = Verbosity('normal', 1) Verbosity.verbose = Verbosity('verbose', 2) Verbosity.debug = Verbosity('debug', 3) Verbosity.all = [ Verbosity.quiet, Verbosity.normal, Verbosity.verbose, Verbosity.debug ] ENVIRONMENT_VERBOSITY_OVERRIDE = os.getenv('HYPOTHESIS_VERBOSITY_LEVEL') if ENVIRONMENT_VERBOSITY_OVERRIDE: # pragma: no cover DEFAULT_VERBOSITY = Verbosity.by_name(ENVIRONMENT_VERBOSITY_OVERRIDE) else: DEFAULT_VERBOSITY = Verbosity.normal settings.define_setting( 'verbosity', options=Verbosity.all, default=DEFAULT_VERBOSITY, description='Control the verbosity level of Hypothesis messages', ) def _validate_phases(phases): if phases is None: return tuple(Phase) phases = tuple(phases) for a in phases: if not isinstance(a, Phase): raise 
InvalidArgument('%r is not a valid phase' % (a,)) return phases settings.define_setting( 'phases', default=tuple(Phase), description=( 'Control which phases should be run. ' + 'See :ref:`the full documentation for more details `' ), validator=_validate_phases, ) settings.define_setting( name='stateful_step_count', default=50, description=""" Number of steps to run a stateful program for before giving up on it breaking. """ ) settings.define_setting( 'perform_health_check', default=True, description=u""" If set to True, Hypothesis will run a preliminary health check before attempting to actually execute your test. """ ) def validate_health_check_suppressions(suppressions): suppressions = try_convert(list, suppressions, 'suppress_health_check') for s in suppressions: if not isinstance(s, HealthCheck): note_deprecation(( 'Non-HealthCheck value %r of type %s in suppress_health_check ' 'will be ignored, and will become an error in a future ' 'version of Hypothesis') % ( s, type(s).__name__, )) elif s in ( HealthCheck.exception_in_generation, HealthCheck.random_module ): note_deprecation(( '%s is now ignored and suppressing it is a no-op. This will ' 'become an error in a future version of Hypothesis. Simply ' 'remove it from your list of suppressions to get the same ' 'effect.') % (s,)) return suppressions settings.define_setting( 'suppress_health_check', default=(), description="""A list of health checks to disable.""", validator=validate_health_check_suppressions ) settings.define_setting( 'deadline', default=not_set, description=u""" If set, a time in milliseconds (which may be a float to express smaller units of time) that each individual example (i.e. each time your test function is called, not the whole decorated test) within a test is not allowed to exceed. Tests which take longer than that may be converted into errors (but will not necessarily be if close to the deadline, to allow some variability in test run time). 
Set this to None to disable this behaviour entirely. In future this will default to 200. For now, a HypothesisDeprecationWarning will be emitted if you exceed that default deadline and have not explicitly set a deadline yourself. """ ) settings.define_setting( 'use_coverage', default=True, description=""" Whether to use coverage information to improve Hypothesis's ability to find bugs. You should generally leave this turned on unless your code performs poorly when run under coverage. If you turn it off, please file a bug report or add a comment to an existing one about the problem that prompted you to do so. """ ) class PrintSettings(Enum): """Flags to determine whether or not to print a detailed example blob to use with :func:`~hypothesis.reproduce_failure` for failing test cases.""" NEVER = 0 """Never print a blob.""" INFER = 1 """Make an educated guess as to whether it would be appropriate to print the blob. The current rules are that this will print if both: 1. The output from Hypothesis appears to be unsuitable for use with :func:`~hypothesis.example`. 2. The output is not too long.""" ALWAYS = 2 """Always print a blob on failure.""" settings.define_setting( 'print_blob', default=PrintSettings.INFER, description=""" Determines whether to print blobs after tests that can be used to reproduce failures. See :ref:`the documentation on @reproduce_failure ` for more details of this behaviour. 
""" ) settings.lock_further_definitions() settings.register_profile('default', settings()) settings.load_profile('default') assert settings.default is not None def note_deprecation(message, s=None): if s is None: s = settings.default assert s is not None verbosity = s.verbosity warning = HypothesisDeprecationWarning(message) if verbosity > Verbosity.quiet: warnings.warn(warning, stacklevel=3) hypothesis-python-3.44.1/src/hypothesis/configuration.py000066400000000000000000000031241321557765100235440ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os __hypothesis_home_directory_default = os.path.join(os.getcwd(), '.hypothesis') __hypothesis_home_directory = None def set_hypothesis_home_dir(directory): global __hypothesis_home_directory __hypothesis_home_directory = directory def mkdir_p(path): try: os.makedirs(path) except OSError: pass def hypothesis_home_dir(): global __hypothesis_home_directory if not __hypothesis_home_directory: __hypothesis_home_directory = os.getenv( 'HYPOTHESIS_STORAGE_DIRECTORY') if not __hypothesis_home_directory: __hypothesis_home_directory = __hypothesis_home_directory_default mkdir_p(__hypothesis_home_directory) return __hypothesis_home_directory def storage_directory(*names): path = os.path.join(hypothesis_home_dir(), *names) mkdir_p(path) return path def tmpdir(): return storage_directory('tmp') hypothesis-python-3.44.1/src/hypothesis/control.py000066400000000000000000000076011321557765100223610ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import traceback from hypothesis.errors import CleanupFailed, InvalidArgument, \ UnsatisfiedAssumption from hypothesis.reporting import report from hypothesis.utils.dynamicvariables import DynamicVariable def reject(): raise UnsatisfiedAssumption() def assume(condition): """``assume()`` is like an :ref:`assert ` that marks the example as bad, rather than failing the test. This allows you to specify properties that you *assume* will be true, and let Hypothesis try to avoid similar examples in future. """ if not condition: raise UnsatisfiedAssumption() return True _current_build_context = DynamicVariable(None) def current_build_context(): context = _current_build_context.value if context is None: raise InvalidArgument( u'No build context registered') return context class BuildContext(object): def __init__(self, data, is_final=False, close_on_capture=True): self.data = data self.tasks = [] self.is_final = is_final self.close_on_capture = close_on_capture self.close_on_del = False self.notes = [] def __enter__(self): self.assign_variable = _current_build_context.with_value(self) self.assign_variable.__enter__() return self def __exit__(self, exc_type, exc_value, tb): self.assign_variable.__exit__(exc_type, exc_value, tb) if self.close() and exc_type is None: raise CleanupFailed() def local(self): return _current_build_context.with_value(self) def close(self): any_failed = False for task in self.tasks: try: task() except BaseException: any_failed = True report(traceback.format_exc()) return any_failed def cleanup(teardown): """Register a function to be called when the current test has finished executing. Any exceptions thrown in teardown will be printed but not rethrown. Inside a test this isn't very interesting, because you can just use a finally block, but note that you can use this inside map, flatmap, etc. in order to e.g. insist that a value is closed at the end. 
""" context = _current_build_context.value if context is None: raise InvalidArgument( u'Cannot register cleanup outside of build context') context.tasks.append(teardown) def note(value): """Report this value in the final execution.""" context = _current_build_context.value if context is None: raise InvalidArgument( 'Cannot make notes outside of a test') context.notes.append(value) if context.is_final: report(value) def event(value): """Record an event that occurred this test. Statistics on number of test runs with each event will be reported at the end if you run Hypothesis in statistics reporting mode. Events should be strings or convertible to them. """ context = _current_build_context.value if context is None: raise InvalidArgument( 'Cannot make record events outside of a test') if context.data is not None: context.data.note_event(value) hypothesis-python-3.44.1/src/hypothesis/core.py000066400000000000000000001247671321557765100216460ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER """This module provides the core primitives of Hypothesis, such as given.""" from __future__ import division, print_function, absolute_import import os import ast import sys import time import zlib import base64 import traceback from random import Random import attr from coverage import CoverageData from coverage.files import canonical_filename from coverage.collector import Collector import hypothesis.strategies as st from hypothesis import __version__ from hypothesis.errors import Flaky, Timeout, NoSuchExample, \ Unsatisfiable, DidNotReproduce, InvalidArgument, DeadlineExceeded, \ MultipleFailures, FailedHealthCheck, UnsatisfiedAssumption, \ HypothesisDeprecationWarning from hypothesis.control import BuildContext from hypothesis._settings import settings as Settings from hypothesis._settings import Phase, Verbosity, HealthCheck, \ PrintSettings, note_deprecation from hypothesis.executors import new_style_executor from hypothesis.reporting import report, verbose_report, current_verbosity from hypothesis.statistics import note_engine_for_statistics from hypothesis.internal.compat import ceil, str_to_bytes, \ benchmark_time, get_type_hints, getfullargspec, encoded_filepath from hypothesis.internal.coverage import IN_COVERAGE_TESTS from hypothesis.utils.conventions import infer, not_set from hypothesis.internal.escalation import is_hypothesis_file, \ escalate_hypothesis_internal_error from hypothesis.internal.reflection import is_mock, proxies, nicerepr, \ arg_string, impersonate, function_digest, fully_qualified_name, \ define_function_signature, convert_positional_arguments, \ get_pretty_function_description from hypothesis.internal.healthcheck import fail_health_check from hypothesis.internal.conjecture.data import Status, StopTest, \ ConjectureData from hypothesis.searchstrategy.strategies import SearchStrategy from hypothesis.internal.conjecture.engine import ExitReason, \ ConjectureRunner, sort_key try: from coverage.tracer import 
CFileDisposition as FileDisposition except ImportError: # pragma: no cover from coverage.collector import FileDisposition running_under_pytest = False global_force_seed = None def new_random(): import random return random.Random(random.getrandbits(128)) @attr.s() class Example(object): args = attr.ib() kwargs = attr.ib() def example(*args, **kwargs): """A decorator which ensures a specific example is always tested.""" if args and kwargs: raise InvalidArgument( 'Cannot mix positional and keyword arguments for examples' ) if not (args or kwargs): raise InvalidArgument( 'An example must provide at least one argument' ) def accept(test): if not hasattr(test, 'hypothesis_explicit_examples'): test.hypothesis_explicit_examples = [] test.hypothesis_explicit_examples.append(Example(tuple(args), kwargs)) return test return accept def seed(seed): """seed: Start the test execution from a specific seed. May be any hashable object. No exact meaning for seed is provided other than that for a fixed seed value Hypothesis will try the same actions (insofar as it can given external sources of non- determinism. e.g. timing and hash randomization). Overrides the derandomize setting if it is present. """ def accept(test): test._hypothesis_internal_use_seed = seed return test return accept def reproduce_failure(version, blob): """Run the example that corresponds to this data blob in order to reproduce a failure. A test with this decorator *always* runs only one example and always fails. If the provided example does not cause a failure, or is in some way invalid for this test, then this will fail with a DidNotReproduce error. This decorator is not intended to be a permanent addition to your test suite. It's simply some code you can add to ease reproduction of a problem in the event that you don't have access to the test database. Because of this, *no* compatibility guarantees are made between different versions of Hypothesis - its API may change arbitrarily from version to version. 
""" def accept(test): test._hypothesis_internal_use_reproduce_failure = (version, blob) return test return accept def encode_failure(buffer): buffer = bytes(buffer) compressed = zlib.compress(buffer) if len(compressed) < len(buffer): buffer = b'\1' + compressed else: buffer = b'\0' + buffer return base64.b64encode(buffer) def decode_failure(blob): try: buffer = base64.b64decode(blob) except Exception: raise InvalidArgument('Invalid base64 encoded string: %r' % (blob,)) prefix = buffer[:1] if prefix == b'\0': return buffer[1:] elif prefix == b'\1': return zlib.decompress(buffer[1:]) else: raise InvalidArgument( 'Could not decode blob %r: Invalid start byte %r' % ( blob, prefix )) class WithRunner(SearchStrategy): def __init__(self, base, runner): assert runner is not None self.base = base self.runner = runner def do_draw(self, data): data.hypothesis_runner = self.runner return self.base.do_draw(data) def is_invalid_test( name, original_argspec, generator_arguments, generator_kwargs ): def invalid(message): def wrapped_test(*arguments, **kwargs): raise InvalidArgument(message) return wrapped_test if not (generator_arguments or generator_kwargs): return invalid( 'given must be called with at least one argument') if generator_arguments and any([original_argspec.varargs, original_argspec.varkw, original_argspec.kwonlyargs]): return invalid( 'positional arguments to @given are not supported with varargs, ' 'varkeywords, or keyword-only arguments' ) if len(generator_arguments) > len(original_argspec.args): return invalid(( 'Too many positional arguments for %s() (got %d but' ' expected at most %d') % ( name, len(generator_arguments), len(original_argspec.args))) if infer in generator_arguments: return invalid('infer was passed as a positional argument to @given, ' 'but may only be passed as a keyword argument') if generator_arguments and generator_kwargs: return invalid( 'cannot mix positional and keyword arguments to @given' ) extra_kwargs = [ k for k in generator_kwargs 
        if k not in original_argspec.args + original_argspec.kwonlyargs
    ]
    if extra_kwargs and not original_argspec.varkw:
        return invalid(
            '%s() got an unexpected keyword argument %r' % (
                name, extra_kwargs[0]
            ))
    for a in original_argspec.args:
        if isinstance(a, list):  # pragma: no cover
            return invalid((
                'Cannot decorate function %s() because it has '
                'destructuring arguments') % (
                    name,
            ))
    if original_argspec.defaults or original_argspec.kwonlydefaults:
        return invalid('Cannot apply @given to a function with defaults.')
    missing = [repr(kw) for kw in original_argspec.kwonlyargs
               if kw not in generator_kwargs]
    if missing:
        raise InvalidArgument('Missing required kwarg{}: {}'.format(
            's' if len(missing) > 1 else '', ', '.join(missing)))


def execute_explicit_examples(
    test_runner, test, wrapped_test, settings, arguments, kwargs
):
    original_argspec = getfullargspec(test)

    for example in reversed(getattr(
        wrapped_test, 'hypothesis_explicit_examples', ()
    )):
        example_kwargs = dict(original_argspec.kwonlydefaults or {})
        if example.args:
            if len(example.args) > len(original_argspec.args):
                raise InvalidArgument(
                    'example has too many arguments for test. '
                    'Expected at most %d but got %d' % (
                        len(original_argspec.args), len(example.args)))
            example_kwargs.update(dict(zip(
                original_argspec.args[-len(example.args):],
                example.args
            )))
        else:
            example_kwargs.update(example.kwargs)
        if Phase.explicit not in settings.phases:
            continue
        example_kwargs.update(kwargs)
        # Note: Test may mutate arguments and we can't rerun explicit
        # examples, so we have to calculate the failure message at this
        # point rather than later.
example_string = '%s(%s)' % ( test.__name__, arg_string(test, arguments, example_kwargs) ) try: with BuildContext(None) as b: if settings.verbosity >= Verbosity.verbose: report('Trying example: ' + example_string) test_runner( None, lambda data: test(*arguments, **example_kwargs) ) except BaseException: report('Falsifying example: ' + example_string) for n in b.notes: report(n) raise def get_random_for_wrapped_test(test, wrapped_test): settings = wrapped_test._hypothesis_internal_use_settings wrapped_test._hypothesis_internal_use_generated_seed = None if wrapped_test._hypothesis_internal_use_seed is not None: return Random( wrapped_test._hypothesis_internal_use_seed) elif settings.derandomize: return Random(function_digest(test)) elif global_force_seed is not None: return Random(global_force_seed) else: import random seed = random.getrandbits(128) wrapped_test._hypothesis_internal_use_generated_seed = seed return Random(seed) def process_arguments_to_given( wrapped_test, arguments, kwargs, generator_arguments, generator_kwargs, argspec, test, settings ): selfy = None arguments, kwargs = convert_positional_arguments( wrapped_test, arguments, kwargs) # If the test function is a method of some kind, the bound object # will be the first named argument if there are any, otherwise the # first vararg (if any). if argspec.args: selfy = kwargs.get(argspec.args[0]) elif arguments: selfy = arguments[0] # Ensure that we don't mistake mocks for self here. # This can cause the mock to be used as the test runner. 
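`get_random_for_wrapped_test` above derives its `Random` in a fixed priority order: an explicit `@seed`, then `derandomize` (seeded from a digest of the test function), then a globally forced seed, then a fresh 128-bit seed that is remembered so it can be reported on failure. The `derandomize` branch relies only on `Random` being deterministic for a given seed. An isolated sketch, where `digest_of` is a hypothetical stand-in for `function_digest`:

```python
import hashlib
from random import Random


def digest_of(name):
    # Stand-in for function_digest(): any stable bytes derived from the
    # test will do; here we just hash a name (hypothetical helper).
    return hashlib.sha384(name.encode('utf-8')).digest()


# Seeding Random with the same digest twice replays the same choices,
# which is what makes derandomize=True give reproducible runs without
# storing any per-run state.
r1 = Random(digest_of('test_sorting'))
r2 = Random(digest_of('test_sorting'))
assert ([r1.randrange(2 ** 64) for _ in range(5)] ==
        [r2.randrange(2 ** 64) for _ in range(5)])
```

Different tests hash to different digests, so derandomized runs are stable per test but not correlated across tests.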
    if is_mock(selfy):
        selfy = None

    test_runner = new_style_executor(selfy)

    arguments = tuple(arguments)

    search_strategy = st.tuples(
        st.just(arguments),
        st.fixed_dictionaries(generator_kwargs).map(
            lambda args: dict(args, **kwargs)
        )
    )

    if selfy is not None:
        search_strategy = WithRunner(search_strategy, selfy)

    search_strategy.validate()

    return arguments, kwargs, test_runner, search_strategy


def skip_exceptions_to_reraise():
    """Return a tuple of exceptions meaning 'skip this test', to re-raise.

    This is intended to cover most common test runners; if you would
    like another to be added please open an issue or pull request.
    """
    import unittest

    # This is a set because nose may simply re-export unittest.SkipTest
    exceptions = set([unittest.SkipTest])

    try:  # pragma: no cover
        from unittest2 import SkipTest
        exceptions.add(SkipTest)
    except ImportError:
        pass

    try:  # pragma: no cover
        # Skipped lives in the private _pytest package; there is no
        # public pytest.runner module to import it from.
        from _pytest.runner import Skipped
        exceptions.add(Skipped)
    except ImportError:
        pass

    try:  # pragma: no cover
        from nose import SkipTest as NoseSkipTest
        exceptions.add(NoseSkipTest)
    except ImportError:
        pass

    return tuple(sorted(exceptions, key=str))


exceptions_to_reraise = skip_exceptions_to_reraise()


def new_given_argspec(original_argspec, generator_kwargs):
    """Make an updated argspec for the wrapped test."""
    new_args = [a for a in original_argspec.args if a not in generator_kwargs]
    new_kwonlyargs = [a for a in original_argspec.kwonlyargs
                      if a not in generator_kwargs]
    annots = {k: v for k, v in original_argspec.annotations.items()
              if k in new_args + new_kwonlyargs}
    annots['return'] = None
    return original_argspec._replace(
        args=new_args, kwonlyargs=new_kwonlyargs, annotations=annots)


ROOT = os.path.dirname(__file__)

STDLIB = os.path.dirname(os.__file__)


def hypothesis_check_include(filename):  # pragma: no cover
    if is_hypothesis_file(filename):
        return False
    return filename.endswith('.py')


def escalate_warning(msg, slug=None):  # pragma: no cover
    if slug is not None:
        msg = '%s (%s)' % (msg, slug)
    raise
AssertionError( 'Unexpected warning from coverage: %s' % (msg,) ) class Arc(object): __slots__ = ('filename', 'source', 'target') def __init__(self, filename, source, target): self.filename = filename self.source = source self.target = target ARC_CACHE = {} def arc(filename, source, target): try: return ARC_CACHE[filename][source][target] except KeyError: result = Arc(filename, source, target) ARC_CACHE.setdefault( filename, {}).setdefault(source, {})[target] = result return result in_given = False FORCE_PURE_TRACER = os.getenv('HYPOTHESIS_FORCE_PURE_TRACER') == 'true' class StateForActualGivenExecution(object): def __init__( self, test_runner, search_strategy, test, settings, random, had_seed ): self.test_runner = test_runner self.search_strategy = search_strategy self.settings = settings self.at_least_one_success = False self.last_exception = None self.falsifying_examples = () self.__was_flaky = False self.random = random self.__warned_deadline = False self.__existing_collector = None self.__test_runtime = None self.__had_seed = had_seed self.test = test self.coverage_data = CoverageData() self.files_to_propagate = set() self.failed_normally = False self.used_examples_from_database = False if settings.use_coverage and not IN_COVERAGE_TESTS: # pragma: no cover if Collector._collectors: parent = Collector._collectors[-1] # We include any files the collector has already decided to # trace whether or not on re-investigation we still think it # wants to trace them. The reason for this is that in some # cases coverage gets the wrong answer when we run it # ourselves due to reasons that are our fault but are hard to # fix (we lie about where certain functions come from). # This causes us to not record the actual test bodies as # covered. But if we intended to trace test bodies then the # file must already have been traced when getting to this point # and so will already be in the collector's data. Hence we can # use that information to get the correct answer here. 
# See issue 997 for more context. self.files_to_propagate = set(parent.data) self.hijack_collector(parent) self.collector = Collector( branch=True, timid=FORCE_PURE_TRACER, should_trace=self.should_trace, check_include=hypothesis_check_include, concurrency='thread', warn=escalate_warning, ) self.collector.reset() # Hide the other collectors from this one so it doesn't attempt to # pause them (we're doing trace function management ourselves so # this will just cause problems). self.collector._collectors = [] else: self.collector = None def execute( self, data, print_example=False, is_final=False, expected_failure=None, collect=False, ): text_repr = [None] if self.settings.deadline is None: test = self.test else: @proxies(self.test) def test(*args, **kwargs): self.__test_runtime = None initial_draws = len(data.draw_times) start = benchmark_time() result = self.test(*args, **kwargs) finish = benchmark_time() internal_draw_time = sum(data.draw_times[initial_draws:]) runtime = (finish - start - internal_draw_time) * 1000 self.__test_runtime = runtime if self.settings.deadline is not_set: if ( not self.__warned_deadline and runtime >= 200 ): self.__warned_deadline = True note_deprecation(( 'Test took %.2fms to run. In future the default ' 'deadline setting will be 200ms, which will ' 'make this an error. You can set deadline to ' 'an explicit value of e.g. 
%d to turn tests ' 'slower than this into an error, or you can set ' 'it to None to disable this check entirely.') % ( runtime, ceil(runtime / 100) * 100, )) else: current_deadline = self.settings.deadline if not is_final: current_deadline *= 1.25 if runtime >= current_deadline: raise DeadlineExceeded(runtime, self.settings.deadline) return result def run(data): if not hasattr(data, 'can_reproduce_example_from_repr'): data.can_reproduce_example_from_repr = True with self.settings: with BuildContext(data, is_final=is_final): import random as rnd_module rnd_module.seed(0) args, kwargs = data.draw(self.search_strategy) if expected_failure is not None: text_repr[0] = arg_string(test, args, kwargs) if print_example: example = '%s(%s)' % ( test.__name__, arg_string(test, args, kwargs)) try: ast.parse(example) except SyntaxError: data.can_reproduce_example_from_repr = False report('Falsifying example: %s' % (example,)) elif current_verbosity() >= Verbosity.verbose: report( lambda: 'Trying example: %s(%s)' % ( test.__name__, arg_string(test, args, kwargs))) if self.collector is None or not collect: return test(*args, **kwargs) else: # pragma: no cover try: self.collector.start() return test(*args, **kwargs) finally: self.collector.stop() result = self.test_runner(data, run) if expected_failure is not None: exception, traceback = expected_failure if ( isinstance( exception, DeadlineExceeded ) and self.__test_runtime is not None ): report(( 'Unreliable test timings! On an initial run, this ' 'test took %.2fms, which exceeded the deadline of ' '%.2fms, but on a subsequent run it took %.2f ms, ' 'which did not. If you expect this sort of ' 'variability in your test timings, consider turning ' 'deadlines off for this test by setting deadline=None.' ) % ( exception.runtime, self.settings.deadline, self.__test_runtime )) else: report( 'Failed to reproduce exception. 
Expected: \n' + traceback, ) self.__flaky(( 'Hypothesis %s(%s) produces unreliable results: Falsified' ' on the first call but did not on a subsequent one' ) % (test.__name__, text_repr[0],)) return result def should_trace(self, original_filename, frame): # pragma: no cover disp = FileDisposition() assert original_filename is not None disp.original_filename = original_filename disp.canonical_filename = encoded_filepath( canonical_filename(original_filename)) disp.source_filename = disp.canonical_filename disp.reason = '' disp.file_tracer = None disp.has_dynamic_filename = False disp.trace = hypothesis_check_include(disp.canonical_filename) if not disp.trace: disp.reason = 'hypothesis internal reasons' elif self.__existing_collector is not None: check = self.__existing_collector.should_trace( original_filename, frame) if check.trace: self.files_to_propagate.add(check.canonical_filename) return disp def hijack_collector(self, collector): # pragma: no cover self.__existing_collector = collector original_save_data = collector.save_data def save_data(covdata): original_save_data(covdata) if collector.branch: covdata.add_arcs({ filename: { arc: None for arc in self.coverage_data.arcs(filename) or ()} for filename in self.files_to_propagate }) else: covdata.add_lines({ filename: { line: None for line in self.coverage_data.lines(filename) or ()} for filename in self.files_to_propagate }) collector.save_data = original_save_data collector.save_data = save_data def evaluate_test_data(self, data): try: if self.collector is None: result = self.execute(data) else: # pragma: no cover # This should always be a no-op, but the coverage tracer has # a bad habit of resurrecting itself. 
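The coverage bookkeeping around this point turns each measured branch into an interned `Arc` object via the `arc()` helper defined above, so that identical `(filename, source, target)` triples always yield the same instance and can be used cheaply as tags in a set. The interning pattern, reproduced in isolation as a small demo:

```python
class Arc(object):
    # Flyweight for a coverage branch; __slots__ keeps per-instance
    # memory small since many arcs are created.
    __slots__ = ('filename', 'source', 'target')

    def __init__(self, filename, source, target):
        self.filename = filename
        self.source = source
        self.target = target


ARC_CACHE = {}


def arc(filename, source, target):
    # Two-level dict lookup; build and memoize the Arc on a miss.
    try:
        return ARC_CACHE[filename][source][target]
    except KeyError:
        result = Arc(filename, source, target)
        ARC_CACHE.setdefault(
            filename, {}).setdefault(source, {})[target] = result
        return result


a = arc('f.py', 1, 2)
assert arc('f.py', 1, 2) is a        # same triple -> same object
assert arc('f.py', 1, 3) is not a    # different target -> new Arc
```

Because interned arcs compare by identity, set membership and deduplication of tags avoid any per-comparison tuple hashing of the triple.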
original = sys.gettrace() sys.settrace(None) try: self.collector.data = {} result = self.execute(data, collect=True) finally: sys.settrace(original) covdata = CoverageData() self.collector.save_data(covdata) self.coverage_data.update(covdata) for filename in covdata.measured_files(): if is_hypothesis_file(filename): continue data.tags.update( arc(filename, source, target) for source, target in covdata.arcs(filename) ) if result is not None and self.settings.perform_health_check: fail_health_check(self.settings, ( 'Tests run under @given should return None, but ' '%s returned %r instead.' ) % (self.test.__name__, result), HealthCheck.return_value) self.at_least_one_success = True return False except UnsatisfiedAssumption: data.mark_invalid() except ( HypothesisDeprecationWarning, FailedHealthCheck, StopTest, ) + exceptions_to_reraise: raise except Exception as e: escalate_hypothesis_internal_error() data.__expected_traceback = traceback.format_exc() data.__expected_exception = e verbose_report(data.__expected_traceback) error_class, _, tb = sys.exc_info() origin = traceback.extract_tb(tb)[-1] filename = origin[0] lineno = origin[1] data.mark_interesting((error_class, filename, lineno)) def run(self): # Tell pytest to omit the body of this function from tracebacks __tracebackhide__ = True if global_force_seed is None: database_key = str_to_bytes(fully_qualified_name(self.test)) else: database_key = None self.start_time = time.time() global in_given runner = ConjectureRunner( self.evaluate_test_data, settings=self.settings, random=self.random, database_key=database_key, ) try: if in_given or self.collector is None: runner.run() else: # pragma: no cover in_given = True original_trace = sys.gettrace() try: sys.settrace(None) runner.run() finally: in_given = False sys.settrace(original_trace) note_engine_for_statistics(runner) run_time = time.time() - self.start_time finally: self.used_examples_from_database = \ runner.used_examples_from_database if 
runner.used_examples_from_database: if self.settings.derandomize: note_deprecation( 'In future derandomize will imply database=None, but your ' 'test is currently using examples from the database. To ' 'get the future behaviour, update your settings to ' 'include database=None.' ) if self.__had_seed: note_deprecation( 'In future use of @seed will imply database=None in your ' 'settings, but your test is currently using examples from ' 'the database. To get the future behaviour, update your ' 'settings for this test to include database=None.' ) timed_out = runner.exit_reason == ExitReason.timeout if runner.last_data is None: return if runner.interesting_examples: self.falsifying_examples = sorted( [d for d in runner.interesting_examples.values()], key=lambda d: sort_key(d.buffer), reverse=True ) else: if timed_out: note_deprecation(( 'Your tests are hitting the settings timeout (%.2fs). ' 'This functionality will go away in a future release ' 'and you should not rely on it. Instead, try setting ' 'max_examples to be some value lower than %d (the number ' 'of examples your test successfully ran here). Or, if you ' 'would prefer your tests to run to completion, regardless ' 'of how long they take, you can set the timeout value to ' 'hypothesis.unlimited.' ) % ( self.settings.timeout, runner.valid_examples), self.settings) if runner.valid_examples < min( self.settings.min_satisfying_examples, self.settings.max_examples, ) and not ( runner.exit_reason == ExitReason.finished and self.at_least_one_success ): if timed_out: raise Timeout(( 'Ran out of time before finding a satisfying ' 'example for ' '%s. Only found %d examples in ' + '%.2fs.' ) % ( get_pretty_function_description(self.test), runner.valid_examples, run_time )) else: raise Unsatisfiable(( 'Unable to satisfy assumptions of hypothesis ' '%s. 
Only %d examples considered ' 'satisfied assumptions' ) % ( get_pretty_function_description(self.test), runner.valid_examples,)) if not self.falsifying_examples: return self.failed_normally = True flaky = 0 for falsifying_example in self.falsifying_examples: ran_example = ConjectureData.for_buffer(falsifying_example.buffer) self.__was_flaky = False assert falsifying_example.__expected_exception is not None try: self.execute( ran_example, print_example=True, is_final=True, expected_failure=( falsifying_example.__expected_exception, falsifying_example.__expected_traceback, ) ) except (UnsatisfiedAssumption, StopTest): report(traceback.format_exc()) self.__flaky( 'Unreliable assumption: An example which satisfied ' 'assumptions on the first run now fails it.' ) except BaseException: if len(self.falsifying_examples) <= 1: raise report(traceback.format_exc()) finally: # pragma: no cover # This section is in fact entirely covered by the tests in # test_reproduce_failure, but it seems to trigger a lovely set # of coverage bugs: The branches show up as uncovered (despite # definitely being covered - you can add an assert False else # branch to verify this and see it fail - and additionally the # second branch still complains about lack of coverage even if # you add a pragma: no cover to it! # See https://bitbucket.org/ned/coveragepy/issues/623/ if self.settings.print_blob is not PrintSettings.NEVER: failure_blob = encode_failure(falsifying_example.buffer) # Have to use the example we actually ran, not the original # falsifying example! Otherwise we won't catch problems # where the repr of the generated example doesn't parse. 
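The falsifying examples replayed above were ordered with `key=lambda d: sort_key(d.buffer)`, where `sort_key` is imported from the conjecture engine. To the best of my reading it implements shortlex order (shorter buffers are simpler, ties broken bytewise); a sketch of that helper, not the imported implementation itself:

```python
def sort_key(buffer):
    # Shortlex order: compare by length first, then lexicographically.
    # Shorter buffers correspond to simpler examples, so the shrinker
    # treats the shortlex-minimal buffer as the best one.
    return (len(buffer), buffer)


bufs = [b'\x02', b'\x00\x00', b'\x01']
assert sorted(bufs, key=sort_key) == [b'\x01', b'\x02', b'\x00\x00']
```

Note that `run()` sorts with `reverse=True`, so the most complex failure is replayed first and the simplest one is reported last.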
can_use_repr = ran_example.can_reproduce_example_from_repr if ( self.settings.print_blob is PrintSettings.ALWAYS or ( self.settings.print_blob is PrintSettings.INFER and not can_use_repr and len(failure_blob) < 200 ) ): report(( '\n' 'You can reproduce this example by temporarily ' 'adding @reproduce_failure(%r, %r) as a decorator ' 'on your test case') % ( __version__, failure_blob,)) if self.__was_flaky: flaky += 1 # If we only have one example then we should have raised an error or # flaky prior to this point. assert len(self.falsifying_examples) > 1 if flaky > 0: raise Flaky(( 'Hypothesis found %d distinct failures, but %d of them ' 'exhibited some sort of flaky behaviour.') % ( len(self.falsifying_examples), flaky)) else: raise MultipleFailures(( 'Hypothesis found %d distinct failures.') % ( len(self.falsifying_examples,))) def __flaky(self, message): if len(self.falsifying_examples) <= 1: raise Flaky(message) else: self.__was_flaky = True report('Flaky example! ' + message) def given(*given_arguments, **given_kwargs): """A decorator for turning a test function that accepts arguments into a randomized test. This is the main entry point to Hypothesis. 
""" def run_test_with_generator(test): generator_arguments = tuple(given_arguments) generator_kwargs = dict(given_kwargs) original_argspec = getfullargspec(test) check_invalid = is_invalid_test( test.__name__, original_argspec, generator_arguments, generator_kwargs) if check_invalid is not None: return check_invalid for name, strategy in zip(reversed(original_argspec.args), reversed(generator_arguments)): generator_kwargs[name] = strategy argspec = new_given_argspec(original_argspec, generator_kwargs) @impersonate(test) @define_function_signature( test.__name__, test.__doc__, argspec ) def wrapped_test(*arguments, **kwargs): # Tell pytest to omit the body of this function from tracebacks __tracebackhide__ = True if getattr(test, 'is_hypothesis_test', False): note_deprecation(( 'You have applied @given to a test more than once. In ' 'future this will be an error. Applying @given twice ' 'wraps the test twice, which can be extremely slow. A ' 'similar effect can be gained by combining the arguments ' 'to the two calls to given. 
For example, instead of ' '@given(booleans()) @given(integers()), you could write ' '@given(booleans(), integers())' )) settings = wrapped_test._hypothesis_internal_use_settings random = get_random_for_wrapped_test(test, wrapped_test) if infer in generator_kwargs.values(): hints = get_type_hints(test) for name in [name for name, value in generator_kwargs.items() if value is infer]: if name not in hints: raise InvalidArgument( 'passed %s=infer for %s, but %s has no type annotation' % (name, test.__name__, name)) generator_kwargs[name] = st.from_type(hints[name]) processed_args = process_arguments_to_given( wrapped_test, arguments, kwargs, generator_arguments, generator_kwargs, argspec, test, settings ) arguments, kwargs, test_runner, search_strategy = processed_args state = StateForActualGivenExecution( test_runner, search_strategy, test, settings, random, had_seed=wrapped_test._hypothesis_internal_use_seed ) reproduce_failure = \ wrapped_test._hypothesis_internal_use_reproduce_failure if reproduce_failure is not None: expected_version, failure = reproduce_failure if expected_version != __version__: raise InvalidArgument(( 'Attempting to reproduce a failure from a different ' 'version of Hypothesis. This failure is from %s, but ' 'you are currently running %r. Please change your ' 'Hypothesis version to a matching one.' ) % (expected_version, __version__)) try: state.execute(ConjectureData.for_buffer( decode_failure(failure)), print_example=True, is_final=True, ) raise DidNotReproduce( 'Expected the test to raise an error, but it ' 'completed successfully.' ) except StopTest: raise DidNotReproduce( 'The shape of the test data has changed in some way ' 'from where this blob was defined. Are you sure ' "you're running the same test?" ) except UnsatisfiedAssumption: raise DidNotReproduce( 'The test data failed to satisfy an assumption in the ' 'test. Have you added it since this blob was ' 'generated?' 
) execute_explicit_examples( test_runner, test, wrapped_test, settings, arguments, kwargs ) if settings.max_examples <= 0: return if not ( Phase.reuse in settings.phases or Phase.generate in settings.phases ): return try: state.run() except BaseException: generated_seed = \ wrapped_test._hypothesis_internal_use_generated_seed if generated_seed is not None and not state.failed_normally: if running_under_pytest: report(( 'You can add @seed(%(seed)d) to this test or run ' 'pytest with --hypothesis-seed=%(seed)d to ' 'reproduce this failure.') % { 'seed': generated_seed},) else: report(( 'You can add @seed(%d) to this test to reproduce ' 'this failure.') % (generated_seed,)) raise for attrib in dir(test): if not (attrib.startswith('_') or hasattr(wrapped_test, attrib)): setattr(wrapped_test, attrib, getattr(test, attrib)) wrapped_test.is_hypothesis_test = True wrapped_test._hypothesis_internal_use_seed = getattr( test, '_hypothesis_internal_use_seed', None ) wrapped_test._hypothesis_internal_use_settings = getattr( test, '_hypothesis_internal_use_settings', None ) or Settings.default wrapped_test._hypothesis_internal_use_reproduce_failure = getattr( test, '_hypothesis_internal_use_reproduce_failure', None ) return wrapped_test return run_test_with_generator def find(specifier, condition, settings=None, random=None, database_key=None): """Returns the minimal example from the given strategy ``specifier`` that matches the predicate function ``condition``.""" settings = settings or Settings( max_examples=2000, min_satisfying_examples=0, max_shrinks=2000, ) settings = Settings(settings, perform_health_check=False) if database_key is None and settings.database is not None: database_key = function_digest(condition) if not isinstance(specifier, SearchStrategy): raise InvalidArgument( 'Expected SearchStrategy but got %r of type %s' % ( specifier, type(specifier).__name__ )) specifier.validate() search = specifier random = random or new_random() successful_examples = [0] 
last_data = [None] def template_condition(data): with BuildContext(data): try: data.is_find = True result = data.draw(search) data.note(result) success = condition(result) except UnsatisfiedAssumption: data.mark_invalid() if success: successful_examples[0] += 1 if settings.verbosity == Verbosity.verbose: if not successful_examples[0]: report(lambda: u'Trying example %s' % ( nicerepr(result), )) elif success: if successful_examples[0] == 1: report(lambda: u'Found satisfying example %s' % ( nicerepr(result), )) else: report(lambda: u'Shrunk example to %s' % ( nicerepr(result), )) last_data[0] = data if success and not data.frozen: data.mark_interesting() start = time.time() runner = ConjectureRunner( template_condition, settings=settings, random=random, database_key=database_key, ) runner.run() note_engine_for_statistics(runner) run_time = time.time() - start if runner.last_data.status == Status.INTERESTING: data = ConjectureData.for_buffer(runner.last_data.buffer) with BuildContext(data): return data.draw(search) if ( runner.valid_examples <= settings.min_satisfying_examples and runner.exit_reason != ExitReason.finished ): if settings.timeout > 0 and run_time > settings.timeout: raise Timeout(( 'Ran out of time before finding enough valid examples for ' '%s. Only %d valid examples found in %.2f seconds.' ) % ( get_pretty_function_description(condition), runner.valid_examples, run_time)) else: raise Unsatisfiable(( 'Unable to satisfy assumptions of ' '%s. Only %d examples considered satisfied assumptions' ) % ( get_pretty_function_description(condition), runner.valid_examples,)) raise NoSuchExample(get_pretty_function_description(condition)) hypothesis-python-3.44.1/src/hypothesis/database.py000066400000000000000000000214471321557765100224510ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. 
MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import re
import binascii
import threading
from hashlib import sha1
from contextlib import contextmanager

from hypothesis._settings import note_deprecation
from hypothesis.internal.compat import FileNotFoundError, hbytes, \
    b64decode, b64encode

sqlite3 = None
# Match paths ending in .db, .sqlite or .sqlite3. The parentheses must
# not be escaped: r"\.\(db|sqlite|sqlite3\)$" would only ever match
# literal "(" and ")" characters, so the group is left unescaped here.
SQLITE_PATH = re.compile(r'\.(db|sqlite|sqlite3)$')


def _db_for_path(path=None):
    if path in (None, ':memory:'):
        return InMemoryExampleDatabase()
    path = str(path)
    if os.path.isdir(path):
        return DirectoryBasedExampleDatabase(path)
    if os.path.exists(path):
        return SQLiteExampleDatabase(path)
    if SQLITE_PATH.search(path):
        return SQLiteExampleDatabase(path)
    else:
        return DirectoryBasedExampleDatabase(path)


class EDMeta(type):

    def __call__(self, *args, **kwargs):
        if self is ExampleDatabase:
            return _db_for_path(*args, **kwargs)
        return super(EDMeta, self).__call__(*args, **kwargs)


class ExampleDatabase(EDMeta('ExampleDatabase', (object,), {})):
    """Interface class for storage systems.

    A key -> multiple distinct values mapping.

    Keys and values are binary data.
    """

    def save(self, key, value):
        """Save this value under this key.

        If this value is already present for this key, silently do
        nothing.
        """
        raise NotImplementedError('%s.save' % (type(self).__name__))

    def delete(self, key, value):
        """Remove this value from this key.

        If this value is not present, silently do nothing.
""" raise NotImplementedError('%s.delete' % (type(self).__name__)) def move(self, src, dest, value): """Move value from key src to key dest. Equivalent to delete(src, value) followed by save(src, value) but may have a more efficient implementation. Note that value will be inserted at dest regardless of whether it is currently present at src. """ if src == dest: self.save(src, value) return self.delete(src, value) self.save(dest, value) def fetch(self, key): """Return all values matching this key.""" raise NotImplementedError('%s.fetch' % (type(self).__name__)) def close(self): """Clear up any resources associated with this database.""" raise NotImplementedError('%s.close' % (type(self).__name__)) class InMemoryExampleDatabase(ExampleDatabase): def __init__(self): self.data = {} def __repr__(self): return 'InMemoryExampleDatabase(%r)' % (self.data,) def fetch(self, key): for v in self.data.get(key, ()): yield v def save(self, key, value): self.data.setdefault(key, set()).add(hbytes(value)) def delete(self, key, value): self.data.get(key, set()).discard(hbytes(value)) def close(self): pass class SQLiteExampleDatabase(ExampleDatabase): def __init__(self, path=u':memory:'): self.path = path self.db_created = False self.current_connection = threading.local() global sqlite3 import sqlite3 if path == u':memory:': note_deprecation( 'The SQLite database backend has been deprecated. ' 'Use InMemoryExampleDatabase or set database_file=":memory:" ' 'instead.' ) else: note_deprecation( 'The SQLite database backend has been deprecated. ' 'Set database_file to some path name not ending in .db, ' '.sqlite or .sqlite3 to get the new directory based database ' 'backend instead.' 
) def connection(self): if not hasattr(self.current_connection, 'connection'): self.current_connection.connection = sqlite3.connect(self.path) return self.current_connection.connection def close(self): if hasattr(self.current_connection, 'connection'): try: self.connection().close() finally: del self.current_connection.connection def __repr__(self): return u'%s(%s)' % (self.__class__.__name__, self.path) @contextmanager def cursor(self): conn = self.connection() cursor = conn.cursor() try: try: yield cursor finally: cursor.close() except BaseException: conn.rollback() raise else: conn.commit() def save(self, key, value): self.create_db_if_needed() with self.cursor() as cursor: try: cursor.execute(""" insert into hypothesis_data_mapping(key, value) values(?, ?) """, (b64encode(key), b64encode(value))) except sqlite3.IntegrityError: pass def delete(self, key, value): self.create_db_if_needed() with self.cursor() as cursor: cursor.execute(""" delete from hypothesis_data_mapping where key = ? and value = ? """, (b64encode(key), b64encode(value))) def fetch(self, key): self.create_db_if_needed() with self.cursor() as cursor: cursor.execute(""" select value from hypothesis_data_mapping where key = ? 
""", (b64encode(key),)) for (value,) in cursor: try: yield b64decode(value) except (binascii.Error, TypeError): pass def create_db_if_needed(self): if self.db_created: return with self.cursor() as cursor: cursor.execute(""" create table if not exists hypothesis_data_mapping( key text, value text, unique(key, value) ) """) self.db_created = True def mkdirp(path): try: os.makedirs(path) except OSError: pass return path def _hash(key): return sha1(key).hexdigest()[:16] class DirectoryBasedExampleDatabase(ExampleDatabase): def __init__(self, path): self.path = path self.keypaths = {} def __repr__(self): return 'DirectoryBasedExampleDatabase(%r)' % (self.path,) def close(self): pass def _key_path(self, key): try: return self.keypaths[key] except KeyError: pass directory = os.path.join(self.path, _hash(key)) mkdirp(directory) self.keypaths[key] = directory return directory def _value_path(self, key, value): return os.path.join( self._key_path(key), sha1(value).hexdigest()[:16] ) def fetch(self, key): kp = self._key_path(key) for path in os.listdir(kp): try: with open(os.path.join(kp, path), 'rb') as i: yield hbytes(i.read()) except FileNotFoundError: pass def save(self, key, value): path = self._value_path(key, value) if not os.path.exists(path): suffix = binascii.hexlify(os.urandom(16)) if not isinstance(suffix, str): # pragma: no branch # On Python 3, binascii.hexlify returns bytes suffix = suffix.decode('ascii') tmpname = path + '.' 
                + suffix
            with open(tmpname, 'wb') as o:
                o.write(value)
            try:
                os.rename(tmpname, path)
            except OSError:  # pragma: no cover
                os.unlink(tmpname)
            assert not os.path.exists(tmpname)

    def move(self, src, dest, value):
        if src == dest:
            self.save(src, value)
            return
        try:
            os.rename(
                self._value_path(src, value), self._value_path(dest, value))
        except OSError:
            self.delete(src, value)
            self.save(dest, value)

    def delete(self, key, value):
        try:
            os.unlink(self._value_path(key, value))
        except OSError:
            pass
hypothesis-python-3.44.1/src/hypothesis/errors.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import


class HypothesisException(Exception):
    """Generic parent class for exceptions thrown by Hypothesis."""


class CleanupFailed(HypothesisException):
    """At least one cleanup task failed and no other exception was
    raised."""


class UnsatisfiedAssumption(HypothesisException):
    """An internal error raised by assume.

    If you're seeing this error something has gone wrong.
    """


class BadTemplateDraw(HypothesisException):
    """An internal error raised when something unfortunate happened during
    template generation and you should restart the draw, preferably with a
    new parameter.
This is not an error condition internally, but if you ever see this in your code it's probably a Hypothesis bug """ class NoSuchExample(HypothesisException): """The condition we have been asked to satisfy appears to be always false. This does not guarantee that no example exists, only that we were unable to find one. """ def __init__(self, condition_string, extra=''): super(NoSuchExample, self).__init__( 'No examples found of condition %s%s' % ( condition_string, extra) ) class DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover """Hypothesis used to be able to detect exhaustive coverage of a search space and no longer can. This exception remains for compatibility reasons for now but can never actually be thrown. """ class NoExamples(HypothesisException): """Raised when example() is called on a strategy but we cannot find any examples after enough tries that we really should have been able to if this was ever going to work.""" class Unsatisfiable(HypothesisException): """We ran out of time or examples before we could find enough examples which satisfy the assumptions of this hypothesis. This could be because the function is too slow. If so, try upping the timeout. It could also be because the function is using assume in a way that is too hard to satisfy. If so, try writing a custom strategy or using a better starting point (e.g if you are requiring a list has unique values you could instead filter out all duplicate values from the list) """ class Flaky(HypothesisException): """This function appears to fail non-deterministically: We have seen it fail when passed this example at least once, but a subsequent invocation did not fail. Common causes for this problem are: 1. The function depends on external state. e.g. it uses an external random number generator. Try to make a version that passes all the relevant state in from Hypothesis. 2. The function is suffering from too much recursion and its failure depends sensitively on where it's been called from. 3. 
The function is timing sensitive and can fail or pass depending on how long it takes. Try breaking it up into smaller functions which don't do that and testing those instead. """ class Timeout(Unsatisfiable): """We were unable to find enough examples that satisfied the preconditions of this hypothesis in the amount of time allotted to us.""" class WrongFormat(HypothesisException, ValueError): """An exception indicating you have attempted to serialize a value that does not match the type described by this format.""" class BadData(HypothesisException, ValueError): """The data that we got out of the database does not seem to match the data we could have put into the database given this schema.""" class InvalidArgument(HypothesisException, TypeError): """Used to indicate that the arguments to a Hypothesis function were in some manner incorrect.""" class ResolutionFailed(InvalidArgument): """Hypothesis had to resolve a type to a strategy, but this failed. Type inference is best-effort, so this only happens when an annotation exists but could not be resolved for a required argument to the target of ``builds()``, or where the user passed ``infer``. 
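The lookup-then-fail shape of this error can be illustrated with a toy sketch (the registry contents and the ``resolve`` helper below are invented for illustration and are not Hypothesis's real resolution machinery):

```python
# Toy model of annotation-to-strategy resolution. The registry and the
# `resolve` helper are invented names, not part of Hypothesis itself.
class ResolutionFailed(TypeError):
    pass


_registry = {int: 'integers()', str: 'text()'}


def resolve(annotation):
    try:
        return _registry[annotation]
    except KeyError:
        raise ResolutionFailed(
            'Could not resolve %r to a strategy' % (annotation,))
```

Here ``resolve(int)`` succeeds while an unregistered type raises, mirroring how an unresolvable annotation on a required ``builds()`` argument surfaces as this exception.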
""" class InvalidState(HypothesisException): """The system is not in a state where you were allowed to do that.""" class InvalidDefinition(HypothesisException, TypeError): """Used to indicate that a class definition was not well put together and has something wrong with it.""" class AbnormalExit(HypothesisException): """Raised when a test running in a child process exits without returning or raising an exception.""" class FailedHealthCheck(HypothesisException, Warning): """Raised when a test fails a preliminary healthcheck that occurs before execution.""" def __init__(self, message, check): super(FailedHealthCheck, self).__init__(message) self.health_check = check class HypothesisDeprecationWarning(HypothesisException, FutureWarning): pass class Frozen(HypothesisException): """Raised when a mutation method has been called on a ConjectureData object after freeze() has been called.""" class MultipleFailures(HypothesisException): """Indicates that Hypothesis found more than one distinct bug when testing your code.""" class DeadlineExceeded(HypothesisException): """Raised when an individual test body has taken too long to run.""" def __init__(self, runtime, deadline): super(DeadlineExceeded, self).__init__(( 'Test took %.2fms, which exceeds the deadline of ' '%.2fms') % (runtime, deadline)) self.runtime = runtime self.deadline = deadline class StopTest(BaseException): def __init__(self, testcounter): super(StopTest, self).__init__(repr(testcounter)) self.testcounter = testcounter class DidNotReproduce(HypothesisException): pass hypothesis-python-3.44.1/src/hypothesis/executors.py000066400000000000000000000042021321557765100227140ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import def default_executor(function): # pragma: nocover raise NotImplementedError() # We don't actually use this any more def setup_teardown_executor(setup, teardown): setup = setup or (lambda: None) teardown = teardown or (lambda ex: None) def execute(function): token = None try: token = setup() return function() finally: teardown(token) return execute def executor(runner): try: return runner.execute_example except AttributeError: pass if ( hasattr(runner, 'setup_example') or hasattr(runner, 'teardown_example') ): return setup_teardown_executor( getattr(runner, 'setup_example', None), getattr(runner, 'teardown_example', None), ) return default_executor def default_new_style_executor(data, function): return function(data) class ConjectureRunner(object): def hypothesis_execute_example_with_data(self, data, function): return function(data) def new_style_executor(runner): if runner is None: return default_new_style_executor if isinstance(runner, ConjectureRunner): return runner.hypothesis_execute_example_with_data old_school = executor(runner) if old_school is default_executor: return default_new_style_executor else: return lambda data, function: old_school( lambda: function(data) ) hypothesis-python-3.44.1/src/hypothesis/extra/000077500000000000000000000000001321557765100214465ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/extra/__init__.py000066400000000000000000000012001321557765100235500ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # 
https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/src/hypothesis/extra/datetime.py000066400000000000000000000075771321557765100236340ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """This module provides deprecated time and date related strategies. It depends on the ``pytz`` package, which is stable enough that almost any version should be compatible - most updates are for the timezone database. 
""" from __future__ import division, print_function, absolute_import import datetime as dt import pytz import hypothesis.strategies as st from hypothesis.errors import InvalidArgument from hypothesis._settings import note_deprecation from hypothesis.extra.pytz import timezones as timezones_strategy __all__ = ['datetimes', 'dates', 'times'] def tz_args_strat(allow_naive, tz_list, name): if tz_list is None: tz_strat = timezones_strategy() else: tz_strat = st.sampled_from([ tz if isinstance(tz, dt.tzinfo) else pytz.timezone(tz) for tz in tz_list ]) if allow_naive or (allow_naive is None and tz_strat.is_empty): tz_strat = st.none() | tz_strat if tz_strat.is_empty: raise InvalidArgument( 'Cannot create non-naive %s with no timezones allowed.' % name) return tz_strat def convert_year_bound(val, default): if val is None: return default try: return default.replace(val) except ValueError: raise InvalidArgument('Invalid year=%r' % (val,)) @st.defines_strategy def datetimes(allow_naive=None, timezones=None, min_year=None, max_year=None): """Return a strategy for generating datetimes. .. deprecated:: 3.9.0 use :py:func:`hypothesis.strategies.datetimes` instead. allow_naive=True will cause the values to sometimes be naive. timezones is the set of permissible timezones. If set to an empty collection all datetimes will be naive. If set to None all timezones available via pytz will be used. All generated datetimes will be between min_year and max_year, inclusive. """ note_deprecation('Use hypothesis.strategies.datetimes, which supports ' 'full-precision bounds and has a simpler API.') min_dt = convert_year_bound(min_year, dt.datetime.min) max_dt = convert_year_bound(max_year, dt.datetime.max) tzs = tz_args_strat(allow_naive, timezones, 'datetimes') return st.datetimes(min_dt, max_dt, tzs) @st.defines_strategy def dates(min_year=None, max_year=None): """Return a strategy for generating dates. .. deprecated:: 3.9.0 use :py:func:`hypothesis.strategies.dates` instead. 
All generated dates will be between min_year and max_year, inclusive. """ note_deprecation('Use hypothesis.strategies.dates, which supports bounds ' 'given as date objects for single-day resolution.') return st.dates(convert_year_bound(min_year, dt.date.min), convert_year_bound(max_year, dt.date.max)) @st.defines_strategy def times(allow_naive=None, timezones=None): """Return a strategy for generating times. .. deprecated:: 3.9.0 use :py:func:`hypothesis.strategies.times` instead. The allow_naive and timezones arguments act the same as the datetimes strategy above. """ note_deprecation('Use hypothesis.strategies.times, which supports ' 'min_time and max_time arguments.') return st.times(timezones=tz_args_strat(allow_naive, timezones, 'times')) hypothesis-python-3.44.1/src/hypothesis/extra/django/000077500000000000000000000000001321557765100227105ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/extra/django/__init__.py000066400000000000000000000024001321557765100250150ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER import unittest import django.test as dt class HypothesisTestCase(object): def setup_example(self): self._pre_setup() def teardown_example(self, example): self._post_teardown() def __call__(self, result=None): testMethod = getattr(self, self._testMethodName) if getattr(testMethod, u'is_hypothesis_test', False): return unittest.TestCase.__call__(self, result) else: return dt.SimpleTestCase.__call__(self, result) class TestCase(HypothesisTestCase, dt.TestCase): pass class TransactionTestCase(HypothesisTestCase, dt.TransactionTestCase): pass hypothesis-python-3.44.1/src/hypothesis/extra/django/models.py000066400000000000000000000135021321557765100245460ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import string from decimal import Decimal from datetime import timedelta import django.db.models as dm from django.db import IntegrityError from django.conf import settings as django_settings from django.core.exceptions import ValidationError import hypothesis.strategies as st from hypothesis import reject from hypothesis.errors import InvalidArgument from hypothesis.extra.pytz import timezones from hypothesis.provisional import emails, ip4_addr_strings, \ ip6_addr_strings from hypothesis.utils.conventions import UniqueIdentifier def get_tz_strat(): if getattr(django_settings, 'USE_TZ', False): return timezones() return st.none() __default_field_mappings = None def field_mappings(): global __default_field_mappings if __default_field_mappings is None: # Sized fields are handled in _get_strategy_for_field() # URL fields are not yet handled __default_field_mappings = { dm.SmallIntegerField: st.integers(-32768, 32767), dm.IntegerField: st.integers(-2147483648, 2147483647), dm.BigIntegerField: st.integers(-9223372036854775808, 9223372036854775807), dm.PositiveIntegerField: st.integers(0, 2147483647), dm.PositiveSmallIntegerField: st.integers(0, 32767), dm.BinaryField: st.binary(), dm.BooleanField: st.booleans(), dm.DateField: st.dates(), dm.DateTimeField: st.datetimes(timezones=get_tz_strat()), dm.DurationField: st.timedeltas(), dm.EmailField: emails(), dm.FloatField: st.floats(), dm.NullBooleanField: st.one_of(st.none(), st.booleans()), dm.TimeField: st.times(timezones=get_tz_strat()), dm.UUIDField: st.uuids(), } # SQLite does not support timezone-aware times, or timedeltas that # don't fit in six bytes of microseconds, so we override those db = getattr(django_settings, 'DATABASES', {}).get('default', {}) if db.get('ENGINE', '').endswith('.sqlite3'): # pragma: no branch sqlite_delta = timedelta(microseconds=2 ** 47 - 1) __default_field_mappings.update({ dm.TimeField: st.times(), 
dm.DurationField: st.timedeltas(-sqlite_delta, sqlite_delta), }) return __default_field_mappings def add_default_field_mapping(field_type, strategy): field_mappings()[field_type] = strategy default_value = UniqueIdentifier(u'default_value') def validator_to_filter(f): """Converts the field run_validators method to something suitable for use in filter.""" def validate(value): try: f.run_validators(value) return True except ValidationError: return False return validate def _get_strategy_for_field(f): if f.choices: choices = [] for value, name_or_optgroup in f.choices: if isinstance(name_or_optgroup, (list, tuple)): choices.extend(key for key, _ in name_or_optgroup) else: choices.append(value) if isinstance(f, (dm.CharField, dm.TextField)) and f.blank: choices.insert(0, u'') strategy = st.sampled_from(choices) elif type(f) == dm.SlugField: strategy = st.text(alphabet=string.ascii_letters + string.digits, min_size=(None if f.blank else 1), max_size=f.max_length) elif type(f) == dm.GenericIPAddressField: lookup = {'both': ip4_addr_strings() | ip6_addr_strings(), 'ipv4': ip4_addr_strings(), 'ipv6': ip6_addr_strings()} strategy = lookup[f.protocol.lower()] elif type(f) in (dm.TextField, dm.CharField): strategy = st.text(min_size=(None if f.blank else 1), max_size=f.max_length) elif type(f) == dm.DecimalField: bound = Decimal(10 ** f.max_digits - 1) / (10 ** f.decimal_places) strategy = st.decimals(min_value=-bound, max_value=bound, places=f.decimal_places) else: strategy = field_mappings().get(type(f), st.nothing()) if f.validators: strategy = strategy.filter(validator_to_filter(f)) if f.null: strategy = st.one_of(st.none(), strategy) return strategy def models(model, **extra): """Return a strategy for instances of a model.""" result = {k: v for k, v in extra.items() if v is not default_value} missed = [] for f in model._meta.concrete_fields: if not (f.name in extra or isinstance(f, dm.AutoField)): result[f.name] = _get_strategy_for_field(f) if result[f.name].is_empty: 
            missed.append(f.name)
    if missed:
        raise InvalidArgument(
            u'Missing arguments for mandatory field%s %s for model %s'
            % (u's' if len(missed) > 1 else u'',
               u', '.join(missed), model.__name__))
    return _models_impl(st.builds(model.objects.get_or_create, **result))


@st.composite
def _models_impl(draw, strat):
    """Handle the nasty part of drawing a value for models()"""
    try:
        return draw(strat)[0]
    except IntegrityError:
        reject()
hypothesis-python-3.44.1/src/hypothesis/extra/fakefactory.py000066400000000000000000000067301321557765100243240ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import random as globalrandom
from random import Random

import faker
from faker.factory import AVAILABLE_LOCALES

from hypothesis._settings import note_deprecation
from hypothesis.internal.compat import text_type
from hypothesis.internal.reflection import check_valid_identifier
from hypothesis.searchstrategy.strategies import SearchStrategy


def fake_factory(source, locale=None, locales=None, providers=()):
    note_deprecation(
        'hypothesis.extra.fakefactory has been deprecated, because it does '
        'not support example discovery or shrinking. Consider using a lower-'
        'level strategy (such as st.text()) instead.'
    )
    check_valid_identifier(source)
    if source[0] == u'_':
        raise ValueError(u'Bad source name %s' % (source,))
    if locale is not None and locales is not None:
        raise ValueError(u'Cannot specify both single and multiple locales')
    if locale:
        locales = (locale,)
    elif locales:
        locales = tuple(locales)
    else:
        locales = None

    for l in (locales or ()):
        if l not in AVAILABLE_LOCALES:
            raise ValueError(u'Unsupported locale %r' % (l,))

    def supports_source(locale):
        test_faker = faker.Faker(locale)
        for provider in providers:
            test_faker.add_provider(provider)
        return hasattr(test_faker, source)

    if locales is None:
        locales = list(filter(supports_source, AVAILABLE_LOCALES))
        if not locales:
            raise ValueError(u'No such source %r' % (source,))
    else:
        for l in locales:
            if not supports_source(l):
                raise ValueError(u'Unsupported source %s for locale %s' % (
                    source, l
                ))
    return FakeFactoryStrategy(source, providers, locales)


class FakeFactoryStrategy(SearchStrategy):

    def __init__(self, source, providers, locales):
        self.source = source
        self.providers = tuple(providers)
        self.locales = tuple(locales)
        self.factories = {}

    def do_draw(self, data):
        seed = data.draw_bytes(4)
        random = Random(bytes(seed))
        return self.gen_example(random)

    def factory_for(self, locale):
        try:
            return self.factories[locale]
        except KeyError:
            pass
        factory = faker.Faker(locale=locale)
        self.factories[locale] = factory
        for p in self.providers:
            factory.add_provider(p)
        return factory

    def gen_example(self, random):
        factory = self.factory_for(random.choice(self.locales))
        original = globalrandom.getstate()
        seed = random.getrandbits(128)
        try:
            factory.seed(seed)
            return text_type(getattr(factory, self.source)())
        finally:
            globalrandom.setstate(original)
hypothesis-python-3.44.1/src/hypothesis/extra/numpy.py000066400000000000000000000502531321557765100231750ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is
copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import math import numpy as np import hypothesis.strategies as st import hypothesis.internal.conjecture.utils as cu from hypothesis.errors import InvalidArgument from hypothesis.searchstrategy import SearchStrategy from hypothesis.internal.compat import hrange, text_type from hypothesis.internal.coverage import check_function from hypothesis.internal.reflection import proxies TIME_RESOLUTIONS = tuple('Y M D h m s ms us ns ps fs as'.split()) @st.defines_strategy_with_reusable_values def from_dtype(dtype): # Compound datatypes, eg 'f4,f4,f4' if dtype.names is not None: # mapping np.void.type over a strategy is nonsense, so return now. return st.tuples( *[from_dtype(dtype.fields[name][0]) for name in dtype.names]) # Subarray datatypes, eg '(2, 3)i4' if dtype.subdtype is not None: subtype, shape = dtype.subdtype return arrays(subtype, shape) # Scalar datatypes if dtype.kind == u'b': result = st.booleans() elif dtype.kind == u'f': result = st.floats() elif dtype.kind == u'c': result = st.complex_numbers() elif dtype.kind in (u'S', u'a'): # Numpy strings are null-terminated; only allow round-trippable values. 
# `itemsize == 0` means 'fixed length determined at array creation' result = st.binary(max_size=dtype.itemsize or None ).filter(lambda b: b[-1:] != b'\0') elif dtype.kind == u'u': result = st.integers(min_value=0, max_value=2 ** (8 * dtype.itemsize) - 1) elif dtype.kind == u'i': overflow = 2 ** (8 * dtype.itemsize - 1) result = st.integers(min_value=-overflow, max_value=overflow - 1) elif dtype.kind == u'U': # Encoded in UTF-32 (four bytes/codepoint) and null-terminated result = st.text(max_size=(dtype.itemsize or 0) // 4 or None ).filter(lambda b: b[-1:] != u'\0') elif dtype.kind in (u'm', u'M'): if '[' in dtype.str: res = st.just(dtype.str.split('[')[-1][:-1]) else: res = st.sampled_from(TIME_RESOLUTIONS) result = st.builds(dtype.type, st.integers(-2**63, 2**63 - 1), res) else: raise InvalidArgument(u'No strategy inference for {}'.format(dtype)) return result.map(dtype.type) @check_function def check_argument(condition, fail_message, *f_args, **f_kwargs): if not condition: raise InvalidArgument(fail_message.format(*f_args, **f_kwargs)) @check_function def order_check(name, floor, small, large): check_argument( floor <= small, u'min_{name} must be at least {} but was {}', floor, small, name=name ) check_argument( small <= large, u'min_{name}={} is larger than max_{name}={}', small, large, name=name ) class ArrayStrategy(SearchStrategy): def __init__(self, element_strategy, shape, dtype, fill, unique): self.shape = tuple(shape) self.fill = fill check_argument(shape, u'Array shape must have at least one dimension, ' u'provided shape was {}', shape) check_argument(all(isinstance(s, int) for s in shape), u'Array shape must be integer in each dimension, ' u'provided shape was {}', shape) self.array_size = int(np.prod(shape)) self.dtype = dtype self.element_strategy = element_strategy self.unique = unique def do_draw(self, data): if 0 in self.shape: return np.zeros(dtype=self.dtype, shape=self.shape) # This could legitimately be a np.empty, but the performance gains for 
# that would be so marginal that there's really not much point risking # undefined behaviour shenanigans. result = np.zeros(shape=self.array_size, dtype=self.dtype) if self.fill.is_empty: # We have no fill value (either because the user explicitly # disabled it or because the default behaviour was used and our # elements strategy does not produce reusable values), so we must # generate a fully dense array with a freshly drawn value for each # entry. if self.unique: seen = set() elements = cu.many( data, min_size=self.array_size, max_size=self.array_size, average_size=self.array_size ) i = 0 while elements.more(): # We assign first because this means we check for # uniqueness after numpy has converted it to the relevant # type for us. Because we don't increment the counter on # a duplicate we will overwrite it on the next draw. result[i] = data.draw(self.element_strategy) if result[i] not in seen: seen.add(result[i]) i += 1 else: elements.reject() else: for i in hrange(len(result)): result[i] = data.draw(self.element_strategy) else: # We draw numpy arrays as "sparse with an offset". We draw a # collection of index assignments within the array and assign # fresh values from our elements strategy to those indices. If at # the end we have not assigned every element then we draw a single # value from our fill strategy and use that to populate the # remaining positions with that strategy. elements = cu.many( data, min_size=0, max_size=self.array_size, # sqrt isn't chosen for any particularly principled reason. It # just grows reasonably quickly but sublinearly, and for small # arrays it represents a decent fraction of the array size. 
average_size=math.sqrt(self.array_size), ) needs_fill = np.full(self.array_size, True) seen = set() while elements.more(): i = cu.integer_range(data, 0, self.array_size - 1) if not needs_fill[i]: elements.reject() continue result[i] = data.draw(self.element_strategy) if self.unique: if result[i] in seen: elements.reject() continue else: seen.add(result[i]) needs_fill[i] = False if needs_fill.any(): # We didn't fill all of the indices in the early loop, so we # put a fill value into the rest. # We have to do this hilarious little song and dance to work # around numpy's special handling of iterable values. If the # value here were e.g. a tuple then neither array creation # nor putmask would do the right thing. But by creating an # array of size one and then assigning the fill value as a # single element, we both get an array with the right value in # it and putmask will do the right thing by repeating the # values of the array across the mask. one_element = np.zeros(shape=1, dtype=self.dtype) one_element[0] = data.draw(self.fill) fill_value = one_element[0] if self.unique: try: is_nan = np.isnan(fill_value) except TypeError: is_nan = False if not is_nan: raise InvalidArgument( 'Cannot fill unique array with non-NaN ' 'value %r' % (fill_value,)) np.putmask(result, needs_fill, one_element) return result.reshape(self.shape) @check_function def fill_for(elements, unique, fill, name=''): if fill is None: if unique or not elements.has_reusable_values: fill = st.nothing() else: fill = elements else: st.check_strategy(fill, '%s.fill' % (name,) if name else 'fill') return fill @st.composite def arrays( draw, dtype, shape, elements=None, fill=None, unique=False ): """Returns a strategy for generating :class:`numpy's ndarrays`. * ``dtype`` may be any valid input to :class:`numpy.dtype ` (this includes ``dtype`` objects), or a strategy that generates such values. 
* ``shape`` may be an integer >= 0, a tuple of length >= 0 of such integers, or a strategy that generates such values. * ``elements`` is a strategy for generating values to put in the array. If it is None a suitable value will be inferred based on the dtype, which may give any legal value (including eg ``NaN`` for floats). If you have more specific requirements, you should supply your own elements strategy. * ``fill`` is a strategy that may be used to generate a single background value for the array. If None, a suitable default will be inferred based on the other arguments. If set to :func:`st.nothing() ` then filling behaviour will be disabled entirely and every element will be generated independently. * ``unique`` specifies if the elements of the array should all be distinct from one another. Note that in this case multiple NaN values may still be allowed. If fill is also set, the only valid values for it to return are NaN values (anything for which :func:`numpy.isnan` returns True. So e.g. for complex numbers (nan+1j) is also a valid fill). Note that if unique is set to True the generated values must be hashable. Arrays of specified ``dtype`` and ``shape`` are generated for example like this: .. code-block:: pycon >>> import numpy as np >>> arrays(np.int8, (2, 3)).example() array([[-8, 6, 3], [-6, 4, 6]], dtype=int8) - See :doc:`What you can generate and how `. .. code-block:: pycon >>> import numpy as np >>> from hypothesis.strategies import floats >>> arrays(np.float, 3, elements=floats(0, 1)).example() array([ 0.88974794, 0.77387938, 0.1977879 ]) Array values are generated in two parts: 1. Some subset of the coordinates of the array are populated with a value drawn from the elements strategy (or its inferred form). 2. If any coordinates were not assigned in the previous step, a single value is drawn from the fill strategy and is assigned to all remaining places. 
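    The two-part scheme above can be sketched in plain Python (a simplified
    model with invented names -- not the actual drawing logic, which works on
    Hypothesis's internal data stream and numpy arrays):

```python
import random


def sparse_draw(size, draw_element, draw_fill, rng):
    # Step 1: populate a random subset of coordinates with fresh elements.
    result = [None] * size
    needs_fill = [True] * size
    for _ in range(rng.randrange(size + 1)):
        i = rng.randrange(size)
        if needs_fill[i]:
            result[i] = draw_element(rng)
            needs_fill[i] = False
    # Step 2: a single fill value is shared by every remaining coordinate.
    fill_value = draw_fill(rng)
    for i in range(size):
        if needs_fill[i]:
            result[i] = fill_value
    return result
```

    With a constant fill such as ``lambda rng: 0``, most positions share the
    background value, which is what makes large generated arrays cheap.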
You can set fill to :func:`~hypothesis.strategies.nothing` if you want to disable this behaviour and draw a value for every element. If fill is set to None then it will attempt to infer the correct behaviour automatically: If unique is True, no filling will occur by default. Otherwise, if it looks safe to reuse the values of elements across multiple coordinates (this will be the case for any inferred strategy, and for most of the builtins, but is not the case for mutable values or strategies built with flatmap, map, composite, etc) then it will use the elements strategy as the fill, else it will default to having no fill. Having a fill helps Hypothesis craft high quality examples, but its main importance is when the array generated is large: Hypothesis is primarily designed around testing small examples. If you have arrays with hundreds or more elements, having a fill value is essential if you want your tests to run in reasonable time. """ if isinstance(dtype, SearchStrategy): dtype = draw(dtype) dtype = np.dtype(dtype) if elements is None: elements = from_dtype(dtype) if isinstance(shape, SearchStrategy): shape = draw(shape) if isinstance(shape, int): shape = (shape,) shape = tuple(shape) if not shape: if dtype.kind != u'O': return draw(elements) fill = fill_for( elements=elements, unique=unique, fill=fill ) return draw(ArrayStrategy(elements, shape, dtype, fill, unique)) @st.defines_strategy def array_shapes(min_dims=1, max_dims=3, min_side=1, max_side=10): """Return a strategy for array shapes (tuples of int >= 1).""" order_check('dims', 1, min_dims, max_dims) order_check('side', 1, min_side, max_side) return st.lists(st.integers(min_side, max_side), min_size=min_dims, max_size=max_dims).map(tuple) @st.defines_strategy def scalar_dtypes(): """Return a strategy that can return any non-flexible scalar dtype.""" return st.one_of(boolean_dtypes(), integer_dtypes(), unsigned_integer_dtypes(), floating_dtypes(), complex_number_dtypes(), datetime64_dtypes(), 
timedelta64_dtypes()) def defines_dtype_strategy(strat): @st.defines_strategy @proxies(strat) def inner(*args, **kwargs): return strat(*args, **kwargs).map(np.dtype) return inner @defines_dtype_strategy def boolean_dtypes(): return st.just('?') def dtype_factory(kind, sizes, valid_sizes, endianness): # Utility function, shared logic for most integer and string types valid_endian = ('?', '<', '=', '>') check_argument(endianness in valid_endian, u'Unknown endianness: was {}, must be in {}', endianness, valid_endian) if valid_sizes is not None: if isinstance(sizes, int): sizes = (sizes,) check_argument(sizes, 'Dtype must have at least one possible size.') check_argument(all(s in valid_sizes for s in sizes), u'Invalid sizes: was {} must be an item or sequence ' u'in {}', sizes, valid_sizes) if all(isinstance(s, int) for s in sizes): sizes = sorted(set(s // 8 for s in sizes)) strat = st.sampled_from(sizes) if '{}' not in kind: kind += '{}' if endianness == '?': return strat.map(('<' + kind).format) | strat.map(('>' + kind).format) return strat.map((endianness + kind).format) @defines_dtype_strategy def unsigned_integer_dtypes(endianness='?', sizes=(8, 16, 32, 64)): """Return a strategy for unsigned integer dtypes. endianness may be ``<`` for little-endian, ``>`` for big-endian, ``=`` for native byte order, or ``?`` to allow either byte order. This argument only applies to dtypes of more than one byte. sizes must be a collection of integer sizes in bits. The default (8, 16, 32, 64) covers the full range of sizes. """ return dtype_factory('u', sizes, (8, 16, 32, 64), endianness) @defines_dtype_strategy def integer_dtypes(endianness='?', sizes=(8, 16, 32, 64)): """Return a strategy for signed integer dtypes. endianness and sizes are treated as for :func:`unsigned_integer_dtypes`. """ return dtype_factory('i', sizes, (8, 16, 32, 64), endianness) @defines_dtype_strategy def floating_dtypes(endianness='?', sizes=(16, 32, 64)): """Return a strategy for floating-point dtypes. 
sizes is the size in bits of a floating-point number. Larger floats (96- and 128-bit) are supported on some platforms but are not generated by default; to generate these dtypes, include those values in the sizes argument. """ return dtype_factory('f', sizes, (16, 32, 64, 96, 128), endianness) @defines_dtype_strategy def complex_number_dtypes(endianness='?', sizes=(64, 128)): """Return a strategy for complex-number dtypes. sizes is the total size in bits of a complex number, which consists of two floats. Complex halves (a 16-bit real part) are not supported by numpy and will not be generated by this strategy. """ return dtype_factory('c', sizes, (64, 128, 192, 256), endianness) @check_function def validate_time_slice(max_period, min_period): check_argument(max_period in TIME_RESOLUTIONS, u'max_period {} must be a valid resolution in {}', max_period, TIME_RESOLUTIONS) check_argument(min_period in TIME_RESOLUTIONS, u'min_period {} must be a valid resolution in {}', min_period, TIME_RESOLUTIONS) start = TIME_RESOLUTIONS.index(max_period) end = TIME_RESOLUTIONS.index(min_period) + 1 check_argument(start < end, u'max_period {} must be earlier in sequence {} than ' u'min_period {}', max_period, TIME_RESOLUTIONS, min_period) return TIME_RESOLUTIONS[start:end] @defines_dtype_strategy def datetime64_dtypes(max_period='Y', min_period='ns', endianness='?'): """Return a strategy for datetime64 dtypes, with various precisions from year to attosecond.""" return dtype_factory('datetime64[{}]', validate_time_slice(max_period, min_period), TIME_RESOLUTIONS, endianness) @defines_dtype_strategy def timedelta64_dtypes(max_period='Y', min_period='ns', endianness='?'): """Return a strategy for timedelta64 dtypes, with various precisions from year to attosecond.""" return dtype_factory('timedelta64[{}]', validate_time_slice(max_period, min_period), TIME_RESOLUTIONS, endianness)
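As a quick check of how these factories compose, the resolution slicing and the ``'datetime64[{}]'`` template can be reproduced with numpy directly. ``RESOLUTIONS`` below is a hypothetical stand-in for the module's ``TIME_RESOLUTIONS`` constant (defined elsewhere in this file) and only covers the common units:

```python
import numpy as np

# Hypothetical stand-in for TIME_RESOLUTIONS, ordered coarsest to finest.
RESOLUTIONS = ('Y', 'M', 'D', 'h', 'm', 's', 'ms', 'us', 'ns')

def slice_resolutions(max_period, min_period):
    # Mirrors validate_time_slice: keep units from max_period (coarsest
    # allowed) down to min_period (finest allowed), inclusive.
    start = RESOLUTIONS.index(max_period)
    end = RESOLUTIONS.index(min_period) + 1
    assert start < end
    return RESOLUTIONS[start:end]

# dtype_factory then formats each unit into the 'datetime64[{}]' template.
dtypes = [np.dtype('datetime64[{}]'.format(r)) for r in slice_resolutions('D', 's')]
assert [str(d) for d in dtypes] == [
    'datetime64[D]', 'datetime64[h]', 'datetime64[m]', 'datetime64[s]']
```

The same slice feeds ``timedelta64_dtypes``, only through the ``'timedelta64[{}]'`` template.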
@defines_dtype_strategy def byte_string_dtypes(endianness='?', min_len=0, max_len=16): """Return a strategy for generating bytestring dtypes, of various lengths and byteorder.""" order_check('len', 0, min_len, max_len) return dtype_factory('S', list(range(min_len, max_len + 1)), None, endianness) @defines_dtype_strategy def unicode_string_dtypes(endianness='?', min_len=0, max_len=16): """Return a strategy for generating unicode string dtypes, of various lengths and byteorder.""" order_check('len', 0, min_len, max_len) return dtype_factory('U', list(range(min_len, max_len + 1)), None, endianness) @defines_dtype_strategy def array_dtypes(subtype_strategy=scalar_dtypes(), min_size=1, max_size=5, allow_subarrays=False): """Return a strategy for generating array (compound) dtypes, with members drawn from the given subtype strategy.""" order_check('size', 0, min_size, max_size) native_strings = st.text if text_type is str else st.binary elements = st.tuples(native_strings(), subtype_strategy) if allow_subarrays: elements |= st.tuples(native_strings(), subtype_strategy, array_shapes(max_dims=2, max_side=2)) return st.lists(elements=elements, min_size=min_size, max_size=max_size, unique_by=lambda d: d[0]) @st.defines_strategy def nested_dtypes(subtype_strategy=scalar_dtypes(), max_leaves=10, max_itemsize=None): """Return the most-general dtype strategy. Elements drawn from this strategy may be simple (from the subtype_strategy), or several such values drawn from :func:`array_dtypes` with ``allow_subarrays=True``. Subdtypes in an array dtype may be nested to any depth, subject to the max_leaves argument. 
""" return st.recursive(subtype_strategy, lambda x: array_dtypes(x, allow_subarrays=True), max_leaves).filter( lambda d: max_itemsize is None or d.itemsize <= max_itemsize) hypothesis-python-3.44.1/src/hypothesis/extra/pandas/000077500000000000000000000000001321557765100227145ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/extra/pandas/__init__.py000066400000000000000000000016271321557765100250330ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.extra.pandas.impl import series, data_frames, column, columns,\ indexes, range_indexes __all__ = [ 'indexes', 'range_indexes', 'series', 'column', 'columns', 'data_frames', ] hypothesis-python-3.44.1/src/hypothesis/extra/pandas/impl.py000066400000000000000000000554401321557765100242370ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 
2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from copy import copy from collections import Iterable, OrderedDict import attr import numpy as np import pandas import hypothesis.strategies as st import hypothesis.extra.numpy as npst import hypothesis.internal.conjecture.utils as cu from hypothesis.errors import InvalidArgument from hypothesis.control import reject from hypothesis.internal.compat import hrange from hypothesis.internal.coverage import check, check_function from hypothesis.internal.validation import check_type, try_convert, \ check_strategy, check_valid_size, check_valid_interval try: from pandas.api.types import is_categorical_dtype except ImportError: # pragma: no cover def is_categorical_dtype(dt): if isinstance(dt, np.dtype): return False return dt == 'category' def dtype_for_elements_strategy(s): return st.shared( s.map(lambda x: pandas.Series([x]).dtype), key=('hypothesis.extra.pandas.dtype_for_elements_strategy', s), ) def infer_dtype_if_necessary(dtype, values, elements, draw): if dtype is None and not values: return draw(dtype_for_elements_strategy(elements)) return dtype @check_function def elements_and_dtype(elements, dtype, source=None): if source is None: prefix = '' else: prefix = '%s.' 
% (source,) if elements is not None: check_strategy(elements, '%selements' % (prefix,)) else: with check('dtype is not None'): if dtype is None: raise InvalidArgument(( 'At least one of %(prefix)selements or %(prefix)sdtype ' 'must be provided.') % {'prefix': prefix}) with check('is_categorical_dtype'): if is_categorical_dtype(dtype): raise InvalidArgument( '%sdtype is categorical, which is currently unsupported' % ( prefix, )) dtype = try_convert(np.dtype, dtype, 'dtype') if elements is None: elements = npst.from_dtype(dtype) elif dtype is not None: def convert_element(value): name = 'draw(%selements)' % (prefix,) try: return np.array([value], dtype=dtype)[0] except TypeError: raise InvalidArgument( 'Cannot convert %s=%r of type %s to dtype %s' % ( name, value, type(value).__name__, dtype.str ) ) except ValueError: raise InvalidArgument( 'Cannot convert %s=%r to type %s' % ( name, value, dtype.str, ) ) elements = elements.map(convert_element) assert elements is not None return elements, dtype class ValueIndexStrategy(st.SearchStrategy): def __init__(self, elements, dtype, min_size, max_size, unique): super(ValueIndexStrategy, self).__init__() self.elements = elements self.dtype = dtype self.min_size = min_size self.max_size = max_size self.unique = unique def do_draw(self, data): result = [] seen = set() iterator = cu.many( data, min_size=self.min_size, max_size=self.max_size, average_size=(self.min_size + self.max_size) / 2 ) while iterator.more(): elt = data.draw(self.elements) if self.unique: if elt in seen: iterator.reject() continue seen.add(elt) result.append(elt) dtype = infer_dtype_if_necessary( dtype=self.dtype, values=result, elements=self.elements, draw=data.draw ) return pandas.Index(result, dtype=dtype, tupleize_cols=False) DEFAULT_MAX_SIZE = 10 @st.cacheable @st.defines_strategy def range_indexes(min_size=0, max_size=None): """Provides a strategy which generates an :class:`~pandas.Index` whose values are 0, 1, ..., n for some n. 
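The ``convert_element`` trick above — coercing each drawn value through a length-one array — is easy to see in isolation. A minimal sketch (the ``coerce`` name is invented for the example; the real code wraps the failures in ``InvalidArgument``):

```python
import numpy as np

def coerce(value, dtype):
    # Round-trip through a 1-element array, exactly as convert_element does;
    # numpy raises TypeError/ValueError for values the dtype cannot hold.
    return np.array([value], dtype=dtype)[0]

x = coerce(7, np.dtype('float32'))
assert x == np.float32(7.0)

failed = False
try:
    coerce('not a number', np.dtype('int32'))
except (TypeError, ValueError):
    failed = True
assert failed
```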
Arguments: * min_size is the smallest number of elements the index can have. * max_size is the largest number of elements the index can have. If None it will default to some suitable value based on min_size. """ check_valid_size(min_size, 'min_size') check_valid_size(max_size, 'max_size') if max_size is None: max_size = min([min_size + DEFAULT_MAX_SIZE, 2 ** 63 - 1]) check_valid_interval(min_size, max_size, 'min_size', 'max_size') return st.integers(min_size, max_size).map(pandas.RangeIndex) @st.cacheable @st.defines_strategy def indexes( elements=None, dtype=None, min_size=0, max_size=None, unique=True, ): """Provides a strategy for producing a :class:`pandas.Index`. Arguments: * elements is a strategy which will be used to generate the individual values of the index. If None, it will be inferred from the dtype. Note: even if the elements strategy produces tuples, the generated value will not be a MultiIndex, but instead be a normal index whose elements are tuples. * dtype is the dtype of the resulting index. If None, it will be inferred from the elements strategy. At least one of dtype or elements must be provided. * min_size is the minimum number of elements in the index. * max_size is the maximum number of elements in the index. If None then it will default to a suitable small size. If you want larger indexes you should pass a max_size explicitly. * unique specifies whether all of the elements in the resulting index should be distinct. """ check_valid_size(min_size, 'min_size') check_valid_size(max_size, 'max_size') check_valid_interval(min_size, max_size, 'min_size', 'max_size') check_type(bool, unique, 'unique') elements, dtype = elements_and_dtype(elements, dtype) if max_size is None: max_size = min_size + DEFAULT_MAX_SIZE return ValueIndexStrategy( elements, dtype, min_size, max_size, unique) @st.defines_strategy def series(elements=None, dtype=None, index=None, fill=None, unique=False): """Provides a strategy for producing a :class:`pandas.Series`. 
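For the index strategies above, note that ``range_indexes`` simply draws an integer size ``n`` and maps it through ``pandas.RangeIndex``. A minimal check of that mapping (assumes pandas is installed; the sizes used are arbitrary example draws):

```python
import pandas as pd

# What range_indexes produces once the size n has been drawn:
# RangeIndex(n), i.e. the values 0, 1, ..., n - 1.
idx = pd.RangeIndex(5)
assert list(idx) == [0, 1, 2, 3, 4]
assert idx.is_unique
assert len(pd.RangeIndex(0)) == 0  # min_size=0 permits an empty index
```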
Arguments: * elements: a strategy that will be used to generate the individual values in the series. If None, we will attempt to infer a suitable default from the dtype. * dtype: the dtype of the resulting series and may be any value that can be passed to :class:`numpy.dtype`. If None, will use pandas's standard behaviour to infer it from the type of the elements values. Note that if the type of values that comes out of your elements strategy varies, then so will the resulting dtype of the series. * index: If not None, a strategy for generating indexes for the resulting Series. This can generate either :class:`pandas.Index` objects or any sequence of values (which will be passed to the Index constructor). You will probably find it most convenient to use the :func:`~hypothesis.extra.pandas.indexes` or :func:`~hypothesis.extra.pandas.range_indexes` function to produce values for this argument. Usage: .. code-block:: pycon >>> series(dtype=int).example() 0 -2001747478 1 1153062837 """ if index is None: index = range_indexes() else: check_strategy(index) elements, dtype = elements_and_dtype(elements, dtype) index_strategy = index @st.composite def result(draw): index = draw(index_strategy) if len(index) > 0: if dtype is not None: result_data = draw(npst.arrays( dtype=dtype, elements=elements, shape=len(index), fill=fill, unique=unique, )) else: result_data = list(draw(npst.arrays( dtype=object, elements=elements, shape=len(index), fill=fill, unique=unique, ))) return pandas.Series( result_data, index=index, dtype=dtype ) else: return pandas.Series( (), index=index, dtype=dtype if dtype is not None else draw( dtype_for_elements_strategy(elements))) return result() @attr.s(slots=True) class column(object): """Data object for describing a column in a DataFrame. Arguments: * name: the column name, or None to default to the column position. Must be hashable, but can otherwise be any value supported as a pandas column name. 
* elements: the strategy for generating values in this column, or None to infer it from the dtype. * dtype: the dtype of the column, or None to infer it from the element strategy. At least one of dtype or elements must be provided. * fill: A default value for elements of the column. See :func:`~hypothesis.extra.numpy.arrays` for a full explanation. * unique: If all values in this column should be distinct. """ name = attr.ib(default=None) elements = attr.ib(default=None) dtype = attr.ib(default=None) fill = attr.ib(default=None) unique = attr.ib(default=False) def columns( names_or_number, dtype=None, elements=None, fill=None, unique=False ): """A convenience function for producing a list of :class:`column` objects of the same general shape. The names_or_number argument is either a sequence of values, the elements of which will be used as the name for individual column objects, or a number, in which case that many unnamed columns will be created. All other arguments are passed through verbatim to create the columns. """ try: names = list(names_or_number) except TypeError: names = [None] * names_or_number return [ column( name=n, dtype=dtype, elements=elements, fill=fill, unique=unique ) for n in names ] @st.defines_strategy def data_frames( columns=None, rows=None, index=None ): """Provides a strategy for producing a :class:`pandas.DataFrame`. Arguments: * columns: An iterable of :class:`column` objects describing the shape of the generated DataFrame. * rows: A strategy for generating a row object. Should generate either dicts mapping column names to values or a sequence mapping column position to the value in that position (note that unlike the :class:`pandas.DataFrame` constructor, single values are not allowed here. Passing e.g. an integer is an error, even if there is only one column). At least one of rows and columns must be provided. 
If both are provided then the generated rows will be validated against the columns and an error will be raised if they don't match. Caveats on using rows: * In general you should prefer using columns to rows, and only use rows if the columns interface is insufficiently flexible to describe what you need - you will get better performance and example quality that way. * If you provide rows and not columns, then the shape and dtype of the resulting DataFrame may vary. e.g. if you have a mix of int and float in the values for one column in your row entries, the column will sometimes have an integral dtype and sometimes a float. * index: If not None, a strategy for generating indexes for the resulting DataFrame. This can generate either :class:`pandas.Index` objects or any sequence of values (which will be passed to the Index constructor). You will probably find it most convenient to use the :func:`~hypothesis.extra.pandas.indexes` or :func:`~hypothesis.extra.pandas.range_indexes` function to produce values for this argument. Usage: The expected usage pattern is that you use :class:`column` and :func:`columns` to specify a fixed shape of the DataFrame you want as follows. For example the following gives a two column data frame: .. code-block:: pycon >>> from hypothesis.extra.pandas import column, data_frames >>> data_frames([ ... column('A', dtype=int), column('B', dtype=float)]).example() A B 0 2021915903 1.793898e+232 1 1146643993 inf 2 -2096165693 1.000000e+07 If you want the values in different columns to interact in some way you can use the rows argument. For example the following gives a two column DataFrame where the value in the first column is always at most the value in the second: .. code-block:: pycon >>> from hypothesis.extra.pandas import column, data_frames >>> import hypothesis.strategies as st >>> data_frames( ... rows=st.tuples(st.floats(allow_nan=False), ... st.floats(allow_nan=False)).map(sorted) ... 
).example() 0 1 0 -3.402823e+38 9.007199e+15 1 -1.562796e-298 5.000000e-01 You can also combine the two: .. code-block:: pycon >>> from hypothesis.extra.pandas import columns, data_frames >>> import hypothesis.strategies as st >>> data_frames( ... columns=columns(["lo", "hi"], dtype=float), ... rows=st.tuples(st.floats(allow_nan=False), ... st.floats(allow_nan=False)).map(sorted) ... ).example() lo hi 0 9.314723e-49 4.353037e+45 1 -9.999900e-01 1.000000e+07 2 -2.152861e+134 -1.069317e-73 (Note that the column dtype must still be specified and will not be inferred from the rows. This restriction may be lifted in future). Combining rows and columns has the following behaviour: * The column names and dtypes will be used. * If the column is required to be unique, this will be enforced. * Any values missing from the generated rows will be provided using the column's fill. * Any values in the row not present in the column specification (if dicts are passed, if there are keys with no corresponding column name, if sequences are passed if there are too many items) will result in InvalidArgument being raised. """ if index is None: index = range_indexes() else: check_strategy(index) index_strategy = index if columns is None: if rows is None: raise InvalidArgument( 'At least one of rows and columns must be provided' ) else: @st.composite def rows_only(draw): index = draw(index_strategy) @check_function def row(): result = draw(rows) check_type(Iterable, result, 'draw(row)') return result if len(index) > 0: return pandas.DataFrame( [row() for _ in index], index=index ) else: # If we haven't drawn any rows we need to draw one row and # then discard it so that we get a consistent shape for the # DataFrame. 
base = pandas.DataFrame([row()]) return base.drop(0) return rows_only() assert columns is not None columns = try_convert(tuple, columns, 'columns') rewritten_columns = [] column_names = set() for i, c in enumerate(columns): check_type(column, c, 'columns[%d]' % (i,)) c = copy(c) if c.name is None: label = 'columns[%d]' % (i,) c.name = i else: label = c.name try: hash(c.name) except TypeError: raise InvalidArgument( 'Column names must be hashable, but columns[%d].name was ' '%r of type %s, which cannot be hashed.' % ( i, c.name, type(c.name).__name__,)) if c.name in column_names: raise InvalidArgument( 'duplicate definition of column name %r' % (c.name,)) column_names.add(c.name) c.elements, c.dtype = elements_and_dtype( c.elements, c.dtype, label ) if c.dtype is None and rows is not None: raise InvalidArgument( 'Must specify a dtype for all columns when combining rows with' ' columns.' ) c.fill = npst.fill_for( fill=c.fill, elements=c.elements, unique=c.unique, name=label ) rewritten_columns.append(c) if rows is None: @st.composite def just_draw_columns(draw): index = draw(index_strategy) local_index_strategy = st.just(index) data = OrderedDict((c.name, None) for c in rewritten_columns) # Depending on how the columns are going to be generated we group # them differently to get better shrinking. For columns with fill # enabled, the elements can be shrunk independently of the size, # so we can just shrink by shrinking the index then shrinking the # length and are generally much more free to move data around. # For columns with no filling the problem is harder, and drawing # them like that would result in rows being very far apart from # each other in the underlying data stream, which gets in the way # of shrinking. So what we do is reorder and draw those columns # row wise, so that the values of each row are next to each other. # This makes life easier for the shrinker when deleting blocks of # data. 
columns_without_fill = [ c for c in rewritten_columns if c.fill.is_empty] if columns_without_fill: for c in columns_without_fill: data[c.name] = pandas.Series( np.zeros(shape=len(index), dtype=c.dtype), index=index, ) seen = { c.name: set() for c in columns_without_fill if c.unique} for i in hrange(len(index)): for c in columns_without_fill: if c.unique: for _ in range(5): value = draw(c.elements) if value not in seen[c.name]: seen[c.name].add(value) break else: reject() else: value = draw(c.elements) data[c.name][i] = value for c in rewritten_columns: if not c.fill.is_empty: data[c.name] = draw(series( index=local_index_strategy, dtype=c.dtype, elements=c.elements, fill=c.fill, unique=c.unique)) return pandas.DataFrame(data, index=index) return just_draw_columns() else: @st.composite def assign_rows(draw): index = draw(index_strategy) result = pandas.DataFrame(OrderedDict( (c.name, pandas.Series( np.zeros(dtype=c.dtype, shape=len(index)), dtype=c.dtype)) for c in rewritten_columns ), index=index) fills = {} any_unique = any(c.unique for c in rewritten_columns) if any_unique: all_seen = [ set() if c.unique else None for c in rewritten_columns] while all_seen[-1] is None: all_seen.pop() for row_index in hrange(len(index)): for _ in hrange(5): original_row = draw(rows) row = original_row if isinstance(row, dict): as_list = [None] * len(rewritten_columns) for i, c in enumerate(rewritten_columns): try: as_list[i] = row[c.name] except KeyError: try: as_list[i] = fills[i] except KeyError: fills[i] = draw(c.fill) as_list[i] = fills[i] for k in row: if k not in column_names: raise InvalidArgument(( 'Row %r contains column %r not in ' 'columns %r)' % ( row, k, [ c.name for c in rewritten_columns ]))) row = as_list if any_unique: has_duplicate = False for seen, value in zip(all_seen, row): if seen is None: continue if value in seen: has_duplicate = True break seen.add(value) if has_duplicate: continue row = list(try_convert(tuple, row, 'draw(rows)')) if len(row) > 
len(rewritten_columns): raise InvalidArgument(( 'Row %r contains too many entries. Has %d but ' 'expected at most %d') % ( original_row, len(row), len(rewritten_columns) )) while len(row) < len(rewritten_columns): row.append(draw(rewritten_columns[len(row)].fill)) result.iloc[row_index] = row break else: reject() return result return assign_rows() hypothesis-python-3.44.1/src/hypothesis/extra/pytestplugin.py000066400000000000000000000116071321557765100245740ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.core as core from hypothesis.reporting import default as default_reporter from hypothesis.reporting import with_reporter from hypothesis.statistics import collector from hypothesis.internal.compat import OrderedDict, text_type from hypothesis.internal.detection import is_hypothesis_test LOAD_PROFILE_OPTION = '--hypothesis-profile' PRINT_STATISTICS_OPTION = '--hypothesis-show-statistics' SEED_OPTION = '--hypothesis-seed' class StoringReporter(object): def __init__(self, config): self.config = config self.results = [] def __call__(self, msg): if self.config.getoption('capture', 'fd') == 'no': default_reporter(msg) if not isinstance(msg, text_type): msg = repr(msg) self.results.append(msg) def pytest_addoption(parser): group = parser.getgroup('hypothesis', 'Hypothesis') group.addoption( LOAD_PROFILE_OPTION, action='store', help='Load in a registered hypothesis.settings profile' ) group.addoption( PRINT_STATISTICS_OPTION, action='store_true', help='Configure when statistics are printed', default=False ) group.addoption( SEED_OPTION, action='store', help='Set a seed to use for all Hypothesis tests' ) def pytest_configure(config): core.running_under_pytest = True from hypothesis import settings profile = config.getoption(LOAD_PROFILE_OPTION) if profile: settings.load_profile(profile) seed = config.getoption(SEED_OPTION) if seed is not None: try: seed = int(seed) except ValueError: pass core.global_force_seed = seed config.addinivalue_line( 'markers', 'hypothesis: Tests which use hypothesis.') gathered_statistics = OrderedDict() @pytest.mark.hookwrapper def pytest_runtest_call(item): if not (hasattr(item, 'obj') and is_hypothesis_test(item.obj)): yield else: store = StoringReporter(item.config) def note_statistics(stats): gathered_statistics[item.nodeid] = stats with collector.with_value(note_statistics): with with_reporter(store): yield if store.results: 
item.hypothesis_report_information = list(store.results) @pytest.mark.hookwrapper def pytest_runtest_makereport(item, call): report = (yield).get_result() if hasattr(item, 'hypothesis_report_information'): report.sections.append(( 'Hypothesis', '\n'.join(item.hypothesis_report_information) )) def pytest_terminal_summary(terminalreporter): if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION): return terminalreporter.section('Hypothesis Statistics') for name, statistics in gathered_statistics.items(): terminalreporter.write_line(name + ':') terminalreporter.write_line('') if not statistics.has_runs: terminalreporter.write_line(' - Test was never run') continue terminalreporter.write_line(( ' - %d passing examples, %d failing examples,' ' %d invalid examples') % ( statistics.passing_examples, statistics.failing_examples, statistics.invalid_examples, )) terminalreporter.write_line( ' - Typical runtimes: %s' % (statistics.runtimes,) ) terminalreporter.write_line( ' - Fraction of time spent in data generation: %s' % ( statistics.draw_time_percentage,)) terminalreporter.write_line( ' - Stopped because %s' % (statistics.exit_reason,) ) if statistics.events: terminalreporter.write_line(' - Events:') for event in statistics.events: terminalreporter.write_line( ' * %s' % (event,) ) terminalreporter.write_line('') def pytest_collection_modifyitems(items): for item in items: if not isinstance(item, pytest.Function): continue if getattr(item.function, 'is_hypothesis_test', False): item.add_marker('hypothesis') def load(): pass hypothesis-python-3.44.1/src/hypothesis/extra/pytz.py000066400000000000000000000036431321557765100230340ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """This module provides ``pytz`` timezones. You can use this strategy to make :py:func:`hypothesis.strategies.datetimes` and :py:func:`hypothesis.strategies.times` produce timezone-aware values. """ from __future__ import division, print_function, absolute_import import datetime as dt import pytz import hypothesis.strategies as st __all__ = ['timezones'] @st.cacheable @st.defines_strategy def timezones(): """Any timezone in the Olsen database, as a pytz tzinfo object. This strategy minimises to UTC, or the smallest possible fixed offset, and is designed for use with :py:func:`hypothesis.strategies.datetimes`. """ all_timezones = [pytz.timezone(tz) for tz in pytz.all_timezones] # Some timezones have always had a constant offset from UTC. This makes # them simpler than timezones with daylight savings, and the smaller the # absolute offset the simpler they are. Of course, UTC is even simpler! static = [pytz.UTC] + sorted( (t for t in all_timezones if isinstance(t, pytz.tzfile.StaticTzInfo)), key=lambda tz: abs(tz.utcoffset(dt.datetime(2000, 1, 1))) ) # Timezones which have changed UTC offset; best ordered by name. 
dynamic = [tz for tz in all_timezones if tz not in static] return st.sampled_from(static + dynamic)
hypothesis-python-3.44.1/src/hypothesis/internal/__init__.py
# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER
hypothesis-python-3.44.1/src/hypothesis/internal/cache.py
# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import attr @attr.s(slots=True) class Entry(object): key = attr.ib() value = attr.ib() score = attr.ib() class GenericCache(object): """Generic supertype for cache implementations. Defines a dict-like mapping with a maximum size, where as well as mapping to a value, each key also maps to a score. When a write would cause the dict to exceed its maximum size, it first evicts the existing key with the smallest score, then adds the new key to the map. A key has the following lifecycle: 1. key is written for the first time, the key is given the score self.new_entry(key, value) 2. whenever an existing key is read or written, self.on_access(key, value, score) is called. This returns a new score for the key. 3. When a key is evicted, self.on_evict(key, value, score) is called. The cache will be in a valid state in all of these cases. Implementations are expected to implement new_entry and optionally on_access and on_evict to implement a specific scoring strategy. """ __slots__ = ('keys_to_indices', 'data', 'max_size') def __init__(self, max_size): self.max_size = max_size # Implementation: We store a binary heap of Entry objects in self.data, # with the heap property requiring that a parent's score is <= that of # its children. keys_to_index then maps keys to their index in the # heap. We keep these two in sync automatically - the heap is never # reordered without updating the index. 
self.keys_to_indices = {} self.data = [] def __len__(self): assert len(self.keys_to_indices) == len(self.data) return len(self.data) def __getitem__(self, key): i = self.keys_to_indices[key] result = self.data[i] self.on_access(result.key, result.value, result.score) self.__balance(i) return result.value def __setitem__(self, key, value): if self.max_size == 0: return evicted = None try: i = self.keys_to_indices[key] except KeyError: entry = Entry(key, value, self.new_entry(key, value)) if len(self.data) >= self.max_size: evicted = self.data[0] del self.keys_to_indices[evicted.key] i = 0 self.data[0] = entry else: i = len(self.data) self.data.append(entry) self.keys_to_indices[key] = i else: entry = self.data[i] assert entry.key == key entry.value = value entry.score = self.on_access(entry.key, entry.value, entry.score) self.__balance(i) if evicted is not None: if self.data[0] is not entry: assert evicted.score <= self.data[0].score self.on_evict(evicted.key, evicted.value, evicted.score) def clear(self): del self.data[:] self.keys_to_indices.clear() def __repr__(self): return '{%s}' % (', '.join( '%r: %r' % (e.key, e.value) for e in self.data),) def new_entry(self, key, value): """Called when a key is written that does not currently appear in the map. Returns the score to associate with the key. """ raise NotImplementedError() def on_access(self, key, value, score): """Called every time a key that is already in the map is read or written. Returns the new score for the key. """ return score def on_evict(self, key, value, score): """Called after a key has been evicted, with the score it had had at the point of eviction.""" pass def check_valid(self): """Debugging method for use in tests. Asserts that all of the cache's invariants hold. When everything is working correctly this should be an expensive no-op. 
""" for i, e in enumerate(self.data): assert self.keys_to_indices[e.key] == i for j in [i * 2 + 1, i * 2 + 2]: if j < len(self.data): assert e.score <= self.data[j].score, self.data def __swap(self, i, j): assert i < j assert self.data[j].score < self.data[i].score self.data[i], self.data[j] = self.data[j], self.data[i] self.keys_to_indices[self.data[i].key] = i self.keys_to_indices[self.data[j].key] = j def __balance(self, i): """When we have made a modification to the heap such that means that the heap property has been violated locally around i but previously held for all other indexes (and no other values have been modified), this fixes the heap so that the heap property holds everywhere.""" while i > 0: parent = (i - 1) // 2 if self.__out_of_order(parent, i): self.__swap(parent, i) i = parent else: break while True: children = [ j for j in (2 * i + 1, 2 * i + 2) if j < len(self.data) ] if len(children) == 2: children.sort(key=lambda j: self.data[j].score) for j in children: if self.__out_of_order(i, j): self.__swap(i, j) i = j break else: break def __out_of_order(self, i, j): """Returns True if the indices i, j are in the wrong order. i must be the parent of j. """ assert i == (j - 1) // 2 return self.data[j].score < self.data[i].score class LRUReusedCache(GenericCache): """The only concrete implementation of GenericCache we use outside of tests currently. Adopts a modified least-frequently used eviction policy: It evicts the key that has been used least recently, but it will always preferentially evict keys that have only ever been accessed once. Among keys that have been accessed more than once, it ignores the number of accesses. This retains most of the benefits of an LRU cache, but adds an element of scan-resistance to the process: If we end up scanning through a large number of keys without reusing them, this does not evict the existing entries in preference for the new ones. 
""" __slots__ = ('__tick',) def __init__(self, max_size, ): super(LRUReusedCache, self).__init__(max_size) self.__tick = 0 def tick(self): self.__tick += 1 return self.__tick def new_entry(self, key, value): return [1, self.tick()] def on_access(self, key, value, score): score[0] = 2 score[1] = self.tick() return score hypothesis-python-3.44.1/src/hypothesis/internal/charmap.py000066400000000000000000000255411321557765100241330ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import sys import gzip import pickle import tempfile import unicodedata from hypothesis.configuration import tmpdir, storage_directory from hypothesis.internal.compat import hunichr def charmap_file(): return os.path.join( storage_directory('unicodedata', unicodedata.unidata_version), 'charmap.pickle.gz' ) _charmap = None def charmap(): """Return a dict that maps a Unicode category, to a tuple of 2-tuples covering the codepoint intervals for characters in that category. >>> charmap()['Co'] ((57344, 63743), (983040, 1048573), (1048576, 1114109)) """ global _charmap # Best-effort caching in the face of missing files and/or unwritable # filesystems is fairly simple: check if loaded, else try loading, # else calculate and try writing the cache. 
if _charmap is None: f = charmap_file() try: with gzip.GzipFile(f, 'rb') as i: _charmap = dict(pickle.load(i)) except Exception: tmp_charmap = {} for i in range(0, sys.maxunicode + 1): cat = unicodedata.category(hunichr(i)) rs = tmp_charmap.setdefault(cat, []) if rs and rs[-1][-1] == i - 1: rs[-1][-1] += 1 else: rs.append([i, i]) _charmap = {k: tuple(tuple(pair) for pair in pairs) for k, pairs in tmp_charmap.items()} try: # Write the Unicode table atomically fd, tmpfile = tempfile.mkstemp(dir=tmpdir()) os.close(fd) # Explicitly set the mtime to get reproducible output with gzip.GzipFile(tmpfile, 'wb', mtime=1) as o: pickle.dump(sorted(_charmap.items()), o, pickle.HIGHEST_PROTOCOL) os.rename(tmpfile, f) except Exception: # pragma: no cover pass assert _charmap is not None return _charmap _categories = None def categories(): """Return a tuple of Unicode categories in a normalised order. >>> categories() # doctest: +ELLIPSIS ('Zl', 'Zp', 'Co', 'Me', 'Pc', ..., 'Cc', 'Cs') """ global _categories if _categories is None: cm = charmap() _categories = sorted( cm.keys(), key=lambda c: len(cm[c]) ) _categories.remove('Cc') # Other, Control _categories.remove('Cs') # Other, Surrogate _categories.append('Cc') _categories.append('Cs') return tuple(_categories) def _union_intervals(x, y): """Merge two sequences of intervals into a single tuple of intervals. Any integer bounded by `x` or `y` is also bounded by the result. >>> _union_intervals([(3, 10)], [(1, 2), (5, 17)]) ((1, 17),) """ if not x: return tuple((u, v) for u, v in y) if not y: return tuple((u, v) for u, v in x) intervals = sorted(x + y, reverse=True) result = [intervals.pop()] while intervals: # 1. intervals is in descending order # 2. pop() takes from the RHS. # 3. (a, b) was popped 1st, then (u, v) was popped 2nd # 4. Therefore: a <= u # 5. We assume that u <= v and a <= b # 6. 
So we need to handle 2 cases of overlap, and one disjoint case # | u--v | u----v | u--v | # | a----b | a--b | a--b | u, v = intervals.pop() a, b = result[-1] if u <= b + 1: # Overlap cases result[-1] = (a, max(v, b)) else: # Disjoint case result.append((u, v)) return tuple(result) def _subtract_intervals(x, y): """Set difference for lists of intervals. That is, returns a list of intervals that bounds all values bounded by x that are not also bounded by y. x and y are expected to be in sorted order. For example _subtract_intervals([(1, 10)], [(2, 3), (9, 15)]) would return [(1, 1), (4, 8)], removing the values 2, 3, 9 and 10 from the interval. """ if not y: return tuple(x) x = list(map(list, x)) i = 0 j = 0 result = [] while i < len(x) and j < len(y): # Iterate in parallel over x and y. j stays pointing at the smallest # interval in the left hand side that could still overlap with some # element of x at index >= i. # Similarly, i is not incremented until we know that it does not # overlap with any element of y at index >= j. xl, xr = x[i] assert xl <= xr yl, yr = y[j] assert yl <= yr if yr < xl: # The interval at y[j] is strictly to the left of the interval at # x[i], so will not overlap with it or any later interval of x. j += 1 elif yl > xr: # The interval at y[j] is strictly to the right of the interval at # x[i], so all of x[i] goes into the result as no further intervals # in y will intersect it. result.append(x[i]) i += 1 elif yl <= xl: if yr >= xr: # x[i] is contained entirely in y[j], so we just skip over it # without adding it to the result. i += 1 else: # The beginning of x[i] is contained in y[j], so we update the # left endpoint of x[i] to remove this, and increment j as we # now have moved past it. Note that this is not added to the # result as is, as more intervals from y may intersect it so it # may need updating further. 
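The coalescing logic of `_union_intervals` can be illustrated on its own: sort all intervals, then fold left to right, merging any interval that starts at most one past the current right endpoint (overlapping or adjacent). This is a standalone sketch with a hypothetical helper name, not the library function itself:

```python
def union_intervals(x, y):
    """Merge two interval sequences into a minimal sorted tuple of
    closed integer intervals covering the same values."""
    intervals = sorted(tuple(i) for i in list(x) + list(y))
    if not intervals:
        return ()
    result = [list(intervals[0])]
    for u, v in intervals[1:]:
        a, b = result[-1]
        if u <= b + 1:           # overlapping or adjacent: coalesce
            result[-1][1] = max(b, v)
        else:                    # disjoint: start a new interval
            result.append([u, v])
    return tuple(map(tuple, result))


assert union_intervals([(3, 10)], [(1, 2), (5, 17)]) == ((1, 17),)
assert union_intervals([(1, 2)], [(4, 5)]) == ((1, 2), (4, 5))
```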
x[i][0] = yr + 1 j += 1 else: # yl > xl, so the left hand part of x[i] is not contained in y[j], # so there are some values we should add to the result. result.append((xl, yl - 1)) if yr + 1 <= xr: # If y[j] finishes before x[i] does, there may be some values # in x[i] left that should go in the result (or they may be # removed by a later interval in y), so we update x[i] to # reflect that and increment j because it no longer overlaps # with any remaining element of x. x[i][0] = yr + 1 j += 1 else: # Every element of x[i] other than the initial part we have # already added is contained in y[j], so we move to the next # interval. i += 1 # Any remaining intervals in x do not overlap with any of y, as if they did # we would not have incremented j to the end, so can be added to the result # as they are. result.extend(x[i:]) return tuple(map(tuple, result)) def _intervals(s): """Return a tuple of intervals, covering the codepoints of characters in `s`. >>> _intervals('abcdef0123456789') ((48, 57), (97, 102)) """ intervals = tuple((ord(c), ord(c)) for c in sorted(s)) return _union_intervals(intervals, intervals) category_index_cache = { (): (), } def _category_key(exclude, include): """Return a normalised tuple of all Unicode categories that are in `include`, but not in `exclude`. If include is None then default to including all categories. Any item in include that is not a unicode character will be excluded. >>> _category_key(exclude=['So'], include=['Lu', 'Me', 'Cs', 'So', 'Xx']) ('Me', 'Lu', 'Cs') """ cs = categories() if include is None: include = set(cs) else: include = set(include) exclude = set(exclude or ()) include -= exclude result = tuple(c for c in cs if c in include) return result def _query_for_key(key): """Return a tuple of codepoint intervals covering characters that match one or more categories in the tuple of categories `key`. 
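The parallel sweep in `_subtract_intervals` is easiest to sanity-check against a brute-force set difference. The hypothetical reference below expands intervals to integer sets, subtracts, and re-packs consecutive runs; it is O(total range) and only suitable for small inputs, but its answers must match the sweep:

```python
def subtract_intervals_brute(x, y):
    """Brute-force interval difference: expand to sets of integers,
    subtract, then re-pack consecutive runs into closed intervals."""
    keep = set()
    for a, b in x:
        keep.update(range(a, b + 1))
    for a, b in y:
        keep.difference_update(range(a, b + 1))
    result = []
    for v in sorted(keep):
        if result and result[-1][1] == v - 1:
            result[-1][1] = v          # extend the current run
        else:
            result.append([v, v])     # start a new run
    return tuple(map(tuple, result))


# Matches the worked example from the _subtract_intervals docstring:
assert subtract_intervals_brute([(1, 10)], [(2, 3), (9, 15)]) == \
    ((1, 1), (4, 8))
```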
    >>> _query_for_key(categories())
    ((0, 1114111),)
    >>> _query_for_key(('Zl', 'Zp', 'Co'))
    ((8232, 8233), (57344, 63743), (983040, 1048573), (1048576, 1114109))
    """
    try:
        return category_index_cache[key]
    except KeyError:
        pass
    assert key
    if set(key) == set(categories()):
        result = ((0, sys.maxunicode),)
    else:
        result = _union_intervals(
            _query_for_key(key[:-1]),
            charmap()[key[-1]]
        )
    category_index_cache[key] = result
    return result


limited_category_index_cache = {}


def query(
    exclude_categories=(), include_categories=None,
    min_codepoint=None, max_codepoint=None,
    include_characters='', exclude_characters='',
):
    """Return a tuple of intervals covering the codepoints for all characters
    that meet the criteria (min_codepoint <= codepoint(c) <= max_codepoint and
    any(cat in include_categories for cat in categories(c)) and
    all(cat not in exclude_categories for cat in categories(c))
    or (c in include_characters)

    >>> query()
    ((0, 1114111),)
    >>> query(min_codepoint=0, max_codepoint=128)
    ((0, 128),)
    >>> query(min_codepoint=0, max_codepoint=128, include_categories=['Lu'])
    ((65, 90),)
    >>> query(min_codepoint=0, max_codepoint=128, include_categories=['Lu'],
    ...
include_characters=u'☃') ((65, 90), (9731, 9731)) """ if min_codepoint is None: min_codepoint = 0 if max_codepoint is None: max_codepoint = sys.maxunicode catkey = _category_key(exclude_categories, include_categories) character_intervals = _intervals(include_characters or '') exclude_intervals = _intervals(exclude_characters or '') qkey = ( catkey, min_codepoint, max_codepoint, character_intervals, exclude_intervals ) try: return limited_category_index_cache[qkey] except KeyError: pass base = _query_for_key(catkey) result = [] for u, v in base: if v >= min_codepoint and u <= max_codepoint: result.append(( max(u, min_codepoint), min(v, max_codepoint) )) result = tuple(result) result = _union_intervals(result, character_intervals) result = _subtract_intervals(result, exclude_intervals) limited_category_index_cache[qkey] = result return result hypothesis-python-3.44.1/src/hypothesis/internal/compat.py000066400000000000000000000316451321557765100240050ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
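The clamping step inside `query` keeps only the part of each interval that lies inside `[min_codepoint, max_codepoint]`. In isolation (with a hypothetical helper name) it looks like this:

```python
def clamp_intervals(intervals, lo, hi):
    """Restrict a sorted tuple of closed intervals to [lo, hi],
    dropping intervals entirely outside the window."""
    result = []
    for u, v in intervals:
        if v >= lo and u <= hi:  # interval overlaps the window
            result.append((max(u, lo), min(v, hi)))
    return tuple(result)


# Clamp the hex-digit intervals ((48, 57), (97, 102)) to [50, 100]:
assert clamp_intervals(((48, 57), (97, 102)), 50, 100) == \
    ((50, 57), (97, 100))
assert clamp_intervals(((48, 57),), 200, 300) == ()
```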
# # END HEADER # pylint: skip-file from __future__ import division, print_function, absolute_import import re import sys import math import time import codecs import platform import importlib from base64 import b64encode from collections import namedtuple try: from collections import OrderedDict, Counter except ImportError: # pragma: no cover from ordereddict import OrderedDict from counter import Counter PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PYPY = platform.python_implementation() == 'PyPy' CAN_UNPACK_BYTE_ARRAY = sys.version_info[:3] >= (2, 7, 4) WINDOWS = platform.system() == 'Windows' if sys.version_info[:2] <= (2, 6): raise ImportError( 'Hypothesis is not supported on Python versions before 2.7' ) def bit_length(n): return n.bit_length() if PY3: def str_to_bytes(s): return s.encode(a_good_encoding()) def int_to_text(i): return str(i) text_type = str binary_type = bytes hrange = range ARG_NAME_ATTRIBUTE = 'arg' integer_types = (int,) hunichr = chr def unicode_safe_repr(x): return repr(x) def isidentifier(s): return s.isidentifier() def escape_unicode_characters(s): return codecs.encode(s, 'unicode_escape').decode('ascii') def print_unicode(x): print(x) exec(""" def quiet_raise(exc): raise exc from None """) def int_from_bytes(data): return int.from_bytes(data, 'big') def int_to_bytes(i, size): return i.to_bytes(size, 'big') def to_bytes_sequence(ls): return bytes(ls) def int_to_byte(i): return bytes([i]) import struct struct_pack = struct.pack struct_unpack = struct.unpack def benchmark_time(): return time.monotonic() else: import struct def struct_pack(*args): return hbytes(struct.pack(*args)) if CAN_UNPACK_BYTE_ARRAY: def struct_unpack(fmt, string): return struct.unpack(fmt, string) else: def struct_unpack(fmt, string): return struct.unpack(fmt, str(string)) def int_from_bytes(data): assert isinstance(data, bytearray) if CAN_UNPACK_BYTE_ARRAY: unpackable_data = data else: unpackable_data = bytes(data) result = 0 i = 0 while i + 4 <= 
len(data): result <<= 32 result |= struct.unpack('>I', unpackable_data[i:i + 4])[0] i += 4 while i < len(data): result <<= 8 result |= data[i] i += 1 return int(result) def int_to_bytes(i, size): assert i >= 0 result = bytearray(size) j = size - 1 while i and j >= 0: result[j] = i & 255 i >>= 8 j -= 1 if i: raise OverflowError('int too big to convert') return hbytes(result) int_to_byte = chr def to_bytes_sequence(ls): return bytearray(ls) def str_to_bytes(s): return s def int_to_text(i): return str(i).decode('ascii') VALID_PYTHON_IDENTIFIER = re.compile( r"^[a-zA-Z_][a-zA-Z0-9_]*$" ) def isidentifier(s): return VALID_PYTHON_IDENTIFIER.match(s) def unicode_safe_repr(x): r = repr(x) assert isinstance(r, str) return r.decode(a_good_encoding()) text_type = unicode binary_type = str def hrange(start_or_finish, finish=None, step=None): try: if step is None: if finish is None: return xrange(start_or_finish) else: return xrange(start_or_finish, finish) else: return xrange(start_or_finish, finish, step) except OverflowError: if step == 0: raise ValueError(u'step argument may not be zero') if step is None: step = 1 if finish is not None: start = start_or_finish else: start = 0 finish = start_or_finish assert step != 0 if step > 0: def shimrange(): i = start while i < finish: yield i i += step else: def shimrange(): i = start while i > finish: yield i i += step return shimrange() ARG_NAME_ATTRIBUTE = 'id' integer_types = (int, long) hunichr = unichr def escape_unicode_characters(s): return codecs.encode(s, 'string_escape') def print_unicode(x): if isinstance(x, unicode): x = x.encode(a_good_encoding()) print(x) def quiet_raise(exc): raise exc def benchmark_time(): return time.time() # coverage mixes unicode and str filepaths on Python 2, which causes us # problems if we're running under unicodenazi (it might also cause problems # when not running under unicodenazi, but hard to say for sure). 
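The Python 2 fallbacks above rebuild `int.from_bytes` and `int.to_bytes` by hand, consuming the buffer big-endian. On Python 3 the same loops are equivalent to the builtins, which makes them easy to cross-check; the single-byte-at-a-time versions below are simplified sketches (no four-byte fast path):

```python
def int_from_bytes_manual(data):
    """Big-endian bytes -> unsigned int, one byte at a time."""
    result = 0
    for byte in data:
        result = (result << 8) | byte
    return result


def int_to_bytes_manual(i, size):
    """Unsigned int -> big-endian bytes of fixed width `size`."""
    result = bytearray(size)
    for j in range(size - 1, -1, -1):
        result[j] = i & 0xFF
        i >>= 8
    if i:
        raise OverflowError('int too big to convert')
    return bytes(result)


for value in (0, 1, 255, 256, 2 ** 32 + 7):
    packed = int_to_bytes_manual(value, 8)
    assert packed == value.to_bytes(8, 'big')  # matches the builtin
    assert int_from_bytes_manual(packed) == value
    assert int.from_bytes(packed, 'big') == value
```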
This method # exists to work around that: If we're given a unicode filepath, we turn it # into a string file path using the appropriate encoding. See # https://bitbucket.org/ned/coveragepy/issues/602/ for more information. if PY2: def encoded_filepath(filepath): if isinstance(filepath, text_type): return filepath.encode(sys.getfilesystemencoding()) else: return filepath else: def encoded_filepath(filepath): return filepath def a_good_encoding(): return 'utf-8' def to_unicode(x): if isinstance(x, text_type): return x else: return x.decode(a_good_encoding()) def qualname(f): try: return f.__qualname__ except AttributeError: pass try: return f.im_class.__name__ + '.' + f.__name__ except AttributeError: return f.__name__ if PY2: FullArgSpec = namedtuple('FullArgSpec', 'args, varargs, varkw, defaults, ' 'kwonlyargs, kwonlydefaults, annotations') def getfullargspec(func): import inspect args, varargs, varkw, defaults = inspect.getargspec(func) return FullArgSpec(args, varargs, varkw, defaults, [], None, getattr(func, '__annotations__', {})) else: from inspect import getfullargspec, FullArgSpec if sys.version_info[:2] < (3, 6): def get_type_hints(thing): try: spec = getfullargspec(thing) return { k: v for k, v in spec.annotations.items() if k in (spec.args + spec.kwonlyargs) and isinstance(v, type) } except TypeError: return {} else: def get_type_hints(thing): try: import typing return typing.get_type_hints(thing) except TypeError: return {} importlib_invalidate_caches = getattr( importlib, 'invalidate_caches', lambda: ()) if PY2: CODE_FIELD_ORDER = [ 'co_argcount', 'co_nlocals', 'co_stacksize', 'co_flags', 'co_code', 'co_consts', 'co_names', 'co_varnames', 'co_filename', 'co_name', 'co_firstlineno', 'co_lnotab', 'co_freevars', 'co_cellvars', ] else: CODE_FIELD_ORDER = [ 'co_argcount', 'co_kwonlyargcount', 'co_nlocals', 'co_stacksize', 'co_flags', 'co_code', 'co_consts', 'co_names', 'co_varnames', 'co_filename', 'co_name', 'co_firstlineno', 'co_lnotab', 'co_freevars', 
'co_cellvars', ] def update_code_location(code, newfile, newlineno): """Take a code object and lie shamelessly about where it comes from. Why do we want to do this? It's for really shallow reasons involving hiding the hypothesis_temporary_module code from test runners like py.test's verbose mode. This is a vastly disproportionate terrible hack that I've done purely for vanity, and if you're reading this code you're probably here because it's broken something and now you're angry at me. Sorry. """ unpacked = [ getattr(code, name) for name in CODE_FIELD_ORDER ] unpacked[CODE_FIELD_ORDER.index('co_filename')] = newfile unpacked[CODE_FIELD_ORDER.index('co_firstlineno')] = newlineno return type(code)(*unpacked) class compatbytes(bytearray): __name__ = 'bytes' def __init__(self, *args, **kwargs): bytearray.__init__(self, *args, **kwargs) self.__hash = None def __str__(self): return bytearray.__str__(self) def __repr__(self): return 'compatbytes(b%r)' % (str(self),) def __hash__(self): if self.__hash is None: self.__hash = hash(str(self)) return self.__hash def count(self, value): c = 0 for w in self: if w == value: c += 1 return c def index(self, value): for i, v in enumerate(self): if v == value: return i raise ValueError('Value %r not in sequence %r' % (value, self)) def __add__(self, value): assert isinstance(value, compatbytes) return compatbytes(bytearray.__add__(self, value)) def __radd__(self, value): assert isinstance(value, compatbytes) return compatbytes(bytearray.__add__(value, self)) def __mul__(self, value): return compatbytes(bytearray.__mul__(self, value)) def __rmul__(self, value): return compatbytes(bytearray.__rmul__(self, value)) def __getitem__(self, *args, **kwargs): r = bytearray.__getitem__(self, *args, **kwargs) if isinstance(r, bytearray): return compatbytes(r) else: return r __setitem__ = None def join(self, parts): result = bytearray() first = True for p in parts: if not first: result.extend(self) first = False result.extend(p) return 
compatbytes(result) def __contains__(self, value): return any(v == value for v in self) if PY2: hbytes = compatbytes reasonable_byte_type = bytearray string_types = (str, unicode) else: hbytes = bytes reasonable_byte_type = bytes string_types = (str,) EMPTY_BYTES = hbytes(b'') if PY2: def to_str(s): if isinstance(s, unicode): return s.encode(a_good_encoding()) assert isinstance(s, str) return s else: def to_str(s): return s def cast_unicode(s, encoding=None): if isinstance(s, bytes): return s.decode(encoding or a_good_encoding(), 'replace') return s def get_stream_enc(stream, default=None): return getattr(stream, 'encoding', None) or default def implements_iterator(it): """Turn things with a __next__ attribute into iterators on Python 2.""" if PY2 and not hasattr(it, 'next') and hasattr(it, '__next__'): it.next = it.__next__ return it if PY3: FileNotFoundError = FileNotFoundError else: FileNotFoundError = IOError # We need to know what sort of exception gets thrown when you try to write over # an existing file where you're not allowed to. This is rather less consistent # between versions than might be hoped. if PY3: FileExistsError = FileExistsError elif WINDOWS: FileExistsError = WindowsError else: # This doesn't happen in this case: We're not on windows and don't support # the x flag because it's Python 2, so there are no places where this can # be thrown. FileExistsError = None if PY2: # Under Python 2, math.floor and math.ceil return floats, which cannot # represent large integers - eg `float(2**53) == float(2**53 + 1)`. # We therefore implement them entirely in (long) integer operations. 
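The comment above is the reason `compat` carries its own `floor` and `ceil`: `int()` truncates toward zero, so a negative non-integer needs one subtracted for floor, and a positive non-integer needs one added for ceil. The sketch below re-derives that rule and checks it against `math` on exact `Fraction` inputs (where Python 3's `math.floor`/`math.ceil` already return exact ints):

```python
import math
from fractions import Fraction


def int_floor(x):
    """floor() using only integer truncation."""
    if int(x) != x and x < 0:
        return int(x) - 1
    return int(x)


def int_ceil(x):
    """ceil() using only integer truncation."""
    if int(x) != x and x > 0:
        return int(x) + 1
    return int(x)


for x in (Fraction(7, 2), Fraction(-7, 2), Fraction(5), Fraction(-5)):
    assert int_floor(x) == math.floor(x)
    assert int_ceil(x) == math.ceil(x)

assert int_floor(Fraction(-7, 2)) == -4   # int() alone gives -3
assert int_ceil(Fraction(-7, 2)) == -3
```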
def floor(x): if int(x) != x and x < 0: return int(x) - 1 return int(x) def ceil(x): if int(x) != x and x > 0: return int(x) + 1 return int(x) else: floor = math.floor ceil = math.ceil try: from math import gcd except ImportError: from fractions import gcd if PY2: def b64decode(s): from base64 import b64decode as base return hbytes(base(s)) else: from base64 import b64decode hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/000077500000000000000000000000001321557765100243005ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/__init__.py000066400000000000000000000012001321557765100264020ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/data.py000066400000000000000000000200461321557765100255650ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import sys from enum import IntEnum from hypothesis.errors import Frozen, StopTest, InvalidArgument from hypothesis.internal.compat import hbytes, hrange, text_type, \ bit_length, benchmark_time, int_from_bytes, unicode_safe_repr from hypothesis.internal.coverage import IN_COVERAGE_TESTS from hypothesis.internal.escalation import mark_for_escalation class Status(IntEnum): OVERRUN = 0 INVALID = 1 VALID = 2 INTERESTING = 3 global_test_counter = 0 MAX_DEPTH = 100 class ConjectureData(object): @classmethod def for_buffer(self, buffer): buffer = hbytes(buffer) return ConjectureData( max_length=len(buffer), draw_bytes=lambda data, n: hbytes(buffer[data.index:data.index + n]) ) def __init__(self, max_length, draw_bytes): self.max_length = max_length self.is_find = False self._draw_bytes = draw_bytes self.overdraw = 0 self.level = 0 self.block_starts = {} self.blocks = [] self.buffer = bytearray() self.output = u'' self.status = Status.VALID self.frozen = False self.intervals_by_level = [] self.interval_stack = [] global global_test_counter self.testcounter = global_test_counter global_test_counter += 1 self.start_time = benchmark_time() self.events = set() self.forced_indices = set() self.capped_indices = {} self.interesting_origin = None self.tags = set() self.draw_times = [] self.__intervals = None def __assert_not_frozen(self, name): if self.frozen: raise Frozen( 'Cannot call %s on frozen ConjectureData' % ( name,)) def add_tag(self, tag): self.tags.add(tag) @property def depth(self): return len(self.interval_stack) @property def index(self): return len(self.buffer) def note(self, value): self.__assert_not_frozen('note') if not isinstance(value, text_type): value = unicode_safe_repr(value) self.output += value def draw(self, strategy): if self.is_find and not strategy.supports_find: 
raise InvalidArgument(( 'Cannot use strategy %r within a call to find (presumably ' 'because it would be invalid after the call had ended).' ) % (strategy,)) if strategy.is_empty: self.mark_invalid() if self.depth >= MAX_DEPTH: self.mark_invalid() if self.depth == 0 and not IN_COVERAGE_TESTS: # pragma: no cover original_tracer = sys.gettrace() try: sys.settrace(None) return self.__draw(strategy) finally: sys.settrace(original_tracer) else: return self.__draw(strategy) def __draw(self, strategy): at_top_level = self.depth == 0 self.start_example() try: if not at_top_level: return strategy.do_draw(self) else: start_time = benchmark_time() try: return strategy.do_draw(self) except BaseException as e: mark_for_escalation(e) raise finally: self.draw_times.append(benchmark_time() - start_time) finally: if not self.frozen: self.stop_example() def start_example(self): self.__assert_not_frozen('start_example') self.interval_stack.append(self.index) self.level += 1 def stop_example(self): if self.frozen: return self.level -= 1 while self.level >= len(self.intervals_by_level): self.intervals_by_level.append([]) k = self.interval_stack.pop() if k != self.index: t = (k, self.index) self.intervals_by_level[self.level].append(t) def note_event(self, event): self.events.add(event) @property def intervals(self): assert self.frozen if self.__intervals is None: intervals = set(self.blocks) for l in self.intervals_by_level: intervals.update(l) for i in hrange(len(l) - 1): if l[i][1] == l[i + 1][0]: intervals.add((l[i][0], l[i + 1][1])) for i in hrange(len(self.blocks) - 1): intervals.add((self.blocks[i][0], self.blocks[i + 1][1])) # Intervals are sorted as longest first, then by interval start. 
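The ordering just described, longest interval first with ties broken by start position, falls out of a single sort key: a longer interval has a more negative `start - end`. A small demonstration on hypothetical sample intervals:

```python
# Hypothetical sample intervals, each a (start, end) pair.
intervals = {(0, 5), (2, 4), (0, 2), (3, 9)}

# Longer intervals have a more negative (start - end), so they sort
# first; equal lengths fall back to comparing the start position.
ordered = tuple(sorted(intervals, key=lambda se: (se[0] - se[1], se[0])))

assert ordered == ((3, 9), (0, 5), (0, 2), (2, 4))
```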
self.__intervals = tuple(sorted( set(intervals), key=lambda se: (se[0] - se[1], se[0]) )) del self.intervals_by_level return self.__intervals def freeze(self): if self.frozen: assert isinstance(self.buffer, hbytes) return self.frozen = True self.finish_time = benchmark_time() self.buffer = hbytes(self.buffer) self.events = frozenset(self.events) del self._draw_bytes def draw_bits(self, n): self.__assert_not_frozen('draw_bits') if n == 0: result = 0 elif n % 8 == 0: return int_from_bytes(self.draw_bytes(n // 8)) else: n_bytes = (n // 8) + 1 self.__check_capacity(n_bytes) buf = bytearray(self._draw_bytes(self, n_bytes)) assert len(buf) == n_bytes mask = (1 << (n % 8)) - 1 buf[0] &= mask self.capped_indices[self.index] = mask buf = hbytes(buf) self.__write(buf) result = int_from_bytes(buf) assert bit_length(result) <= n return result def write(self, string): self.__assert_not_frozen('write') self.__check_capacity(len(string)) assert isinstance(string, hbytes) original = self.index self.__write(string) self.forced_indices.update(hrange(original, self.index)) return string def __check_capacity(self, n): if self.index + n > self.max_length: self.overdraw = self.index + n - self.max_length self.status = Status.OVERRUN self.freeze() raise StopTest(self.testcounter) def __write(self, result): initial = self.index n = len(result) self.block_starts.setdefault(n, []).append(initial) self.blocks.append((initial, initial + n)) assert len(result) == n assert self.index == initial self.buffer.extend(result) def draw_bytes(self, n): self.__assert_not_frozen('draw_bytes') if n == 0: return hbytes(b'') self.__check_capacity(n) result = self._draw_bytes(self, n) assert len(result) == n self.__write(result) return hbytes(result) def mark_interesting(self, interesting_origin=None): self.__assert_not_frozen('mark_interesting') self.interesting_origin = interesting_origin self.status = Status.INTERESTING self.freeze() raise StopTest(self.testcounter) def mark_invalid(self): 
        self.__assert_not_frozen('mark_invalid')
        self.status = Status.INVALID
        self.freeze()
        raise StopTest(self.testcounter)

hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/engine.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import heapq
from enum import Enum
from random import Random, getrandbits
from weakref import WeakKeyDictionary
from collections import defaultdict

import attr

from hypothesis import settings as Settings
from hypothesis import Phase, HealthCheck
from hypothesis.reporting import debug_report
from hypothesis.internal.compat import EMPTY_BYTES, Counter, ceil, \
    hbytes, hrange, int_to_text, int_to_bytes, benchmark_time, \
    to_bytes_sequence, unicode_safe_repr
from hypothesis.utils.conventions import UniqueIdentifier
from hypothesis.internal.healthcheck import fail_health_check
from hypothesis.internal.conjecture.data import MAX_DEPTH, Status, \
    StopTest, ConjectureData
from hypothesis.internal.conjecture.minimizer import minimize

# Tell pytest to omit the body of this module from tracebacks
# http://doc.pytest.org/en/latest/example/simple.html#writing-well-integrated-assertion-helpers
__tracebackhide__ = True


HUNG_TEST_TIME_LIMIT = 5 * 60


@attr.s
class HealthCheckState(object):
    valid_examples = attr.ib(default=0)
    invalid_examples = attr.ib(default=0)
    overrun_examples = attr.ib(default=0)
    draw_times = attr.ib(default=attr.Factory(list))


class ExitReason(Enum):
    max_examples = 0
    max_iterations = 1
    timeout = 2
    max_shrinks = 3
    finished = 4
    flaky = 5


class RunIsComplete(Exception):
    pass


class ConjectureRunner(object):

    def __init__(
        self, test_function, settings=None, random=None,
        database_key=None,
    ):
        self._test_function = test_function
        self.settings = settings or Settings()
        self.last_data = None
        self.shrinks = 0
        self.call_count = 0
        self.event_call_counts = Counter()
        self.valid_examples = 0
        self.start_time = benchmark_time()
        self.random = random or Random(getrandbits(128))
        self.database_key = database_key
        self.status_runtimes = {}

        self.all_drawtimes = []
        self.all_runtimes = []

        self.events_to_strings = WeakKeyDictionary()

        self.target_selector = TargetSelector(self.random)

        # Tree nodes are stored in an array to prevent heavy nesting of data
        # structures. Branches are dicts mapping bytes to child nodes (which
        # will in general only be partially populated). Leaves are
        # ConjectureData objects that have been previously seen as the result
        # of following that path.
        self.tree = [{}]

        # A node is dead if there is nothing left to explore past that point.
        # Recursively, a node is dead if either it is a leaf or every byte
        # leads to a dead node when starting from here.
        self.dead = set()

        # We rewrite the byte stream at various points during parsing, to one
        # that will produce an equivalent result but is in some sense more
        # canonical. We keep track of these so that when walking the tree we
        # can identify nodes where the exact byte value doesn't matter and
        # treat all bytes there as equivalent. This significantly reduces the
        # size of the search space and removes a lot of redundant examples.

        # Maps tree indices to the unique byte that is valid at that point.
        # Corresponds to data.write() calls.
        self.forced = {}

        # Maps tree indices to the maximum byte that is valid at that point.
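The tree described in these comments is just a list of dicts, each mapping a byte value to a child node index, plus a set of "dead" (fully explored) node indices. A minimal standalone sketch of walking a candidate buffer through such a tree (function and variable names here are illustrative, not Hypothesis internals):

```python
def walk(tree, dead, buffer):
    """Walk ``buffer`` through a tree stored as a list of dicts.

    Returns True if the walk leaves explored territory (the buffer could
    still produce a novel result); False if it stays entirely inside
    nodes we have already exhausted.
    """
    node_index = 0
    for b in buffer:
        if node_index in dead:
            return False
        child = tree[node_index].get(b)
        if child is None:
            return True  # falls off the known tree: worth running
        node_index = child
    return False  # the whole buffer lies inside known territory


# Root branches on byte 0 to node 1, which is fully explored.
tree = [{0: 1}, {}]
dead = {1}
assert walk(tree, dead, bytes([0, 7])) is False
assert walk(tree, dead, bytes([9])) is True
```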
        # Currently this is only used inside draw_bits, but it potentially
        # could get used elsewhere.
        self.capped = {}

        # Where a tree node consists of the beginning of a block we track the
        # size of said block. This allows us to tell when an example is too
        # short even if it goes off the unexplored region of the tree - if it
        # is at the beginning of a block of size 4 but only has 3 bytes left,
        # it's going to overrun the end of the buffer regardless of the
        # buffer contents.
        self.block_sizes = {}

        self.interesting_examples = {}
        self.covering_examples = {}

        self.shrunk_examples = set()

        self.tag_intern_table = {}

        self.health_check_state = None

        self.used_examples_from_database = False

    def __tree_is_exhausted(self):
        return 0 in self.dead

    def test_function(self, data):
        if benchmark_time() - self.start_time >= HUNG_TEST_TIME_LIMIT:
            fail_health_check(self.settings, (
                'Your test has been running for at least five minutes. This '
                'is probably not what you intended, so by default Hypothesis '
                'turns it into an error.'
            ), HealthCheck.hung_test)
        self.call_count += 1
        try:
            self._test_function(data)
            data.freeze()
        except StopTest as e:
            if e.testcounter != data.testcounter:
                self.save_buffer(data.buffer)
                raise e
        except BaseException:
            self.save_buffer(data.buffer)
            raise
        finally:
            data.freeze()
            self.note_details(data)

        self.target_selector.add(data)

        self.debug_data(data)

        tags = frozenset(data.tags)
        data.tags = self.tag_intern_table.setdefault(tags, tags)

        if data.status == Status.VALID:
            self.valid_examples += 1
            for t in data.tags:
                existing = self.covering_examples.get(t)
                if (
                    existing is None or
                    sort_key(data.buffer) < sort_key(existing.buffer)
                ):
                    self.covering_examples[t] = data
                    if self.database is not None:
                        self.database.save(self.covering_key, data.buffer)
                        if existing is not None:
                            self.database.delete(
                                self.covering_key, existing.buffer)

        tree_node = self.tree[0]
        indices = []
        node_index = 0
        for i, b in enumerate(data.buffer):
            indices.append(node_index)
            if i in data.forced_indices:
                self.forced[node_index] = b
            try:
                self.capped[node_index] = data.capped_indices[i]
            except KeyError:
                pass
            try:
                node_index = tree_node[b]
            except KeyError:
                node_index = len(self.tree)
                self.tree.append({})
                tree_node[b] = node_index
            tree_node = self.tree[node_index]
            if node_index in self.dead:
                break

        for u, v in data.blocks:
            # This can happen if we hit a dead node when walking the buffer.
            # In that case we already have this section of the tree mapped.
if u >= len(indices): break self.block_sizes[indices[u]] = v - u if data.status != Status.OVERRUN and node_index not in self.dead: self.dead.add(node_index) self.tree[node_index] = data for j in reversed(indices): if ( len(self.tree[j]) < self.capped.get(j, 255) + 1 and j not in self.forced ): break if set(self.tree[j].values()).issubset(self.dead): self.dead.add(j) else: break last_data_is_interesting = ( self.last_data is not None and self.last_data.status == Status.INTERESTING ) if data.status == Status.INTERESTING: first_call = len(self.interesting_examples) == 0 key = data.interesting_origin changed = False try: existing = self.interesting_examples[key] except KeyError: changed = True else: if sort_key(data.buffer) < sort_key(existing.buffer): self.downgrade_buffer(existing.buffer) changed = True if changed: self.save_buffer(data.buffer) self.interesting_examples[key] = data self.shrunk_examples.discard(key) if last_data_is_interesting and not first_call: self.shrinks += 1 if not last_data_is_interesting or ( sort_key(data.buffer) < sort_key(self.last_data.buffer) and data.interesting_origin == self.last_data.interesting_origin ): self.last_data = data if self.shrinks >= self.settings.max_shrinks: self.exit_with(ExitReason.max_shrinks) elif ( self.last_data is None or self.last_data.status < Status.INTERESTING ): self.last_data = data if ( self.settings.timeout > 0 and benchmark_time() >= self.start_time + self.settings.timeout ): self.exit_with(ExitReason.timeout) if not self.interesting_examples: if self.valid_examples >= self.settings.max_examples: self.exit_with(ExitReason.max_examples) if self.call_count >= max( self.settings.max_iterations, self.settings.max_examples ): self.exit_with(ExitReason.max_iterations) if self.__tree_is_exhausted(): self.exit_with(ExitReason.finished) self.record_for_health_check(data) def record_for_health_check(self, data): # Once we've actually found a bug, there's no point in trying to run # health checks - they'll just mask 
the actually important information. if data.status == Status.INTERESTING: self.health_check_state = None state = self.health_check_state if state is None: return state.draw_times.extend(data.draw_times) if data.status == Status.VALID: state.valid_examples += 1 elif data.status == Status.INVALID: state.invalid_examples += 1 else: assert data.status == Status.OVERRUN state.overrun_examples += 1 max_valid_draws = 10 max_invalid_draws = 50 max_overrun_draws = 20 assert state.valid_examples <= max_valid_draws if state.valid_examples == max_valid_draws: self.health_check_state = None return if state.overrun_examples == max_overrun_draws: fail_health_check(self.settings, ( 'Examples routinely exceeded the max allowable size. ' '(%d examples overran while generating %d valid ones)' '. Generating examples this large will usually lead to' ' bad results. You could try setting max_size parameters ' 'on your collections and turning ' 'max_leaves down on recursive() calls.') % ( state.overrun_examples, state.valid_examples ), HealthCheck.data_too_large) if state.invalid_examples == max_invalid_draws: fail_health_check(self.settings, ( 'It looks like your strategy is filtering out a lot ' 'of data. Health check found %d filtered examples but ' 'only %d good ones. This will make your tests much ' 'slower, and also will probably distort the data ' 'generation quite a lot. You should adapt your ' 'strategy to filter less. This can also be caused by ' 'a low max_leaves parameter in recursive() calls') % ( state.invalid_examples, state.valid_examples ), HealthCheck.filter_too_much) draw_time = sum(state.draw_times) if draw_time > 1.0: fail_health_check(self.settings, ( 'Data generation is extremely slow: Only produced ' '%d valid examples in %.2f seconds (%d invalid ones ' 'and %d exceeded maximum size). Try decreasing ' "size of the data you're generating (with e.g." 'max_size or max_leaves parameters).' 
) % ( state.valid_examples, draw_time, state.invalid_examples, state.overrun_examples), HealthCheck.too_slow,) def save_buffer(self, buffer): if self.settings.database is not None: key = self.database_key if key is None: return self.settings.database.save(key, hbytes(buffer)) def downgrade_buffer(self, buffer): if self.settings.database is not None: self.settings.database.move( self.database_key, self.secondary_key, buffer) @property def secondary_key(self): return b'.'.join((self.database_key, b'secondary')) @property def covering_key(self): return b'.'.join((self.database_key, b'coverage')) def note_details(self, data): runtime = max(data.finish_time - data.start_time, 0.0) self.all_runtimes.append(runtime) self.all_drawtimes.extend(data.draw_times) self.status_runtimes.setdefault(data.status, []).append(runtime) for event in set(map(self.event_to_string, data.events)): self.event_call_counts[event] += 1 def debug(self, message): with self.settings: debug_report(message) def debug_data(self, data): buffer_parts = [u"["] for i, (u, v) in enumerate(data.blocks): if i > 0: buffer_parts.append(u" || ") buffer_parts.append( u', '.join(int_to_text(int(i)) for i in data.buffer[u:v])) buffer_parts.append(u']') status = unicode_safe_repr(data.status) if data.status == Status.INTERESTING: status = u'%s (%s)' % ( status, unicode_safe_repr(data.interesting_origin,)) self.debug(u'%d bytes %s -> %s, %s' % ( data.index, u''.join(buffer_parts), status, data.output, )) def prescreen_buffer(self, buffer): """Attempt to rule out buffer as a possible interesting candidate. Returns False if we know for sure that running this buffer will not produce an interesting result. Returns True if it might (because it explores territory we have not previously tried). This is purely an optimisation to try to reduce the number of tests we run. "return True" would be a valid but inefficient implementation. 
""" node_index = 0 n = len(buffer) for k, b in enumerate(buffer): if node_index in self.dead: return False try: # The block size at that point provides a lower bound on how # many more bytes are required. If the buffer does not have # enough bytes to fulfill that block size then we can rule out # this buffer. if k + self.block_sizes[node_index] > n: return False except KeyError: pass try: b = self.forced[node_index] except KeyError: pass try: b = min(b, self.capped[node_index]) except KeyError: pass try: node_index = self.tree[node_index][b] except KeyError: return True else: return False def incorporate_new_buffer(self, buffer): assert self.last_data.status == Status.INTERESTING start = self.last_data.interesting_origin buffer = hbytes(buffer[:self.last_data.index]) assert sort_key(buffer) < sort_key(self.last_data.buffer) if not self.prescreen_buffer(buffer): return False assert sort_key(buffer) <= sort_key(self.last_data.buffer) data = ConjectureData.for_buffer(buffer) self.test_function(data) assert self.last_data.interesting_origin == start return data is self.last_data def run(self): with self.settings: try: self._run() except RunIsComplete: pass if self.interesting_examples: self.last_data = max( self.interesting_examples.values(), key=lambda d: sort_key(d.buffer)) if self.last_data is not None: self.debug_data(self.last_data) self.debug( u'Run complete after %d examples (%d valid) and %d shrinks' % ( self.call_count, self.valid_examples, self.shrinks, )) def _new_mutator(self): def draw_new(data, n): return uniform(self.random, n) def draw_existing(data, n): return self.last_data.buffer[data.index:data.index + n] def draw_smaller(data, n): existing = self.last_data.buffer[data.index:data.index + n] r = uniform(self.random, n) if r <= existing: return r return _draw_predecessor(self.random, existing) def draw_larger(data, n): existing = self.last_data.buffer[data.index:data.index + n] r = uniform(self.random, n) if r >= existing: return r return 
_draw_successor(self.random, existing) def reuse_existing(data, n): choices = data.block_starts.get(n, []) or \ self.last_data.block_starts.get(n, []) if choices: i = self.random.choice(choices) return self.last_data.buffer[i:i + n] else: result = uniform(self.random, n) assert isinstance(result, hbytes) return result def flip_bit(data, n): buf = bytearray( self.last_data.buffer[data.index:data.index + n]) i = self.random.randint(0, n - 1) k = self.random.randint(0, 7) buf[i] ^= (1 << k) return hbytes(buf) def draw_zero(data, n): return hbytes(b'\0' * n) def draw_max(data, n): return hbytes([255]) * n def draw_constant(data, n): return hbytes([self.random.randint(0, 255)]) * n def redraw_last(data, n): u = self.last_data.blocks[-1][0] if data.index + n <= u: return self.last_data.buffer[data.index:data.index + n] else: return uniform(self.random, n) options = [ draw_new, redraw_last, redraw_last, reuse_existing, reuse_existing, draw_existing, draw_smaller, draw_larger, flip_bit, draw_zero, draw_max, draw_zero, draw_max, draw_constant, ] bits = [ self.random.choice(options) for _ in hrange(3) ] def draw_mutated(data, n): if data.index + n > len(self.last_data.buffer): result = uniform(self.random, n) else: result = self.random.choice(bits)(data, n) return self.__rewrite_for_novelty( data, self.__zero_bound(data, result)) return draw_mutated def __rewrite(self, data, result): return self.__rewrite_for_novelty( data, self.__zero_bound(data, result) ) def __zero_bound(self, data, result): """This tries to get the size of the generated data under control by replacing the result with zero if we are too deep or have already generated too much data. This causes us to enter "shrinking mode" there and thus reduce the size of the generated data. 
""" if ( data.depth * 2 >= MAX_DEPTH or (data.index + len(result)) * 2 >= self.settings.buffer_size ): if any(result): data.hit_zero_bound = True return hbytes(len(result)) else: return result def __rewrite_for_novelty(self, data, result): """Take a block that is about to be added to data as the result of a draw_bytes call and rewrite it a small amount to ensure that the result will be novel: that is, not hit a part of the tree that we have fully explored. This is mostly useful for test functions which draw a small number of blocks. """ assert isinstance(result, hbytes) try: node_index = data.__current_node_index except AttributeError: node_index = 0 data.__current_node_index = node_index data.__hit_novelty = False data.__evaluated_to = 0 if data.__hit_novelty: return result node = self.tree[node_index] for i in hrange(data.__evaluated_to, len(data.buffer)): node = self.tree[node_index] try: node_index = node[data.buffer[i]] assert node_index not in self.dead node = self.tree[node_index] except KeyError: # pragma: no cover assert False, ( 'This should be impossible. If you see this error, please ' 'report it as a bug (ideally with a reproducible test ' 'case).' 
) for i, b in enumerate(result): assert isinstance(b, int) try: new_node_index = node[b] except KeyError: data.__hit_novelty = True return result new_node = self.tree[new_node_index] if new_node_index in self.dead: if isinstance(result, hbytes): result = bytearray(result) for c in range(256): if c not in node: assert c <= self.capped.get(node_index, c) result[i] = c data.__hit_novelty = True return hbytes(result) else: new_node_index = node[c] new_node = self.tree[new_node_index] if new_node_index not in self.dead: result[i] = c break else: # pragma: no cover assert False, ( 'Found a tree node which is live despite all its ' 'children being dead.') node_index = new_node_index node = new_node assert node_index not in self.dead data.__current_node_index = node_index data.__evaluated_to = data.index + len(result) return hbytes(result) @property def database(self): if self.database_key is None: return None return self.settings.database def has_existing_examples(self): return ( self.database is not None and Phase.reuse in self.settings.phases ) def reuse_existing_examples(self): """If appropriate (we have a database and have been told to use it), try to reload existing examples from the database. If there are a lot we don't try all of them. We always try the smallest example in the database (which is guaranteed to be the last failure) and the largest (which is usually the seed example which the last failure came from but we don't enforce that). We then take a random sampling of the remainder and try those. Any examples that are no longer interesting are cleared out. """ if self.has_existing_examples(): self.debug('Reusing examples from database') # We have to do some careful juggling here. We have two database # corpora: The primary and secondary. The primary corpus is a # small set of minimized examples each of which has at one point # demonstrated a distinct bug. We want to retry all of these. 
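The corpus juggling described in this comment block — retry the whole primary corpus, then top it up from the secondary corpus, down-sampling when there are too many candidates — can be sketched in isolation. The function and names below are illustrative, not Hypothesis's API; the shortlex ordering matches the `sort_key` used elsewhere in this module:

```python
import random


def top_up_corpus(primary, extra, desired_size, rnd):
    """Fill ``primary`` up to ``desired_size`` from ``extra``.

    If the extra corpus fits within the shortfall we take all of it,
    otherwise we take a uniform random sample, then sort the additions
    shortlex (by length, then lexicographically).
    """
    corpus = sorted(primary, key=lambda b: (len(b), b))
    shortfall = desired_size - len(corpus)
    if shortfall <= 0:
        return corpus
    if len(extra) <= shortfall:
        picked = list(extra)
    else:
        picked = rnd.sample(extra, shortfall)
    picked.sort(key=lambda b: (len(b), b))
    corpus.extend(picked)
    return corpus


rnd = random.Random(0)
corpus = top_up_corpus([b'aa'], [b'b', b'ccc', b'dd', b'e'], 3, rnd)
assert len(corpus) == 3
assert corpus[0] == b'aa'  # the primary corpus always comes first
```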
# We also have a secondary corpus of examples that have at some # point demonstrated interestingness (currently only ones that # were previously non-minimal examples of a bug, but this will # likely expand in future). These are a good source of potentially # interesting examples, but there are a lot of them, so we down # sample the secondary corpus to a more manageable size. corpus = sorted( self.settings.database.fetch(self.database_key), key=sort_key ) desired_size = max(2, ceil(0.1 * self.settings.max_examples)) for extra_key in [self.secondary_key, self.covering_key]: if len(corpus) < desired_size: extra_corpus = list( self.settings.database.fetch(extra_key), ) shortfall = desired_size - len(corpus) if len(extra_corpus) <= shortfall: extra = extra_corpus else: extra = self.random.sample(extra_corpus, shortfall) extra.sort(key=sort_key) corpus.extend(extra) self.used_examples_from_database = len(corpus) > 0 for existing in corpus: self.last_data = ConjectureData.for_buffer(existing) try: self.test_function(self.last_data) finally: if self.last_data.status != Status.INTERESTING: self.settings.database.delete( self.database_key, existing) self.settings.database.delete( self.secondary_key, existing) def exit_with(self, reason): self.exit_reason = reason raise RunIsComplete() def generate_new_examples(self): if Phase.generate not in self.settings.phases: return zero_data = self.cached_test_function( hbytes(self.settings.buffer_size)) if zero_data.status == Status.OVERRUN or ( zero_data.status == Status.VALID and len(zero_data.buffer) * 2 > self.settings.buffer_size ): fail_health_check( self.settings, 'The smallest natural example for your test is extremely ' 'large. This makes it difficult for Hypothesis to generate ' 'good examples, especially when trying to reduce failing ones ' 'at the end. Consider reducing the size of your data if it is ' 'of a fixed size. 
You could also fix this by improving how ' 'your data shrinks (see https://hypothesis.readthedocs.io/en/' 'latest/data.html#shrinking for details), or by introducing ' 'default values inside your strategy. e.g. could you replace ' 'some arguments with their defaults by using ' 'one_of(none(), some_complex_strategy)?', HealthCheck.large_base_example ) if self.settings.perform_health_check: self.health_check_state = HealthCheckState() count = 0 while not self.interesting_examples and ( count < 10 or self.health_check_state is not None ): def draw_bytes(data, n): return self.__rewrite_for_novelty( data, self.__zero_bound(data, uniform(self.random, n)) ) targets_found = len(self.covering_examples) self.last_data = ConjectureData( max_length=self.settings.buffer_size, draw_bytes=draw_bytes ) self.test_function(self.last_data) self.last_data.freeze() if len(self.covering_examples) > targets_found: count = 0 else: count += 1 mutations = 0 mutator = self._new_mutator() zero_bound_queue = [] while not self.interesting_examples: if zero_bound_queue: # Whenever we generated an example and it hits a bound # which forces zero blocks into it, this creates a weird # distortion effect by making certain parts of the data # stream (especially ones to the right) much more likely # to be zero. We fix this by redistributing the generated # data by shuffling it randomly. This results in the # zero data being spread evenly throughout the buffer. # Hopefully the shrinking this causes will cause us to # naturally fail to hit the bound. # If it doesn't then we will queue the new version up again # (now with more zeros) and try again. overdrawn = zero_bound_queue.pop() buffer = bytearray(overdrawn.buffer) # These will have values written to them that are different # from what's in them anyway, so the value there doesn't # really "count" for distributional purposes, and if we # leave them in then they can cause the fraction of non # zero bytes to increase on redraw instead of decrease. 
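The redistribution trick described in this comment — zero out the forced positions, then shuffle the whole buffer so the zero bytes are spread evenly — is small enough to show standalone. This is an illustrative sketch, not the engine's actual helper:

```python
import random


def redistribute_zeros(buffer, forced_indices, rnd):
    """Zero the forced positions, then shuffle the buffer so zero bytes
    end up spread evenly through it rather than clustered to the right."""
    buf = bytearray(buffer)
    for i in forced_indices:
        buf[i] = 0  # forced values don't "count" distributionally
    shuffled = list(buf)
    rnd.shuffle(shuffled)
    return bytes(shuffled)


rnd = random.Random(1)
out = redistribute_zeros(b'\x01\x02\x03\x04', {1, 3}, rnd)
assert len(out) == 4
# Same multiset of bytes, with positions 1 and 3 zeroed before shuffling.
assert sorted(out) == [0, 0, 1, 3]
```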
for i in overdrawn.forced_indices: buffer[i] = 0 self.random.shuffle(buffer) buffer = hbytes(buffer) def draw_bytes(data, n): result = buffer[data.index:data.index + n] if len(result) < n: result += hbytes(n - len(result)) return self.__rewrite(data, result) data = ConjectureData( draw_bytes=draw_bytes, max_length=self.settings.buffer_size, ) self.test_function(data) data.freeze() else: target, last_data = self.target_selector.select() mutations += 1 targets_found = len(self.covering_examples) prev_data = self.last_data data = ConjectureData( draw_bytes=mutator, max_length=self.settings.buffer_size ) self.test_function(data) data.freeze() if ( data.status > prev_data.status or len(self.covering_examples) > targets_found ): mutations = 0 elif ( data.status < prev_data.status or not self.target_selector.has_tag(target, data) or mutations >= 10 ): # Cap the variations of a single example and move on to # an entirely fresh start. Ten is an entirely arbitrary # constant, but it's been working well for years. 
mutations = 0 mutator = self._new_mutator() if getattr(data, 'hit_zero_bound', False): zero_bound_queue.append(data) mutations += 1 def _run(self): self.last_data = None self.start_time = benchmark_time() self.reuse_existing_examples() self.generate_new_examples() if ( Phase.shrink not in self.settings.phases or not self.interesting_examples ): self.exit_with(ExitReason.finished) for prev_data in sorted( self.interesting_examples.values(), key=lambda d: sort_key(d.buffer) ): assert prev_data.status == Status.INTERESTING data = ConjectureData.for_buffer(prev_data.buffer) self.test_function(data) if data.status != Status.INTERESTING: self.exit_with(ExitReason.flaky) while len(self.shrunk_examples) < len(self.interesting_examples): target, self.last_data = min([ (k, v) for k, v in self.interesting_examples.items() if k not in self.shrunk_examples], key=lambda kv: (sort_key(kv[1].buffer), sort_key(repr(kv[0]))), ) self.debug('Shrinking %r' % (target,)) assert self.last_data.interesting_origin == target self.shrink() self.shrunk_examples.add(target) self.exit_with(ExitReason.finished) def cached_test_function(self, buffer): node_index = 0 for i in hrange(self.settings.buffer_size): try: c = self.forced[node_index] except KeyError: if i < len(buffer): c = buffer[i] else: c = 0 try: node_index = self.tree[node_index][c] except KeyError: break node = self.tree[node_index] if isinstance(node, ConjectureData): return node result = ConjectureData.for_buffer(buffer) self.test_function(result) return result def try_buffer_with_rewriting_from(self, initial_attempt, v): initial_data = self.cached_test_function(initial_attempt) if initial_data.status == Status.INTERESTING: return initial_data is self.last_data # If this produced something completely invalid we ditch it # here rather than trying to persevere. 
if initial_data.status < Status.VALID: return False if len(initial_data.buffer) < v: return False lost_data = len(self.last_data.buffer) - len(initial_data.buffer) # If this did not in fact cause the data size to shrink we # bail here because it's not worth trying to delete stuff from # the remainder. if lost_data <= 0: return False try_with_deleted = bytearray(initial_attempt) del try_with_deleted[v:v + lost_data] try_with_deleted.extend(hbytes(lost_data - 1)) if self.incorporate_new_buffer(try_with_deleted): return True for r, s in self.last_data.intervals: if ( r >= v and s - r <= lost_data and r < len(initial_data.buffer) ): try_with_deleted = bytearray(initial_attempt) del try_with_deleted[r:s] try_with_deleted.extend(hbytes(s - r - 1)) if self.incorporate_new_buffer(try_with_deleted): return True return False def delta_interval_deletion(self): """Attempt to delete every interval in the example.""" self.debug('delta interval deletes') # We do a delta-debugging style thing here where we initially try to # delete many intervals at once and prune it down exponentially to # eventually only trying to delete one interval at a time. # I'm a little skeptical that this is helpful in general, but we've # got at least one benchmark where it does help. 
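The delta-debugging pattern mentioned here — try deleting k consecutive items at a time, halving k down to single-item deletions, keeping any deletion that preserves the property of interest — can be sketched generically. This is an illustrative reduction over a plain list, standing in for the interval deletions the pass performs:

```python
def ddmin_chunks(items, is_valid):
    """Delta-debugging style deletion: remove runs of k items at a time,
    halving k until we are deleting one item at a time, and keep any
    deletion for which ``is_valid`` still holds."""
    k = len(items) // 2
    while k > 0:
        i = 0
        while i + k <= len(items):
            candidate = items[:i] + items[i + k:]
            if is_valid(candidate):
                items = candidate  # deletion preserved the property
            else:
                i += k  # keep this chunk, move past it
        k //= 2
    return items


# Keep the list "interesting" as long as it still contains the element 3.
result = ddmin_chunks([1, 2, 3, 4, 5, 6], lambda xs: 3 in xs)
assert result == [3]
```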
k = len(self.last_data.intervals) // 2 while k > 0: i = 0 while i + k <= len(self.last_data.intervals): bitmask = [True] * len(self.last_data.buffer) for u, v in self.last_data.intervals[i:i + k]: for t in range(u, v): bitmask[t] = False if not self.incorporate_new_buffer(hbytes( b for b, v in zip(self.last_data.buffer, bitmask) if v )): i += k k //= 2 def greedy_interval_deletion(self): """Attempt to delete every interval in the example.""" self.debug('greedy interval deletes') i = 0 while i < len(self.last_data.intervals): u, v = self.last_data.intervals[i] if not self.incorporate_new_buffer( self.last_data.buffer[:u] + self.last_data.buffer[v:] ): i += 1 def coarse_block_replacement(self): """Attempts to zero every block. This is a very coarse pass that we only run once to attempt to remove some irrelevant detail. The main purpose of it is that if we manage to zero a lot of data then many attempted deletes become duplicates of each other, so we run fewer tests. If more blocks become possible to zero later that will be handled by minimize_individual_blocks. The point of this is simply to provide a fairly fast initial pass. 
""" self.debug('Zeroing blocks') i = 0 while i < len(self.last_data.blocks): buf = self.last_data.buffer u, v = self.last_data.blocks[i] assert u < v block = buf[u:v] if any(block): self.incorporate_new_buffer(buf[:u] + hbytes(v - u) + buf[v:]) i += 1 def minimize_duplicated_blocks(self): """Find blocks that have been duplicated in multiple places and attempt to minimize all of the duplicates simultaneously.""" self.debug('Simultaneous shrinking of duplicated blocks') counts = Counter( self.last_data.buffer[u:v] for u, v in self.last_data.blocks ) blocks = [buffer for buffer, count in counts.items() if count > 1] thresholds = {} for u, v in self.last_data.blocks: b = self.last_data.buffer[u:v] thresholds[b] = v blocks.sort(reverse=True) blocks.sort(key=lambda b: counts[b] * len(b), reverse=True) for block in blocks: parts = [ self.last_data.buffer[r:s] for r, s in self.last_data.blocks ] def replace(b): return hbytes(EMPTY_BYTES.join( hbytes(b if c == block else c) for c in parts )) threshold = thresholds[block] minimize( block, lambda b: self.try_buffer_with_rewriting_from( replace(b), threshold), random=self.random, full=False ) def minimize_individual_blocks(self): self.debug('Shrinking of individual blocks') i = 0 while i < len(self.last_data.blocks): u, v = self.last_data.blocks[i] minimize( self.last_data.buffer[u:v], lambda b: self.try_buffer_with_rewriting_from( self.last_data.buffer[:u] + b + self.last_data.buffer[v:], v ), random=self.random, full=False, ) i += 1 def reorder_blocks(self): self.debug('Reordering blocks') block_lengths = sorted(self.last_data.block_starts, reverse=True) for n in block_lengths: i = 1 while i < len(self.last_data.block_starts.get(n, ())): j = i while j > 0: buf = self.last_data.buffer blocks = self.last_data.block_starts[n] a_start = blocks[j - 1] b_start = blocks[j] a = buf[a_start:a_start + n] b = buf[b_start:b_start + n] if a <= b: break swapped = ( buf[:a_start] + b + buf[a_start + n:b_start] + a + buf[b_start + n:]) 
assert len(swapped) == len(buf) assert swapped < buf if self.incorporate_new_buffer(swapped): j -= 1 else: break i += 1 def shrink(self): # We assume that if an all-zero block of bytes is an interesting # example then we're not going to do better than that. # This might not technically be true: e.g. for integers() | booleans() # the simplest example is actually [1, 0]. Missing this case is fairly # harmless and this allows us to make various simplifying assumptions # about the structure of the data (principally that we're never # operating on a block of all zero bytes so can use non-zeroness as a # signpost of complexity). if ( not any(self.last_data.buffer) or self.incorporate_new_buffer(hbytes(len(self.last_data.buffer))) ): return if self.has_existing_examples(): # If we have any smaller examples in the secondary corpus, now is # a good time to try them to see if they work as shrinks. They # probably won't, but it's worth a shot and gives us a good # opportunity to clear out the database. # It's not worth trying the primary corpus because we already # tried all of those in the initial phase. corpus = sorted( self.settings.database.fetch(self.secondary_key), key=sort_key ) for c in corpus: if sort_key(c) >= sort_key(self.last_data.buffer): break elif self.incorporate_new_buffer(c): break else: self.settings.database.delete(self.secondary_key, c) # Coarse passes that are worth running once when the example is likely # to be "far from shrunk" but not worth repeating in a loop because # they are subsumed by more fine grained passes. 
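The loop structure used by `shrink` — run the coarse passes once, then repeat the fine-grained passes until a full round produces no further change — is a fixpoint iteration. A generic sketch of the pattern (toy passes over a list; the real code uses the shrink counter as its "something changed" signal):

```python
def run_to_fixpoint(passes, state):
    """Run each pass in order, repeating the whole sequence until a full
    round leaves the state unchanged."""
    changed = True
    while changed:
        changed = False
        for p in passes:
            new_state = p(state)
            if new_state != state:
                state = new_state
                changed = True
    return state


# Two toy passes: drop a trailing zero, then halve an even leading value.
strip = lambda xs: xs[:-1] if xs and xs[-1] == 0 else xs
halve = lambda xs: ([xs[0] // 2] + xs[1:]) if xs and xs[0] % 2 == 0 else xs
assert run_to_fixpoint([strip, halve], [8, 0, 0]) == [1]
```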
        self.delta_interval_deletion()
        self.coarse_block_replacement()

        change_counter = -1

        while self.shrinks > change_counter:
            change_counter = self.shrinks

            self.minimize_duplicated_blocks()
            self.minimize_individual_blocks()
            self.reorder_blocks()
            self.greedy_interval_deletion()

    def event_to_string(self, event):
        if isinstance(event, str):
            return event
        try:
            return self.events_to_strings[event]
        except KeyError:
            pass
        result = str(event)
        self.events_to_strings[event] = result
        return result


def _draw_predecessor(rnd, xs):
    r = bytearray()
    any_strict = False
    for x in to_bytes_sequence(xs):
        if not any_strict:
            c = rnd.randint(0, x)
            if c < x:
                any_strict = True
        else:
            c = rnd.randint(0, 255)
        r.append(c)
    return hbytes(r)


def _draw_successor(rnd, xs):
    r = bytearray()
    any_strict = False
    for x in to_bytes_sequence(xs):
        if not any_strict:
            c = rnd.randint(x, 255)
            if c > x:
                any_strict = True
        else:
            c = rnd.randint(0, 255)
        r.append(c)
    return hbytes(r)


def sort_key(buffer):
    return (len(buffer), buffer)


def uniform(random, n):
    return int_to_bytes(random.getrandbits(n * 8), n)


class SampleSet(object):
    """Set data type with the ability to sample uniformly at random from it.

    The mechanism is that we store the set in two parts: A mapping of values
    to their index in an array. Sampling uniformly at random then becomes
    simply a matter of sampling from the array, but we can use the index for
    efficient lookup to add and remove values.
    """

    __slots__ = ('__values', '__index')

    def __init__(self):
        self.__values = []
        self.__index = {}

    def __len__(self):
        return len(self.__values)

    def __repr__(self):
        return 'SampleSet(%r)' % (self.__values,)

    def add(self, value):
        assert value not in self.__index
        # Adding simply consists of adding the value to the end of the array
        # and updating the index.
        self.__index[value] = len(self.__values)
        self.__values.append(value)

    def remove(self, value):
        # To remove a value we first remove it from the index.
But this leaves # us with the value still in the array, so we have to fix that. We # can't simply remove the value from the array, as that would a) Be an # O(n) operation and b) Leave the index completely wrong for every # value after that index. # So what we do is we take the last element of the array and place it # in the position of the value we just deleted (if the value was not # already the last element of the array. If it was then we don't have # to do anything extra). This reorders the array, but that's OK because # we don't care about its order, we just need to sample from it. i = self.__index.pop(value) last = self.__values.pop() if i < len(self.__values): self.__values[i] = last self.__index[last] = i def choice(self, random): return random.choice(self.__values) class Negated(object): __slots__ = ('tag',) def __init__(self, tag): self.tag = tag NEGATED_CACHE = {} def negated(tag): try: return NEGATED_CACHE[tag] except KeyError: result = Negated(tag) NEGATED_CACHE[tag] = result return result universal = UniqueIdentifier('universal') class TargetSelector(object): """Data structure for selecting targets to use for mutation. The goal is to do a good job of exploiting novelty in examples without getting too obsessed with any particular novel factor. Roughly speaking what we want to do is give each distinct coverage target equal amounts of time. However some coverage targets may be harder to fuzz than others, or may only appear in a very small minority of examples, so we don't want to let those dominate the testing. Targets are selected according to the following rules: 1. We ideally want valid examples as our starting point. We ignore interesting examples entirely, and other than that we restrict ourselves to the best example status we've seen so far. If we've only seen OVERRUN examples we use those. If we've seen INVALID but not VALID examples we use those. Otherwise we use VALID examples. 2. 
Among the examples we've seen with the right status, when asked to select a target, we select a coverage target and return that along with an example exhibiting that target uniformly at random. Coverage target selection proceeds as follows: 1. Whenever we return an example from select, we update the usage count of each of its tags. 2. Whenever we see an example, we add it to the list of examples for all of its tags. 3. When selecting a tag, we select one with a minimal usage count. Among those of minimal usage count we select one with the fewest examples. Among those, we select one uniformly at random. This has the following desirable properties: 1. When two coverage targets are intrinsically linked (e.g. when you have multiple lines in a conditional so that either all or none of them will be covered in a conditional) they are naturally deduplicated. 2. Popular coverage targets will largely be ignored for considering what test to run - if every example exhibits a coverage target, picking an example because of that target is rather pointless. 3. When we discover new coverage targets we immediately exploit them until we get to the point where we've spent about as much time on them as the existing targets. 4. Among the interesting deduplicated coverage targets we essentially round-robin between them, but with a more consistent distribution than uniformly at random, which is important particularly for short runs. 
""" def __init__(self, random): self.random = random self.best_status = Status.OVERRUN self.reset() def reset(self): self.examples_by_tags = defaultdict(list) self.tag_usage_counts = Counter() self.tags_by_score = defaultdict(SampleSet) self.scores_by_tag = {} self.scores = [] self.mutation_counts = 0 self.example_counts = 0 self.non_universal_tags = set() self.universal_tags = None def add(self, data): if data.status == Status.INTERESTING: return if data.status < self.best_status: return if data.status > self.best_status: self.best_status = data.status self.reset() if self.universal_tags is None: self.universal_tags = set(data.tags) else: not_actually_universal = self.universal_tags - data.tags for t in not_actually_universal: self.universal_tags.remove(t) self.non_universal_tags.add(t) self.examples_by_tags[t] = list( self.examples_by_tags[universal] ) new_tags = data.tags - self.non_universal_tags for t in new_tags: self.non_universal_tags.add(t) self.examples_by_tags[negated(t)] = list( self.examples_by_tags[universal] ) self.example_counts += 1 for t in self.tags_for(data): self.examples_by_tags[t].append(data) self.rescore(t) def has_tag(self, tag, data): if tag is universal: return True if isinstance(tag, Negated): return tag.tag not in data.tags return tag in data.tags def tags_for(self, data): yield universal for t in data.tags: yield t for t in self.non_universal_tags: if t not in data.tags: yield negated(t) def rescore(self, tag): new_score = ( self.tag_usage_counts[tag], len(self.examples_by_tags[tag])) try: old_score = self.scores_by_tag[tag] except KeyError: pass else: self.tags_by_score[old_score].remove(tag) self.scores_by_tag[tag] = new_score sample = self.tags_by_score[new_score] if len(sample) == 0: heapq.heappush(self.scores, new_score) sample.add(tag) def select_tag(self): while True: peek = self.scores[0] sample = self.tags_by_score[peek] if len(sample) == 0: heapq.heappop(self.scores) else: return sample.choice(self.random) def 
select_example_for_tag(self, t):
        return self.random.choice(self.examples_by_tags[t])

    def select(self):
        t = self.select_tag()
        self.mutation_counts += 1
        result = self.select_example_for_tag(t)
        assert self.has_tag(t, result)
        for s in self.tags_for(result):
            self.tag_usage_counts[s] += 1
            self.rescore(s)
        return t, result

hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/floats.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from array import array

from hypothesis.internal.compat import hbytes, hrange, int_to_bytes
from hypothesis.internal.floats import float_to_int, int_to_float

"""
This module implements support for arbitrary floating point numbers in
Conjecture. It doesn't make any attempt to get a good distribution, only to
get a format that will shrink well.

It works by defining an encoding of non-negative floating point numbers
(including NaN values with a zero sign bit) that has good lexical shrinking
properties.

This encoding is a tagged union of two separate encodings for floating point
numbers, with the tag being the first bit of 64 and the remaining 63 bits
being the payload.
If the tag bit is 0, the next 7 bits are ignored, and the remaining 7 bytes
are interpreted as a 7 byte integer in big-endian order and then converted
to a float (there is some redundancy here, as 7 * 8 = 56, which is larger
than the largest integer that floating point numbers can represent exactly,
so multiple encodings may map to the same float).

If the tag bit is 1, we instead use something that is closer to the normal
representation of floats (and can represent every non-negative float exactly)
but has a better ordering:

1. NaNs are ordered after everything else.
2. Infinity is ordered after every finite number.
3. The sign is ignored unless two floating point numbers are identical in
   absolute magnitude. In that case, the positive is ordered before the
   negative.
4. Positive floating point numbers are ordered first by int(x) where
   encoding(x) < encoding(y) if int(x) < int(y).
5. If int(x) == int(y) then x and y are sorted towards lower denominators of
   their fractional parts.

The format of this encoding of floating point goes as follows:

    [exponent] [mantissa]

Each of these is the same size as its equivalent in IEEE floating point, but
is in a different format.

We translate exponents as follows:

1. The maximum exponent (2 ** 11 - 1) is left unchanged.
2. We reorder the remaining exponents so that all of the positive exponents
   are first, in increasing order, followed by all of the negative exponents
   in decreasing order (where positive/negative is done by the unbiased
   exponent e - 1023).

We translate the mantissa as follows:

1. If the unbiased exponent is <= 0 we reverse it bitwise.
2. If the unbiased exponent is >= 52 we leave it alone.
3. If the unbiased exponent is in the range [1, 51] then we reverse the low
   k bits, where k is 52 - unbiased exponent. The low bits correspond to the
   fractional part of the floating point number. Reversing it bitwise means
   that we try to minimize the low bits, which kills off the higher powers
   of 2 in the fraction first.
""" MAX_EXPONENT = 0x7ff SPECIAL_EXPONENTS = (0, MAX_EXPONENT) BIAS = 1023 MAX_POSITIVE_EXPONENT = (MAX_EXPONENT - 1 - BIAS) def exponent_key(e): if e == MAX_EXPONENT: return float('inf') unbiased = e - BIAS if unbiased < 0: return 10000 - unbiased else: return unbiased ENCODING_TABLE = array('H', sorted(hrange(MAX_EXPONENT + 1), key=exponent_key)) DECODING_TABLE = array('H', [0]) * len(ENCODING_TABLE) for i, b in enumerate(ENCODING_TABLE): DECODING_TABLE[b] = i del i, b def decode_exponent(e): """Take draw_bits(11) and turn it into a suitable floating point exponent such that lexicographically simpler leads to simpler floats.""" assert 0 <= e <= MAX_EXPONENT return ENCODING_TABLE[e] def encode_exponent(e): """Take a floating point exponent and turn it back into the equivalent result from conjecture.""" assert 0 <= e <= MAX_EXPONENT return DECODING_TABLE[e] def reverse_byte(b): result = 0 for _ in range(8): result <<= 1 result |= (b & 1) b >>= 1 return result # Table mapping individual bytes to the equivalent byte with the bits of the # byte reversed. e.g. 1=0b1 is mapped to 0xb10000000=0x80=128. We use this # precalculated table to simplify calculating the bitwise reversal of a longer # integer. REVERSE_BITS_TABLE = bytearray(map(reverse_byte, range(256))) def reverse64(v): """Reverse a 64-bit integer bitwise. We do this by breaking it up into 8 bytes. The 64-bit integer is then the concatenation of each of these bytes. We reverse it by reversing each byte on its own using the REVERSE_BITS_TABLE above, and then concatenating the reversed bytes. In this case concatenating consists of shifting them into the right position for the word and then oring the bits together. 
""" assert v.bit_length() <= 64 return ( (REVERSE_BITS_TABLE[(v >> 0) & 0xff] << 56) | (REVERSE_BITS_TABLE[(v >> 8) & 0xff] << 48) | (REVERSE_BITS_TABLE[(v >> 16) & 0xff] << 40) | (REVERSE_BITS_TABLE[(v >> 24) & 0xff] << 32) | (REVERSE_BITS_TABLE[(v >> 32) & 0xff] << 24) | (REVERSE_BITS_TABLE[(v >> 40) & 0xff] << 16) | (REVERSE_BITS_TABLE[(v >> 48) & 0xff] << 8) | (REVERSE_BITS_TABLE[(v >> 56) & 0xff] << 0) ) MANTISSA_MASK = ((1 << 52) - 1) def reverse_bits(x, n): assert x.bit_length() <= n <= 64 x = reverse64(x) x >>= (64 - n) return x def update_mantissa(unbiased_exponent, mantissa): if unbiased_exponent <= 0: mantissa = reverse_bits(mantissa, 52) elif unbiased_exponent <= 51: n_fractional_bits = (52 - unbiased_exponent) fractional_part = mantissa & ((1 << n_fractional_bits) - 1) mantissa ^= fractional_part mantissa |= reverse_bits(fractional_part, n_fractional_bits) return mantissa def lex_to_float(i): assert i.bit_length() <= 64 has_fractional_part = i >> 63 if has_fractional_part: exponent = (i >> 52) & ((1 << 11) - 1) exponent = decode_exponent(exponent) mantissa = i & MANTISSA_MASK mantissa = update_mantissa(exponent - BIAS, mantissa) assert mantissa.bit_length() <= 52 return int_to_float((exponent << 52) | mantissa) else: integral_part = i & ((1 << 56) - 1) return float(integral_part) def float_to_lex(f): if is_simple(f): assert f >= 0 return int(f) i = float_to_int(f) i &= ((1 << 63) - 1) exponent = i >> 52 mantissa = i & MANTISSA_MASK mantissa = update_mantissa(exponent - BIAS, mantissa) exponent = encode_exponent(exponent) assert mantissa.bit_length() <= 52 return (1 << 63) | (exponent << 52) | mantissa def is_simple(f): try: i = int(f) except (ValueError, OverflowError): return False if i != f: return False return i.bit_length() <= 56 def draw_float(data): try: data.start_example() f = lex_to_float(data.draw_bits(64)) if data.draw_bits(1): f = -f return f finally: data.stop_example() def write_float(data, f): 
data.write(int_to_bytes(float_to_lex(abs(f)), 8))
    sign = float_to_int(f) >> 63
    data.write(hbytes([sign]))

hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/minimizer.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
import math

from hypothesis.internal.compat import ceil, floor, hbytes, hrange, \
    int_to_bytes, int_from_bytes
from hypothesis.internal.conjecture.floats import is_simple, \
    float_to_lex, lex_to_float

"""
This module implements a lexicographic minimizer for blocks of bytes.

That is, given a block of bytes of a given size, and a predicate that accepts
such blocks, it tries to find a lexicographically minimal block of that size
that satisfies the predicate, by repeatedly making local changes to that
starting point.

Assuming it is allowed to run to completion (which, due to the way we use it,
it actually often isn't) it makes the following guarantees, but it usually
tries to do better in practice:

1. The lexicographic predecessor (i.e. the largest block smaller than it) of
   the answer is not a solution.
2. No individual byte in the solution may be lowered while holding the others
   fixed.
""" class Minimizer(object): def __init__(self, initial, condition, random, full): self.current = hbytes(initial) self.size = len(self.current) self.condition = condition self.random = random self.full = full self.changes = 0 self.seen = set() def incorporate(self, buffer): """Consider this buffer as a possible replacement for the current best buffer. Return True if it succeeds as such. """ assert isinstance(buffer, hbytes) assert len(buffer) == self.size assert buffer <= self.current if buffer in self.seen: return False self.seen.add(buffer) if buffer != self.current and self.condition(buffer): self.current = buffer self.changes += 1 return True return False def shift(self): """Attempt to shift individual byte values right as far as they can go.""" prev = -1 while prev != self.changes: prev = self.changes for i in hrange(self.size): block = bytearray(self.current) c = block[i] for k in hrange(c.bit_length(), 0, -1): block[i] = c >> k if self.incorporate(hbytes(block)): break def rotate_suffixes(self): for significant, c in enumerate(self.current): # pragma: no branch if c: break assert self.current[significant] prefix = hbytes(significant) for i in hrange(1, self.size - significant): left = self.current[significant:significant + i] right = self.current[significant + i:] rotated = prefix + right + left if rotated < self.current: self.incorporate(rotated) def shrink_indices(self): # We take a bet that there is some monotonic lower bound such that # whenever current >= lower_bound the result works. 
for i in hrange(self.size):
            if self.current[i] == 0:
                continue

            if self.incorporate(
                self.current[:i] + hbytes([0]) + self.current[i + 1:]
            ):
                continue

            prefix = self.current[:i]
            original_suffix = self.current[i + 1:]

            for suffix in [
                original_suffix, hbytes([255]) * len(original_suffix),
            ]:
                minimize_byte(
                    self.current[i],
                    lambda c: self.current[i] == c or self.incorporate(
                        prefix + hbytes([c]) + suffix)
                )

    def incorporate_int(self, i):
        return self.incorporate(int_to_bytes(i, self.size))

    def incorporate_float(self, f):
        assert self.size == 8
        return self.incorporate_int(float_to_lex(f))

    def float_hack(self):
        """Our encoding of floating point numbers does the right thing when
        you lexically shrink it, but there are some highly non-obvious lexical
        shrinks corresponding to natural floating point operations.

        We can't actually tell when the floating point encoding is being used
        (that would break the assumptions that Hypothesis doesn't inspect the
        generated values), but we can cheat: We just guess when it might be
        being used and perform shrinks that are valid regardless of whether
        our guess is correct.

        So that's what this method does. It's a cheat to give us good
        shrinking of floating point values at low cost in runtime and only
        moderate cost in elegance.
        """
        # If the block is of the wrong size then we're certainly not using the
        # float encoding.
        if self.size != 8:
            return

        # If the high bit is zero then we're in the integer representation of
        # floats so we don't need these hacks because it will shrink normally.
        if self.current[0] >> 7 == 0:
            return

        i = int_from_bytes(self.current)
        f = lex_to_float(i)

        # This floating point number can be represented in our simple format.
        # So we try converting it to that (which will give the same float, but
        # a different encoding of it). If that doesn't work then the float
        # value of this doesn't unambiguously give the desired predicate, so
        # this approach isn't useful.
If it *does* work, then we're now in a # situation where we don't need it, so either way we return here. if is_simple(f): self.incorporate_float(f) return # We check for a bunch of standard "large" floats. If we're currently # worse than them and the shrink downwards doesn't help, abort early # because there's not much useful we can do here. for g in [ float('nan'), float('inf'), sys.float_info.max, ]: j = float_to_lex(g) if j < i: if self.incorporate_int(j): f = g i = j if math.isinf(f) or math.isnan(f): return # Finally we get to the important bit: Each of these is a small change # to the floating point number that corresponds to a large change in # the lexical representation. Trying these ensures that our floating # point shrink can always move past these obstacles. In particular it # ensures we can always move to integer boundaries and shrink past a # change that would require shifting the exponent while not changing # the float value much. for g in [floor(f), ceil(f)]: if self.incorporate_float(g): return if f > 2: self.incorporate_float(f - 1) def run(self): if not any(self.current): return if len(self.current) == 1: minimize_byte( self.current[0], lambda c: c == self.current[0] or self.incorporate(hbytes([c])) ) return # Initial checks as to whether the two smallest possible buffers of # this length can work. If so there's nothing to do here. if self.incorporate(hbytes(self.size)): return if self.incorporate(hbytes([0] * (self.size - 1) + [1])): return # Perform a binary search to try to replace a long initial segment with # zero bytes. # Note that because this property isn't monotonic this will not always # find the longest subsequence we can replace with zero, only some # subsequence. # Replacing the first nonzero bytes with zero does *not* work nonzero = len(self.current) # Replacing the first canzero bytes with zero *does* work. 
canzero = 0
        while self.current[canzero] == 0:
            canzero += 1

        base = self.current

        @binsearch(canzero, nonzero)
        def zero_prefix(mid):
            return self.incorporate(
                hbytes(mid) + base[mid:]
            )

        base = self.current

        @binsearch(0, self.size)
        def shift_right(mid):
            if mid == 0:
                return True
            if mid == self.size:
                return False
            return self.incorporate(hbytes(mid) + base[:-mid])

        change_counter = -1
        first = True
        while (
            (first or self.full) and
            change_counter < self.changes
        ):
            first = False
            change_counter = self.changes

            self.float_hack()
            self.shift()
            self.shrink_indices()
            self.rotate_suffixes()


def minimize(initial, condition, random, full=True):
    """Perform a lexicographical minimization of the byte string 'initial'
    such that the predicate 'condition' returns True, and return the
    minimized string."""
    m = Minimizer(initial, condition, random, full)
    m.run()
    return m.current


def binsearch(_lo, _hi):
    """Run a binary search to find the point at which a function changes value
    between two bounds.

    This function is used purely for its side effects and returns
    nothing.
    """
    def accept(f):
        lo = _lo
        hi = _hi

        loval = f(lo)
        hival = f(hi)

        if loval == hival:
            return

        while lo + 1 < hi:
            mid = (lo + hi) // 2
            midval = f(mid)
            if midval == loval:
                lo = mid
            else:
                assert hival == midval
                hi = mid
    return accept


def minimize_byte(c, f):
    """Return the smallest byte for which a function `f` returns True,
    starting with the byte `c` as an unsigned integer."""
    if f(0):
        return 0
    if c == 1 or f(1):
        return 1
    elif c == 2:
        return 2
    if f(c - 1):
        lo = 1
        hi = c - 1
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if f(mid):
                hi = mid
            else:
                lo = mid
        return hi
    return c

hypothesis-python-3.44.1/src/hypothesis/internal/conjecture/utils.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R.
MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import enum import math from collections import Sequence, OrderedDict from hypothesis._settings import note_deprecation from hypothesis.internal.compat import floor, hbytes, bit_length, \ int_from_bytes from hypothesis.internal.floats import int_to_float def integer_range(data, lower, upper, center=None): assert lower <= upper if lower == upper: return int(lower) if center is None: center = lower center = min(max(center, lower), upper) if center == upper: above = False elif center == lower: above = True else: above = boolean(data) if above: gap = upper - center else: gap = center - lower assert gap > 0 bits = bit_length(gap) probe = gap + 1 while probe > gap: probe = data.draw_bits(bits) if above: result = center + probe else: result = center - probe assert lower <= result <= upper return int(result) def centered_integer_range(data, lower, upper, center): return integer_range( data, lower, upper, center=center ) def check_sample(values): try: from numpy import ndarray if isinstance(values, ndarray): if values.ndim != 1: note_deprecation(( 'Only one-dimensional arrays are supported for sampling, ' 'and the given value has {ndim} dimensions (shape ' '{shape}). This array would give samples of array slices ' 'instead of elements! Use np.ravel(values) to convert ' 'to a one-dimensional array, or tuple(values) if you ' 'want to sample slices. Sampling a multi-dimensional ' 'array will be an error in a future version of Hypothesis.' 
).format(ndim=values.ndim, shape=values.shape))
            return tuple(values)
    except ImportError:  # pragma: no cover
        pass

    if not isinstance(values, (OrderedDict, Sequence, enum.EnumMeta)):
        note_deprecation(
            ('Cannot sample from %r, not a sequence. ' % (values,)) +
            'Hypothesis goes to some length to ensure that sampling an '
            'element from a collection (with `sampled_from` or `choices`) is '
            'replayable and can be minimised. To replay a saved example, '
            'the sampled values must have the same iteration order on every '
            'run - ruling out sets, dicts, etc due to hash randomisation. '
            'Most cases can simply use `sorted(values)`, but mixed types or '
            'special values such as math.nan require careful handling - and '
            'note that when simplifying an example, Hypothesis treats '
            'earlier values as simpler.')
    return tuple(values)


def choice(data, values):
    return values[integer_range(data, 0, len(values) - 1)]


def getrandbits(data, n):
    n_bytes = n // 8
    if n % 8 != 0:
        n_bytes += 1
    return int_from_bytes(data.draw_bytes(n_bytes)) & ((1 << n) - 1)


FLOAT_PREFIX = 0b1111111111 << 52
FULL_FLOAT = int_to_float(FLOAT_PREFIX | ((2 << 53) - 1)) - 1


def fractional_float(data):
    return (
        int_to_float(FLOAT_PREFIX | getrandbits(data, 52)) - 1
    ) / FULL_FLOAT


def geometric(data, p):
    denom = math.log1p(-p)
    data.start_example()
    while True:
        probe = fractional_float(data)
        if probe < 1.0:
            result = int(math.log1p(-probe) / denom)
            assert result >= 0, (probe, p, result)
            data.stop_example()
            return result


def boolean(data):
    return bool(data.draw_bits(1))


def biased_coin(data, p):
    """Return True with probability p (assuming a uniform generator),
    shrinking towards False."""
    data.start_example()
    while True:
        # The logic here is a bit complicated and special cased to make it
        # play better with the shrinker.

        # We imagine partitioning the real interval [0, 1] into 256 equal
        # parts and looking at each part and whether its interior is wholly
        # <= p or wholly >= p. At most one part can be neither.
# We then pick a random part. If it's wholly on one side or the other # of p then we use that as the answer. If p is contained in the # interval then we start again with a new probability that is given # by the fraction of that interval that was <= our previous p. # We then take advantage of the fact that we have control of the # labelling to make this shrink better, using the following tricks: # If p is <= 0 or >= 1 the result of this coin is certain. We make sure # to write a byte to the data stream anyway so that these don't cause # difficulties when shrinking. if p <= 0: data.write(hbytes([0])) result = False elif p >= 1: data.write(hbytes([1])) result = True else: falsey = floor(256 * (1 - p)) truthy = floor(256 * p) remainder = 256 * p - truthy if falsey + truthy == 256: m, n = p.as_integer_ratio() assert n & (n - 1) == 0, n # n is a power of 2 assert n > m > 0 truthy = m falsey = n - m bits = bit_length(n) - 1 partial = False else: bits = 8 partial = True i = data.draw_bits(bits) # We always label the region that causes us to repeat the loop as # 255 so that shrinking this byte never causes us to need to draw # more data. if partial and i == 255: p = remainder continue if falsey == 0: # Every other partition is truthy, so the result is true result = True elif truthy == 0: # Every other partition is falsey, so the result is false result = False elif i <= 1: # We special case so that zero is always false and 1 is always # true which makes shrinking easier because we can always # replace a truthy block with 1. This has the slightly weird # property that shrinking from 2 to 1 can cause the result to # grow, but the shrinker always tries 0 and 1 first anyway, so # this will usually be fine. result = bool(i) else: # Originally everything in the region 0 <= i < falsey was false # and everything above was true. We swapped one truthy element # into this region, so the region becomes 0 <= i <= falsey # except for i = 1. 
# We know i > 1 here, so the test for truth
                # becomes i > falsey.
                result = i > falsey
        break
    data.stop_example()
    return result


class Sampler(object):
    """Sampler based on "Succinct Sampling from Discrete Distributions" by
    Bringmann and Larsen. In general it has some advantages and disadvantages
    compared to the more normal alias method, but its big advantage for us is
    that it plays well with shrinking: The values are laid out in their
    natural order, so they shrink in that order.

    Its big disadvantage is that for heavily biased distributions it can use
    a lot of memory. Solution is some mix of "don't do that then" and not
    worrying about it because Hypothesis is something of a memory hog anyway.
    """

    def __init__(self, weights):
        # We consider each weight expressed in terms of the average weight,
        # say t. We write the weight of i as nt + f, where n is an integer and
        # 0 <= f < 1. We then store n items for this weight which correspond
        # to drawing i unconditionally, and if f > 0 we store an additional
        # item that corresponds to drawing i with probability f. This ensures
        # that (under a uniform model) we draw i with probability
        # proportionate to its weight.

        # We then rearrange things to shrink better. The table with the whole
        # weights is kept in sorted order so that shrinking still corresponds
        # to shrinking leftwards. The fractional weights however are put in
        # a second table that is logically "to the right" of the whole
        # weights and are sorted in order of decreasing acceptance
        # probability. This ensures that shrinking lexicographically always
        # results in drawing less data.
self.table = [] self.extras = [] self.acceptance = [] total = sum(weights) n = len(weights) for i, x in enumerate(weights): whole_occurrences = floor(x * n / total) acceptance = x - whole_occurrences self.acceptance.append(acceptance) for _ in range(whole_occurrences): self.table.append(i) if acceptance > 0: self.extras.append(i) self.extras.sort(key=self.acceptance.__getitem__, reverse=True) def sample(self, data): while True: data.start_example() # We always draw the acceptance data even if we don't need it, # because that way we keep the amount of data we use stable. i = integer_range(data, 0, len(self.table) + len(self.extras) - 1) if i < len(self.table): result = self.table[i] data.stop_example() return result else: result = self.extras[i - len(self.table)] accept = not biased_coin(data, 1 - self.acceptance[result]) data.stop_example() if accept: return result class many(object): """Utility class for collections. Bundles up the logic we use for "should I keep drawing more values?" and handles starting and stopping examples in the right place. Intended usage is something like: elements = many(data, ...) 
while elements.more():
            add_stuff_to_result()
    """

    def __init__(self, data, min_size, max_size, average_size):
        self.min_size = min_size
        self.max_size = max_size
        self.data = data
        self.stopping_value = 1 - 1.0 / (1 + average_size)
        self.count = 0
        self.rejections = 0
        self.drawn = False
        self.force_stop = False

    def more(self):
        """Should I draw another element to add to the collection?"""
        if self.drawn:
            self.data.stop_example()

        self.drawn = True

        if self.min_size == self.max_size:
            should_continue = self.count < self.min_size
        elif self.force_stop:
            should_continue = False
        else:
            if self.count < self.min_size:
                p_continue = 1.0
            elif self.count >= self.max_size:
                p_continue = 0.0
            else:
                p_continue = self.stopping_value
            should_continue = biased_coin(self.data, p_continue)

        if should_continue:
            self.data.start_example()
            self.count += 1
            return True
        else:
            return False

    def reject(self):
        """Reject the last example (i.e. don't count it towards our budget of
        elements because it's not going to go in the final collection)"""
        assert self.count > 0
        self.count -= 1
        self.rejections += 1
        if self.rejections > 2 * self.count:
            if self.count < self.min_size:
                self.data.mark_invalid()
            else:
                self.force_stop = True

hypothesis-python-3.44.1/src/hypothesis/internal/coverage.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import json from contextlib import contextmanager from hypothesis.internal.reflection import proxies """ This module implements a custom coverage system that records conditions and then validates that every condition has been seen to be both True and False during the execution of our tests. The only thing we use it for at present is our argument validation functions, where we assert that every validation function has been seen to both pass and fail in the course of testing. When not running with a magic environment variable set, this module disables itself and has essentially no overhead. """ pretty_file_name_cache = {} def pretty_file_name(f): try: return pretty_file_name_cache[f] except KeyError: pass parts = f.split(os.path.sep) parts = parts[parts.index('hypothesis'):] result = os.path.sep.join(parts) pretty_file_name_cache[f] = result return result IN_COVERAGE_TESTS = os.getenv('HYPOTHESIS_INTERNAL_COVERAGE') == 'true' if IN_COVERAGE_TESTS: log = open('branch-check', 'w') written = set() def record_branch(name, value): key = (name, value) if key in written: return written.add(key) log.write( json.dumps({'name': name, 'value': value}) ) log.write('\n') log.flush() description_stack = [] @contextmanager def check_block(name, depth): # We add an extra two callers to the stack: One for the contextmanager # function, one for our actual caller, so we want to go two extra # stack frames up. 
caller = sys._getframe(depth + 2) local_description = '%s at %s:%d' % ( name, pretty_file_name(caller.f_code.co_filename), caller.f_lineno, ) try: description_stack.append(local_description) description = ' in '.join(reversed(description_stack)) + ' passed' yield record_branch(description, True) except BaseException: record_branch(description, False) raise finally: description_stack.pop() @contextmanager def check(name): with check_block(name, 2): yield def check_function(f): @proxies(f) def accept(*args, **kwargs): # depth of 2 because of the proxy function calling us. with check_block(f.__name__, 2): return f(*args, **kwargs) return accept else: def check_function(f): return f @contextmanager def check(name): yield class suppress_tracing(object): def __enter__(self): self.__original_trace = sys.gettrace() sys.settrace(None) def __exit__(self, exc_type, exc_value, traceback): sys.settrace(self.__original_trace) hypothesis-python-3.44.1/src/hypothesis/internal/detection.py000066400000000000000000000016141321557765100244710ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import from types import MethodType def is_hypothesis_test(test): if isinstance(test, MethodType): return is_hypothesis_test(test.__func__) return getattr(test, 'is_hypothesis_test', False) hypothesis-python-3.44.1/src/hypothesis/internal/escalation.py000066400000000000000000000047431321557765100246430ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys import coverage import hypothesis from hypothesis.errors import StopTest, DeadlineExceeded, \ HypothesisException, UnsatisfiedAssumption from hypothesis.internal.compat import text_type, binary_type, \ encoded_filepath def belongs_to(package): root = os.path.dirname(package.__file__) cache = {text_type: {}, binary_type: {}} def accept(filepath): try: return cache[type(filepath)][filepath] except KeyError: pass filepath = encoded_filepath(filepath) result = os.path.abspath(filepath).startswith(root) cache[type(filepath)][filepath] = result return result accept.__name__ = 'is_%s_file' % (package.__name__,) return accept PREVENT_ESCALATION = os.getenv('HYPOTHESIS_DO_NOT_ESCALATE') == 'true' FILE_CACHE = {} is_hypothesis_file = belongs_to(hypothesis) is_coverage_file = belongs_to(coverage) HYPOTHESIS_CONTROL_EXCEPTIONS = ( DeadlineExceeded, StopTest, UnsatisfiedAssumption ) def mark_for_escalation(e): if not isinstance(e, HYPOTHESIS_CONTROL_EXCEPTIONS): e.hypothesis_internal_always_escalate = True def escalate_hypothesis_internal_error(): if PREVENT_ESCALATION: return error_type, e, tb = sys.exc_info() if getattr(e, 'hypothesis_internal_always_escalate', False): raise import traceback filepath = traceback.extract_tb(tb)[-1][0] if is_hypothesis_file(filepath) and not isinstance( e, (HypothesisException,) + HYPOTHESIS_CONTROL_EXCEPTIONS, ): raise # This is so that if we do something wrong and trigger an internal Coverage # error we don't try to catch it. It should be impossible to trigger, but # you never know. 
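The frame-inspection trick used here, locating the file that the innermost traceback entry points at, can be shown in isolation. `last_frame_is_under` is a hypothetical helper written for this sketch, not part of Hypothesis.

```python
import os
import sys
import traceback


def last_frame_is_under(root):
    """Return True if the innermost frame of the currently handled
    exception's traceback comes from a file under the directory `root`."""
    _, _, tb = sys.exc_info()
    if tb is None:
        return False
    # extract_tb entries are (filename, lineno, name, line); [-1] is the
    # frame where the exception was actually raised.
    filepath = traceback.extract_tb(tb)[-1][0]
    return os.path.abspath(filepath).startswith(os.path.abspath(root))


try:
    raise ValueError('boom')
except ValueError:
    _, _, tb = sys.exc_info()
    this_file = traceback.extract_tb(tb)[-1][0]
    # The raising frame is this very file, so its own directory matches...
    assert last_frame_is_under(os.path.dirname(os.path.abspath(this_file)))
    # ...while an unrelated root does not.
    assert not last_frame_is_under(
        os.path.join(os.path.abspath(os.sep), 'no-such-root-for-this-check'))
```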
if is_coverage_file(filepath): # pragma: no cover raise hypothesis-python-3.44.1/src/hypothesis/internal/floats.py000066400000000000000000000030601321557765100240000ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import math from hypothesis.internal.compat import struct_pack, struct_unpack def sign(x): try: return math.copysign(1.0, x) except TypeError: raise TypeError('Expected float but got %r of type %s' % ( x, type(x).__name__ )) def is_negative(x): return sign(x) < 0 def count_between_floats(x, y): assert x <= y if is_negative(x): if is_negative(y): return float_to_int(x) - float_to_int(y) + 1 else: return count_between_floats(x, -0.0) + count_between_floats(0.0, y) else: assert not is_negative(y) return float_to_int(y) - float_to_int(x) + 1 def float_to_int(value): return ( struct_unpack(b'!Q', struct_pack(b'!d', value))[0] ) def int_to_float(value): return ( struct_unpack(b'!d', struct_pack(b'!Q', value))[0] ) hypothesis-python-3.44.1/src/hypothesis/internal/healthcheck.py000066400000000000000000000026551321557765100247640ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. 
MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.errors import FailedHealthCheck def fail_health_check(settings, message, label): # Tell pytest to omit the body of this function from tracebacks # http://doc.pytest.org/en/latest/example/simple.html#writing-well-integrated-assertion-helpers __tracebackhide__ = True if label in settings.suppress_health_check: return if not settings.perform_health_check: return message += ( '\nSee https://hypothesis.readthedocs.io/en/latest/health' 'checks.html for more information about this. ' 'If you want to disable just this health check, add %s ' 'to the suppress_health_check settings for this test.' ) % (label,) raise FailedHealthCheck(message, label) hypothesis-python-3.44.1/src/hypothesis/internal/intervalsets.py000066400000000000000000000050771321557765100252450ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import class IntervalSet(object): def __init__(self, intervals): self.intervals = tuple(intervals) self.offsets = [0] for u, v in self.intervals: self.offsets.append( self.offsets[-1] + v - u + 1 ) self.size = self.offsets.pop() def __len__(self): return self.size def __iter__(self): for u, v in self.intervals: for i in range(u, v + 1): yield i def __getitem__(self, i): if i < 0: i = self.size + i if i < 0 or i >= self.size: raise IndexError('Invalid index %d for [0, %d)' % (i, self.size)) # Want j = maximal such that offsets[j] <= i j = len(self.intervals) - 1 if self.offsets[j] > i: hi = j lo = 0 # Invariant: offsets[lo] <= i < offsets[hi] while lo + 1 < hi: mid = (lo + hi) // 2 if self.offsets[mid] <= i: lo = mid else: hi = mid j = lo t = i - self.offsets[j] u, v = self.intervals[j] r = u + t assert r <= v return r def __repr__(self): return 'IntervalSet(%r)' % (self.intervals,) def index(self, value): for offset, (u, v) in zip(self.offsets, self.intervals): if u == value: return offset elif u > value: raise ValueError('%d is not in list' % (value,)) if value <= v: return offset + (value - u) raise ValueError('%d is not in list' % (value,)) def index_above(self, value): for offset, (u, v) in zip(self.offsets, self.intervals): if u >= value: return offset if value <= v: return offset + (value - u) return self.size hypothesis-python-3.44.1/src/hypothesis/internal/lazyformat.py000066400000000000000000000024651321557765100247100ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. 
# # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import class lazyformat(object): """A format string that isn't evaluated until it's needed.""" def __init__(self, format_string, *args): self.__format_string = format_string self.__args = args def __str__(self): return self.__format_string % self.__args def __eq__(self, other): return ( isinstance(other, lazyformat) and self.__format_string == other.__format_string and self.__args == other.__args ) def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash(self.__format_string) hypothesis-python-3.44.1/src/hypothesis/internal/reflection.py000066400000000000000000000420571321557765100246530ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER """This file can approximately be considered the collection of hypothesis going to really unreasonable lengths to produce pretty output.""" from __future__ import division, print_function, absolute_import import re import ast import uuid import types import hashlib import inspect from types import ModuleType from functools import wraps from hypothesis.configuration import storage_directory from hypothesis.vendor.pretty import pretty from hypothesis.internal.compat import ARG_NAME_ATTRIBUTE, hrange, \ to_str, qualname, to_unicode, isidentifier, str_to_bytes, \ getfullargspec, update_code_location def fully_qualified_name(f): """Returns a unique identifier for f pointing to the module it was define on, and an containing functions.""" if f.__module__ is not None: return f.__module__ + '.' + qualname(f) else: return qualname(f) def is_mock(obj): """Determine if the given argument is a mock type. We want to be able to detect these when dealing with various test args. As they are sneaky and can look like almost anything else, we'll check this by looking for random attributes. This is more robust than looking for types. """ for _ in range(10): if not hasattr(obj, str(uuid.uuid4())): return False return True def function_digest(function): """Returns a string that is stable across multiple invocations across multiple processes and is prone to changing significantly in response to minor changes to the function. No guarantee of uniqueness though it usually will be. """ hasher = hashlib.md5() try: hasher.update(to_unicode(inspect.getsource(function)).encode('utf-8')) # Different errors on different versions of python. What fun. 
except (OSError, IOError, TypeError): pass try: hasher.update(str_to_bytes(function.__name__)) except AttributeError: pass try: hasher.update(function.__module__.__name__.encode('utf-8')) except AttributeError: pass try: hasher.update(str_to_bytes(repr(getfullargspec(function)))) except TypeError: pass return hasher.digest() def required_args(target, args=(), kwargs=()): """Return a set of names of required args to target that were not supplied in args or kwargs. This is used in builds() to determine which arguments to attempt to fill from type hints. target may be any callable (including classes and bound methods). args and kwargs should be as they are passed to builds() - that is, a tuple of values and a dict of names: values. """ try: spec = getfullargspec( target.__init__ if inspect.isclass(target) else target) except TypeError: # pragma: no cover return None # self appears in the argspec of __init__ and bound methods, but it's an # error to explicitly supply it - so we might skip the first argument. skip_self = int(inspect.isclass(target) or inspect.ismethod(target)) # Start with the args that were not supplied and all kwonly arguments, # then remove all positional arguments with default values, and finally # remove kwonly defaults and any supplied keyword arguments return set(spec.args[skip_self + len(args):] + spec.kwonlyargs) \ - set(spec.args[len(spec.args) - len(spec.defaults or ()):]) \ - set(spec.kwonlydefaults or ()) - set(kwargs) def convert_keyword_arguments(function, args, kwargs): """Returns a pair of a tuple and a dictionary which would be equivalent passed as positional and keyword args to the function. Unless function has. **kwargs the dictionary will always be empty. 
""" argspec = getfullargspec(function) new_args = [] kwargs = dict(kwargs) defaults = dict(argspec.kwonlydefaults or {}) if argspec.defaults: for name, value in zip( argspec.args[-len(argspec.defaults):], argspec.defaults ): defaults[name] = value n = max(len(args), len(argspec.args)) for i in hrange(n): if i < len(args): new_args.append(args[i]) else: arg_name = argspec.args[i] if arg_name in kwargs: new_args.append(kwargs.pop(arg_name)) elif arg_name in defaults: new_args.append(defaults[arg_name]) else: raise TypeError('No value provided for argument %r' % ( arg_name )) if kwargs and not argspec.varkw: if len(kwargs) > 1: raise TypeError('%s() got unexpected keyword arguments %s' % ( function.__name__, ', '.join(map(repr, kwargs)) )) else: bad_kwarg = next(iter(kwargs)) raise TypeError('%s() got an unexpected keyword argument %r' % ( function.__name__, bad_kwarg )) return tuple(new_args), kwargs def convert_positional_arguments(function, args, kwargs): """Return a tuple (new_args, new_kwargs) where all possible arguments have been moved to kwargs. new_args will only be non-empty if function has a variadic argument. 
""" argspec = getfullargspec(function) new_kwargs = dict(argspec.kwonlydefaults or {}) new_kwargs.update(kwargs) if not argspec.varkw: for k in new_kwargs.keys(): if k not in argspec.args and k not in argspec.kwonlyargs: raise TypeError( '%s() got an unexpected keyword argument %r' % ( function.__name__, k )) if len(args) < len(argspec.args): for i in hrange( len(args), len(argspec.args) - len(argspec.defaults or ()) ): if argspec.args[i] not in kwargs: raise TypeError('No value provided for argument %s' % ( argspec.args[i], )) for kw in argspec.kwonlyargs: if kw not in new_kwargs: raise TypeError('No value provided for argument %s' % kw) if len(args) > len(argspec.args) and not argspec.varargs: raise TypeError( '%s() takes at most %d positional arguments (%d given)' % ( function.__name__, len(argspec.args), len(args) ) ) for arg, name in zip(args, argspec.args): if name in new_kwargs: raise TypeError( '%s() got multiple values for keyword argument %r' % ( function.__name__, name )) else: new_kwargs[name] = arg return ( tuple(args[len(argspec.args):]), new_kwargs, ) def extract_all_lambdas(tree): lambdas = [] class Visitor(ast.NodeVisitor): def visit_Lambda(self, node): lambdas.append(node) Visitor().visit(tree) return lambdas def args_for_lambda_ast(l): return [getattr(n, ARG_NAME_ATTRIBUTE) for n in l.args.args] LINE_CONTINUATION = re.compile(r"\\\n") WHITESPACE = re.compile(r"\s+") PROBABLY_A_COMMENT = re.compile("""#[^'"]*$""") SPACE_FOLLOWS_OPEN_BRACKET = re.compile(r"\( ") SPACE_PRECEDES_CLOSE_BRACKET = re.compile(r" \)") def extract_lambda_source(f): """Extracts a single lambda expression from the string source. Returns a string indicating an unknown body if it gets confused in any way. This is not a good function and I am sorry for it. Forgive me my sins, oh lord """ argspec = getfullargspec(f) arg_strings = [] # In Python 2 you can have destructuring arguments to functions. This # results in an argspec with non-string values. 
I'm not very interested in # handling these properly, but it's important to not crash on them. bad_lambda = False for a in argspec.args: if isinstance(a, (tuple, list)): # pragma: no cover arg_strings.append('(%s)' % (', '.join(a),)) bad_lambda = True else: assert isinstance(a, str) arg_strings.append(a) if argspec.varargs: arg_strings.append('*' + argspec.varargs) elif argspec.kwonlyargs: arg_strings.append('*') for a in (argspec.kwonlyargs or []): default = (argspec.kwonlydefaults or {}).get(a) if default: arg_strings.append('{}={}'.format(a, default)) else: arg_strings.append(a) if_confused = 'lambda %s: ' % (', '.join(arg_strings),) if bad_lambda: # pragma: no cover return if_confused try: source = inspect.getsource(f) except IOError: return if_confused source = LINE_CONTINUATION.sub(' ', source) source = WHITESPACE.sub(' ', source) source = source.strip() assert 'lambda' in source tree = None try: tree = ast.parse(source) except SyntaxError: for i in hrange(len(source) - 1, len('lambda'), -1): prefix = source[:i] if 'lambda' not in prefix: break try: tree = ast.parse(prefix) source = prefix break except SyntaxError: continue if tree is None: if source.startswith('@'): # This will always eventually find a valid expression because # the decorator must be a valid Python function call, so will # eventually be syntactically valid and break out of the loop. Thus # this loop can never terminate normally, so a no branch pragma is # appropriate. 
            for i in hrange(len(source) + 1):  # pragma: no branch
                p = source[1:i]
                if 'lambda' in p:
                    try:
                        tree = ast.parse(p)
                        source = p
                        break
                    except SyntaxError:
                        pass
    if tree is None:
        return if_confused

    all_lambdas = extract_all_lambdas(tree)
    aligned_lambdas = [
        l for l in all_lambdas
        if args_for_lambda_ast(l) == argspec.args
    ]
    if len(aligned_lambdas) != 1:
        return if_confused
    lambda_ast = aligned_lambdas[0]
    assert lambda_ast.lineno == 1
    source = source[lambda_ast.col_offset:].strip()

    source = source[source.index('lambda'):]
    for i in hrange(len(source), len('lambda'), -1):  # pragma: no branch
        try:
            parsed = ast.parse(source[:i])
            assert len(parsed.body) == 1
            assert parsed.body
            if isinstance(parsed.body[0].value, ast.Lambda):
                source = source[:i]
                break
        except SyntaxError:
            pass
    lines = source.split('\n')
    lines = [PROBABLY_A_COMMENT.sub('', l) for l in lines]
    source = '\n'.join(lines)

    source = WHITESPACE.sub(' ', source)
    source = SPACE_FOLLOWS_OPEN_BRACKET.sub('(', source)
    source = SPACE_PRECEDES_CLOSE_BRACKET.sub(')', source)
    source = source.strip()
    return source


def get_pretty_function_description(f):
    if not hasattr(f, '__name__'):
        return repr(f)
    name = f.__name__
    if name == '<lambda>':
        result = extract_lambda_source(f)
        return result
    elif isinstance(f, types.MethodType):
        self = f.__self__
        if not (self is None or inspect.isclass(self)):
            return '%r.%s' % (self, name)
    return name


def nicerepr(v):
    if inspect.isfunction(v):
        return get_pretty_function_description(v)
    elif isinstance(v, type):
        return v.__name__
    else:
        return to_str(pretty(v))


def arg_string(f, args, kwargs, reorder=True):
    if reorder:
        args, kwargs = convert_positional_arguments(f, args, kwargs)

    argspec = getfullargspec(f)

    bits = []

    for a in argspec.args:
        if a in kwargs:
            bits.append('%s=%s' % (a, nicerepr(kwargs.pop(a))))
    if kwargs:
        for a in sorted(kwargs):
            bits.append('%s=%s' % (a, nicerepr(kwargs[a])))

    return ', '.join([nicerepr(x) for x in args] + bits)


def unbind_method(f):
    """Take something that might be a method or a function
and return the underlying function.""" return getattr(f, 'im_func', getattr(f, '__func__', f)) def check_valid_identifier(identifier): if not isidentifier(identifier): raise ValueError('%r is not a valid python identifier' % (identifier,)) def eval_directory(): return storage_directory('eval_source') eval_cache = {} def source_exec_as_module(source): try: return eval_cache[source] except KeyError: pass result = ModuleType('hypothesis_temporary_module_%s' % ( hashlib.sha1(str_to_bytes(source)).hexdigest(), )) assert isinstance(source, str) exec(source, result.__dict__) eval_cache[source] = result return result COPY_ARGSPEC_SCRIPT = """ from hypothesis.utils.conventions import not_set def accept(%(funcname)s): def %(name)s(%(argspec)s): return %(funcname)s(%(invocation)s) return %(name)s """.strip() + '\n' def define_function_signature(name, docstring, argspec): """A decorator which sets the name, argspec and docstring of the function passed into it.""" check_valid_identifier(name) for a in argspec.args: check_valid_identifier(a) if argspec.varargs is not None: check_valid_identifier(argspec.varargs) if argspec.varkw is not None: check_valid_identifier(argspec.varkw) n_defaults = len(argspec.defaults or ()) if n_defaults: parts = [] for a in argspec.args[:-n_defaults]: parts.append(a) for a in argspec.args[-n_defaults:]: parts.append('%s=not_set' % (a,)) else: parts = list(argspec.args) used_names = list(argspec.args) + list(argspec.kwonlyargs) used_names.append(name) for a in argspec.kwonlyargs: check_valid_identifier(a) def accept(f): fargspec = getfullargspec(f) must_pass_as_kwargs = [] invocation_parts = [] for a in argspec.args: if a not in fargspec.args and not fargspec.varargs: must_pass_as_kwargs.append(a) else: invocation_parts.append(a) if argspec.varargs: used_names.append(argspec.varargs) parts.append('*' + argspec.varargs) invocation_parts.append('*' + argspec.varargs) elif argspec.kwonlyargs: parts.append('*') for k in must_pass_as_kwargs: 
invocation_parts.append('%(k)s=%(k)s' % {'k': k}) for k in argspec.kwonlyargs: invocation_parts.append('%(k)s=%(k)s' % {'k': k}) if k in (argspec.kwonlydefaults or []): parts.append('%(k)s=not_set' % {'k': k}) else: parts.append(k) if argspec.varkw: used_names.append(argspec.varkw) parts.append('**' + argspec.varkw) invocation_parts.append('**' + argspec.varkw) candidate_names = ['f'] + [ 'f_%d' % (i,) for i in hrange(1, len(used_names) + 2) ] for funcname in candidate_names: # pragma: no branch if funcname not in used_names: break base_accept = source_exec_as_module( COPY_ARGSPEC_SCRIPT % { 'name': name, 'funcname': funcname, 'argspec': ', '.join(parts), 'invocation': ', '.join(invocation_parts) }).accept result = base_accept(f) result.__doc__ = docstring result.__defaults__ = argspec.defaults if argspec.kwonlydefaults: result.__kwdefaults__ = argspec.kwonlydefaults if argspec.annotations: result.__annotations__ = argspec.annotations return result return accept def impersonate(target): """Decorator to update the attributes of a function so that to external introspectors it will appear to be the target function. Note that this updates the function in place, it doesn't return a new one. """ def accept(f): f.__code__ = update_code_location( f.__code__, target.__code__.co_filename, target.__code__.co_firstlineno ) f.__name__ = target.__name__ f.__module__ = target.__module__ f.__doc__ = target.__doc__ return f return accept def proxies(target): def accept(proxy): return impersonate(target)(wraps(target)(define_function_signature( target.__name__, target.__doc__, getfullargspec(target))(proxy))) return accept hypothesis-python-3.44.1/src/hypothesis/internal/renaming.py000066400000000000000000000051331321557765100243130ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. 
MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis._settings import note_deprecation from hypothesis.internal.reflection import proxies def renamed_arguments(**rename_mapping): """Helper function for deprecating arguments that have been renamed to a new form.""" assert len(set(rename_mapping.values())) == len(rename_mapping) def accept(f): @proxies(f) def with_name_check(**kwargs): for k, v in list(kwargs.items()): if k in rename_mapping and v is not None: t = rename_mapping[k] note_deprecation(( 'The argument %s has been renamed to %s. The old ' 'name will go away in a future version of ' 'Hypothesis.') % (k, t)) kwargs[t] = kwargs.pop(k) return f(**kwargs) # This decorates things in the public API, which all have docstrings. # (If they're not in the public API, we don't need a deprecation path.) # But docstrings are stripped when running with PYTHONOPTIMIZE=2. # # If somebody's running with that flag, they don't expect any # docstrings to be present, so this message isn't useful. Absence of # a docstring is a strong indicator that they're running in this mode, # so skip adding this message if that's the case. if with_name_check.__doc__ is not None: with_name_check.__doc__ += '\n'.join(( '', '', 'The following arguments have been renamed:', '', ) + tuple( ' * %s has been renamed to %s' % s for s in rename_mapping.items() ) + ( '', 'Use of the old names has been deprecated and will be removed', 'in a future version of Hypothesis.' 
) ) return with_name_check return accept hypothesis-python-3.44.1/src/hypothesis/internal/validation.py000066400000000000000000000105011321557765100246400ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import math from numbers import Rational from hypothesis.errors import InvalidArgument from hypothesis.internal.compat import integer_types from hypothesis.internal.coverage import check_function @check_function def check_type(typ, arg, name=''): if name: name += '=' if not isinstance(arg, typ): if isinstance(typ, type): typ_string = typ.__name__ else: typ_string = 'one of %s' % ( ', '.join(t.__name__ for t in typ)) raise InvalidArgument('Expected %s but got %s%r (type=%s)' % (typ_string, name, arg, type(arg).__name__)) @check_function def check_strategy(arg, name=''): from hypothesis.searchstrategy import SearchStrategy check_type(SearchStrategy, arg, name) @check_function def check_valid_integer(value): """Checks that value is either unspecified, or a valid integer. Otherwise raises InvalidArgument. """ if value is None: return check_type(integer_types, value) @check_function def check_valid_bound(value, name): """Checks that value is either unspecified, or a valid interval bound. Otherwise raises InvalidArgument. 
""" if value is None or isinstance(value, integer_types + (Rational,)): return if math.isnan(value): raise InvalidArgument(u'Invalid end point %s=%r' % (name, value)) @check_function def try_convert(typ, value, name): if value is None: return None if isinstance(value, typ): return value try: return typ(value) except TypeError: raise InvalidArgument( 'Cannot convert %s=%r of type %s to type %s' % ( name, value, type(value).__name__, typ.__name__ ) ) except (OverflowError, ValueError): raise InvalidArgument( 'Cannot convert %s=%r to type %s' % ( name, value, typ.__name__ ) ) @check_function def check_valid_size(value, name): """Checks that value is either unspecified, or a valid non-negative size expressed as an integer/float. Otherwise raises InvalidArgument. """ if value is None: return check_type(integer_types + (float,), value) if value < 0: raise InvalidArgument(u'Invalid size %s=%r < 0' % (name, value)) if isinstance(value, float) and math.isnan(value): raise InvalidArgument(u'Invalid size %s=%r' % (name, value)) @check_function def check_valid_interval(lower_bound, upper_bound, lower_name, upper_name): """Checks that lower_bound and upper_bound are either unspecified, or they define a valid interval on the number line. Otherwise raises InvalidArgument. 
""" if lower_bound is None or upper_bound is None: return if upper_bound < lower_bound: raise InvalidArgument( 'Cannot have %s=%r < %s=%r' % ( upper_name, upper_bound, lower_name, lower_bound )) @check_function def check_valid_sizes(min_size, average_size, max_size): check_valid_size(min_size, 'min_size') check_valid_size(max_size, 'max_size') check_valid_size(average_size, 'average_size') check_valid_interval(min_size, max_size, 'min_size', 'max_size') check_valid_interval(average_size, max_size, 'average_size', 'max_size') check_valid_interval(min_size, average_size, 'min_size', 'average_size') if ( average_size == 0 and ( max_size is None or max_size > 0 ) ): raise InvalidArgument( 'Cannot have average_size=%r with non-zero max_size=%r' % ( average_size, min_size )) hypothesis-python-3.44.1/src/hypothesis/provisional.py000066400000000000000000000055251321557765100232510ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """This module contains various provisional APIs and strategies. It is intended for internal use, to ease code reuse, and is not stable. Point releases may move or break the contents at any time! Internet strategies should conform to https://tools.ietf.org/html/rfc3696 or the authoritative definitions it links to. If not, report the bug! 
""" from __future__ import division, print_function, absolute_import import string import hypothesis.strategies as st @st.defines_strategy_with_reusable_values def domains(): """A strategy for :rfc:`1035` fully qualified domain names.""" atoms = st.text(string.ascii_letters + '0123456789-', min_size=1, max_size=63 ).filter(lambda s: '-' not in s[0] + s[-1]) return st.builds( lambda x, y: '.'.join(x + [y]), st.lists(atoms, min_size=1), # TODO: be more devious about top-level domains st.sampled_from(['com', 'net', 'org', 'biz', 'info']) ).filter(lambda url: len(url) <= 255) @st.defines_strategy_with_reusable_values def emails(): """A strategy for email addresses. See https://github.com/HypothesisWorks/hypothesis-python/issues/162 for work on a permanent replacement. """ local_chars = string.ascii_letters + string.digits + "!#$%&'*+-/=^_`{|}~" local_part = st.text(local_chars, min_size=1, max_size=64) # TODO: include dot-atoms, quoted strings, escaped chars, etc in local part return st.builds('{}@{}'.format, local_part, domains()).filter( lambda addr: len(addr) <= 255) @st.defines_strategy_with_reusable_values def ip4_addr_strings(): """A strategy for IPv4 address strings. This consists of four strings representing integers [0..255], without zero-padding, joined by dots. """ return st.builds('{}.{}.{}.{}'.format, *(4 * [st.integers(0, 255)])) @st.defines_strategy_with_reusable_values def ip6_addr_strings(): """A strategy for IPv6 address strings. This consists of sixteen quads of hex digits (0000 .. FFFF), joined by colons. Values do not currently have zero-segments collapsed. 
""" part = st.integers(0, 2**16 - 1).map(u'{:04x}'.format) return st.tuples(*[part] * 8).map(lambda a: u':'.join(a).upper()) hypothesis-python-3.44.1/src/hypothesis/reporting.py000066400000000000000000000035101321557765100227050ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import inspect from hypothesis._settings import Verbosity, settings from hypothesis.internal.compat import print_unicode, \ escape_unicode_characters from hypothesis.utils.dynamicvariables import DynamicVariable def silent(value): pass def default(value): try: print_unicode(value) except UnicodeEncodeError: print_unicode(escape_unicode_characters(value)) reporter = DynamicVariable(default) def current_reporter(): return reporter.value def with_reporter(new_reporter): return reporter.with_value(new_reporter) def current_verbosity(): return settings.default.verbosity def to_text(textish): if inspect.isfunction(textish): textish = textish() if isinstance(textish, bytes): textish = textish.decode('utf-8') return textish def verbose_report(text): if current_verbosity() >= Verbosity.verbose: current_reporter()(to_text(text)) def debug_report(text): if current_verbosity() >= Verbosity.debug: current_reporter()(to_text(text)) def report(text): if current_verbosity() >= Verbosity.normal: current_reporter()(to_text(text)) 
hypothesis-python-3.44.1/src/hypothesis/searchstrategy/000077500000000000000000000000001321557765100233535ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/searchstrategy/__init__.py000066400000000000000000000014631321557765100254700ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """Package defining SearchStrategy, which is the core type that Hypothesis uses to explore data.""" from .strategies import SearchStrategy __all__ = [ 'SearchStrategy', ] hypothesis-python-3.44.1/src/hypothesis/searchstrategy/collections.py000066400000000000000000000154031321557765100262460ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.internal.conjecture.utils as cu
from hypothesis.errors import InvalidArgument
from hypothesis.internal.compat import OrderedDict, hbytes
from hypothesis.searchstrategy.strategies import SearchStrategy, \
    MappedSearchStrategy, one_of_strategies


class TupleStrategy(SearchStrategy):
    """A strategy responsible for fixed length tuples based on heterogeneous
    strategies for each of their elements."""

    def __init__(self, strategies, tuple_type):
        SearchStrategy.__init__(self)
        strategies = tuple(strategies)
        self.element_strategies = strategies

    def do_validate(self):
        for s in self.element_strategies:
            s.validate()

    def __repr__(self):
        if len(self.element_strategies) == 1:
            tuple_string = '%s,' % (repr(self.element_strategies[0]),)
        else:
            tuple_string = ', '.join(map(repr, self.element_strategies))
        return 'TupleStrategy((%s))' % (
            tuple_string,
        )

    def calc_has_reusable_values(self, recur):
        return all(recur(e) for e in self.element_strategies)

    def newtuple(self, xs):
        """Produce a new tuple of the correct type."""
        return tuple(xs)

    def do_draw(self, data):
        return self.newtuple(
            data.draw(e) for e in self.element_strategies
        )

    def calc_is_empty(self, recur):
        return any(recur(e) for e in self.element_strategies)


TERMINATOR = hbytes(b'\0')


class ListStrategy(SearchStrategy):
    """A strategy for lists which takes an intended average length and a
    strategy for each of its element types and generates lists containing
    any of those element types.

    The conditional distribution of the length is geometric, and the
    conditional distribution of each parameter is whatever their
    strategies define.
""" def __init__( self, strategies, average_length=50.0, min_size=0, max_size=float('inf') ): SearchStrategy.__init__(self) assert average_length > 0 self.average_length = average_length strategies = tuple(strategies) self.min_size = min_size or 0 self.max_size = max_size or float('inf') self.element_strategy = one_of_strategies(strategies) def do_validate(self): self.element_strategy.validate() if self.is_empty: raise InvalidArgument(( 'Cannot create non-empty lists with elements drawn from ' 'strategy %r because it has no values.') % ( self.element_strategy,)) def calc_is_empty(self, recur): if self.min_size == 0: return False else: return recur(self.element_strategy) def do_draw(self, data): if self.element_strategy.is_empty: assert self.min_size == 0 return [] elements = cu.many( data, min_size=self.min_size, max_size=self.max_size, average_size=self.average_length ) result = [] while elements.more(): result.append(data.draw(self.element_strategy)) return result def __repr__(self): return ( 'ListStrategy(%r, min_size=%r, average_size=%r, max_size=%r)' ) % ( self.element_strategy, self.min_size, self.average_length, self.max_size ) class UniqueListStrategy(SearchStrategy): def __init__( self, elements, min_size, max_size, average_size, key ): super(UniqueListStrategy, self).__init__() assert min_size <= average_size <= max_size self.min_size = min_size self.max_size = max_size self.average_size = average_size self.element_strategy = elements self.key = key def do_validate(self): self.element_strategy.validate() if self.is_empty: raise InvalidArgument(( 'Cannot create non-empty lists with elements drawn from ' 'strategy %r because it has no values.') % ( self.element_strategy,)) def calc_is_empty(self, recur): if self.min_size == 0: return False else: return recur(self.element_strategy) def do_draw(self, data): if self.element_strategy.is_empty: assert self.min_size == 0 return [] elements = cu.many( data, min_size=self.min_size, max_size=self.max_size, 
average_size=self.average_size ) seen = set() result = [] while elements.more(): value = data.draw(self.element_strategy) k = self.key(value) if k in seen: elements.reject() else: seen.add(k) result.append(value) assert self.max_size >= len(result) >= self.min_size return result class FixedKeysDictStrategy(MappedSearchStrategy): """A strategy which produces dicts with a fixed set of keys, given a strategy for each of their equivalent values. e.g. {'foo' : some_int_strategy} would generate dicts with the single key 'foo' mapping to some integer. """ def __init__(self, strategy_dict): self.dict_type = type(strategy_dict) if isinstance(strategy_dict, OrderedDict): self.keys = tuple(strategy_dict.keys()) else: try: self.keys = tuple(sorted( strategy_dict.keys(), )) except TypeError: self.keys = tuple(sorted( strategy_dict.keys(), key=repr, )) super(FixedKeysDictStrategy, self).__init__( strategy=TupleStrategy( (strategy_dict[k] for k in self.keys), tuple ) ) def calc_is_empty(self, recur): return recur(self.mapped_strategy) def __repr__(self): return 'FixedKeysDictStrategy(%r, %r)' % ( self.keys, self.mapped_strategy) def pack(self, value): return self.dict_type(zip(self.keys, value)) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/datetime.py000066400000000000000000000105551321557765100255270ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import datetime as dt from hypothesis.internal.conjecture import utils from hypothesis.searchstrategy.strategies import SearchStrategy __all__ = ['DateStrategy', 'DatetimeStrategy', 'TimedeltaStrategy'] def is_pytz_timezone(tz): if not isinstance(tz, dt.tzinfo): return False module = type(tz).__module__ return module == 'pytz' or module.startswith('pytz.') class DatetimeStrategy(SearchStrategy): def __init__(self, min_value, max_value, timezones_strat): assert isinstance(min_value, dt.datetime) assert isinstance(max_value, dt.datetime) assert min_value.tzinfo is None assert max_value.tzinfo is None assert min_value <= max_value assert isinstance(timezones_strat, SearchStrategy) self.min_dt = min_value self.max_dt = max_value self.tz_strat = timezones_strat def _attempt_one_draw(self, data): result = dict() cap_low, cap_high = True, True for name in ('year', 'month', 'day', 'hour', 'minute', 'second', 'microsecond'): low = getattr(self.min_dt if cap_low else dt.datetime.min, name) high = getattr(self.max_dt if cap_high else dt.datetime.max, name) if name == 'year': val = utils.centered_integer_range(data, low, high, 2000) else: val = utils.integer_range(data, low, high) result[name] = val cap_low = cap_low and val == low cap_high = cap_high and val == high tz = data.draw(self.tz_strat) try: result = dt.datetime(**result) if is_pytz_timezone(tz): # Can't just construct; see http://pytz.sourceforge.net return tz.normalize(tz.localize(result)) return result.replace(tzinfo=tz) except (ValueError, OverflowError): return None def do_draw(self, data): for _ in range(3): result = self._attempt_one_draw(data) if result is not None: return result data.note_event('3 attempts to create a datetime between %r and %r ' 'with timezone from %r failed.' 
% (self.min_dt, self.max_dt, self.tz_strat)) data.mark_invalid() class DateStrategy(SearchStrategy): def __init__(self, min_value, max_value): assert isinstance(min_value, dt.date) assert isinstance(max_value, dt.date) assert min_value < max_value self.min_value = min_value self.days_apart = (max_value - min_value).days self.center = (dt.date(2000, 1, 1) - min_value).days def do_draw(self, data): return self.min_value + dt.timedelta(days=utils.centered_integer_range( data, 0, self.days_apart, center=self.center)) class TimedeltaStrategy(SearchStrategy): def __init__(self, min_value, max_value): assert isinstance(min_value, dt.timedelta) assert isinstance(max_value, dt.timedelta) assert min_value < max_value self.min_value = min_value self.max_value = max_value def do_draw(self, data): result = dict() low_bound = True high_bound = True for name in ('days', 'seconds', 'microseconds'): low = getattr( self.min_value if low_bound else dt.timedelta.min, name) high = getattr( self.max_value if high_bound else dt.timedelta.max, name) val = utils.centered_integer_range(data, low, high, 0) result[name] = val low_bound = low_bound and val == low high_bound = high_bound and val == high return dt.timedelta(**result) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/deferred.py000066400000000000000000000060341321557765100255100ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import inspect

from hypothesis.errors import InvalidArgument
from hypothesis.internal.reflection import get_pretty_function_description
from hypothesis.searchstrategy.strategies import SearchStrategy


class DeferredStrategy(SearchStrategy):
    """A strategy which may be used before it is fully defined."""

    def __init__(self, definition):
        SearchStrategy.__init__(self)
        self.__wrapped_strategy = None
        self.__in_repr = False
        self.__is_empty = None
        self.__definition = definition

    @property
    def wrapped_strategy(self):
        if self.__wrapped_strategy is None:
            if not inspect.isfunction(self.__definition):
                raise InvalidArgument((
                    'Expected a definition to be a function but got %r of'
                    ' type %s instead.') % (
                    self.__definition, type(self.__definition).__name__))
            result = self.__definition()
            if result is self:
                raise InvalidArgument(
                    'Cannot define a deferred strategy to be itself')
            if not isinstance(result, SearchStrategy):
                raise InvalidArgument((
                    'Expected definition to return a SearchStrategy but '
                    'returned %r of type %s') % (
                    result, type(result).__name__
                ))
            self.__wrapped_strategy = result
            del self.__definition
        return self.__wrapped_strategy

    @property
    def branches(self):
        return self.wrapped_strategy.branches

    @property
    def supports_find(self):
        return self.wrapped_strategy.supports_find

    def calc_is_empty(self, recur):
        return recur(self.wrapped_strategy)

    def calc_has_reusable_values(self, recur):
        return recur(self.wrapped_strategy)

    def __repr__(self):
        if self.__wrapped_strategy is not None:
            if self.__in_repr:
                return '(deferred@%r)' % (id(self),)
            try:
                self.__in_repr = True
                return repr(self.__wrapped_strategy)
            finally:
                self.__in_repr = False
        else:
            return 'deferred(%s)' % (
                get_pretty_function_description(self.__definition)
            )

    def do_draw(self, data):
        return data.draw(self.wrapped_strategy)
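DeferredStrategy above resolves its zero-argument definition lazily, on first access, which is what lets a strategy definition refer to names that are not bound yet (including itself, indirectly). Here is a stand-alone sketch of that deferral pattern using plain values instead of SearchStrategy objects; all names in it are illustrative, not part of Hypothesis:

```python
# Stand-alone sketch of the lazy-resolution pattern DeferredStrategy
# uses: the definition is a zero-argument callable, invoked on first
# access and cached, with the same validation ideas as above.
class Deferred(object):
    def __init__(self, definition):
        if not callable(definition):
            raise ValueError(
                'Expected a function, got %r' % (definition,))
        self._definition = definition
        self._resolved = None

    @property
    def wrapped(self):
        # Resolve on first access; cache the result for later accesses.
        if self._resolved is None:
            result = self._definition()
            if result is self:
                raise ValueError('Cannot define a deferred value as itself')
            self._resolved = result
        return self._resolved


# `tree` can mention `leaves` before either is resolved, because the
# lambdas are only called when .wrapped is first read.
leaves = Deferred(lambda: [1, 2, 3])
tree = Deferred(lambda: [leaves.wrapped, [leaves.wrapped]])

tree.wrapped  # -> [[1, 2, 3], [[1, 2, 3]]]
```

The real class adds `repr` handling for the self-referential case (the `__in_repr` guard above), which this sketch omits.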
hypothesis-python-3.44.1/src/hypothesis/searchstrategy/flatmapped.py000066400000000000000000000033151321557765100260440ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.internal.reflection import get_pretty_function_description from hypothesis.searchstrategy.strategies import SearchStrategy class FlatMapStrategy(SearchStrategy): def __init__( self, strategy, expand ): super(FlatMapStrategy, self).__init__() self.flatmapped_strategy = strategy self.expand = expand def calc_is_empty(self, recur): return recur(self.flatmapped_strategy) def __repr__(self): if not hasattr(self, u'_cached_repr'): self._cached_repr = u'%r.flatmap(%s)' % ( self.flatmapped_strategy, get_pretty_function_description( self.expand)) return self._cached_repr def do_draw(self, data): source = data.draw(self.flatmapped_strategy) return data.draw(self.expand(source)) @property def branches(self): return [ FlatMapStrategy(strategy=strategy, expand=self.expand) for strategy in self.flatmapped_strategy.branches ] hypothesis-python-3.44.1/src/hypothesis/searchstrategy/lazy.py000066400000000000000000000120351321557765100247050ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright 
(C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.internal.compat import getfullargspec from hypothesis.internal.reflection import arg_string, \ convert_keyword_arguments, convert_positional_arguments from hypothesis.searchstrategy.strategies import SearchStrategy unwrap_cache = {} unwrap_depth = 0 def unwrap_strategies(s): global unwrap_depth if not isinstance(s, SearchStrategy): return s try: return unwrap_cache[s] except KeyError: pass unwrap_cache[s] = s try: unwrap_depth += 1 try: result = unwrap_strategies(s.wrapped_strategy) unwrap_cache[s] = result try: assert result.force_has_reusable_values == \ s.force_has_reusable_values except AttributeError: pass try: result.force_has_reusable_values = s.force_has_reusable_values except AttributeError: pass return result except AttributeError: return s finally: unwrap_depth -= 1 if unwrap_depth <= 0: unwrap_cache.clear() assert unwrap_depth >= 0 class LazyStrategy(SearchStrategy): """A strategy which is defined purely by conversion to and from another strategy. Its parameter and distribution come from that other strategy. 
""" def __init__(self, function, args, kwargs): SearchStrategy.__init__(self) self.__wrapped_strategy = None self.__representation = None self.__function = function self.__args = args self.__kwargs = kwargs @property def supports_find(self): return self.wrapped_strategy.supports_find def calc_is_empty(self, recur): return recur(self.wrapped_strategy) def calc_has_reusable_values(self, recur): return recur(self.wrapped_strategy) def calc_is_cacheable(self, recur): for source in (self.__args, self.__kwargs.values()): for v in source: if isinstance(v, SearchStrategy) and not v.is_cacheable: return False return True @property def wrapped_strategy(self): if self.__wrapped_strategy is None: unwrapped_args = tuple( unwrap_strategies(s) for s in self.__args) unwrapped_kwargs = { k: unwrap_strategies(v) for k, v in self.__kwargs.items() } base = self.__function( *self.__args, **self.__kwargs ) if ( unwrapped_args == self.__args and unwrapped_kwargs == self.__kwargs ): self.__wrapped_strategy = base else: self.__wrapped_strategy = self.__function( *unwrapped_args, **unwrapped_kwargs) return self.__wrapped_strategy def do_validate(self): w = self.wrapped_strategy assert isinstance(w, SearchStrategy), \ '%r returned non-strategy %r' % (self, w) w.validate() def __repr__(self): if self.__representation is None: _args = self.__args _kwargs = self.__kwargs argspec = getfullargspec(self.__function) defaults = dict(argspec.kwonlydefaults or {}) if argspec.defaults is not None: for name, value in zip(reversed(argspec.args), reversed(argspec.defaults)): defaults[name] = value if len(argspec.args) > 1 or argspec.defaults: _args, _kwargs = convert_positional_arguments( self.__function, _args, _kwargs) else: _args, _kwargs = convert_keyword_arguments( self.__function, _args, _kwargs) kwargs_for_repr = dict(_kwargs) for k, v in defaults.items(): if k in kwargs_for_repr and kwargs_for_repr[k] is defaults[k]: del kwargs_for_repr[k] self.__representation = '%s(%s)' % ( 
self.__function.__name__, arg_string( self.__function, _args, kwargs_for_repr, reorder=False), ) return self.__representation def do_draw(self, data): return data.draw(self.wrapped_strategy) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/misc.py000066400000000000000000000054311321557765100246630ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import hypothesis.internal.conjecture.utils as d from hypothesis.types import RandomWithSeed from hypothesis.searchstrategy.strategies import SearchStrategy, \ MappedSearchStrategy class BoolStrategy(SearchStrategy): """A strategy that produces Booleans with a Bernoulli conditional distribution.""" def __repr__(self): return u'BoolStrategy()' def calc_has_reusable_values(self, recur): return True def do_draw(self, data): return d.boolean(data) def is_simple_data(value): try: hash(value) return True except TypeError: return False class JustStrategy(SearchStrategy): """A strategy which simply returns a single fixed value with probability 1.""" def __init__(self, value): SearchStrategy.__init__(self) self.value = value def __repr__(self): return 'just(%r)' % (self.value,) def calc_has_reusable_values(self, recur): return True def calc_is_cacheable(self, recur): return is_simple_data(self.value) def do_draw(self, data): return self.value class 
RandomStrategy(MappedSearchStrategy): """A strategy which produces Random objects. The conditional distribution is simply a RandomWithSeed seeded with a 128 bits of data chosen uniformly at random. """ def pack(self, i): return RandomWithSeed(i) class SampledFromStrategy(SearchStrategy): """A strategy which samples from a set of elements. This is essentially equivalent to using a OneOfStrategy over Just strategies but may be more efficient and convenient. The conditional distribution chooses uniformly at random from some non-empty subset of the elements. """ def __init__(self, elements): SearchStrategy.__init__(self) self.elements = d.check_sample(elements) assert self.elements def calc_has_reusable_values(self, recur): return True def calc_is_cacheable(self, recur): return is_simple_data(self.elements) def do_draw(self, data): return d.choice(data, self.elements) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/numbers.py000066400000000000000000000135561321557765100254120ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import math import hypothesis.internal.conjecture.utils as d import hypothesis.internal.conjecture.floats as flt from hypothesis.control import assume from hypothesis.internal.compat import int_from_bytes from hypothesis.internal.floats import sign from hypothesis.searchstrategy.strategies import SearchStrategy, \ MappedSearchStrategy class IntStrategy(SearchStrategy): """A generic strategy for integer types that provides the basic methods other than produce. Subclasses should provide the produce method. """ class IntegersFromStrategy(SearchStrategy): def __init__(self, lower_bound, average_size=100000.0): super(IntegersFromStrategy, self).__init__() self.lower_bound = lower_bound self.average_size = average_size def __repr__(self): return 'IntegersFromStrategy(%d)' % (self.lower_bound,) def do_draw(self, data): return int( self.lower_bound + d.geometric(data, 1.0 / self.average_size)) class WideRangeIntStrategy(IntStrategy): def __repr__(self): return 'WideRangeIntStrategy()' def do_draw(self, data): size = 16 sign_mask = 2 ** (size * 8 - 1) byt = data.draw_bytes(size) r = int_from_bytes(byt) negative = r & sign_mask r &= (~sign_mask) if negative: r = -r return int(r) class BoundedIntStrategy(SearchStrategy): """A strategy for providing integers in some interval with inclusive endpoints.""" def __init__(self, start, end): SearchStrategy.__init__(self) self.start = start self.end = end def __repr__(self): return 'BoundedIntStrategy(%d, %d)' % (self.start, self.end) def do_draw(self, data): return d.integer_range(data, self.start, self.end) NASTY_FLOATS = sorted([ 0.0, 0.5, 1.1, 1.5, 1.9, 1.0 / 3, 10e6, 10e-6, 1.175494351e-38, 2.2250738585072014e-308, 1.7976931348623157e+308, 3.402823466e+38, 9007199254740992, 1 - 10e-6, 2 + 10e-6, 1.192092896e-07, 2.2204460492503131e-016, ] + [float('inf'), float('nan')] * 5, key=flt.float_to_lex) NASTY_FLOATS = list(map(float, NASTY_FLOATS)) 
NASTY_FLOATS.extend([-x for x in NASTY_FLOATS]) class FloatStrategy(SearchStrategy): """Generic superclass for strategies which produce floats.""" def __init__(self, allow_infinity, allow_nan): SearchStrategy.__init__(self) assert isinstance(allow_infinity, bool) assert isinstance(allow_nan, bool) self.allow_infinity = allow_infinity self.allow_nan = allow_nan self.nasty_floats = [f for f in NASTY_FLOATS if self.permitted(f)] weights = [ 0.6 * len(self.nasty_floats) ] + [0.4] * len(self.nasty_floats) self.sampler = d.Sampler(weights) def __repr__(self): return '%s()' % (self.__class__.__name__,) def permitted(self, f): assert isinstance(f, float) if not self.allow_infinity and math.isinf(f): return False if not self.allow_nan and math.isnan(f): return False return True def do_draw(self, data): while True: data.start_example() i = self.sampler.sample(data) if i == 0: result = flt.draw_float(data) else: result = self.nasty_floats[i - 1] flt.write_float(data, result) data.stop_example() if self.permitted(result): return result def float_order_key(k): return (sign(k), k) class FixedBoundedFloatStrategy(SearchStrategy): """A strategy for floats distributed between two endpoints. The conditional distribution tries to produce values clustered closer to one of the ends. 
""" def __init__(self, lower_bound, upper_bound): SearchStrategy.__init__(self) self.lower_bound = float(lower_bound) self.upper_bound = float(upper_bound) assert not math.isinf(self.upper_bound - self.lower_bound) lb = float_order_key(self.lower_bound) ub = float_order_key(self.upper_bound) self.critical = [ z for z in (-0.0, 0.0) if lb <= float_order_key(z) <= ub ] self.critical.append(self.lower_bound) self.critical.append(self.upper_bound) def __repr__(self): return 'FixedBoundedFloatStrategy(%s, %s)' % ( self.lower_bound, self.upper_bound, ) def do_draw(self, data): f = self.lower_bound + ( self.upper_bound - self.lower_bound) * d.fractional_float(data) assume(self.lower_bound <= f <= self.upper_bound) assume(sign(self.lower_bound) <= sign(f) <= sign(self.upper_bound)) # Special handling for bounds of -0.0 for g in [self.lower_bound, self.upper_bound]: if f == g: f = math.copysign(f, g) return f class ComplexStrategy(MappedSearchStrategy): """A strategy over complex numbers, with real and imaginary values distributed according to some provided strategy for floating point numbers.""" def __repr__(self): return 'ComplexStrategy()' def pack(self, value): return complex(*value) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/recursive.py000066400000000000000000000070771321557765100257470ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import from contextlib import contextmanager from hypothesis.errors import InvalidArgument from hypothesis.internal.lazyformat import lazyformat from hypothesis.internal.reflection import get_pretty_function_description from hypothesis.searchstrategy.strategies import OneOfStrategy, \ SearchStrategy class LimitReached(BaseException): pass class LimitedStrategy(SearchStrategy): def __init__(self, strategy): super(LimitedStrategy, self).__init__() self.base_strategy = strategy self.marker = 0 self.currently_capped = False def do_validate(self): self.base_strategy.validate() def do_draw(self, data): assert self.currently_capped if self.marker <= 0: raise LimitReached() self.marker -= 1 return data.draw(self.base_strategy) @contextmanager def capped(self, max_templates): assert not self.currently_capped try: self.currently_capped = True self.marker = max_templates yield finally: self.currently_capped = False class RecursiveStrategy(SearchStrategy): def __init__(self, base, extend, max_leaves): self.max_leaves = max_leaves self.base = base self.limited_base = LimitedStrategy(base) self.extend = extend strategies = [self.limited_base, self.extend(self.limited_base)] while 2 ** len(strategies) <= max_leaves: strategies.append( extend(OneOfStrategy(tuple(strategies), bias=0.8))) self.strategy = OneOfStrategy(strategies) def __repr__(self): if not hasattr(self, '_cached_repr'): self._cached_repr = 'recursive(%r, %s, max_leaves=%d)' % ( self.base, get_pretty_function_description(self.extend), self.max_leaves ) return self._cached_repr def do_validate(self): if not isinstance(self.base, SearchStrategy): raise InvalidArgument( 'Expected base to be SearchStrategy but got %r' % (self.base,) ) extended = self.extend(self.limited_base) if not isinstance(extended, SearchStrategy): raise InvalidArgument( 'Expected extend(%r) to be a SearchStrategy but got %r' % ( self.limited_base, extended )) 
self.limited_base.validate() self.extend(self.limited_base).validate() def do_draw(self, data): count = 0 while True: try: with self.limited_base.capped(self.max_leaves): return data.draw(self.strategy) except LimitReached: if count == 0: data.note_event(lazyformat( 'Draw for %r exceeded max_leaves ' 'and had to be retried', self,)) count += 1 hypothesis-python-3.44.1/src/hypothesis/searchstrategy/regex.py000066400000000000000000000432761321557765100250530ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import re
import sys
import operator
import sre_parse
import sre_constants as sre

import hypothesis.strategies as st
from hypothesis import reject
from hypothesis.internal.compat import PY3, hrange, hunichr, text_type, \
    int_to_byte

HAS_SUBPATTERN_FLAGS = sys.version_info[:2] >= (3, 6)

UNICODE_CATEGORIES = set([
    'Cf', 'Cn', 'Co', 'LC', 'Ll', 'Lm', 'Lo', 'Lt', 'Lu', 'Mc', 'Me',
    'Mn', 'Nd', 'Nl', 'No', 'Pc', 'Pd', 'Pe', 'Pf', 'Pi', 'Po', 'Ps',
    'Sc', 'Sk', 'Sm', 'So', 'Zl', 'Zp', 'Zs',
])

SPACE_CHARS = set(u' \t\n\r\f\v')
UNICODE_SPACE_CHARS = SPACE_CHARS | set(u'\x1c\x1d\x1e\x1f\x85')
UNICODE_DIGIT_CATEGORIES = set(['Nd'])
UNICODE_SPACE_CATEGORIES = set(['Zs', 'Zl', 'Zp'])
UNICODE_LETTER_CATEGORIES = set(['LC', 'Ll', 'Lm', 'Lo', 'Lt', 'Lu'])
UNICODE_WORD_CATEGORIES = UNICODE_LETTER_CATEGORIES | set(['Nd', 'Nl', 'No'])

# This is verbose, but correct on all versions of Python
BYTES_ALL = set(int_to_byte(i) for i in range(256))
BYTES_DIGIT = set(b for b in BYTES_ALL if re.match(b'\\d', b))
BYTES_SPACE = set(b for b in BYTES_ALL if re.match(b'\\s', b))
BYTES_WORD = set(b for b in BYTES_ALL if re.match(b'\\w', b))
BYTES_LOOKUP = {
    sre.CATEGORY_DIGIT: BYTES_DIGIT,
    sre.CATEGORY_SPACE: BYTES_SPACE,
    sre.CATEGORY_WORD: BYTES_WORD,
    sre.CATEGORY_NOT_DIGIT: BYTES_ALL - BYTES_DIGIT,
    sre.CATEGORY_NOT_SPACE: BYTES_ALL - BYTES_SPACE,
    sre.CATEGORY_NOT_WORD: BYTES_ALL - BYTES_WORD,
}

# On Python < 3.4 (including 2.7), the following unicode chars are weird.
# They are matched by the \W, meaning 'not word', but unicodedata.category(c)
# returns one of the word categories above. There's special handling below.
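The byte-level class tables built above can be sanity-checked outside the library with a short sketch. This assumes Python 3, where the `int_to_byte(i)` helper is equivalent to `bytes([i])`; the names mirror the source but are redefined locally for a self-contained run:

```python
import re

# Rebuild the tables the way the source describes: classify every single
# byte by matching it against the corresponding bytes-mode character class.
BYTES_ALL = set(bytes([i]) for i in range(256))
BYTES_DIGIT = set(b for b in BYTES_ALL if re.match(b'\\d', b))
BYTES_SPACE = set(b for b in BYTES_ALL if re.match(b'\\s', b))
BYTES_WORD = set(b for b in BYTES_ALL if re.match(b'\\w', b))

# In bytes mode only the ten ASCII digits match \d, and \w covers the
# ASCII letters, digits and underscore (63 bytes in total).
print(len(BYTES_DIGIT))  # 10
print(b'_' in BYTES_WORD, b'-' in BYTES_WORD)  # True False
```

Enumerating all 256 bytes is brute force, but as the source comment notes it is correct on every Python version, which is why the library takes this route rather than hard-coding ranges.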
HAS_WEIRD_WORD_CHARS = sys.version_info[:2] < (3, 4) UNICODE_WEIRD_NONWORD_CHARS = set(u'\U00012432\U00012433\U00012456\U00012457') GROUP_CACHE_STRATEGY = st.shared( st.builds(dict), key='hypothesis.regex.group_cache' ) @st.composite def update_group(draw, group_name, strategy): cache = draw(GROUP_CACHE_STRATEGY) result = draw(strategy) cache[group_name] = result return result @st.composite def reuse_group(draw, group_name): cache = draw(GROUP_CACHE_STRATEGY) try: return cache[group_name] except KeyError: reject() @st.composite def group_conditional(draw, group_name, if_yes, if_no): cache = draw(GROUP_CACHE_STRATEGY) if group_name in cache: return draw(if_yes) else: return draw(if_no) @st.composite def clear_cache_after_draw(draw, base_strategy): cache = draw(GROUP_CACHE_STRATEGY) result = draw(base_strategy) cache.clear() return result class Context(object): __slots__ = ['flags'] def __init__(self, groups=None, flags=0): self.flags = flags class CharactersBuilder(object): """Helper object that allows to configure `characters` strategy with various unicode categories and characters. Also allows negation of configured set. :param negate: If True, configure :func:`hypothesis.strategies.characters` to match anything other than configured character set :param flags: Regex flags. 
They affect how and which characters are matched """ def __init__(self, negate=False, flags=0): self._categories = set() self._whitelist_chars = set() self._blacklist_chars = set() self._negate = negate self._ignorecase = flags & re.IGNORECASE self._unicode = not bool(flags & re.ASCII) \ if PY3 else bool(flags & re.UNICODE) self.code_to_char = hunichr @property def strategy(self): """Returns resulting strategy that generates configured char set.""" max_codepoint = None if self._unicode else 127 if self._negate: black_chars = self._blacklist_chars - self._whitelist_chars return st.characters( blacklist_categories=self._categories | {'Cc', 'Cs'}, blacklist_characters=self._whitelist_chars, whitelist_characters=black_chars, max_codepoint=max_codepoint, ) white_chars = self._whitelist_chars - self._blacklist_chars return st.characters( whitelist_categories=self._categories, blacklist_characters=self._blacklist_chars, whitelist_characters=white_chars, max_codepoint=max_codepoint, ) def add_category(self, category): """Update unicode state to match sre_parse object ``category``.""" if category == sre.CATEGORY_DIGIT: self._categories |= UNICODE_DIGIT_CATEGORIES elif category == sre.CATEGORY_NOT_DIGIT: self._categories |= UNICODE_CATEGORIES - UNICODE_DIGIT_CATEGORIES elif category == sre.CATEGORY_SPACE: self._categories |= UNICODE_SPACE_CATEGORIES self._whitelist_chars |= UNICODE_SPACE_CHARS \ if self._unicode else SPACE_CHARS elif category == sre.CATEGORY_NOT_SPACE: self._categories |= UNICODE_CATEGORIES - UNICODE_SPACE_CATEGORIES self._blacklist_chars |= UNICODE_SPACE_CHARS \ if self._unicode else SPACE_CHARS elif category == sre.CATEGORY_WORD: self._categories |= UNICODE_WORD_CATEGORIES self._whitelist_chars.add(u'_') if HAS_WEIRD_WORD_CHARS and self._unicode: # pragma: no cover # This code is workaround of weird behavior in # specific Python versions and run only on those versions self._blacklist_chars |= UNICODE_WEIRD_NONWORD_CHARS elif category == 
sre.CATEGORY_NOT_WORD: self._categories |= UNICODE_CATEGORIES - UNICODE_WORD_CATEGORIES self._blacklist_chars.add(u'_') if HAS_WEIRD_WORD_CHARS and self._unicode: # pragma: no cover # This code is workaround of weird behavior in # specific Python versions and run only on those versions self._whitelist_chars |= UNICODE_WEIRD_NONWORD_CHARS else: # pragma: no cover raise AssertionError('Unknown character category: %s' % category) def add_char(self, char): """Add given char to the whitelist.""" c = self.code_to_char(char) self._whitelist_chars.add(c) if self._ignorecase and \ re.match(c, c.swapcase(), re.IGNORECASE) is not None: self._whitelist_chars.add(c.swapcase()) class BytesBuilder(CharactersBuilder): def __init__(self, negate=False, flags=0): self._whitelist_chars = set() self._blacklist_chars = set() self._negate = negate self._ignorecase = flags & re.IGNORECASE self.code_to_char = int_to_byte @property def strategy(self): """Returns resulting strategy that generates configured char set.""" allowed = self._whitelist_chars if self._negate: allowed = BYTES_ALL - allowed return st.sampled_from(sorted(allowed)) def add_category(self, category): """Update characters state to match sre_parse object ``category``.""" self._whitelist_chars |= BYTES_LOOKUP[category] @st.composite def maybe_pad(draw, regex, strategy, left_pad_strategy, right_pad_strategy): """Attempt to insert padding around the result of a regex draw while preserving the match.""" result = draw(strategy) left_pad = draw(left_pad_strategy) if left_pad and regex.search(left_pad + result): result = left_pad + result right_pad = draw(right_pad_strategy) if right_pad and regex.search(result + right_pad): result += right_pad return result def base_regex_strategy(regex, parsed=None): if parsed is None: parsed = sre_parse.parse(regex.pattern, flags=regex.flags) return clear_cache_after_draw(_strategy( parsed, Context(flags=regex.flags), isinstance(regex.pattern, text_type) )) def regex_strategy(regex): if not 
hasattr(regex, 'pattern'): regex = re.compile(regex) is_unicode = isinstance(regex.pattern, text_type) parsed = sre_parse.parse(regex.pattern, flags=regex.flags) if not parsed: if is_unicode: return st.text() else: return st.binary() if is_unicode: base_padding_strategy = st.text(average_size=1) empty = st.just(u'') newline = st.just(u'\n') else: base_padding_strategy = st.binary(average_size=1) empty = st.just(b'') newline = st.just(b'\n') right_pad = base_padding_strategy left_pad = base_padding_strategy if parsed[-1][0] == sre.AT: if parsed[-1][1] == sre.AT_END_STRING: right_pad = empty elif parsed[-1][1] == sre.AT_END: if regex.flags & re.MULTILINE: right_pad = st.one_of( empty, st.builds(operator.add, newline, right_pad) ) else: right_pad = st.one_of(empty, newline) if parsed[0][0] == sre.AT: if parsed[0][1] == sre.AT_BEGINNING_STRING: left_pad = empty elif parsed[0][1] == sre.AT_BEGINNING: if regex.flags & re.MULTILINE: left_pad = st.one_of( empty, st.builds(operator.add, left_pad, newline), ) else: left_pad = empty base = base_regex_strategy(regex, parsed).filter(regex.search) return maybe_pad(regex, base, left_pad, right_pad) def _strategy(codes, context, is_unicode): """Convert SRE regex parse tree to strategy that generates strings matching that regex represented by that parse tree. `codes` is either a list of SRE regex elements representations or a particular element representation. Each element is a tuple of element code (as string) and parameters. E.g. regex 'ab[0-9]+' compiles to following elements: [ (LITERAL, 97), (LITERAL, 98), (MAX_REPEAT, (1, 4294967295, [ (IN, [ (RANGE, (48, 57)) ]) ])) ] The function recursively traverses regex element tree and converts each element to strategy that generates strings that match that element. Context stores 1. List of groups (for backreferences) 2. Active regex flags (e.g. 
IGNORECASE, DOTALL, UNICODE, they affect behavior of various inner strategies) """ def recurse(codes): return _strategy(codes, context, is_unicode) if is_unicode: empty = u'' to_char = hunichr else: empty = b'' to_char = int_to_byte binary_char = st.binary(min_size=1, max_size=1) if not isinstance(codes, tuple): # List of codes strategies = [] i = 0 while i < len(codes): if codes[i][0] == sre.LITERAL and \ not context.flags & re.IGNORECASE: # Merge subsequent "literals" into one `just()` strategy # that generates corresponding text if no IGNORECASE j = i + 1 while j < len(codes) and codes[j][0] == sre.LITERAL: j += 1 if i + 1 < j: strategies.append(st.just( empty.join([to_char(charcode) for (_, charcode) in codes[i:j]]) )) i = j continue strategies.append(recurse(codes[i])) i += 1 # We handle this separately at the top level, but some regex can # contain empty lists internally, so we need to handle this here too. if not strategies: return st.just(empty) if len(strategies) == 1: return strategies[0] return st.tuples(*strategies).map(empty.join) else: # Single code code, value = codes if code == sre.LITERAL: # Regex 'a' (single char) c = to_char(value) if context.flags & re.IGNORECASE and \ re.match(c, c.swapcase(), re.IGNORECASE) is not None: # We do the explicit check for swapped-case matching because # eg 'ß'.upper() == 'SS' and ignorecase doesn't match it. 
return st.sampled_from([c, c.swapcase()]) return st.just(c) elif code == sre.NOT_LITERAL: # Regex '[^a]' (negation of a single char) c = to_char(value) blacklist = set(c) if context.flags & re.IGNORECASE and \ re.match(c, c.swapcase(), re.IGNORECASE) is not None: blacklist |= set(c.swapcase()) if is_unicode: return st.characters(blacklist_characters=blacklist) else: return binary_char.filter(lambda c: c not in blacklist) elif code == sre.IN: # Regex '[abc0-9]' (set of characters) negate = value[0][0] == sre.NEGATE if is_unicode: builder = CharactersBuilder(negate, context.flags) else: builder = BytesBuilder(negate, context.flags) for charset_code, charset_value in value: if charset_code == sre.NEGATE: # Regex '[^...]' (negation) # handled by builder = CharactersBuilder(...) above pass elif charset_code == sre.LITERAL: # Regex '[a]' (single char) builder.add_char(charset_value) elif charset_code == sre.RANGE: # Regex '[a-z]' (char range) low, high = charset_value for char_code in hrange(low, high + 1): builder.add_char(char_code) elif charset_code == sre.CATEGORY: # Regex '[\w]' (char category) builder.add_category(charset_value) else: # pragma: no cover # Currently there are no known code points other than # handled here. This code is just future proofing raise AssertionError('Unknown charset code: %s' % charset_code) return builder.strategy elif code == sre.ANY: # Regex '.' 
(any char) if is_unicode: if context.flags & re.DOTALL: return st.characters() return st.characters(blacklist_characters=u'\n') else: if context.flags & re.DOTALL: return binary_char return binary_char.filter(lambda c: c != b'\n') elif code == sre.AT: # Regexes like '^...', '...$', '\bfoo', '\Bfoo' # An empty string (or newline) will match the token itself, but # we don't and can't check the position (eg '%' at the end) return st.just(empty) elif code == sre.SUBPATTERN: # Various groups: '(...)', '(:...)' or '(?P...)' old_flags = context.flags if HAS_SUBPATTERN_FLAGS: # pragma: no cover # This feature is available only in specific Python versions context.flags = (context.flags | value[1]) & ~value[2] strat = _strategy(value[-1], context, is_unicode) context.flags = old_flags if value[0]: strat = update_group(value[0], strat) return strat elif code == sre.GROUPREF: # Regex '\\1' or '(?P=name)' (group reference) return reuse_group(value) elif code == sre.ASSERT: # Regex '(?=...)' or '(?<=...)' (positive lookahead/lookbehind) return recurse(value[1]) elif code == sre.ASSERT_NOT: # Regex '(?!...)' or '(? 50: # pragma: no cover key = frozenset(mapping.items()) assert key not in seen, (key, name) seen.add(key) to_update = needs_update needs_update = set() for strat in to_update: def recur(other): try: return forced_value(other) except AttributeError: pass listeners[other].add(strat) try: return mapping[other] except KeyError: needs_update.add(other) mapping[other] = default return default new_value = getattr(strat, calculation)(recur) if new_value != mapping[strat]: needs_update.update(listeners[strat]) mapping[strat] = new_value # We now have a complete and accurate calculation of the # property values for everything we have seen in the course of # running this calculation. We simultaneously update all of # them (not just the strategy we started out with). 
for k, v in mapping.items(): setattr(k, cache_key, v) return getattr(self, cache_key) accept.__name__ = name return property(accept) # Returns True if this strategy can never draw a value and will always # result in the data being marked invalid. # The fact that this returns False does not guarantee that a valid value # can be drawn - this is not intended to be perfect, and is primarily # intended to be an optimisation for some cases. is_empty = recursive_property('is_empty', True) # Returns True if values from this strategy can safely be reused without # this causing unexpected behaviour. has_reusable_values = recursive_property('has_reusable_values', True) # Whether this strategy is suitable for holding onto in a cache. is_cacheable = recursive_property('is_cacheable', True) def calc_is_cacheable(self, recur): return True def calc_is_empty(self, recur): # Note: It is correct and significant that the default return value # from calc_is_empty is False despite the default value for is_empty # being true. The reason for this is that strategies should be treated # as empty absent evidence to the contrary, but most basic strategies # are trivially non-empty and it would be annoying to have to override # this method to show that. return False def calc_has_reusable_values(self, recur): return False def example(self, random=None): """Provide an example of the sort of value that this strategy generates. This is biased to be slightly simpler than is typical for values from this strategy, for clarity purposes. This method shouldn't be taken too seriously. It's here for interactive exploration of the API, not for any sort of real testing. This method is part of the public API. """ context = _current_build_context.value if context is not None: if context.data is not None and context.data.depth > 0: note_deprecation( 'Using example() inside a strategy definition is a bad ' 'idea. 
It will become an error in a future version of ' "Hypothesis, but it's unlikely that it's doing what you " 'intend even now. Instead consider using ' 'hypothesis.strategies.builds() or ' '@hypothesis.strategies.composite to define your strategy.' ' See ' 'https://hypothesis.readthedocs.io/en/latest/data.html' '#hypothesis.strategies.builds or ' 'https://hypothesis.readthedocs.io/en/latest/data.html' '#composite-strategies for more details.' ) else: note_deprecation( 'Using example() inside a test function is a bad ' 'idea. It will become an error in a future version of ' "Hypothesis, but it's unlikely that it's doing what you " 'intend even now. Instead consider using ' 'hypothesis.strategies.data() to draw ' 'more examples during testing. See ' 'https://hypothesis.readthedocs.io/en/latest/data.html' '#drawing-interactively-in-tests for more details.' ) from hypothesis import find, settings, Verbosity # Conjecture will always try the zero example first. This would result # in us producing the same example each time, which is boring, so we # deliberately skip the first example it feeds us. first = [] def condition(x): if first: return True else: first.append(x) return False try: return find( self, condition, random=random, settings=settings( max_shrinks=0, max_iterations=1000, database=None, verbosity=Verbosity.quiet, ) ) except (NoSuchExample, Unsatisfiable): # This can happen when a strategy has only one example. e.g. # st.just(x). In that case we wanted the first example after all. if first: return first[0] raise NoExamples( u'Could not find any valid examples in 100 tries' ) def map(self, pack): """Returns a new strategy that generates values by generating a value from this strategy and then calling pack() on the result, giving that. This method is part of the public API. 
""" return MappedSearchStrategy( pack=pack, strategy=self ) def flatmap(self, expand): """Returns a new strategy that generates values by generating a value from this strategy, say x, then generating a value from strategy(expand(x)) This method is part of the public API. """ from hypothesis.searchstrategy.flatmapped import FlatMapStrategy return FlatMapStrategy( expand=expand, strategy=self ) def filter(self, condition): """Returns a new strategy that generates values from this strategy which satisfy the provided condition. Note that if the condition is too hard to satisfy this might result in your tests failing with Unsatisfiable. This method is part of the public API. """ return FilteredStrategy( condition=condition, strategy=self, ) @property def branches(self): return [self] def __or__(self, other): """Return a strategy which produces values by randomly drawing from one of this strategy or the other strategy. This method is part of the public API. """ if not isinstance(other, SearchStrategy): raise ValueError('Cannot | a SearchStrategy with %r' % (other,)) return one_of_strategies((self, other)) def validate(self): """Throw an exception if the strategy is not valid. This can happen due to lazy construction """ if self.validate_called: return try: self.validate_called = True self.do_validate() self.is_empty self.has_reusable_values except Exception: self.validate_called = False raise def do_validate(self): pass def do_draw(self, data): raise NotImplementedError('%s.do_draw' % (type(self).__name__,)) def __init__(self): pass class OneOfStrategy(SearchStrategy): """Implements a union of strategies. Given a number of strategies this generates values which could have come from any of them. The conditional distribution draws uniformly at random from some non-empty subset of these strategies and then draws from the conditional distribution of that strategy. 
""" def __init__(self, strategies, bias=None): SearchStrategy.__init__(self) strategies = tuple(strategies) self.original_strategies = list(strategies) self.__element_strategies = None self.bias = bias self.__in_branches = False if bias is not None: assert 0 < bias < 1 self.sampler = cu.Sampler( [bias ** i for i in range(len(strategies))]) else: self.sampler = None def calc_is_empty(self, recur): return all(recur(e) for e in self.original_strategies) def calc_has_reusable_values(self, recur): return all(recur(e) for e in self.original_strategies) def calc_is_cacheable(self, recur): return all(recur(e) for e in self.original_strategies) @property def element_strategies(self): from hypothesis.strategies import check_strategy if self.__element_strategies is None: strategies = [] for arg in self.original_strategies: check_strategy(arg) if not arg.is_empty: strategies.extend( [s for s in arg.branches if not s.is_empty]) pruned = [] seen = set() for s in strategies: if s is self: continue if s in seen: continue seen.add(s) pruned.append(s) self.__element_strategies = pruned return self.__element_strategies def do_draw(self, data): n = len(self.element_strategies) assert n > 0 if n == 1: return data.draw(self.element_strategies[0]) elif self.sampler is None: i = cu.integer_range(data, 0, n - 1) else: i = self.sampler.sample(data) return data.draw(self.element_strategies[i]) def __repr__(self): return ' | '.join(map(repr, self.original_strategies)) def do_validate(self): for e in self.element_strategies: e.validate() @property def branches(self): if self.bias is None and not self.__in_branches: try: self.__in_branches = True return self.element_strategies finally: self.__in_branches = False else: return [self] class MappedSearchStrategy(SearchStrategy): """A strategy which is defined purely by conversion to and from another strategy. Its parameter and distribution come from that other strategy. 
""" def __init__(self, strategy, pack=None): SearchStrategy.__init__(self) self.mapped_strategy = strategy if pack is not None: self.pack = pack def calc_is_empty(self, recur): return recur(self.mapped_strategy) def calc_is_cacheable(self, recur): return recur(self.mapped_strategy) def __repr__(self): if not hasattr(self, '_cached_repr'): self._cached_repr = '%r.map(%s)' % ( self.mapped_strategy, get_pretty_function_description( self.pack) ) return self._cached_repr def do_validate(self): self.mapped_strategy.validate() def pack(self, x): """Take a value produced by the underlying mapped_strategy and turn it into a value suitable for outputting from this strategy.""" raise NotImplementedError( '%s.pack()' % (self.__class__.__name__)) def do_draw(self, data): for _ in range(3): i = data.index try: return self.pack(data.draw(self.mapped_strategy)) except UnsatisfiedAssumption: if data.index == i: raise reject() @property def branches(self): return [ MappedSearchStrategy(pack=self.pack, strategy=strategy) for strategy in self.mapped_strategy.branches ] class FilteredStrategy(SearchStrategy): def __init__(self, strategy, condition): super(FilteredStrategy, self).__init__() self.condition = condition self.filtered_strategy = strategy def calc_is_empty(self, recur): return recur(self.filtered_strategy) def calc_is_cacheable(self, recur): return recur(self.filtered_strategy) def __repr__(self): if not hasattr(self, '_cached_repr'): self._cached_repr = '%r.filter(%s)' % ( self.filtered_strategy, get_pretty_function_description( self.condition) ) return self._cached_repr def do_validate(self): self.filtered_strategy.validate() def do_draw(self, data): for i in hrange(3): start_index = data.index value = data.draw(self.filtered_strategy) if self.condition(value): return value else: if i == 0: data.note_event(lazyformat( 'Retried draw from %r to satisfy filter', self,)) # This is to guard against the case where we consume no data. 
# As long as we consume data, we'll eventually pass or raise. # But if we don't this could be an infinite loop. assume(data.index > start_index) data.note_event('Aborted test because unable to satisfy %r' % ( self, )) data.mark_invalid() @property def branches(self): branches = [ FilteredStrategy(strategy=strategy, condition=self.condition) for strategy in self.filtered_strategy.branches ] return branches hypothesis-python-3.44.1/src/hypothesis/searchstrategy/streams.py000066400000000000000000000024341321557765100254060ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis.types import Stream
from hypothesis.searchstrategy.strategies import SearchStrategy


class StreamStrategy(SearchStrategy):
    supports_find = False

    def __init__(self, source_strategy):
        super(StreamStrategy, self).__init__()
        self.source_strategy = source_strategy

    def __repr__(self):
        return u'StreamStrategy(%r)' % (self.source_strategy,)

    def do_draw(self, data):
        data.can_reproduce_example_from_repr = False

        def gen():
            while True:
                yield data.draw(self.source_strategy)

        return Stream(gen())

hypothesis-python-3.44.1/src/hypothesis/searchstrategy/strings.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.errors import InvalidArgument from hypothesis.internal import charmap from hypothesis.internal.compat import hunichr, text_type, binary_type from hypothesis.internal.intervalsets import IntervalSet from hypothesis.internal.conjecture.utils import integer_range from hypothesis.searchstrategy.strategies import SearchStrategy, \ MappedSearchStrategy class OneCharStringStrategy(SearchStrategy): """A strategy which generates single character strings of text type.""" specifier = text_type zero_point = ord('0') def __init__(self, whitelist_categories=None, blacklist_categories=None, blacklist_characters=None, min_codepoint=None, max_codepoint=None, whitelist_characters=None): intervals = charmap.query( include_categories=whitelist_categories, exclude_categories=blacklist_categories, min_codepoint=min_codepoint, max_codepoint=max_codepoint, include_characters=whitelist_characters, exclude_characters=blacklist_characters, ) if not intervals: raise InvalidArgument( 'No valid characters in set' ) self.intervals = IntervalSet(intervals) if whitelist_characters: self.whitelist_characters = set(whitelist_characters) else: self.whitelist_characters = set() self.zero_point = self.intervals.index_above(ord('0')) def do_draw(self, data): i = integer_range( data, 0, len(self.intervals) - 1, center=self.zero_point, ) return hunichr(self.intervals[i]) class StringStrategy(MappedSearchStrategy): """A strategy for text strings, defined in terms of a strategy for lists of single character text strings.""" def __init__(self, list_of_one_char_strings_strategy): super(StringStrategy, self).__init__( strategy=list_of_one_char_strings_strategy ) def __repr__(self): return 'StringStrategy()' def pack(self, ls): return u''.join(ls) class BinaryStringStrategy(MappedSearchStrategy): """A strategy for strings of bytes, defined in terms of a strategy for lists of bytes.""" def __repr__(self): return 
'BinaryStringStrategy()' def pack(self, x): assert isinstance(x, list), repr(x) ba = bytearray(x) return binary_type(ba) class FixedSizeBytes(SearchStrategy): def __init__(self, size): self.size = size def do_draw(self, data): return binary_type(data.draw_bytes(self.size)) hypothesis-python-3.44.1/src/hypothesis/searchstrategy/types.py000066400000000000000000000227371321557765100251040ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import io import uuid import decimal import datetime import fractions import functools import collections import hypothesis.strategies as st from hypothesis.errors import ResolutionFailed from hypothesis.internal.compat import text_type, integer_types def type_sorting_key(t): """Minimise to None, then non-container types, then container types.""" if t is None or t is type(None): # noqa: E721 return -1 return issubclass(t, collections.abc.Container) def try_issubclass(thing, maybe_superclass): try: return issubclass(thing, maybe_superclass) except (AttributeError, TypeError): # pragma: no cover # Some types can't be the subject or object of an instance or # subclass check under Python 3.5 return False def from_typing_type(thing): # We start with special-case support for Union and Tuple - the latter # isn't actually a generic type. 
Support for Callable may be added to # this section later. # We then explicitly error on non-Generic types, which don't carry enough # information to sensibly resolve to strategies at runtime. # Finally, we run a variation of the subclass lookup in st.from_type # among generic types in the lookup. import typing # Under 3.6 Union is handled directly in st.from_type, as the argument is # not an instance of `type`. However, under Python 3.5 Union *is* a type # and we have to handle it here, including failing if it has no parameters. if hasattr(thing, '__union_params__'): # pragma: no cover args = sorted(thing.__union_params__ or (), key=type_sorting_key) if not args: raise ResolutionFailed('Cannot resolve Union of no types.') return st.one_of([st.from_type(t) for t in args]) if isinstance(thing, typing.TupleMeta): elem_types = getattr(thing, '__tuple_params__', None) or () elem_types += getattr(thing, '__args__', None) or () if getattr(thing, '__tuple_use_ellipsis__', False) or \ len(elem_types) == 2 and elem_types[-1] is Ellipsis: return st.lists(st.from_type(elem_types[0])).map(tuple) return st.tuples(*map(st.from_type, elem_types)) # Now, confirm that we're dealing with a generic type as we expected if not isinstance(thing, typing.GenericMeta): # pragma: no cover raise ResolutionFailed('Cannot resolve %s to a strategy' % (thing,)) # Parametrised generic types have their __origin__ attribute set to the # un-parametrised version, which we need to use in the subclass checks. 
# e.g.: typing.List[int].__origin__ == typing.List mapping = {k: v for k, v in _global_type_lookup.items() if isinstance(k, typing.GenericMeta) and try_issubclass(k, getattr(thing, '__origin__', None) or thing)} if typing.Dict in mapping: # The subtype relationships between generic and concrete View types # are sometimes inconsistent under Python 3.5, so we pop them out to # preserve our invariant that all examples of from_type(T) are # instances of type T - and simplify the strategy for abstract types # such as Container for t in (typing.KeysView, typing.ValuesView, typing.ItemsView): mapping.pop(t, None) strategies = [v if isinstance(v, st.SearchStrategy) else v(thing) for k, v in mapping.items() if sum(try_issubclass(k, T) for T in mapping) == 1] empty = ', '.join(repr(s) for s in strategies if s.is_empty) if empty or not strategies: # pragma: no cover raise ResolutionFailed( 'Could not resolve %s to a strategy; consider using ' 'register_type_strategy' % (empty or thing,)) return st.one_of(strategies) _global_type_lookup = { # Types with core Hypothesis strategies type(None): st.none(), bool: st.booleans(), float: st.floats(), complex: st.complex_numbers(), fractions.Fraction: st.fractions(), decimal.Decimal: st.decimals(), text_type: st.text(), bytes: st.binary(), datetime.datetime: st.datetimes(), datetime.date: st.dates(), datetime.time: st.times(), datetime.timedelta: st.timedeltas(), uuid.UUID: st.uuids(), tuple: st.builds(tuple), list: st.builds(list), set: st.builds(set), frozenset: st.builds(frozenset), dict: st.builds(dict), # Built-in types type: st.sampled_from([type(None), bool, int, str, list, set, dict]), type(Ellipsis): st.just(Ellipsis), type(NotImplemented): st.just(NotImplemented), bytearray: st.binary().map(bytearray), memoryview: st.binary().map(memoryview), # Pull requests with more types welcome! 
} for t in integer_types: _global_type_lookup[t] = st.integers() try: from hypothesis.extra.pytz import timezones _global_type_lookup[datetime.tzinfo] = timezones() except ImportError: # pragma: no cover pass try: # pragma: no cover import numpy as np from hypothesis.extra.numpy import \ arrays, array_shapes, scalar_dtypes, nested_dtypes _global_type_lookup.update({ np.dtype: nested_dtypes(), np.ndarray: arrays(scalar_dtypes(), array_shapes(max_dims=2)), }) except ImportError: # pragma: no cover pass try: import typing except ImportError: # pragma: no cover pass else: _global_type_lookup.update({ typing.ByteString: st.binary(), typing.io.BinaryIO: st.builds(io.BytesIO, st.binary()), typing.io.TextIO: st.builds(io.StringIO, st.text()), typing.Reversible: st.lists(st.integers()), typing.SupportsAbs: st.complex_numbers(), typing.SupportsComplex: st.complex_numbers(), typing.SupportsFloat: st.complex_numbers(), typing.SupportsInt: st.complex_numbers(), }) try: # These aren't present in the typing module backport. 
_global_type_lookup[typing.SupportsBytes] = st.binary() _global_type_lookup[typing.SupportsRound] = st.complex_numbers() except AttributeError: # pragma: no cover pass def register(type_, fallback=None): if isinstance(type_, str): # Use the name of generic types which are not available on all # versions, and the function just won't be added to the registry type_ = getattr(typing, type_, None) if type_ is None: # pragma: no cover return lambda f: f def inner(func): if fallback is None: _global_type_lookup[type_] = func return func @functools.wraps(func) def really_inner(thing): if getattr(thing, '__args__', None) is None: return fallback return func(thing) _global_type_lookup[type_] = really_inner return really_inner return inner @register('Type') def resolve_Type(thing): if thing.__args__ is None: return st.just(type) inner = thing.__args__[0] if getattr(inner, '__origin__', None) is typing.Union: return st.sampled_from(inner.__args__) elif hasattr(inner, '__union_params__'): # pragma: no cover return st.sampled_from(inner.__union_params__) return st.just(inner) @register(typing.List, st.builds(list)) def resolve_List(thing): return st.lists(st.from_type(thing.__args__[0])) @register(typing.Set, st.builds(set)) def resolve_Set(thing): return st.sets(st.from_type(thing.__args__[0])) @register(typing.FrozenSet, st.builds(frozenset)) def resolve_FrozenSet(thing): return st.frozensets(st.from_type(thing.__args__[0])) @register(typing.Dict, st.builds(dict)) def resolve_Dict(thing): # If thing is a Collection instance, we need to fill in the values keys_vals = [st.from_type(t) for t in thing.__args__] * 2 return st.dictionaries(keys_vals[0], keys_vals[1]) @register('DefaultDict', st.builds(collections.defaultdict)) def resolve_DefaultDict(thing): return resolve_Dict(thing).map( lambda d: collections.defaultdict(None, d)) @register(typing.ItemsView, st.builds(dict).map(dict.items)) def resolve_ItemsView(thing): return resolve_Dict(thing).map(dict.items) 
@register(typing.KeysView, st.builds(dict).map(dict.keys))
def resolve_KeysView(thing):
    return st.dictionaries(st.from_type(thing.__args__[0]), st.none()
                           ).map(dict.keys)


@register(typing.ValuesView, st.builds(dict).map(dict.values))
def resolve_ValuesView(thing):
    return st.dictionaries(st.integers(), st.from_type(thing.__args__[0])
                           ).map(dict.values)


@register(typing.Iterator, st.iterables(st.nothing()))
def resolve_Iterator(thing):
    return st.iterables(st.from_type(thing.__args__[0]))
hypothesis-python-3.44.1/src/hypothesis/stateful.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

"""This module provides support for a stateful style of testing, where tests
attempt to find a sequence of operations that cause a breakage rather than
just a single value.

Notably, the set of steps available at any point may depend on the
execution to date.
"""

from __future__ import division, print_function, absolute_import

import inspect
import traceback
from unittest import TestCase

import attr

import hypothesis.internal.conjecture.utils as cu
from hypothesis.core import find
from hypothesis.errors import Flaky, NoSuchExample, InvalidDefinition, \
    HypothesisException
from hypothesis.control import BuildContext
from hypothesis._settings import settings as Settings
from hypothesis._settings import Verbosity
from hypothesis.reporting import report, verbose_report, current_verbosity
from hypothesis.strategies import just, lists, builds, one_of, runner, \
    integers
from hypothesis.vendor.pretty import CUnicodeIO, RepresentationPrinter
from hypothesis.internal.reflection import proxies, nicerepr
from hypothesis.internal.conjecture.data import StopTest
from hypothesis.internal.conjecture.utils import integer_range
from hypothesis.searchstrategy.strategies import SearchStrategy
from hypothesis.searchstrategy.collections import TupleStrategy, \
    FixedKeysDictStrategy


class TestCaseProperty(object):  # pragma: no cover

    def __get__(self, obj, typ=None):
        if obj is not None:
            typ = type(obj)
        return typ._to_test_case()

    def __set__(self, obj, value):
        raise AttributeError(u'Cannot set TestCase')

    def __delete__(self, obj):
        raise AttributeError(u'Cannot delete TestCase')


def find_breaking_runner(state_machine_factory, settings=None):
    def is_breaking_run(runner):
        try:
            runner.run(state_machine_factory())
            return False
        except HypothesisException:
            raise
        except Exception:
            verbose_report(traceback.format_exc)
            return True
    if settings is None:
        try:
            settings = state_machine_factory.TestCase.settings
        except AttributeError:
            settings = Settings.default
    search_strategy = StateMachineSearchStrategy(settings)

    return find(
        search_strategy,
        is_breaking_run,
        settings=settings,
        database_key=state_machine_factory.__name__.encode('utf-8')
    )


def run_state_machine_as_test(state_machine_factory, settings=None):
    """Run a state machine definition as a test,
    either silently doing nothing
    or printing a minimal breaking program and raising an exception.

    state_machine_factory is anything which returns an instance of
    GenericStateMachine when called with no arguments - it can be a class or a
    function. settings will be used to control the execution of the test.
    """
    try:
        breaker = find_breaking_runner(state_machine_factory, settings)
    except NoSuchExample:
        return

    try:
        with BuildContext(None, is_final=True):
            breaker.run(state_machine_factory(), print_steps=True)
    except StopTest:
        pass

    raise Flaky(
        u'Run failed initially but succeeded on a second try'
    )


class GenericStateMachine(object):
    """A GenericStateMachine is the basic entry point into Hypothesis's
    approach to stateful testing.

    The intent is for it to be subclassed to provide state machine
    descriptions.

    The way this is used is that Hypothesis will repeatedly execute something
    that looks something like::

        x = MyStatemachineSubclass()
        x.check_invariants()
        try:
            for _ in range(n_steps):
                x.execute_step(x.steps().example())
                x.check_invariants()
        finally:
            x.teardown()

    And if this ever produces an error it will shrink it down to a small
    sequence of example choices demonstrating that.
    """

    def steps(self):
        """Return a SearchStrategy instance that defines the available next
        steps."""
        raise NotImplementedError(u'%r.steps()' % (self,))

    def execute_step(self, step):
        """Execute a step that has been previously drawn from self.steps()"""
        raise NotImplementedError(u'%r.execute_step()' % (self,))

    def print_step(self, step):
        """Print a step to the current reporter.

        This is called right before a step is executed.
        """
        self.step_count = getattr(self, u'step_count', 0) + 1
        report(u'Step #%d: %s' % (self.step_count, nicerepr(step)))

    def teardown(self):
        """Called after a run has finished executing to clean up any necessary
        state.
Does nothing by default """ pass def check_invariants(self): """Called after initializing and after executing each step.""" pass _test_case_cache = {} TestCase = TestCaseProperty() @classmethod def _to_test_case(state_machine_class): try: return state_machine_class._test_case_cache[state_machine_class] except KeyError: pass class StateMachineTestCase(TestCase): settings = Settings( min_satisfying_examples=1 ) # We define this outside of the class and assign it because you can't # assign attributes to instance method values in Python 2 def runTest(self): run_state_machine_as_test(state_machine_class) runTest.is_hypothesis_test = True StateMachineTestCase.runTest = runTest base_name = state_machine_class.__name__ StateMachineTestCase.__name__ = str( base_name + u'.TestCase' ) StateMachineTestCase.__qualname__ = str( getattr(state_machine_class, u'__qualname__', base_name) + u'.TestCase' ) state_machine_class._test_case_cache[state_machine_class] = ( StateMachineTestCase ) return StateMachineTestCase GenericStateMachine.find_breaking_runner = classmethod(find_breaking_runner) class StateMachineRunner(object): """A StateMachineRunner is a description of how to run a state machine. It contains values that it will use to shape the examples. 
""" def __init__(self, data, n_steps): self.data = data self.data.is_find = False self.n_steps = n_steps def run(self, state_machine, print_steps=None): if print_steps is None: print_steps = current_verbosity() >= Verbosity.debug self.data.hypothesis_runner = state_machine stopping_value = 1 - 1.0 / (1 + self.n_steps * 0.5) try: state_machine.check_invariants() steps = 0 while True: if steps >= self.n_steps: stopping_value = 0 self.data.start_example() if not cu.biased_coin(self.data, stopping_value): self.data.stop_example() break assert steps < self.n_steps value = self.data.draw(state_machine.steps()) steps += 1 if print_steps: state_machine.print_step(value) state_machine.execute_step(value) self.data.stop_example() state_machine.check_invariants() finally: state_machine.teardown() class StateMachineSearchStrategy(SearchStrategy): def __init__(self, settings=None): self.program_size = (settings or Settings.default).stateful_step_count def do_draw(self, data): return StateMachineRunner(data, self.program_size) @attr.s() class Rule(object): targets = attr.ib() function = attr.ib() arguments = attr.ib() precondition = attr.ib() self_strategy = runner() class Bundle(SearchStrategy): def __init__(self, name): self.name = name def do_draw(self, data): machine = data.draw(self_strategy) bundle = machine.bundle(self.name) if not bundle: data.mark_invalid() reference = bundle.pop() bundle.insert(integer_range(data, 0, len(bundle)), reference) return machine.names_to_values[reference.name] RULE_MARKER = u'hypothesis_stateful_rule' PRECONDITION_MARKER = u'hypothesis_stateful_precondition' INVARIANT_MARKER = u'hypothesis_stateful_invariant' def rule(targets=(), target=None, **kwargs): """Decorator for RuleBasedStateMachine. Any name present in target or targets will define where the end result of this function should go. If both are empty then the end result will be discarded. targets may either be a Bundle or the name of a Bundle. 
kwargs then define the arguments that will be passed to the function invocation. If their value is a Bundle then values that have previously been produced for that bundle will be provided, if they are anything else it will be turned into a strategy and values from that will be provided. """ if target is not None: targets += (target,) converted_targets = [] for t in targets: while isinstance(t, Bundle): t = t.name converted_targets.append(t) def accept(f): existing_rule = getattr(f, RULE_MARKER, None) if existing_rule is not None: raise InvalidDefinition( 'A function cannot be used for two distinct rules. ', Settings.default, ) precondition = getattr(f, PRECONDITION_MARKER, None) rule = Rule(targets=tuple(converted_targets), arguments=kwargs, function=f, precondition=precondition) @proxies(f) def rule_wrapper(*args, **kwargs): return f(*args, **kwargs) setattr(rule_wrapper, RULE_MARKER, rule) return rule_wrapper return accept @attr.s() class VarReference(object): name = attr.ib() def precondition(precond): """Decorator to apply a precondition for rules in a RuleBasedStateMachine. Specifies a precondition for a rule to be considered as a valid step in the state machine. The given function will be called with the instance of RuleBasedStateMachine and should return True or False. Usually it will need to look at attributes on that instance. For example:: class MyTestMachine(RuleBasedStateMachine): state = 1 @precondition(lambda self: self.state != 0) @rule(numerator=integers()) def divide_with(self, numerator): self.state = numerator / self.state This is better than using assume in your rule since more valid rules should be able to be run. 
""" def decorator(f): @proxies(f) def precondition_wrapper(*args, **kwargs): return f(*args, **kwargs) rule = getattr(f, RULE_MARKER, None) if rule is None: setattr(precondition_wrapper, PRECONDITION_MARKER, precond) else: new_rule = Rule(targets=rule.targets, arguments=rule.arguments, function=rule.function, precondition=precond) setattr(precondition_wrapper, RULE_MARKER, new_rule) invariant = getattr(f, INVARIANT_MARKER, None) if invariant is not None: new_invariant = Invariant(function=invariant.function, precondition=precond) setattr(precondition_wrapper, INVARIANT_MARKER, new_invariant) return precondition_wrapper return decorator @attr.s() class Invariant(object): function = attr.ib() precondition = attr.ib() def invariant(): """Decorator to apply an invariant for rules in a RuleBasedStateMachine. The decorated function will be run after every rule and can raise an exception to indicate failed invariants. For example:: class MyTestMachine(RuleBasedStateMachine): state = 1 @invariant() def is_nonzero(self): assert self.state != 0 """ def accept(f): existing_invariant = getattr(f, INVARIANT_MARKER, None) if existing_invariant is not None: raise InvalidDefinition( 'A function cannot be used for two distinct invariants.', Settings.default, ) precondition = getattr(f, PRECONDITION_MARKER, None) rule = Invariant(function=f, precondition=precondition) @proxies(f) def invariant_wrapper(*args, **kwargs): return f(*args, **kwargs) setattr(invariant_wrapper, INVARIANT_MARKER, rule) return invariant_wrapper return accept @attr.s() class ShuffleBundle(object): bundle = attr.ib() swaps = attr.ib() class RuleBasedStateMachine(GenericStateMachine): """A RuleBasedStateMachine gives you a more structured way to define state machines. The idea is that a state machine carries a bunch of types of data divided into Bundles, and has a set of rules which may read data from bundles (or just from normal strategies) and push data onto bundles. 
At any given point a random applicable rule will be executed. """ _rules_per_class = {} _invariants_per_class = {} _base_rules_per_class = {} def __init__(self): if not self.rules(): raise InvalidDefinition(u'Type %s defines no rules' % ( type(self).__name__, )) self.bundles = {} self.name_counter = 1 self.names_to_values = {} self.__stream = CUnicodeIO() self.__printer = RepresentationPrinter(self.__stream) def __pretty(self, value): self.__stream.seek(0) self.__stream.truncate(0) self.__printer.output_width = 0 self.__printer.buffer_width = 0 self.__printer.buffer.clear() self.__printer.pretty(value) self.__printer.flush() return self.__stream.getvalue() def __repr__(self): return u'%s(%s)' % ( type(self).__name__, nicerepr(self.bundles), ) def upcoming_name(self): return u'v%d' % (self.name_counter,) def new_name(self): result = self.upcoming_name() self.name_counter += 1 return result def bundle(self, name): return self.bundles.setdefault(name, []) @classmethod def rules(cls): try: return cls._rules_per_class[cls] except KeyError: pass for k, v in inspect.getmembers(cls): r = getattr(v, RULE_MARKER, None) if r is not None: cls.define_rule( r.targets, r.function, r.arguments, r.precondition, ) cls._rules_per_class[cls] = cls._base_rules_per_class.pop(cls, []) return cls._rules_per_class[cls] @classmethod def invariants(cls): try: return cls._invariants_per_class[cls] except KeyError: pass target = [] for k, v in inspect.getmembers(cls): i = getattr(v, INVARIANT_MARKER, None) if i is not None: target.append(i) cls._invariants_per_class[cls] = target return cls._invariants_per_class[cls] @classmethod def define_rule(cls, targets, function, arguments, precondition=None): converted_arguments = {} for k, v in arguments.items(): converted_arguments[k] = v if cls in cls._rules_per_class: target = cls._rules_per_class[cls] else: target = cls._base_rules_per_class.setdefault(cls, []) return target.append( Rule( targets, function, converted_arguments, precondition, ) ) 
    def steps(self):
        strategies = []
        for rule in self.rules():
            converted_arguments = {}
            valid = True
            if rule.precondition and not rule.precondition(self):
                continue
            for k, v in sorted(rule.arguments.items()):
                if isinstance(v, Bundle):
                    bundle = self.bundle(v.name)
                    if not bundle:
                        valid = False
                        break
                converted_arguments[k] = v
            if valid:
                strategies.append(TupleStrategy((
                    just(rule),
                    FixedKeysDictStrategy(converted_arguments)
                ), tuple))
        if not strategies:
            raise InvalidDefinition(
                u'No progress can be made from state %r' % (self,)
            )

        for name, bundle in self.bundles.items():
            if len(bundle) > 1:
                strategies.append(
                    builds(
                        ShuffleBundle, just(name),
                        lists(integers(0, len(bundle) - 1))))

        return one_of(strategies)

    def print_step(self, step):
        if isinstance(step, ShuffleBundle):
            return
        rule, data = step
        data_repr = {}
        for k, v in data.items():
            data_repr[k] = self.__pretty(v)
        self.step_count = getattr(self, u'step_count', 0) + 1
        report(u'Step #%d: %s%s(%s)' % (
            self.step_count,
            u'%s = ' % (self.upcoming_name(),) if rule.targets else u'',
            rule.function.__name__,
            u', '.join(u'%s=%s' % kv for kv in data_repr.items())
        ))

    def execute_step(self, step):
        if isinstance(step, ShuffleBundle):
            bundle = self.bundle(step.bundle)
            for i in step.swaps:
                bundle.insert(i, bundle.pop())
            return
        rule, data = step
        data = dict(data)
        result = rule.function(self, **data)
        if rule.targets:
            name = self.new_name()
            self.names_to_values[name] = result
            self.__printer.singleton_pprinters.setdefault(
                id(result), lambda obj, p, cycle: p.text(name),
            )
            for target in rule.targets:
                self.bundle(target).append(VarReference(name))

    def check_invariants(self):
        for invar in self.invariants():
            if invar.precondition and not invar.precondition(self):
                continue
            invar.function(self)
hypothesis-python-3.44.1/src/hypothesis/statistics.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import math

from hypothesis.utils.dynamicvariables import DynamicVariable
from hypothesis.internal.conjecture.data import Status
from hypothesis.internal.conjecture.engine import ExitReason

collector = DynamicVariable(None)


class Statistics(object):

    def __init__(self, engine):
        self.passing_examples = len(
            engine.status_runtimes.get(Status.VALID, ()))
        self.invalid_examples = len(
            engine.status_runtimes.get(Status.INVALID, []) +
            engine.status_runtimes.get(Status.OVERRUN, [])
        )
        self.failing_examples = len(engine.status_runtimes.get(
            Status.INTERESTING, ()))

        runtimes = sorted(
            engine.status_runtimes.get(Status.VALID, []) +
            engine.status_runtimes.get(Status.INVALID, []) +
            engine.status_runtimes.get(Status.INTERESTING, [])
        )

        self.has_runs = bool(runtimes)
        if not self.has_runs:
            return

        n = max(0, len(runtimes) - 1)
        lower = int(runtimes[int(math.floor(n * 0.05))] * 1000)
        upper = int(runtimes[int(math.ceil(n * 0.95))] * 1000)
        if upper == 0:
            self.runtimes = '< 1ms'
        elif lower == upper:
            self.runtimes = '~ %dms' % (lower,)
        else:
            self.runtimes = '%d-%d ms' % (lower, upper)

        if engine.exit_reason == ExitReason.finished:
            self.exit_reason = 'nothing left to do'
        elif engine.exit_reason == ExitReason.flaky:
            self.exit_reason = 'test was flaky'
        else:
            self.exit_reason = (
                'settings.%s=%r' % (
                    engine.exit_reason.name,
                    getattr(engine.settings, engine.exit_reason.name)
                )
            )

        self.events = [
            '%.2f%%, %s' % (
                c / engine.call_count * 100, e
            ) for e, c in sorted(
                engine.event_call_counts.items(), key=lambda x: -x[1])
        ]

        total_runtime = math.fsum(engine.all_runtimes)
        total_drawtime = math.fsum(engine.all_drawtimes)

        if total_drawtime == 0.0:
            self.draw_time_percentage = '~ 0%'
        else:
            draw_time_percentage = 100.0 * min(
                1, total_drawtime / total_runtime)

            self.draw_time_percentage = '~ %d%%' % (
                round(draw_time_percentage),)


def note_engine_for_statistics(engine):
    callback = collector.value
    if callback is not None:
        callback(Statistics(engine))
hypothesis-python-3.44.1/src/hypothesis/strategies.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import enum
import math
import datetime as dt
import operator
from decimal import Context, Decimal
from inspect import isclass, isfunction
from fractions import Fraction
from functools import reduce

from hypothesis.errors import InvalidArgument, ResolutionFailed
from hypothesis.control import assume
from hypothesis._settings import note_deprecation
from hypothesis.internal.cache import LRUReusedCache
from hypothesis.searchstrategy import SearchStrategy
from hypothesis.internal.compat import gcd, ceil, floor, hrange, \
    text_type, get_type_hints, getfullargspec, implements_iterator
from hypothesis.internal.floats import is_negative, float_to_int, \
    int_to_float, count_between_floats
from hypothesis.internal.renaming import renamed_arguments
from hypothesis.utils.conventions import infer, not_set
from hypothesis.internal.reflection import proxies, required_args
from hypothesis.internal.validation import check_type, try_convert, \
    check_strategy, check_valid_bound, check_valid_sizes, \
    check_valid_integer, check_valid_interval

__all__ = [
    'nothing',
    'just', 'one_of',
    'none',
    'choices', 'streaming',
    'booleans', 'integers', 'floats', 'complex_numbers', 'fractions',
    'decimals',
    'characters', 'text', 'from_regex', 'binary', 'uuids',
    'tuples', 'lists', 'sets', 'frozensets', 'iterables',
    'dictionaries', 'fixed_dictionaries',
    'sampled_from', 'permutations',
    'datetimes', 'dates', 'times', 'timedeltas',
    'builds',
    'randoms', 'random_module',
    'recursive', 'composite',
    'shared', 'runner', 'data',
    'deferred',
    'from_type', 'register_type_strategy',
]

_strategies = set()


class FloatKey(object):

    def __init__(self, f):
        self.value = float_to_int(f)

    def __eq__(self, other):
        return isinstance(other, FloatKey) and (
            other.value == self.value
        )

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.value)


def convert_value(v):
    if isinstance(v, float):
        return FloatKey(v)
return (type(v), v) STRATEGY_CACHE = LRUReusedCache(1024) def cacheable(fn): @proxies(fn) def cached_strategy(*args, **kwargs): kwargs_cache_key = set() try: for k, v in kwargs.items(): kwargs_cache_key.add((k, convert_value(v))) except TypeError: return fn(*args, **kwargs) cache_key = ( fn, tuple(map(convert_value, args)), frozenset(kwargs_cache_key)) try: return STRATEGY_CACHE[cache_key] except TypeError: return fn(*args, **kwargs) except KeyError: result = fn(*args, **kwargs) if not isinstance(result, SearchStrategy) or result.is_cacheable: STRATEGY_CACHE[cache_key] = result return result cached_strategy.__clear_cache = STRATEGY_CACHE.clear return cached_strategy def base_defines_strategy(force_reusable): def decorator(strategy_definition): from hypothesis.searchstrategy.lazy import LazyStrategy _strategies.add(strategy_definition.__name__) @proxies(strategy_definition) def accept(*args, **kwargs): result = LazyStrategy(strategy_definition, args, kwargs) if force_reusable: result.force_has_reusable_values = True assert result.has_reusable_values return result return accept return decorator defines_strategy = base_defines_strategy(False) defines_strategy_with_reusable_values = base_defines_strategy(True) class Nothing(SearchStrategy): def calc_is_empty(self, recur): return True def do_draw(self, data): # This method should never be called because draw() will mark the # data as invalid immediately because is_empty is True. assert False # pragma: no cover def calc_has_reusable_values(self, recur): return True def __repr__(self): return 'nothing()' def map(self, f): return self def filter(self, f): return self def flatmap(self, f): return self NOTHING = Nothing() @cacheable def nothing(): """This strategy never successfully draws a value and will always reject on an attempt to draw. Examples from this strategy do not shrink (because there are none). """ return NOTHING def just(value): """Return a strategy which only generates ``value``. 
Note: ``value`` is not copied. Be wary of using mutable values. If ``value`` is the result of a callable, you can use :func:`builds(callable) ` instead of ``just(callable())`` to get a fresh value each time. Examples from this strategy do not shrink (because there is only one). """ from hypothesis.searchstrategy.misc import JustStrategy return JustStrategy(value) @defines_strategy def none(): """Return a strategy which only generates None. Examples from this strategy do not shrink (because there is only one). """ return just(None) def one_of(*args): """Return a strategy which generates values from any of the argument strategies. This may be called with one iterable argument instead of multiple strategy arguments. In which case one_of(x) and one_of(\*x) are equivalent. Examples from this strategy will generally shrink to ones that come from strategies earlier in the list, then shrink according to behaviour of the strategy that produced them. In order to get good shrinking behaviour, try to put simpler strategies first. e.g. ``one_of(none(), text())`` is better than ``one_of(text(), none())``. This is especially important when using recursive strategies. e.g. ``x = st.deferred(lambda: st.none() | st.tuples(x, x))`` will shrink well, but ``x = st.deferred(lambda: st.tuples(x, x) | st.none())`` will shrink very badly indeed. """ if len(args) == 1 and not isinstance(args[0], SearchStrategy): try: args = tuple(args[0]) except TypeError: pass from hypothesis.searchstrategy.strategies import OneOfStrategy return OneOfStrategy(args) @cacheable @defines_strategy_with_reusable_values def integers(min_value=None, max_value=None): """Returns a strategy which generates integers (in Python 2 these may be ints or longs). If min_value is not None then all values will be >= min_value. If max_value is not None then all values will be <= max_value Examples from this strategy will shrink towards being positive (e.g. 1000 is considered simpler than -1) and then towards zero. 
""" check_valid_bound(min_value, 'min_value') check_valid_bound(max_value, 'max_value') check_valid_interval(min_value, max_value, 'min_value', 'max_value') from hypothesis.searchstrategy.numbers import IntegersFromStrategy, \ BoundedIntStrategy, WideRangeIntStrategy min_int_value = None if min_value is None else ceil(min_value) max_int_value = None if max_value is None else floor(max_value) if min_int_value is not None and max_int_value is not None and \ min_int_value > max_int_value: raise InvalidArgument('No integers between min_value=%r and ' 'max_value=%r' % (min_value, max_value)) if min_int_value is None: if max_int_value is None: return ( WideRangeIntStrategy() ) else: return IntegersFromStrategy(0).map(lambda x: max_int_value - x) else: if max_int_value is None: return IntegersFromStrategy(min_int_value) else: assert min_int_value <= max_int_value if min_int_value == max_int_value: return just(min_int_value) elif min_int_value >= 0: return BoundedIntStrategy(min_int_value, max_int_value) elif max_int_value <= 0: return BoundedIntStrategy( -max_int_value, -min_int_value ).map(lambda t: -t) else: return integers(min_value=0, max_value=max_int_value) | \ integers(min_value=min_int_value, max_value=0) @cacheable @defines_strategy def booleans(): """Returns a strategy which generates instances of bool. Examples from this strategy will shrink towards False (i.e. shrinking will try to replace True with False where possible). """ from hypothesis.searchstrategy.misc import BoolStrategy return BoolStrategy() @cacheable @defines_strategy_with_reusable_values def floats( min_value=None, max_value=None, allow_nan=None, allow_infinity=None ): """Returns a strategy which generates floats. - If min_value is not None, all values will be >= min_value. - If max_value is not None, all values will be <= max_value. - If min_value or max_value is not None, it is an error to enable allow_nan. - If both min_value and max_value are not None, it is an error to enable allow_infinity. 
Where not explicitly ruled out by the bounds, all of infinity, -infinity and NaN are possible values generated by this strategy. Examples from this strategy have a complicated and hard to explain shrinking behaviour, but it tries to improve "human readability". Finite numbers will be preferred to infinity and infinity will be preferred to NaN. """ if allow_nan is None: allow_nan = bool(min_value is None and max_value is None) elif allow_nan: if min_value is not None or max_value is not None: raise InvalidArgument( 'Cannot have allow_nan=%r, with min_value or max_value' % ( allow_nan )) min_value = try_convert(float, min_value, 'min_value') max_value = try_convert(float, max_value, 'max_value') check_valid_bound(min_value, 'min_value') check_valid_bound(max_value, 'max_value') check_valid_interval(min_value, max_value, 'min_value', 'max_value') if min_value == float(u'-inf'): min_value = None if max_value == float(u'inf'): max_value = None if allow_infinity is None: allow_infinity = bool(min_value is None or max_value is None) elif allow_infinity: if min_value is not None and max_value is not None: raise InvalidArgument( 'Cannot have allow_infinity=%r, with both min_value and ' 'max_value' % ( allow_infinity )) from hypothesis.searchstrategy.numbers import FloatStrategy, \ FixedBoundedFloatStrategy if min_value is None and max_value is None: return FloatStrategy( allow_infinity=allow_infinity, allow_nan=allow_nan, ) elif min_value is not None and max_value is not None: if min_value == max_value: return just(min_value) elif is_negative(min_value): if is_negative(max_value): return floats(min_value=-max_value, max_value=-min_value).map( operator.neg ) else: return floats(min_value=0.0, max_value=max_value) | floats( min_value=0.0, max_value=-min_value).map(operator.neg) elif count_between_floats(min_value, max_value) > 1000: return FixedBoundedFloatStrategy( lower_bound=min_value, upper_bound=max_value ) else: ub_int = float_to_int(max_value) lb_int = 
float_to_int(min_value) assert lb_int <= ub_int return integers(min_value=lb_int, max_value=ub_int).map( int_to_float ) elif min_value is not None: if min_value < 0: result = floats( min_value=0.0 ) | floats(min_value=min_value, max_value=-0.0) else: result = ( floats(allow_infinity=allow_infinity, allow_nan=False).map( lambda x: assume(not math.isnan(x)) and min_value + abs(x) ) ) if min_value == 0 and not is_negative(min_value): result = result.filter(lambda x: math.copysign(1.0, x) == 1) return result else: assert max_value is not None if max_value > 0: result = floats( min_value=0.0, max_value=max_value, ) | floats(max_value=-0.0) else: result = ( floats(allow_infinity=allow_infinity, allow_nan=False).map( lambda x: assume(not math.isnan(x)) and max_value - abs(x) ) ) if max_value == 0 and is_negative(max_value): result = result.filter(is_negative) return result @cacheable @defines_strategy_with_reusable_values def complex_numbers(): """Returns a strategy that generates complex numbers. Examples from this strategy shrink by shrinking their component real and imaginary parts. """ from hypothesis.searchstrategy.numbers import ComplexStrategy return ComplexStrategy( tuples(floats(), floats()) ) @cacheable @defines_strategy def tuples(*args): """Return a strategy which generates a tuple of the same length as args by generating the value at index i from args[i]. e.g. tuples(integers(), integers()) would generate a tuple of length two with both values an integer. Examples from this strategy shrink by shrinking their component parts. """ for arg in args: check_strategy(arg) from hypothesis.searchstrategy.collections import TupleStrategy return TupleStrategy(args, tuple) @defines_strategy def sampled_from(elements): """Returns a strategy which generates any value present in ``elements``. Note that as with :func:`~hypothesis.strategies.just`, values will not be copied and thus you should be careful of using mutable data.
``sampled_from`` supports ordered collections, as well as :class:`~python:enum.Enum` objects. :class:`~python:enum.Flag` objects may also generate any combination of their members. Examples from this strategy shrink by replacing them with values earlier in the list. So e.g. sampled_from((10, 1)) will shrink by trying to replace 1 values with 10, and sampled_from((1, 10)) will shrink by trying to replace 10 values with 1. """ from hypothesis.searchstrategy.misc import SampledFromStrategy from hypothesis.internal.conjecture.utils import check_sample values = check_sample(elements) if not values: return nothing() if len(values) == 1: return just(values[0]) if hasattr(enum, 'Flag') and isclass(elements) and \ issubclass(elements, enum.Flag): # Combinations of enum.Flag members are also members. We generate # these dynamically, because static allocation takes O(2^n) memory. return sets(sampled_from(values), min_size=1).map( lambda s: reduce(operator.or_, s)) return SampledFromStrategy(values) _AVERAGE_LIST_LENGTH = 5.0 @cacheable @defines_strategy def lists( elements=None, min_size=None, average_size=None, max_size=None, unique_by=None, unique=False, ): """Returns a list containing values drawn from elements with length in the interval [min_size, max_size] (no bounds in that direction if these are None). If max_size is 0 then elements may be None and only the empty list will be drawn. average_size may be used as a size hint to roughly control the size of the list but it may not be the actual average of sizes you get, due to a variety of factors. If unique is True (or something that evaluates to True), we compare direct object equality, as if unique_by was `lambda x: x`. This comparison only works for hashable types. If unique_by is not None, it must be a function returning a hashable type when given a value drawn from elements. The resulting list will satisfy the condition that for i != j, unique_by(result[i]) != unique_by(result[j]).
Examples from this strategy shrink by trying to remove elements from the list, and by shrinking each individual element of the list. """ check_valid_sizes(min_size, average_size, max_size) if elements is None or (max_size is not None and max_size <= 0): if max_size is None or max_size > 0: raise InvalidArgument( u'Cannot create non-empty lists without an element type' ) else: return builds(list) check_strategy(elements) if unique: if unique_by is not None: raise InvalidArgument(( 'cannot specify both unique and unique_by (you probably only ' 'want to set unique_by)' )) else: def unique_by(x): return x if unique_by is not None: from hypothesis.searchstrategy.collections import UniqueListStrategy min_size = min_size or 0 max_size = max_size or float(u'inf') if average_size is None: if max_size < float(u'inf'): if max_size <= 5: average_size = min_size + 0.75 * (max_size - min_size) else: average_size = (max_size + min_size) / 2 else: average_size = max( _AVERAGE_LIST_LENGTH, min_size * 2 ) result = UniqueListStrategy( elements=elements, average_size=average_size, max_size=max_size, min_size=min_size, key=unique_by ) else: from hypothesis.searchstrategy.collections import ListStrategy if min_size is None: min_size = 0 if average_size is None: if max_size is None: average_size = _AVERAGE_LIST_LENGTH else: average_size = (min_size + max_size) * 0.5 result = ListStrategy( (elements,), average_length=average_size, min_size=min_size, max_size=max_size, ) return result @cacheable @defines_strategy def sets(elements=None, min_size=None, average_size=None, max_size=None): """This has the same behaviour as lists, but returns sets instead. Note that Hypothesis cannot tell whether values drawn from elements are hashable until running the test, so you can define a strategy for sets of an unhashable type but it will fail at test time. Examples from this strategy shrink by trying to remove elements from the set, and by shrinking each individual element of the set.
""" return lists( elements=elements, min_size=min_size, average_size=average_size, max_size=max_size, unique=True ).map(set) @cacheable @defines_strategy def frozensets(elements=None, min_size=None, average_size=None, max_size=None): """This is identical to the sets function but instead returns frozensets.""" return lists( elements=elements, min_size=min_size, average_size=average_size, max_size=max_size, unique=True ).map(frozenset) @defines_strategy def iterables(elements=None, min_size=None, average_size=None, max_size=None, unique_by=None, unique=False): """This has the same behaviour as lists, but returns iterables instead. Some iterables cannot be indexed (e.g. sets) and some do not have a fixed length (e.g. generators). This strategy produces iterators, which cannot be indexed and do not have a fixed length. This ensures that you do not accidentally depend on sequence behaviour. """ @implements_iterator class PrettyIter(object): def __init__(self, values): self._values = values self._iter = iter(self._values) def __iter__(self): return self._iter def __next__(self): return next(self._iter) def __repr__(self): return 'iter({!r})'.format(self._values) return lists( elements=elements, min_size=min_size, average_size=average_size, max_size=max_size, unique_by=unique_by, unique=unique ).map(PrettyIter) @defines_strategy def fixed_dictionaries(mapping): """Generates a dictionary of the same type as mapping with a fixed set of keys mapping to strategies. mapping must be a dict subclass. Generated values have all keys present in mapping, with the corresponding values drawn from mapping[key]. If mapping is an instance of OrderedDict the keys will also be in the same order, otherwise the order is arbitrary. Examples from this strategy shrink by shrinking each individual value in the generated dictionary. 
""" from hypothesis.searchstrategy.collections import FixedKeysDictStrategy check_type(dict, mapping, 'mapping') for v in mapping.values(): check_strategy(v) return FixedKeysDictStrategy(mapping) @cacheable @defines_strategy def dictionaries( keys, values, dict_class=dict, min_size=None, average_size=None, max_size=None ): """Generates dictionaries of type dict_class with keys drawn from the keys argument and values drawn from the values argument. The size parameters have the same interpretation as for lists. Examples from this strategy shrink by trying to remove keys from the generated dictionary, and by shrinking each generated key and value. """ check_valid_sizes(min_size, average_size, max_size) if max_size == 0: return fixed_dictionaries(dict_class()) check_strategy(keys) check_strategy(values) return lists( tuples(keys, values), min_size=min_size, average_size=average_size, max_size=max_size, unique_by=lambda x: x[0] ).map(dict_class) @defines_strategy def streaming(elements): """Generates an infinite stream of values where each value is drawn from elements. The result is iterable (the iterator will never terminate) and indexable. Examples from this strategy shrink by trying to shrink each value drawn. .. deprecated:: 3.15.0 Use :func:`data() ` instead. """ note_deprecation( 'streaming() has been deprecated. Use the data() strategy instead and ' 'replace stream iteration with data.draw() calls.' ) check_strategy(elements) from hypothesis.searchstrategy.streams import StreamStrategy return StreamStrategy(elements) @cacheable @defines_strategy_with_reusable_values def characters(whitelist_categories=None, blacklist_categories=None, blacklist_characters=None, min_codepoint=None, max_codepoint=None, whitelist_characters=None): """Generates unicode text type (unicode on python 2, str on python 3) characters following specified filtering rules. When no filtering rules are specifed, any character can be produced. 
If ``min_codepoint`` or ``max_codepoint`` is specified, then only characters having a codepoint in that range will be produced. If ``whitelist_categories`` is specified, then only characters from those Unicode categories will be produced. This is a further restriction; characters must also satisfy ``min_codepoint`` and ``max_codepoint``. If ``blacklist_categories`` is specified, then any character from those categories will not be produced. This is a further restriction; characters that match both ``whitelist_categories`` and ``blacklist_categories`` will not be produced. If ``whitelist_characters`` is specified, then any additional characters in that list will also be produced. If ``blacklist_characters`` is specified, then any characters in that list will not be produced. Any overlap between ``whitelist_characters`` and ``blacklist_characters`` will raise an exception. Examples from this strategy shrink towards smaller codepoints. """ if ( min_codepoint is not None and max_codepoint is not None and min_codepoint > max_codepoint ): raise InvalidArgument( 'Cannot have min_codepoint=%d > max_codepoint=%d ' % ( min_codepoint, max_codepoint ) ) if all((whitelist_characters is not None, min_codepoint is None, max_codepoint is None, whitelist_categories is None, blacklist_categories is None, )): raise InvalidArgument( 'Cannot have just whitelist_characters=%r alone, ' 'it would have no effect.
Perhaps you want sampled_from()' % ( whitelist_characters, ) ) if ( whitelist_characters is not None and blacklist_characters is not None and set(blacklist_characters).intersection(set(whitelist_characters)) ): raise InvalidArgument( 'Characters %r are present in both whitelist_characters=%r, and ' 'blacklist_characters=%r' % ( set(blacklist_characters).intersection( set(whitelist_characters) ), whitelist_characters, blacklist_characters, ) ) from hypothesis.searchstrategy.strings import OneCharStringStrategy return OneCharStringStrategy(whitelist_categories=whitelist_categories, blacklist_categories=blacklist_categories, blacklist_characters=blacklist_characters, min_codepoint=min_codepoint, max_codepoint=max_codepoint, whitelist_characters=whitelist_characters) @cacheable @defines_strategy_with_reusable_values def text( alphabet=None, min_size=None, average_size=None, max_size=None ): """Generates values of a unicode text type (unicode on python 2, str on python 3) with values drawn from alphabet, which should be an iterable of length one strings or a strategy generating such. If it is None it will default to generating the full unicode range. If it is an empty collection this will only generate empty strings. min_size, max_size and average_size have the usual interpretations. Examples from this strategy shrink towards shorter strings, and with the characters in the text shrinking as per the alphabet strategy. 
""" from hypothesis.searchstrategy.strings import StringStrategy if alphabet is None: char_strategy = characters(blacklist_categories=('Cs',)) elif not alphabet: if (min_size or 0) > 0: raise InvalidArgument( 'Invalid min_size %r > 0 for empty alphabet' % ( min_size, ) ) return just(u'') elif isinstance(alphabet, SearchStrategy): char_strategy = alphabet else: char_strategy = sampled_from(list(map(text_type, alphabet))) return StringStrategy(lists( char_strategy, average_size=average_size, min_size=min_size, max_size=max_size )) @cacheable @defines_strategy def from_regex(regex): """Generates strings that contain a match for the given regex (i.e. ones for which :func:`re.search` will return a non-None result). ``regex`` may be a pattern or :func:`compiled regex `. Both byte-strings and unicode strings are supported, and will generate examples of the same type. You can use regex flags such as :const:`re.IGNORECASE`, :const:`re.DOTALL` or :const:`re.UNICODE` to control generation. Flags can be passed either in compiled regex or inside the pattern with a ``(?iLmsux)`` group. Some regular expressions are only partly supported - the underlying strategy checks local matching and relies on filtering to resolve context-dependent expressions. Using too many of these constructs may cause health-check errors as too many examples are filtered out. This mainly includes (positive or negative) lookahead and lookbehind groups. If you want the generated string to match the whole regex you should use boundary markers. So e.g. ``r"\\A.\\Z"`` will return a single character string, while ``"."`` will return any string, and ``r"\\A.$"`` will return a single character optionally followed by a ``"\\n"``. Examples from this strategy shrink towards shorter strings and lower character values. 
""" from hypothesis.searchstrategy.regex import regex_strategy return regex_strategy(regex) @cacheable @defines_strategy_with_reusable_values def binary( min_size=None, average_size=None, max_size=None ): """Generates the appropriate binary type (str in python 2, bytes in python 3). min_size, average_size and max_size have the usual interpretations. Examples from this strategy shrink towards smaller strings and lower byte values. """ from hypothesis.searchstrategy.strings import BinaryStringStrategy, \ FixedSizeBytes check_valid_sizes(min_size, average_size, max_size) if min_size == max_size is not None: return FixedSizeBytes(min_size) return BinaryStringStrategy( lists( integers(min_value=0, max_value=255), average_size=average_size, min_size=min_size, max_size=max_size ) ) @cacheable @defines_strategy def randoms(): """Generates instances of Random (actually a Hypothesis specific RandomWithSeed class which displays what it was initially seeded with) Examples from this strategy shrink to seeds closer to zero. """ from hypothesis.searchstrategy.misc import RandomStrategy return RandomStrategy(integers()) class RandomSeeder(object): def __init__(self, seed): self.seed = seed def __repr__(self): return 'random.seed(%r)' % (self.seed,) @cacheable @defines_strategy def random_module(): """If your code depends on the global random module then you need to use this. It will explicitly seed the random module at the start of your test so that tests are reproducible. The value it passes you is an opaque object whose only useful feature is that its repr displays the random seed. It is not itself a random number generator. If you want a random number generator you should use the randoms() strategy which will give you one. Examples from these strategy shrink to seeds closer to zero. 
""" from hypothesis.control import cleanup import random class RandomModule(SearchStrategy): def do_draw(self, data): data.can_reproduce_example_from_repr = False seed = data.draw(integers()) state = random.getstate() random.seed(seed) cleanup(lambda: random.setstate(state)) return RandomSeeder(seed) return shared(RandomModule(), 'hypothesis.strategies.random_module()') @cacheable @defines_strategy def builds(target, *args, **kwargs): """Generates values by drawing from ``args`` and ``kwargs`` and passing them to ``target`` in the appropriate argument position. e.g. ``builds(target, integers(), flag=booleans())`` would draw an integer ``i`` and a boolean ``b`` and call ``target(i, flag=b)``. If ``target`` has type annotations, they will be used to infer a strategy for required arguments that were not passed to builds. You can also tell builds to infer a strategy for an optional argument by passing the special value :const:`hypothesis.infer` as a keyword argument to builds, instead of a strategy for that argument to ``target``. Examples from this strategy shrink by shrinking the argument values to the target. """ if infer in args: # Avoid an implementation nightmare juggling tuples and worse things raise InvalidArgument('infer was passed as a positional argument to ' 'builds(), but is only allowed as a keyword arg') hints = get_type_hints(target.__init__ if isclass(target) else target) for kw in [k for k, v in kwargs.items() if v is infer]: if kw not in hints: raise InvalidArgument( 'passed %s=infer for %s, but %s has no type annotation' % (kw, target.__name__, kw)) kwargs[kw] = from_type(hints[kw]) required = required_args(target, args, kwargs) for ms in set(hints) & (required or set()): kwargs[ms] = from_type(hints[ms]) return tuples(tuples(*args), fixed_dictionaries(kwargs)).map( lambda value: target(*value[0], **value[1]) ) def delay_error(func): """A decorator to make exceptions lazy but success immediate. 
We want from_type to resolve to a strategy immediately if possible, for a useful repr and interactive use, but delay errors until a value would be drawn to localise them to a particular test. """ @proxies(func) def inner(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: error = e def lazy_error(): raise error return builds(lazy_error) return inner @cacheable @delay_error def from_type(thing): """Looks up the appropriate search strategy for the given type. ``from_type`` is used internally to fill in missing arguments to :func:`~hypothesis.strategies.builds` and can be used interactively to explore what strategies are available or to debug type resolution. You can use :func:`~hypothesis.strategies.register_type_strategy` to handle your custom types, or to globally redefine certain strategies - for example excluding NaN from floats, or use timezone-aware instead of naive time and datetime strategies. The resolution logic may be changed in a future version, but currently tries these four options: 1. If ``thing`` is in the default lookup mapping or user-registered lookup, return the corresponding strategy. The default lookup covers all types with Hypothesis strategies, including extras where possible. 2. If ``thing`` is from the :mod:`python:typing` module, return the corresponding strategy (special logic). 3. If ``thing`` has one or more subtypes in the merged lookup, return the union of the strategies for those types that are not subtypes of other elements in the lookup. 4. Finally, if ``thing`` has type annotations for all required arguments, it is resolved via :func:`~hypothesis.strategies.builds`. """ from hypothesis.searchstrategy import types if not isinstance(thing, type): try: # At runtime, `typing.NewType` returns an identity function rather # than an actual type, but we can check that for a possible match # and then read the magic attribute to unwrap it. 
import typing if all([ hasattr(thing, '__supertype__'), hasattr(typing, 'NewType'), isfunction(thing), getattr(thing, '__module__', 0) == 'typing' ]): return from_type(thing.__supertype__) # Under Python 3.6, Unions are not instances of `type` - but we # still want to resolve them! if getattr(thing, '__origin__', None) is typing.Union: args = sorted(thing.__args__, key=types.type_sorting_key) return one_of([from_type(t) for t in args]) except ImportError: # pragma: no cover pass raise InvalidArgument('thing=%s must be a type' % (thing,)) # Now that we know `thing` is a type, the first step is to check for an # explicitly registered strategy. This is the best (and hopefully most # common) way to resolve a type to a strategy. Note that the value in the # lookup may be a strategy or a function from type -> strategy; and we # convert empty results into an explicit error. if thing in types._global_type_lookup: strategy = types._global_type_lookup[thing] if not isinstance(strategy, SearchStrategy): strategy = strategy(thing) if strategy.is_empty: raise ResolutionFailed( 'Error: %r resolved to an empty strategy' % (thing,)) return strategy # If there's no explicitly registered strategy, maybe a subtype of thing # is registered - if so, we can resolve it to the subclass strategy. # We'll start by checking if thing is from the typing module, # because there are several special cases that don't play well with # subclass and instance checks. try: import typing if isinstance(thing, typing.TypingMeta): return types.from_typing_type(thing) except ImportError: # pragma: no cover pass # If it's not from the typing module, we get all registered types that are # a subclass of `thing` and are not themselves a subtype of any other such # type. For example, `Number -> integers() | floats()`, but bools() is # not included because bool is a subclass of int as well as Number.
strategies = [ v if isinstance(v, SearchStrategy) else v(thing) for k, v in types._global_type_lookup.items() if issubclass(k, thing) and sum(types.try_issubclass(k, T) for T in types._global_type_lookup) == 1 ] empty = ', '.join(repr(s) for s in strategies if s.is_empty) if empty: raise ResolutionFailed( 'Could not resolve %s to a strategy; consider using ' 'register_type_strategy' % empty) elif strategies: return one_of(strategies) # If we don't have a strategy registered for this type or any subtype, we # may be able to fall back on type annotations. # Types created via typing.NamedTuple use a custom attribute instead - # but we can still use builds(), if we work out the right kwargs. if issubclass(thing, tuple) and hasattr(thing, '_fields') \ and hasattr(thing, '_field_types'): kwargs = {k: from_type(thing._field_types[k]) for k in thing._fields} return builds(thing, **kwargs) if issubclass(thing, enum.Enum): assert len(thing), repr(thing) + ' has no members to sample' return sampled_from(thing) # If the constructor has an annotation for every required argument, # we can (and do) use builds() without supplying additional arguments. required = required_args(thing) if not required or required.issubset(get_type_hints(thing.__init__)): return builds(thing) # We have utterly failed, and might as well say so now. raise ResolutionFailed('Could not resolve %r to a strategy; consider ' 'using register_type_strategy' % (thing,)) @cacheable @defines_strategy_with_reusable_values def fractions(min_value=None, max_value=None, max_denominator=None): """Returns a strategy which generates Fractions. If min_value is not None then all generated values are no less than min_value. If max_value is not None then all generated values are no greater than max_value. min_value and max_value may be anything accepted by the :class:`~fractions.Fraction` constructor. If max_denominator is not None then the denominator of any generated values is no greater than max_denominator. 
Note that max_denominator must be None or a positive integer. Examples from this strategy shrink towards smaller denominators, then closer to zero. """ min_value = try_convert(Fraction, min_value, 'min_value') max_value = try_convert(Fraction, max_value, 'max_value') check_valid_interval(min_value, max_value, 'min_value', 'max_value') check_valid_integer(max_denominator) if max_denominator is not None: if max_denominator < 1: raise InvalidArgument( 'max_denominator=%r must be >= 1' % max_denominator) def fraction_bounds(value): """Find the best lower and upper approximation for value.""" # Adapted from CPython's Fraction.limit_denominator here: # https://github.com/python/cpython/blob/3.6/Lib/fractions.py#L219 if value is None or value.denominator <= max_denominator: return value, value p0, q0, p1, q1 = 0, 1, 1, 0 n, d = value.numerator, value.denominator while True: a = n // d q2 = q0 + a * q1 if q2 > max_denominator: break p0, q0, p1, q1 = p1, q1, p0 + a * p1, q2 n, d = d, n - a * d k = (max_denominator - q0) // q1 low, high = Fraction(p1, q1), Fraction(p0 + k * p1, q0 + k * q1) assert low < value < high return low, high # Take the high approximation for min_value and low for max_value bounds = (max_denominator, min_value, max_value) _, min_value = fraction_bounds(min_value) max_value, _ = fraction_bounds(max_value) if None not in (min_value, max_value) and min_value > max_value: raise InvalidArgument( 'There are no fractions with a denominator <= %r between ' 'min_value=%r and max_value=%r' % bounds) if min_value is not None and min_value == max_value: return just(min_value) def dm_func(denom): """Take denom, construct numerator strategy, and build fraction.""" # Four cases of algebra to get integer bounds and scale factor. 
min_num, max_num = None, None if max_value is None and min_value is None: pass elif min_value is None: max_num = denom * max_value.numerator denom *= max_value.denominator elif max_value is None: min_num = denom * min_value.numerator denom *= min_value.denominator else: low = min_value.numerator * max_value.denominator high = max_value.numerator * min_value.denominator scale = min_value.denominator * max_value.denominator # After calculating our integer bounds and scale factor, we remove # the gcd to avoid drawing more bytes for the example than needed. # Note that `div` can be at most equal to `scale`. div = gcd(scale, gcd(low, high)) min_num = denom * low // div max_num = denom * high // div denom *= scale // div return builds( Fraction, integers(min_value=min_num, max_value=max_num), just(denom) ) if max_denominator is None: return integers(min_value=1).flatmap(dm_func) return integers(1, max_denominator).flatmap(dm_func).map( lambda f: f.limit_denominator(max_denominator)) @cacheable @defines_strategy_with_reusable_values def decimals(min_value=None, max_value=None, allow_nan=None, allow_infinity=None, places=None): """Generates instances of :class:`decimals.Decimal`, which may be: - A finite rational number, between ``min_value`` and ``max_value``. - Not a Number, if ``allow_nan`` is True. None means "allow NaN, unless ``min_value`` and ``max_value`` are not None". - Positive or negative infinity, if ``max_value`` and ``min_value`` respectively are None, and ``allow_infinity`` is not False. None means "allow infinity, unless excluded by the min and max values". Note that where floats have one ``NaN`` value, Decimals have four: signed, and either *quiet* or *signalling*. See `the decimal module docs `_ for more information on special values. If ``places`` is not None, all finite values drawn from the strategy will have that number of digits after the decimal place. 
Examples from this strategy do not have a well defined shrink order but try to maximize human readability when shrinking. """ # Convert min_value and max_value to Decimal values, and validate args check_valid_integer(places) if places is not None and places < 0: raise InvalidArgument('places=%r may not be negative' % places) if min_value is not None: min_value = try_convert(Decimal, min_value, 'min_value') if min_value.is_infinite() and min_value < 0: if not (allow_infinity or allow_infinity is None): raise InvalidArgument('allow_infinity=%r, but min_value=%r' % (allow_infinity, min_value)) min_value = None elif not min_value.is_finite(): # This could be positive infinity, quiet NaN, or signalling NaN raise InvalidArgument(u'Invalid min_value=%r' % min_value) if max_value is not None: max_value = try_convert(Decimal, max_value, 'max_value') if max_value.is_infinite() and max_value > 0: if not (allow_infinity or allow_infinity is None): raise InvalidArgument('allow_infinity=%r, but max_value=%r' % (allow_infinity, max_value)) max_value = None elif not max_value.is_finite(): raise InvalidArgument(u'Invalid max_value=%r' % max_value) check_valid_interval(min_value, max_value, 'min_value', 'max_value') if allow_infinity and (None not in (min_value, max_value)): raise InvalidArgument('Cannot allow infinity between finite bounds') # Set up a strategy for finite decimals. Note that both floating and # fixed-point decimals require careful handling to remain isolated from # any external precision context - in short, we always work out the # required precision for lossless operation and use context methods. 
if places is not None: # Fixed-point decimals are basically integers with a scale factor def ctx(val): """Return a context in which this value is lossless.""" precision = ceil(math.log10(abs(val) or 1)) + places + 1 return Context(prec=max([precision, 1])) def int_to_decimal(val): context = ctx(val) return context.quantize(context.multiply(val, factor), factor) factor = Decimal(10) ** -places min_num, max_num = None, None if min_value is not None: min_num = ceil(ctx(min_value).divide(min_value, factor)) if max_value is not None: max_num = floor(ctx(max_value).divide(max_value, factor)) if None not in (min_num, max_num) and min_num > max_num: raise InvalidArgument( 'There are no decimals with %d places between min_value=%r ' 'and max_value=%r ' % (places, min_value, max_value)) strat = integers(min_num, max_num).map(int_to_decimal) else: # Otherwise, they're like fractions featuring a power of ten def fraction_to_decimal(val): precision = ceil(math.log10(abs(val.numerator) or 1) + math.log10(val.denominator)) + 1 return Context(prec=precision or 1).divide( Decimal(val.numerator), val.denominator) strat = fractions(min_value, max_value).map(fraction_to_decimal) # Compose with sampled_from for infinities and NaNs as appropriate special = [] if allow_nan or (allow_nan is None and (None in (min_value, max_value))): special.extend(map(Decimal, ('NaN', '-NaN', 'sNaN', '-sNaN'))) if allow_infinity or (allow_infinity is max_value is None): special.append(Decimal('Infinity')) if allow_infinity or (allow_infinity is min_value is None): special.append(Decimal('-Infinity')) return strat | sampled_from(special) def recursive(base, extend, max_leaves=100): """base: A strategy to start from. extend: A function which takes a strategy and returns a new strategy. max_leaves: The maximum number of elements to be drawn from base on a given run. This returns a strategy ``S`` such that ``S = extend(base | S)``. 
    That is, values may be drawn from base, or from any strategy reachable
    by mixing applications of | and extend.

    An example may clarify: ``recursive(booleans(), lists)`` would return a
    strategy that may return arbitrarily nested and mixed lists of booleans.
    So e.g. ``False``, ``[True]``, ``[False, []]``, and ``[[[[True]]]]`` are
    all valid values to be drawn from that strategy.

    Examples from this strategy shrink by trying to reduce the amount of
    recursion and by shrinking according to the shrinking behaviour of base
    and the result of extend.
    """
    from hypothesis.searchstrategy.recursive import RecursiveStrategy
    return RecursiveStrategy(base, extend, max_leaves)


@defines_strategy
def permutations(values):
    """Return a strategy which returns permutations of the collection
    ``values``.

    Examples from this strategy shrink by trying to become closer to the
    original order of values.
    """
    from hypothesis.internal.conjecture.utils import integer_range

    values = list(values)
    if not values:
        return builds(list)

    class PermutationStrategy(SearchStrategy):
        def do_draw(self, data):
            # Reversed Fisher-Yates shuffle. Reverse order so that it shrinks
            # properly: this way we prefer things that are lexicographically
            # closer to the identity.
            result = list(values)
            for i in hrange(len(result)):
                j = integer_range(data, i, len(result) - 1)
                result[i], result[j] = result[j], result[i]
            return result

    return PermutationStrategy()


@defines_strategy_with_reusable_values
@renamed_arguments(
    min_datetime='min_value',
    max_datetime='max_value',
)
def datetimes(
    min_value=dt.datetime.min, max_value=dt.datetime.max,
    timezones=none(),
    min_datetime=None, max_datetime=None,
):
    """A strategy for generating datetimes, which may be timezone-aware.

    This strategy works by drawing a naive datetime between ``min_datetime``
    and ``max_datetime``, which must both be naive (have no timezone).

    ``timezones`` must be a strategy that generates tzinfo objects (or None,
    which is valid for naive datetimes).

    A value drawn from this strategy will be added to a naive datetime, and
    the resulting tz-aware datetime returned.

    .. note::
        tz-aware datetimes from this strategy may be ambiguous or
        non-existent due to daylight savings, leap seconds, timezone and
        calendar adjustments, etc.  This is intentional, as malformed
        timestamps are a common source of bugs.

    :py:func:`hypothesis.extra.timezones` requires the ``pytz`` package, but
    provides all timezones in the Olson database.  If you also want to allow
    naive datetimes, combine strategies like ``none() | timezones()``.

    Alternatively, you can create a list of the timezones you wish to allow
    (e.g. from the standard library, ``dateutil``, or ``pytz``) and use
    :py:func:`sampled_from`.  Ensure that simple values such as None or UTC
    are at the beginning of the list for proper minimisation.

    Examples from this strategy shrink towards midnight on January 1st 2000.
    """
    # Why must bounds be naive?  In principle, we could also write a strategy
    # that took aware bounds, but the API and validation is much harder.
    # If you want to generate datetimes between two particular moments in
    # time I suggest (a) just filtering out-of-bounds values; (b) if bounds
    # are very close, draw a value and subtract its UTC offset, handling
    # overflows and nonexistent times; or (c) do something customised to
    # handle datetimes in e.g. a four-microsecond span which is not
    # representable in UTC.  Handling (d), all of the above, leads to a much
    # more complex API for all users and a useful feature for very few.
from hypothesis.searchstrategy.datetime import DatetimeStrategy check_type(dt.datetime, min_value, 'min_value') check_type(dt.datetime, max_value, 'max_value') if min_value.tzinfo is not None: raise InvalidArgument('min_value=%r must not have tzinfo' % (min_value,)) if max_value.tzinfo is not None: raise InvalidArgument('max_value=%r must not have tzinfo' % (max_value,)) check_valid_interval(min_value, max_value, 'min_value', 'max_value') if not isinstance(timezones, SearchStrategy): raise InvalidArgument( 'timezones=%r must be a SearchStrategy that can provide tzinfo ' 'for datetimes (either None or dt.tzinfo objects)' % (timezones,)) return DatetimeStrategy(min_value, max_value, timezones) @defines_strategy_with_reusable_values @renamed_arguments( min_date='min_value', max_date='max_value', ) def dates( min_value=dt.date.min, max_value=dt.date.max, min_date=None, max_date=None, ): """A strategy for dates between ``min_date`` and ``max_date``. Examples from this strategy shrink towards January 1st 2000. """ from hypothesis.searchstrategy.datetime import DateStrategy check_type(dt.date, min_value, 'min_value') check_type(dt.date, max_value, 'max_value') check_valid_interval(min_value, max_value, 'min_value', 'max_value') if min_value == max_value: return just(min_value) return DateStrategy(min_value, max_value) @defines_strategy_with_reusable_values @renamed_arguments( min_time='min_value', max_time='max_value', ) def times( min_value=dt.time.min, max_value=dt.time.max, timezones=none(), min_time=None, max_time=None, ): """A strategy for times between ``min_time`` and ``max_time``. The ``timezones`` argument is handled as for :py:func:`datetimes`. Examples from this strategy shrink towards midnight, with the timezone component shrinking as for the strategy that provided it. 
""" check_type(dt.time, min_value, 'min_value') check_type(dt.time, max_value, 'max_value') if min_value.tzinfo is not None: raise InvalidArgument('min_value=%r must not have tzinfo' % min_value) if max_value.tzinfo is not None: raise InvalidArgument('max_value=%r must not have tzinfo' % max_value) check_valid_interval(min_value, max_value, 'min_value', 'max_value') day = dt.date(2000, 1, 1) return datetimes(min_value=dt.datetime.combine(day, min_value), max_value=dt.datetime.combine(day, max_value), timezones=timezones).map(lambda t: t.timetz()) @defines_strategy_with_reusable_values @renamed_arguments( min_delta='min_value', max_delta='max_value', ) def timedeltas( min_value=dt.timedelta.min, max_value=dt.timedelta.max, min_delta=None, max_delta=None ): """A strategy for timedeltas between ``min_value`` and ``max_value``. Examples from this strategy shrink towards zero. """ from hypothesis.searchstrategy.datetime import TimedeltaStrategy check_type(dt.timedelta, min_value, 'min_value') check_type(dt.timedelta, max_value, 'max_value') check_valid_interval(min_value, max_value, 'min_value', 'max_value') if min_value == max_value: return just(min_value) return TimedeltaStrategy(min_value=min_value, max_value=max_value) @cacheable def composite(f): """Defines a strategy that is built out of potentially arbitrarily many other strategies. This is intended to be used as a decorator. See :ref:`the full documentation for more details ` about how to use this function. Examples from this strategy shrink by shrinking the output of each draw call. """ from hypothesis.internal.reflection import define_function_signature argspec = getfullargspec(f) if ( argspec.defaults is not None and len(argspec.defaults) == len(argspec.args) ): raise InvalidArgument( 'A default value for initial argument will never be used') if len(argspec.args) == 0 and not argspec.varargs: raise InvalidArgument( 'Functions wrapped with composite must take at least one ' 'positional argument.' 
    )
    annots = {k: v for k, v in argspec.annotations.items()
              if k in (argspec.args + argspec.kwonlyargs + ['return'])}
    new_argspec = argspec._replace(args=argspec.args[1:], annotations=annots)

    @defines_strategy
    @define_function_signature(f.__name__, f.__doc__, new_argspec)
    def accept(*args, **kwargs):
        class CompositeStrategy(SearchStrategy):
            def do_draw(self, data):
                first_draw = [True]

                def draw(strategy):
                    first_draw[0] = False
                    return data.draw(strategy)

                return f(draw, *args, **kwargs)
        return CompositeStrategy()
    accept.__module__ = f.__module__
    return accept


def shared(base, key=None):
    """Returns a strategy that draws a single shared value per run, drawn from
    base. Any two shared instances with the same key will share the same
    value, otherwise the identity of this strategy will be used. That is:

    >>> s = integers()  # or any other strategy
    >>> x = shared(s)
    >>> y = shared(s)

    In the above x and y may draw different (or potentially the same) values.
    In the following they will always draw the same:

    >>> x = shared(s, key="hi")
    >>> y = shared(s, key="hi")

    Examples from this strategy shrink as per their base strategy.
    """
    from hypothesis.searchstrategy.shared import SharedStrategy
    return SharedStrategy(base, key)


@defines_strategy
def choices():
    """Strategy that generates a function that behaves like random.choice.

    Will note choices made for reproducibility.

    .. deprecated:: 3.15.0
        Use :func:`data() <hypothesis.strategies.data>` with
        :func:`sampled_from() <hypothesis.strategies.sampled_from>` instead.

    Examples from this strategy shrink by making each choice function return
    an earlier value in the sequence passed to it.
    """
    from hypothesis.control import note, current_build_context
    from hypothesis.internal.conjecture.utils import choice, check_sample

    note_deprecation(
        'choices() has been deprecated. Use the data() strategy instead and '
        'replace its usage with data.draw(sampled_from(elements)) calls.'
) class Chooser(object): def __init__(self, build_context, data): self.build_context = build_context self.data = data self.choice_count = 0 def __call__(self, values): if not values: raise IndexError('Cannot choose from empty sequence') result = choice(self.data, check_sample(values)) with self.build_context.local(): self.choice_count += 1 note('Choice #%d: %r' % (self.choice_count, result)) return result def __repr__(self): return 'choice' class ChoiceStrategy(SearchStrategy): supports_find = False def do_draw(self, data): data.can_reproduce_example_from_repr = False return Chooser(current_build_context(), data) return shared( ChoiceStrategy(), key='hypothesis.strategies.chooser.choice_function' ) @cacheable @defines_strategy_with_reusable_values def uuids(version=None): """Returns a strategy that generates :class:`UUIDs `. If the optional version argument is given, value is passed through to :class:`~python:uuid.UUID` and only UUIDs of that version will be generated. All returned values from this will be unique, so e.g. if you do ``lists(uuids())`` the resulting list will never contain duplicates. Examples from this strategy don't have any meaningful shrink order. """ from uuid import UUID if version not in (None, 1, 2, 3, 4, 5): raise InvalidArgument(( 'version=%r, but version must be in (None, 1, 2, 3, 4, 5) ' 'to pass to the uuid.UUID constructor.') % (version, ) ) return shared(randoms(), key='hypothesis.strategies.uuids.generator').map( lambda r: UUID(version=version, int=r.getrandbits(128)) ) @defines_strategy_with_reusable_values def runner(default=not_set): """A strategy for getting "the current test runner", whatever that may be. The exact meaning depends on the entry point, but it will usually be the associated 'self' value for it. If there is no current test runner and a default is provided, return that default. If no default is provided, raises InvalidArgument. Examples from this strategy do not shrink (because there is only one). 
""" class RunnerStrategy(SearchStrategy): def do_draw(self, data): runner = getattr(data, 'hypothesis_runner', not_set) if runner is not_set: if default is not_set: raise InvalidArgument( 'Cannot use runner() strategy with no ' 'associated runner or explicit default.' ) else: return default else: return runner return RunnerStrategy() @cacheable def data(): """This isn't really a normal strategy, but instead gives you an object which can be used to draw data interactively from other strategies. It can only be used within :func:`@given `, not :func:`find() `. This is because the lifetime of the object cannot outlast the test body. See :ref:`the rest of the documentation ` for more complete information. Examples from this strategy do not shrink (because there is only one), but the result of calls to each draw() call shrink as they normally would. """ from hypothesis.control import note class DataObject(object): def __init__(self, data): self.count = 0 self.data = data def __repr__(self): return 'data(...)' def draw(self, strategy, label=None): result = self.data.draw(strategy) self.count += 1 if label is not None: note('Draw %d (%s): %r' % (self.count, label, result)) else: note('Draw %d: %r' % (self.count, result)) return result class DataStrategy(SearchStrategy): supports_find = False def do_draw(self, data): data.can_reproduce_example_from_repr = False if not hasattr(data, 'hypothesis_shared_data_strategy'): data.hypothesis_shared_data_strategy = DataObject(data) return data.hypothesis_shared_data_strategy def __repr__(self): return 'data()' def map(self, f): self.__not_a_first_class_strategy('map') def filter(self, f): self.__not_a_first_class_strategy('filter') def flatmap(self, f): self.__not_a_first_class_strategy('flatmap') def example(self): self.__not_a_first_class_strategy('example') def __not_a_first_class_strategy(self, name): raise InvalidArgument(( 'Cannot call %s on a DataStrategy. 
You should probably be ' "using @composite for whatever it is you're trying to do." ) % (name,)) return DataStrategy() def register_type_strategy(custom_type, strategy): """Add an entry to the global type-to-strategy lookup. This lookup is used in :func:`~hypothesis.strategies.builds` and :func:`@given `. :func:`~hypothesis.strategies.builds` will be used automatically for classes with type annotations on ``__init__`` , so you only need to register a strategy if one or more arguments need to be more tightly defined than their type-based default, or if you want to supply a strategy for an argument with a default value. ``strategy`` may be a search strategy, or a function that takes a type and returns a strategy (useful for generic types). """ from hypothesis.searchstrategy import types if not isinstance(custom_type, type): raise InvalidArgument('custom_type=%r must be a type') elif not (isinstance(strategy, SearchStrategy) or callable(strategy)): raise InvalidArgument( 'strategy=%r must be a SearchStrategy, or a function that takes ' 'a generic type and returns a specific SearchStrategy') elif isinstance(strategy, SearchStrategy) and strategy.is_empty: raise InvalidArgument('strategy=%r must not be empty') types._global_type_lookup[custom_type] = strategy from_type.__clear_cache() @cacheable def deferred(definition): """A deferred strategy allows you to write a strategy that references other strategies that have not yet been defined. This allows for the easy definition of recursive and mutually recursive strategies. The definition argument should be a zero-argument function that returns a strategy. It will be evaluated the first time the strategy is used to produce an example. 
Example usage: >>> import hypothesis.strategies as st >>> x = st.deferred(lambda: st.booleans() | st.tuples(x, x)) >>> x.example() (((False, (True, True)), (False, True)), (True, True)) >>> x.example() (True, True) Mutual recursion also works fine: >>> a = st.deferred(lambda: st.booleans() | b) >>> b = st.deferred(lambda: st.tuples(a, a)) >>> a.example() (True, (True, False)) >>> b.example() (False, True) Examples from this strategy shrink as they normally would from the strategy returned by the definition. """ from hypothesis.searchstrategy.deferred import DeferredStrategy return DeferredStrategy(definition) assert _strategies.issubset(set(__all__)), _strategies - set(__all__) hypothesis-python-3.44.1/src/hypothesis/types.py000066400000000000000000000071111321557765100220410ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import inspect from random import Random from itertools import islice from hypothesis.errors import InvalidArgument class RandomWithSeed(Random): """A subclass of Random designed to expose the seed it was initially provided with. We consistently use this instead of Random objects because it makes examples much easier to recreate. 
""" def __init__(self, seed): super(RandomWithSeed, self).__init__(seed) self.seed = seed def __copy__(self): result = RandomWithSeed(self.seed) result.setstate(self.getstate()) return result def __deepcopy__(self, table): return self.__copy__() def __repr__(self): return u'RandomWithSeed(%s)' % (self.seed,) class Stream(object): """A stream is a possibly infinite list. You can index into it, and you can iterate over it, but you can't ask its length and iterating over it will not necessarily terminate. Behind the scenes streams are backed by a generator, but they "remember" the values as they evaluate them so you can replay them later. Internally Hypothesis uses the fact that you can tell how much of a stream has been evaluated, but you shouldn't use that. The only public APIs of a Stream are that you can index, slice, and iterate it. """ def __init__(self, generator=None): if generator is None: generator = iter(()) elif not inspect.isgenerator(generator): generator = iter(generator) self.generator = generator self.fetched = [] def map(self, f): return Stream(f(v) for v in self) def __iter__(self): i = 0 while i < len(self.fetched): yield self.fetched[i] i += 1 for v in self.generator: self.fetched.append(v) yield v def __getitem__(self, key): if isinstance(key, slice): return Stream(islice( iter(self), key.start, key.stop, key.step )) if not isinstance(key, int): raise InvalidArgument(u'Cannot index stream with %s' % ( type(key).__name__,)) self._thunk_to(key + 1) return self.fetched[key] def _thunk_to(self, i): it = iter(self) try: while len(self.fetched) < i: next(it) except StopIteration: raise IndexError( u'Index %d out of bounds for finite stream of length %d' % ( i, len(self.fetched) ) ) def _thunked(self): return len(self.fetched) def __repr__(self): if not self.fetched: return u'Stream(...)' return u'Stream(%s, ...)' % ( u', '.join(map(repr, self.fetched)) ) def __deepcopy__(self, table): return self def __copy__(self): return self 
hypothesis-python-3.44.1/src/hypothesis/utils/__init__.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

"""hypothesis.utils is a package for things that you can consider part of the
semi-public Hypothesis API but aren't really the core point."""

hypothesis-python-3.44.1/src/hypothesis/utils/conventions.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import class UniqueIdentifier(object): def __init__(self, identifier): self.identifier = identifier def __repr__(self): return self.identifier infer = UniqueIdentifier(u'infer') not_set = UniqueIdentifier(u'not_set') hypothesis-python-3.44.1/src/hypothesis/utils/dynamicvariables.py000066400000000000000000000024121321557765100253510ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import threading from contextlib import contextmanager class DynamicVariable(object): def __init__(self, default): self.default = default self.data = threading.local() @property def value(self): return getattr(self.data, 'value', self.default) @value.setter def value(self, value): setattr(self.data, 'value', value) @contextmanager def with_value(self, value): old_value = self.value try: self.data.value = value yield finally: self.data.value = old_value hypothesis-python-3.44.1/src/hypothesis/vendor/000077500000000000000000000000001321557765100216205ustar00rootroot00000000000000hypothesis-python-3.44.1/src/hypothesis/vendor/__init__.py000066400000000000000000000012001321557765100237220ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2016 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/src/hypothesis/vendor/pretty.py000066400000000000000000000700731321557765100235300ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2016 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER # -*- coding: utf-8 -*- """ Python advanced pretty printer. This pretty printer is intended to replace the old `pprint` python module which does not allow developers to provide their own pretty print callbacks. This module is based on ruby's `prettyprint.rb` library by `Tanaka Akira`. Example Usage ------------- To directly print the representation of an object use `pprint`:: from pretty import pprint pprint(complex_object) To get a string of the output use `pretty`:: from pretty import pretty string = pretty(complex_object) Extending --------- The pretty library allows developers to add pretty printing rules for their own objects. This process is straightforward. All you have to do is to add a `_repr_pretty_` method to your object and call the methods on the pretty printer passed:: class MyObject(object): def _repr_pretty_(self, p, cycle): ... Here is an example implementation of a `_repr_pretty_` method for a list subclass:: class MyList(list): def _repr_pretty_(self, p, cycle): if cycle: p.text('MyList(...)') else: with p.group(8, 'MyList([', '])'): for idx, item in enumerate(self): if idx: p.text(',') p.breakable() p.pretty(item) The `cycle` parameter is `True` if pretty detected a cycle. You *have* to react to that or the result is an infinite loop. `p.text()` just adds non breaking text to the output, `p.breakable()` either adds a whitespace or breaks here. If you pass it an argument it's used instead of the default space. `p.pretty` prettyprints another object using the pretty print method. The first parameter to the `group` function specifies the extra indentation of the next line. 
In this example the next item will either be on the same line (if the items are short enough) or aligned with the right edge of the opening bracket of `MyList`. If you just want to indent something you can use the group function without open / close parameters. You can also use this code:: with p.indent(2): ... Inheritance diagram: .. inheritance-diagram:: IPython.lib.pretty :parts: 3 :copyright: 2007 by Armin Ronacher. Portions (c) 2009 by Robert Kern. :license: BSD License. """ from __future__ import division, print_function, absolute_import import re import sys import types import datetime import platform from io import StringIO from contextlib import contextmanager from collections import deque from hypothesis.internal.compat import PY3, cast_unicode, string_types, \ get_stream_enc __all__ = ['pretty', 'pprint', 'PrettyPrinter', 'RepresentationPrinter', 'for_type_by_name'] MAX_SEQ_LENGTH = 1000 _re_pattern_type = type(re.compile('')) PYPY = platform.python_implementation() == 'PyPy' def _safe_getattr(obj, attr, default=None): """Safe version of getattr. Same as getattr, but will return ``default`` on any Exception, rather than raising. 
""" try: return getattr(obj, attr, default) except Exception: return default if PY3: CUnicodeIO = StringIO else: # pragma: no cover class CUnicodeIO(StringIO): """StringIO that casts str to unicode on Python 2.""" def write(self, text): return super(CUnicodeIO, self).write( cast_unicode(text, encoding=get_stream_enc(sys.stdout))) def pretty( obj, verbose=False, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH ): """Pretty print the object's representation.""" stream = CUnicodeIO() printer = RepresentationPrinter( stream, verbose, max_width, newline, max_seq_length=max_seq_length) printer.pretty(obj) printer.flush() return stream.getvalue() def pprint( obj, verbose=False, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH ): """Like `pretty` but print to stdout.""" printer = RepresentationPrinter( sys.stdout, verbose, max_width, newline, max_seq_length=max_seq_length) printer.pretty(obj) printer.flush() sys.stdout.write(newline) sys.stdout.flush() class _PrettyPrinterBase(object): @contextmanager def indent(self, indent): """with statement support for indenting/dedenting.""" self.indentation += indent try: yield finally: self.indentation -= indent @contextmanager def group(self, indent=0, open='', close=''): """like begin_group / end_group but for the with statement.""" self.begin_group(indent, open) try: yield finally: self.end_group(indent, close) class PrettyPrinter(_PrettyPrinterBase): """Baseclass for the `RepresentationPrinter` prettyprinter that is used to generate pretty reprs of objects. Contrary to the `RepresentationPrinter` this printer knows nothing about the default pprinters or the `_repr_pretty_` callback method. 
""" def __init__( self, output, max_width=79, newline='\n', max_seq_length=MAX_SEQ_LENGTH ): self.broken = False self.output = output self.max_width = max_width self.newline = newline self.max_seq_length = max_seq_length self.output_width = 0 self.buffer_width = 0 self.buffer = deque() root_group = Group(0) self.group_stack = [root_group] self.group_queue = GroupQueue(root_group) self.indentation = 0 def _break_outer_groups(self): while self.max_width < self.output_width + self.buffer_width: group = self.group_queue.deq() if not group: return while group.breakables: x = self.buffer.popleft() self.output_width = x.output(self.output, self.output_width) self.buffer_width -= x.width while self.buffer and isinstance(self.buffer[0], Text): x = self.buffer.popleft() self.output_width = x.output(self.output, self.output_width) self.buffer_width -= x.width def text(self, obj): """Add literal text to the output.""" width = len(obj) if self.buffer: text = self.buffer[-1] if not isinstance(text, Text): text = Text() self.buffer.append(text) text.add(obj, width) self.buffer_width += width self._break_outer_groups() else: self.output.write(obj) self.output_width += width def breakable(self, sep=' '): """Add a breakable separator to the output. This does not mean that it will automatically break here. If no breaking on this position takes place the `sep` is inserted which default to one space. 
""" width = len(sep) group = self.group_stack[-1] if group.want_break: self.flush() self.output.write(self.newline) self.output.write(' ' * self.indentation) self.output_width = self.indentation self.buffer_width = 0 else: self.buffer.append(Breakable(sep, width, self)) self.buffer_width += width self._break_outer_groups() def break_(self): """Explicitly insert a newline into the output, maintaining correct indentation.""" self.flush() self.output.write(self.newline) self.output.write(' ' * self.indentation) self.output_width = self.indentation self.buffer_width = 0 def begin_group(self, indent=0, open=''): """ Begin a group. If you want support for python < 2.5 which doesn't has the with statement this is the preferred way: p.begin_group(1, '{') ... p.end_group(1, '}') The python 2.5 expression would be this: with p.group(1, '{', '}'): ... The first parameter specifies the indentation for the next line ( usually the width of the opening text), the second the opening text. All parameters are optional. """ if open: self.text(open) group = Group(self.group_stack[-1].depth + 1) self.group_stack.append(group) self.group_queue.enq(group) self.indentation += indent def _enumerate(self, seq): """like enumerate, but with an upper limit on the number of items.""" for idx, x in enumerate(seq): if self.max_seq_length and idx >= self.max_seq_length: self.text(',') self.breakable() self.text('...') return yield idx, x def end_group(self, dedent=0, close=''): """End a group. See `begin_group` for more details. 
""" self.indentation -= dedent group = self.group_stack.pop() if not group.breakables: self.group_queue.remove(group) if close: self.text(close) def flush(self): """Flush data that is left in the buffer.""" for data in self.buffer: self.output_width += data.output(self.output, self.output_width) self.buffer.clear() self.buffer_width = 0 def _get_mro(obj_class): """Get a reasonable method resolution order of a class and its superclasses for both old-style and new-style classes.""" if not hasattr(obj_class, '__mro__'): # pragma: no cover # Old-style class. Mix in object to make a fake new-style class. try: obj_class = type(obj_class.__name__, (obj_class, object), {}) except TypeError: # Old-style extension type that does not descend from object. # FIXME: try to construct a more thorough MRO. mro = [obj_class] else: mro = obj_class.__mro__[1:-1] else: mro = obj_class.__mro__ return mro class RepresentationPrinter(PrettyPrinter): """Special pretty printer that has a `pretty` method that calls the pretty printer for a python object. This class stores processing data on `self` so you must *never* use this class in a threaded environment. Always lock it or reinstanciate it. Instances also have a verbose flag callbacks can access to control their output. For example the default instance repr prints all attributes and methods that are not prefixed by an underscore if the printer is in verbose mode. 
""" def __init__(self, output, verbose=False, max_width=79, newline='\n', singleton_pprinters=None, type_pprinters=None, deferred_pprinters=None, max_seq_length=MAX_SEQ_LENGTH): PrettyPrinter.__init__(self, output, max_width, newline, max_seq_length=max_seq_length) self.verbose = verbose self.stack = [] if singleton_pprinters is None: singleton_pprinters = _singleton_pprinters.copy() self.singleton_pprinters = singleton_pprinters if type_pprinters is None: type_pprinters = _type_pprinters.copy() self.type_pprinters = type_pprinters if deferred_pprinters is None: deferred_pprinters = _deferred_type_pprinters.copy() self.deferred_pprinters = deferred_pprinters def pretty(self, obj): """Pretty print the given object.""" obj_id = id(obj) cycle = obj_id in self.stack self.stack.append(obj_id) self.begin_group() try: obj_class = _safe_getattr(obj, '__class__', None) or type(obj) # First try to find registered singleton printers for the type. try: printer = self.singleton_pprinters[obj_id] except (TypeError, KeyError): pass else: return printer(obj, self, cycle) # Next walk the mro and check for either: # 1) a registered printer # 2) a _repr_pretty_ method for cls in _get_mro(obj_class): if cls in self.type_pprinters: # printer registered in self.type_pprinters return self.type_pprinters[cls](obj, self, cycle) else: # deferred printer printer = self._in_deferred_types(cls) if printer is not None: return printer(obj, self, cycle) else: # Finally look for special method names. # Some objects automatically create any requested # attribute. Try to ignore most of them by checking for # callability. if '_repr_pretty_' in cls.__dict__: meth = cls._repr_pretty_ if callable(meth): return meth(obj, self, cycle) return _default_pprint(obj, self, cycle) finally: self.end_group() self.stack.pop() def _in_deferred_types(self, cls): """Check if the given class is specified in the deferred type registry. 
Returns the printer from the registry if it exists, and None if the class is not in the registry. Successful matches will be moved to the regular type registry for future use. """ mod = _safe_getattr(cls, '__module__', None) name = _safe_getattr(cls, '__name__', None) key = (mod, name) printer = None if key in self.deferred_pprinters: # Move the printer over to the regular registry. printer = self.deferred_pprinters.pop(key) self.type_pprinters[cls] = printer return printer class Printable(object): def output(self, stream, output_width): # pragma: no cover raise NotImplementedError() class Text(Printable): def __init__(self): self.objs = [] self.width = 0 def output(self, stream, output_width): for obj in self.objs: stream.write(obj) return output_width + self.width def add(self, obj, width): self.objs.append(obj) self.width += width class Breakable(Printable): def __init__(self, seq, width, pretty): self.obj = seq self.width = width self.pretty = pretty self.indentation = pretty.indentation self.group = pretty.group_stack[-1] self.group.breakables.append(self) def output(self, stream, output_width): self.group.breakables.popleft() if self.group.want_break: stream.write(self.pretty.newline) stream.write(' ' * self.indentation) return self.indentation if not self.group.breakables: self.pretty.group_queue.remove(self.group) stream.write(self.obj) return output_width + self.width class Group(Printable): def __init__(self, depth): self.depth = depth self.breakables = deque() self.want_break = False class GroupQueue(object): def __init__(self, *groups): self.queue = [] for group in groups: self.enq(group) def enq(self, group): depth = group.depth while depth > len(self.queue) - 1: self.queue.append([]) self.queue[depth].append(group) def deq(self): for stack in self.queue: for idx, group in enumerate(reversed(stack)): if group.breakables: del stack[idx] group.want_break = True return group for group in stack: group.want_break = True del stack[:] def remove(self, group): 
try: self.queue[group.depth].remove(group) except ValueError: pass try: _baseclass_reprs = (object.__repr__, types.InstanceType.__repr__) except AttributeError: # Python 3 _baseclass_reprs = (object.__repr__,) def _default_pprint(obj, p, cycle): """The default print function. Used if an object does not provide one and it's none of the builtin objects. """ klass = _safe_getattr(obj, '__class__', None) or type(obj) if _safe_getattr(klass, '__repr__', None) not in _baseclass_reprs: # A user-provided repr. Find newlines and replace them with p.break_() _repr_pprint(obj, p, cycle) return p.begin_group(1, '<') p.pretty(klass) p.text(' at 0x%x' % id(obj)) if cycle: p.text(' ...') elif p.verbose: first = True for key in dir(obj): if not key.startswith('_'): try: value = getattr(obj, key) except AttributeError: continue if isinstance(value, types.MethodType): continue if not first: p.text(',') p.breakable() p.text(key) p.text('=') step = len(key) + 1 p.indentation += step p.pretty(value) p.indentation -= step first = False p.end_group(1, '>') def _seq_pprinter_factory(start, end, basetype): """Factory that returns a pprint function useful for sequences. Used by the default pprint for tuples, dicts, and lists. """ def inner(obj, p, cycle): typ = type(obj) if ( basetype is not None and typ is not basetype and typ.__repr__ != basetype.__repr__ ): # If the subclass provides its own repr, use it instead. return p.text(typ.__repr__(obj)) if cycle: return p.text(start + '...' + end) step = len(start) p.begin_group(step, start) for idx, x in p._enumerate(obj): if idx: p.text(',') p.breakable() p.pretty(x) if len(obj) == 1 and type(obj) is tuple: # Special case for 1-item tuples. 
p.text(',') p.end_group(step, end) return inner def _set_pprinter_factory(start, end, basetype): """Factory that returns a pprint function useful for sets and frozensets.""" def inner(obj, p, cycle): typ = type(obj) if ( basetype is not None and typ is not basetype and typ.__repr__ != basetype.__repr__ ): # If the subclass provides its own repr, use it instead. return p.text(typ.__repr__(obj)) if cycle: return p.text(start + '...' + end) if len(obj) == 0: # Special case. p.text(basetype.__name__ + '()') else: step = len(start) p.begin_group(step, start) # Like dictionary keys, we will try to sort the items if there # aren't too many items = obj if not (p.max_seq_length and len(obj) >= p.max_seq_length): try: items = sorted(obj) except Exception: # Sometimes the items don't sort. pass for idx, x in p._enumerate(items): if idx: p.text(',') p.breakable() p.pretty(x) p.end_group(step, end) return inner def _dict_pprinter_factory(start, end, basetype=None): """Factory that returns a pprint function used by the default pprint of dicts and dict proxies.""" def inner(obj, p, cycle): typ = type(obj) if ( basetype is not None and typ is not basetype and typ.__repr__ != basetype.__repr__ ): # If the subclass provides its own repr, use it instead. return p.text(typ.__repr__(obj)) if cycle: return p.text('{...}') p.begin_group(1, start) keys = obj.keys() # if dict isn't large enough to be truncated, sort keys before # displaying if not (p.max_seq_length and len(obj) >= p.max_seq_length): try: keys = sorted(keys) except Exception: # Sometimes the keys don't sort. 
pass for idx, key in p._enumerate(keys): if idx: p.text(',') p.breakable() p.pretty(key) p.text(': ') p.pretty(obj[key]) p.end_group(1, end) inner.__name__ = '_dict_pprinter_factory(%r, %r, %r)' % ( start, end, basetype ) return inner def _super_pprint(obj, p, cycle): """The pprint for the super type.""" try: # This section works around various pypy versions that don't # have the same attributes on super objects obj.__thisclass__ obj.__self__ except AttributeError: # pragma: no cover assert PYPY _repr_pprint(obj, p, cycle) return p.begin_group(8, '<super: ') p.pretty(obj.__thisclass__) p.text(',') p.breakable() p.pretty(obj.__self__) p.end_group(8, '>') def _re_pattern_pprint(obj, p, cycle): """The pprint function for regular expression patterns.""" p.text('re.compile(') pattern = repr(obj.pattern) if pattern[:1] in 'uU': # pragma: no cover pattern = pattern[1:] prefix = 'ur' else: prefix = 'r' pattern = prefix + pattern.replace('\\\\', '\\') p.text(pattern) if obj.flags: p.text(',') p.breakable() done_one = False for flag in ('TEMPLATE', 'IGNORECASE', 'LOCALE', 'MULTILINE', 'DOTALL', 'UNICODE', 'VERBOSE', 'DEBUG'): if obj.flags & getattr(re, flag): if done_one: p.text('|') p.text('re.' + flag) done_one = True p.text(')') def _type_pprint(obj, p, cycle): """The pprint for classes and types.""" # Heap allocated types might not have the module attribute, # and others may set it to None. # Checks for a __repr__ override in the metaclass # != rather than is not because pypy compatibility if type(obj).__repr__ != type.__repr__: _repr_pprint(obj, p, cycle) return mod = _safe_getattr(obj, '__module__', None) try: name = obj.__qualname__ if not isinstance(name, string_types): # pragma: no cover # This can happen if the type implements __qualname__ as a property # or other descriptor in Python 2. raise Exception('Try __name__') except Exception: # pragma: no cover name = obj.__name__ if not isinstance(name, string_types): name = '<unknown type>' if mod in (None, '__builtin__', 'builtins', 'exceptions'): p.text(name) else: p.text(mod + '.'
+ name) def _repr_pprint(obj, p, cycle): """A pprint that just redirects to the normal repr function.""" # Find newlines and replace them with p.break_() output = repr(obj) for idx, output_line in enumerate(output.splitlines()): if idx: p.break_() p.text(output_line) def _function_pprint(obj, p, cycle): """Base pprint for all functions and builtin functions.""" name = _safe_getattr(obj, '__qualname__', obj.__name__) mod = obj.__module__ if mod and mod not in ('__builtin__', 'builtins', 'exceptions'): name = mod + '.' + name p.text('<function %s>' % name) def _exception_pprint(obj, p, cycle): """Base pprint for all exceptions.""" name = getattr(obj.__class__, '__qualname__', obj.__class__.__name__) if obj.__class__.__module__ not in ('exceptions', 'builtins'): name = '%s.%s' % (obj.__class__.__module__, name) step = len(name) + 1 p.begin_group(step, name + '(') for idx, arg in enumerate(getattr(obj, 'args', ())): if idx: p.text(',') p.breakable() p.pretty(arg) p.end_group(step, ')') #: the exception base try: _exception_base = BaseException except NameError: # pragma: no cover _exception_base = Exception #: printers for builtin types _type_pprinters = { int: _repr_pprint, float: _repr_pprint, str: _repr_pprint, tuple: _seq_pprinter_factory('(', ')', tuple), list: _seq_pprinter_factory('[', ']', list), dict: _dict_pprinter_factory('{', '}', dict), set: _set_pprinter_factory('{', '}', set), frozenset: _set_pprinter_factory('frozenset({', '})', frozenset), super: _super_pprint, _re_pattern_type: _re_pattern_pprint, type: _type_pprint, types.FunctionType: _function_pprint, types.BuiltinFunctionType: _function_pprint, types.MethodType: _repr_pprint, datetime.datetime: _repr_pprint, datetime.timedelta: _repr_pprint, _exception_base: _exception_pprint } try: # pragma: no cover if types.DictProxyType != dict: _type_pprinters[types.DictProxyType] = _dict_pprinter_factory( '<dictproxy {', '}>') _type_pprinters[types.ClassType] = _type_pprint _type_pprinters[types.SliceType] = _repr_pprint except
AttributeError: # Python 3 _type_pprinters[slice] = _repr_pprint try: # pragma: no cover _type_pprinters[xrange] = _repr_pprint _type_pprinters[long] = _repr_pprint _type_pprinters[unicode] = _repr_pprint except NameError: _type_pprinters[range] = _repr_pprint _type_pprinters[bytes] = _repr_pprint #: printers for types specified by name _deferred_type_pprinters = { } def for_type_by_name(type_module, type_name, func): """Add a pretty printer for a type specified by the module and name of a type rather than the type object itself.""" key = (type_module, type_name) oldfunc = _deferred_type_pprinters.get(key, None) _deferred_type_pprinters[key] = func return oldfunc #: printers for the default singletons _singleton_pprinters = dict.fromkeys(map(id, [None, True, False, Ellipsis, NotImplemented]), _repr_pprint) def _defaultdict_pprint(obj, p, cycle): name = obj.__class__.__name__ with p.group(len(name) + 1, name + '(', ')'): if cycle: p.text('...') else: p.pretty(obj.default_factory) p.text(',') p.breakable() p.pretty(dict(obj)) def _ordereddict_pprint(obj, p, cycle): name = obj.__class__.__name__ with p.group(len(name) + 1, name + '(', ')'): if cycle: p.text('...') elif len(obj): p.pretty(list(obj.items())) def _deque_pprint(obj, p, cycle): name = obj.__class__.__name__ with p.group(len(name) + 1, name + '(', ')'): if cycle: p.text('...') else: p.pretty(list(obj)) def _counter_pprint(obj, p, cycle): name = obj.__class__.__name__ with p.group(len(name) + 1, name + '(', ')'): if cycle: p.text('...') elif len(obj): p.pretty(dict(obj)) for_type_by_name('collections', 'defaultdict', _defaultdict_pprint) for_type_by_name('collections', 'OrderedDict', _ordereddict_pprint) for_type_by_name('ordereddict', 'OrderedDict', _ordereddict_pprint) for_type_by_name('collections', 'deque', _deque_pprint) for_type_by_name('collections', 'Counter', _counter_pprint) for_type_by_name('counter', 'Counter', _counter_pprint) for_type_by_name('_collections', 'defaultdict', _defaultdict_pprint) 
for_type_by_name('_collections', 'OrderedDict', _ordereddict_pprint) for_type_by_name('_collections', 'deque', _deque_pprint) for_type_by_name('_collections', 'Counter', _counter_pprint) hypothesis-python-3.44.1/src/hypothesis/version.py000066400000000000000000000014241321557765100223630ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import __version_info__ = (3, 44, 1) __version__ = '.'.join(map(str, __version_info__)) hypothesis-python-3.44.1/tests/000077500000000000000000000000001321557765100164775ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/__init__.py000066400000000000000000000012001321557765100206010ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER hypothesis-python-3.44.1/tests/common/000077500000000000000000000000001321557765100177675ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/common/__init__.py000066400000000000000000000062511321557765100221040ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER import sys from collections import namedtuple try: import pytest except ImportError: pytest = None from tests.common.debug import TIME_INCREMENT from hypothesis.strategies import integers, floats, just, one_of, \ sampled_from, lists, booleans, dictionaries, tuples, \ frozensets, complex_numbers, sets, text, binary, decimals, fractions, \ none, randoms, builds, fixed_dictionaries, recursive __all__ = ['small_verifier', 'standard_types', 'OrderedPair', 'TIME_INCREMENT'] OrderedPair = namedtuple('OrderedPair', ('left', 'right')) ordered_pair = integers().flatmap( lambda right: integers(min_value=0).map( lambda length: OrderedPair(right - length, right))) def constant_list(strat): return strat.flatmap( lambda v: lists(just(v), average_size=10), ) ABC = namedtuple('ABC', ('a', 'b', 'c')) def abc(x, y, z): return builds(ABC, x, y, z) standard_types = [ lists(max_size=0), tuples(), sets(max_size=0), frozensets(max_size=0), fixed_dictionaries({}), abc(booleans(), booleans(), booleans()), abc(booleans(), booleans(), integers()), fixed_dictionaries({'a': integers(), 'b': booleans()}), 
dictionaries(booleans(), integers()), dictionaries(text(), booleans()), one_of(integers(), tuples(booleans())), sampled_from(range(10)), one_of(just('a'), just('b'), just('c')), sampled_from(('a', 'b', 'c')), integers(), integers(min_value=3), integers(min_value=(-2 ** 32), max_value=(2 ** 64)), floats(), floats(min_value=-2.0, max_value=3.0), floats(), floats(min_value=-2.0), floats(), floats(max_value=-0.0), floats(), floats(min_value=0.0), floats(min_value=3.14, max_value=3.14), text(), binary(), booleans(), tuples(booleans(), booleans()), frozensets(integers()), sets(frozensets(booleans())), complex_numbers(), fractions(), decimals(), lists(lists(booleans(), average_size=10), average_size=10), lists(lists(booleans(), average_size=100)), lists(floats(0.0, 0.0), average_size=1.0), ordered_pair, constant_list(integers()), integers().filter(lambda x: abs(x) > 100), floats(min_value=-sys.float_info.max, max_value=sys.float_info.max), none(), randoms(), booleans().flatmap(lambda x: booleans() if x else complex_numbers()), recursive( base=booleans(), extend=lambda x: lists(x, max_size=3), max_leaves=10, ) ] if pytest is not None: def parametrize(args, values): return pytest.mark.parametrize( args, values, ids=list(map(repr, values))) hypothesis-python-3.44.1/tests/common/arguments.py000066400000000000000000000027001321557765100223450ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import given from hypothesis.errors import InvalidArgument def e(a, *args, **kwargs): return (a, args, kwargs) def e_to_str(elt): f, args, kwargs = elt bits = list(map(repr, args)) bits.extend(sorted('%s=%r' % (k, v) for k, v in kwargs.items())) return '%s(%s)' % (f.__name__, ', '.join(bits)) def argument_validation_test(bad_args): @pytest.mark.parametrize( ('function', 'args', 'kwargs'), bad_args, ids=list(map(e_to_str, bad_args)) ) def test_raise_invalid_argument(function, args, kwargs): @given(function(*args, **kwargs)) def test(x): pass with pytest.raises(InvalidArgument): test() return test_raise_invalid_argument hypothesis-python-3.44.1/tests/common/debug.py000066400000000000000000000061131321557765100214300ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import sys from hypothesis import settings as Settings from hypothesis import find, given, assume, reject from hypothesis.errors import NoSuchExample, Unsatisfiable TIME_INCREMENT = 0.01 class Timeout(BaseException): pass def minimal( definition, condition=None, settings=None, timeout_after=10, random=None ): settings = Settings( settings, max_examples=50000, max_iterations=100000, max_shrinks=5000, database=None, ) runtime = [] if condition is None: def condition(x): return True def wrapped_condition(x): if runtime: runtime[0] += TIME_INCREMENT if runtime[0] >= timeout_after: raise Timeout() result = condition(x) if result and not runtime: runtime.append(0.0) return result try: orig = sys.gettrace() return find( definition, wrapped_condition, settings=settings, random=random, ) finally: sys.settrace(orig) def find_any( definition, condition=None, settings=None, random=None ): settings = Settings( settings, max_examples=10000, max_iterations=10000, max_shrinks=0000, database=None, ) if condition is None: def condition(x): return True return find( definition, condition, settings=settings, random=random, ) def assert_no_examples(strategy, condition=None): if condition is None: def predicate(x): reject() else: def predicate(x): assume(condition(x)) try: result = find( strategy, predicate, settings=Settings(max_iterations=100, max_shrinks=1) ) assert False, 'Expected no results but found %r' % (result,) except (Unsatisfiable, NoSuchExample): pass def assert_all_examples(strategy, predicate): """Asserts that all examples of the given strategy match the predicate. 
:param strategy: Hypothesis strategy to check :param predicate: (callable) Predicate that takes example and returns bool """ @given(strategy) def assert_examples(s): assert predicate(s), \ 'Found %r using strategy %s which does not match' % (s, strategy) assert_examples() hypothesis-python-3.44.1/tests/common/setup.py000066400000000000000000000042771321557765100215130ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import warnings from tempfile import mkdtemp from hypothesis import settings, unlimited from hypothesis.errors import HypothesisDeprecationWarning from hypothesis.configuration import set_hypothesis_home_dir from hypothesis.internal.charmap import charmap, charmap_file from hypothesis.internal.coverage import IN_COVERAGE_TESTS def run(): warnings.filterwarnings('error', category=UnicodeWarning) warnings.filterwarnings('error', category=HypothesisDeprecationWarning) set_hypothesis_home_dir(mkdtemp()) charmap() assert os.path.exists(charmap_file()), charmap_file() assert isinstance(settings, type) # We do a smoke test here before we mess around with settings. x = settings() import hypothesis._settings as settings_module for s in settings_module.all_settings.values(): v = getattr(x, s.name) # Check if it has a dynamically defined default and if so skip # comparison. 
if getattr(settings, s.name).show_default: assert v == s.default, '%r == x.%s != s.%s == %r' % ( v, s.name, s.name, s.default, ) settings.register_profile('default', settings( timeout=unlimited, use_coverage=not IN_COVERAGE_TESTS)) settings.register_profile('with_coverage', settings( timeout=unlimited, use_coverage=True, )) settings.register_profile( 'speedy', settings( max_examples=5, )) settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default')) hypothesis-python-3.44.1/tests/common/strategies.py000066400000000000000000000032251321557765100225150ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import time from hypothesis.searchstrategy import SearchStrategy from hypothesis.internal.compat import hrange class _Slow(SearchStrategy): def do_draw(self, data): time.sleep(1.0) data.draw_bytes(2) return None SLOW = _Slow() class HardToShrink(SearchStrategy): def __init__(self): self.__last = None self.accepted = set() def do_draw(self, data): x = data.draw_bytes(100) if x in self.accepted: return True ls = self.__last if ls is None: if all(x): self.__last = x self.accepted.add(x) return True else: return False diffs = [i for i in hrange(len(x)) if x[i] != ls[i]] if len(diffs) == 1: i = diffs[0] if x[i] + 1 == ls[i]: self.__last = x self.accepted.add(x) return True return False hypothesis-python-3.44.1/tests/common/utils.py000066400000000000000000000054131321557765100215040ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import sys import traceback import contextlib from io import BytesIO, StringIO from hypothesis.errors import HypothesisDeprecationWarning from hypothesis.reporting import default, with_reporter from hypothesis.internal.compat import PY2 from hypothesis.internal.reflection import proxies @contextlib.contextmanager def capture_out(): old_out = sys.stdout try: new_out = BytesIO() if PY2 else StringIO() sys.stdout = new_out with with_reporter(default): yield new_out finally: sys.stdout = old_out class ExcInfo(object): pass @contextlib.contextmanager def raises(exctype): e = ExcInfo() try: yield e assert False, "Expected to raise an exception but didn't" except exctype as err: traceback.print_exc() e.value = err return def fails_with(e): def accepts(f): @proxies(f) def inverted_test(*arguments, **kwargs): with raises(e): f(*arguments, **kwargs) return inverted_test return accepts fails = fails_with(AssertionError) @contextlib.contextmanager def validate_deprecation(): import warnings try: warnings.simplefilter('always', HypothesisDeprecationWarning) with warnings.catch_warnings(record=True) as w: yield assert any( e.category == HypothesisDeprecationWarning for e in w ), 'Expected to get a deprecation warning but got %r' % ( [e.category for e in w],) finally: warnings.simplefilter('error', HypothesisDeprecationWarning) def checks_deprecated_behaviour(func): """A decorator for testing deprecated behaviour.""" @proxies(func) def _inner(*args, **kwargs): with validate_deprecation(): return func(*args, **kwargs) return _inner def all_values(db): return set(v for vs in db.data.values() for v in vs) def non_covering_examples(database): return { v for k, vs in database.data.items() if not k.endswith(b'.coverage') for v in vs } hypothesis-python-3.44.1/tests/conftest.py000066400000000000000000000046351321557765100207060ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, 
which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import gc import sys import time as time_module import pytest from tests.common import TIME_INCREMENT from tests.common.setup import run from hypothesis.internal.coverage import IN_COVERAGE_TESTS run() def pytest_configure(config): config.addinivalue_line( 'markers', 'slow: pandas expects this marker to exist.') @pytest.fixture(scope=u'function', autouse=True) def gc_before_each_test(): gc.collect() @pytest.fixture(scope=u'function', autouse=True) def consistently_increment_time(monkeypatch): """Rather than rely on real system time we monkey patch time.time so that it passes at a consistent rate between calls. The reason for this is that when these tests run on travis, performance is extremely variable and the VM the tests are on might go to sleep for a bit, introducing arbitrary delays. This can cause a number of tests to fail flakily. Replacing time with a fake version under our control avoids this problem. 
""" frozen = [False] current_time = [time_module.time()] def time(): if not frozen[0]: current_time[0] += TIME_INCREMENT return current_time[0] def sleep(naptime): current_time[0] += naptime def freeze(): frozen[0] = True monkeypatch.setattr(time_module, 'time', time) try: monkeypatch.setattr(time_module, 'monotonic', time) except AttributeError: assert sys.version_info[0] == 2 monkeypatch.setattr(time_module, 'sleep', sleep) monkeypatch.setattr(time_module, 'freeze', freeze, raising=False) if not IN_COVERAGE_TESTS: @pytest.fixture(scope=u'function', autouse=True) def validate_lack_of_trace_function(): assert sys.gettrace() is None hypothesis-python-3.44.1/tests/cover/000077500000000000000000000000001321557765100176155ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/cover/__init__.py000066400000000000000000000012001321557765100217170ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/cover/test_arbitrary_data.py000066400000000000000000000057741321557765100242330ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import strategies as st from hypothesis import find, given, reporting from hypothesis.errors import InvalidArgument from tests.common.utils import raises, capture_out @given(st.integers(), st.data()) def test_conditional_draw(x, data): y = data.draw(st.integers(min_value=x)) assert y >= x def test_prints_on_failure(): @given(st.data()) def test(data): x = data.draw(st.lists(st.integers(), min_size=1)) y = data.draw(st.sampled_from(x)) x.remove(y) if y in x: raise ValueError() with raises(ValueError): with capture_out() as out: with reporting.with_reporter(reporting.default): test() result = out.getvalue() assert 'Draw 1: [0, 0]' in result assert 'Draw 2: 0' in result def test_prints_labels_if_given_on_failure(): @given(st.data()) def test(data): x = data.draw(st.lists(st.integers(), min_size=1), label='Some numbers') y = data.draw(st.sampled_from(x), label='A number') assert y in x x.remove(y) assert y not in x with raises(AssertionError): with capture_out() as out: with reporting.with_reporter(reporting.default): test() result = out.getvalue() assert 'Draw 1 (Some numbers): [0, 0]' in result assert 'Draw 2 (A number): 0' in result def test_given_twice_is_same(): @given(st.data(), st.data()) def test(data1, data2): data1.draw(st.integers()) data2.draw(st.integers()) raise ValueError() with raises(ValueError): with capture_out() as out: with reporting.with_reporter(reporting.default): test() result = out.getvalue() assert 'Draw 1: 0' in result assert 'Draw 2: 0' in result def 
test_errors_when_used_in_find(): with raises(InvalidArgument): find(st.data(), lambda x: x.draw(st.booleans())) @pytest.mark.parametrize('f', [ 'filter', 'map', 'flatmap', ]) def test_errors_when_normal_strategy_functions_are_used(f): with raises(InvalidArgument): getattr(st.data(), f)(lambda x: 1) def test_errors_when_asked_for_example(): with raises(InvalidArgument): st.data().example() def test_nice_repr(): assert repr(st.data()) == 'data()' hypothesis-python-3.44.1/tests/cover/test_argument_validation.py000066400000000000000000000030241321557765100252610ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from tests.common.arguments import e, argument_validation_test

BAD_ARGS = []


def adjust(ex, **kwargs):
    f, a, b = ex
    b = dict(b)
    b.update(kwargs)
    BAD_ARGS.append((f, a, b))


for ex in [
    e(st.lists, st.integers()),
    e(st.sets, st.integers()),
    e(st.frozensets, st.integers()),
    e(st.dictionaries, st.integers(), st.integers()),
    e(st.text),
    e(st.binary)
]:
    adjust(ex, average_size=10, max_size=9)
    adjust(ex, average_size=10, min_size=11)
    adjust(ex, min_size=-1)
    adjust(ex, average_size=-1)
    adjust(ex, max_size=-1)
    adjust(ex, min_size='no')
    adjust(ex, average_size='no')
    adjust(ex, max_size='no')

BAD_ARGS.extend([
    e(st.lists, st.nothing(), unique=True, min_size=1),
])

test_raise_invalid_argument = argument_validation_test(BAD_ARGS)

hypothesis-python-3.44.1/tests/cover/test_bad_repr.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import given
from hypothesis.internal.compat import PY3
from hypothesis.internal.reflection import arg_string


class BadRepr(object):

    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return self.value


Frosty = BadRepr('☃')


def test_just_frosty():
    assert repr(st.just(Frosty)) == 'just(☃)'


def test_sampling_snowmen():
    assert repr(st.sampled_from((
        Frosty, 'hi'))) == 'sampled_from((☃, %s))' % (repr('hi'),)


def varargs(*args, **kwargs):
    pass


@pytest.mark.skipif(PY3, reason='Unicode repr is kosher on python 3')
def test_arg_strings_are_bad_repr_safe():
    assert arg_string(varargs, (Frosty,), {}) == '☃'


@pytest.mark.skipif(PY3, reason='Unicode repr is kosher on python 3')
def test_arg_string_kwargs_are_bad_repr_safe():
    assert arg_string(varargs, (), {'x': Frosty}) == 'x=☃'


@given(st.sampled_from([
    '✐', '✑', '✒', '✓', '✔', '✕', '✖', '✗', '✘', '✙', '✚',
    '✛', '✜', '✝', '✞', '✟', '✠', '✡', '✢', '✣']))
def test_sampled_from_bad_repr(c):
    pass

hypothesis-python-3.44.1/tests/cover/test_cache_implementation.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from random import Random

import pytest

import hypothesis.strategies as st
from hypothesis import HealthCheck, note, given, assume, example, settings
from hypothesis.internal.cache import GenericCache, LRUReusedCache


class LRUCache(GenericCache):
    __slots__ = ('__tick',)

    def __init__(self, max_size):
        super(LRUCache, self).__init__(max_size)
        self.__tick = 0

    def new_entry(self, key, value):
        return self.tick()

    def on_access(self, key, value, score):
        return self.tick()

    def tick(self):
        self.__tick += 1
        return self.__tick


class LFUCache(GenericCache):

    def new_entry(self, key, value):
        return 1

    def on_access(self, key, value, score):
        return score + 1


@st.composite
def write_pattern(draw, min_size=0):
    keys = draw(st.lists(st.integers(0, 1000), unique=True, min_size=1))
    values = draw(st.lists(st.integers(), unique=True, min_size=1))
    return draw(
        st.lists(st.tuples(st.sampled_from(keys), st.sampled_from(values)),
                 min_size=min_size))


class ValueScored(GenericCache):

    def new_entry(self, key, value):
        return value


class RandomCache(GenericCache):

    def __init__(self, max_size):
        super(RandomCache, self).__init__(max_size)
        self.random = Random(0)

    def new_entry(self, key, value):
        return self.random.random()

    def on_access(self, key, value, score):
        return self.random.random()


@pytest.mark.parametrize(
    'implementation', [
        LRUCache, LFUCache, LRUReusedCache, ValueScored, RandomCache
    ]
)
@example(writes=[(0, 0), (3, 0), (1, 0), (2, 0), (2, 0), (1, 0)], size=4)
@example(writes=[(0, 0)], size=1)
@example(writes=[(1, 0), (2, 0), (0, -1), (1, 0)], size=3)
@given(write_pattern(), st.integers(1, 10))
def test_behaves_like_a_dict_with_losses(implementation, writes, size):
    model = {}
    target = implementation(max_size=size)

    for k, v in writes:
        try:
            assert model[k] == target[k]
        except KeyError:
            pass
        model[k] = v
        target[k] = v
        target.check_valid()
        assert target[k] == v
        for r, s in model.items():
            try:
                assert s == target[r]
            except KeyError:
                pass
        assert len(target) <= min(len(model), size)


@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
@given(write_pattern(min_size=2), st.data())
def test_always_evicts_the_lowest_scoring_value(writes, data):
    scores = {}

    n_keys = len({k for k, _ in writes})
    assume(n_keys > 1)

    size = data.draw(st.integers(1, n_keys - 1))

    evicted = set()

    def new_score(key):
        scores[key] = data.draw(
            st.integers(0, 1000), label='scores[%r]' % (key,))
        return scores[key]

    last_entry = [None]

    class Cache(GenericCache):

        def new_entry(self, key, value):
            last_entry[0] = key
            evicted.discard(key)
            assert key not in scores
            return new_score(key)

        def on_access(self, key, value, score):
            assert key in scores
            return new_score(key)

        def on_evict(self, key, value, score):
            note('Evicted %r' % (key,))
            assert score == scores[key]
            del scores[key]
            if len(scores) > 1:
                assert score <= min(
                    v for k, v in scores.items() if k != last_entry[0]
                )
            evicted.add(key)

    target = Cache(max_size=size)
    model = {}

    for k, v in writes:
        target[k] = v
        model[k] = v

    assert evicted
    assert len(evicted) + len(target) == len(model)
    assert len(scores) == len(target)

    for k, v in model.items():
        try:
            assert target[k] == v
            assert k not in evicted
        except KeyError:
            assert k in evicted


def test_basic_access():
    cache = ValueScored(max_size=2)
    cache[1] = 0
    cache[1] = 0
    cache[0] = 1
    cache[2] = 0
    assert cache[2] == 0
    assert cache[0] == 1
    assert len(cache) == 2


def test_can_clear_a_cache():
    x = ValueScored(1)
    x[0] = 1
    assert len(x) == 1
    x.clear()
    assert len(x) == 0


def test_max_size_cache_ignores():
    x = ValueScored(0)
    x[0] = 1
    with pytest.raises(KeyError):
        x[0]

hypothesis-python-3.44.1/tests/cover/test_cacheable.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st


@pytest.mark.parametrize('s', [
    st.floats(),
    st.tuples(st.integers()),
    st.tuples(),
    st.one_of(st.integers(), st.text()),
])
def test_is_cacheable(s):
    assert s.is_cacheable


@pytest.mark.parametrize('s', [
    st.just([]),
    st.tuples(st.integers(), st.just([])),
    st.one_of(st.integers(), st.text(), st.just([])),
])
def test_is_not_cacheable(s):
    assert not s.is_cacheable


def test_non_cacheable_things_are_not_cached():
    x = st.just([])
    assert st.tuples(x) != st.tuples(x)


def test_cacheable_things_are_cached():
    x = st.just(())
    assert st.tuples(x) == st.tuples(x)

hypothesis-python-3.44.1/tests/cover/test_caching.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis.errors import InvalidArgument


def test_no_args():
    assert st.text() is st.text()


def test_tuple_lengths():
    assert st.tuples(st.integers()) is st.tuples(st.integers())
    assert st.tuples(st.integers()) is not st.tuples(
        st.integers(), st.integers())


def test_values():
    assert st.integers() is not st.integers(min_value=1)


def test_alphabet_key():
    assert st.text(alphabet='abcs') is st.text(alphabet='abcs')


def test_does_not_error_on_unhashable_posarg():
    st.text(['a', 'b', 'c'])


def test_does_not_error_on_unhashable_kwarg():
    with pytest.raises(InvalidArgument):
        st.builds(lambda alphabet: 1, alphabet=['a', 'b', 'c']).validate()


def test_caches_floats_sensitively():
    assert st.floats(min_value=0.0) is st.floats(min_value=0.0)
    assert st.floats(min_value=0.0) is not st.floats(min_value=0)
    assert st.floats(min_value=0.0) is not st.floats(min_value=-0.0)

hypothesis-python-3.44.1/tests/cover/test_charmap.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import sys
import unicodedata

import hypothesis.strategies as st
import hypothesis.internal.charmap as cm
from hypothesis import given, assume
from hypothesis.internal.compat import hunichr


def test_charmap_contains_all_unicode():
    n = 0
    for vs in cm.charmap().values():
        for u, v in vs:
            n += (v - u + 1)
    assert n == sys.maxunicode + 1


def test_charmap_has_right_categories():
    for cat, intervals in cm.charmap().items():
        for u, v in intervals:
            for i in range(u, v + 1):
                real = unicodedata.category(hunichr(i))
                assert real == cat, \
                    '%d is %s but reported in %s' % (i, real, cat)


def assert_valid_range_list(ls):
    for u, v in ls:
        assert u <= v
    for i in range(len(ls) - 1):
        assert ls[i] <= ls[i + 1]
        assert ls[i][-1] < ls[i + 1][0]


@given(
    st.sets(st.sampled_from(cm.categories())),
    st.sets(st.sampled_from(cm.categories())) | st.none(),
)
def test_query_matches_categories(exclude, include):
    values = cm.query(exclude, include)
    assert_valid_range_list(values)
    for u, v in values:
        for i in (u, v, (u + v) // 2):
            cat = unicodedata.category(hunichr(i))
            if include is not None:
                assert cat in include
            assert cat not in exclude


@given(
    st.sets(st.sampled_from(cm.categories())),
    st.sets(st.sampled_from(cm.categories())) | st.none(),
    st.integers(0, sys.maxunicode),
    st.integers(0, sys.maxunicode),
)
def test_query_matches_categories_codepoints(exclude, include, m1, m2):
    m1, m2 = sorted((m1, m2))
    values = cm.query(exclude, include, min_codepoint=m1, max_codepoint=m2)
    assert_valid_range_list(values)
    for u, v in values:
        assert m1 <= u
        assert v <= m2


@given(st.sampled_from(cm.categories()), st.integers(0, sys.maxunicode))
def test_exclude_only_excludes_from_that_category(cat, i):
    c = hunichr(i)
    assume(unicodedata.category(c) != cat)
    intervals = cm.query(exclude_categories=(cat,))
    assert any(a <= i <= b for a, b in intervals)


def test_reload_charmap():
    x = cm.charmap()
    assert x is cm.charmap()
    cm._charmap = None
    y = cm.charmap()
    assert x is not y
    assert x == y


def test_recreate_charmap():
    x = cm.charmap()
    assert x is cm.charmap()
    cm._charmap = None
    os.unlink(cm.charmap_file())
    y = cm.charmap()
    assert x is not y
    assert x == y


def test_union_empty():
    assert cm._union_intervals([], []) == ()
    assert cm._union_intervals([], [[1, 2]]) == ((1, 2),)
    assert cm._union_intervals([[1, 2]], []) == ((1, 2),)


def test_union_handles_totally_overlapped_gap():
    # < xx >   Imagine the intervals x and y as bit strings.
    # |        The bit at position n is set if n falls inside that interval.
    # =        In this model _union_intervals() performs bit-wise or.
    assert cm._union_intervals([[2, 3]], [[1, 2], [4, 5]]) == ((1, 5),)


def test_union_handles_partially_overlapped_gap():
    # < x >    Imagine the intervals x and y as bit strings.
    # |        The bit at position n is set if n falls inside that interval.
    # =        In this model _union_intervals() performs bit-wise or.
    assert cm._union_intervals([[3, 3]], [[1, 2], [5, 5]]) == ((1, 3), (5, 5))


def test_successive_union():
    x = []
    for v in cm.charmap().values():
        x = cm._union_intervals(x, v)
    assert x == ((0, sys.maxunicode),)


def test_can_handle_race_between_exist_and_create(monkeypatch):
    x = cm.charmap()
    cm._charmap = None
    monkeypatch.setattr(os.path, 'exists', lambda p: False)
    y = cm.charmap()
    assert x is not y
    assert x == y


def test_exception_in_write_does_not_lead_to_broken_charmap(monkeypatch):
    def broken(*args, **kwargs):
        raise ValueError()

    cm._charmap = None
    monkeypatch.setattr(os.path, 'exists', lambda p: False)
    monkeypatch.setattr(os, 'rename', broken)

    cm.charmap()
    cm.charmap()


def test_regenerate_broken_charmap_file():
    cm.charmap()
    file_loc = cm.charmap_file()

    with open(file_loc, 'wb'):
        pass

    cm._charmap = None
    cm.charmap()


def test_exclude_characters_are_included_in_key():
    assert cm.query() != cm.query(exclude_characters='0')

hypothesis-python-3.44.1/tests/cover/test_choices.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
from tests.common.utils import checks_deprecated_behaviour


@checks_deprecated_behaviour
def test_exhaustion():
    @given(st.lists(st.text(), min_size=10), st.choices())
    def test(ls, choice):
        while ls:
            s = choice(ls)
            assert s in ls
            ls.remove(s)
    test()


@checks_deprecated_behaviour
def test_choice_is_shared():
    @given(st.choices(), st.choices())
    def test(choice1, choice2):
        assert choice1 is choice2
    test()


@checks_deprecated_behaviour
def test_cannot_use_choices_within_find():
    with pytest.raises(InvalidArgument):
        find(st.choices(), lambda c: True)


@checks_deprecated_behaviour
def test_fails_to_draw_from_empty_sequence():
    @given(st.choices())
    def test(choice):
        choice([])

    with pytest.raises(IndexError):
        test()

hypothesis-python-3.44.1/tests/cover/test_completion.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis import strategies as st
from hypothesis import given, settings


@given(st.data())
def test_never_draw_anything(data):
    pass


@settings(min_satisfying_examples=1000)
@given(st.booleans())
def test_want_more_than_exist(b):
    pass

hypothesis-python-3.44.1/tests/cover/test_composite.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest
from flaky import flaky

import hypothesis.strategies as st
from hypothesis import find, given, assume
from hypothesis.errors import InvalidArgument
from tests.common.debug import minimal
from hypothesis.internal.compat import hrange


@st.composite
def badly_draw_lists(draw, m=0):
    length = draw(st.integers(m, m + 10))
    return [
        draw(st.integers()) for _ in hrange(length)
    ]


def test_simplify_draws():
    assert find(badly_draw_lists(), lambda x: len(x) >= 3) == [0] * 3


def test_can_pass_through_arguments():
    assert find(badly_draw_lists(5), lambda x: True) == [0] * 5
    assert find(badly_draw_lists(m=6), lambda x: True) == [0] * 6


@st.composite
def draw_ordered_with_assume(draw):
    x = draw(st.floats())
    y = draw(st.floats())
    assume(x < y)
    return (x, y)


@given(draw_ordered_with_assume())
def test_can_assume_in_draw(xy):
    assert xy[0] < xy[1]


def test_uses_definitions_for_reprs():
    assert repr(badly_draw_lists()) == 'badly_draw_lists()'
    assert repr(badly_draw_lists(1)) == 'badly_draw_lists(m=1)'
    assert repr(badly_draw_lists(m=1)) == 'badly_draw_lists(m=1)'


def test_errors_given_default_for_draw():
    with pytest.raises(InvalidArgument):
        @st.composite
        def foo(x=None):
            pass


def test_errors_given_function_of_no_arguments():
    with pytest.raises(InvalidArgument):
        @st.composite
        def foo():
            pass


def test_errors_given_kwargs_only():
    with pytest.raises(InvalidArgument):
        @st.composite
        def foo(**kwargs):
            pass


def test_can_use_pure_args():
    @st.composite
    def stuff(*args):
        return args[0](st.sampled_from(args[1:]))
    assert find(stuff(1, 2, 3, 4, 5), lambda x: True) == 1


def test_composite_of_lists():
    @st.composite
    def f(draw):
        return draw(st.integers()) + draw(st.integers())

    assert find(st.lists(f()), lambda x: len(x) >= 10) == [0] * 10


@flaky(min_passes=5, max_runs=5)
def test_can_shrink_matrices_with_length_param():
    @st.composite
    def matrix(draw):
        rows = draw(st.integers(1, 10))
        columns = draw(st.integers(1, 10))
        return [
            [draw(st.integers(0, 10000)) for _ in range(columns)]
            for _ in range(rows)
        ]

    def transpose(m):
        rows = len(m)
        columns = len(m[0])
        result = [
            [None] * rows
            for _ in range(columns)
        ]
        for i in range(rows):
            for j in range(columns):
                result[j][i] = m[i][j]
        return result

    def is_square(m):
        return len(m) == len(m[0])

    value = minimal(matrix(), lambda m: is_square(m) and transpose(m) != m)
    assert len(value) == 2
    assert len(value[0]) == 2
    assert sorted(value[0] + value[1]) == [0, 0, 0, 1]


class MyList(list):
    pass


@given(st.data(), st.lists(st.integers()).map(MyList))
def test_does_not_change_arguments(data, ls):
    # regression test for issue #1017 or other argument mutation
    @st.composite
    def strat(draw, arg):
        return arg

    ex = data.draw(strat(ls))
    assert ex is ls

hypothesis-python-3.44.1/tests/cover/test_conjecture_engine.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time
from random import seed as seed_random
from random import Random

import pytest

from hypothesis import Phase, HealthCheck, settings, unlimited
from hypothesis.errors import FailedHealthCheck
from tests.common.utils import all_values, checks_deprecated_behaviour
from hypothesis.database import ExampleDatabase, InMemoryExampleDatabase
from tests.common.strategies import SLOW, HardToShrink
from hypothesis.internal.compat import hbytes, hrange, int_from_bytes
from hypothesis.internal.conjecture.data import Status, ConjectureData
from hypothesis.internal.conjecture.engine import ConjectureRunner

MAX_SHRINKS = 1000


def run_to_buffer(f):
    runner = ConjectureRunner(f, settings=settings(
        max_examples=5000, max_iterations=10000, max_shrinks=MAX_SHRINKS,
        buffer_size=1024, database=None, perform_health_check=False,
    ))
    runner.run()
    assert runner.last_data.status == Status.INTERESTING
    return hbytes(runner.last_data.buffer)


def test_can_index_results():
    @run_to_buffer
    def f(data):
        data.draw_bytes(5)
        data.mark_interesting()
    assert f.index(0) == 0
    assert f.count(0) == 5


def test_non_cloneable_intervals():
    @run_to_buffer
    def x(data):
        data.draw_bytes(10)
        data.draw_bytes(9)
        data.mark_interesting()
    assert x == hbytes(19)


def test_duplicate_buffers():
    @run_to_buffer
    def x(data):
        t = data.draw_bytes(10)
        if not any(t):
            data.mark_invalid()
        s = data.draw_bytes(10)
        if s == t:
            data.mark_interesting()
    assert x == hbytes([0] * 9 + [1]) * 2


def test_deletable_draws():
    @run_to_buffer
    def x(data):
        while True:
            x = data.draw_bytes(2)
            if x[0] == 255:
                data.mark_interesting()
    assert x == hbytes([255, 0])


def zero_dist(random, n):
    return hbytes(n)


def test_can_load_data_from_a_corpus():
    key = b'hi there'
    db = ExampleDatabase()
    value = b'=\xc3\xe4l\x81\xe1\xc2H\xc9\xfb\x1a\xb6bM\xa8\x7f'
    db.save(key, value)

    def f(data):
        if data.draw_bytes(len(value)) == value:
            data.mark_interesting()

    runner = ConjectureRunner(
        f, settings=settings(database=db), database_key=key)
    runner.run()
    assert runner.last_data.status == Status.INTERESTING
    assert runner.last_data.buffer == value
    assert len(list(db.fetch(key))) == 1


def slow_shrinker():
    strat = HardToShrink()

    def accept(data):
        if data.draw(strat):
            data.mark_interesting()
    return accept


@pytest.mark.parametrize('n', [1, 5])
def test_terminates_shrinks(n, monkeypatch):
    db = InMemoryExampleDatabase()

    def generate_new_examples(self):
        def draw_bytes(data, n):
            return hbytes([255] * n)

        self.test_function(ConjectureData(
            draw_bytes=draw_bytes, max_length=self.settings.buffer_size))

    monkeypatch.setattr(
        ConjectureRunner, 'generate_new_examples', generate_new_examples)

    runner = ConjectureRunner(slow_shrinker(), settings=settings(
        max_examples=5000, max_iterations=10000, max_shrinks=n,
        database=db, timeout=unlimited,
    ), random=Random(0), database_key=b'key')
    runner.run()
    assert runner.last_data.status == Status.INTERESTING
    assert runner.shrinks == n
    in_db = set(
        v
        for vs in db.data.values()
        for v in vs
    )
    assert len(in_db) == n + 1


def test_detects_flakiness():
    failed_once = [False]
    count = [0]

    def tf(data):
        data.draw_bytes(1)
        count[0] += 1
        if not failed_once[0]:
            failed_once[0] = True
            data.mark_interesting()

    runner = ConjectureRunner(tf)
    runner.run()
    assert count == [2]


def test_variadic_draw():
    def draw_list(data):
        result = []
        while True:
            data.start_example()
            d = data.draw_bytes(1)[0] & 7
            if d:
                result.append(data.draw_bytes(d))
            data.stop_example()
            if not d:
                break
        return result

    @run_to_buffer
    def b(data):
        if any(all(d) for d in draw_list(data)):
            data.mark_interesting()
    ls = draw_list(ConjectureData.for_buffer(b))
    assert len(ls) == 1
    assert len(ls[0]) == 1


def test_draw_to_overrun():
    @run_to_buffer
    def x(data):
        d = (data.draw_bytes(1)[0] - 8) & 0xff
        data.draw_bytes(128 * d)
        if d >= 2:
            data.mark_interesting()
    assert x == hbytes([10]) + hbytes(128 * 2)


def test_can_navigate_to_a_valid_example():
    def f(data):
        i = int_from_bytes(data.draw_bytes(2))
        data.draw_bytes(i)
        data.mark_interesting()

    runner = ConjectureRunner(f, settings=settings(
        max_examples=5000, max_iterations=10000,
        buffer_size=2,
        database=None,
    ))
    runner.run()
    assert runner.last_data.status == Status.INTERESTING
    return hbytes(runner.last_data.buffer)


def test_stops_after_max_iterations_when_generating():
    key = b'key'
    value = b'rubber baby buggy bumpers'
    max_iterations = 100

    db = ExampleDatabase(':memory:')
    db.save(key, value)

    seen = []

    def f(data):
        seen.append(data.draw_bytes(len(value)))
        data.mark_invalid()

    runner = ConjectureRunner(f, settings=settings(
        max_examples=1, max_iterations=max_iterations,
        database=db, perform_health_check=False,
    ), database_key=key)
    runner.run()
    assert len(seen) == max_iterations
    assert value in seen


def test_stops_after_max_iterations_when_reading():
    key = b'key'
    max_iterations = 1

    db = ExampleDatabase(':memory:')
    for i in range(10):
        db.save(key, hbytes([i]))

    seen = []

    def f(data):
        seen.append(data.draw_bytes(1))
        data.mark_invalid()

    runner = ConjectureRunner(f, settings=settings(
        max_examples=1, max_iterations=max_iterations,
        database=db,
    ), database_key=key)
    runner.run()
    assert len(seen) == max_iterations


def test_stops_after_max_examples_when_reading():
    key = b'key'

    db = ExampleDatabase(':memory:')
    for i in range(10):
        db.save(key, hbytes([i]))

    seen = []

    def f(data):
        seen.append(data.draw_bytes(1))

    runner = ConjectureRunner(f, settings=settings(
        max_examples=1,
        database=db,
    ), database_key=key)
    runner.run()
    assert len(seen) == 1


def test_stops_after_max_examples_when_generating():
    seen = []

    def f(data):
        seen.append(data.draw_bytes(1))

    runner = ConjectureRunner(f, settings=settings(
        max_examples=1,
        database=None,
    ))
    runner.run()
    assert len(seen) == 1


def test_interleaving_engines():
    children = []

    @run_to_buffer
    def x(data):
        rnd = Random(data.draw_bytes(1))

        def g(d2):
            d2.draw_bytes(1)
            data.mark_interesting()

        runner = ConjectureRunner(g, random=rnd)
        children.append(runner)
        runner.run()
    assert x == b'\0'
    for c in children:
        assert not c.interesting_examples


@checks_deprecated_behaviour
def test_run_with_timeout_while_shrinking():
    def f(data):
        time.sleep(0.1)
        x = data.draw_bytes(32)
        if any(x):
            data.mark_interesting()

    runner = ConjectureRunner(
        f, settings=settings(database=None, timeout=0.2))
    start = time.time()
    runner.run()
    assert time.time() <= start + 1
    assert runner.last_data.status == Status.INTERESTING


@checks_deprecated_behaviour
def test_run_with_timeout_while_boring():
    def f(data):
        time.sleep(0.1)

    runner = ConjectureRunner(
        f, settings=settings(database=None, timeout=0.2))
    start = time.time()
    runner.run()
    assert time.time() <= start + 1
    assert runner.last_data.status == Status.VALID


def test_max_shrinks_can_disable_shrinking():
    seen = set()

    def f(data):
        seen.add(hbytes(data.draw_bytes(32)))
        data.mark_interesting()

    runner = ConjectureRunner(
        f, settings=settings(database=None, max_shrinks=0))
    runner.run()
    assert len(seen) == 1


def test_phases_can_disable_shrinking():
    seen = set()

    def f(data):
        seen.add(hbytes(data.draw_bytes(32)))
        data.mark_interesting()

    runner = ConjectureRunner(f, settings=settings(
        database=None, phases=(Phase.reuse, Phase.generate),
    ))
    runner.run()
    assert len(seen) == 1


def test_erratic_draws():
    n = [0]

    @run_to_buffer
    def x(data):
        data.draw_bytes(n[0])
        data.draw_bytes(255 - n[0])
        if n[0] == 255:
            data.mark_interesting()
        else:
            n[0] += 1


def test_no_read_no_shrink():
    count = [0]

    @run_to_buffer
    def x(data):
        count[0] += 1
        data.mark_interesting()
    assert x == b''
    assert count == [1]


def test_one_dead_branch():
    seed_random(0)
    seen = set()

    @run_to_buffer
    def x(data):
        i = data.draw_bytes(1)[0]
        if i > 0:
            data.mark_invalid()
        i = data.draw_bytes(1)[0]
        if len(seen) < 255:
            seen.add(i)
        elif i not in seen:
            data.mark_interesting()


def test_fully_exhaust_base(monkeypatch):
    """In this test we generate all possible values for the first byte but
    never get to the point where we exhaust the root of the tree."""
    seed_random(0)

    seen = set()

    def f(data):
        key = data.draw_bytes(2)
        assert key not in seen
        seen.add(key)

    runner = ConjectureRunner(f, settings=settings(
        max_examples=10000, max_iterations=10000, max_shrinks=0,
        buffer_size=1024, database=None,
    ))

    def call_with(buf):
        buf = hbytes(buf)

        def draw_bytes(data, n):
            return runner._ConjectureRunner__rewrite_for_novelty(
                data, buf[data.index:data.index + n])

        d = ConjectureData(
            draw_bytes=draw_bytes, max_length=2
        )
        runner.test_function(d)
        return d

    # First we ensure that all children of 0 are dead.
    for c in hrange(256):
        call_with([0, c])
    assert 1 in runner.dead

    # This must rewrite the first byte in order to get to a non-dead node.
    assert call_with([0, 0]).buffer == hbytes([1, 0])

    # This must rewrite the first byte in order to get to a non-dead node, but
    # the result of doing that is *still* dead, so it must rewrite the second
    # byte too.
    assert call_with([0, 0]).buffer == hbytes([1, 1])


def test_will_save_covering_examples():
    tags = {}

    def tagged(data):
        b = hbytes(data.draw_bytes(4))
        try:
            tag = tags[b]
        except KeyError:
            if len(tags) < 10:
                tag = len(tags)
                tags[b] = tag
            else:
                tag = None
        if tag is not None:
            data.add_tag(tag)

    db = InMemoryExampleDatabase()
    runner = ConjectureRunner(tagged, settings=settings(
        max_examples=100, max_iterations=10000, max_shrinks=0,
        buffer_size=1024, database=db,
    ), database_key=b'stuff')
    runner.run()
    assert len(all_values(db)) == len(tags)


def test_will_shrink_covering_examples():
    best = [None]
    replaced = []

    def tagged(data):
        b = hbytes(data.draw_bytes(4))
        if any(b):
            data.add_tag('nonzero')
            if best[0] is None:
                best[0] = b
            elif b < best[0]:
                replaced.append(best[0])
                best[0] = b

    db = InMemoryExampleDatabase()
    runner = ConjectureRunner(tagged, settings=settings(
        max_examples=100, max_iterations=10000, max_shrinks=0,
        buffer_size=1024, database=db,
    ), database_key=b'stuff')
    runner.run()
    saved = set(all_values(db))
    assert best[0] in saved
    for r in replaced:
        assert r not in saved


def test_can_cover_without_a_database_key():
    def tagged(data):
        data.add_tag(0)
runner = ConjectureRunner(tagged, settings=settings(), database_key=None) runner.run() assert len(runner.covering_examples) == 1 def test_saves_on_interrupt(): def interrupts(data): raise KeyboardInterrupt() db = InMemoryExampleDatabase() runner = ConjectureRunner( interrupts, settings=settings(database=db), database_key=b'key') with pytest.raises(KeyboardInterrupt): runner.run() assert db.data def test_returns_written(): value = hbytes(b'\0\1\2\3') @run_to_buffer def written(data): data.write(value) data.mark_interesting() assert value == written def fails_health_check(label): def accept(f): runner = ConjectureRunner(f, settings=settings( max_examples=100, max_iterations=100, max_shrinks=0, buffer_size=1024, database=None, perform_health_check=True, )) with pytest.raises(FailedHealthCheck) as e: runner.run() assert e.value.health_check == label assert not runner.interesting_examples return accept def test_fails_health_check_for_all_invalid(): @fails_health_check(HealthCheck.filter_too_much) def _(data): data.draw_bytes(2) data.mark_invalid() def test_fails_health_check_for_large_base(): @fails_health_check(HealthCheck.large_base_example) def _(data): data.draw_bytes(10 ** 6) def test_fails_health_check_for_large_non_base(): @fails_health_check(HealthCheck.data_too_large) def _(data): if data.draw_bits(8): data.draw_bytes(10 ** 6) def test_fails_health_check_for_slow_draws(): @fails_health_check(HealthCheck.too_slow) def _(data): data.draw(SLOW) def test_fails_healthcheck_for_hung_test(): @fails_health_check(HealthCheck.hung_test) def _(data): data.draw_bytes(1) time.sleep(3600) @pytest.mark.parametrize('n_large', [1, 5, 8, 15]) def test_can_shrink_variable_draws(n_large): target = 128 * n_large @run_to_buffer def x(data): n = data.draw_bits(4) b = [data.draw_bits(8) for _ in hrange(n)] if sum(b) >= target: data.mark_interesting() assert x.count(0) == 0 assert sum(x[1:]) == target def test_run_nothing(): def f(data): assert False runner = ConjectureRunner(f, 
settings=settings(phases=())) runner.run() assert runner.call_count == 0 class Foo(object): def __repr__(self): return 'stuff' @pytest.mark.parametrize('event', ['hi', Foo()]) def test_note_events(event): def f(data): data.note_event(event) data.draw_bytes(1) runner = ConjectureRunner(f) runner.run() assert runner.event_call_counts[str(event)] == runner.call_count > 0 def test_zeroes_bytes_above_bound(): def f(data): if data.draw_bits(1): x = data.draw_bytes(9) assert not any(x[4:8]) ConjectureRunner(f, settings=settings(buffer_size=10)).run() def test_can_write_bytes_towards_the_end(): buf = b'\1\2\3' def f(data): if data.draw_bits(1): data.draw_bytes(5) data.write(hbytes(buf)) assert hbytes(data.buffer[-len(buf):]) == buf ConjectureRunner(f, settings=settings(buffer_size=10)).run() def test_can_increase_number_of_bytes_drawn_in_tail(): # This is designed to trigger a case where the zero bound queue will end up # increasing the size of data drawn because moving zeroes into the initial # prefix will increase the amount drawn. def f(data): x = data.draw_bytes(5) n = x.count(0) b = data.draw_bytes(n + 1) assert not any(b[:-1]) runner = ConjectureRunner( f, settings=settings(buffer_size=11, perform_health_check=False)) runner.run() @pytest.mark.xfail( strict=True, reason="""This is currently demonstrating that __rewrite_for_novelty is broken. 
It should start passing once we have a more sensible deduplication mechanism.""") def test_uniqueness_is_preserved_when_writing_at_beginning(): seen = set() def f(data): data.write(hbytes(1)) n = data.draw_bits(3) assert n not in seen seen.add(n) runner = ConjectureRunner( f, settings=settings(max_examples=50)) runner.run() assert runner.valid_examples == len(seen) @pytest.mark.parametrize('skip_target', [False, True]) @pytest.mark.parametrize('initial_attempt', [127, 128]) def test_clears_out_its_database_on_shrinking( initial_attempt, skip_target, monkeypatch ): def generate_new_examples(self): self.test_function( ConjectureData.for_buffer(hbytes([initial_attempt]))) monkeypatch.setattr( ConjectureRunner, 'generate_new_examples', generate_new_examples) key = b'key' db = InMemoryExampleDatabase() def f(data): if data.draw_bits(8) >= 127: data.mark_interesting() runner = ConjectureRunner( f, settings=settings(database=db, max_examples=256), database_key=key, random=Random(0), ) for n in hrange(256): if n != 127 or not skip_target: db.save(runner.secondary_key, hbytes([n])) runner.run() assert len(runner.interesting_examples) == 1 for b in db.fetch(runner.secondary_key): assert b[0] >= 127 assert len(list(db.fetch(runner.database_key))) == 1 def test_saves_negated_examples_in_covering(): def f(data): if data.draw_bits(8) & 1: data.add_tag('hi') runner = ConjectureRunner(f) runner.run() assert len(runner.target_selector.examples_by_tags) == 3 def test_can_delete_intervals(monkeypatch): def generate_new_examples(self): self.test_function( ConjectureData.for_buffer(hbytes([255] * 10 + [0]))) monkeypatch.setattr( ConjectureRunner, 'generate_new_examples', generate_new_examples) monkeypatch.setattr( ConjectureRunner, 'shrink', ConjectureRunner.greedy_interval_deletion ) def f(data): if data.draw_bits(1): while data.draw_bits(8): pass data.mark_interesting() runner = ConjectureRunner(f, settings=settings(database=None)) runner.run() x, = 
runner.interesting_examples.values() assert x.buffer == hbytes([1, 0]) def test_shrinks_both_interesting_examples(monkeypatch): def generate_new_examples(self): self.test_function(ConjectureData.for_buffer(hbytes([1]))) monkeypatch.setattr( ConjectureRunner, 'generate_new_examples', generate_new_examples) def f(data): n = data.draw_bits(8) data.mark_interesting(n & 1) runner = ConjectureRunner(f, database_key=b'key') runner.run() assert runner.interesting_examples[0].buffer == hbytes([0]) assert runner.interesting_examples[1].buffer == hbytes([1]) def test_reorder_blocks(monkeypatch): target = hbytes([1, 2, 3]) def generate_new_examples(self): self.test_function(ConjectureData.for_buffer(hbytes(reversed(target)))) monkeypatch.setattr( ConjectureRunner, 'generate_new_examples', generate_new_examples) monkeypatch.setattr( ConjectureRunner, 'shrink', ConjectureRunner.reorder_blocks) @run_to_buffer def x(data): if sorted( data.draw_bits(8) for _ in hrange(len(target)) ) == sorted(target): data.mark_interesting() assert x == target hypothesis-python-3.44.1/tests/cover/test_conjecture_float_encoding.py000066400000000000000000000137061321557765100264310ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
from random import Random

import pytest

import hypothesis.internal.conjecture.floats as flt
from hypothesis import strategies as st
from hypothesis import given, assume, example
from hypothesis.internal.compat import ceil, floor, hbytes, int_to_bytes, \
    int_from_bytes
from hypothesis.internal.floats import float_to_int
from hypothesis.internal.conjecture.data import ConjectureData
from hypothesis.internal.conjecture.minimizer import minimize

EXPONENTS = list(range(0, flt.MAX_EXPONENT + 1))
assert len(EXPONENTS) == 2 ** 11


def assert_reordered_exponents(res):
    res = list(res)
    assert len(res) == len(EXPONENTS)
    for x in res:
        assert res.count(x) == 1
        assert 0 <= x <= flt.MAX_EXPONENT


def test_encode_permutes_elements():
    assert_reordered_exponents(map(flt.encode_exponent, EXPONENTS))


def test_decode_permutes_elements():
    assert_reordered_exponents(map(flt.decode_exponent, EXPONENTS))


def test_decode_encode():
    for e in EXPONENTS:
        assert flt.decode_exponent(flt.encode_exponent(e)) == e


def test_encode_decode():
    # Note: this checks the opposite composition to test_decode_encode above;
    # the flattened source duplicated that test's body here.
    for e in EXPONENTS:
        assert flt.encode_exponent(flt.decode_exponent(e)) == e


@given(st.data())
def test_double_reverse_bounded(data):
    n = data.draw(st.integers(1, 64))
    i = data.draw(st.integers(0, 2 ** n - 1))
    j = flt.reverse_bits(i, n)
    assert flt.reverse_bits(j, n) == i


@given(st.integers(0, 2 ** 64 - 1))
def test_double_reverse(i):
    j = flt.reverse64(i)
    assert flt.reverse64(j) == i


@example(1.25)
@example(1.0)
@given(st.floats())
def test_draw_write_round_trip(f):
    d = ConjectureData.for_buffer(hbytes(10))
    flt.write_float(d, f)
    d2 = ConjectureData.for_buffer(d.buffer)
    g = flt.draw_float(d2)

    if f == f:
        assert f == g

    assert float_to_int(f) == float_to_int(g)

    d3 = ConjectureData.for_buffer(d2.buffer)
    flt.draw_float(d3)
    assert d3.buffer == d2.buffer


@example(0.0)
@example(2.5)
@example(8.000000000000007)
@example(3.0)
@example(2.0)
@example(1.9999999999999998)
@example(1.0)
@given(st.floats(min_value=0.0))
def test_floats_round_trip(f):
    i = flt.float_to_lex(f)
    g = flt.lex_to_float(i)

    assert float_to_int(f) == float_to_int(g)


@example(1, 0.5)
@given(
    st.integers(1, 2 ** 53), st.floats(0, 1).filter(lambda x: x not in (0, 1))
)
def test_floats_order_worse_than_their_integral_part(n, g):
    f = n + g
    assume(int(f) != f)
    assume(int(f) != 0)
    i = flt.float_to_lex(f)
    if f < 0:
        g = ceil(f)
    else:
        g = floor(f)

    assert flt.float_to_lex(float(g)) < i


integral_floats = st.floats(
    allow_infinity=False, allow_nan=False, min_value=0.0
).map(lambda x: float(int(x)))


@given(integral_floats, integral_floats)
def test_integral_floats_order_as_integers(x, y):
    assume(x != y)
    x, y = sorted((x, y))
    assume(y < 0 or x > 0)
    if y < 0:
        assert flt.float_to_lex(y) < flt.float_to_lex(x)
    else:
        assert flt.float_to_lex(x) < flt.float_to_lex(y)


@given(st.floats(0, 1))
def test_fractional_floats_are_worse_than_one(f):
    assume(0 < f < 1)
    assert flt.float_to_lex(f) > flt.float_to_lex(1)


def test_reverse_bits_table_reverses_bits():
    def bits(x):
        result = []
        for _ in range(8):
            result.append(x & 1)
            x >>= 1
        result.reverse()
        return result

    for i, b in enumerate(flt.REVERSE_BITS_TABLE):
        assert bits(i) == list(reversed(bits(b)))


def test_reverse_bits_table_has_right_elements():
    assert sorted(flt.REVERSE_BITS_TABLE) == list(range(256))


def minimal_from(start, condition):
    buf = int_to_bytes(flt.float_to_lex(start), 8)

    def parse_buf(b):
        return flt.lex_to_float(int_from_bytes(b))

    shrunk = minimize(
        buf, lambda b: condition(parse_buf(b)),
        full=True, random=Random(0)
    )
    return parse_buf(shrunk)


INTERESTING_FLOATS = [
    0.0, 1.0, 2.0, sys.float_info.max, float('inf'), float('nan')
]


@pytest.mark.parametrize(('start', 'end'), [
    (a, b)
    for a in INTERESTING_FLOATS
    for b in INTERESTING_FLOATS
    if flt.float_to_lex(a) > flt.float_to_lex(b)
])
def test_can_shrink_downwards(start, end):
    assert minimal_from(start, lambda x: not (x < end)) == end


@pytest.mark.parametrize(
    'f', [1, 2, 4, 8, 10, 16, 32, 64, 100, 128, 256, 500, 512, 1000, 1024]
)
@pytest.mark.parametrize(
    'mul', [1.1, 1.5, 9.99, 10]
)
def test_shrinks_downwards_to_integers(f, mul):
    g = minimal_from(f * mul, lambda x: x >= f)
    assert g == f


def test_shrink_to_integer_upper_bound():
    assert minimal_from(1.1, lambda x: 1 < x <= 2) == 2


def test_shrink_up_to_one():
    assert minimal_from(0.5, lambda x: 0.5 <= x <= 1.5) == 1


def test_shrink_down_to_half():
    assert minimal_from(0.75, lambda x: 0 < x < 1) == 0.5


def test_does_not_shrink_across_one():
    # This is something of an odd special case. Because of our encoding we
    # prefer all numbers >= 1 to all numbers in 0 < x < 1. For the most part
    # this is the correct thing to do, but there are some low negative exponent
    # cases where we get odd behaviour like this.
    # This test primarily exists to validate that we don't try to subtract one
    # from the starting point and trigger an internal exception.
    assert minimal_from(1.1, lambda x: x == 1.1 or 0 < x < 1) == 1.1
hypothesis-python-3.44.1/tests/cover/test_conjecture_minimizer.py000066400000000000000000000023671321557765100254600ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from random import Random

from hypothesis.internal.compat import hbytes
from hypothesis.internal.conjecture.minimizer import minimize


def test_shrink_to_zero():
    assert minimize(
        hbytes([255] * 8), lambda x: True, random=Random(0)) == hbytes(8)


def test_shrink_to_smallest():
    assert minimize(
        hbytes([255] * 8), lambda x: sum(x) > 10, random=Random(0),
    ) == hbytes([0] * 7 + [11])


def test_float_hack_fails():
    assert minimize(
        hbytes([255] * 8), lambda x: x[0] >> 7, random=Random(0),
    ) == hbytes([128] + [0] * 7)
hypothesis-python-3.44.1/tests/cover/test_conjecture_test_data.py000066400000000000000000000060201321557765100254170ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import strategies as st
from hypothesis import given
from hypothesis.errors import Frozen
from hypothesis.internal.conjecture.data import Status, StopTest, \
    ConjectureData
from hypothesis.searchstrategy.strategies import SearchStrategy


@given(st.binary())
def test_buffer_draws_as_self(buf):
    x = ConjectureData.for_buffer(buf)
    assert x.draw_bytes(len(buf)) == buf


def test_cannot_draw_after_freeze():
    x = ConjectureData.for_buffer(b'hi')
    x.draw_bytes(1)
    x.freeze()
    with pytest.raises(Frozen):
        x.draw_bytes(1)


def test_can_double_freeze():
    x = ConjectureData.for_buffer(b'hi')
    x.freeze()
    assert x.frozen
    x.freeze()
    assert x.frozen


def test_can_draw_zero_bytes():
    x = ConjectureData.for_buffer(b'')
    for _ in range(10):
        assert x.draw_bytes(0) == b''


def test_draw_past_end_sets_overflow():
    x = ConjectureData.for_buffer(bytes(5))
    with pytest.raises(StopTest) as e:
        x.draw_bytes(6)
    assert e.value.testcounter == x.testcounter
    assert x.frozen
    assert x.status == Status.OVERRUN


def test_notes_repr():
    x = ConjectureData.for_buffer(b'')
    x.note(b'hi')
    assert repr(b'hi') in x.output


def test_can_mark_interesting():
    x = ConjectureData.for_buffer(bytes())
    with pytest.raises(StopTest):
        x.mark_interesting()
    assert x.frozen
    assert x.status == Status.INTERESTING


def test_drawing_zero_bits_is_free():
    x = ConjectureData.for_buffer(bytes())
    assert x.draw_bits(0) == 0


def test_can_mark_invalid():
    x = ConjectureData.for_buffer(bytes())
    with pytest.raises(StopTest):
        x.mark_invalid()
    assert x.frozen
    assert x.status == Status.INVALID


class BoomStrategy(SearchStrategy):
    def do_draw(self, data):
        data.draw_bytes(1)
        raise ValueError()


def test_closes_interval_on_error_in_strategy():
    x = ConjectureData.for_buffer(b'hi')
    with pytest.raises(ValueError):
        x.draw(BoomStrategy())
    x.freeze()
    assert len(x.intervals) == 1


class BigStrategy(SearchStrategy):
    def do_draw(self, data):
        data.draw_bytes(10 ** 6)
def test_does_not_double_freeze_in_interval_close():
    x = ConjectureData.for_buffer(b'hi')
    with pytest.raises(StopTest):
        x.draw(BigStrategy())
    assert x.frozen
    assert len(x.intervals) == 0
hypothesis-python-3.44.1/tests/cover/test_conjecture_utils.py000066400000000000000000000063221321557765100246120ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from collections import Counter

import hypothesis.internal.conjecture.utils as cu
from hypothesis.internal.compat import hbytes
from hypothesis.internal.conjecture.data import ConjectureData


def test_does_not_draw_data_for_empty_range():
    assert cu.integer_range(ConjectureData.for_buffer(b''), 1, 1) == 1


def test_uniform_float_shrinks_to_zero():
    d = ConjectureData.for_buffer(hbytes([0] * 7))
    assert cu.fractional_float(d) == 0.0
    assert len(d.buffer) == 7


def test_uniform_float_can_draw_1():
    d = ConjectureData.for_buffer(hbytes([255] * 7))
    assert cu.fractional_float(d) == 1.0
    assert len(d.buffer) == 7


def test_geometric_can_handle_bad_first_draw():
    assert cu.geometric(ConjectureData.for_buffer(hbytes(
        [255] * 7 + [0] * 7)), 0.5) == 0


def test_coin_biased_towards_truth():
    p = 1 - 1.0 / 500

    for i in range(255):
        assert cu.biased_coin(
            ConjectureData.for_buffer([i]), p
        )

    second_order = [
        cu.biased_coin(ConjectureData.for_buffer([255, i]), p)
        for i in range(255)
    ]

    assert False in second_order
    assert True in second_order


def test_coin_biased_towards_falsehood():
    p = 1.0 / 500

    for i in range(255):
        assert not cu.biased_coin(
            ConjectureData.for_buffer([i]), p
        )
    second_order = [
        cu.biased_coin(ConjectureData.for_buffer([255, i]), p)
        for i in range(255)
    ]
    assert False in second_order
    assert True in second_order


def test_unbiased_coin_has_no_second_order():
    counts = Counter()
    for i in range(256):
        buf = hbytes([i])
        data = ConjectureData.for_buffer(buf)
        result = cu.biased_coin(data, 0.5)
        if data.buffer == buf:
            counts[result] += 1
    assert counts[False] == counts[True] > 0


def test_can_get_odd_number_of_bits():
    counts = Counter()
    for i in range(256):
        x = cu.getrandbits(ConjectureData.for_buffer([i]), 3)
        assert 0 <= x <= 7
        counts[x] += 1
    assert len(set(counts.values())) == 1


def test_8_bits_just_reads_stream():
    for i in range(256):
        assert cu.getrandbits(ConjectureData.for_buffer([i]), 8) == i


def test_drawing_certain_coin_still_writes():
    data = ConjectureData.for_buffer([0, 1])
    assert not data.buffer
    assert cu.biased_coin(data, 1)
    assert data.buffer


def test_drawing_impossible_coin_still_writes():
    data = ConjectureData.for_buffer([1, 0])
    assert not data.buffer
    assert not cu.biased_coin(data, 0)
    assert data.buffer
hypothesis-python-3.44.1/tests/cover/test_control.py000066400000000000000000000067521321557765100227160ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis.errors import CleanupFailed, InvalidArgument
from hypothesis.control import BuildContext, note, event, cleanup, \
    current_build_context, _current_build_context
from tests.common.utils import capture_out
from hypothesis.internal.conjecture.data import ConjectureData as TD


def bc():
    return BuildContext(TD.for_buffer(b''))


def test_cannot_cleanup_with_no_context():
    with pytest.raises(InvalidArgument):
        cleanup(lambda: None)
    assert _current_build_context.value is None


def test_cannot_event_with_no_context():
    with pytest.raises(InvalidArgument):
        event('hi')
    assert _current_build_context.value is None


def test_cleanup_executes_on_leaving_build_context():
    data = []
    with bc():
        cleanup(lambda: data.append(1))
        assert not data
    assert data == [1]
    assert _current_build_context.value is None


def test_can_nest_build_context():
    data = []
    with bc():
        cleanup(lambda: data.append(1))
        with bc():
            cleanup(lambda: data.append(2))
            assert not data
        assert data == [2]
    assert data == [2, 1]
    assert _current_build_context.value is None


def test_does_not_suppress_exceptions():
    with pytest.raises(AssertionError):
        with bc():
            assert False
    assert _current_build_context.value is None


def test_suppresses_exceptions_in_teardown():
    with capture_out() as o:
        with pytest.raises(AssertionError):
            with bc():
                def foo():
                    raise ValueError()
                cleanup(foo)
                assert False

    assert u'ValueError' in o.getvalue()
    assert _current_build_context.value is None


def test_runs_multiple_cleanup_with_teardown():
    with capture_out() as o:
        with pytest.raises(AssertionError):
            with bc():
                def foo():
                    raise ValueError()
                cleanup(foo)

                def bar():
                    raise TypeError()
                cleanup(foo)
                cleanup(bar)
                assert False

    assert u'ValueError' in o.getvalue()
    assert u'TypeError' in o.getvalue()
    assert _current_build_context.value is None


def test_raises_error_if_cleanup_fails_but_block_does_not():
    with pytest.raises(CleanupFailed):
        with bc():
            def foo():
                raise ValueError()
            cleanup(foo)
    assert _current_build_context.value is None


def test_raises_if_note_out_of_context():
    with pytest.raises(InvalidArgument):
        note('Hi')


def test_raises_if_current_build_context_out_of_context():
    with pytest.raises(InvalidArgument):
        current_build_context()


def test_current_build_context_is_current():
    with bc() as a:
        assert current_build_context() is a
hypothesis-python-3.44.1/tests/cover/test_conventions.py000066400000000000000000000015461321557765100236010ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis.utils.conventions import UniqueIdentifier


def test_unique_identifier_repr():
    assert repr(UniqueIdentifier(u'hello_world')) == u'hello_world'
hypothesis-python-3.44.1/tests/cover/test_core.py000066400000000000000000000050111321557765100221570ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time

import pytest
from flaky import flaky

import hypothesis.strategies as s
from hypothesis import find, given, reject, settings
from hypothesis.core import arc
from hypothesis.errors import NoSuchExample, Unsatisfiable
from tests.common.utils import checks_deprecated_behaviour


def test_stops_after_max_examples_if_satisfying():
    tracker = []

    def track(x):
        tracker.append(x)
        return False

    max_examples = 100

    with pytest.raises(NoSuchExample):
        find(
            s.integers(0, 10000), track,
            settings=settings(max_examples=max_examples))

    assert len(tracker) == max_examples


def test_stops_after_max_iterations_if_not_satisfying():
    tracker = set()

    def track(x):
        tracker.add(x)
        reject()

    max_examples = 100
    max_iterations = 200

    with pytest.raises(Unsatisfiable):
        find(
            s.integers(0, 10000), track, settings=settings(
                max_examples=max_examples, max_iterations=max_iterations))

    # May be less because of duplication
    assert len(tracker) <= max_iterations


@checks_deprecated_behaviour
@flaky(min_passes=1, max_runs=2)
def test_can_time_out_in_simplify():
    def slow_always_true(x):
        time.sleep(0.1)
        return True

    start = time.time()
    find(
        s.lists(s.booleans()), slow_always_true,
        settings=settings(timeout=0.1, database=None)
    )
    finish = time.time()
    run_time = finish - start
    assert run_time <= 0.3


some_normal_settings = settings()


def test_is_not_normally_default():
    assert settings.default is not some_normal_settings


@given(s.booleans())
@some_normal_settings
def test_settings_are_default_in_given(x):
    assert settings.default is some_normal_settings


def test_arc_is_memoized():
    assert arc('foo', 1, 2) is arc('foo', 1, 2)
hypothesis-python-3.44.1/tests/cover/test_custom_reprs.py000066400000000000000000000036601321557765100237600ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import given


def test_includes_non_default_args_in_repr():
    assert repr(st.integers()) == 'integers()'
    assert repr(st.integers(min_value=1)) == 'integers(min_value=1)'


def hi(there, stuff):
    return there


def test_supports_positional_and_keyword_args_in_builds():
    assert repr(st.builds(hi, st.integers(), there=st.booleans())) == \
        'builds(hi, integers(), there=booleans())'


def test_preserves_sequence_type_of_argument():
    assert repr(st.sampled_from([0])) == 'sampled_from([0])'


class IHaveABadRepr(object):
    def __repr__(self):
        raise ValueError('Oh no!')


def test_errors_are_deferred_until_repr_is_calculated():
    s = st.builds(
        lambda x, y: 1,
        st.just(IHaveABadRepr()),
        y=st.one_of(
            st.sampled_from((IHaveABadRepr(),)),
            st.just(IHaveABadRepr()))
    ).map(lambda t: t).filter(lambda t: True).flatmap(
        lambda t: st.just(IHaveABadRepr()))

    with pytest.raises(ValueError):
        repr(s)


@given(st.iterables(st.integers()))
def test_iterables_repr_is_useful(it):
    # fairly hard-coded but useful; also ensures _values are inexhaustible
    assert repr(it) == 'iter({!r})'.format(it._values)
hypothesis-python-3.44.1/tests/cover/test_database_backend.py000066400000000000000000000145161321557765100244500ustar00rootroot00000000000000# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import base64

import pytest

from hypothesis import given, settings
from tests.common.utils import validate_deprecation, \
    checks_deprecated_behaviour
from hypothesis.database import ExampleDatabase, SQLiteExampleDatabase, \
    InMemoryExampleDatabase, DirectoryBasedExampleDatabase
from hypothesis.strategies import lists, binary, tuples

small_settings = settings(max_examples=50)


@given(lists(tuples(binary(), binary())))
@small_settings
def test_backend_returns_what_you_put_in(xs):
    backend = InMemoryExampleDatabase()
    mapping = {}
    for key, value in xs:
        mapping.setdefault(key, set()).add(value)
        backend.save(key, value)
    for key, values in mapping.items():
        backend_contents = list(backend.fetch(key))
        distinct_backend_contents = set(backend_contents)
        assert len(backend_contents) == len(distinct_backend_contents)
        assert distinct_backend_contents == set(values)


@checks_deprecated_behaviour
def test_does_not_commit_in_error_state():
    backend = SQLiteExampleDatabase(':memory:')
    backend.create_db_if_needed()
    try:
        with backend.cursor() as cursor:
            cursor.execute("""
                insert into hypothesis_data_mapping(key, value)
                values("a", "b")
            """)
            raise ValueError()
    except ValueError:
        pass

    assert list(backend.fetch(b'a')) == []


@checks_deprecated_behaviour
def test_can_double_close():
    backend = SQLiteExampleDatabase(':memory:')
    backend.create_db_if_needed()
    backend.close()
    backend.close()


def test_can_delete_keys():
    backend = InMemoryExampleDatabase()
    backend.save(b'foo', b'bar')
    backend.save(b'foo', b'baz')
    backend.delete(b'foo', b'bar')
    assert list(backend.fetch(b'foo')) == [b'baz']


@checks_deprecated_behaviour
def test_ignores_badly_stored_values():
    backend = SQLiteExampleDatabase(':memory:')
    backend.create_db_if_needed()
    with backend.cursor() as cursor:
        cursor.execute("""
            insert into hypothesis_data_mapping(key, value)
            values(?, ?)
        """, (base64.b64encode(b'foo'), u'kittens'))
    assert list(backend.fetch(b'foo')) == []


def test_default_database_is_in_memory():
    assert isinstance(ExampleDatabase(), InMemoryExampleDatabase)


def test_default_on_disk_database_is_dir(tmpdir):
    assert isinstance(
        ExampleDatabase(tmpdir.join('foo')), DirectoryBasedExampleDatabase)


@checks_deprecated_behaviour
def test_selects_sqlite_database_if_name_matches(tmpdir):
    assert isinstance(
        ExampleDatabase(tmpdir.join('foo.db')), SQLiteExampleDatabase)
    assert isinstance(
        ExampleDatabase(tmpdir.join('foo.sqlite')), SQLiteExampleDatabase)
    assert isinstance(
        ExampleDatabase(tmpdir.join('foo.sqlite3')), SQLiteExampleDatabase)


def test_selects_directory_based_if_already_directory(tmpdir):
    path = str(tmpdir.join('hi.sqlite3'))
    DirectoryBasedExampleDatabase(path).save(b'foo', b'bar')
    assert isinstance(ExampleDatabase(path), DirectoryBasedExampleDatabase)


@checks_deprecated_behaviour
def test_selects_sqlite_if_already_sqlite(tmpdir):
    path = str(tmpdir.join('hi'))
    SQLiteExampleDatabase(path).save(b'foo', b'bar')
    assert isinstance(ExampleDatabase(path), SQLiteExampleDatabase)


def test_does_not_error_when_fetching_when_not_exist(tmpdir):
    db = DirectoryBasedExampleDatabase(tmpdir.join('examples'))
    db.fetch(b'foo')


@pytest.fixture(scope='function', params=['memory', 'sql', 'directory'])
def exampledatabase(request, tmpdir):
    if request.param == 'memory':
        return ExampleDatabase()
    if request.param == 'sql':
        with validate_deprecation():
            return SQLiteExampleDatabase(str(tmpdir.join('example.db')))
    if request.param == 'directory':
        return DirectoryBasedExampleDatabase(str(tmpdir.join('examples')))
    assert False


def test_can_delete_a_key_that_is_not_present(exampledatabase):
    exampledatabase.delete(b'foo', b'bar')


def test_can_fetch_a_key_that_is_not_present(exampledatabase):
    assert list(exampledatabase.fetch(b'foo')) == []


def test_saving_a_key_twice_fetches_it_once(exampledatabase):
    exampledatabase.save(b'foo', b'bar')
    exampledatabase.save(b'foo', b'bar')
    assert list(exampledatabase.fetch(b'foo')) == [b'bar']


def test_can_close_a_database_without_touching_it(exampledatabase):
    exampledatabase.close()


def test_can_close_a_database_after_saving(exampledatabase):
    exampledatabase.save(b'foo', b'bar')


def test_class_name_is_in_repr(exampledatabase):
    assert type(exampledatabase).__name__ in repr(exampledatabase)
    exampledatabase.close()


def test_an_absent_value_is_present_after_it_moves(exampledatabase):
    exampledatabase.move(b'a', b'b', b'c')
    assert next(exampledatabase.fetch(b'b')) == b'c'


def test_an_absent_value_is_present_after_it_moves_to_self(exampledatabase):
    exampledatabase.move(b'a', b'a', b'b')
    assert next(exampledatabase.fetch(b'a')) == b'b'


def test_two_directory_databases_can_interact(tmpdir):
    path = str(tmpdir)
    db1 = DirectoryBasedExampleDatabase(path)
    db2 = DirectoryBasedExampleDatabase(path)
    db1.save(b'foo', b'bar')
    assert list(db2.fetch(b'foo')) == [b'bar']
    db2.save(b'foo', b'bar')
    db2.save(b'foo', b'baz')
    assert sorted(db1.fetch(b'foo')) == [b'bar', b'baz']


def test_can_handle_disappearing_files(tmpdir, monkeypatch):
    path = str(tmpdir)
    db = DirectoryBasedExampleDatabase(path)
    db.save(b'foo', b'bar')
base_listdir = os.listdir monkeypatch.setattr(os, 'listdir', lambda d: base_listdir(d) + ['this-does-not-exist']) assert list(db.fetch(b'foo')) == [b'bar'] hypothesis-python-3.44.1/tests/cover/test_database_seed_deprecation.py000066400000000000000000000024321321557765100263500ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import strategies as st from hypothesis import seed, given, settings from tests.common.utils import validate_deprecation from hypothesis.database import InMemoryExampleDatabase @pytest.mark.parametrize('dec', [ settings(database=InMemoryExampleDatabase(), derandomize=True), seed(1) ]) def test_deprecated_determinism_with_database(dec): @dec @given(st.booleans()) def test(i): raise ValueError() with pytest.raises(ValueError): test() with validate_deprecation(): with pytest.raises(ValueError): test() hypothesis-python-3.44.1/tests/cover/test_database_usage.py000066400000000000000000000117731321557765100241670ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.strategies as st from hypothesis import Verbosity, core, find, given, assume, settings, \ unlimited from hypothesis.errors import NoSuchExample, Unsatisfiable from tests.common.utils import all_values, non_covering_examples from hypothesis.database import InMemoryExampleDatabase from hypothesis.internal.compat import hbytes def has_a_non_zero_byte(x): return any(hbytes(x)) def test_saves_incremental_steps_in_database(): key = b'a database key' database = InMemoryExampleDatabase() find( st.binary(min_size=10), lambda x: has_a_non_zero_byte(x), settings=settings(database=database), database_key=key ) assert len(all_values(database)) > 1 def test_clears_out_database_as_things_get_boring(): key = b'a database key' database = InMemoryExampleDatabase() do_we_care = True def stuff(): try: find( st.binary(min_size=50), lambda x: do_we_care and has_a_non_zero_byte(x), settings=settings(database=database, max_examples=10), database_key=key ) except NoSuchExample: pass stuff() assert len(all_values(database)) > 1 do_we_care = False stuff() initial = len(all_values(database)) assert initial > 0 for _ in range(initial): stuff() keys = len(all_values(database)) if not keys: break else: assert False def test_trashes_invalid_examples(): key = b'a database key' database = InMemoryExampleDatabase() finicky = False def stuff(): try: find( st.binary(min_size=100), lambda x: assume(not finicky) and has_a_non_zero_byte(x), settings=settings(database=database, max_shrinks=10), database_key=key ) except Unsatisfiable: 
pass stuff() original = len(all_values(database)) assert original > 1 finicky = True stuff() assert len(all_values(database)) < original def test_respects_max_examples_in_database_usage(): key = b'a database key' database = InMemoryExampleDatabase() do_we_care = True counter = [0] def check(x): counter[0] += 1 return do_we_care and has_a_non_zero_byte(x) def stuff(): try: find( st.binary(min_size=100), check, settings=settings(database=database, max_examples=10), database_key=key ) except NoSuchExample: pass stuff() assert len(all_values(database)) > 10 do_we_care = False counter[0] = 0 stuff() assert counter == [10] def test_clears_out_everything_smaller_than_the_interesting_example(): target = None # We retry the test run a few times to get a large enough initial # set of examples that we're not going to explore them all in the # initial run. last_sum = [None] database = InMemoryExampleDatabase() seen = set() @settings( database=database, verbosity=Verbosity.quiet, max_examples=100, timeout=unlimited, max_shrinks=100 ) @given(st.binary(min_size=10, max_size=10)) def test(b): if target is not None: if len(seen) < 30: seen.add(b) if b in seen: return if b >= target: raise ValueError() return b = hbytes(b) s = sum(b) if ( (last_sum[0] is None and s > 1000) or (last_sum[0] is not None and s >= last_sum[0] - 1) ): last_sum[0] = s raise ValueError() with pytest.raises(ValueError): test() saved = non_covering_examples(database) assert len(saved) > 30 target = sorted(saved)[len(saved) // 2] with pytest.raises(ValueError): test() saved = non_covering_examples(database) assert target in saved or target in seen for s in saved: assert s >= target def test_does_not_use_database_when_seed_is_forced(monkeypatch): monkeypatch.setattr(core, 'global_force_seed', 42) database = InMemoryExampleDatabase() database.fetch = None @settings(database=database) @given(st.integers()) def test(i): pass test() 
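The database tests above all exercise the same small contract: values saved under a byte key come back from `fetch`, saving twice stores once, deleting an absent value is a no-op, and `move` makes the value appear under the destination key. As a minimal, self-contained sketch of that contract (a hypothetical stand-in, not Hypothesis's `ExampleDatabase` implementation), the behaviour the assertions rely on looks like this:

```python
class ToyExampleDatabase(object):
    """Hypothetical in-memory sketch of the save/fetch/delete/move contract."""

    def __init__(self):
        self.data = {}

    def save(self, key, value):
        # Saving the same value twice stores it once (set semantics).
        self.data.setdefault(key, set()).add(value)

    def fetch(self, key):
        # Fetching an absent key yields nothing rather than raising.
        return iter(sorted(self.data.get(key, ())))

    def delete(self, key, value):
        # Deleting a value that is not present is a silent no-op.
        self.data.get(key, set()).discard(value)

    def move(self, src, dest, value):
        # After a move the value is present under dest, even if it was
        # never saved under src (see the "absent value" tests above).
        self.delete(src, value)
        self.save(dest, value)


db = ToyExampleDatabase()
db.save(b'foo', b'bar')
db.save(b'foo', b'bar')
assert list(db.fetch(b'foo')) == [b'bar']   # saved twice, fetched once
db.delete(b'missing', b'x')                 # no-op, does not raise
db.move(b'a', b'b', b'c')
assert next(db.fetch(b'b')) == b'c'
```

The real backends (in-memory, directory-based, and the deprecated SQLite one) differ only in where the key/value pairs live; the observable semantics checked by the shared `exampledatabase` fixture are the same.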
hypothesis-python-3.44.1/tests/cover/test_datetimes.py000066400000000000000000000113261321557765100232100ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import datetime as dt import pytest from flaky import flaky from hypothesis import find, given, settings, unlimited from tests.common.debug import minimal, find_any from tests.common.utils import checks_deprecated_behaviour from hypothesis.strategies import none, dates, times, binary, datetimes, \ timedeltas from hypothesis.internal.compat import hrange from hypothesis.searchstrategy.datetime import DatetimeStrategy from hypothesis.internal.conjecture.data import Status, StopTest, \ ConjectureData def test_can_find_positive_delta(): assert minimal(timedeltas(), lambda x: x.days > 0) == dt.timedelta(1) def test_can_find_negative_delta(): assert minimal(timedeltas(max_value=dt.timedelta(10**6)), lambda x: x.days < 0) == dt.timedelta(-1) def test_can_find_on_the_second(): find_any(timedeltas(), lambda x: x.seconds == 0) def test_can_find_off_the_second(): find_any(timedeltas(), lambda x: x.seconds != 0) def test_simplifies_towards_zero_delta(): d = minimal(timedeltas()) assert d.days == d.seconds == d.microseconds == 0 def test_min_value_is_respected(): assert minimal(timedeltas(min_value=dt.timedelta(days=10))).days == 10 def 
test_max_value_is_respected(): assert minimal(timedeltas(max_value=dt.timedelta(days=-10))).days == -10 @given(timedeltas()) def test_single_timedelta(val): assert find_any(timedeltas(val, val)) is val def test_simplifies_towards_millenium(): d = minimal(datetimes()) assert d.year == 2000 assert d.month == d.day == 1 assert d.hour == d.minute == d.second == d.microsecond == 0 @given(datetimes()) def test_default_datetimes_are_naive(dt): assert dt.tzinfo is None @flaky(max_runs=3, min_passes=1) def test_bordering_on_a_leap_year(): with settings(database=None, max_examples=10 ** 7, timeout=unlimited): x = minimal(datetimes(dt.datetime.min.replace(year=2003), dt.datetime.max.replace(year=2005)), lambda x: x.month == 2 and x.day == 29, timeout_after=60) assert x.year == 2004 def test_DatetimeStrategy_draw_may_fail(): def is_failure_inducing(b): try: return strat._attempt_one_draw( ConjectureData.for_buffer(b)) is None except StopTest: return False strat = DatetimeStrategy(dt.datetime.min, dt.datetime.max, none()) failure_inducing = find(binary(), is_failure_inducing) data = ConjectureData.for_buffer(failure_inducing * 100) with pytest.raises(StopTest): data.draw(strat) assert data.status == Status.INVALID def test_can_find_after_the_year_2000(): assert minimal(dates(), lambda x: x.year > 2000).year == 2001 def test_can_find_before_the_year_2000(): assert minimal(dates(), lambda x: x.year < 2000).year == 1999 def test_can_find_each_month(): for month in hrange(1, 13): find_any(dates(), lambda x: x.month == month) def test_min_year_is_respected(): assert minimal(dates(min_value=dt.date.min.replace(2003))).year == 2003 def test_max_year_is_respected(): assert minimal(dates(max_value=dt.date.min.replace(1998))).year == 1998 @given(dates()) def test_single_date(val): assert find_any(dates(val, val)) is val def test_can_find_midnight(): find_any(times(), lambda x: x.hour == x.minute == x.second == 0) def test_can_find_non_midnight(): assert minimal(times(), lambda x: x.hour 
!= 0).hour == 1 def test_can_find_on_the_minute(): find_any(times(), lambda x: x.second == 0) def test_can_find_off_the_minute(): find_any(times(), lambda x: x.second != 0) def test_simplifies_towards_midnight(): d = minimal(times()) assert d.hour == d.minute == d.second == d.microsecond == 0 def test_can_generate_naive_time(): find_any(times(), lambda d: not d.tzinfo) @given(times()) def test_naive_times_are_naive(dt): assert dt.tzinfo is None @checks_deprecated_behaviour def test_deprecated_min_date_is_respected(): assert minimal(dates(min_date=dt.date.min.replace(2003))).year == 2003 hypothesis-python-3.44.1/tests/cover/test_deadline.py000066400000000000000000000077131321557765100230030ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import time import warnings import pytest import hypothesis.strategies as st from hypothesis import HealthCheck, given, settings, unlimited from hypothesis.errors import Flaky, DeadlineExceeded, \ HypothesisDeprecationWarning from tests.common.utils import capture_out, checks_deprecated_behaviour def test_raises_deadline_on_slow_test(): @settings(deadline=500) @given(st.integers()) def slow(i): time.sleep(1) with pytest.raises(DeadlineExceeded): slow() def test_only_warns_once(): @given(st.integers()) def slow(i): time.sleep(1) try: warnings.simplefilter('always', HypothesisDeprecationWarning) with warnings.catch_warnings(record=True) as w: slow() finally: warnings.simplefilter('error', HypothesisDeprecationWarning) assert len(w) == 1 @checks_deprecated_behaviour @given(st.integers()) def test_slow_tests_are_deprecated_by_default(i): time.sleep(1) @given(st.integers()) @settings(deadline=None) def test_slow_with_none_deadline(i): time.sleep(1) def test_raises_flaky_if_a_test_becomes_fast_on_rerun(): once = [True] @settings(deadline=500) @given(st.integers()) def test_flaky_slow(i): if once[0]: once[0] = False time.sleep(1) with pytest.raises(Flaky): test_flaky_slow() def test_deadlines_participate_in_shrinking(): @settings(deadline=500) @given(st.integers()) def slow_if_large(i): if i >= 10000: time.sleep(1) with capture_out() as o: with pytest.raises(DeadlineExceeded): slow_if_large() assert 'slow_if_large(i=10000)' in o.getvalue() def test_keeps_you_well_above_the_deadline(): seen = set() failed_once = [False] @settings(deadline=100, timeout=unlimited, suppress_health_check=[ HealthCheck.hung_test ]) @given(st.integers(0, 2000)) def slow(i): # Make sure our initial failure isn't something that immediately goes # flaky. 
if not failed_once[0]: if i * 0.9 <= 100: return else: failed_once[0] = True t = i / 1000 if i in seen: time.sleep(0.9 * t) else: seen.add(i) time.sleep(t) with pytest.raises(DeadlineExceeded): slow() def test_gives_a_deadline_specific_flaky_error_message(): once = [True] @settings(deadline=100) @given(st.integers()) def slow_once(i): if once[0]: once[0] = False time.sleep(0.2) with capture_out() as o: with pytest.raises(Flaky): slow_once() assert 'Unreliable test timing' in o.getvalue() assert 'took 2' in o.getvalue() @pytest.mark.parametrize('slow_strategy', [False, True]) @pytest.mark.parametrize('slow_test', [False, True]) def test_should_only_fail_a_deadline_if_the_test_is_slow( slow_strategy, slow_test ): s = st.integers() if slow_strategy: s = s.map(lambda x: time.sleep(0.08)) @settings(deadline=50) @given(st.data()) def test(data): data.draw(s) if slow_test: time.sleep(0.1) if slow_test: with pytest.raises(DeadlineExceeded): test() else: test() hypothesis-python-3.44.1/tests/cover/test_deferred_errors.py000066400000000000000000000037111321557765100244040ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.strategies as st from hypothesis import find, given from hypothesis.errors import InvalidArgument def test_does_not_error_on_initial_calculation(): st.floats(max_value=float('nan')) st.sampled_from([]) st.lists(st.integers(), min_size=5, max_size=2) st.floats(min_value=2.0, max_value=1.0) def test_errors_each_time(): s = st.integers(max_value=1, min_value=3) with pytest.raises(InvalidArgument): s.example() with pytest.raises(InvalidArgument): s.example() def test_errors_on_test_invocation(): @given(st.integers(max_value=1, min_value=3)) def test(x): pass with pytest.raises(InvalidArgument): test() def test_errors_on_find(): s = st.lists(st.integers(), min_size=5, max_size=2) with pytest.raises(InvalidArgument): find(s, lambda x: True) def test_errors_on_example(): s = st.floats(min_value=2.0, max_value=1.0) with pytest.raises(InvalidArgument): s.example() def test_does_not_recalculate_the_strategy(): calls = [0] @st.defines_strategy def foo(): calls[0] += 1 return st.just(1) f = foo() assert calls == [0] f.example() assert calls == [1] f.example() assert calls == [1] hypothesis-python-3.44.1/tests/cover/test_deferred_strategies.py000066400000000000000000000140301321557765100252360ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import strategies as st from hypothesis import given from hypothesis.errors import InvalidArgument from tests.common.debug import minimal, assert_no_examples from hypothesis.internal.compat import hrange def test_binary_tree(): tree = st.deferred(lambda: st.integers() | st.tuples(tree, tree)) assert minimal(tree) == 0 assert minimal(tree, lambda x: isinstance(x, tuple)) == (0, 0) def test_bad_binary_tree(): tree = st.deferred(lambda: st.tuples(tree, tree) | st.integers()) assert minimal(tree) == 0 assert minimal(tree, lambda x: isinstance(x, tuple)) == (0, 0) def test_large_branching_tree(): tree = st.deferred( lambda: st.integers() | st.tuples(tree, tree, tree, tree, tree)) assert minimal(tree) == 0 assert minimal(tree, lambda x: isinstance(x, tuple)) == (0,) * 5 def test_bad_branching_tree(): tree = st.deferred( lambda: st.tuples(tree, tree, tree, tree, tree) | st.integers()) assert minimal(tree) == 0 assert minimal(tree, lambda x: isinstance(x, tuple)) == (0,) * 5 def test_mutual_recursion(): t = st.deferred(lambda: a | b) a = st.deferred(lambda: st.none() | st.tuples(st.just('a'), b)) b = st.deferred(lambda: st.none() | st.tuples(st.just('b'), a)) for c in ('a', 'b'): assert minimal( t, lambda x: x is not None and x[0] == c) == (c, None) def test_non_trivial_json(): json = st.deferred( lambda: st.none() | st.floats() | st.text() | lists | objects ) lists = st.lists(json) objects = st.dictionaries(st.text(), json) assert minimal(json) is None small_list = minimal(json, lambda x: isinstance(x, list) and x) assert small_list == [None] x = minimal( json, lambda x: isinstance(x, dict) and isinstance(x.get(''), list)) assert x == {'': []} def test_errors_on_non_function_define(): x = st.deferred(1) with pytest.raises(InvalidArgument): x.example() def test_errors_if_define_does_not_return_search_strategy(): x = st.deferred(lambda: 1) with 
pytest.raises(InvalidArgument): x.example() def test_errors_on_definition_as_self(): x = st.deferred(lambda: x) with pytest.raises(InvalidArgument): x.example() def test_branches_pass_through_deferred(): x = st.one_of(st.booleans(), st.integers()) y = st.deferred(lambda: x) assert x.branches == y.branches def test_can_draw_one_of_self(): x = st.deferred(lambda: st.one_of(st.booleans(), x)) assert minimal(x) is False assert len(x.branches) == 1 def test_hidden_self_references_just_result_in_no_example(): bad = st.deferred(lambda: st.none().flatmap(lambda _: bad)) assert_no_examples(bad) def test_self_recursive_flatmap(): bad = st.deferred(lambda: bad.flatmap(lambda x: st.none())) assert_no_examples(bad) def test_self_reference_through_one_of_can_detect_emptiness(): bad = st.deferred(lambda: st.one_of(bad, bad)) assert bad.is_empty def test_self_tuple_draws_nothing(): x = st.deferred(lambda: st.tuples(x)) assert_no_examples(x) def test_mutually_recursive_tuples_draw_nothing(): x = st.deferred(lambda: st.tuples(y)) y = st.tuples(x) assert_no_examples(x) assert_no_examples(y) def test_self_recursive_lists(): x = st.deferred(lambda: st.lists(x)) assert minimal(x) == [] assert minimal(x, bool) == [[]] assert minimal(x, lambda x: len(x) > 1) == [[], []] def test_literals_strategy_is_valid(): literals = st.deferred(lambda: st.one_of( st.booleans(), st.tuples(literals, literals), literals.map(lambda x: [x]), )) @given(literals) def test(e): pass test() assert not literals.has_reusable_values def test_impossible_self_recursion(): x = st.deferred(lambda: st.tuples(st.none(), x)) assert x.is_empty assert x.has_reusable_values def test_very_deep_deferral(): # This test is designed so that the recursive properties take a very long # time to converge: Although we can rapidly determine them for the original # value, each round in the fixed point calculation only manages to update # a single value in the related strategies, so it takes 100 rounds to # update everything. 
Most importantly this triggers our infinite loop # detection heuristic and we start tracking duplicates, but we shouldn't # see any because this loop isn't infinite, just long. def strat(i): if i == 0: return st.deferred(lambda: st.one_of(strategies + [st.none()])) else: return st.deferred( lambda: st.tuples(strategies[(i + 1) % len(strategies)])) strategies = list(map(strat, hrange(100))) assert strategies[0].has_reusable_values assert not strategies[0].is_empty def test_recursion_in_middle(): # This test is significant because the integers().map(abs) is not checked # in the initial pass - when we recurse into x initially we decide that # x is empty, so the tuple is empty, and don't need to check the third # argument. Then when we do the more refined test we've discovered that x # is non-empty, so we need to check the non-emptiness of the last component # to determine the non-emptiness of the tuples. x = st.deferred( lambda: st.tuples(st.none(), x, st.integers().map(abs)) | st.none()) assert not x.is_empty hypothesis-python-3.44.1/tests/cover/test_detection.py000066400000000000000000000032451321557765100232100ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import given from hypothesis.stateful import GenericStateMachine from hypothesis.strategies import integers from hypothesis.internal.detection import is_hypothesis_test def test_functions_default_to_not_tests(): def foo(): pass assert not is_hypothesis_test(foo) def test_methods_default_to_not_tests(): class Foo(object): def foo(): pass assert not is_hypothesis_test(Foo().foo) def test_detection_of_functions(): @given(integers()) def test(i): pass assert is_hypothesis_test(test) def test_detection_of_methods(): class Foo(object): @given(integers()) def test(self, i): pass assert is_hypothesis_test(Foo().test) def test_detection_of_stateful_tests(): class Stuff(GenericStateMachine): def steps(self): return integers() def execute_step(self, step): pass assert is_hypothesis_test(Stuff.TestCase().runTest) hypothesis-python-3.44.1/tests/cover/test_direct_strategies.py000066400000000000000000000272231321557765100247400ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import math import decimal import fractions from datetime import date, time, datetime, timedelta import pytest import hypothesis.strategies as ds from hypothesis import find, given, settings from hypothesis.errors import InvalidArgument from hypothesis.internal.reflection import nicerepr def fn_test(*fnkwargs): fnkwargs = list(fnkwargs) return pytest.mark.parametrize( ('fn', 'args'), fnkwargs, ids=[ '%s(%s)' % (fn.__name__, ', '.join(map(nicerepr, args))) for fn, args in fnkwargs ] ) def fn_ktest(*fnkwargs): fnkwargs = list(fnkwargs) return pytest.mark.parametrize( ('fn', 'kwargs'), fnkwargs, ids=[ '%s(%s)' % (fn.__name__, ', '.join(sorted( '%s=%r' % (k, v) for k, v in kwargs.items() )),) for fn, kwargs in fnkwargs ] ) @fn_ktest( (ds.integers, {'min_value': float('nan')}), (ds.integers, {'min_value': 2, 'max_value': 1}), (ds.integers, {'min_value': 0.1, 'max_value': 0.2}), (ds.integers, {'min_value': float('nan')}), (ds.integers, {'max_value': float('nan')}), (ds.dates, {'min_value': 'fish'}), (ds.dates, {'max_value': 'fish'}), (ds.dates, { 'min_value': date(2017, 8, 22), 'max_value': date(2017, 8, 21)}), (ds.datetimes, {'min_value': 'fish'}), (ds.datetimes, {'max_value': 'fish'}), (ds.datetimes, { 'min_value': datetime(2017, 8, 22), 'max_value': datetime(2017, 8, 21)}), (ds.decimals, {'min_value': float('nan')}), (ds.decimals, {'max_value': float('nan')}), (ds.decimals, {'min_value': 2, 'max_value': 1}), (ds.decimals, {'max_value': '-snan'}), (ds.decimals, {'max_value': complex(1, 2)}), (ds.decimals, {'places': -1}), (ds.decimals, {'places': 0.5}), (ds.decimals, {'max_value': 0.0, 'min_value': 1.0}), (ds.decimals, {'min_value': 1.0, 'max_value': 0.0}), (ds.decimals, { 'min_value': 0.0, 'max_value': 1.0, 'allow_infinity': True}), (ds.decimals, {'min_value': 'inf'}), (ds.decimals, {'max_value': '-inf'}), (ds.decimals, {'min_value': '-inf', 'allow_infinity': False}), (ds.decimals, 
        {'max_value': 'inf', 'allow_infinity': False}),
    (ds.decimals, {'min_value': complex(1, 2)}),
    (ds.decimals, {'min_value': '0.1', 'max_value': '0.9', 'places': 0}),
    (ds.dictionaries, {
        'keys': ds.booleans(), 'values': ds.booleans(),
        'min_size': 10, 'max_size': 1}),
    (ds.floats, {'min_value': float('nan')}),
    (ds.floats, {'max_value': float('nan')}),
    (ds.floats, {'min_value': complex(1, 2)}),
    (ds.floats, {'max_value': complex(1, 2)}),
    (ds.fractions, {'min_value': 2, 'max_value': 1}),
    (ds.fractions, {
        'min_value': '1/3', 'max_value': '1/3', 'max_denominator': 2}),
    (ds.fractions, {'min_value': float('nan')}),
    (ds.fractions, {'max_value': float('nan')}),
    (ds.fractions, {'max_denominator': 0}),
    (ds.fractions, {'max_denominator': 1.5}),
    (ds.fractions, {'min_value': complex(1, 2)}),
    (ds.lists, {}),
    (ds.lists, {'average_size': '5'}),
    (ds.lists, {'average_size': float('nan')}),
    (ds.lists, {'min_size': 10, 'max_size': 9}),
    (ds.lists, {'min_size': -10, 'max_size': -9}),
    (ds.lists, {'max_size': -9}),
    (ds.lists, {'max_size': 10}),
    (ds.lists, {'min_size': -10}),
    (ds.lists, {'max_size': 10, 'average_size': 20}),
    (ds.lists, {'min_size': 1.0, 'average_size': 0.5}),
    (ds.lists, {'elements': 'hi'}),
    (ds.text, {'min_size': 10, 'max_size': 9}),
    (ds.text, {'max_size': 10, 'average_size': 20}),
    (ds.binary, {'min_size': 10, 'max_size': 9}),
    (ds.binary, {'max_size': 10, 'average_size': 20}),
    (ds.floats, {'min_value': float('nan')}),
    (ds.floats, {'max_value': 0.0, 'min_value': 1.0}),
    (ds.floats, {'min_value': 0.0, 'allow_nan': True}),
    (ds.floats, {'max_value': 0.0, 'allow_nan': True}),
    (ds.floats, {'min_value': 0.0, 'max_value': 1.0, 'allow_infinity': True}),
    (ds.fixed_dictionaries, {'mapping': 'fish'}),
    (ds.fixed_dictionaries, {'mapping': {1: 'fish'}}),
    (ds.dictionaries, {'keys': ds.integers(), 'values': 1}),
    (ds.dictionaries, {'keys': 1, 'values': ds.integers()}),
    (ds.text, {'alphabet': '', 'min_size': 1}),
    (ds.timedeltas, {'min_value': 'fish'}),
    (ds.timedeltas, {'max_value': 'fish'}),
    (ds.timedeltas, {
        'min_value': timedelta(hours=1),
        'max_value': timedelta(minutes=1)}),
    (ds.times, {'min_value': 'fish'}),
    (ds.times, {'max_value': 'fish'}),
    (ds.times, {'min_value': time(2, 0), 'max_value': time(1, 0)}),
    (ds.uuids, {'version': 6}),
)
def test_validates_keyword_arguments(fn, kwargs):
    with pytest.raises(InvalidArgument):
        fn(**kwargs).example()


@fn_ktest(
    (ds.integers, {'min_value': 0}),
    (ds.integers, {'min_value': 11}),
    (ds.integers, {'min_value': 11, 'max_value': 100}),
    (ds.integers, {'max_value': 0}),
    (ds.integers, {'min_value': decimal.Decimal('1.5')}),
    (ds.integers, {'min_value': -1.5, 'max_value': -0.5}),
    (ds.decimals, {'min_value': 1.0, 'max_value': 1.5}),
    (ds.decimals, {'min_value': '1.0', 'max_value': '1.5'}),
    (ds.decimals, {'min_value': decimal.Decimal('1.5')}),
    (ds.decimals, {
        'max_value': 1.0, 'min_value': -1.0, 'allow_infinity': False}),
    (ds.decimals, {'min_value': 1.0, 'allow_nan': False}),
    (ds.decimals, {'max_value': 1.0, 'allow_nan': False}),
    (ds.decimals, {'max_value': 1.0, 'min_value': -1.0, 'allow_nan': False}),
    (ds.decimals, {'min_value': '-inf'}),
    (ds.decimals, {'max_value': 'inf'}),
    (ds.fractions, {'min_value': -1, 'max_value': 1, 'max_denominator': 1000}),
    (ds.fractions, {'min_value': 1, 'max_value': 1}),
    (ds.fractions, {'min_value': 1, 'max_value': 1, 'max_denominator': 2}),
    (ds.fractions, {'min_value': 1.0}),
    (ds.fractions, {'min_value': decimal.Decimal('1.0')}),
    (ds.fractions, {'min_value': fractions.Fraction(1, 2)}),
    (ds.fractions, {'min_value': '1/2', 'max_denominator': 1}),
    (ds.fractions, {'max_value': '1/2', 'max_denominator': 1}),
    (ds.lists, {'max_size': 0}),
    (ds.lists, {'elements': ds.integers()}),
    (ds.lists, {'elements': ds.integers(), 'max_size': 5}),
    (ds.lists, {'elements': ds.booleans(), 'min_size': 5}),
    (ds.lists, {'elements': ds.booleans(), 'min_size': 5, 'max_size': 10}),
    (ds.lists, {
        'average_size': 20, 'elements': ds.booleans(), 'max_size': 25}),
    (ds.sets, {
        'min_size': 10, 'max_size': 10, 'elements': ds.integers(),
    }),
    (ds.booleans, {}),
    (ds.just, {'value': 'hi'}),
    (ds.integers, {'min_value': 12, 'max_value': 12}),
    (ds.floats, {}),
    (ds.floats, {'min_value': 1.0}),
    (ds.floats, {'max_value': 1.0}),
    (ds.floats, {'max_value': 1.0, 'min_value': -1.0}),
    (ds.floats, {
        'max_value': 1.0, 'min_value': -1.0, 'allow_infinity': False}),
    (ds.floats, {'min_value': 1.0, 'allow_nan': False}),
    (ds.floats, {'max_value': 1.0, 'allow_nan': False}),
    (ds.floats, {'max_value': 1.0, 'min_value': -1.0, 'allow_nan': False}),
    (ds.sampled_from, {'elements': [1]}),
    (ds.sampled_from, {'elements': [1, 2, 3]}),
    (ds.fixed_dictionaries, {'mapping': {1: ds.integers()}}),
    (ds.dictionaries, {'keys': ds.booleans(), 'values': ds.integers()}),
    (ds.text, {'alphabet': 'abc'}),
    (ds.text, {'alphabet': ''}),
    (ds.text, {'alphabet': ds.sampled_from('abc')}),
)
def test_produces_valid_examples_from_keyword(fn, kwargs):
    fn(**kwargs).example()


@fn_test(
    (ds.one_of, (1,)),
    (ds.tuples, (1,)),
)
def test_validates_args(fn, args):
    with pytest.raises(InvalidArgument):
        fn(*args).example()


@fn_test(
    (ds.one_of, (ds.booleans(), ds.tuples(ds.booleans()))),
    (ds.one_of, (ds.booleans(),)),
    (ds.text, ()),
    (ds.binary, ()),
    (ds.builds, (lambda x, y: x + y, ds.integers(), ds.integers())),
)
def test_produces_valid_examples_from_args(fn, args):
    fn(*args).example()


def test_tuples_raise_error_on_bad_kwargs():
    with pytest.raises(TypeError):
        ds.tuples(stuff='things')


@given(ds.lists(ds.booleans(), min_size=10, max_size=10))
def test_has_specified_length(xs):
    assert len(xs) == 10


@given(ds.integers(max_value=100))
@settings(max_examples=100)
def test_has_upper_bound(x):
    assert x <= 100


@given(ds.integers(min_value=100))
def test_has_lower_bound(x):
    assert x >= 100


@given(ds.integers(min_value=1, max_value=2))
def test_is_in_bounds(x):
    assert 1 <= x <= 2


@given(ds.fractions(min_value=-1, max_value=1, max_denominator=1000))
def test_fraction_is_in_bounds(x):
    assert -1 <= x <= 1 and abs(x.denominator) <= 1000


@given(ds.fractions(min_value=fractions.Fraction(1, 2)))
def test_fraction_gt_positive(x):
    assert fractions.Fraction(1, 2) <= x


@given(ds.fractions(max_value=fractions.Fraction(-1, 2)))
def test_fraction_lt_negative(x):
    assert x <= fractions.Fraction(-1, 2)


@given(ds.decimals(min_value=-1.5, max_value=1.5, allow_nan=False))
def test_decimal_is_in_bounds(x):
    # decimal.Decimal("-1.5") == -1.5 (not explicitly testable in py2.6)
    assert decimal.Decimal('-1.5') <= x <= decimal.Decimal('1.5')


def test_float_can_find_max_value_inf():
    assert find(
        ds.floats(max_value=float('inf')), lambda x: math.isinf(x)
    ) == float('inf')
    assert find(
        ds.floats(min_value=0.0), lambda x: math.isinf(x)) == float('inf')


def test_float_can_find_min_value_inf():
    find(ds.floats(), lambda x: x < 0 and math.isinf(x))
    find(
        ds.floats(min_value=float('-inf'), max_value=0.0),
        lambda x: math.isinf(x))


def test_can_find_none_list():
    assert find(ds.lists(ds.none()), lambda x: len(x) >= 3) == [None] * 3


def test_fractions():
    assert find(ds.fractions(), lambda f: f >= 1) == 1


def test_decimals():
    assert find(ds.decimals(), lambda f: f.is_finite() and f >= 1) == 1


def test_non_float_decimal():
    find(
        ds.decimals(),
        lambda d: d.is_finite() and decimal.Decimal(float(d)) != d)


def test_produces_dictionaries_of_at_least_minimum_size():
    t = find(
        ds.dictionaries(ds.booleans(), ds.integers(), min_size=2),
        lambda x: True)
    assert t == {False: 0, True: 0}


@given(ds.dictionaries(ds.integers(), ds.integers(), max_size=5))
@settings(max_examples=50)
def test_dictionaries_respect_size(d):
    assert len(d) <= 5


@given(ds.dictionaries(ds.integers(), ds.integers(), max_size=0))
@settings(max_examples=50)
def test_dictionaries_respect_zero_size(d):
    assert len(d) == 0


@given(ds.lists(ds.none(), max_size=5))
def test_none_lists_respect_max_size(ls):
    assert len(ls) <= 5


@given(ds.lists(ds.none(), max_size=5, min_size=1))
def test_none_lists_respect_max_and_min_size(ls):
    assert 1 <= len(ls) <= 5
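The `find`-based tests above all lean on the same contract: `find` does not return just *any* satisfying example, it shrinks to a canonical minimal one (`0` for unconstrained integers, `13` for `x >= 13`, `[None] * 3` for a length-3 list). A toy illustration of that contract, using a naive exhaustive search over small integers instead of Hypothesis's real shrinker (`naive_find` is a hypothetical name, not part of the library):

```python
def naive_find(candidates, condition):
    # Return the first (i.e. smallest) candidate satisfying the predicate,
    # mimicking the "minimal example" contract that hypothesis.find provides
    # via generation plus shrinking.
    for value in candidates:
        if condition(value):
            return value
    raise ValueError('no example found')


# Mirrors test_can_find_an_int: the minimal integer >= 13 is 13 itself.
assert naive_find(range(100), lambda x: x >= 13) == 13
```

The real implementation searches randomly and then shrinks, but the observable guarantee the tests assert is the same as this exhaustive search's.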
@given(ds.iterables(ds.integers(), max_size=5, min_size=1))
def test_iterables_are_exhaustible(it):
    for _ in it:
        pass
    with pytest.raises(StopIteration):
        next(it)


def test_minimal_iterable():
    assert list(find(ds.iterables(ds.integers()), lambda x: True)) == []


hypothesis-python-3.44.1/tests/cover/test_draw_example.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from tests.common import standard_types
from hypothesis.strategies import lists


@pytest.mark.parametrize(
    u'spec', standard_types, ids=list(map(repr, standard_types)))
def test_single_example(spec):
    spec.example()


@pytest.mark.parametrize(
    u'spec', standard_types, ids=list(map(repr, standard_types)))
def test_list_example(spec):
    lists(spec, average_size=2).example()


hypothesis-python-3.44.1/tests/cover/test_duplication.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

from collections import Counter

import pytest

from hypothesis import given, settings
from hypothesis.searchstrategy import SearchStrategy


class Blocks(SearchStrategy):

    def __init__(self, n):
        self.n = n

    def do_draw(self, data):
        return data.draw_bytes(self.n)


@pytest.mark.parametrize('n', range(1, 5))
def test_does_not_duplicate_blocks(n):
    counts = Counter()

    @given(Blocks(n))
    def test(b):
        counts[b] += 1

    test()
    assert set(counts.values()) == {1}


@pytest.mark.parametrize('n', range(1, 5))
def test_mostly_does_not_duplicate_blocks_even_when_failing(n):
    counts = Counter()

    @settings(database=None)
    @given(Blocks(n))
    def test(b):
        counts[b] += 1
        if len(counts) > 3:
            raise ValueError()

    try:
        test()
    except ValueError:
        pass
    # There are two circumstances in which a duplicate is allowed: We replay
    # the failing test once to check for flakiness, and then we replay the
    # fully minimized failing test at the end to display the error. The
    # complication comes from the fact that these may or may not be the same
    # test case, so we can see either two test cases each run twice or one
    # test case which has been run three times.
    seen_counts = set(counts.values())
    assert seen_counts in ({1, 2}, {1, 3})
    assert len([k for k, v in counts.items() if v > 1]) <= 2


hypothesis-python-3.44.1/tests/cover/test_dynamic_variable.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

from hypothesis.utils.dynamicvariables import DynamicVariable


def test_can_assign():
    d = DynamicVariable(1)
    assert d.value == 1
    with d.with_value(2):
        assert d.value == 2
    assert d.value == 1


def test_can_nest():
    d = DynamicVariable(1)
    with d.with_value(2):
        assert d.value == 2
        with d.with_value(3):
            assert d.value == 3
        assert d.value == 2
    assert d.value == 1


hypothesis-python-3.44.1/tests/cover/test_escalation.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
import hypothesis.internal.escalation as esc
from hypothesis import given


def test_does_not_escalate_errors_in_non_hypothesis_file():
    try:
        assert False
    except AssertionError:
        esc.escalate_hypothesis_internal_error()


def test_does_escalate_errors_in_hypothesis_file(monkeypatch):
    monkeypatch.setattr(esc, 'is_hypothesis_file', lambda x: True)

    with pytest.raises(AssertionError):
        try:
            assert False
        except AssertionError:
            esc.escalate_hypothesis_internal_error()


def test_does_not_escalate_errors_in_hypothesis_file_if_disabled(monkeypatch):
    monkeypatch.setattr(esc, 'is_hypothesis_file', lambda x: True)
    monkeypatch.setattr(esc, 'PREVENT_ESCALATION', True)

    try:
        assert False
    except AssertionError:
        esc.escalate_hypothesis_internal_error()


def test_immediately_escalates_errors_in_generation():
    count = [0]

    def explode(s):
        count[0] += 1
        raise ValueError()

    @given(st.integers().map(explode))
    def test(i):
        pass

    with pytest.raises(ValueError):
        test()

    assert count == [1]


hypothesis-python-3.44.1/tests/cover/test_eval_as_source.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

from hypothesis.internal.reflection import source_exec_as_module


def test_can_eval_as_source():
    assert source_exec_as_module('foo=1').foo == 1


def test_caches():
    x = source_exec_as_module('foo=2')
    y = source_exec_as_module('foo=2')
    assert x is y


RECURSIVE = """
from hypothesis.internal.reflection import source_exec_as_module

def test_recurse():
    assert not (
        source_exec_as_module("too_much_recursion = False").too_much_recursion)
"""


def test_can_call_self_recursively():
    source_exec_as_module(RECURSIVE).test_recurse()


hypothesis-python-3.44.1/tests/cover/test_example.py

# coding=utf-8
from __future__ import division, print_function, absolute_import

from random import Random
from decimal import Decimal

import pytest

import hypothesis.strategies as st
from hypothesis import find, given, example, settings
from hypothesis.errors import NoExamples
from hypothesis.control import _current_build_context
from tests.common.utils import checks_deprecated_behaviour


@settings(deadline=None)
@given(st.integers())
def test_deterministic_examples_are_deterministic(seed):
    with _current_build_context.with_value(None):
        assert st.lists(st.integers()).example(Random(seed)) == \
            st.lists(st.integers()).example(Random(seed))


def test_example_of_none_is_none():
    assert st.none().example() is None


def test_exception_in_compare_can_still_have_example():
    st.one_of(
        st.none().map(lambda n: Decimal('snan')),
        st.just(Decimal(0))).example()


def test_does_not_always_give_the_same_example():
    s = st.integers()
    assert len(set(
        s.example() for _ in range(100)
    )) >= 10


def test_raises_on_no_examples():
    with pytest.raises(NoExamples):
        st.nothing().example()


@checks_deprecated_behaviour
@example(False)
@given(st.booleans())
def test_example_inside_given(b):
    st.integers().example()


@checks_deprecated_behaviour
def test_example_inside_find():
    find(st.integers(0, 100), lambda x: st.integers().example())


@checks_deprecated_behaviour
def test_example_inside_strategy():
    st.booleans().map(lambda x: st.integers().example()).example()


hypothesis-python-3.44.1/tests/cover/test_executors.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

import inspect
from unittest import TestCase

import pytest

from hypothesis import given, example
from hypothesis.executors import ConjectureRunner
from hypothesis.strategies import booleans, integers


def test_must_use_result_of_test():
    class DoubleRun(object):
        def execute_example(self, function):
            x = function()
            if inspect.isfunction(x):
                return x()

        @given(booleans())
        def boom(self, b):
            def f():
                raise ValueError()
            return f

    with pytest.raises(ValueError):
        DoubleRun().boom()


class TestTryReallyHard(TestCase):

    @given(integers())
    def test_something(self, i):
        pass

    def execute_example(self, f):
        f()
        return f()


class Valueless(object):

    def execute_example(self, f):
        try:
            return f()
        except ValueError:
            return None

    @given(integers())
    @example(1)
    def test_no_boom_on_example(self, x):
        raise ValueError()

    @given(integers())
    def test_no_boom(self, x):
        raise ValueError()

    @given(integers())
    def test_boom(self, x):
        assert False


def test_boom():
    with pytest.raises(AssertionError):
        Valueless().test_boom()


def test_no_boom():
    Valueless().test_no_boom()


def test_no_boom_on_example():
    Valueless().test_no_boom_on_example()


class TestNormal(ConjectureRunner, TestCase):

    @given(booleans())
    def test_stuff(self, b):
        pass


hypothesis-python-3.44.1/tests/cover/test_explicit_examples.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

from unittest import TestCase

import pytest

from hypothesis import Verbosity, note, given, example, settings, reporting
from hypothesis.errors import InvalidArgument
from tests.common.utils import capture_out
from hypothesis.strategies import text, integers
from hypothesis.internal.compat import integer_types, print_unicode


class TestInstanceMethods(TestCase):

    @given(integers())
    @example(1)
    def test_hi_1(self, x):
        assert isinstance(x, integer_types)

    @given(integers())
    @example(x=1)
    def test_hi_2(self, x):
        assert isinstance(x, integer_types)

    @given(x=integers())
    @example(x=1)
    def test_hi_3(self, x):
        assert isinstance(x, integer_types)


def test_kwarg_example_on_testcase():
    class Stuff(TestCase):
        @given(integers())
        @example(x=1)
        def test_hi(self, x):
            assert isinstance(x, integer_types)

    Stuff(u'test_hi').test_hi()


def test_errors_when_run_with_not_enough_args():
    @given(integers(), int)
    @example(1)
    def foo(x, y):
        pass

    with pytest.raises(TypeError):
        foo()


def test_errors_when_run_with_not_enough_kwargs():
    @given(integers(), int)
    @example(x=1)
    def foo(x, y):
        pass

    with pytest.raises(TypeError):
        foo()


def test_can_use_examples_after_given():
    long_str = u"This is a very long string that you've no chance of hitting"

    @example(long_str)
    @given(text())
    def test_not_long_str(x):
        assert x != long_str

    with pytest.raises(AssertionError):
        test_not_long_str()


def test_can_use_examples_before_given():
    long_str = u"This is a very long string that you've no chance of hitting"

    @given(text())
    @example(long_str)
    def test_not_long_str(x):
        assert x != long_str

    with pytest.raises(AssertionError):
        test_not_long_str()


def test_can_use_examples_around_given():
    long_str = u"This is a very long string that you've no chance of hitting"
    short_str = u'Still no chance'

    seen = []

    @example(short_str)
    @given(text())
    @example(long_str)
    def test_not_long_str(x):
        seen.append(x)

    test_not_long_str()
    assert set(seen[:2]) == set((long_str, short_str))


@pytest.mark.parametrize((u'x', u'y'), [(1, False), (2, True)])
@example(z=10)
@given(z=integers())
def test_is_a_thing(x, y, z):
    pass


def test_no_args_and_kwargs():
    with pytest.raises(InvalidArgument):
        example(1, y=2)


def test_no_empty_examples():
    with pytest.raises(InvalidArgument):
        example()


def test_does_not_print_on_explicit_examples_if_no_failure():
    @example(1)
    @given(integers())
    def test_positive(x):
        assert x > 0

    with reporting.with_reporter(reporting.default):
        with pytest.raises(AssertionError):
            with capture_out() as out:
                test_positive()
    out = out.getvalue()
    assert u'Falsifying example: test_positive(1)' not in out


def test_prints_output_for_explicit_examples():
    @example(-1)
    @given(integers())
    def test_positive(x):
        assert x > 0

    with reporting.with_reporter(reporting.default):
        with pytest.raises(AssertionError):
            with capture_out() as out:
                test_positive()
    out = out.getvalue()
    assert u'Falsifying example: test_positive(x=-1)' in out


def test_prints_verbose_output_for_explicit_examples():
    @settings(verbosity=Verbosity.verbose)
    @example('NOT AN INTEGER')
    @given(integers())
    def test_always_passes(x):
        pass

    with reporting.with_reporter(reporting.default):
        with capture_out() as out:
            test_always_passes()
    out = out.getvalue()
    assert u"Trying example: test_always_passes(x='NOT AN INTEGER')" in out


def test_captures_original_repr_of_example():
    @example(x=[])
    @given(integers())
    def test_mutation(x):
        x.append(1)
        assert not x

    with reporting.with_reporter(reporting.default):
        with pytest.raises(AssertionError):
            with capture_out() as out:
                test_mutation()
    out = out.getvalue()
    assert u'Falsifying example: test_mutation(x=[])' in out


def test_examples_are_tried_in_order():
    @example(x=1)
    @example(x=2)
    @given(integers())
    @settings(max_examples=0)
    @example(x=3)
    def test(x):
        print_unicode(u"x -> %d" % (x,))

    with capture_out() as out:
        with reporting.with_reporter(reporting.default):
            test()
    ls = out.getvalue().splitlines()
    assert ls == [u"x -> 1", 'x -> 2', 'x -> 3']


def test_prints_note_in_failing_example():
    @example(x=42)
    @example(x=43)
    @given(integers())
    def test(x):
        note('x -> %d' % (x,))
        assert x == 42

    with capture_out() as out:
        with reporting.with_reporter(reporting.default):
            with pytest.raises(AssertionError):
                test()
    v = out.getvalue()
    print_unicode(v)
    assert 'x -> 43' in v
    assert 'x -> 42' not in v


def test_must_agree_with_number_of_arguments():
    @example(1, 2)
    @given(integers())
    def test(a):
        pass

    with pytest.raises(InvalidArgument):
        test()


hypothesis-python-3.44.1/tests/cover/test_fancy_repr.py

# coding=utf-8
from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st


def test_floats_is_floats():
    assert repr(st.floats()) == u'floats()'


def test_includes_non_default_values():
    assert repr(st.floats(max_value=1.0)) == u'floats(max_value=1.0)'


def foo(*args, **kwargs):
    pass


def test_builds_repr():
    assert repr(st.builds(foo, st.just(1), x=st.just(10))) == \
        u'builds(foo, just(1), x=just(10))'


def test_map_repr():
    assert repr(st.integers().map(abs)) == u'integers().map(abs)'
    assert repr(st.integers().map(lambda x: x * 2)) == \
        u'integers().map(lambda x: x * 2)'


def test_filter_repr():
    assert repr(st.integers().filter(lambda x: x != 3)) == \
        u'integers().filter(lambda x: x != 3)'


def test_flatmap_repr():
    assert repr(st.integers().flatmap(lambda x: st.booleans())) == \
        u'integers().flatmap(lambda x: st.booleans())'


hypothesis-python-3.44.1/tests/cover/test_filestorage.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

import os

import hypothesis.configuration as fs

previous_home_dir = None


def setup_function(function):
    global previous_home_dir
    previous_home_dir = fs.hypothesis_home_dir()
    fs.set_hypothesis_home_dir(None)


def teardown_function(function):
    global previous_home_dir
    fs.set_hypothesis_home_dir(previous_home_dir)
    previous_home_dir = None


def test_homedir_exists_automatically():
    assert os.path.exists(fs.hypothesis_home_dir())


def test_can_set_homedir_and_it_will_exist(tmpdir):
    fs.set_hypothesis_home_dir(str(tmpdir.mkdir(u'kittens')))
    d = fs.hypothesis_home_dir()
    assert u'kittens' in d
    assert os.path.exists(d)


def test_will_pick_up_location_from_env(tmpdir):
    os.environ[u'HYPOTHESIS_STORAGE_DIRECTORY'] = str(tmpdir)
    assert fs.hypothesis_home_dir() == str(tmpdir)


def test_storage_directories_are_created_automatically(tmpdir):
    fs.set_hypothesis_home_dir(str(tmpdir))
    assert os.path.exists(fs.storage_directory(u'badgers'))


hypothesis-python-3.44.1/tests/cover/test_filtering.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import given
from hypothesis.strategies import lists, integers


@pytest.mark.parametrize((u'specifier', u'condition'), [
    (integers(), lambda x: x > 1),
    (lists(integers()), bool),
])
def test_filter_correctly(specifier, condition):
    @given(specifier.filter(condition))
    def test_is_filtered(x):
        assert condition(x)

    test_is_filtered()


hypothesis-python-3.44.1/tests/cover/test_find.py

# coding=utf-8

from __future__ import division, print_function, absolute_import

import math
import time

import pytest

from hypothesis import settings as Settings
from hypothesis import find
from hypothesis.errors import Timeout, NoSuchExample
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.strategies import lists, floats, booleans, integers, \
    dictionaries


def test_can_find_an_int():
    assert find(integers(), lambda x: True) == 0
    assert find(integers(), lambda x: x >= 13) == 13


def test_can_find_list():
    x = find(lists(integers()), lambda x: sum(x) >= 10)
    assert sum(x) == 10


def test_can_find_nan():
    find(floats(), math.isnan)


def test_can_find_nans():
    x = find(lists(floats()), lambda x: math.isnan(sum(x)))
    if len(x) == 1:
        assert math.isnan(x[0])
    else:
        assert 2 <= len(x) <= 3


def test_raises_when_no_example():
    settings = Settings(
        max_examples=20,
        min_satisfying_examples=0,
    )
    with pytest.raises(NoSuchExample):
        find(integers(), lambda x: False, settings=settings)


def test_condition_is_name():
    settings = Settings(
        max_examples=20,
        min_satisfying_examples=0,
    )
    with pytest.raises(NoSuchExample) as e:
        find(booleans(), lambda x: False, settings=settings)
    assert 'lambda x:' in e.value.args[0]

    with pytest.raises(NoSuchExample) as e:
        find(integers(), lambda x: '☃' in str(x), settings=settings)
    assert 'lambda x:' in e.value.args[0]

    def bad(x):
        return False

    with pytest.raises(NoSuchExample) as e:
        find(integers(), bad, settings=settings)
    assert 'bad' in e.value.args[0]


def test_find_dictionary():
    assert len(find(
        dictionaries(keys=integers(), values=integers()),
        lambda xs: any(kv[0] > kv[1] for kv in xs.items()))) == 1


@checks_deprecated_behaviour
def test_times_out():
    with pytest.raises(Timeout) as e:
        find(
            integers(),
            lambda x: time.sleep(0.05) or False,
            settings=Settings(timeout=0.01))

    e.value.args[0]


hypothesis-python-3.44.1/tests/cover/test_flakiness.py

# coding=utf-8
This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import Verbosity, given, assume, reject, example, settings from hypothesis.errors import Flaky, Unsatisfiable, UnsatisfiedAssumption from hypothesis.strategies import lists, booleans, integers, composite, \ random_module class Nope(Exception): pass def test_fails_only_once_is_flaky(): first_call = [True] @given(integers()) def rude(x): if first_call[0]: first_call[0] = False raise Nope() with pytest.raises(Flaky): rude() def test_gives_flaky_error_if_assumption_is_flaky(): seen = set() @given(integers()) @settings(verbosity=Verbosity.quiet) def oops(s): assume(s not in seen) seen.add(s) assert False with pytest.raises(Flaky): oops() def test_does_not_attempt_to_shrink_flaky_errors(): values = [] @given(integers()) def test(x): values.append(x) assert len(values) != 1 with pytest.raises(Flaky): test() assert len(set(values)) == 1 class SatisfyMe(Exception): pass @composite def single_bool_lists(draw): n = draw(integers(0, 20)) result = [False] * (n + 1) result[n] = True return result @example([True, False, False, False], [3], None,) @example([False, True, False, False], [3], None,) @example([False, False, True, False], [3], None,) @example([False, False, False, True], [3], None,) @settings(max_examples=0) @given( lists(booleans(), average_size=20) | 
single_bool_lists(), lists(integers(1, 3), average_size=20), random_module()) def test_failure_sequence_inducing(building, testing, rnd): buildit = iter(building) testit = iter(testing) def build(x): try: assume(not next(buildit)) except StopIteration: pass return x @given(integers().map(build)) @settings( verbosity=Verbosity.quiet, database=None, perform_health_check=False, max_shrinks=0 ) def test(x): try: i = next(testit) except StopIteration: return if i == 1: return elif i == 2: reject() else: raise Nope() try: test() except (Nope, Unsatisfiable, Flaky): pass except UnsatisfiedAssumption: raise SatisfyMe() hypothesis-python-3.44.1/tests/cover/test_flatmap.py000066400000000000000000000061771321557765100226650ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import find, given, assume, settings
from hypothesis.database import ExampleDatabase
from hypothesis.strategies import just, text, lists, floats, tuples, \
    booleans, integers
from hypothesis.internal.compat import Counter

ConstantLists = integers().flatmap(lambda i: lists(just(i)))

OrderedPairs = integers(1, 200).flatmap(
    lambda e: tuples(integers(0, e - 1), just(e))
)

with settings(max_examples=100):
    @given(ConstantLists)
    def test_constant_lists_are_constant(x):
        assume(len(x) >= 3)
        assert len(set(x)) == 1

    @given(OrderedPairs)
    def test_in_order(x):
        assert x[0] < x[1]


def test_flatmap_retrieve_from_db():
    constant_float_lists = floats(0, 1).flatmap(
        lambda x: lists(just(x))
    )

    track = []
    db = ExampleDatabase()

    @given(constant_float_lists)
    @settings(database=db)
    def record_and_test_size(xs):
        if sum(xs) >= 1:
            track.append(xs)
            assert False

    with pytest.raises(AssertionError):
        record_and_test_size()

    assert track
    example = track[-1]
    track = []

    with pytest.raises(AssertionError):
        record_and_test_size()

    assert track[0] == example


def test_flatmap_does_not_reuse_strategies():
    s = lists(max_size=0).flatmap(just)
    assert s.example() is not s.example()


def test_flatmap_has_original_strategy_repr():
    ints = integers()
    ints_up = ints.flatmap(lambda n: integers(min_value=n))
    assert repr(ints) in repr(ints_up)


def test_mixed_list_flatmap():
    s = lists(
        booleans().flatmap(lambda b: booleans() if b else text())
    )

    def criterion(ls):
        c = Counter(type(l) for l in ls)
        return len(c) >= 2 and min(c.values()) >= 3

    result = find(s, criterion)
    assert len(result) == 6
    assert set(result) == set([False, u''])


@pytest.mark.parametrize('n', range(1, 10))
def test_can_shrink_through_a_binding(n):
    bool_lists = integers(0, 100).flatmap(
        lambda k: lists(booleans(), min_size=k, max_size=k))

    assert find(
        bool_lists, lambda x: len(list(filter(bool, x))) >= n
    ) == [True] * n
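The two parametrized tests here both shrink "through" a flatmap binding: a first draw fixes a length `k`, and a second, dependent draw produces exactly `k` booleans. As a minimal plain-Python sketch of that dependent-draw structure (this is not Hypothesis's actual draw machinery, and the helper name is invented for illustration):

```python
import random


def draw_bound_bool_list(rnd):
    # First draw picks a length k; the second draw depends on it,
    # producing exactly k booleans -- the "binding" that shrinking
    # in the surrounding tests has to pass through.
    k = rnd.randint(0, 100)
    return [rnd.choice([True, False]) for _ in range(k)]


rnd = random.Random(0)
xs = draw_bound_bool_list(rnd)
assert 0 <= len(xs) <= 100
assert all(isinstance(b, bool) for b in xs)
```

A shrinker that only mutated the boolean draws could never shorten the list; it has to shrink the first draw (`k`) and re-run the dependent draw, which is exactly what these tests exercise.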
@pytest.mark.parametrize('n', range(1, 10))
def test_can_delete_in_middle_of_a_binding(n):
    bool_lists = integers(1, 100).flatmap(
        lambda k: lists(booleans(), min_size=k, max_size=k))

    assert find(
        bool_lists, lambda x: x[0] and x[-1] and x.count(False) >= n
    ) == [True] + [False] * n + [True]


hypothesis-python-3.44.1/tests/cover/test_float_nastiness.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
import math

import pytest

import hypothesis.strategies as st
from hypothesis import find, given, assume, settings
from hypothesis.internal.compat import WINDOWS
from hypothesis.internal.floats import float_to_int, int_to_float


@pytest.mark.parametrize(('lower', 'upper'), [
    # Exact values don't matter, but they're large enough so that x + y = inf.
    (9.9792015476736e+291, 1.7976931348623157e+308),
    (-sys.float_info.max, sys.float_info.max),
])
def test_floats_are_in_range(lower, upper):
    @given(st.floats(lower, upper))
    def test_is_in_range(t):
        assert lower <= t <= upper

    test_is_in_range()


def test_can_generate_both_zeros():
    find(
        st.floats(),
        lambda x: assume(x >= 0) and math.copysign(1, x) < 0,
        settings=settings(max_examples=10000)
    )


@pytest.mark.parametrize((u'l', u'r'), [
    (-1.0, 1.0),
    (-0.0, 1.0),
    (-1.0, 0.0),
    (-sys.float_info.min, sys.float_info.min),
])
def test_can_generate_both_zeros_when_in_interval(l, r):
    interval = st.floats(l, r)
    find(
        interval, lambda x: assume(x == 0) and math.copysign(1, x) == 1,
        settings=settings(max_iterations=20000))
    find(
        interval, lambda x: assume(x == 0) and math.copysign(1, x) == -1,
        settings=settings(max_iterations=20000))


@given(st.floats(0.0, 1.0))
def test_does_not_generate_negative_if_right_boundary_is_positive(x):
    assert math.copysign(1, x) == 1


@given(st.floats(-1.0, -0.0))
def test_does_not_generate_positive_if_right_boundary_is_negative(x):
    assert math.copysign(1, x) == -1


@pytest.mark.parametrize((u'l', u'r'), [
    (0.0, 1.0),
    (-1.0, 0.0),
    (-sys.float_info.min, sys.float_info.min),
])
def test_can_generate_interval_endpoints(l, r):
    interval = st.floats(l, r)
    find(interval, lambda x: x == l, settings=settings(max_examples=10000))
    find(interval, lambda x: x == r, settings=settings(max_examples=10000))


def test_half_bounded_generates_endpoint():
    find(st.floats(min_value=-1.0), lambda x: x == -1.0,
         settings=settings(max_examples=10000))
    find(st.floats(max_value=-1.0), lambda x: x == -1.0,
         settings=settings(max_examples=10000))


def test_half_bounded_generates_zero():
    find(st.floats(min_value=-1.0), lambda x: x == 0.0)
    find(st.floats(max_value=1.0), lambda x: x == 0.0)


@pytest.mark.xfail(
    WINDOWS,
    reason=(
        'Seems to be triggering a floating point bug on 2.7 + windows + x64'))
@given(st.floats(max_value=-0.0))
def test_half_bounded_respects_sign_of_upper_bound(x):
    assert math.copysign(1, x) == -1


@given(st.floats(min_value=0.0))
def test_half_bounded_respects_sign_of_lower_bound(x):
    assert math.copysign(1, x) == 1


@given(st.floats(allow_nan=False))
def test_filter_nan(x):
    assert not math.isnan(x)


@given(st.floats(allow_infinity=False))
def test_filter_infinity(x):
    assert not math.isinf(x)


def test_can_guard_against_draws_of_nan():
    """In this test we create a NaN value that naturally "tries" to shrink
    into the first strategy, where it is not permitted.

    This tests a case that is very unlikely to happen in random
    generation: When the unconstrained first branch of generating a
    float just happens to produce a NaN value.

    Here what happens is that we get a NaN from the *second* strategy,
    but this then shrinks into its unconstrained branch. The natural
    thing to happen is then to try to zero the branch parameter of the
    one_of, but that will put an illegal value there, so it's not
    allowed to happen.
    """
    tagged_floats = st.one_of(
        st.tuples(st.just(0), st.floats(allow_nan=False)),
        st.tuples(st.just(1), st.floats(allow_nan=True)),
    )

    tag, f = find(tagged_floats, lambda x: math.isnan(x[1]))
    assert tag == 1


def test_very_narrow_interval():
    upper_bound = -1.0
    lower_bound = int_to_float(float_to_int(upper_bound) + 10)
    assert lower_bound < upper_bound

    @given(st.floats(lower_bound, upper_bound))
    def test(f):
        assert lower_bound <= f <= upper_bound

    test()


hypothesis-python-3.44.1/tests/cover/test_float_utils.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis.internal.floats import count_between_floats


def test_can_handle_straddling_zero():
    assert count_between_floats(-0.0, 0.0) == 2


hypothesis-python-3.44.1/tests/cover/test_given_error_conditions.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time

import pytest

from hypothesis import given, infer, assume, reject, settings
from hypothesis.errors import Timeout, Unsatisfiable, InvalidArgument
from tests.common.utils import validate_deprecation
from hypothesis.strategies import booleans, integers


def test_raises_timeout_on_slow_test():
    with validate_deprecation():
        @given(integers())
        @settings(timeout=0.01)
        def test_is_slow(x):
            time.sleep(0.02)

    with validate_deprecation():
        with pytest.raises(Timeout):
            test_is_slow()


def test_raises_unsatisfiable_if_all_false():
    @given(integers())
    @settings(max_examples=50, perform_health_check=False)
    def test_assume_false(x):
        reject()

    with pytest.raises(Unsatisfiable):
        test_assume_false()


def test_raises_unsatisfiable_if_all_false_in_finite_set():
    @given(booleans())
    def test_assume_false(x):
        reject()

    with pytest.raises(Unsatisfiable):
        test_assume_false()


def test_does_not_raise_unsatisfiable_if_some_false_in_finite_set():
    @given(booleans())
    def test_assume_x(x):
        assume(x)

    test_assume_x()


def test_error_if_has_no_hints():
    @given(a=infer)
    def inner(a):
        pass

    with pytest.raises(InvalidArgument):
        inner()


def test_error_if_infer_is_posarg():
    @given(infer)
    def inner(ex):
        pass

    with pytest.raises(InvalidArgument):
        inner()


def test_given_twice_deprecated():
    @given(booleans())
    @given(integers())
    def inner(a, b):
        pass

    with validate_deprecation():
        inner()


hypothesis-python-3.44.1/tests/cover/test_given_reuse.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import strategies as st
from hypothesis import given

given_booleans = given(st.booleans())


@given_booleans
def test_has_an_arg_named_x(x):
    pass


@given_booleans
def test_has_an_arg_named_y(y):
    pass


given_named_booleans = given(z=st.text())


def test_fail_independently():
    @given_named_booleans
    def test_z1(z):
        assert False

    @given_named_booleans
    def test_z2(z):
        pass

    with pytest.raises(AssertionError):
        test_z1()
    test_z2()


hypothesis-python-3.44.1/tests/cover/test_health_checks.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time

import pytest
from pytest import raises

import hypothesis.strategies as st
from hypothesis import HealthCheck, given, settings
from hypothesis.errors import InvalidArgument, FailedHealthCheck
from hypothesis.control import assume
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.internal.compat import int_from_bytes
from hypothesis.searchstrategy.strategies import SearchStrategy


def test_slow_generation_fails_a_health_check():
    @given(st.integers().map(lambda x: time.sleep(0.2)))
    def test(x):
        pass

    with raises(FailedHealthCheck):
        test()


def test_slow_generation_inline_fails_a_health_check():
    @settings(deadline=None)
    @given(st.data())
    def test(data):
        data.draw(st.integers().map(lambda x: time.sleep(0.2)))

    with raises(FailedHealthCheck):
        test()


def test_default_health_check_can_weaken_specific():
    import random

    @given(st.lists(st.integers(), min_size=1))
    def test(x):
        random.choice(x)

    with settings(perform_health_check=False):
        test()


def test_suppressing_filtering_health_check():
    count = [0]

    def too_soon(x):
        count[0] += 1
        return count[0] >= 200

    @given(st.integers().filter(too_soon))
    def test1(x):
        raise ValueError()

    with raises(FailedHealthCheck):
        test1()

    count[0] = 0

    @settings(suppress_health_check=[
        HealthCheck.filter_too_much, HealthCheck.too_slow])
    @given(st.integers().filter(too_soon))
    def test2(x):
        raise ValueError()

    with raises(ValueError):
        test2()


def test_filtering_everything_fails_a_health_check():
    @given(st.integers().filter(lambda x: False))
    @settings(database=None)
    def test(x):
        pass

    with raises(FailedHealthCheck) as e:
        test()
    assert 'filter' in e.value.args[0]


class fails_regularly(SearchStrategy):
    def do_draw(self, data):
        b = int_from_bytes(data.draw_bytes(2))
        assume(b == 3)
        print('ohai')


@settings(max_shrinks=0)
def test_filtering_most_things_fails_a_health_check():
    @given(fails_regularly())
    @settings(database=None)
    def test(x):
        pass

    with raises(FailedHealthCheck) as e:
        test()
    assert 'filter' in e.value.args[0]


def test_large_data_will_fail_a_health_check():
    @given(st.lists(st.binary(min_size=1024, max_size=1024),
                    average_size=100))
    @settings(database=None, buffer_size=1000)
    def test(x):
        pass

    with raises(FailedHealthCheck) as e:
        test()
    assert 'allowable size' in e.value.args[0]


def test_returning_non_none_is_forbidden():
    @given(st.integers())
    def a(x):
        return 1

    with raises(FailedHealthCheck):
        a()


def test_a_very_slow_test_will_fail_a_health_check():
    @given(st.integers())
    @settings(deadline=None)
    def a(x):
        time.sleep(1000)

    with raises(FailedHealthCheck):
        a()


def test_the_slow_test_health_check_can_be_disabled():
    @given(st.integers())
    @settings(suppress_health_check=[
        HealthCheck.hung_test,
    ], deadline=None)
    def a(x):
        time.sleep(1000)
    a()


def test_the_slow_test_health_only_runs_if_health_checks_are_on():
    @given(st.integers())
    @settings(perform_health_check=False, deadline=None)
    def a(x):
        time.sleep(1000)
    a()


def test_returning_non_none_does_not_fail_if_health_check_disabled():
    @given(st.integers())
    @settings(perform_health_check=False)
    def a(x):
        return 1
    a()


def test_large_base_example_fails_health_check():
    @given(st.binary(min_size=7000, max_size=7000))
    def test(b):
        pass

    with pytest.raises(FailedHealthCheck) as exc:
        test()

    assert exc.value.health_check == HealthCheck.large_base_example


def test_example_that_shrinks_to_overrun_fails_health_check():
    @given(st.binary(min_size=9000, max_size=9000) | st.none())
    def test(b):
        pass

    with pytest.raises(FailedHealthCheck) as exc:
        test()

    assert exc.value.health_check == HealthCheck.large_base_example


@pytest.mark.parametrize(
    'check',
    [HealthCheck.random_module, HealthCheck.exception_in_generation])
@checks_deprecated_behaviour
def test_noop_health_checks(check):
    settings(suppress_health_check=[check])


def test_it_is_an_error_to_suppress_non_iterables():
    with raises(InvalidArgument):
        settings(suppress_health_check=1)


@checks_deprecated_behaviour
def test_it_is_deprecated_to_suppress_non_healthchecks():
    settings(suppress_health_check=[1])


hypothesis-python-3.44.1/tests/cover/test_imports.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis import *
from hypothesis.strategies import *


def test_can_star_import_from_hypothesis():
    find(
        lists(integers()), lambda x: sum(x) > 1,
        settings=settings(max_examples=10000, verbosity=Verbosity.quiet))


hypothesis-python-3.44.1/tests/cover/test_integer_ranges.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import find, given, settings, unlimited
from hypothesis.internal.conjecture.utils import integer_range
from hypothesis.searchstrategy.strategies import SearchStrategy


class interval(SearchStrategy):
    def __init__(self, lower, upper, center=None):
        self.lower = lower
        self.upper = upper
        self.center = center

    def do_draw(self, data):
        return integer_range(
            data, self.lower, self.upper, center=self.center,
        )


@given(
    st.tuples(st.integers(), st.integers(), st.integers()).map(sorted),
    st.random_module(),
)
@settings(max_examples=100, max_shrinks=0, deadline=None)
def test_intervals_shrink_to_center(inter, rnd):
    lower, center, upper = inter
    s = interval(lower, upper, center)
    with settings(database=None, max_shrinks=2000, timeout=unlimited):
        assert find(s, lambda x: True) == center
        if lower < center:
            assert find(s, lambda x: x < center) == center - 1
        if center < upper:
            assert find(s, lambda x: x > center) == center + 1


hypothesis-python-3.44.1/tests/cover/test_interleaving.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis import strategies as st
from hypothesis import find, note, given, settings
from tests.common.utils import checks_deprecated_behaviour


@checks_deprecated_behaviour
def test_can_eval_stream_inside_find():
    @given(st.streaming(st.integers(min_value=0)), st.random_module())
    @settings(
        buffer_size=200, max_shrinks=5, max_examples=10,
        perform_health_check=False)
    def test(stream, rnd):
        x = find(
            st.lists(st.integers(min_value=0), min_size=10),
            lambda t: any(t > s for (t, s) in zip(t, stream)),
            settings=settings(
                database=None, max_shrinks=2000, max_examples=2000)
        )
        note('x: %r' % (x,))
        note('Evalled: %r' % (stream,))
        assert len([1 for i, v in enumerate(x) if stream[i] < v]) == 1

    test()


hypothesis-python-3.44.1/tests/cover/test_internal_helpers.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis.internal.floats import sign


def test_sign_gives_good_type_error():
    x = 'foo'
    with pytest.raises(TypeError) as e:
        sign(x)
    assert repr(x) in e.value.args[0]


hypothesis-python-3.44.1/tests/cover/test_intervalset.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import given, assume, example
from hypothesis.internal.charmap import _subtract_intervals
from hypothesis.internal.intervalsets import IntervalSet


def build_intervals(ls):
    ls.sort()
    result = []
    for u, l in ls:
        v = u + l
        if result:
            a, b = result[-1]
            if u <= b + 1:
                result[-1] = (a, v)
                continue
        result.append((u, v))
    return result


IntervalLists = st.builds(
    build_intervals,
    st.lists(st.tuples(st.integers(0, 200), st.integers(0, 20)))
)

Intervals = st.builds(IntervalSet, IntervalLists)


@given(Intervals)
def test_intervals_are_equivalent_to_their_lists(intervals):
    ls = list(intervals)
    assert len(ls) == len(intervals)
    for i in range(len(ls)):
        assert ls[i] == intervals[i]
    for i in range(1, len(ls) - 1):
        assert ls[-i] == intervals[-i]


@given(Intervals)
def test_intervals_match_indexes(intervals):
    ls = list(intervals)
    for v in ls:
        assert ls.index(v) == intervals.index(v)


@given(Intervals, st.integers())
def test_error_for_index_of_not_present_value(intervals, v):
    assume(v not in intervals)
    with pytest.raises(ValueError):
        intervals.index(v)


def test_validates_index():
    with pytest.raises(IndexError):
        IntervalSet([])[1]

    with pytest.raises(IndexError):
        IntervalSet([[1, 10]])[11]

    with pytest.raises(IndexError):
        IntervalSet([[1, 10]])[-11]


def test_index_above_is_index_if_present():
    assert IntervalSet([[1, 10]]).index_above(1) == 0
    assert IntervalSet([[1, 10]]).index_above(2) == 1


def test_index_above_is_length_if_higher():
    assert IntervalSet([[1, 10]]).index_above(100) == 10


def intervals_to_set(ints):
    return set(IntervalSet(ints))


@example(x=[(0, 1), (3, 3)], y=[(1, 3)])
@example(x=[(0, 1)], y=[(0, 0), (1, 1)])
@example(x=[(0, 1)], y=[(1, 1)])
@given(IntervalLists, IntervalLists)
def test_subtraction_of_intervals(x, y):
    xs = intervals_to_set(x)
    ys = intervals_to_set(y)  # was intervals_to_set(x), which made the assume vacuous
    assume(not xs.isdisjoint(ys))
    z = _subtract_intervals(x, y)
    assert z == tuple(sorted(z))
    for a, b in z:
        assert a <= b
    assert intervals_to_set(z) == intervals_to_set(x) - intervals_to_set(y)


hypothesis-python-3.44.1/tests/cover/test_lambda_formatting.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2016 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis.internal.reflection import get_pretty_function_description


def test_bracket_whitespace_is_stripped():
    assert get_pretty_function_description(
        lambda x: (x + 1)
    ) == 'lambda x: (x + 1)'


def test_can_have_unicode_in_lambda_sources():
    t = lambda x: 'é' not in x
    assert get_pretty_function_description(t) == (
        "lambda x: 'é' not in x"
    )


ordered_pair = (
    lambda right: [].map(
        lambda length: ()))


def test_can_get_descriptions_of_nested_lambdas_with_different_names():
    assert get_pretty_function_description(ordered_pair) == \
        'lambda right: [].map(lambda length: ())'


def test_does_not_error_on_unparsable_source():
    t = [
        lambda x: \
        # This will break ast.parse, but the brackets are needed for the real
        # parser to accept this lambda
        x][0]
    assert get_pretty_function_description(t) == 'lambda x: '


def test_source_of_lambda_is_pretty():
    assert get_pretty_function_description(
        lambda x: True
    ) == 'lambda x: True'


def test_variable_names_are_not_pretty():
    t = lambda x: True
    assert get_pretty_function_description(t) == 'lambda x: True'


def test_does_not_error_on_dynamically_defined_functions():
    x = eval('lambda t: 1')
    get_pretty_function_description(x)


def test_collapses_whitespace_nicely():
    t = (
        lambda x, y: 1
    )
    assert get_pretty_function_description(t) == 'lambda x, y: 1'


def test_is_not_confused_by_tuples():
    p = (lambda x: x > 1, 2)[0]
    assert get_pretty_function_description(p) == 'lambda x: x > 1'


def test_strips_comments_from_the_end():
    t = lambda x: 1  # A lambda comment
    assert get_pretty_function_description(t) == 'lambda x: 1'


def test_does_not_strip_hashes_within_a_string():
    t = lambda x: '#'
    assert get_pretty_function_description(t) == "lambda x: '#'"


def test_can_distinguish_between_two_lambdas_with_different_args():
    a, b = (lambda x: 1, lambda y: 2)
    assert get_pretty_function_description(a) == 'lambda x: 1'
    assert get_pretty_function_description(b) == 'lambda y: 2'


def test_does_not_error_if_it_cannot_distinguish_between_two_lambdas():
    a, b = (lambda x: 1, lambda x: 2)
    assert 'lambda x:' in get_pretty_function_description(a)
    assert 'lambda x:' in get_pretty_function_description(b)


def test_lambda_source_break_after_def_with_brackets():
    f = (lambda n:
         'aaa')
    source = get_pretty_function_description(f)
    assert source == "lambda n: 'aaa'"


def test_lambda_source_break_after_def_with_line_continuation():
    f = lambda n:\
        'aaa'
    source = get_pretty_function_description(f)
    assert source == "lambda n: 'aaa'"


def arg_decorator(*s):
    def accept(f):
        return s
    return accept


@arg_decorator(lambda x: x + 1)
def plus_one():
    pass


@arg_decorator(lambda x: x + 1, lambda y: y * 2)
def two_decorators():
    pass


def test_can_extract_lambda_repr_in_a_decorator():
    assert get_pretty_function_description(plus_one[0]) == 'lambda x: x + 1'


def test_can_extract_two_lambdas_from_a_decorator_if_args_differ():
    a, b = two_decorators
    assert get_pretty_function_description(a) == 'lambda x: x + 1'
    assert get_pretty_function_description(b) == 'lambda y: y * 2'


@arg_decorator(
    lambda x: x + 1
)
def decorator_with_space():
    pass


def test_can_extract_lambda_repr_in_a_decorator_with_spaces():
    assert get_pretty_function_description(
        decorator_with_space[0]) == 'lambda x: x + 1'


@arg_decorator(lambda: ())
def to_brackets():
    pass


def test_can_handle_brackets_in_decorator_argument():
    assert get_pretty_function_description(to_brackets[0]) == 'lambda: ()'


def identity(x):
    return x


@arg_decorator(identity(lambda x: x + 1))
def decorator_with_wrapper():
    pass


def test_can_handle_nested_lambda_in_decorator_argument():
    assert get_pretty_function_description(
        decorator_with_wrapper[0]) == 'lambda x: x + 1'


hypothesis-python-3.44.1/tests/cover/test_limits.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import strategies as st from hypothesis import given, settings def test_max_examples_are_respected(): counter = [0] @given(st.random_module(), st.integers()) @settings(max_examples=100) def test(rnd, i): counter[0] += 1 test() assert counter == [100] hypothesis-python-3.44.1/tests/cover/test_map.py000066400000000000000000000020501321557765100220000ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import strategies as st from hypothesis import given, assume from tests.common.debug import assert_no_examples @given(st.integers().map(lambda x: assume(x % 3 != 0) and x)) def test_can_assume_in_map(x): assert x % 3 != 0 def test_assume_in_just_raises_immediately(): assert_no_examples(st.just(1).map(lambda x: assume(x == 2))) hypothesis-python-3.44.1/tests/cover/test_nesting.py000066400000000000000000000023561321557765100227030ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from pytest import raises import hypothesis.strategies as st from hypothesis import Verbosity, given, settings def test_nesting_1(): @given(st.integers(0, 100)) @settings(max_examples=5, database=None, deadline=None) def test_blah(x): @given(st.integers()) @settings( max_examples=100, max_shrinks=0, database=None, verbosity=Verbosity.quiet) def test_nest(y): if y >= x: raise ValueError() with raises(ValueError): test_nest() test_blah() hypothesis-python-3.44.1/tests/cover/test_nothing.py000066400000000000000000000040551321557765100227000ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
from tests.common.debug import assert_no_examples


def test_resampling():
    x = find(
        st.lists(st.integers()).flatmap(
            lambda x: st.lists(st.sampled_from(x))),
        lambda x: len(x) >= 10 and len(set(x)) == 1)
    assert x == [0] * 10


@given(st.lists(st.nothing()))
def test_list_of_nothing(xs):
    assert xs == []


@given(st.sets(st.nothing()))
def test_set_of_nothing(xs):
    assert xs == set()


def test_validates_min_size():
    with pytest.raises(InvalidArgument):
        st.lists(st.nothing(), min_size=1).validate()


def test_function_composition():
    assert st.nothing().map(lambda x: 'hi').is_empty
    assert st.nothing().filter(lambda x: True).is_empty
    assert st.nothing().flatmap(lambda x: st.integers()).is_empty


def test_tuples_detect_empty_elements():
    assert st.tuples(st.nothing()).is_empty


def test_fixed_dictionaries_detect_empty_values():
    assert st.fixed_dictionaries({'a': st.nothing()}).is_empty


def test_no_examples():
    assert_no_examples(st.nothing())


@pytest.mark.parametrize('s', [
    st.nothing(),
    st.nothing().map(lambda x: x),
    st.nothing().filter(lambda x: True),
    st.nothing().flatmap(lambda x: st.integers())
])
def test_empty(s):
    assert s.is_empty


# ==== hypothesis-python-3.44.1/tests/cover/test_numerics.py ====
from __future__ import division, print_function, absolute_import

import math
import decimal

import pytest

from hypothesis import given, assume, reject
from hypothesis.errors import InvalidArgument
from tests.common.utils import fails
from hypothesis.strategies import data, none, tuples, decimals, integers, \
    fractions


@given(data())
def test_fuzz_fractions_bounds(data):
    denom = data.draw(none() | integers(1, 100), label='denominator')
    fracs = none() | fractions(max_denominator=denom)
    low, high = data.draw(tuples(fracs, fracs), label='low, high')
    if low is not None and high is not None and low > high:
        low, high = high, low
    try:
        val = data.draw(fractions(low, high, denom), label='value')
    except InvalidArgument:
        reject()  # fractions too close for given max_denominator
    if low is not None:
        assert low <= val
    if high is not None:
        assert val <= high
    if denom is not None:
        assert 1 <= val.denominator <= denom


@given(data())
def test_fuzz_decimals_bounds(data):
    places = data.draw(none() | integers(0, 20), label='places')
    finite_decs = decimals(allow_nan=False, allow_infinity=False,
                           places=places) | none()
    low, high = data.draw(tuples(finite_decs, finite_decs),
                          label='low, high')
    if low is not None and high is not None and low > high:
        low, high = high, low
    ctx = decimal.Context(prec=data.draw(integers(1, 100), label='precision'))
    try:
        with decimal.localcontext(ctx):
            strat = decimals(low, high, allow_nan=False,
                             allow_infinity=False, places=places)
            val = data.draw(strat, label='value')
    except InvalidArgument:
        reject()  # decimals too close for given places
    if low is not None:
        assert low <= val
    if high is not None:
        assert val <= high
    if places is not None:
        assert val.as_tuple().exponent == -places


@fails
@given(decimals())
def test_all_decimals_can_be_exact_floats(x):
    assume(x.is_finite())
    assert decimal.Decimal(float(x)) == x


@given(fractions(),
       fractions(), fractions())
def test_fraction_addition_is_well_behaved(x, y, z):
    assert x + y + z == y + x + z


@fails
@given(decimals())
def test_decimals_include_nan(x):
    assert not math.isnan(x)


@fails
@given(decimals())
def test_decimals_include_inf(x):
    assume(not x.is_snan())
    assert not math.isinf(x)


@given(decimals(allow_nan=False))
def test_decimals_can_disallow_nan(x):
    assert not math.isnan(x)


@given(decimals(allow_infinity=False))
def test_decimals_can_disallow_inf(x):
    assume(not x.is_snan())
    assert not math.isinf(x)


@pytest.mark.parametrize('places', range(10))
def test_decimals_have_correct_places(places):
    @given(decimals(0, 10, allow_nan=False, places=places))
    def inner_tst(n):
        assert n.as_tuple().exponent == -places
    inner_tst()


@given(decimals(min_value='0.1', max_value='0.2', allow_nan=False, places=1))
def test_works_with_few_values(dec):
    assert dec in (decimal.Decimal('0.1'), decimal.Decimal('0.2'))


@given(decimals(places=3, allow_nan=False, allow_infinity=False))
def test_issue_725_regression(x):
    pass


@given(decimals(min_value='0.1', max_value='0.3'))
def test_issue_739_regression(x):
    pass


# ==== hypothesis-python-3.44.1/tests/cover/test_one_of.py ====
from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from tests.common.debug import assert_no_examples


def test_one_of_empty():
    e = st.one_of()
    assert e.is_empty
    assert_no_examples(e)


# ==== hypothesis-python-3.44.1/tests/cover/test_permutations.py ====

from __future__ import division, print_function, absolute_import

from hypothesis import find, given
from hypothesis.strategies import permutations


def test_can_find_non_trivial_permutation():
    x = find(
        permutations(list(range(5))), lambda x: x[0] != 0
    )

    assert x == [1, 0, 2, 3, 4]


@given(permutations(list(u'abcd')))
def test_permutation_values_are_permutations(perm):
    assert len(perm) == 4
    assert set(perm) == set(u'abcd')


@given(permutations([]))
def test_empty_permutations_are_empty(xs):
    assert xs == []


# ==== hypothesis-python-3.44.1/tests/cover/test_phases.py ====
from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import Phase, given, example, settings
from hypothesis.errors import InvalidArgument
from hypothesis.database import ExampleDatabase, InMemoryExampleDatabase


@example(11)
@settings(phases=(Phase.explicit,))
@given(st.integers())
def test_only_runs_explicit_examples(i):
    assert i == 11


@example(u"hello world")
@settings(phases=(Phase.reuse, Phase.generate, Phase.shrink))
@given(st.booleans())
def test_does_not_use_explicit_examples(i):
    assert isinstance(i, bool)


@settings(phases=(Phase.reuse, Phase.shrink))
@given(st.booleans())
def test_this_would_fail_if_you_ran_it(b):
    assert False


def test_phases_default_to_all():
    assert settings(phases=None).phases == tuple(Phase)


def test_does_not_reuse_saved_examples_if_reuse_not_in_phases():
    class BadDatabase(ExampleDatabase):
        def save(self, key, value):
            pass

        def delete(self, key, value):
            pass

        def fetch(self, key):
            raise ValueError()

        def close(self):
            pass

    @settings(database=BadDatabase(), phases=(Phase.generate,))
    @given(st.integers())
    def test_usage(i):
        pass

    test_usage()


def test_will_save_when_reuse_not_in_phases():
    database = InMemoryExampleDatabase()

    assert not database.data

    @settings(database=database, phases=(Phase.generate,))
    @given(st.integers())
    def test_usage(i):
        raise ValueError()

    with pytest.raises(ValueError):
        test_usage()

    saved, = [v for k, v in database.data.items() if b'coverage' not in k]
    assert len(saved) == 1


def test_rejects_non_phases():
    with pytest.raises(InvalidArgument):
        settings(phases=['cabbage'])
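# The suites above repeatedly exercise Hypothesis's generate-then-shrink loop
# (find a failing/matching example, then minimise it). As a rough orientation
# for readers, here is a stdlib-only sketch of that idea. ``naive_find`` is a
# hypothetical helper invented purely for illustration; it is NOT part of
# Hypothesis's API and vastly simplifies what the real engine does.

```python
import random


def naive_find(predicate, tries=1000, seed=0):
    """Randomly search for an int satisfying ``predicate``, then shrink it
    greedily toward zero while the predicate keeps holding.

    A toy sketch only: real Hypothesis shrinks a byte-stream representation
    of the example, not the value itself.
    """
    rnd = random.Random(seed)
    for _ in range(tries):
        value = rnd.randrange(-10 ** 6, 10 ** 6)
        if predicate(value):
            break
    else:
        raise ValueError('no example found')
    # Greedy shrink: keep trying strictly smaller-magnitude candidates.
    shrunk = True
    while shrunk:
        shrunk = False
        for candidate in (0, value // 2, value - 1, value + 1):
            if abs(candidate) < abs(value) and predicate(candidate):
                value = candidate
                shrunk = True
                break
    return value


print(naive_find(lambda x: x > 100))  # any found example shrinks down to 101
```

# Like ``hypothesis.find``, the point is that whatever random example the
# search stumbles on first, the caller always sees a minimal one.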
# ==== hypothesis-python-3.44.1/tests/cover/test_pretty.py ====

# coding: utf-8
"""This file originates in the IPython project and is made use of under the
following licensing terms:

The IPython licensing terms
IPython is licensed under the terms of the Modified BSD License (also known
as New or Revised or 3-Clause BSD), as follows:

Copyright (c) 2008-2014, IPython Development Team
Copyright (c) 2001-2007, Fernando Perez
Copyright (c) 2001, Janko Hauser
Copyright (c) 2001, Nathaniel Gray

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

Neither the name of the IPython Development Team nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

"""

from __future__ import division, print_function, absolute_import

import re
from collections import deque, defaultdict

import pytest

from hypothesis.vendor import pretty
from tests.common.utils import capture_out
from hypothesis.internal.compat import PY3, Counter, OrderedDict, \
    a_good_encoding

py2_only = pytest.mark.skipif(PY3, reason='This test only runs on python 2')

if PY3:
    from io import StringIO

    def unicode_to_str(x, encoding=None):
        return x
else:
    from StringIO import StringIO

    def unicode_to_str(x, encoding=None):
        return x.encode(encoding or a_good_encoding())


def assert_equal(x, y):
    assert x == y


def assert_true(x):
    assert x


def assert_in(x, xs):
    assert x in xs


def skip_without(mod):
    try:
        __import__(mod)
        return lambda f: f
    except ImportError:
        return pytest.mark.skipif(True, reason='Missing %s' % (mod,))


assert_raises = pytest.raises


class MyList(object):
    def __init__(self, content):
        self.content = content

    def _repr_pretty_(self, p, cycle):
        if cycle:
            p.text('MyList(...)')
        else:
            with p.group(3, 'MyList(', ')'):
                for (i, child) in enumerate(self.content):
                    if i:
                        p.text(',')
                        p.breakable()
                    else:
                        p.breakable('')
                    p.pretty(child)


class MyDict(dict):
    def _repr_pretty_(self, p, cycle):
        p.text('MyDict(...)')
class MyObj(object):
    def somemethod(self):
        pass


class Dummy1(object):
    def _repr_pretty_(self, p, cycle):
        p.text('Dummy1(...)')


class Dummy2(Dummy1):
    _repr_pretty_ = None


class NoModule(object):
    pass


NoModule.__module__ = None


class Breaking(object):
    def _repr_pretty_(self, p, cycle):
        with p.group(4, 'TG: ', ':'):
            p.text('Breaking(')
            p.break_()
            p.text(')')


class BreakingRepr(object):
    def __repr__(self):
        return 'Breaking(\n)'


class BreakingReprParent(object):
    def _repr_pretty_(self, p, cycle):
        with p.group(4, 'TG: ', ':'):
            p.pretty(BreakingRepr())


class BadRepr(object):
    def __repr__(self):
        return 1 / 0


def test_list():
    assert pretty.pretty([]) == '[]'
    assert pretty.pretty([1]) == '[1]'


def test_dict():
    assert pretty.pretty({}) == '{}'
    assert pretty.pretty({1: 1}) == '{1: 1}'


def test_tuple():
    assert pretty.pretty(()) == '()'
    assert pretty.pretty((1,)) == '(1,)'
    assert pretty.pretty((1, 2)) == '(1, 2)'


class ReprDict(dict):
    def __repr__(self):
        return 'hi'


def test_dict_with_custom_repr():
    assert pretty.pretty(ReprDict()) == 'hi'


class ReprList(list):
    def __repr__(self):
        return 'bye'


class ReprSet(set):
    def __repr__(self):
        return 'cat'


def test_set_with_custom_repr():
    assert pretty.pretty(ReprSet()) == 'cat'


def test_list_with_custom_repr():
    assert pretty.pretty(ReprList()) == 'bye'


def test_indentation():
    """Test correct indentation in groups."""
    count = 40
    gotoutput = pretty.pretty(MyList(range(count)))
    expectedoutput = 'MyList(\n' + ',\n'.join(
        '   %d' % i for i in range(count)) + ')'

    assert_equal(gotoutput, expectedoutput)


def test_dispatch():
    """Test correct dispatching: The _repr_pretty_ method for MyDict must be
    found before the registered printer for dict."""
    gotoutput = pretty.pretty(MyDict())
    expectedoutput = 'MyDict(...)'

    assert_equal(gotoutput, expectedoutput)


def test_callability_checking():
    """Test that the _repr_pretty_ method is tested for callability and
    skipped if not."""
    gotoutput = pretty.pretty(Dummy2())
    expectedoutput = 'Dummy1(...)'
    assert_equal(gotoutput, expectedoutput)


def test_sets():
    """Test that set and frozenset use Python 3 formatting."""
    objects = [set(), frozenset(), set([1]), frozenset([1]), set([1, 2]),
               frozenset([1, 2]), set([-1, -2, -3])]
    expected = ['set()', 'frozenset()', '{1}', 'frozenset({1})', '{1, 2}',
                'frozenset({1, 2})', '{-3, -2, -1}']
    for obj, expected_output in zip(objects, expected):
        got_output = pretty.pretty(obj)
        assert_equal(got_output, expected_output)


def test_unsortable_set():
    xs = set([1, 2, 3, 'foo', 'bar', 'baz', object()])
    p = pretty.pretty(xs)
    for x in xs:
        assert pretty.pretty(x) in p


def test_unsortable_dict():
    xs = dict((k, 1) for k in [1, 2, 3, 'foo', 'bar', 'baz', object()])
    p = pretty.pretty(xs)
    for x in xs:
        assert pretty.pretty(x) in p


@skip_without('xxlimited')
def test_pprint_heap_allocated_type():
    """Test that pprint works for heap allocated types."""
    import xxlimited
    output = pretty.pretty(xxlimited.Null)
    assert_equal(output, 'xxlimited.Null')


def test_pprint_nomod():
    """Test that pprint works for classes with no __module__."""
    output = pretty.pretty(NoModule)
    assert_equal(output, 'NoModule')


def test_pprint_break():
    """Test that p.break_ produces expected output."""
    output = pretty.pretty(Breaking())
    expected = 'TG: Breaking(\n    ):'
    assert_equal(output, expected)


def test_pprint_break_repr():
    """Test that p.break_ is used in repr."""
    output = pretty.pretty(BreakingReprParent())
    expected = 'TG: Breaking(\n    ):'
    assert_equal(output, expected)


def test_bad_repr():
    """Don't catch bad repr errors."""
    with assert_raises(ZeroDivisionError):
        pretty.pretty(BadRepr())


class BadException(Exception):
    def __str__(self):
        return -1


class ReallyBadRepr(object):
    __module__ = 1

    @property
    def __class__(self):
        raise ValueError('I am horrible')

    def __repr__(self):
        raise BadException()


def test_really_bad_repr():
    with assert_raises(BadException):
        pretty.pretty(ReallyBadRepr())


class SA(object):
    pass


class SB(SA):
    pass


try:
    super(SA).__self__

    def test_super_repr():
        output = pretty.pretty(super(SA))
        assert_in('SA', output)

        sb = SB()
        output = pretty.pretty(super(SA, sb))
        assert_in('SA', output)
except AttributeError:
    def test_super_repr():
        pretty.pretty(super(SA))

        sb = SB()
        pretty.pretty(super(SA, sb))


def test_long_list():
    lis = list(range(10000))
    p = pretty.pretty(lis)
    last2 = p.rsplit('\n', 2)[-2:]
    assert_equal(last2, [' 999,', ' ...]'])


def test_long_set():
    s = set(range(10000))
    p = pretty.pretty(s)
    last2 = p.rsplit('\n', 2)[-2:]
    assert_equal(last2, [' 999,', ' ...}'])


def test_long_tuple():
    tup = tuple(range(10000))
    p = pretty.pretty(tup)
    last2 = p.rsplit('\n', 2)[-2:]
    assert_equal(last2, [' 999,', ' ...)'])


def test_long_dict():
    d = dict((n, n) for n in range(10000))
    p = pretty.pretty(d)
    last2 = p.rsplit('\n', 2)[-2:]
    assert_equal(last2, [' 999: 999,', ' ...}'])


def test_unbound_method():
    output = pretty.pretty(MyObj.somemethod)
    assert_in('MyObj.somemethod', output)


class MetaClass(type):
    def __new__(cls, name):
        return type.__new__(cls, name, (object,), {'name': name})

    def __repr__(self):
        return '[CUSTOM REPR FOR CLASS %s]' % self.name


ClassWithMeta = MetaClass('ClassWithMeta')


def test_metaclass_repr():
    output = pretty.pretty(ClassWithMeta)
    assert_equal(output, '[CUSTOM REPR FOR CLASS ClassWithMeta]')


def test_unicode_repr():
    u = u"üniçodé"
    ustr = unicode_to_str(u)

    class C(object):
        def __repr__(self):
            return ustr

    c = C()
    p = pretty.pretty(c)
    assert_equal(p, u)
    p = pretty.pretty([c])
    assert_equal(p, u'[%s]' % u)


def test_basic_class():
    def type_pprint_wrapper(obj, p, cycle):
        if obj is MyObj:
            type_pprint_wrapper.called = True
        return pretty._type_pprint(obj, p, cycle)
    type_pprint_wrapper.called = False

    stream = StringIO()
    printer = pretty.RepresentationPrinter(stream)
    printer.type_pprinters[type] = type_pprint_wrapper
    printer.pretty(MyObj)
    printer.flush()
    output = stream.getvalue()

    assert_equal(output, '%s.MyObj' % __name__)
    assert_true(type_pprint_wrapper.called)


# This is only run on Python 2 because in Python 3 the
# language prevents you
# from setting a non-unicode value for __qualname__ on a metaclass, and it
# doesn't respect the descriptor protocol if you subclass unicode and
# implement __get__.
@py2_only
def test_fallback_to__name__on_type():
    # Test that we correctly repr types that have non-string values for
    # __qualname__ by falling back to __name__
    class Type(object):
        __qualname__ = 5

    # Test repring of the type.
    stream = StringIO()
    printer = pretty.RepresentationPrinter(stream)
    printer.pretty(Type)
    printer.flush()
    output = stream.getvalue()

    # If __qualname__ is malformed, we should fall back to __name__.
    expected = '.'.join([__name__, Type.__name__])
    assert_equal(output, expected)

    # Clear stream buffer.
    stream.buf = ''

    # Test repring of an instance of the type.
    instance = Type()
    printer.pretty(instance)
    printer.flush()
    output = stream.getvalue()

    # Should look like:
    # <tests.cover.test_pretty.Type at 0x7f7658ae07d0>
    # (the `<...>` span of this comment was eaten by markup stripping and
    # has been reconstructed here)
    prefix = '<' + '.'.join([__name__, Type.__name__]) + ' at 0x'
    assert_true(output.startswith(prefix))


@py2_only
def test_fail_gracefully_on_bogus__qualname__and__name__():
    # Test that we correctly repr types that have non-string values for both
    # __qualname__ and __name__
    class Meta(type):
        __name__ = 5

    class Type(object):
        __metaclass__ = Meta
        __qualname__ = 5

    stream = StringIO()
    printer = pretty.RepresentationPrinter(stream)
    printer.pretty(Type)
    printer.flush()
    output = stream.getvalue()

    # If we can't find __name__ or __qualname__ just use a sentinel string
    # (the `<...>` sentinel literals below were eaten by markup stripping;
    # '<unknown type>' matches the fallback used by IPython's pretty printer,
    # from which this file derives).
    expected = '.'.join([__name__, '<unknown type>'])
    assert_equal(output, expected)

    # Clear stream buffer.
    stream.buf = ''

    # Test repring of an instance of the type.
    instance = Type()
    printer.pretty(instance)
    printer.flush()
    output = stream.getvalue()

    # Should look like:
    # <tests.cover.test_pretty.<unknown type> at 0x7f7658ae07d0>
    prefix = '<' + '.'.join([__name__, '<unknown type>']) + ' at 0x'
    assert_true(output.startswith(prefix))


def test_collections_defaultdict():
    # Create defaultdicts with cycles
    a = defaultdict()
    a.default_factory = a
    b = defaultdict(list)
    b['key'] = b

    # Dictionary order cannot be relied on, test against single keys.
    cases = [
        (defaultdict(list), 'defaultdict(list, {})'),
        (defaultdict(list, {'key': '-' * 50}),
         'defaultdict(list,\n'
         "            {'key': '-----------------------------------------"
         "---------'})"),
        (a, 'defaultdict(defaultdict(...), {})'),
        (b, "defaultdict(list, {'key': defaultdict(...)})"),
    ]
    for obj, expected in cases:
        assert_equal(pretty.pretty(obj), expected)


def test_collections_ordereddict():
    # Create OrderedDict with cycle
    a = OrderedDict()
    a['key'] = a
    cases = [
        (OrderedDict(), 'OrderedDict()'),
        (OrderedDict((i, i) for i in range(1000, 1010)),
         'OrderedDict([(1000, 1000),\n'
         '             (1001, 1001),\n'
         '             (1002, 1002),\n'
         '             (1003, 1003),\n'
         '             (1004, 1004),\n'
         '             (1005, 1005),\n'
         '             (1006, 1006),\n'
         '             (1007, 1007),\n'
         '             (1008, 1008),\n'
         '             (1009, 1009)])'),
        (a, "OrderedDict([('key', OrderedDict(...))])"),
    ]
    for obj, expected in cases:
        assert_equal(pretty.pretty(obj), expected)


def test_collections_deque():
    # Create deque with cycle
    a = deque()
    a.append(a)
    cases = [
        (deque(), 'deque([])'),
        (deque(i for i in range(1000, 1020)),
         'deque([1000,\n'
         '       1001,\n'
         '       1002,\n'
         '       1003,\n'
         '       1004,\n'
         '       1005,\n'
         '       1006,\n'
         '       1007,\n'
         '       1008,\n'
         '       1009,\n'
         '       1010,\n'
         '       1011,\n'
         '       1012,\n'
         '       1013,\n'
         '       1014,\n'
         '       1015,\n'
         '       1016,\n'
         '       1017,\n'
         '       1018,\n'
         '       1019])'),
        (a, 'deque([deque(...)])'),
    ]
    for obj, expected in cases:
        assert_equal(pretty.pretty(obj), expected)


def test_collections_counter():
    class MyCounter(Counter):
        pass
    cases = [
        (Counter(), 'Counter()'),
        (Counter(a=1), "Counter({'a': 1})"),
        (MyCounter(a=1), "MyCounter({'a': 1})"),
    ]
    for obj, expected in cases:
        assert_equal(pretty.pretty(obj), expected)


def test_cyclic_list():
    x = []
    x.append(x)
    assert pretty.pretty(x) == '[[...]]'


def test_cyclic_dequeue():
    x = deque()
    x.append(x)
    assert pretty.pretty(x) == 'deque([deque(...)])'


class HashItAnyway(object):
    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return 0

    def __eq__(self, other):
        return isinstance(other, HashItAnyway) and self.value == other.value

    def __ne__(self, other):
        return not self.__eq__(other)

    def _repr_pretty_(self, pretty, cycle):
        pretty.pretty(self.value)


def test_cyclic_counter():
    c = Counter()
    k = HashItAnyway(c)
    c[k] = 1
    assert pretty.pretty(c) == 'Counter({Counter(...): 1})'


def test_cyclic_dict():
    x = {}
    k = HashItAnyway(x)
    x[k] = x
    assert pretty.pretty(x) == '{{...}: {...}}'


def test_cyclic_set():
    x = set()
    x.add(HashItAnyway(x))
    assert pretty.pretty(x) == '{{...}}'


def test_pprint():
    t = {'hi': 1}
    with capture_out() as o:
        pretty.pprint(t)
    assert o.getvalue().strip() == pretty.pretty(t)


class BigList(list):
    def _repr_pretty_(self, printer, cycle):
        if cycle:
            return '[...]'
        else:
            with printer.group(open='[', close=']'):
                with printer.indent(5):
                    for v in self:
                        printer.pretty(v)
                        printer.breakable(',')


def test_print_with_indent():
    pretty.pretty(BigList([1, 2, 3]))


class MyException(Exception):
    pass


def test_exception():
    assert pretty.pretty(ValueError('hi')) == "ValueError('hi')"
    assert pretty.pretty(ValueError('hi', 'there')) == \
        "ValueError('hi', 'there')"
    assert 'test_pretty.' \
        in pretty.pretty(MyException())


def test_re_evals():
    for r in [
        re.compile(r'hi'),
        re.compile(r'b\nc', re.MULTILINE),
        re.compile(br'hi', 0),
        re.compile(u'foo', re.MULTILINE | re.UNICODE),
    ]:
        assert repr(eval(pretty.pretty(r), globals())) == repr(r)


class CustomStuff(object):
    def __init__(self):
        self.hi = 1
        self.bye = 'fish'
        self.spoon = self

    @property
    def oops(self):
        raise AttributeError('Nope')

    def squirrels(self):
        pass


def test_custom():
    assert 'bye' not in pretty.pretty(CustomStuff())
    assert 'bye=' in pretty.pretty(CustomStuff(), verbose=True)
    assert 'squirrels' not in pretty.pretty(CustomStuff(), verbose=True)


def test_print_builtin_function():
    # The expected literal below was eaten by markup stripping and has been
    # reconstructed from repr(abs).
    assert pretty.pretty(abs) == '<built-in function abs>'


def test_pretty_function():
    assert '.' in pretty.pretty(test_pretty_function)


def test_empty_printer():
    printer = pretty.RepresentationPrinter(
        pretty.CUnicodeIO(),
        singleton_pprinters={},
        type_pprinters={
            int: pretty._repr_pprint,
            list: pretty._repr_pprint,
        },
        deferred_pprinters={},
    )
    printer.pretty([1, 2, 3])
    assert printer.output.getvalue() == u'[1, 2, 3]'


def test_breakable_at_group_boundary():
    assert '\n' in pretty.pretty([[], '000000'], max_width=5)


# ==== hypothesis-python-3.44.1/tests/cover/test_provisional_strategies.py ====
from __future__ import division, print_function, absolute_import

from binascii import unhexlify

from hypothesis import given
from hypothesis.provisional import emails, ip4_addr_strings, \
    ip6_addr_strings


@given(emails())
def test_is_valid_email(address):
    local, at_, domain = address.rpartition('@')
    assert at_ == '@'
    assert local
    assert domain


@given(ip4_addr_strings())
def test_is_IP4_addr(address):
    as_num = [int(n) for n in address.split('.')]
    assert len(as_num) == 4
    assert all(0 <= n <= 255 for n in as_num)


@given(ip6_addr_strings())
def test_is_IP6_addr(address):
    # Works for non-normalised addresses produced by this strategy, but not
    # a particularly general test
    assert address == address.upper()
    as_hex = address.split(':')
    assert len(as_hex) == 8
    assert all(len(part) == 4 for part in as_hex)
    raw = unhexlify(address.replace(u':', u'').encode('ascii'))
    assert len(raw) == 16


# ==== hypothesis-python-3.44.1/tests/cover/test_random_module.py ====
from __future__ import division, print_function, absolute_import

import random

import pytest

import hypothesis.strategies as st
from hypothesis import given, reporting
from tests.common.utils import capture_out


def test_can_seed_random():
    with capture_out() as out:
        with reporting.with_reporter(reporting.default):
            with pytest.raises(AssertionError):
                @given(st.random_module())
                def test(r):
                    assert False
                test()
    assert 'random.seed(0)' in out.getvalue()


@given(st.random_module(), st.random_module())
def test_seed_random_twice(r, r2):
    assert repr(r) == repr(r2)


@given(st.random_module())
def test_does_not_fail_health_check_if_randomness_is_used(r):
    random.getrandbits(128)


# ==== hypothesis-python-3.44.1/tests/cover/test_randomization.py ====
from __future__ import division, print_function, absolute_import

import random

from pytest import raises

import hypothesis.strategies as st
from hypothesis import Verbosity, find, given, settings


def test_seeds_off_random():
    s = settings(max_shrinks=0, database=None)
    r = random.getstate()
    x = find(st.integers(), lambda x: True, settings=s)
    random.setstate(r)
    y = find(st.integers(), lambda x: True, settings=s)
    assert x == y


def test_nesting_with_control_passes_health_check():
    @given(st.integers(0, 100), st.random_module())
    @settings(max_examples=5, database=None, deadline=None)
    def test_blah(x, rnd):
        @given(st.integers())
        @settings(
            max_examples=100, max_shrinks=0, database=None,
            verbosity=Verbosity.quiet)
        def test_nest(y):
            assert y < x
        with raises(AssertionError):
            test_nest()
    test_blah()


# ==== hypothesis-python-3.44.1/tests/cover/test_recursive.py ====
from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument


@given(
    st.recursive(
        st.booleans(), lambda x: st.lists(x, average_size=20),
        max_leaves=10))
def test_respects_leaf_limit(xs):
    def flatten(x):
        if isinstance(x, list):
            return sum(map(flatten, x), [])
        else:
            return [x]
    assert len(flatten(xs)) <= 10


def test_can_find_nested():
    x = find(
        st.recursive(st.booleans(), lambda x: st.tuples(x, x)),
        lambda x: isinstance(x, tuple) and isinstance(x[0], tuple)
    )

    assert x == ((False, False), False)


def test_recursive_call_validates_expand_returns_strategies():
    with pytest.raises(InvalidArgument):
        st.recursive(st.booleans(), lambda x: 1).example()


def test_recursive_call_validates_base_is_strategy():
    x = st.recursive(1, lambda x: st.none())
    with pytest.raises(InvalidArgument):
        x.example()


# ==== hypothesis-python-3.44.1/tests/cover/test_reflection.py ====
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
from copy import deepcopy
from functools import partial

import pytest
from mock import Mock, MagicMock, NonCallableMock, NonCallableMagicMock

from tests.common.utils import raises
from hypothesis.internal.compat import PY3, FullArgSpec, getfullargspec
from hypothesis.internal.reflection import is_mock, proxies, arg_string, \
    required_args, unbind_method, eval_directory, function_digest, \
    fully_qualified_name, source_exec_as_module, \
    convert_keyword_arguments, define_function_signature, \
    convert_positional_arguments, get_pretty_function_description


def do_conversion_test(f, args, kwargs):
    result = f(*args, **kwargs)
    cargs, ckwargs = convert_keyword_arguments(f, args, kwargs)
    assert result == f(*cargs, **ckwargs)
    cargs2, ckwargs2 = convert_positional_arguments(f, args, kwargs)
    assert result == f(*cargs2, **ckwargs2)


def test_simple_conversion():
    def foo(a, b, c):
        return (a, b, c)

    assert convert_keyword_arguments(
        foo, (1, 2, 3), {}) == ((1, 2, 3), {})
    assert convert_keyword_arguments(
        foo, (), {'a': 3, 'b': 2, 'c': 1}) == ((3, 2, 1), {})
    do_conversion_test(foo, (1, 0), {'c': 2})
    do_conversion_test(foo, (1,), {'c': 2, 'b': 'foo'})


def test_populates_defaults():
    def bar(x=[], y=1):
        pass

    assert convert_keyword_arguments(bar, (), {}) == (([], 1), {})
    assert convert_keyword_arguments(bar, (), {'y': 42}) == (([], 42), {})
    do_conversion_test(bar, (), {})
    do_conversion_test(bar, (1,), {})


def test_leaves_unknown_kwargs_in_dict():
    def bar(x, **kwargs):
        pass

    assert convert_keyword_arguments(bar, (1,), {'foo': 'hi'}) == (
        (1,), {'foo': 'hi'}
    )
    assert convert_keyword_arguments(bar, (), {'x': 1, 'foo': 'hi'}) == (
        (1,), {'foo': 'hi'}
    )
    do_conversion_test(bar, (1,), {})
    do_conversion_test(bar, (), {'x': 1, 'y': 1})


def test_errors_on_bad_kwargs():
    def bar():
        pass

    with raises(TypeError):
        convert_keyword_arguments(bar, (), {'foo': 1})


def test_passes_varargs_correctly():
    def foo(*args):
        pass

    assert convert_keyword_arguments(foo, (1, 2, 3), {}) == ((1, 2, 3), {})
    do_conversion_test(foo, (1, 2, 3), {})


def test_errors_if_keyword_precedes_positional():
    def foo(x, y):
        pass
    with raises(TypeError):
        convert_keyword_arguments(foo, (1,), {'x': 2})


def test_errors_if_not_enough_args():
    def foo(a, b, c, d=1):
        pass
    with raises(TypeError):
        convert_keyword_arguments(foo, (1, 2), {'d': 4})


def test_errors_on_extra_kwargs():
    def foo(a):
        pass
    with raises(TypeError) as e:
        convert_keyword_arguments(foo, (1,), {'b': 1})
    assert 'keyword' in e.value.args[0]
    with raises(TypeError) as e2:
        convert_keyword_arguments(foo, (1,), {'b': 1, 'c': 2})
    assert 'keyword' in e2.value.args[0]


def test_positional_errors_if_too_many_args():
    def foo(a):
        pass
    with raises(TypeError) as e:
        convert_positional_arguments(foo, (1, 2), {})
    assert '2 given' in e.value.args[0]


def test_positional_errors_if_too_few_args():
    def foo(a, b, c):
        pass
    with raises(TypeError):
        convert_positional_arguments(foo, (1, 2), {})


def test_positional_does_not_error_if_extra_args_are_kwargs():
    def foo(a, b, c):
        pass
    convert_positional_arguments(foo, (1, 2), {'c': 3})


def test_positional_errors_if_given_bad_kwargs():
    def foo(a):
        pass
    with raises(TypeError) as e:
        convert_positional_arguments(foo, (), {'b': 1})
    assert 'unexpected keyword argument' in e.value.args[0]


def test_positional_errors_if_given_duplicate_kwargs():
    def foo(a):
        pass
    with raises(TypeError) as e:
        convert_positional_arguments(foo, (2,), {'a': 1})
    assert 'multiple values' in e.value.args[0]


def test_names_of_functions_are_pretty():
    assert get_pretty_function_description(
        test_names_of_functions_are_pretty
    ) == 'test_names_of_functions_are_pretty'


class Foo(object):
    @classmethod
    def bar(cls):
        pass

    def baz(cls):
        pass

    def __repr__(self):
        return 'SoNotFoo()'


def test_class_names_are_not_included_in_class_method_prettiness():
    assert get_pretty_function_description(Foo.bar) == 'bar'


def test_repr_is_included_in_bound_method_prettiness():
    assert get_pretty_function_description(Foo().baz) == 'SoNotFoo().baz'


def test_class_is_not_included_in_unbound_method():
    assert (
        get_pretty_function_description(Foo.baz) == 'baz'
    )


def test_does_not_error_on_confused_sources():
    def ed(f, *args):
        return f

    x = ed(
        lambda x, y: (
            x * y
        ).conjugate() == x.conjugate() * y.conjugate(),
        complex, complex)

    get_pretty_function_description(x)


def test_digests_are_reasonably_unique():
    assert (
        function_digest(test_simple_conversion) !=
        function_digest(test_does_not_error_on_confused_sources)
    )


def test_digest_returns_the_same_value_for_two_calls():
    assert (
        function_digest(test_simple_conversion) ==
        function_digest(test_simple_conversion)
    )


def test_can_digest_a_built_in_function():
    import math
    assert function_digest(math.isnan) != function_digest(range)


def test_can_digest_a_unicode_lambda():
    function_digest(lambda x: '☃' in str(x))


def test_can_digest_a_function_with_no_name():
    def foo(x, y):
        pass
    function_digest(partial(foo, 1))


def test_arg_string_is_in_order():
    def foo(c, a, b, f, a1):
        pass

    assert arg_string(foo, (1, 2, 3, 4, 5), {}) == 'c=1, a=2, b=3, f=4, a1=5'
    assert arg_string(
        foo, (1, 2),
        {'b': 3, 'f': 4, 'a1': 5}) == 'c=1, a=2, b=3, f=4, a1=5'


def test_varkwargs_are_sorted_and_after_real_kwargs():
    def foo(d, e, f, **kwargs):
        pass
    assert arg_string(
        foo, (), {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
    ) == 'd=4, e=5, f=6, a=1, b=2, c=3'


def test_varargs_come_without_equals():
    def foo(a, *args):
        pass
    assert arg_string(foo, (1, 2, 3, 4), {}) == '2, 3, 4, a=1'


def test_can_mix_varargs_and_varkwargs():
    def foo(*args, **kwargs):
        pass
    assert arg_string(
        foo, (1, 2, 3), {'c': 7}
    ) == '1, 2, 3, c=7'


def test_arg_string_does_not_include_unprovided_defaults():
    def foo(a, b, c=9, d=10):
        pass

    assert arg_string(foo, (1,), {'b': 1, 'd': 11}) == 'a=1, b=1, d=11'


class A(object):
    def f(self):
        pass

    def g(self):
        pass


class B(A):
    pass


class C(A):
    def f(self):
        pass


def test_unbind_gives_parent_class_function():
    assert unbind_method(B().f) == unbind_method(A.f)


def test_unbind_distinguishes_different_functions():
    assert unbind_method(A.f) != unbind_method(A.g)


def test_unbind_distinguishes_overridden_functions():
    assert unbind_method(C().f) != unbind_method(A.f)


def universal_acceptor(*args, **kwargs):
    return args, kwargs


def has_one_arg(hello):
    pass


def has_two_args(hello, world):
    pass


def has_a_default(x, y, z=1):
    pass


def has_varargs(*args):
    pass


def has_kwargs(**kwargs):
    pass


@pytest.mark.parametrize('f', [
    has_one_arg, has_two_args, has_varargs, has_kwargs,
])
def test_copying_preserves_argspec(f):
    af = getfullargspec(f)
    t = define_function_signature('foo', 'docstring', af)(universal_acceptor)
    at = getfullargspec(t)
    assert af.args == at.args
    assert af.varargs == at.varargs
    assert af.varkw == at.varkw
    assert len(af.defaults or ()) == len(at.defaults or ())
    assert af.kwonlyargs == at.kwonlyargs
    assert af.kwonlydefaults == at.kwonlydefaults
    assert af.annotations == at.annotations


def test_name_does_not_clash_with_function_names():
    def f():
        pass

    @define_function_signature('f', 'A docstring for f', getfullargspec(f))
    def g():
        pass
    g()


def test_copying_sets_name():
    f = define_function_signature(
        'hello_world', 'A docstring for hello_world',
        getfullargspec(has_two_args))(universal_acceptor)
    assert f.__name__ == 'hello_world'


def test_copying_sets_docstring():
    f = define_function_signature(
        'foo', 'A docstring for foo',
        getfullargspec(has_two_args))(universal_acceptor)
    assert f.__doc__ == 'A docstring for foo'


def test_uses_defaults():
    f = define_function_signature(
        'foo', 'A docstring for foo',
        getfullargspec(has_a_default))(universal_acceptor)
    assert f(3, 2) == ((3, 2, 1), {})


def test_uses_varargs():
    f = define_function_signature(
        'foo', 'A docstring for foo',
        getfullargspec(has_varargs))(universal_acceptor)
    assert f(1, 2) == ((1, 2), {})


DEFINE_FOO_FUNCTION = """
def foo(x):
    return x
"""


def test_exec_as_module_execs():
    m = source_exec_as_module(DEFINE_FOO_FUNCTION)
    assert m.foo(1) == 1


def test_exec_as_module_caches():
    assert (
        source_exec_as_module(DEFINE_FOO_FUNCTION) is
        source_exec_as_module(DEFINE_FOO_FUNCTION)
    )


def test_exec_leaves_sys_path_unchanged():
    old_path = deepcopy(sys.path)
    source_exec_as_module('hello_world = 42')
    assert sys.path == old_path


def test_define_function_signature_works_with_conflicts():
    def accepts_everything(*args, **kwargs):
        pass

    define_function_signature('hello', 'A docstring for hello', FullArgSpec(
        args=('f',), varargs=None, varkw=None, defaults=None, kwonlyargs=[],
        kwonlydefaults=None, annotations={}
    ))(accepts_everything)(1)

    define_function_signature('hello', 'A docstring for hello', FullArgSpec(
        args=(), varargs='f', varkw=None, defaults=None, kwonlyargs=[],
        kwonlydefaults=None, annotations={}
    ))(accepts_everything)(1)

    define_function_signature('hello', 'A docstring for hello', FullArgSpec(
        args=(), varargs=None, varkw='f', defaults=None, kwonlyargs=[],
        kwonlydefaults=None, annotations={}
    ))(accepts_everything)()

    define_function_signature('hello', 'A docstring for hello', FullArgSpec(
        args=('f', 'f_3'), varargs='f_1', varkw='f_2', defaults=None,
        kwonlyargs=[], kwonlydefaults=None, annotations={}
    ))(accepts_everything)(1, 2)


def test_define_function_signature_validates_arguments():
    with raises(ValueError):
        define_function_signature('hello_world', None, FullArgSpec(
            args=['a b'], varargs=None, varkw=None, defaults=None,
            kwonlyargs=[], kwonlydefaults=None, annotations={}))


def test_define_function_signature_validates_function_name():
    with raises(ValueError):
        define_function_signature('hello world', None, FullArgSpec(
            args=['a', 'b'], varargs=None, varkw=None, defaults=None,
            kwonlyargs=[], kwonlydefaults=None, annotations={}))


class Container(object):
    def funcy(self):
        pass


def test_fully_qualified_name():
    assert fully_qualified_name(test_copying_preserves_argspec) == \
        'tests.cover.test_reflection.test_copying_preserves_argspec'
    assert fully_qualified_name(Container.funcy) == \
        'tests.cover.test_reflection.Container.funcy'
    assert fully_qualified_name(fully_qualified_name) == \
        'hypothesis.internal.reflection.fully_qualified_name'


def test_qualname_of_function_with_none_module_is_name():
    def f():
        pass
    f.__module__ = None
    assert fully_qualified_name(f)[-1] == 'f'


def test_can_proxy_functions_with_mixed_args_and_varargs():
    def foo(a, *args):
        return (a, args)

    @proxies(foo)
    def bar(*args, **kwargs):
        return foo(*args, **kwargs)

    assert bar(1, 2) == (1, (2,))


def test_can_delegate_to_a_function_with_no_positional_args():
    def foo(a, b):
        return (a, b)

    @proxies(foo)
    def bar(**kwargs):
        return foo(**kwargs)

    assert bar(2, 1) == (2, 1)


class Snowman(object):
    def __repr__(self):
        return '☃'


class BittySnowman(object):
    def __repr__(self):
        return '☃'


def test_can_handle_unicode_repr():
    def foo(x):
        pass
    assert arg_string(foo, [Snowman()], {}) == 'x=☃'
    assert arg_string(foo, [], {'x': Snowman()}) == 'x=☃'


class NoRepr(object):
    pass


def test_can_handle_repr_on_type():
    def foo(x):
        pass
    assert arg_string(foo, [Snowman], {}) == 'x=Snowman'
    assert arg_string(foo, [NoRepr], {}) == 'x=NoRepr'


def test_can_handle_repr_of_none():
    def foo(x):
        pass
    assert arg_string(foo, [None], {}) == 'x=None'
    assert arg_string(foo, [], {'x': None}) == 'x=None'


if not PY3:
    def test_can_handle_non_unicode_repr_containing_non_ascii():
        def foo(x):
            pass

        assert arg_string(foo, [BittySnowman()], {}) == 'x=☃'
        assert arg_string(foo, [], {'x': BittySnowman()}) == 'x=☃'


def test_does_not_put_eval_directory_on_path():
    source_exec_as_module("hello = 'world'")
    assert eval_directory() not in sys.path


def test_kwargs_appear_in_arg_string():
    def varargs(*args, **kwargs):
        pass
    assert 'x=1' in arg_string(varargs, (), {'x': 1})


def test_is_mock_with_negative_cases():
    assert not is_mock(None)
    assert not is_mock(1234)
    assert not is_mock(is_mock)
    assert not is_mock(BittySnowman())
    assert not is_mock('foobar')
    assert not is_mock(Mock(spec=BittySnowman))
    assert not is_mock(MagicMock(spec=BittySnowman))


def test_is_mock_with_positive_cases():
    assert is_mock(Mock())
    assert is_mock(MagicMock())
    assert is_mock(NonCallableMock())
    assert is_mock(NonCallableMagicMock())


class Target(object):
    def __init__(self, a, b):
        pass

    def method(self, a, b):
        pass


@pytest.mark.parametrize('target', [Target, Target(1, 2).method])
@pytest.mark.parametrize('args,kwargs,expected', [
    ((), {}, set('ab')),
    ((1,), {}, set('b')),
    ((1, 2), {}, set()),
    ((), dict(a=1), set('b')),
    ((), dict(b=2), set('a')),
    ((), dict(a=1, b=2), set()),
])
def test_required_args(target, args, kwargs, expected):
    # Mostly checking that `self` (and only self) is correctly excluded
    assert required_args(target, args, kwargs) == expected


hypothesis-python-3.44.1/tests/cover/test_regex.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import re
import sys
import unicodedata

import pytest

import hypothesis.strategies as st
from hypothesis import given, assume
from tests.common.debug import find_any, assert_no_examples, \
    assert_all_examples
from hypothesis.internal.compat import PY3, hrange, hunichr
from hypothesis.searchstrategy.regex import SPACE_CHARS, \
    UNICODE_SPACE_CHARS, HAS_WEIRD_WORD_CHARS, UNICODE_WORD_CATEGORIES, \
    UNICODE_DIGIT_CATEGORIES, UNICODE_SPACE_CATEGORIES, \
    UNICODE_WEIRD_NONWORD_CHARS, base_regex_strategy


def is_ascii(s):
    return all(ord(c) < 128 for c in s)


def is_digit(s):
    return all(unicodedata.category(c) in UNICODE_DIGIT_CATEGORIES for c in s)


def is_space(s):
    return all(c in SPACE_CHARS for c in s)


def is_unicode_space(s):
    return all(
        unicodedata.category(c) in UNICODE_SPACE_CATEGORIES or
        c in UNICODE_SPACE_CHARS
        for c in s
    )


def is_word(s):
    return all(
        c == '_' or (
            (not HAS_WEIRD_WORD_CHARS or c not in UNICODE_WEIRD_NONWORD_CHARS)
            and unicodedata.category(c) in UNICODE_WORD_CATEGORIES
        )
        for c in s
    )


def ascii_regex(pattern):
    flags = re.ASCII if PY3 else 0
    return re.compile(pattern, flags)


def unicode_regex(pattern):
    return re.compile(pattern, re.UNICODE)


def _test_matching_pattern(pattern, isvalidchar, is_unicode=False):
    r = unicode_regex(pattern) if is_unicode else ascii_regex(pattern)

    codepoints = hrange(0, sys.maxunicode + 1) \
        if is_unicode else hrange(1, 128)
    for c in [hunichr(x) for x in codepoints]:
        if isvalidchar(c):
            assert r.search(c), (
                '"%s" supposed to match "%s" (%r, category "%s"), '
                "but it doesn't" % (pattern, c, c, unicodedata.category(c))
            )
        else:
            assert not r.search(c), (
                '"%s" supposed not to match "%s" (%r, category "%s"), '
                'but it does' % (pattern, c, c, unicodedata.category(c))
            )


@pytest.mark.parametrize('category,predicate', [
    (r'\w', is_word), (r'\d', is_digit), (r'\s', None)])
@pytest.mark.parametrize('invert', [False, True])
@pytest.mark.parametrize('is_unicode', [False, True])
def test_matching(category, predicate, invert, is_unicode):
    if predicate is None:
        # Special behaviour due to \x1c, INFORMATION SEPARATOR FOUR
        predicate = is_unicode_space if is_unicode else is_space
    pred = predicate
    if invert:
        category = category.swapcase()

        def pred(s):
            return not predicate(s)

    _test_matching_pattern(category, pred, is_unicode)


@pytest.mark.parametrize('pattern', [
    u'.',  # anything
    u'a', u'abc', u'[a][b][c]', u'[^a][^b][^c]',  # literals
    u'[a-z0-9_]', u'[^a-z0-9_]',  # range and negative range
    u'ab?', u'ab*', u'ab+',  # quantifiers
    u'ab{5}', u'ab{5,10}', u'ab{,10}', u'ab{5,}',  # repeaters
    u'ab|cd|ef',  # branch
    u'(foo)+', u'([\'"])[a-z]+\\1',
    u'(?:[a-z])([\'"])[a-z]+\\1',
    u'(?P<foo>[\'"])[a-z]+(?P=foo)',  # groups
    u'^abc',  # beginning
    u'\\d', u'[\\d]', u'[^\\D]', u'\\w', u'[\\w]', u'[^\\W]',
    u'\\s', u'[\\s]', u'[^\\S]',  # categories
])
@pytest.mark.parametrize('encode', [False, True])
def test_can_generate(pattern, encode):
    if encode:
        pattern = pattern.encode('ascii')
    assert_all_examples(st.from_regex(pattern), re.compile(pattern).search)


@pytest.mark.parametrize('pattern', [
    re.compile(u'a', re.IGNORECASE),
    u'(?i)a',
    re.compile(u'[ab]', re.IGNORECASE),
    u'(?i)[ab]',
])
def test_literals_with_ignorecase(pattern):
    strategy = st.from_regex(pattern)

    find_any(strategy, lambda s: s == u'a')
    find_any(strategy, lambda s: s == u'A')


@pytest.mark.parametrize('pattern', [
    re.compile(u'\\A[^a][^b]\\Z', re.IGNORECASE),
    u'(?i)\\A[^a][^b]\\Z'
])
def test_not_literal_with_ignorecase(pattern):
    assert_all_examples(
        st.from_regex(pattern),
        lambda s: s[0] not in (u'a', u'A') and s[1] not in (u'b', u'B')
    )


def test_any_doesnt_generate_newline():
    assert_all_examples(st.from_regex(u'.'), lambda s: s != u'\n')


@pytest.mark.parametrize('pattern', [re.compile(u'.', re.DOTALL), u'(?s).'])
def test_any_with_dotall_generate_newline(pattern):
    find_any(st.from_regex(pattern), lambda s: s == u'\n')


@pytest.mark.parametrize('pattern', [re.compile(b'.', re.DOTALL), b'(?s).'])
def test_any_with_dotall_generate_newline_binary(pattern):
    find_any(st.from_regex(pattern), lambda s: s == b'\n')


@pytest.mark.parametrize('pattern', [
    u'\\d', u'[\\d]', u'[^\\D]',
    u'\\w', u'[\\w]', u'[^\\W]',
    u'\\s', u'[\\s]', u'[^\\S]',
])
@pytest.mark.parametrize('is_unicode', [False, True])
@pytest.mark.parametrize('invert', [False, True])
def test_groups(pattern, is_unicode, invert):
    if u'd' in pattern.lower():
        group_pred = is_digit
    elif u'w' in pattern.lower():
        group_pred = is_word
    else:
        # Special behaviour due to \x1c, INFORMATION SEPARATOR FOUR
        group_pred = is_unicode_space if is_unicode else is_space

    if invert:
        pattern = pattern.swapcase()
        _p = group_pred

        def group_pred(s):
            return not _p(s)

    pattern = u'^%s\\Z' % (pattern,)

    compiler = unicode_regex if is_unicode else ascii_regex
    strategy = st.from_regex(compiler(pattern))

    find_any(strategy.filter(group_pred), is_ascii)
    if is_unicode:
        find_any(strategy, lambda s: group_pred(s) and not is_ascii(s))

    assert_all_examples(strategy, group_pred)


def test_caret_in_the_middle_does_not_generate_anything():
    r = re.compile(u'a^b')

    assert_no_examples(st.from_regex(r))


def test_end_with_terminator_does_not_pad():
    assert_all_examples(st.from_regex(u'abc\\Z'), lambda x: x[-3:] == u"abc")


def test_end():
    strategy = st.from_regex(u'abc$')

    find_any(strategy, lambda s: s == u'abc')
    find_any(strategy, lambda s: s == u'abc\n')


def test_groupref_exists():
    assert_all_examples(
        st.from_regex(u'^(<)?a(?(1)>)$'),
        lambda s: s in (u'a', u'a\n', u'<a>', u'<a>\n')
    )
    assert_all_examples(
        st.from_regex(u'^(a)?(?(1)b|c)$'),
        lambda s: s in (u'ab', u'ab\n', u'c', u'c\n')
    )


def test_impossible_negative_lookahead():
    assert_no_examples(st.from_regex(u'(?!foo)foo'))


@given(st.from_regex(u"(\\Afoo\\Z)"))
def test_can_handle_boundaries_nested(s):
    assert s == u"foo"


def test_groupref_not_shared_between_regex():
    # If group references are (incorrectly!) shared between regex, this would
    # fail as there would only be one reference.
    st.tuples(st.from_regex('(a)\\1'), st.from_regex('(b)\\1')).example()


@given(st.data())
def test_group_ref_is_not_shared_between_identical_regex(data):
    pattern = re.compile(u"^(.+)\\1\\Z", re.UNICODE)
    x = data.draw(base_regex_strategy(pattern))
    y = data.draw(base_regex_strategy(pattern))
    assume(x != y)
    assert pattern.match(x).end() == len(x)
    assert pattern.match(y).end() == len(y)


@given(st.data())
def test_does_not_leak_groups(data):
    a = data.draw(base_regex_strategy(re.compile(u"^(a)\\Z")))
    assert a == 'a'
    b = data.draw(base_regex_strategy(re.compile(u"^(?(1)a|b)(.)\\Z")))
    assert b[0] == 'b'


def test_positive_lookbehind():
    find_any(st.from_regex(u'.*(?<=ab)c'), lambda s: s.endswith(u'abc'))


def test_positive_lookahead():
    st.from_regex(u'a(?=bc).*').filter(
        lambda s: s.startswith(u'abc')).example()


def test_negative_lookbehind():
    # no efficient support
    strategy = st.from_regex(u'[abc]*(? 10 ** 6:
        failing_example[0] = i
        assert i not in failing_example

    with capture_out() as o:
        with pytest.raises(AssertionError):
            test()
    assert '@reproduce_failure' in o.getvalue()

    exp = re.compile(r'reproduce_failure\(([^)]+)\)', re.MULTILINE)
    extract = exp.search(o.getvalue())
    reproduction = eval(extract.group(0))
    test = reproduction(test)

    with pytest.raises(AssertionError):
        test()


def test_does_not_print_reproduction_for_simple_examples_by_default():
    @given(st.integers())
    def test(i):
        assert False

    with capture_out() as o:
        with pytest.raises(AssertionError):
            test()
    assert '@reproduce_failure' not in o.getvalue()


def test_does_print_reproduction_for_simple_data_examples_by_default():
    @given(st.data())
    def test(data):
        data.draw(st.integers())
        assert False

    with capture_out() as o:
        with pytest.raises(AssertionError):
            test()
    assert '@reproduce_failure' in o.getvalue()


def test_does_not_print_reproduction_for_large_data_examples_by_default():
    @settings(max_shrinks=0)
    @given(st.data())
    def test(data):
        b = data.draw(st.binary(min_size=1000, max_size=1000))
        if len(zlib.compress(b)) > 1000:
            raise ValueError()

    with capture_out() as o:
        with pytest.raises(ValueError):
            test()
    assert '@reproduce_failure' not in o.getvalue()


class Foo(object):
    def __repr__(self):
        return 'not a valid python expression'


def test_does_print_reproduction_given_an_invalid_repr():
    @given(st.integers().map(lambda x: Foo()))
    def test(i):
        raise ValueError()

    with capture_out() as o:
        with pytest.raises(ValueError):
            test()
    assert '@reproduce_failure' in o.getvalue()


def test_does_not_print_reproduction_if_told_not_to():
    @settings(print_blob=PrintSettings.NEVER)
    @given(st.integers().map(lambda x: Foo()))
    def test(i):
        raise ValueError()

    with capture_out() as o:
        with pytest.raises(ValueError):
            test()
    assert '@reproduce_failure' not in o.getvalue()


def test_raises_invalid_if_wrong_version():
    b = b'hello world'
    n = len(b)

    @reproduce_failure('1.0.0', encode_failure(b))
    @given(st.binary(min_size=n, max_size=n))
    def test(x):
        pass

    with pytest.raises(InvalidArgument):
        test()


hypothesis-python-3.44.1/tests/cover/test_reusable_values.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import given, reject, example
from hypothesis.errors import InvalidArgument

base_reusable_strategies = (
    st.text(), st.binary(), st.dates(), st.times(), st.timedeltas(),
    st.booleans(), st.complex_numbers(),
    st.floats(), st.floats(-1.0, 1.0),
    st.integers(), st.integers(1, 10), st.integers(1),
)


@st.deferred
def reusable():
    return st.one_of(
        st.sampled_from(base_reusable_strategies),
        st.builds(
            st.floats,
            min_value=st.none() | st.floats(),
            max_value=st.none() | st.floats(),
            allow_infinity=st.booleans(),
            allow_nan=st.booleans()
        ),
        st.builds(st.just, st.lists(max_size=0)),
        st.builds(st.sampled_from, st.lists(st.lists(max_size=0))),
        st.lists(reusable).map(st.one_of),
        st.lists(reusable).map(lambda ls: st.tuples(*ls)),
    )


assert not reusable.is_empty


@example(st.integers(min_value=1))
@given(reusable)
def test_reusable_strategies_are_all_reusable(s):
    try:
        s.validate()
    except InvalidArgument:
        reject()

    assert s.has_reusable_values


for s in base_reusable_strategies:
    test_reusable_strategies_are_all_reusable = example(s)(
        test_reusable_strategies_are_all_reusable
    )
    test_reusable_strategies_are_all_reusable = example(st.tuples(s))(
        test_reusable_strategies_are_all_reusable
    )


def test_composing_breaks_reusability():
    s = st.integers()
    assert s.has_reusable_values
    assert not s.filter(lambda x: True).has_reusable_values
    assert not s.map(lambda x: x).has_reusable_values
    assert not s.flatmap(lambda x: st.just(x)).has_reusable_values


@pytest.mark.parametrize('strat', [
    st.lists(st.booleans()),
    st.sets(st.booleans()),
    st.dictionaries(st.booleans(), st.booleans()),
])
def test_mutable_collections_do_not_have_reusable_values(strat):
    assert not strat.has_reusable_values


def test_recursion_does_not_break_reusability():
    x = st.deferred(lambda: st.none() | st.tuples(x))
    assert x.has_reusable_values
hypothesis-python-3.44.1/tests/cover/test_runner_strategy.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from unittest import TestCase

import pytest

from hypothesis import strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
from hypothesis.stateful import GenericStateMachine


def test_cannot_use_without_a_runner():
    @given(st.runner())
    def f(x):
        pass

    with pytest.raises(InvalidArgument):
        f()


def test_cannot_use_in_find_without_default():
    with pytest.raises(InvalidArgument):
        find(st.runner(), lambda x: True)


def test_is_default_in_find():
    t = object()
    assert find(st.runner(t), lambda x: True) == t


@given(st.runner(1))
def test_is_default_without_self(runner):
    assert runner == 1


class TestStuff(TestCase):
    @given(st.runner())
    def test_runner_is_self(self, runner):
        assert runner is self

    @given(st.runner(default=3))
    def test_runner_is_self_even_with_default(self, runner):
        assert runner is self


class RunnerStateMachine(GenericStateMachine):
    def steps(self):
        return st.runner()

    def execute_step(self, step):
        assert self is step


TestState = RunnerStateMachine.TestCase


hypothesis-python-3.44.1/tests/cover/test_sampled_from.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import enum
import collections

from hypothesis import given, settings
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.strategies import sampled_from

an_enum = enum.Enum('A', 'a b c')

an_ordereddict = collections.OrderedDict([('a', 1), ('b', 2), ('c', 3)])


@given(sampled_from((1, 2)))
@settings(min_satisfying_examples=10)
def test_can_handle_sampling_from_fewer_than_min_satisfying(v):
    pass


@checks_deprecated_behaviour
def test_can_sample_sets_while_deprecated():
    assert sampled_from(set('abc')).example() in 'abc'


def test_can_sample_sequence_without_warning():
    sampled_from([1, 2, 3]).example()


def test_can_sample_ordereddict_without_warning():
    sampled_from(an_ordereddict).example()


@given(sampled_from(an_enum))
def test_can_sample_enums(member):
    assert isinstance(member, an_enum)


hypothesis-python-3.44.1/tests/cover/test_searchstrategy.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import functools
from collections import namedtuple

import pytest

from hypothesis.types import RandomWithSeed
from tests.common.debug import assert_no_examples
from hypothesis.strategies import just, tuples, randoms, booleans, integers
from hypothesis.internal.compat import text_type
from hypothesis.searchstrategy.strategies import one_of_strategies


def test_or_errors_when_given_non_strategy():
    bools = tuples(booleans())
    with pytest.raises(ValueError):
        bools | u'foo'


def test_joining_zero_strategies_fails():
    with pytest.raises(ValueError):
        one_of_strategies(())


SomeNamedTuple = namedtuple(u'SomeNamedTuple', (u'a', u'b'))


def last(xs):
    t = None
    for x in xs:
        t = x
    return t


def test_random_repr_has_seed():
    rnd = randoms().example()
    seed = rnd.seed
    assert text_type(seed) in repr(rnd)


def test_random_only_produces_special_random():
    st = randoms()
    assert isinstance(st.example(), RandomWithSeed)


def test_just_strategy_uses_repr():
    class WeirdRepr(object):
        def __repr__(self):
            return u'ABCDEFG'

    assert repr(
        just(WeirdRepr())
    ) == u'just(%r)' % (WeirdRepr(),)


def test_can_map():
    s = integers().map(pack=lambda t: u'foo')
    assert s.example() == u'foo'


def test_example_raises_unsatisfiable_when_too_filtered():
    assert_no_examples(integers().filter(lambda x: False))


def nameless_const(x):
    def f(u, v):
        return u
    return functools.partial(f, x)


def test_can_map_nameless():
    f = nameless_const(2)
    assert repr(f) in repr(integers().map(f))


def test_can_flatmap_nameless():
    f = nameless_const(just(3))
    assert repr(f) in repr(integers().flatmap(f))
hypothesis-python-3.44.1/tests/cover/test_seed_printing.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time

import pytest

import hypothesis.core as core
import hypothesis.strategies as st
from hypothesis import given, assume, settings
from hypothesis.errors import FailedHealthCheck
from tests.common.utils import all_values, capture_out
from hypothesis.database import InMemoryExampleDatabase
from hypothesis.internal.compat import hrange


@pytest.mark.parametrize('in_pytest', [False, True])
@pytest.mark.parametrize('fail_healthcheck', [False, True])
def test_prints_seed_only_on_healthcheck(
    monkeypatch, in_pytest, fail_healthcheck
):
    monkeypatch.setattr(core, 'running_under_pytest', in_pytest)

    strategy = st.integers()
    if fail_healthcheck:
        def slow_map(i):
            time.sleep(10)
            return i
        strategy = strategy.map(slow_map)
        expected_exc = FailedHealthCheck
    else:
        expected_exc = AssertionError

    @settings(database=None)
    @given(strategy)
    def test(i):
        assert fail_healthcheck

    with capture_out() as o:
        with pytest.raises(expected_exc):
            test()

    output = o.getvalue()

    seed = test._hypothesis_internal_use_generated_seed
    assert seed is not None
    if fail_healthcheck:
        assert '@seed(%d)' % (seed,) in output
        contains_pytest_instruction = (
            '--hypothesis-seed=%d' % (seed,)) in output
        assert contains_pytest_instruction == in_pytest
    else:
        assert '@seed' not in output


def test_uses_global_force(monkeypatch):
    monkeypatch.setattr(core, 'global_force_seed', 42)

    @given(st.integers())
    def test(i):
        raise ValueError()

    output = []

    for _ in hrange(2):
        with capture_out() as o:
            with pytest.raises(ValueError):
                test()
        output.append(o.getvalue())

    assert output[0] == output[1]
    assert '@seed' not in output[0]


def test_does_print_on_reuse_from_database():
    passes_healthcheck = False

    database = InMemoryExampleDatabase()

    @settings(database=database)
    @given(st.integers())
    def test(i):
        assume(passes_healthcheck)
        raise ValueError()

    with capture_out() as o:
        with pytest.raises(FailedHealthCheck):
            test()

    assert '@seed' in o.getvalue()

    passes_healthcheck = True

    with capture_out() as o:
        with pytest.raises(ValueError):
            test()

    assert all_values(database)
    assert '@seed' not in o.getvalue()

    passes_healthcheck = False

    with capture_out() as o:
        with pytest.raises(FailedHealthCheck):
            test()

    assert '@seed' in o.getvalue()


hypothesis-python-3.44.1/tests/cover/test_sets.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import find, given, settings
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import sets, lists, floats, randoms, integers


def test_unique_lists_error_on_too_large_average_size():
    with pytest.raises(InvalidArgument):
        lists(integers(), unique=True, average_size=10, max_size=5).example()


@given(randoms())
@settings(max_examples=5, deadline=None)
def test_can_draw_sets_of_hard_to_find_elements(rnd):
    rarebool = floats(0, 1).map(lambda x: x <= 0.01)
    find(
        sets(rarebool, min_size=2), lambda x: True,
        random=rnd, settings=settings(database=None))


def test_sets_of_small_average_size():
    assert len(sets(integers(), average_size=1.0).example()) <= 10


@given(sets(max_size=0))
def test_empty_sets(x):
    assert x == set()


@given(sets(integers(), max_size=2))
def test_bounded_size_sets(x):
    assert len(x) <= 2


hypothesis-python-3.44.1/tests/cover/test_settings.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
from tempfile import mkdtemp

import pytest

import hypothesis.strategies as st
from hypothesis import given, unlimited
from hypothesis.errors import InvalidState, InvalidArgument, \
    HypothesisDeprecationWarning
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.database import ExampleDatabase, \
    DirectoryBasedExampleDatabase
from hypothesis._settings import Verbosity, settings, default_variable, \
    note_deprecation


def test_has_docstrings():
    assert settings.verbosity.__doc__


original_default = settings.get_profile('default').max_examples


def setup_function(fn):
    settings.load_profile('default')
    settings.register_profile('test_settings', settings())
    settings.load_profile('test_settings')


def test_cannot_set_non_settings():
    s = settings()
    with pytest.raises(AttributeError):
        s.databas_file = u'some_file'


def test_settings_uses_defaults():
    s = settings()
    assert s.max_examples == settings.default.max_examples


def test_raises_attribute_error():
    with pytest.raises(AttributeError):
        settings().kittens


def test_respects_none_database():
    assert settings(database=None).database is None


def test_settings_can_be_used_as_context_manager_to_change_defaults():
    with settings(max_examples=12):
        assert settings.default.max_examples == 12
    assert settings.default.max_examples == original_default


def test_can_repeatedly_push_the_same_thing():
    s = settings(max_examples=12)
    t = settings(max_examples=17)
    assert settings().max_examples == original_default
    with s:
        assert settings().max_examples == 12
        with t:
            assert settings().max_examples == 17
            with s:
                assert settings().max_examples == 12
                with t:
                    assert settings().max_examples == 17
                assert settings().max_examples == 12
            assert settings().max_examples == 17
        assert settings().max_examples == 12
    assert settings().max_examples == original_default


def test_cannot_create_settings_with_invalid_options():
    with pytest.raises(InvalidArgument):
        settings(a_setting_with_limited_options=u'spoon')


def test_can_set_verbosity():
    settings(verbosity=Verbosity.quiet)
    settings(verbosity=Verbosity.normal)
    settings(verbosity=Verbosity.verbose)


def test_can_not_set_verbosity_to_non_verbosity():
    with pytest.raises(InvalidArgument):
        settings(verbosity='kittens')


@pytest.mark.parametrize('db', [None, ExampleDatabase()])
def test_inherits_an_empty_database(db):
    assert settings.default.database is not None
    s = settings(database=db)
    assert s.database is db
    with s:
        t = settings()
    assert t.database is db


@pytest.mark.parametrize('db', [None, ExampleDatabase()])
def test_can_assign_database(db):
    x = settings(database=db)
    assert x.database is db


def test_will_reload_profile_when_default_is_absent():
    original = settings.default
    default_variable.value = None
    assert settings.default is original


def test_load_profile():
    settings.load_profile('default')
    assert settings.default.max_examples == 100
    assert settings.default.max_shrinks == 500
    assert settings.default.min_satisfying_examples == 5

    settings.register_profile(
        'test',
        settings(
            max_examples=10,
            max_shrinks=5
        )
    )
    settings.load_profile('test')
    assert settings.default.max_examples == 10
    assert settings.default.max_shrinks == 5
    assert settings.default.min_satisfying_examples == 5

    settings.load_profile('default')
    assert settings.default.max_examples == 100
    assert settings.default.max_shrinks == 500
    assert settings.default.min_satisfying_examples == 5


def test_loading_profile_keeps_expected_behaviour():
    settings.register_profile('ci', settings(max_examples=10000))
    settings.load_profile('ci')
    assert settings().max_examples == 10000
    with settings(max_examples=5):
        assert settings().max_examples == 5
    assert settings().max_examples == 10000


def test_load_non_existent_profile():
    with pytest.raises(InvalidArgument):
        settings.get_profile('nonsense')


@pytest.mark.skipif(
    os.getenv('HYPOTHESIS_PROFILE') not in (None, 'default'),
    reason='Defaults have been overridden')
def test_runs_tests_with_defaults_from_conftest():
    assert settings.default.timeout == -1


def test_cannot_delete_a_setting():
    x = settings()
    with pytest.raises(AttributeError):
        del x.max_examples
    x.max_examples

    x = settings()
    with pytest.raises(AttributeError):
        del x.foo


def test_cannot_set_strict():
    with pytest.raises(HypothesisDeprecationWarning):
        settings(strict=True)


@checks_deprecated_behaviour
def test_set_deprecated_settings():
    assert settings(timeout=3).timeout == 3


def test_setting_to_future_value_gives_future_value_and_no_error():
    assert settings(timeout=unlimited).timeout == -1


def test_cannot_set_settings():
    x = settings()
    with pytest.raises(AttributeError):
        x.max_examples = 'foo'
    with pytest.raises(AttributeError):
        x.database = 'foo'
    assert x.max_examples != 'foo'
    assert x.database != 'foo'


def test_can_have_none_database():
    assert settings(database=None).database is None


def test_can_have_none_database_file():
    assert settings(database_file=None).database is None


def test_can_override_database_file():
    f = mkdtemp()
    x = settings(database_file=f)
    assert isinstance(x.database, DirectoryBasedExampleDatabase)
    assert x.database.path == f


def test_cannot_define_settings_once_locked():
    with pytest.raises(InvalidState):
        settings.define_setting('hi', 'there', 4)


def test_cannot_assign_default():
    with pytest.raises(AttributeError):
        settings.default = settings(max_examples=3)
    assert settings().max_examples != 3


def test_does_not_warn_if_quiet():
    with pytest.warns(None) as rec:
        note_deprecation('This is bad', settings(verbosity=Verbosity.quiet))
    assert len(rec) == 0


@settings(max_examples=7)
@given(st.builds(lambda: settings.default))
def test_settings_in_strategies_are_from_test_scope(s):
    assert s.max_examples == 7


hypothesis-python-3.44.1/tests/cover/test_setup_teardown.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import given, assume
from hypothesis.strategies import text, integers


class HasSetup(object):

    def setup_example(self):
        self.setups = getattr(self, u'setups', 0)
        self.setups += 1


class HasTeardown(object):

    def teardown_example(self, ex):
        self.teardowns = getattr(self, u'teardowns', 0)
        self.teardowns += 1


class SomeGivens(object):

    @given(integers())
    def give_me_an_int(self, x):
        pass

    @given(text())
    def give_me_a_string(myself, x):
        pass

    @given(integers())
    def give_me_a_positive_int(self, x):
        assert x >= 0

    @given(integers().map(lambda x: x.nope))
    def fail_in_reify(self, x):
        pass

    @given(integers())
    def assume_some_stuff(self, x):
        assume(x > 0)

    @given(integers().filter(lambda x: x > 0))
    def assume_in_reify(self, x):
        pass


class HasSetupAndTeardown(HasSetup, HasTeardown, SomeGivens):
    pass


def test_calls_setup_and_teardown_on_self_as_first_argument():
    x = HasSetupAndTeardown()
    x.give_me_an_int()
    x.give_me_a_string()
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_calls_setup_and_teardown_on_self_unbound():
    x = HasSetupAndTeardown()
    HasSetupAndTeardown.give_me_an_int(x)
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_calls_setup_and_teardown_on_failure():
    x = HasSetupAndTeardown()
    with pytest.raises(AssertionError):
        x.give_me_a_positive_int()
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_still_tears_down_on_error_in_generation():
    x = HasSetupAndTeardown()
    with pytest.raises(AttributeError):
        x.fail_in_reify()
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_still_tears_down_on_failed_assume():
    x = HasSetupAndTeardown()
    x.assume_some_stuff()
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_still_tears_down_on_failed_assume_in_reify():
    x = HasSetupAndTeardown()
    x.assume_in_reify()
    assert x.setups > 0
    assert x.teardowns == x.setups


def test_sets_up_without_teardown():
    class Foo(HasSetup, SomeGivens):
        pass

    x = Foo()
    x.give_me_an_int()
    assert x.setups > 0
    assert not hasattr(x, u'teardowns')


def test_tears_down_without_setup():
    class Foo(HasTeardown, SomeGivens):
        pass

    x = Foo()
    x.give_me_an_int()
    assert x.teardowns > 0
    assert not hasattr(x, u'setups')


hypothesis-python-3.44.1/tests/cover/test_sharing.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import find, given

x = st.shared(st.integers())


@given(x, x)
def test_sharing_is_by_instance_by_default(a, b):
    assert a == b


@given(
    st.shared(st.integers(), key='hi'), st.shared(st.integers(), key='hi'))
def test_different_instances_with_the_same_key_are_shared(a, b):
    assert a == b


def test_different_instances_are_not_shared():
    find(
        st.tuples(st.shared(st.integers()), st.shared(st.integers())),
        lambda x: x[0] != x[1]
    )


def test_different_keys_are_not_shared():
    find(
        st.tuples(
            st.shared(st.integers(), key=1),
            st.shared(st.integers(), key=2)),
        lambda x: x[0] != x[1]
    )


def test_keys_and_default_are_not_shared():
    find(
        st.tuples(
            st.shared(st.integers(), key=1),
            st.shared(st.integers())),
        lambda x: x[0] != x[1]
    )


def test_can_simplify_shared_lists():
    xs = find(
        st.lists(st.shared(st.integers())),
        lambda x: len(x) >= 10 and x[0] != 0
    )
    assert xs == [1] * 10


def test_simplify_shared_linked_to_size():
    xs = find(
        st.lists(st.shared(st.integers())),
        lambda t: sum(t) >= 1000
    )
    assert sum(xs[:-1]) < 1000
    assert (xs[0] - 1) * len(xs) < 1000


hypothesis-python-3.44.1/tests/cover/test_shrinking_limits.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import find, settings


def test_max_shrinks():
    seen = set()
    zero = b'\0' * 100

    def tracktrue(s):
        if s == zero:
            return False
        seen.add(s)
        return True

    find(
        st.binary(min_size=100, max_size=100), tracktrue,
        settings=settings(max_shrinks=1)
    )

    assert len(seen) == 2


hypothesis-python-3.44.1/tests/cover/test_simple_characters.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import unicodedata

import pytest

from hypothesis import find
from hypothesis.errors import InvalidArgument
from tests.common.debug import find_any, assert_no_examples
from hypothesis.strategies import characters
from hypothesis.internal.compat import text_type


def test_bad_category_arguments():
    with pytest.raises(InvalidArgument):
        characters(
            whitelist_categories=['foo'], blacklist_categories=['bar']
        ).example()


def test_bad_codepoint_arguments():
    with pytest.raises(InvalidArgument):
        characters(min_codepoint=42, max_codepoint=24).example()


def test_exclude_all_available_range():
    with pytest.raises(InvalidArgument):
        characters(min_codepoint=ord('0'), max_codepoint=ord('0'),
                   blacklist_characters='0').example()


def test_when_nothing_could_be_produced():
    with pytest.raises(InvalidArgument):
        characters(whitelist_categories=['Cc'],
                   min_codepoint=ord('0'), max_codepoint=ord('9')).example()


def test_characters_of_specific_groups():
    st = characters(whitelist_categories=('Lu', 'Nd'))

    find(st, lambda c: unicodedata.category(c) == 'Lu')
    find(st, lambda c: unicodedata.category(c) == 'Nd')

    assert_no_examples(
        st, lambda c: unicodedata.category(c) not in ('Lu', 'Nd'))


def test_exclude_characters_of_specific_groups():
    st = characters(blacklist_categories=('Lu', 'Nd'))

    find(st, lambda c: unicodedata.category(c) != 'Lu')
    find(st, lambda c: unicodedata.category(c) != 'Nd')

    assert_no_examples(st, lambda c: unicodedata.category(c) in ('Lu', 'Nd'))


def test_find_one():
    char = find(characters(min_codepoint=48, max_codepoint=48),
                lambda _: True)
    assert char == u'0'


def test_find_something_rare():
    st = characters(whitelist_categories=['Zs'], min_codepoint=12288)

    find(st, lambda c: unicodedata.category(c) == 'Zs')

    assert_no_examples(st, lambda c: unicodedata.category(c) != 'Zs')


def test_whitelisted_characters_alone():
    with pytest.raises(InvalidArgument):
        characters(whitelist_characters=u'te02тест49st').example()


def test_whitelisted_characters_overlap_blacklisted_characters():
    good_chars = u'te02тест49st'
    bad_chars = u'ts94тсет'
    with pytest.raises(InvalidArgument) as exc:
        characters(min_codepoint=ord('0'), max_codepoint=ord('9'),
                   whitelist_characters=good_chars,
                   blacklist_characters=bad_chars).example()
    assert repr(good_chars) in text_type(exc)
    assert repr(bad_chars) in text_type(exc)


def test_whitelisted_characters_override():
    good_characters = u'teтестst'
    st = characters(min_codepoint=ord('0'), max_codepoint=ord('9'),
                    whitelist_characters=good_characters)

    find_any(st, lambda c: c in good_characters)
    find_any(st, lambda c: c in '0123456789')

    assert_no_examples(st, lambda c: c not in good_characters + '0123456789')


def test_blacklisted_characters():
    bad_chars = u'te02тест49st'
    st = characters(min_codepoint=ord('0'), max_codepoint=ord('9'),
                    blacklist_characters=bad_chars)

    assert '1' == find(st, lambda c: True)

    assert_no_examples(st, lambda c: c in bad_chars)


def test_whitelist_characters_disjoint_blacklist_characters():
    good_chars = u'123abc'
    bad_chars = u'456def'
    st = characters(min_codepoint=ord('0'), max_codepoint=ord('9'),
                    blacklist_characters=bad_chars,
                    whitelist_characters=good_chars)

    assert_no_examples(st, lambda c: c in bad_chars)


hypothesis-python-3.44.1/tests/cover/test_simple_collections.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0.
# If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from random import Random
from collections import namedtuple

import pytest
from flaky import flaky

from hypothesis import find, given, settings
from tests.common.debug import minimal, find_any
from hypothesis.strategies import none, sets, text, lists, builds, \
    tuples, booleans, integers, frozensets, dictionaries, \
    fixed_dictionaries
from hypothesis.internal.compat import OrderedDict


@pytest.mark.parametrize((u'col', u'strat'), [
    ((), tuples()),
    ([], lists(max_size=0)),
    (set(), sets(max_size=0)),
    (frozenset(), frozensets(max_size=0)),
    ({}, fixed_dictionaries({})),
])
def test_find_empty_collection_gives_empty(col, strat):
    assert find(strat, lambda x: True) == col


@pytest.mark.parametrize((u'coltype', u'strat'), [
    (list, lists),
    (set, sets),
    (frozenset, frozensets),
])
def test_find_non_empty_collection_gives_single_zero(coltype, strat):
    assert find(
        strat(integers()), bool
    ) == coltype((0,))


@pytest.mark.parametrize((u'coltype', u'strat'), [
    (list, lists),
    (set, sets),
    (frozenset, frozensets),
])
def test_minimizes_to_empty(coltype, strat):
    assert find(
        strat(integers()), lambda x: True
    ) == coltype()


def test_minimizes_list_of_lists():
    xs = find(lists(lists(booleans())), lambda x: any(x) and not all(x))
    xs.sort()
    assert xs == [[], [False]]


def test_minimize_long_list():
    assert find(
        lists(booleans(), average_size=100), lambda x: len(x) >= 70
    ) == [False] * 70


def test_minimize_list_of_longish_lists():
    xs = find(
        lists(lists(booleans())),
        lambda x: len([t for t in x if any(t) and len(t) >= 3]) >= 10)
    assert len(xs) == 10
    for x in xs:
        assert len(x) == 3
        assert len([t for t in x if t]) == 1


def test_minimize_list_of_fairly_non_unique_ints():
    xs = find(lists(integers()), lambda x: len(set(x)) < len(x))
    assert len(xs) == 2


def test_list_with_complex_sorting_structure():
    xs = find(
        lists(lists(booleans())),
        lambda x: [list(reversed(t)) for t in x] > x and len(x) > 3)
    assert len(xs) == 4


def test_list_with_wide_gap():
    xs = find(lists(integers()), lambda x: x and (max(x) > min(x) + 10 > 0))
    assert len(xs) == 2
    xs.sort()
    assert xs[1] == 11 + xs[0]


def test_minimize_namedtuple():
    T = namedtuple(u'T', (u'a', u'b'))
    tab = find(
        builds(T, integers(), integers()),
        lambda x: x.a < x.b)
    assert tab.b == tab.a + 1


def test_minimize_dict():
    tab = find(
        fixed_dictionaries({u'a': booleans(), u'b': booleans()}),
        lambda x: x[u'a'] or x[u'b']
    )
    assert not (tab[u'a'] and tab[u'b'])


def test_minimize_list_of_sets():
    assert find(
        lists(sets(booleans())),
        lambda x: len(list(filter(None, x))) >= 3) == (
        [set((False,))] * 3
    )


def test_minimize_list_of_lists():
    assert find(
        lists(lists(integers())),
        lambda x: len(list(filter(None, x))) >= 3) == (
        [[0]] * 3
    )


def test_minimize_list_of_tuples():
    xs = find(
        lists(tuples(integers(), integers())), lambda x: len(x) >= 2)
    assert xs == [(0, 0), (0, 0)]


def test_minimize_multi_key_dicts():
    assert find(
        dictionaries(keys=booleans(), values=booleans()), bool
    ) == {False: False}


def test_minimize_dicts_with_incompatible_keys():
    assert find(
        fixed_dictionaries({1: booleans(), u'hi': lists(booleans())}),
        lambda x: True
    ) == {1: False, u'hi': []}


def test_multiple_empty_lists_are_independent():
    x = find(lists(lists(max_size=0)), lambda t: len(t) >= 2)
    u, v = x
    assert u is not v


@given(sets(integers(0, 100), min_size=2, max_size=10))
@settings(max_examples=100)
def test_sets_are_size_bounded(xs):
    assert 2 <= len(xs) <= 10


def test_ordered_dictionaries_preserve_keys():
    r = Random()
    keys = list(range(100))
    r.shuffle(keys)
    x = fixed_dictionaries(
        OrderedDict([(k, booleans()) for k in keys])).example()
    assert list(x.keys()) == keys


@pytest.mark.parametrize(u'n', range(10))
def test_lists_of_fixed_length(n):
    assert find(
        lists(integers(), min_size=n, max_size=n), lambda x: True) == [0] * n


@pytest.mark.parametrize(u'n', range(10))
def test_sets_of_fixed_length(n):
    x = find(
        sets(integers(), min_size=n, max_size=n), lambda x: True)
    assert len(x) == n

    if not n:
        assert x == set()
    else:
        assert x == set(range(min(x), min(x) + n))


@pytest.mark.parametrize(u'n', range(10))
def test_dictionaries_of_fixed_length(n):
    x = set(find(
        dictionaries(integers(), booleans(), min_size=n, max_size=n),
        lambda x: True).keys())

    if not n:
        assert x == set()
    else:
        assert x == set(range(min(x), min(x) + n))


@pytest.mark.parametrize(u'n', range(10))
def test_lists_of_lower_bounded_length(n):
    x = find(
        lists(integers(), min_size=n), lambda x: sum(x) >= 2 * n
    )
    assert n <= len(x) <= 2 * n
    assert all(t >= 0 for t in x)
    assert len(x) == n or all(t > 0 for t in x)
    assert sum(x) == 2 * n


@pytest.mark.parametrize(u'n', range(10))
def test_lists_forced_near_top(n):
    assert find(
        lists(integers(), min_size=n, max_size=n + 2),
        lambda t: len(t) == n + 2
    ) == [0] * (n + 2)


@flaky(max_runs=5, min_passes=1)
def test_can_find_unique_lists_of_non_set_order():
    ls = minimal(
        lists(text(), unique=True),
        lambda x: list(set(reversed(x))) != x
    )
    assert len(set(ls)) == len(ls)
    assert len(ls) == 2


def test_can_find_sets_unique_by_incomplete_data():
    ls = find(
        lists(lists(integers(min_value=0), min_size=2), unique_by=max),
        lambda x: len(x) >= 10
    )
    assert len(ls) == 10
    assert sorted(list(map(max, ls))) == list(range(10))
    for v in ls:
        assert 0 in v


def test_can_draw_empty_list_from_unsatisfiable_strategy():
    assert find_any(lists(integers().filter(lambda s: False))) == []


def test_can_draw_empty_set_from_unsatisfiable_strategy():
    assert find_any(sets(integers().filter(lambda s: False))) == set()


small_set = sets(none())


@given(lists(small_set, min_size=10))
def test_small_sized_sets(x):
    pass


hypothesis-python-3.44.1/tests/cover/test_simple_numbers.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
import math

import pytest

from hypothesis import given, settings
from tests.common.debug import minimal
from hypothesis.strategies import lists, floats, randoms, integers, \
    complex_numbers


def test_minimize_negative_int():
    assert minimal(integers(), lambda x: x < 0) == -1
    assert minimal(integers(), lambda x: x < -1) == -2


def test_positive_negative_int():
    assert minimal(integers(), lambda x: x > 0) == 1
    assert minimal(integers(), lambda x: x > 1) == 2


boundaries = pytest.mark.parametrize(u'boundary', sorted(
    [2 ** i for i in range(10)] +
    [2 ** i - 1 for i in range(10)] +
    [2 ** i + 1 for i in range(10)] +
    [10 ** i for i in range(6)]
))


@boundaries
def test_minimizes_int_down_to_boundary(boundary):
    assert minimal(integers(), lambda x: x >= boundary) == boundary


@boundaries
def test_minimizes_int_up_to_boundary(boundary):
    assert minimal(integers(), lambda x: x <= -boundary) == -boundary


@boundaries
def test_minimizes_ints_from_down_to_boundary(boundary):
    def is_good(x):
        assert x >= boundary - 10
        return x >= boundary

    assert minimal(
        integers(min_value=boundary - 10), is_good) == boundary

    assert minimal(integers(min_value=boundary), lambda x: True) == boundary


def test_minimizes_negative_integer_range_upwards():
    assert minimal(integers(min_value=-10, max_value=-1)) == -1


@boundaries
def test_minimizes_integer_range_to_boundary(boundary):
    assert minimal(
        integers(boundary, boundary + 100), lambda x: True
    ) == boundary


def test_single_integer_range_is_range():
    assert minimal(integers(1, 1), lambda x: True) == 1


def test_minimal_small_number_in_large_range():
    assert minimal(
        integers((-2 ** 32), 2 ** 32), lambda x: x >= 101) == 101


def test_minimal_small_sum_float_list():
    xs = minimal(
        lists(floats(), min_size=10),
        lambda x: sum(x) >= 1.0
    )
    assert sum(xs) <= 2.0


def test_minimals_boundary_floats():
    def f(x):
        print(x)
        return True
    assert -1 <= minimal(floats(min_value=-1, max_value=1), f) <= 1


def test_minimal_non_boundary_float():
    x = minimal(floats(min_value=1, max_value=9), lambda x: x > 2)
    assert 2 < x < 3


def test_can_minimal_standard_complex_numbers():
    minimal(complex_numbers(), lambda x: x.imag != 0) == 0j
    minimal(complex_numbers(), lambda x: x.real != 0) == 1


def test_minimal_float_is_zero():
    assert minimal(floats(), lambda x: True) == 0.0


def test_negative_floats_simplify_to_zero():
    assert minimal(floats(), lambda x: x <= -1.0) == -1.0


def test_minimal_infinite_float_is_positive():
    assert minimal(floats(), math.isinf) == float(u'inf')


def test_can_minimal_infinite_negative_float():
    assert minimal(floats(), lambda x: x < -sys.float_info.max)


def test_can_minimal_float_on_boundary_of_representable():
    minimal(floats(), lambda x: x + 1 == x and not math.isinf(x))


def test_minimize_nan():
    assert math.isnan(minimal(floats(), math.isnan))


def test_minimize_very_large_float():
    t = sys.float_info.max / 2
    assert t <= minimal(floats(), lambda x: x >= t) < float(u'inf')


def is_integral(value):
    try:
        return int(value) == value
    except (OverflowError, ValueError):
        return False


def test_can_minimal_float_far_from_integral():
    minimal(floats(), lambda x: not (
        math.isnan(x) or
        math.isinf(x) or
        is_integral(x * (2 ** 32))
    ))


def test_list_of_fractional_float():
    assert set(minimal(
        lists(floats(), average_size=20),
        lambda x: len([t for t in x if t >= 1.5]) >= 5,
        timeout_after=60,
    )) in (
        set((1.5,)),
        set((1.5, 2.0)),
        set((2.0,)),
    )


@settings(deadline=None)
@given(randoms())
def test_minimal_fractional_float(rnd):
    assert minimal(
        floats(), lambda x: x >= 1.5, random=rnd) in (1.5, 2.0)


def test_minimizes_lists_of_negative_ints_up_to_boundary():
    result = minimal(
        lists(integers(), min_size=10),
        lambda x: len([t for t in x if t <= -1]) >= 10,
        timeout_after=60)
    assert result == [-1] * 10


@pytest.mark.parametrize((u'left', u'right'), [
    (0.0, 5e-324),
    (-5e-324, 0.0),
    (-5e-324, 5e-324),
    (5e-324, 1e-323),
])
def test_floats_in_constrained_range(left, right):
    @given(floats(left, right))
    def test_in_range(r):
        assert left <= r <= right
    test_in_range()


def test_bounds_are_respected():
    assert minimal(floats(min_value=1.0), lambda x: True) == 1.0
    assert minimal(floats(max_value=-1.0), lambda x: True) == -1.0


@pytest.mark.parametrize('k', range(10))
def test_floats_from_zero_have_reasonable_range(k):
    n = 10 ** k
    assert minimal(floats(min_value=0.0), lambda x: x >= n) == float(n)
    assert minimal(floats(max_value=0.0), lambda x: x <= -n) == float(-n)


def test_explicit_allow_nan():
    minimal(floats(allow_nan=True), math.isnan)


def test_one_sided_contains_infinity():
    minimal(floats(min_value=1.0), math.isinf)
    minimal(floats(max_value=1.0), math.isinf)


@given(floats(min_value=0.0, allow_infinity=False))
def test_no_allow_infinity_upper(x):
    assert not math.isinf(x)


@given(floats(max_value=0.0, allow_infinity=False))
def test_no_allow_infinity_lower(x):
    assert not math.isinf(x)


class TestFloatsAreFloats(object):

    @given(floats())
    def test_unbounded(self, arg):
        assert isinstance(arg, float)

    @given(floats(min_value=0, max_value=2 ** 64 - 1))
    def test_int_int(self, arg):
        assert isinstance(arg, float)

    @given(floats(min_value=0, max_value=float(2 ** 64 - 1)))
    def test_int_float(self, arg):
        assert isinstance(arg, float)

    @given(floats(min_value=float(0), max_value=2 ** 64 - 1))
    def test_float_int(self, arg):
        assert isinstance(arg, float)

    @given(floats(min_value=float(0), max_value=float(2 ** 64 - 1)))
    def test_float_float(self, arg):
        assert isinstance(arg, float)


hypothesis-python-3.44.1/tests/cover/test_simple_strings.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import unicodedata
from random import Random

import pytest

from hypothesis import find, given, settings
from hypothesis.strategies import text, binary, tuples, characters


def test_can_minimize_up_to_zero():
    s = find(text(), lambda x: any(t <= u'0' for t in x))
    assert s == u'0'


def test_minimizes_towards_ascii_zero():
    s = find(text(), lambda x: any(t < u'0' for t in x))
    assert s == chr(ord(u'0') - 1)


def test_can_handle_large_codepoints():
    s = find(text(), lambda x: x >= u'☃')
    assert s == u'☃'


def test_can_find_mixed_ascii_and_non_ascii_strings():
    s = find(
        text(), lambda x: (
            any(t >= u'☃' for t in x) and
            any(ord(t) <= 127 for t in x)))
    assert len(s) == 2
    assert sorted(s) == [u'0', u'☃']


def test_will_find_ascii_examples_given_the_chance():
    s = find(
        tuples(text(max_size=1), text(max_size=1)),
        lambda x: x[0] and (x[0] < x[1]))
    assert ord(s[1]) == ord(s[0]) + 1
    assert u'0' in s


def test_finds_single_element_strings():
    assert find(text(), bool, random=Random(4)) == u'0'


def test_binary_respects_changes_in_size():
    @given(binary())
    def test_foo(x):
        assert len(x) <= 10
with pytest.raises(AssertionError): test_foo() @given(binary(max_size=10)) def test_foo(x): assert len(x) <= 10 test_foo() @given(text(min_size=1, max_size=1)) @settings(max_examples=2000) def test_does_not_generate_surrogates(t): assert unicodedata.category(t) != u'Cs' def test_does_not_simplify_into_surrogates(): f = find(text(average_size=25.0), lambda x: x >= u'\udfff') assert f == u'\ue000' f = find( text(average_size=25.0), lambda x: len([t for t in x if t >= u'\udfff']) >= 10) assert f == u'\ue000' * 10 @given(text(alphabet=[u'a', u'b'])) def test_respects_alphabet_if_list(xs): assert set(xs).issubset(set(u'ab')) @given(text(alphabet=u'cdef')) def test_respects_alphabet_if_string(xs): assert set(xs).issubset(set(u'cdef')) @given(text()) def test_can_encode_as_utf8(s): s.encode('utf-8') @given(text(characters(blacklist_characters=u'\n'))) def test_can_blacklist_newlines(s): assert u'\n' not in s @given(text(characters(blacklist_categories=('Cc', 'Cs')))) def test_can_exclude_newlines_by_category(s): assert u'\n' not in s @given(text(characters(max_codepoint=127))) def test_can_restrict_to_ascii_only(s): s.encode('ascii') def test_fixed_size_bytes_just_draw_bytes(): from hypothesis.internal.conjecture.data import ConjectureData x = ConjectureData.for_buffer(b'foo') assert x.draw(binary(min_size=3, max_size=3)) == b'foo' hypothesis-python-3.44.1/tests/cover/test_skipping.py000066400000000000000000000031721321557765100230550ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import unittest

import pytest

from hypothesis import given
from hypothesis.core import exceptions_to_reraise
from tests.common.utils import capture_out
from hypothesis.strategies import integers


@pytest.mark.parametrize('skip_exception', exceptions_to_reraise)
def test_no_falsifying_example_if_unittest_skip(skip_exception):
    """If a ``SkipTest`` exception is raised during a test, Hypothesis should
    not continue running the test and shrink process, nor should it print
    anything about falsifying examples."""
    class DemoTest(unittest.TestCase):
        @given(xs=integers())
        def test_to_be_skipped(self, xs):
            if xs == 0:
                raise skip_exception
            else:
                assert xs == 0

    with capture_out() as o:
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
        unittest.TextTestRunner().run(suite)

    assert 'Falsifying example' not in o.getvalue()


hypothesis-python-3.44.1/tests/cover/test_slippage.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.strategies as st from hypothesis import given, settings from hypothesis.errors import Flaky, MultipleFailures from tests.common.utils import capture_out, non_covering_examples from hypothesis.database import InMemoryExampleDatabase def test_raises_multiple_failures_with_varying_type(): target = [None] @given(st.integers()) def test(i): if abs(i) < 1000: return if target[0] is None: target[0] = i exc_class = TypeError if target[0] == i else ValueError raise exc_class() with capture_out() as o: with pytest.raises(MultipleFailures): test() assert 'TypeError' in o.getvalue() assert 'ValueError' in o.getvalue() def test_raises_multiple_failures_when_position_varies(): target = [None] @given(st.integers()) def test(i): if abs(i) < 1000: return if target[0] is None: target[0] = i if target[0] == i: raise ValueError('loc 1') else: raise ValueError('loc 2') with capture_out() as o: with pytest.raises(MultipleFailures): test() assert 'loc 1' in o.getvalue() assert 'loc 2' in o.getvalue() def test_replays_both_failing_values(): target = [None] @settings(database=InMemoryExampleDatabase()) @given(st.integers()) def test(i): if abs(i) < 1000: return if target[0] is None: target[0] = i exc_class = TypeError if target[0] == i else ValueError raise exc_class() with pytest.raises(MultipleFailures): test() with pytest.raises(MultipleFailures): test() @pytest.mark.parametrize('fix', [TypeError, ValueError]) def test_replays_slipped_examples_once_initial_bug_is_fixed(fix): target = [] bug_fixed = False @settings(database=InMemoryExampleDatabase()) @given(st.integers()) def test(i): if abs(i) < 1000: return if not target: target.append(i) if i == target[0]: if bug_fixed and fix == TypeError: return raise TypeError() if len(target) == 1: target.append(i) if bug_fixed and fix == ValueError: return if i == target[1]: raise ValueError() with pytest.raises(MultipleFailures): 
test() bug_fixed = True with pytest.raises(ValueError if fix == TypeError else TypeError): test() def test_garbage_collects_the_secondary_key(): target = [] bug_fixed = False db = InMemoryExampleDatabase() @settings(database=db) @given(st.integers()) def test(i): if bug_fixed: return if abs(i) < 1000: return if not target: target.append(i) if i == target[0]: raise TypeError() if len(target) == 1: target.append(i) if i == target[1]: raise ValueError() with pytest.raises(MultipleFailures): test() bug_fixed = True def count(): return len(non_covering_examples(db)) prev = count() while prev > 0: test() current = count() assert current < prev prev = current def test_shrinks_both_failures(): first_has_failed = [False] duds = set() second_target = [None] @given(st.integers()) def test(i): if i >= 10000: first_has_failed[0] = True assert False assert i < 10000 if first_has_failed[0]: if second_target[0] is None: for j in range(10000): if j not in duds: second_target[0] = j break assert i < second_target[0] else: duds.add(i) with capture_out() as o: with pytest.raises(MultipleFailures): test() assert 'test(i=10000)' in o.getvalue() assert 'test(i=%d)' % (second_target[0],) in o.getvalue() def test_handles_flaky_tests_where_only_one_is_flaky(): flaky_fixed = False target = [] flaky_failed_once = [False] @settings(database=InMemoryExampleDatabase()) @given(st.integers()) def test(i): if abs(i) < 1000: return if not target: target.append(i) if i == target[0]: raise TypeError() if flaky_failed_once[0] and not flaky_fixed: return if len(target) == 1: target.append(i) if i == target[1]: flaky_failed_once[0] = True raise ValueError() with pytest.raises(Flaky): test() flaky_fixed = True with pytest.raises(MultipleFailures): test() hypothesis-python-3.44.1/tests/cover/test_stateful.py000066400000000000000000000413271321557765100230640ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # 
https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import inspect from collections import namedtuple import pytest from hypothesis import settings as Settings from hypothesis import assume from hypothesis.errors import Flaky, InvalidDefinition from hypothesis.control import current_build_context from tests.common.utils import raises, capture_out, \ checks_deprecated_behaviour from hypothesis.database import ExampleDatabase from hypothesis.stateful import Bundle, GenericStateMachine, \ RuleBasedStateMachine, rule, invariant, precondition, \ run_state_machine_as_test from hypothesis.strategies import just, none, lists, binary, tuples, \ choices, booleans, integers, sampled_from from hypothesis.internal.compat import print_unicode class SetStateMachine(GenericStateMachine): def __init__(self): self.elements = [] def steps(self): strat = tuples(just(False), integers(0, 5)) if self.elements: strat |= tuples(just(True), sampled_from(self.elements)) return strat def execute_step(self, step): delete, value = step if delete: self.elements.remove(value) assert value not in self.elements else: self.elements.append(value) class OrderedStateMachine(GenericStateMachine): def __init__(self): self.counter = 0 def steps(self): return ( integers(self.counter - 1, self.counter + 50) ) def execute_step(self, step): assert step >= self.counter self.counter = step class GoodSet(GenericStateMachine): def __init__(self): 
self.stuff = set() def steps(self): return tuples(booleans(), integers()) def execute_step(self, step): delete, value = step if delete: self.stuff.discard(value) else: self.stuff.add(value) assert delete == (value not in self.stuff) Leaf = namedtuple(u'Leaf', (u'label',)) Split = namedtuple(u'Split', (u'left', u'right')) class BalancedTrees(RuleBasedStateMachine): trees = u'BinaryTree' @rule(target=trees, x=booleans()) def leaf(self, x): return Leaf(x) @rule(target=trees, left=Bundle(trees), right=Bundle(trees)) def split(self, left, right): return Split(left, right) @rule(tree=Bundle(trees)) def test_is_balanced(self, tree): if isinstance(tree, Leaf): return else: assert abs(self.size(tree.left) - self.size(tree.right)) <= 1 self.test_is_balanced(tree.left) self.test_is_balanced(tree.right) def size(self, tree): if isinstance(tree, Leaf): return 1 else: return 1 + self.size(tree.left) + self.size(tree.right) class DepthCharge(object): def __init__(self, value): if value is None: self.depth = 0 else: self.depth = value.depth + 1 class DepthMachine(RuleBasedStateMachine): charges = Bundle(u'charges') @rule(targets=(charges,), child=charges) def charge(self, child): return DepthCharge(child) @rule(targets=(charges,)) def none_charge(self): return DepthCharge(None) @rule(check=charges) def is_not_too_deep(self, check): assert check.depth < 3 class MultipleRulesSameFuncMachine(RuleBasedStateMachine): def myfunc(self, data): print_unicode(data) rule1 = rule(data=just(u"rule1data"))(myfunc) rule2 = rule(data=just(u"rule2data"))(myfunc) class PreconditionMachine(RuleBasedStateMachine): num = 0 @rule() def add_one(self): self.num += 1 @rule() def set_to_zero(self): self.num = 0 @rule(num=integers()) @precondition(lambda self: self.num != 0) def div_by_precondition_after(self, num): self.num = num / self.num @precondition(lambda self: self.num != 0) @rule(num=integers()) def div_by_precondition_before(self, num): self.num = num / self.num class 
RoseTreeStateMachine(RuleBasedStateMachine): nodes = Bundle('nodes') @rule(target=nodes, source=lists(nodes)) def bunch(self, source): return source @rule(source=nodes) def shallow(self, source): def d(ls): if not ls: return 0 else: return 1 + max(map(d, ls)) assert d(source) <= 5 class NotTheLastMachine(RuleBasedStateMachine): stuff = Bundle('stuff') def __init__(self): super(NotTheLastMachine, self).__init__() self.last = None self.bye_called = False @rule(target=stuff) def hi(self): result = object() self.last = result return result @precondition(lambda self: not self.bye_called) @rule(v=stuff) def bye(self, v): assert v == self.last self.bye_called = True bad_machines = ( OrderedStateMachine, SetStateMachine, BalancedTrees, DepthMachine, RoseTreeStateMachine, NotTheLastMachine, ) for m in bad_machines: m.TestCase.settings = Settings( m.TestCase.settings, max_examples=1000, max_iterations=2000 ) cheap_bad_machines = list(bad_machines) cheap_bad_machines.remove(BalancedTrees) with_cheap_bad_machines = pytest.mark.parametrize( u'machine', cheap_bad_machines, ids=[t.__name__ for t in cheap_bad_machines] ) @pytest.mark.parametrize( u'machine', bad_machines, ids=[t.__name__ for t in bad_machines] ) def test_bad_machines_fail(machine): test_class = machine.TestCase try: with capture_out() as o: with raises(AssertionError): test_class().runTest() except Exception: print_unicode(o.getvalue()) raise v = o.getvalue() print_unicode(v) assert u'Step #1' in v assert u'Step #50' not in v def test_multiple_rules_same_func(): test_class = MultipleRulesSameFuncMachine.TestCase with capture_out() as o: test_class().runTest() output = o.getvalue() assert 'rule1data' in output assert 'rule2data' in output class GivenLikeStateMachine(GenericStateMachine): def steps(self): return lists(booleans(), average_size=25.0) def execute_step(self, step): assume(any(step)) def test_can_get_test_case_off_machine_instance(): assert GoodSet().TestCase is GoodSet().TestCase assert 
GoodSet().TestCase is not None class FlakyDrawLessMachine(GenericStateMachine): def steps(self): cb = current_build_context() if cb.is_final: return binary(min_size=1, max_size=1) else: return binary(min_size=1024, max_size=1024) def execute_step(self, step): cb = current_build_context() if not cb.is_final: assert 0 not in bytearray(step) def test_flaky_draw_less_raises_flaky(): with raises(Flaky): FlakyDrawLessMachine.TestCase().runTest() class FlakyStateMachine(GenericStateMachine): def steps(self): return just(()) def execute_step(self, step): assert not any( t[3] == u'find_breaking_runner' for t in inspect.getouterframes(inspect.currentframe()) ) def test_flaky_raises_flaky(): with raises(Flaky): FlakyStateMachine.TestCase().runTest() class FlakyRatchettingMachine(GenericStateMachine): ratchet = 0 def steps(self): FlakyRatchettingMachine.ratchet += 1 n = FlakyRatchettingMachine.ratchet return lists(integers(), min_size=n, max_size=n) def execute_step(self, step): assert False def test_ratchetting_raises_flaky(): with raises(Flaky): FlakyRatchettingMachine.TestCase().runTest() def test_empty_machine_is_invalid(): class EmptyMachine(RuleBasedStateMachine): pass with raises(InvalidDefinition): EmptyMachine.TestCase().runTest() def test_machine_with_no_terminals_is_invalid(): class NonTerminalMachine(RuleBasedStateMachine): @rule(value=Bundle(u'hi')) def bye(self, hi): pass with raises(InvalidDefinition): NonTerminalMachine.TestCase().runTest() class DynamicMachine(RuleBasedStateMachine): @rule(value=Bundle(u'hi')) def test_stuff(x): pass DynamicMachine.define_rule( targets=(), function=lambda self: 1, arguments={} ) class IntAdder(RuleBasedStateMachine): pass IntAdder.define_rule( targets=(u'ints',), function=lambda self, x: x, arguments={ u'x': integers() } ) IntAdder.define_rule( targets=(u'ints',), function=lambda self, x, y: x, arguments={ u'x': integers(), u'y': Bundle(u'ints'), } ) @checks_deprecated_behaviour def test_can_choose_in_a_machine(): class 
ChoosingMachine(GenericStateMachine): def steps(self): return choices() def execute_step(self, choices): choices([1, 2, 3]) run_state_machine_as_test(ChoosingMachine) with Settings(max_examples=10): TestGoodSets = GoodSet.TestCase TestGivenLike = GivenLikeStateMachine.TestCase TestDynamicMachine = DynamicMachine.TestCase TestIntAdder = IntAdder.TestCase TestPrecondition = PreconditionMachine.TestCase def test_picks_up_settings_at_first_use_of_testcase(): assert TestDynamicMachine.settings.max_examples == 10 def test_new_rules_are_picked_up_before_and_after_rules_call(): class Foo(RuleBasedStateMachine): pass Foo.define_rule( targets=(), function=lambda self: 1, arguments={} ) assert len(Foo.rules()) == 1 Foo.define_rule( targets=(), function=lambda self: 2, arguments={} ) assert len(Foo.rules()) == 2 def test_settings_are_independent(): s = Settings() orig = s.max_examples with s: class Foo(RuleBasedStateMachine): pass Foo.define_rule( targets=(), function=lambda self: 1, arguments={} ) Foo.TestCase.settings = Settings( Foo.TestCase.settings, max_examples=1000000) assert s.max_examples == orig def test_minimizes_errors_in_teardown(): class Foo(GenericStateMachine): def __init__(self): self.counter = 0 def steps(self): return tuples() def execute_step(self, value): self.counter += 1 def teardown(self): assert not self.counter runner = Foo.find_breaking_runner() f = Foo() with raises(AssertionError): runner.run(f, print_steps=True) assert f.counter == 1 class RequiresInit(GenericStateMachine): def __init__(self, threshold): super(RequiresInit, self).__init__() self.threshold = threshold def steps(self): return integers() def execute_step(self, value): if value > self.threshold: raise ValueError(u'%d is too high' % (value,)) def test_can_use_factory_for_tests(): with raises(ValueError): run_state_machine_as_test(lambda: RequiresInit(42)) class FailsEventually(GenericStateMachine): def __init__(self): super(FailsEventually, self).__init__() self.counter = 0 def 
steps(self): return none() def execute_step(self, _): self.counter += 1 assert self.counter < 10 FailsEventually.TestCase.settings = Settings( FailsEventually.TestCase.settings, stateful_step_count=5) TestDoesNotFail = FailsEventually.TestCase def test_can_explicitly_pass_settings(): try: FailsEventually.TestCase.settings = Settings( FailsEventually.TestCase.settings, stateful_step_count=15) run_state_machine_as_test( FailsEventually, settings=Settings( stateful_step_count=2, )) finally: FailsEventually.TestCase.settings = Settings( FailsEventually.TestCase.settings, stateful_step_count=5) def test_saves_failing_example_in_database(): db = ExampleDatabase(':memory:') with raises(AssertionError): run_state_machine_as_test( SetStateMachine, Settings(database=db)) assert len(list(db.data.keys())) == 2 def test_can_run_with_no_db(): with raises(AssertionError): run_state_machine_as_test( SetStateMachine, Settings(database=None)) def test_stateful_double_rule_is_forbidden(recwarn): with pytest.raises(InvalidDefinition): class DoubleRuleMachine(RuleBasedStateMachine): @rule(num=just(1)) @rule(num=just(2)) def whatevs(self, num): pass def test_can_explicitly_call_functions_when_precondition_not_satisfied(): class BadPrecondition(RuleBasedStateMachine): def __init__(self): super(BadPrecondition, self).__init__() @precondition(lambda self: False) @rule() def test_blah(self): raise ValueError() @rule() def test_foo(self): self.test_blah() with pytest.raises(ValueError): run_state_machine_as_test(BadPrecondition) def test_invariant(): """If an invariant raise an exception, the exception is propagated.""" class Invariant(RuleBasedStateMachine): def __init__(self): super(Invariant, self).__init__() @invariant() def test_blah(self): raise ValueError() @rule() def do_stuff(self): pass with pytest.raises(ValueError): run_state_machine_as_test(Invariant) def test_no_double_invariant(): """The invariant decorator can't be applied multiple times to a single function.""" with 
raises(InvalidDefinition):
        class Invariant(RuleBasedStateMachine):
            def __init__(self):
                super(Invariant, self).__init__()

            @invariant()
            @invariant()
            def test_blah(self):
                pass

            @rule()
            def do_stuff(self):
                pass


def test_invariant_precondition():
    """If an invariant precondition isn't met, the invariant isn't run.

    The precondition decorator can be applied in any order.
    """
    class Invariant(RuleBasedStateMachine):
        def __init__(self):
            super(Invariant, self).__init__()

        @invariant()
        @precondition(lambda _: False)
        def an_invariant(self):
            raise ValueError()

        @precondition(lambda _: False)
        @invariant()
        def another_invariant(self):
            raise ValueError()

        @rule()
        def do_stuff(self):
            pass

    run_state_machine_as_test(Invariant)


def test_multiple_invariants():
    """If multiple invariants are present, they all get run."""
    class Invariant(RuleBasedStateMachine):
        def __init__(self):
            super(Invariant, self).__init__()
            self.first_invariant_ran = False

        @invariant()
        def invariant_1(self):
            self.first_invariant_ran = True

        @precondition(lambda self: self.first_invariant_ran)
        @invariant()
        def invariant_2(self):
            raise ValueError()

        @rule()
        def do_stuff(self):
            pass

    with pytest.raises(ValueError):
        run_state_machine_as_test(Invariant)


def test_explicit_invariant_call_with_precondition():
    """Invariants can be called explicitly even if their precondition is not
    satisfied."""
    class BadPrecondition(RuleBasedStateMachine):
        def __init__(self):
            super(BadPrecondition, self).__init__()

        @precondition(lambda self: False)
        @invariant()
        def test_blah(self):
            raise ValueError()

        @rule()
        def test_foo(self):
            self.test_blah()

    with pytest.raises(ValueError):
        run_state_machine_as_test(BadPrecondition)


def test_invariant_checks_initial_state():
    """Invariants are checked before any rules run."""
    class BadPrecondition(RuleBasedStateMachine):
        def __init__(self):
            super(BadPrecondition, self).__init__()
            self.num = 0

        @invariant()
        def test_blah(self):
            if self.num == 0:
                raise ValueError()

        @rule()
        def test_foo(self):
            self.num += 1

    with
pytest.raises(ValueError): run_state_machine_as_test(BadPrecondition) hypothesis-python-3.44.1/tests/cover/test_statistical_events.py000066400000000000000000000075151321557765100251460ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import re import time import traceback import pytest from hypothesis import strategies as st from hypothesis import HealthCheck, event, given, example, settings from hypothesis.statistics import collector def call_for_statistics(test_function): result = [None] def callback(statistics): result[0] = statistics with collector.with_value(callback): try: test_function() except Exception: traceback.print_exc() assert result[0] is not None return result[0] def test_can_callback_with_a_string(): @given(st.integers()) def test(i): event('hi') stats = call_for_statistics(test) assert any('hi' in s for s in stats.events) counter = 0 seen = [] class Foo(object): def __eq__(self, other): return True def __ne__(self, other): return False def __hash__(self): return 0 def __str__(self): seen.append(self) global counter counter += 1 return 'COUNTER %d' % (counter,) def test_formats_are_evaluated_only_once(): global counter counter = 0 @given(st.integers()) def test(i): event(Foo()) stats = call_for_statistics(test) assert any('COUNTER 1' in s for s in stats.events) assert not 
any('COUNTER 2' in s for s in stats.events) def test_does_not_report_on_examples(): @example('hi') @given(st.integers()) def test(i): if isinstance(i, str): event('boo') stats = call_for_statistics(test) assert not any('boo' in e for e in stats.events) def test_exact_timing(): @settings(suppress_health_check=[HealthCheck.too_slow], deadline=None) @given(st.integers()) def test(i): time.sleep(0.5) stats = call_for_statistics(test) assert re.match(r'~ 5\d\dms', stats.runtimes) def test_apparently_instantaneous_tests(): time.freeze() @given(st.integers()) def test(i): pass stats = call_for_statistics(test) assert stats.runtimes == '< 1ms' def test_flaky_exit(): first = [True] @given(st.integers()) def test(i): if i > 1001: if first[0]: first[0] = False print('Hi') assert False stats = call_for_statistics(test) assert stats.exit_reason == 'test was flaky' @pytest.mark.parametrize('draw_delay', [False, True]) @pytest.mark.parametrize('test_delay', [False, True]) def test_draw_time_percentage(draw_delay, test_delay): time.freeze() @st.composite def s(draw): if draw_delay: time.sleep(0.05) @given(s()) def test(_): if test_delay: time.sleep(0.05) stats = call_for_statistics(test) if not draw_delay: assert stats.draw_time_percentage == '~ 0%' elif test_delay: assert stats.draw_time_percentage == '~ 50%' else: assert stats.draw_time_percentage == '~ 100%' def test_has_lambdas_in_output(): @given(st.integers().filter(lambda x: x % 2 == 0)) def test(i): pass stats = call_for_statistics(test) assert any( 'lambda x: x % 2 == 0' in e for e in stats.events ) hypothesis-python-3.44.1/tests/cover/test_streams.py000066400000000000000000000054541321557765100227140ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from itertools import islice import pytest from hypothesis import find, given from hypothesis.errors import InvalidArgument from tests.common.utils import checks_deprecated_behaviour from hypothesis.strategies import text, lists, booleans, streaming from hypothesis.searchstrategy.streams import Stream @given(lists(booleans())) def test_stream_give_lists(xs): s = Stream(iter(xs)) assert list(s) == xs assert list(s) == xs @given(lists(booleans())) def test_can_zip_streams_with_self(xs): s = Stream(iter(xs)) assert list(zip(s, s)) == list(zip(xs, xs)) def loop(x): while True: yield x def test_can_stream_infinite(): s = Stream(loop(False)) assert list(islice(s, 100)) == [False] * 100 @checks_deprecated_behaviour def test_fetched_repr_is_in_stream_repr(): @given(streaming(text())) def test(s): assert repr(s) == u'Stream(...)' assert repr(next(iter(s))) in repr(s) test() def test_cannot_thunk_past_end_of_list(): with pytest.raises(IndexError): Stream([1])._thunk_to(5) def test_thunking_evaluates_initial_list(): x = Stream([1, 2, 3]) x._thunk_to(1) assert len(x.fetched) == 1 def test_thunking_map_evaluates_source(): x = Stream(loop(False)) y = x.map(lambda t: True) y[100] assert y._thunked() == 101 assert x._thunked() == 101 def test_wrong_index_raises_type_error(): with pytest.raises(InvalidArgument): Stream([])[u'kittens'] def test_can_index_into_unindexed(): x = Stream(loop(1)) assert x[100] == 1 def test_can_map(): x = Stream([1, 2, 3]).map(lambda i: i * 2) assert isinstance(x, Stream) assert list(x) == [2, 4, 6] 
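# The Stream tests above exercise a memoised, lazily-evaluated sequence:
# iteration is repeatable, indexing forces ("thunks") evaluation of a prefix,
# and map() stays lazy. The sketch below illustrates those semantics with a
# self-contained LazyStream class; it is an illustration only, NOT
# hypothesis's internal Stream implementation.

```python
import itertools


class LazyStream(object):
    """A memoised lazy sequence, sketching the semantics tested above."""

    def __init__(self, source=()):
        self._source = iter(source)
        self.fetched = []  # memoised prefix of evaluated elements

    def _thunk_to(self, n):
        # Force evaluation of the first n elements; IndexError if the
        # underlying iterator runs out first.
        while len(self.fetched) < n:
            try:
                self.fetched.append(next(self._source))
            except StopIteration:
                raise IndexError(
                    'stream exhausted at %d' % (len(self.fetched),))

    def __getitem__(self, i):
        self._thunk_to(i + 1)
        return self.fetched[i]

    def _iterate(self):
        i = 0
        while True:
            try:
                yield self[i]
            except IndexError:
                return
            i += 1

    def __iter__(self):
        # Iteration goes through the memoised prefix, so repeated iteration
        # yields the same elements even for a one-shot source iterator.
        return self._iterate()

    def map(self, f):
        # Lazy map: pulls from the parent stream only on demand.
        return LazyStream(f(x) for x in self._iterate())
```

# For example, LazyStream(itertools.repeat(1))[100] evaluates only the first
# 101 elements of an infinite source, mirroring test_can_index_into_unindexed.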
@checks_deprecated_behaviour def test_streaming_errors_in_find(): with pytest.raises(InvalidArgument): find(streaming(booleans()), lambda x: True) def test_default_stream_is_empty(): assert list(Stream()) == [] def test_can_slice_streams(): assert list(Stream([1, 2, 3])[:2]) == [1, 2] @checks_deprecated_behaviour def test_validates_argument(): with pytest.raises(InvalidArgument): streaming(bool).example() hypothesis-python-3.44.1/tests/cover/test_target_selector.py000066400000000000000000000067771321557765100244350ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import attr import hypothesis.strategies as st from hypothesis import given, settings from hypothesis.internal.compat import hrange from hypothesis.internal.conjecture.data import Status from hypothesis.internal.conjecture.engine import TargetSelector, universal @attr.s() class FakeConjectureData(object): tags = attr.ib() @property def status(self): return Status.VALID @st.composite def fake_randoms(draw): data = draw(st.data()) class FakeRandom(object): def choice(self, values): if len(values) == 1: return values[0] return data.draw(st.sampled_from(values), label='choice(%r)' % ( values,)) return FakeRandom() @settings(deadline=None) @given(fake_randoms()) def test_selects_non_universal_tag(rnd): selector = TargetSelector(rnd) selector.add(FakeConjectureData({0})) selector.add(FakeConjectureData(set())) tag1, x = selector.select() assert tag1 is not universal tag2, y = selector.select() assert tag2 is not universal assert tag1 != tag2 assert x != y data_lists = st.lists( st.builds(FakeConjectureData, st.frozensets(st.integers(0, 10))), min_size=1) def check_bounded_cycle(selector): everything = selector.examples_by_tags[universal] tags = frozenset() for d in everything: tags |= d.tags for _ in hrange(2 * len(tags) + 1): t, x = selector.select() tags -= x.tags if not tags: break assert not tags @settings(use_coverage=False, deadline=None) @given(fake_randoms(), data_lists) def test_cycles_through_all_tags_in_bounded_time(rnd, datas): selector = TargetSelector(rnd) for d in datas: selector.add(d) check_bounded_cycle(selector) @settings(use_coverage=False, deadline=None) @given(fake_randoms(), data_lists, data_lists) def test_cycles_through_all_tags_in_bounded_time_mixed(rnd, d1, d2): selector = TargetSelector(rnd) for d in d1: selector.add(d) check_bounded_cycle(selector) for d in d2: selector.add(d) check_bounded_cycle(selector) @settings(deadline=None) @given(fake_randoms()) def 
test_a_negated_tag_is_also_interesting(rnd): selector = TargetSelector(rnd) selector.add(FakeConjectureData(tags=frozenset({0}))) selector.add(FakeConjectureData(tags=frozenset({0}))) selector.add(FakeConjectureData(tags=frozenset())) _, data = selector.select() assert not data.tags @settings(deadline=None) @given(fake_randoms(), st.integers(1, 10)) def test_always_starts_with_rare_tags(rnd, n): selector = TargetSelector(rnd) selector.add(FakeConjectureData(tags=frozenset({0}))) for _ in hrange(n): selector.select() selector.add(FakeConjectureData(tags=frozenset({1}))) _, data = selector.select() assert 1 in data.tags hypothesis-python-3.44.1/tests/cover/test_testdecorators.py000066400000000000000000000277411321557765100243060ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import functools
import threading
from collections import namedtuple

import hypothesis.reporting as reporting
from hypothesis import Verbosity, note, seed, given, assume, reject, \
    settings
from hypothesis.errors import Unsatisfiable
from tests.common.utils import fails, raises, fails_with, capture_out
from hypothesis.strategies import data, just, sets, text, lists, binary, \
    builds, floats, one_of, booleans, integers, frozensets, sampled_from


@given(integers(), integers())
def test_int_addition_is_commutative(x, y):
    assert x + y == y + x


@fails
@given(text(), text())
def test_str_addition_is_commutative(x, y):
    assert x + y == y + x


@fails
@given(binary(), binary())
def test_bytes_addition_is_commutative(x, y):
    assert x + y == y + x


@given(integers(), integers(), integers())
def test_int_addition_is_associative(x, y, z):
    assert x + (y + z) == (x + y) + z


@fails
@given(floats(), floats(), floats())
@settings(max_examples=2000,)
def test_float_addition_is_associative(x, y, z):
    assert x + (y + z) == (x + y) + z


@given(lists(integers()))
def test_reversing_preserves_integer_addition(xs):
    assert sum(xs) == sum(reversed(xs))


def test_still_minimizes_on_non_assertion_failures():
    @settings(max_examples=50)
    @given(integers())
    def is_not_too_large(x):
        if x >= 10:
            raise ValueError('No, %s is just too large. Sorry' % x)

    with raises(ValueError) as exinfo:
        is_not_too_large()

    assert ' 10 ' in exinfo.value.args[0]


@given(integers())
def test_integer_division_shrinks_positive_integers(n):
    assume(n > 0)
    assert n / 2 < n


class TestCases(object):

    @given(integers())
    def test_abs_non_negative(self, x):
        assert abs(x) >= 0
        assert isinstance(self, TestCases)

    @given(x=integers())
    def test_abs_non_negative_varargs(self, x, *args):
        assert abs(x) >= 0
        assert isinstance(self, TestCases)

    @given(x=integers())
    def test_abs_non_negative_varargs_kwargs(self, *args, **kw):
        assert abs(kw['x']) >= 0
        assert isinstance(self, TestCases)

    @given(x=integers())
    def test_abs_non_negative_varargs_kwargs_only(*args, **kw):
        assert abs(kw['x']) >= 0
        assert isinstance(args[0], TestCases)

    @fails
    @given(integers())
    def test_int_is_always_negative(self, x):
        assert x < 0

    @fails
    @given(floats(), floats())
    def test_float_addition_cancels(self, x, y):
        assert x + (y - x) == y


@fails
@given(x=integers(min_value=0, max_value=3), name=text())
def test_can_be_given_keyword_args(x, name):
    assume(x > 0)
    assert len(name) < x


@fails
@given(one_of(floats(), booleans()), one_of(floats(), booleans()))
def test_one_of_produces_different_values(x, y):
    assert type(x) == type(y)


@given(just(42))
def test_is_the_answer(x):
    assert x == 42


@fails
@given(text(), text())
def test_text_addition_is_not_commutative(x, y):
    assert x + y == y + x


@fails
@given(binary(), binary())
def test_binary_addition_is_not_commutative(x, y):
    assert x + y == y + x


@given(integers(1, 10))
def test_integers_are_in_range(x):
    assert 1 <= x <= 10


@given(integers(min_value=100))
def test_integers_from_are_from(x):
    assert x >= 100


def test_does_not_catch_interrupt_during_falsify():
    calls = [0]

    @given(integers())
    def flaky_base_exception(x):
        if not calls[0]:
            calls[0] = 1
            raise KeyboardInterrupt()
    with raises(KeyboardInterrupt):
        flaky_base_exception()


def test_contains_the_test_function_name_in_the_exception_string():
    calls = [0]

    @given(integers())
    @settings(max_iterations=10, max_examples=10)
    def this_has_a_totally_unique_name(x):
        calls[0] += 1
        reject()

    with raises(Unsatisfiable) as e:
        this_has_a_totally_unique_name()
    print('Called %d times' % tuple(calls))

    assert this_has_a_totally_unique_name.__name__ in e.value.args[0]

    calls2 = [0]

    class Foo(object):

        @given(integers())
        @settings(max_iterations=10, max_examples=10)
        def this_has_a_unique_name_and_lives_on_a_class(self, x):
            calls2[0] += 1
            reject()

    with raises(Unsatisfiable) as e:
        Foo().this_has_a_unique_name_and_lives_on_a_class()
    print('Called %d times' % tuple(calls2))

    assert (
        Foo.this_has_a_unique_name_and_lives_on_a_class.__name__
    ) in e.value.args[0]


@given(lists(integers(), unique=True), integers())
def test_removing_an_element_from_a_unique_list(xs, y):
    assume(len(set(xs)) == len(xs))

    try:
        xs.remove(y)
    except ValueError:
        pass

    assert y not in xs


@fails
@given(lists(integers(), average_size=25.0), data())
def test_removing_an_element_from_a_non_unique_list(xs, data):
    y = data.draw(sampled_from(xs))
    xs.remove(y)
    assert y not in xs


@given(sets(sampled_from(list(range(10)))))
def test_can_test_sets_sampled_from(xs):
    assert all(isinstance(x, int) for x in xs)
    assert all(0 <= x < 10 for x in xs)


mix = one_of(sampled_from([1, 2, 3]), text())


@fails
@given(mix, mix)
def test_can_mix_sampling_with_generating(x, y):
    assert type(x) == type(y)


@fails
@given(frozensets(integers()))
def test_can_find_large_sum_frozenset(xs):
    assert sum(xs) < 100


def test_prints_on_failure_by_default():
    @given(integers(), integers())
    @settings(max_examples=100)
    def test_ints_are_sorted(balthazar, evans):
        assume(evans >= 0)
        assert balthazar <= evans
    with raises(AssertionError):
        with capture_out() as out:
            with reporting.with_reporter(reporting.default):
                test_ints_are_sorted()
    out = out.getvalue()
    lines = [l.strip() for l in out.split('\n')]
    assert (
        'Falsifying example: test_ints_are_sorted(balthazar=1, evans=0)'
        in lines)


def test_does_not_print_on_success():
    with settings(verbosity=Verbosity.normal):
        @given(integers())
        def test_is_an_int(x):
            return

        with capture_out() as out:
            test_is_an_int()
    out = out.getvalue()
    lines = [l.strip() for l in out.split(u'\n')]
    assert all(not l for l in lines), lines


@given(sampled_from([1]))
def test_can_sample_from_single_element(x):
    assert x == 1


@fails
@given(lists(integers()))
def test_list_is_sorted(xs):
    assert sorted(xs) == xs


@fails
@given(floats(1.0, 2.0))
def test_is_an_endpoint(x):
    assert x == 1.0 or x == 2.0


def test_breaks_bounds():
    @fails
    @given(x=integers())
    def test_is_bounded(t, x):
        assert x < t
    for t in [1, 10, 100, 1000]:
        test_is_bounded(t)


@given(x=booleans())
def test_can_test_kwargs_only_methods(**kwargs):
    assert isinstance(kwargs['x'], bool)


@fails_with(UnicodeEncodeError)
@given(text())
@settings(max_examples=100)
def test_is_ascii(x):
    x.encode('ascii')


@fails
@given(text())
def test_is_not_ascii(x):
    try:
        x.encode('ascii')
        assert False
    except UnicodeEncodeError:
        pass


@fails
@given(text())
def test_can_find_string_with_duplicates(s):
    assert len(set(s)) == len(s)


@fails
@given(text())
def test_has_ascii(x):
    if not x:
        return
    ascii_characters = (
        u'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ \t\n'
    )
    assert any(c in ascii_characters for c in x)


def test_uses_provided_seed():
    import random
    random.seed(0)
    initial = random.getstate()

    @given(integers())
    @seed(42)
    def test_foo(x):
        pass

    test_foo()
    assert hash(repr(random.getstate())) == hash(repr(initial))


def test_can_derandomize():
    values = []

    @fails
    @given(integers())
    @settings(derandomize=True, database=None)
    def test_blah(x):
        values.append(x)
        assert x > 0

    test_blah()
    assert values
    v1 = values
    values = []
    test_blah()
    assert v1 == values


def test_can_run_without_database():
    @given(integers())
    @settings(database=None)
    def test_blah(x):
        assert False
    with raises(AssertionError):
        test_blah()


def test_can_run_with_database_in_thread():
    results = []

    @given(integers())
    def test_blah(x):
        assert False

    def run_test():
        try:
            test_blah()
        except AssertionError:
            results.append('success')

    # Run once in the main thread and once in another thread. Execution is
    # strictly serial, so no need for locking.
    run_test()
    thread = threading.Thread(target=run_test)
    thread.start()
    thread.join()
    assert results == ['success', 'success']


@given(integers())
def test_can_call_an_argument_f(f):
    # See issue https://github.com/HypothesisWorks/hypothesis-python/issues/38
    # for details
    pass


Litter = namedtuple('Litter', ('kitten1', 'kitten2'))


@given(builds(Litter, integers(), integers()))
def test_named_tuples_are_of_right_type(litter):
    assert isinstance(litter, Litter)


@fails_with(AttributeError)
@given(integers().map(lambda x: x.nope))
@settings(perform_health_check=False)
def test_fails_in_reify(x):
    pass


@given(text(u'a'))
def test_a_text(x):
    assert set(x).issubset(set(u'a'))


@given(text(u''))
def test_empty_text(x):
    assert not x


@given(text(u'abcdefg'))
def test_mixed_text(x):
    assert set(x).issubset(set(u'abcdefg'))


def test_when_set_to_no_simplifies_runs_failing_example_twice():
    failing = [0]

    @given(integers())
    @settings(max_shrinks=0, max_examples=100)
    def foo(x):
        if x > 11:
            note('Lo')
            failing[0] += 1
            assert False

    with settings(verbosity=Verbosity.normal):
        with raises(AssertionError):
            with capture_out() as out:
                foo()
    assert failing == [2]
    assert 'Falsifying example' in out.getvalue()
    assert 'Lo' in out.getvalue()


@given(integers())
@settings(max_examples=1)
def test_should_not_fail_if_max_examples_less_than_min_satisfying(x):
    pass


@given(integers().filter(lambda x: x % 4 == 0))
def test_filtered_values_satisfy_condition(i):
    assert i % 4 == 0


def nameless_const(x):
    def f(u, v):
        return u
    return functools.partial(f, x)


@given(sets(booleans()).map(nameless_const(2)))
def test_can_map_nameless(x):
    assert x == 2


@given(
    integers(0, 10).flatmap(nameless_const(just(3))))
def test_can_flatmap_nameless(x):
    assert x == 3


def test_can_be_used_with_none_module():
    def test_is_cool(i):
        pass
    test_is_cool.__module__ = None
    test_is_cool = given(integers())(test_is_cool)
    test_is_cool()


def test_does_not_print_notes_if_all_succeed():
    @given(integers())
    @settings(verbosity=Verbosity.normal)
    def test(i):
        note('Hi there')
    with capture_out() as out:
        with reporting.with_reporter(reporting.default):
            test()
    assert not out.getvalue()


def test_prints_notes_once_on_failure():
    @given(lists(integers()))
    @settings(database=None, verbosity=Verbosity.normal)
    def test(xs):
        note('Hi there')
        if sum(xs) <= 100:
            raise ValueError()

    with capture_out() as out:
        with reporting.with_reporter(reporting.default):
            with raises(ValueError):
                test()
    lines = out.getvalue().strip().splitlines()
    assert lines.count('Hi there') == 1


@given(lists(max_size=0))
def test_empty_lists(xs):
    assert xs == []

hypothesis-python-3.44.1/tests/cover/test_threading.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import threading

import hypothesis.strategies as st
from hypothesis import given


def test_can_run_given_in_thread():
    has_run_successfully = [False]

    @given(st.integers())
    def test(n):
        has_run_successfully[0] = True

    t = threading.Thread(target=test)
    t.start()
    t.join()
    assert has_run_successfully[0]

hypothesis-python-3.44.1/tests/cover/test_timeout.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import time

import pytest

from hypothesis import given, settings
from hypothesis.errors import Unsatisfiable, HypothesisDeprecationWarning
from tests.common.utils import fails, fails_with, validate_deprecation
from hypothesis.strategies import integers


def test_hitting_timeout_is_deprecated():
    with validate_deprecation():
        @settings(timeout=0.1)
        @given(integers())
        def test_slow_test_times_out(x):
            time.sleep(0.05)

    with validate_deprecation():
        with pytest.raises(Unsatisfiable):
            test_slow_test_times_out()


# Cheap hack to make test functions which fail on their second invocation
calls = [0, 0, 0, 0]

with validate_deprecation():
    timeout_settings = settings(timeout=0.2, min_satisfying_examples=2)


# The following tests exist to test that verifiers start their timeout
# from when the test first executes, not from when it is defined.
@fails
@given(integers())
@timeout_settings
def test_slow_failing_test_1(x):
    time.sleep(0.05)
    assert not calls[0]
    calls[0] = 1


@fails
@timeout_settings
@given(integers())
def test_slow_failing_test_2(x):
    time.sleep(0.05)
    assert not calls[1]
    calls[1] = 1


@fails
@given(integers())
@timeout_settings
def test_slow_failing_test_3(x):
    time.sleep(0.05)
    assert not calls[2]
    calls[2] = 1


@fails
@timeout_settings
@given(integers())
def test_slow_failing_test_4(x):
    time.sleep(0.05)
    assert not calls[3]
    calls[3] = 1


with validate_deprecation():
    strict_timeout_settings = settings(timeout=60)


# @checks_deprecated_behaviour only works if test passes otherwise
@fails_with(HypothesisDeprecationWarning)
@strict_timeout_settings
@given(integers())
def test_deprecated_behaviour(i):
    time.sleep(100)


@fails_with(AssertionError)
@strict_timeout_settings
@given(integers())
def test_does_not_hide_errors_with_deprecation(i):
    time.sleep(100)
    assert False

hypothesis-python-3.44.1/tests/cover/test_type_lookup.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import given, infer
from hypothesis.errors import InvalidArgument, ResolutionFailed
from hypothesis.searchstrategy import types
from hypothesis.internal.compat import PY2, integer_types

# Build a set of all types output by core strategies
blacklist = [
    'builds', 'iterables', 'permutations', 'random_module', 'randoms',
    'runner', 'sampled_from', 'streaming', 'choices',
]
types_with_core_strat = set(integer_types)
for thing in (getattr(st, name) for name in sorted(st._strategies)
              if name in dir(st) and name not in blacklist):
    for n in range(3):
        try:
            ex = thing(*([st.nothing()] * n)).example()
            types_with_core_strat.add(type(ex))
            break
        except (TypeError, InvalidArgument):
            continue


@pytest.mark.parametrize('typ', sorted(types_with_core_strat, key=str))
def test_resolve_core_strategies(typ):
    @given(st.from_type(typ))
    def inner(ex):
        if PY2 and issubclass(typ, integer_types):
            assert isinstance(ex, integer_types)
        else:
            assert isinstance(ex, typ)

    inner()


def test_lookup_knows_about_all_core_strategies():
    cannot_lookup = types_with_core_strat - set(types._global_type_lookup)
    assert not cannot_lookup


def test_lookup_keys_are_types():
    with pytest.raises(InvalidArgument):
        st.register_type_strategy('int', st.integers())
    assert 'int' not in types._global_type_lookup


def test_lookup_values_are_strategies():
    with pytest.raises(InvalidArgument):
        st.register_type_strategy(int, 42)
    assert 42 not in types._global_type_lookup.values()


@pytest.mark.parametrize('typ', sorted(types_with_core_strat, key=str))
def test_lookup_overrides_defaults(typ):
    sentinel = object()
    try:
        strat = types._global_type_lookup[typ]
        st.register_type_strategy(typ, st.just(sentinel))
        assert st.from_type(typ).example() is sentinel
    finally:
        st.register_type_strategy(typ, strat)
        st.from_type.__clear_cache()
    assert st.from_type(typ).example() is not sentinel


class ParentUnknownType(object):
    pass


class UnknownType(ParentUnknownType):

    def __init__(self, arg):
        pass


def test_custom_type_resolution():
    fails = st.from_type(UnknownType)
    with pytest.raises(ResolutionFailed):
        fails.example()
    sentinel = object()
    try:
        st.register_type_strategy(UnknownType, st.just(sentinel))
        assert st.from_type(UnknownType).example() is sentinel
        # Also covered by registration of child class
        assert st.from_type(ParentUnknownType).example() is sentinel
    finally:
        types._global_type_lookup.pop(UnknownType)
        st.from_type.__clear_cache()
    fails = st.from_type(UnknownType)
    with pytest.raises(ResolutionFailed):
        fails.example()


def test_errors_if_generic_resolves_empty():
    try:
        st.register_type_strategy(UnknownType, lambda _: st.nothing())
        fails_1 = st.from_type(UnknownType)
        with pytest.raises(ResolutionFailed):
            fails_1.example()
        fails_2 = st.from_type(ParentUnknownType)
        with pytest.raises(ResolutionFailed):
            fails_2.example()
    finally:
        types._global_type_lookup.pop(UnknownType)
        st.from_type.__clear_cache()


def test_cannot_register_empty():
    # Cannot register and did not register
    with pytest.raises(InvalidArgument):
        st.register_type_strategy(UnknownType, st.nothing())
    fails = st.from_type(UnknownType)
    with pytest.raises(ResolutionFailed):
        fails.example()
    assert UnknownType not in types._global_type_lookup


def test_pulic_interface_works():
    st.from_type(int).example()
    fails = st.from_type('not a type or annotated function')
    with pytest.raises(InvalidArgument):
        fails.example()


def test_given_can_infer_on_py2():
    # Editing annotations before decorating is hilariously awkward, but works!
    def inner(a):
        pass
    inner.__annotations__ = {'a': int}
    given(a=infer)(inner)()

hypothesis-python-3.44.1/tests/cover/test_uuids.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
from hypothesis import find, given, settings


@given(st.lists(st.uuids()))
def test_are_unique(ls):
    assert len(set(ls)) == len(ls)


@settings(deadline=None)
@given(st.lists(st.uuids()), st.randoms())
def test_retains_uniqueness_in_simplify(ls, rnd):
    ts = find(st.lists(st.uuids()), lambda x: len(x) >= 5, random=rnd)
    assert len(ts) == len(set(ts)) == 5


@pytest.mark.parametrize('version', (1, 2, 3, 4, 5))
def test_can_generate_specified_version(version):
    @given(st.uuids(version=version))
    def inner(uuid):
        assert version == uuid.version

    inner()

hypothesis-python-3.44.1/tests/cover/test_validation.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import find, given from hypothesis.errors import InvalidArgument from tests.common.utils import fails_with from hypothesis.strategies import sets, lists, floats, booleans, \ integers, recursive, frozensets def test_errors_when_given_varargs(): @given(integers()) def has_varargs(*args): pass with pytest.raises(InvalidArgument) as e: has_varargs() assert u'varargs' in e.value.args[0] def test_varargs_without_positional_arguments_allowed(): @given(somearg=integers()) def has_varargs(somearg, *args): pass def test_errors_when_given_varargs_and_kwargs_with_positional_arguments(): @given(integers()) def has_varargs(*args, **kw): pass with pytest.raises(InvalidArgument) as e: has_varargs() assert u'varargs' in e.value.args[0] def test_varargs_and_kwargs_without_positional_arguments_allowed(): @given(somearg=integers()) def has_varargs(*args, **kw): pass def test_bare_given_errors(): @given() def test(): pass with pytest.raises(InvalidArgument): test() def test_errors_on_unwanted_kwargs(): @given(hello=int, world=int) def greet(world): pass with pytest.raises(InvalidArgument): greet() def test_errors_on_too_many_positional_args(): @given(integers(), int, int) def foo(x, y): pass with pytest.raises(InvalidArgument): foo() def test_errors_on_any_varargs(): @given(integers()) def oops(*args): pass with pytest.raises(InvalidArgument): oops() def test_can_put_arguments_in_the_middle(): @given(y=integers()) def foo(x, y, z): pass foo(1, 2) def test_float_ranges(): with pytest.raises(InvalidArgument): floats(float(u'nan'), 0).example() with pytest.raises(InvalidArgument): floats(1, -1).example() def test_float_range_and_allow_nan_cannot_both_be_enabled(): with pytest.raises(InvalidArgument): floats(min_value=1, allow_nan=True).example() with pytest.raises(InvalidArgument): floats(max_value=1, allow_nan=True).example() def 
test_float_finite_range_and_allow_infinity_cannot_both_be_enabled(): with pytest.raises(InvalidArgument): floats(0, 1, allow_infinity=True).example() def test_does_not_error_if_min_size_is_bigger_than_default_size(): lists(integers(), min_size=50).example() sets(integers(), min_size=50).example() frozensets(integers(), min_size=50).example() lists(integers(), min_size=50, unique=True).example() def test_list_unique_and_unique_by_cannot_both_be_enabled(): @given(lists(integers(), unique=True, unique_by=lambda x: x)) def boom(t): pass with pytest.raises(InvalidArgument) as e: boom() assert 'unique ' in e.value.args[0] assert 'unique_by' in e.value.args[0] def test_an_average_size_must_be_positive(): with pytest.raises(InvalidArgument): lists(integers(), average_size=0.0).example() with pytest.raises(InvalidArgument): lists(integers(), average_size=-1.0).example() def test_an_average_size_may_be_zero_if_max_size_is(): lists(integers(), average_size=0.0, max_size=0) def test_min_before_max(): with pytest.raises(InvalidArgument): integers(min_value=1, max_value=0).validate() def test_filter_validates(): with pytest.raises(InvalidArgument): integers(min_value=1, max_value=0).filter(bool).validate() def test_recursion_validates_base_case(): with pytest.raises(InvalidArgument): recursive( integers(min_value=1, max_value=0), lists, ).validate() def test_recursion_validates_recursive_step(): with pytest.raises(InvalidArgument): recursive( integers(), lambda x: lists(x, min_size=3, max_size=1), ).validate() @fails_with(InvalidArgument) @given(x=integers()) def test_stuff_keyword(x=1): pass @fails_with(InvalidArgument) @given(integers()) def test_stuff_positional(x=1): pass @fails_with(InvalidArgument) @given(integers(), integers()) def test_too_many_positional(x): pass def test_given_warns_on_use_of_non_strategies(): @given(bool) def test(x): pass with pytest.raises(InvalidArgument): test() def test_given_warns_when_mixing_positional_with_keyword(): @given(booleans(), 
y=booleans()) def test(x, y): pass with pytest.raises(InvalidArgument): test() def test_cannot_find_non_strategies(): with pytest.raises(InvalidArgument): find(bool, bool) hypothesis-python-3.44.1/tests/cover/test_verbosity.py000066400000000000000000000071461321557765100232640ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from contextlib import contextmanager import pytest from hypothesis import find, given from hypothesis.errors import InvalidArgument from tests.common.utils import fails, capture_out from hypothesis._settings import Verbosity, settings from hypothesis.reporting import default as default_reporter from hypothesis.reporting import with_reporter from hypothesis.strategies import lists, booleans, integers @contextmanager def capture_verbosity(level): with capture_out() as o: with with_reporter(default_reporter): with settings(verbosity=level): yield o def test_prints_intermediate_in_success(): with capture_verbosity(Verbosity.verbose) as o: @given(booleans()) def test_works(x): pass test_works() assert 'Trying example' in o.getvalue() def test_does_not_log_in_quiet_mode(): with capture_verbosity(Verbosity.quiet) as o: @fails @given(integers()) def test_foo(x): assert False test_foo() assert not o.getvalue() def test_includes_progress_in_verbose_mode(): with 
capture_verbosity(Verbosity.verbose) as o: with settings(verbosity=Verbosity.verbose): find(lists(integers()), lambda x: sum(x) >= 1000000) out = o.getvalue() assert out assert u'Shrunk example' in out assert u'Found satisfying example' in out def test_prints_initial_attempts_on_find(): with capture_verbosity(Verbosity.verbose) as o: with settings(verbosity=Verbosity.verbose): seen = [] def not_first(x): if not seen: seen.append(x) return False return x not in seen find(integers(), not_first) assert u'Trying example' in o.getvalue() def test_includes_intermediate_results_in_verbose_mode(): with capture_verbosity(Verbosity.verbose) as o: @fails @given(lists(integers())) def test_foo(x): assert sum(x) < 1000000 test_foo() lines = o.getvalue().splitlines() assert len([l for l in lines if u'example' in l]) > 2 assert len([l for l in lines if u'AssertionError' in l]) VERBOSITIES = [ Verbosity.quiet, Verbosity.normal, Verbosity.verbose, Verbosity.debug ] def test_verbosity_can_be_accessed_by_name(): for f in VERBOSITIES: assert f is Verbosity.by_name(f.name) def test_verbosity_is_sorted(): assert VERBOSITIES == sorted(VERBOSITIES) def test_hash_verbosity(): x = {} for f in VERBOSITIES: x[f] = f for k, v in x.items(): assert k == v assert k is v def test_verbosities_are_inequal(): for f in VERBOSITIES: for g in VERBOSITIES: if f is not g: assert f != g assert (f <= g) or (g <= f) def test_verbosity_of_bad_name(): with pytest.raises(InvalidArgument): Verbosity.by_name('cabbage') hypothesis-python-3.44.1/tests/cover/test_weird_settings.py000066400000000000000000000016641321557765100242670ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import strategies as st from hypothesis import given, settings def test_setting_database_to_none_disables_the_database(): @given(st.booleans()) @settings(database_file=None) def test(b): pass test() hypothesis-python-3.44.1/tests/datetime/000077500000000000000000000000001321557765100202735ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/datetime/__init__.py000066400000000000000000000012001321557765100223750ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/datetime/test_dates.py000066400000000000000000000030411321557765100230020ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from tests.common.debug import minimal, find_any from tests.common.utils import checks_deprecated_behaviour from hypothesis.extra.datetime import dates from hypothesis.internal.compat import hrange @checks_deprecated_behaviour def test_can_find_after_the_year_2000(): assert minimal(dates(), lambda x: x.year > 2000).year == 2001 @checks_deprecated_behaviour def test_can_find_before_the_year_2000(): assert minimal(dates(), lambda x: x.year < 2000).year == 1999 @checks_deprecated_behaviour def test_can_find_each_month(): for month in hrange(1, 13): find_any(dates(), lambda x: x.month == month) @checks_deprecated_behaviour def test_min_year_is_respected(): assert minimal(dates(min_year=2003)).year == 2003 @checks_deprecated_behaviour def test_max_year_is_respected(): assert minimal(dates(max_year=1998)).year == 1998 hypothesis-python-3.44.1/tests/datetime/test_datetime.py000066400000000000000000000112471321557765100235050ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from datetime import MINYEAR import pytz import pytest from flaky import flaky from hypothesis import find, given, assume, settings, unlimited from hypothesis.errors import InvalidArgument from tests.common.debug import minimal, find_any from tests.common.utils import checks_deprecated_behaviour from hypothesis.extra.datetime import datetimes from hypothesis.internal.compat import hrange @checks_deprecated_behaviour def test_can_find_after_the_year_2000(): assert minimal(datetimes(), lambda x: x.year > 2000).year == 2001 @checks_deprecated_behaviour def test_can_find_before_the_year_2000(): assert minimal(datetimes(), lambda x: x.year < 2000).year == 1999 @checks_deprecated_behaviour def test_can_find_each_month(): for month in hrange(1, 13): find_any(datetimes(), lambda x: x.month == month) @checks_deprecated_behaviour def test_can_find_midnight(): datetimes().filter( lambda x: x.hour == x.minute == x.second == 0 ).example() @checks_deprecated_behaviour def test_can_find_non_midnight(): assert minimal(datetimes(), lambda x: x.hour != 0).hour == 1 @checks_deprecated_behaviour def test_can_find_off_the_minute(): find_any(datetimes(), lambda x: x.second != 0) @checks_deprecated_behaviour def test_can_find_on_the_minute(): find_any(datetimes(), lambda x: x.second == 0) @checks_deprecated_behaviour def test_simplifies_towards_midnight(): d = minimal(datetimes()) assert d.hour == 0 assert d.minute == 0 assert d.second == 0 assert d.microsecond == 0 @checks_deprecated_behaviour def test_can_generate_naive_datetime(): find_any(datetimes(allow_naive=True), lambda d: d.tzinfo is None) @checks_deprecated_behaviour def test_can_generate_non_naive_datetime(): assert minimal( datetimes(allow_naive=True), lambda d: d.tzinfo).tzinfo == pytz.UTC @checks_deprecated_behaviour def 
test_can_generate_non_utc(): datetimes().filter( lambda d: assume(d.tzinfo) and d.tzinfo.zone != u'UTC' ).example() @checks_deprecated_behaviour @given(datetimes(timezones=[])) def test_naive_datetimes_are_naive(dt): assert not dt.tzinfo @checks_deprecated_behaviour @given(datetimes(allow_naive=False)) def test_timezone_aware_datetimes_are_timezone_aware(dt): assert dt.tzinfo @checks_deprecated_behaviour def test_restricts_to_allowed_set_of_timezones(): timezones = list(map(pytz.timezone, list(pytz.all_timezones)[:3])) x = minimal(datetimes(timezones=timezones)) assert any(tz.zone == x.tzinfo.zone for tz in timezones) @checks_deprecated_behaviour def test_min_year_is_respected(): assert minimal(datetimes(min_year=2003)).year == 2003 @checks_deprecated_behaviour def test_max_year_is_respected(): assert minimal(datetimes(max_year=1998)).year == 1998 @checks_deprecated_behaviour def test_validates_year_arguments_in_range(): with pytest.raises(InvalidArgument): datetimes(min_year=-10 ** 6).example() with pytest.raises(InvalidArgument): datetimes(max_year=-10 ** 6).example() with pytest.raises(InvalidArgument): datetimes(min_year=10 ** 6).example() with pytest.raises(InvalidArgument): datetimes(max_year=10 ** 6).example() @checks_deprecated_behaviour def test_needs_permission_for_no_timezones(): with pytest.raises(InvalidArgument): datetimes(allow_naive=False, timezones=[]).example() @checks_deprecated_behaviour @flaky(max_runs=3, min_passes=1) def test_bordering_on_a_leap_year(): x = find( datetimes(min_year=2002, max_year=2005), lambda x: x.month == 2 and x.day == 29, settings=settings( database=None, max_examples=10 ** 7, timeout=unlimited) ) assert x.year == 2004 @checks_deprecated_behaviour def test_overflow_in_simplify(): """This is a test that we don't trigger a pytz bug when we're simplifying around MINYEAR where valid dates can produce an overflow error.""" minimal( datetimes(max_year=MINYEAR), lambda x: x.tzinfo != pytz.UTC ) 
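test_bordering_on_a_leap_year above asks Hypothesis to search for a February 29th between 2002 and 2005 and asserts the hit lands in 2004. As a sketch of why that assertion is safe (plain stdlib arithmetic, not the Hypothesis search itself; the helper name is ours):

```python
import calendar
from datetime import date


def first_leap_day(min_year, max_year):
    """Return the first valid February 29th in [min_year, max_year], or None."""
    for year in range(min_year, max_year + 1):
        if calendar.isleap(year):
            return date(year, 2, 29)
    return None
```

Within 2002-2005 only 2004 is a leap year, so any datetime satisfying `x.month == 2 and x.day == 29` must have `x.year == 2004`, however the randomised search finds it.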
hypothesis-python-3.44.1/tests/datetime/test_times.py000066400000000000000000000050121321557765100230230ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytz from hypothesis import given, assume from tests.common.debug import minimal, find_any from tests.common.utils import checks_deprecated_behaviour from hypothesis.extra.datetime import times @checks_deprecated_behaviour def test_can_find_midnight(): find_any(times(), lambda x: x.hour == x.minute == x.second == 0) @checks_deprecated_behaviour def test_can_find_non_midnight(): assert minimal(times(), lambda x: x.hour != 0).hour == 1 @checks_deprecated_behaviour def test_can_find_off_the_minute(): find_any(times(), lambda x: x.second != 0) @checks_deprecated_behaviour def test_can_find_on_the_minute(): find_any(times(), lambda x: x.second == 0) @checks_deprecated_behaviour def test_simplifies_towards_midnight(): d = minimal(times()) assert d.hour == 0 assert d.minute == 0 assert d.second == 0 assert d.microsecond == 0 @checks_deprecated_behaviour def test_can_generate_naive_time(): find_any(times(allow_naive=True), lambda d: d.tzinfo is None) @checks_deprecated_behaviour def test_can_generate_non_naive_time(): assert minimal( times(allow_naive=True), lambda d: d.tzinfo).tzinfo == pytz.UTC @checks_deprecated_behaviour def
test_can_generate_non_utc(): times().filter( lambda d: assume(d.tzinfo) and d.tzinfo.zone != u'UTC' ).example() @checks_deprecated_behaviour @given(times(timezones=[])) def test_naive_times_are_naive(dt): assert not dt.tzinfo @checks_deprecated_behaviour @given(times(allow_naive=False)) def test_timezone_aware_times_are_timezone_aware(dt): assert dt.tzinfo @checks_deprecated_behaviour def test_restricts_to_allowed_set_of_timezones(): timezones = list(map(pytz.timezone, list(pytz.all_timezones)[:3])) x = minimal(times(timezones=timezones)) assert any(tz.zone == x.tzinfo.zone for tz in timezones) hypothesis-python-3.44.1/tests/datetime/test_timezones.py000066400000000000000000000057461321557765100237350ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import datetime as dt import pytz import pytest from hypothesis import given, assume from hypothesis.errors import InvalidArgument from tests.common.debug import minimal from hypothesis.extra.pytz import timezones from hypothesis.strategies import times, datetimes, sampled_from def test_utc_is_minimal(): assert pytz.UTC is minimal(timezones()) def test_can_generate_non_naive_time(): assert minimal(times(timezones=timezones()), lambda d: d.tzinfo).tzinfo == pytz.UTC def test_can_generate_non_naive_datetime(): assert minimal(datetimes(timezones=timezones()), lambda d: d.tzinfo).tzinfo == pytz.UTC @given(datetimes(timezones=timezones())) def test_timezone_aware_datetimes_are_timezone_aware(dt): assert dt.tzinfo is not None @given(sampled_from(['min_value', 'max_value']), datetimes(timezones=timezones())) def test_datetime_bounds_must_be_naive(name, val): with pytest.raises(InvalidArgument): datetimes(**{name: val}).validate() def test_underflow_in_simplify(): # we shouldn't trigger a pytz bug when we're simplifying minimal(datetimes(max_value=dt.datetime.min + dt.timedelta(days=3), timezones=timezones()), lambda x: x.tzinfo != pytz.UTC) def test_overflow_in_simplify(): # we shouldn't trigger a pytz bug when we're simplifying minimal(datetimes(min_value=dt.datetime.max - dt.timedelta(days=3), timezones=timezones()), lambda x: x.tzinfo != pytz.UTC) def test_timezones_arg_to_datetimes_must_be_search_strategy(): with pytest.raises(InvalidArgument): datetimes(timezones=pytz.all_timezones).validate() with pytest.raises(InvalidArgument): tz = [pytz.timezone(t) for t in pytz.all_timezones] datetimes(timezones=tz).validate() @given(times(timezones=timezones())) def test_timezone_aware_times_are_timezone_aware(dt): assert dt.tzinfo is not None def test_can_generate_non_utc(): times(timezones=timezones()).filter( lambda d: assume(d.tzinfo) and d.tzinfo.zone != u'UTC' ).validate() 
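The bounds-validation tests above hinge on the naive/aware distinction: `min_value`/`max_value` must be naive, while generated values carry a `tzinfo`. A minimal stdlib illustration of that distinction (using Python 3's `datetime.timezone` rather than pytz, purely for self-containedness; the helper name is ours):

```python
from datetime import datetime, timezone


def is_aware(value):
    # Python's own definition: aware iff tzinfo is set and
    # utcoffset() returns a value.
    return value.tzinfo is not None and value.utcoffset() is not None


naive = datetime(2017, 1, 1, 12, 0)
aware = datetime(2017, 1, 1, 12, 0, tzinfo=timezone.utc)
```

Passing `aware` as a bound is what `test_datetime_bounds_must_be_naive` expects to be rejected with InvalidArgument.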
@given(sampled_from(['min_value', 'max_value']), times(timezones=timezones())) def test_time_bounds_must_be_naive(name, val): with pytest.raises(InvalidArgument): times(**{name: val}).validate() hypothesis-python-3.44.1/tests/django/000077500000000000000000000000001321557765100177415ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/django/__init__.py000066400000000000000000000012001321557765100220430ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/django/manage.py000077500000000000000000000024131321557765100215460ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import os import sys from hypothesis import HealthCheck, settings, unlimited from tests.common.setup import run if __name__ == u'__main__': run() settings.register_profile('default', settings( timeout=unlimited, use_coverage=False, suppress_health_check=[HealthCheck.too_slow], )) settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default')) os.environ.setdefault( u'DJANGO_SETTINGS_MODULE', u'tests.django.toys.settings') from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) hypothesis-python-3.44.1/tests/django/toys/000077500000000000000000000000001321557765100207375ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/django/toys/__init__.py000066400000000000000000000012001321557765100230410ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/django/toys/settings.py000066400000000000000000000055001321557765100231510ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """Django settings for toys project. For more information on this file, see https://docs.djangoproject.com/en/1.7/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.7/ref/settings/ """ # Build paths inside the project like this: os.path.join(BASE_DIR, ...) from __future__ import division, print_function, absolute_import import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = u'o0zlv@74u4e3s+o0^h$+tlalh&$r(7hbx01g4^h5-3gizj%hub' # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = True TEMPLATE_DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = ( u'django.contrib.admin', u'django.contrib.auth', u'django.contrib.contenttypes', u'django.contrib.sessions', u'django.contrib.messages', u'django.contrib.staticfiles', u'tests.django.toystore', ) MIDDLEWARE_CLASSES = ( u'django.contrib.sessions.middleware.SessionMiddleware', u'django.middleware.common.CommonMiddleware', u'django.middleware.csrf.CsrfViewMiddleware', u'django.contrib.auth.middleware.AuthenticationMiddleware', u'django.contrib.auth.middleware.SessionAuthenticationMiddleware', u'django.contrib.messages.middleware.MessageMiddleware', u'django.middleware.clickjacking.XFrameOptionsMiddleware', ) ROOT_URLCONF = u'tests.django.toys.urls' WSGI_APPLICATION = u'tests.django.toys.wsgi.application' # Database # https://docs.djangoproject.com/en/1.7/ref/settings/#databases DATABASES = { u'default': { u'ENGINE': u'django.db.backends.sqlite3', u'NAME': os.path.join(BASE_DIR, u'db.sqlite3'), } } # Internationalization # https://docs.djangoproject.com/en/1.7/topics/i18n/ LANGUAGE_CODE = u'en-us' TIME_ZONE = u'UTC' USE_I18N = True USE_L10N = True USE_TZ = os.environ.get('HYPOTHESIS_DJANGO_USETZ', 'TRUE') == 'TRUE' # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.7/howto/static-files/ STATIC_URL = u'/static/' hypothesis-python-3.44.1/tests/django/toys/urls.py000066400000000000000000000020011321557765100222670ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from django.contrib import admin from django.conf.urls import url, include patterns, namespace, name = admin.site.urls urlpatterns = [ # Examples: # url(r'^$', 'toys.views.home', name='home'), # url(r'^blog/', include('blog.urls')), url(r'^admin/', include((patterns, name), namespace=namespace)) ] hypothesis-python-3.44.1/tests/django/toys/wsgi.py000066400000000000000000000021071321557765100222620ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """WSGI config for toys project. It exposes the WSGI callable as a module-level variable named ``application``. 
For more information on this file, see https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/ """ from __future__ import division, print_function, absolute_import import os from django.core.wsgi import get_wsgi_application os.environ.setdefault(u'DJANGO_SETTINGS_MODULE', u'toys.settings') application = get_wsgi_application() hypothesis-python-3.44.1/tests/django/toystore/000077500000000000000000000000001321557765100216315ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/django/toystore/__init__.py000066400000000000000000000012001321557765100237330ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/django/toystore/admin.py000066400000000000000000000013021321557765100232670ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import hypothesis-python-3.44.1/tests/django/toystore/models.py000066400000000000000000000077521321557765100235010ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from django.db import models from django.core.exceptions import ValidationError class Company(models.Model): name = models.CharField(max_length=100, unique=True) class Store(models.Model): name = models.CharField(max_length=100, unique=True) company = models.ForeignKey(Company, null=False, on_delete=models.CASCADE) class CharmField(models.Field): def db_type(self, connection): return u'char(1)' class CustomishField(models.Field): def db_type(self, connection): return u'char(1)' class Customish(models.Model): customish = CustomishField() class Customer(models.Model): name = models.CharField(max_length=100, unique=True) email = models.EmailField(max_length=100, unique=True) gender = models.CharField(max_length=50, null=True) age = models.IntegerField() birthday = models.DateTimeField() class Charming(models.Model): charm = CharmField() class CouldBeCharming(models.Model): charm = CharmField(null=True) class SelfLoop(models.Model): me = models.ForeignKey(u'self', null=True, on_delete=models.SET_NULL) class LoopA(models.Model): b = models.ForeignKey(u'LoopB', 
null=False, on_delete=models.CASCADE) class LoopB(models.Model): a = models.ForeignKey(u'LoopA', null=True, on_delete=models.SET_NULL) class ManyNumerics(models.Model): i1 = models.IntegerField() i2 = models.SmallIntegerField() i3 = models.BigIntegerField() p1 = models.PositiveIntegerField() p2 = models.PositiveSmallIntegerField() d = models.DecimalField(decimal_places=2, max_digits=5) class ManyTimes(models.Model): time = models.TimeField() date = models.DateField() duration = models.DurationField() class OddFields(models.Model): uuid = models.UUIDField() slug = models.SlugField() ipv4 = models.GenericIPAddressField(protocol='IPv4') ipv6 = models.GenericIPAddressField(protocol='IPv6') class CustomishDefault(models.Model): customish = CustomishField(default=u'b') class MandatoryComputed(models.Model): name = models.CharField(max_length=100, unique=True) company = models.ForeignKey(Company, null=False, on_delete=models.CASCADE) def __init__(self, **kw): if u'company' in kw: raise RuntimeError() cname = kw[u'name'] + u'_company' kw[u'company'] = Company.objects.create(name=cname) super(MandatoryComputed, self).__init__(**kw) def validate_even(value): if value % 2 != 0: raise ValidationError('') class RestrictedFields(models.Model): text_field_4 = models.TextField(max_length=4, blank=True) char_field_4 = models.CharField(max_length=4, blank=True) choice_field_text = models.TextField( choices=(('foo', 'Foo'), ('bar', 'Bar')) ) choice_field_int = models.IntegerField( choices=((1, 'First'), (2, 'Second')) ) null_choice_field_int = models.IntegerField( choices=((1, 'First'), (2, 'Second')), null=True, blank=True ) choice_field_grouped = models.TextField(choices=( ('Audio', (('vinyl', 'Vinyl'), ('cd', 'CD'),)), ('Video', (('vhs', 'VHS Tape'), ('dvd', 'DVD'),)), ('unknown', 'Unknown'), )) even_number_field = models.IntegerField( validators=[validate_even] ) non_blank_text_field = models.TextField(blank=False) 
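`validate_even` above follows the standard Django validator convention: raise on a bad value, return nothing on a good one, and `models(RestrictedFields)` must only generate instances that pass it. A framework-free sketch of that contract (`ValueError` stands in for Django's `ValidationError`, and `is_valid_even` is our illustrative wrapper):

```python
def validate_even(value):
    """Raise if value is odd; return None, per the Django validator convention."""
    if value % 2 != 0:
        raise ValueError('%r is not an even number' % (value,))


def is_valid_even(value):
    """True iff the validator accepts the value."""
    try:
        validate_even(value)
    except ValueError:
        return False
    return True
```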
hypothesis-python-3.44.1/tests/django/toystore/test_basic_configuration.py000066400000000000000000000046501321557765100272570ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from unittest import TestCase as VanillaTestCase from django.db import IntegrityError from hypothesis import HealthCheck, given, settings from hypothesis.strategies import integers from hypothesis.extra.django import TestCase, TransactionTestCase from hypothesis.internal.compat import PYPY from tests.django.toystore.models import Company class SomeStuff(object): @settings(suppress_health_check=[HealthCheck.too_slow]) @given(integers()) def test_is_blank_slate(self, unused): Company.objects.create(name=u'MickeyCo') def test_normal_test_1(self): Company.objects.create(name=u'MickeyCo') def test_normal_test_2(self): Company.objects.create(name=u'MickeyCo') class TestConstraintsWithTransactions(SomeStuff, TestCase): pass if not PYPY: # xfail # This is excessively slow in general, but particularly on pypy. We just # disable it altogether there as it's a niche case. 
class TestConstraintsWithoutTransactions(SomeStuff, TransactionTestCase): pass class TestWorkflow(VanillaTestCase): def test_does_not_break_later_tests(self): def break_the_db(i): Company.objects.create(name=u'MickeyCo') Company.objects.create(name=u'MickeyCo') class LocalTest(TestCase): @given(integers().map(break_the_db)) @settings(perform_health_check=False) def test_does_not_break_other_things(self, unused): pass def test_normal_test_1(self): Company.objects.create(name=u'MickeyCo') t = LocalTest(u'test_normal_test_1') try: t.test_does_not_break_other_things() except IntegrityError: pass t.test_normal_test_1() hypothesis-python-3.44.1/tests/django/toystore/test_given_models.py000066400000000000000000000130361321557765100257200ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
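The point of test_basic_configuration.py above is that hypothesis.extra.django's TestCase resets database state between tests and between examples, so creating the same unique name in several tests never collides. A toy model of that blank-slate discipline using plain unittest and an in-memory set standing in for the table (all names here are illustrative, not Django or Hypothesis APIs):

```python
import unittest


class FakeTable(object):
    """A stand-in for a table with a unique 'name' column."""

    def __init__(self):
        self.names = set()

    def create(self, name):
        if name in self.names:
            raise ValueError('duplicate name')  # plays the IntegrityError role
        self.names.add(name)


class BlankSlateTest(unittest.TestCase):
    def setUp(self):
        # Fresh state per test, like a per-test transaction rollback.
        self.table = FakeTable()

    def test_create_once(self):
        self.table.create(u'MickeyCo')

    def test_create_again(self):
        # Succeeds only because setUp gave this test a blank slate.
        self.table.create(u'MickeyCo')


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(BlankSlateTest))
```

Without the per-test reset, the second `create(u'MickeyCo')` would fail exactly the way `break_the_db` does in TestWorkflow above.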
# # END HEADER from __future__ import division, print_function, absolute_import import datetime as dt from uuid import UUID from django.conf import settings as django_settings from hypothesis import HealthCheck, given, assume, settings from hypothesis.errors import InvalidArgument from hypothesis.strategies import just, lists from hypothesis.extra.django import TestCase, TransactionTestCase from hypothesis.internal.compat import text_type from tests.django.toystore.models import Store, Company, Customer, \ SelfLoop, Customish, ManyTimes, OddFields, ManyNumerics, \ CustomishField, CouldBeCharming, CustomishDefault, RestrictedFields, \ MandatoryComputed from hypothesis.extra.django.models import models, default_value, \ add_default_field_mapping add_default_field_mapping(CustomishField, just(u'a')) class TestGetsBasicModels(TestCase): @given(models(Company)) def test_is_company(self, company): self.assertIsInstance(company, Company) self.assertIsNotNone(company.pk) @given(models(Store, company=models(Company))) def test_can_get_a_store(self, store): assert store.company.pk @given(lists(models(Company))) def test_can_get_multiple_models_with_unique_field(self, companies): assume(len(companies) > 1) for c in companies: self.assertIsNotNone(c.pk) self.assertEqual( len({c.pk for c in companies}), len({c.name for c in companies}) ) @settings(suppress_health_check=[HealthCheck.too_slow]) @given(models(Customer)) def test_is_customer(self, customer): self.assertIsInstance(customer, Customer) self.assertIsNotNone(customer.pk) self.assertIsNotNone(customer.email) @settings(suppress_health_check=[HealthCheck.too_slow]) @given(models(Customer)) def test_tz_presence(self, customer): if django_settings.USE_TZ: self.assertIsNotNone(customer.birthday.tzinfo) else: self.assertIsNone(customer.birthday.tzinfo) @given(models(CouldBeCharming)) def test_is_not_charming(self, not_charming): self.assertIsInstance(not_charming, CouldBeCharming) self.assertIsNotNone(not_charming.pk) 
self.assertIsNone(not_charming.charm) @given(models(SelfLoop)) def test_sl(self, sl): self.assertIsNone(sl.me) @given(lists(models(ManyNumerics))) def test_no_overflow_in_integer(self, manyints): pass @given(models(Customish)) def test_custom_field(self, x): assert x.customish == u'a' def test_mandatory_fields_are_mandatory(self): self.assertRaises(InvalidArgument, models, Store) def test_mandatory_computed_fields_are_mandatory(self): self.assertRaises(InvalidArgument, models, MandatoryComputed) def test_mandatory_computed_fields_may_not_be_provided(self): mc = models(MandatoryComputed, company=models(Company)) self.assertRaises(RuntimeError, mc.example) @given(models(MandatoryComputed, company=default_value)) def test_mandatory_computed_field_default(self, x): assert x.company.name == x.name + u'_company' @given(models(CustomishDefault)) def test_customish_default_generated(self, x): assert x.customish == u'a' @given(models(CustomishDefault, customish=default_value)) def test_customish_default_not_generated(self, x): assert x.customish == u'b' @given(models(OddFields)) def test_odd_fields(self, x): assert isinstance(x.uuid, UUID) assert isinstance(x.slug, text_type) assert u' ' not in x.slug assert isinstance(x.ipv4, str) assert len(x.ipv4.split('.')) == 4 assert all(int(i) in range(256) for i in x.ipv4.split('.')) assert isinstance(x.ipv6, text_type) assert set(x.ipv6).issubset(set(u'0123456789abcdefABCDEF:.')) @given(models(ManyTimes)) def test_time_fields(self, x): assert isinstance(x.time, dt.time) assert isinstance(x.date, dt.date) assert isinstance(x.duration, dt.timedelta) class TestsNeedingRollback(TransactionTestCase): def test_can_get_examples(self): for _ in range(200): models(Company).example() class TestRestrictedFields(TestCase): @given(models(RestrictedFields)) def test_constructs_valid_instance(self, instance): self.assertTrue(isinstance(instance, RestrictedFields)) instance.full_clean() self.assertLessEqual(len(instance.text_field_4), 4) 
self.assertLessEqual(len(instance.char_field_4), 4) self.assertIn(instance.choice_field_text, ('foo', 'bar')) self.assertIn(instance.choice_field_int, (1, 2)) self.assertIn(instance.null_choice_field_int, (1, 2, None)) self.assertEqual(instance.choice_field_grouped, instance.choice_field_grouped.lower()) self.assertEqual(instance.even_number_field % 2, 0) self.assertTrue(instance.non_blank_text_field) hypothesis-python-3.44.1/tests/django/toystore/views.py000066400000000000000000000013021321557765100233340ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import hypothesis-python-3.44.1/tests/fakefactory/000077500000000000000000000000001321557765100207755ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/fakefactory/__init__.py000066400000000000000000000012001321557765100230770ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. 
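test_odd_fields above validates generated IPv4 strings by hand, splitting on dots and range-checking each octet; the stdlib `ipaddress` module performs an equivalent (slightly stricter) check. A small sketch of both, with helper names of our own:

```python
import ipaddress


def looks_like_ipv4(s):
    """The manual check from test_odd_fields: four dot-separated octets,
    each in range(256). The isdigit() guard is our addition so non-numeric
    input returns False instead of raising."""
    parts = s.split('.')
    return len(parts) == 4 and all(
        p.isdigit() and int(p) in range(256) for p in parts)


def stdlib_says_ipv4(s):
    """Defer to the ipaddress module instead."""
    try:
        return ipaddress.ip_address(s).version == 4
    except ValueError:
        return False
```

The two agree on well-formed addresses; the stdlib parser is stricter on edge cases (e.g. recent Pythons reject leading zeros in octets).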
# # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/fakefactory/test_fake_factory.py000066400000000000000000000055431321557765100250520ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from faker.providers import BaseProvider from hypothesis import given from tests.common.debug import minimal from tests.common.utils import checks_deprecated_behaviour from hypothesis.extra.fakefactory import fake_factory class KittenProvider(BaseProvider): def kittens(self): return u'meow %d' % (self.random_number(digits=10),) @checks_deprecated_behaviour def test_kittens_meow(): @given(fake_factory(u'kittens', providers=[KittenProvider])) def inner(kitten): assert u'meow' in kitten inner() @checks_deprecated_behaviour def test_email(): @given(fake_factory(u'email')) def inner(email): assert u'@' in email inner() @checks_deprecated_behaviour def test_english_names_are_ascii(): @given(fake_factory(u'name', locale=u'en_US')) def inner(name): name.encode(u'ascii') inner() @checks_deprecated_behaviour def test_french_names_may_have_an_accent(): minimal( fake_factory(u'name', locale=u'fr_FR'), lambda x: u'é' not 
in x ) @checks_deprecated_behaviour def test_fake_factory_errors_with_both_locale_and_locales(): with pytest.raises(ValueError): fake_factory( u'name', locale=u'fr_FR', locales=[u'fr_FR', u'en_US'] ) @checks_deprecated_behaviour def test_fake_factory_errors_with_unsupported_locale(): with pytest.raises(ValueError): fake_factory( u'name', locale=u'badger_BADGER' ) @checks_deprecated_behaviour def test_factory_errors_with_source_for_unsupported_locale(): with pytest.raises(ValueError): fake_factory(u'state', locale=u'ja_JP') @checks_deprecated_behaviour def test_fake_factory_errors_if_any_locale_is_unsupported(): with pytest.raises(ValueError): fake_factory( u'name', locales=[u'fr_FR', u'en_US', u'mushroom_MUSHROOM'] ) @checks_deprecated_behaviour def test_fake_factory_errors_if_unsupported_method(): with pytest.raises(ValueError): fake_factory(u'spoon') @checks_deprecated_behaviour def test_fake_factory_errors_if_private_ish_method(): with pytest.raises(ValueError): fake_factory(u'_Generator__config') hypothesis-python-3.44.1/tests/nocover/000077500000000000000000000000001321557765100201525ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/nocover/__init__.py000066400000000000000000000012001321557765100222540ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER hypothesis-python-3.44.1/tests/nocover/test_boundary_exploration.py000066400000000000000000000027751321557765100260450ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.strategies as st from hypothesis import Verbosity, HealthCheck, find, given, reject, \ settings from hypothesis.errors import NoSuchExample @pytest.mark.parametrize('strat', [st.text(min_size=5)]) @settings( max_shrinks=0, deadline=None, suppress_health_check=[ HealthCheck.too_slow, HealthCheck.hung_test] ) @given(st.data()) def test_explore_arbitrary_function(strat, data): cache = {} def predicate(x): try: return cache[x] except KeyError: return cache.setdefault(x, data.draw(st.booleans(), label=repr(x))) try: find( strat, predicate, settings=settings( database=None, verbosity=Verbosity.quiet, max_shrinks=1000) ) except NoSuchExample: reject() hypothesis-python-3.44.1/tests/nocover/test_characters.py000066400000000000000000000025651321557765100237120ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import string from hypothesis import strategies as st from hypothesis import given IDENTIFIER_CHARS = string.ascii_letters + string.digits + '_' @given(st.characters(blacklist_characters=IDENTIFIER_CHARS)) def test_large_blacklist(c): assert c not in IDENTIFIER_CHARS @given(st.data()) def test_arbitrary_blacklist(data): blacklist = data.draw( st.text(st.characters(max_codepoint=1000), min_size=1)) ords = list(map(ord, blacklist)) c = data.draw( st.characters( blacklist_characters=blacklist, min_codepoint=max(0, min(ords) - 1), max_codepoint=max(0, max(ords) + 1), ) ) assert c not in blacklist hypothesis-python-3.44.1/tests/nocover/test_choices.py000066400000000000000000000033611321557765100232030ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
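`test_arbitrary_blacklist` above draws from a codepoint window one wider than the blacklist on each side, clamped at zero, then asserts the drawn character is excluded. The window computation can be sketched in plain Python (the helper names are illustrative, not Hypothesis API):

```python
def codepoint_window(blacklist: str) -> range:
    """Codepoint range one wider than the blacklist on each side,
    clamped at zero - the same window the test above draws from."""
    ords = [ord(c) for c in blacklist]
    return range(max(0, min(ords) - 1), max(0, max(ords) + 1) + 1)

def allowed_in_window(blacklist: str) -> list:
    """Characters inside the window that are not blacklisted."""
    return [chr(i) for i in codepoint_window(blacklist)
            if chr(i) not in blacklist]
```

Keeping the window tight around the blacklist makes the test likely to hit the excluded characters, so it actually exercises the `blacklist_characters` filter rather than passing vacuously.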
# # END HEADER from __future__ import division, print_function, absolute_import import hypothesis.strategies as st from hypothesis import given, settings, unlimited from tests.common.utils import raises, capture_out, \ checks_deprecated_behaviour from hypothesis.database import ExampleDatabase from hypothesis.internal.compat import hrange @checks_deprecated_behaviour def test_stability(): @given( st.lists(st.integers(0, 1000), unique=True, min_size=5), st.choices(), ) @settings( database=ExampleDatabase(), max_shrinks=10**6, timeout=unlimited, ) def test_choose_and_then_fail(ls, choice): for _ in hrange(100): choice(ls) assert False # Run once first for easier debugging with raises(AssertionError): test_choose_and_then_fail() with capture_out() as o: with raises(AssertionError): test_choose_and_then_fail() out1 = o.getvalue() with capture_out() as o: with raises(AssertionError): test_choose_and_then_fail() out2 = o.getvalue() assert out1 == out2 assert 'Choice #100:' in out1 hypothesis-python-3.44.1/tests/nocover/test_collective_minimization.py000066400000000000000000000033211321557765100265020ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
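`test_stability` above asserts that two failing runs of the same test print byte-identical output, including the `Choice #100:` notes. The underlying property is that a seeded PRNG replays the same choice sequence; a minimal stdlib illustration (not how Hypothesis implements replay internally, which goes through its example database):

```python
import random

def choice_transcript(seed: int, options, n: int) -> list:
    """Replay n choices from a seeded PRNG; the same seed always
    yields the same transcript, which is what makes failure
    output reproducible across runs."""
    rng = random.Random(seed)
    return [rng.choice(options) for _ in range(n)]
```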
# # END HEADER from __future__ import division, print_function, absolute_import import pytest from flaky import flaky from hypothesis import find, settings from tests.common import standard_types from hypothesis.errors import NoSuchExample from hypothesis.strategies import lists @pytest.mark.parametrize( u'spec', standard_types, ids=list(map(repr, standard_types))) @flaky(min_passes=1, max_runs=2) def test_can_collectively_minimize(spec): """This should generally exercise strategies' strictly_simpler heuristic by putting us in a state where example cloning is required to get to the answer fast enough.""" n = 10 def distinct_reprs(x): result = set() for t in x: result.add(repr(t)) if len(result) >= 2: return True return False try: xs = find( lists(spec, min_size=n, max_size=n), distinct_reprs, settings=settings(max_shrinks=100, max_examples=2000)) assert len(xs) == n assert 2 <= len(set((map(repr, xs)))) <= 3 except NoSuchExample: pass hypothesis-python-3.44.1/tests/nocover/test_compat.py000066400000000000000000000070411321557765100230500ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
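The `distinct_reprs` predicate in `test_can_collectively_minimize` above returns `True` as soon as it has seen two elements with different reprs. Factored out as a standalone function, with an explicit threshold parameter added for illustration:

```python
def distinct_reprs(xs, threshold: int = 2) -> bool:
    """True as soon as the sequence contains at least `threshold`
    elements with different reprs - the predicate minimised in
    the test above."""
    seen = set()
    for x in xs:
        seen.add(repr(x))
        if len(seen) >= threshold:
            return True
    return False
```

Early-exit predicates like this keep shrinking cheap: `find` can stop scanning a candidate list the moment the property is established.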
# # END HEADER from __future__ import division, print_function, absolute_import import inspect import warnings import pytest from hypothesis import strategies as st from hypothesis import given from hypothesis.internal.compat import FullArgSpec, ceil, floor, hrange, \ qualname, int_to_bytes, integer_types, getfullargspec, \ int_from_bytes def test_small_hrange(): assert list(hrange(5)) == [0, 1, 2, 3, 4] assert list(hrange(3, 5)) == [3, 4] assert list(hrange(1, 10, 2)) == [1, 3, 5, 7, 9] def test_large_hrange(): n = 1 << 1024 assert list(hrange(n, n + 5, 2)) == [n, n + 2, n + 4] assert list(hrange(n, n)) == [] with pytest.raises(ValueError): hrange(n, n, 0) class Foo(): def bar(self): pass def test_qualname(): assert qualname(Foo.bar) == u'Foo.bar' assert qualname(Foo().bar) == u'Foo.bar' assert qualname(qualname) == u'qualname' def a(b, c, d): pass def b(c, d, *ar): pass def c(c, d, *ar, **k): pass def d(a1, a2=1, a3=2, a4=None): pass @pytest.mark.parametrize('f', [a, b, c, d]) def test_agrees_on_argspec(f): with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) basic = inspect.getargspec(f) full = getfullargspec(f) assert basic.args == full.args assert basic.varargs == full.varargs assert basic.keywords == full.varkw assert basic.defaults == full.defaults @given(st.binary()) def test_convert_back(bs): bs = bytearray(bs) assert int_to_bytes(int_from_bytes(bs), len(bs)) == bs bytes8 = st.builds(bytearray, st.binary(min_size=8, max_size=8)) @given(bytes8, bytes8) def test_to_int_in_big_endian_order(x, y): x, y = sorted((x, y)) assert 0 <= int_from_bytes(x) <= int_from_bytes(y) ints8 = st.integers(min_value=0, max_value=2 ** 63 - 1) @given(ints8, ints8) def test_to_bytes_in_big_endian_order(x, y): x, y = sorted((x, y)) assert int_to_bytes(x, 8) <= int_to_bytes(y, 8) @pytest.mark.skipif(not hasattr(inspect, 'getfullargspec'), reason='inspect.getfullargspec only exists under Python 3') def test_inspection_compat(): assert getfullargspec is 
inspect.getfullargspec @pytest.mark.skipif(not hasattr(inspect, 'FullArgSpec'), reason='inspect.FullArgSpec only exists under Python 3') def test_inspection_result_compat(): assert FullArgSpec is inspect.FullArgSpec @given(st.fractions()) def test_ceil(x): """The compat ceil function always has the Python 3 semantics. Under Python 2, math.ceil returns a float, which cannot represent large integers - for example, `float(2**53) == float(2**53 + 1)` - and this is obviously incorrect for unlimited-precision integer operations. """ assert isinstance(ceil(x), integer_types) assert x <= ceil(x) < x + 1 @given(st.fractions()) def test_floor(x): assert isinstance(floor(x), integer_types) assert x - 1 < floor(x) <= x hypothesis-python-3.44.1/tests/nocover/test_conjecture_engine.py000066400000000000000000000066601321557765100252610ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
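The compat tests above pin down the contract of `int_to_bytes`/`int_from_bytes`: big-endian encoding, round-trippable, and order-preserving for equal-length buffers (lexicographic byte order matches numeric order). Under Python 3 this reduces to the builtin `int` methods, as this sketch shows:

```python
def int_to_bytes(value: int, size: int) -> bytes:
    """Big-endian fixed-width encoding, matching the property the
    compat tests above assert."""
    return value.to_bytes(size, "big")

def int_from_bytes(data: bytes) -> int:
    return int.from_bytes(data, "big")
```

The Python 2 side of the real compat module has to implement this by hand, which is exactly why the tests check the round-trip and ordering properties rather than a particular implementation.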
# # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import strategies as st from hypothesis import HealthCheck, given, settings, unlimited from hypothesis.database import InMemoryExampleDatabase from hypothesis.internal.compat import hbytes from tests.cover.test_conjecture_engine import run_to_buffer, slow_shrinker from hypothesis.internal.conjecture.data import Status from hypothesis.internal.conjecture.engine import ConjectureRunner @given(st.random_module()) @settings(max_shrinks=0, deadline=None, perform_health_check=False) def test_lot_of_dead_nodes(rnd): @run_to_buffer def x(data): for i in range(5): if data.draw_bytes(1)[0] != i: data.mark_invalid() data.mark_interesting() assert x == hbytes([0, 1, 2, 3, 4]) def test_saves_data_while_shrinking(): key = b'hi there' n = 5 db = InMemoryExampleDatabase() assert list(db.fetch(key)) == [] seen = set() def f(data): x = data.draw_bytes(512) if sum(x) >= 5000 and len(seen) < n: seen.add(hbytes(x)) if hbytes(x) in seen: data.mark_interesting() runner = ConjectureRunner( f, settings=settings(database=db), database_key=key) runner.run() assert runner.last_data.status == Status.INTERESTING assert len(seen) == n in_db = set( v for vs in db.data.values() for v in vs ) assert in_db.issubset(seen) assert in_db == seen @given(st.randoms(), st.random_module()) @settings( max_shrinks=0, deadline=None, suppress_health_check=[HealthCheck.hung_test] ) def test_maliciously_bad_generator(rnd, seed): @run_to_buffer def x(data): for _ in range(rnd.randint(1, 100)): data.draw_bytes(rnd.randint(1, 10)) if rnd.randint(0, 1): data.mark_invalid() else: data.mark_interesting() def test_garbage_collects_the_database(): key = b'hi there' n = 200 db = InMemoryExampleDatabase() local_settings = settings( database=db, max_shrinks=n, timeout=unlimited) runner = ConjectureRunner( slow_shrinker(), settings=local_settings, database_key=key) runner.run() assert runner.last_data.status == Status.INTERESTING 
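`test_saves_data_while_shrinking` and `test_garbage_collects_the_database` above only rely on a small save/fetch/delete/move surface of the example database. A minimal dict-of-sets stand-in (a deliberately simplified sketch, not Hypothesis's actual `InMemoryExampleDatabase`):

```python
class TinyExampleDatabase:
    """Minimal in-memory key -> set-of-values store, sketching the
    save/fetch/delete/move surface the tests above exercise."""

    def __init__(self):
        self.data = {}

    def save(self, key, value):
        self.data.setdefault(key, set()).add(bytes(value))

    def delete(self, key, value):
        self.data.get(key, set()).discard(bytes(value))

    def fetch(self, key):
        # Deterministic iteration order keeps tests reproducible.
        return iter(sorted(self.data.get(key, ())))

    def move(self, src, dest, value):
        self.delete(src, value)
        self.save(dest, value)
```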
def in_db(): return set( v for vs in db.data.values() for v in vs ) assert len(in_db()) == n + 1 runner = ConjectureRunner( lambda data: data.draw_bytes(4), settings=local_settings, database_key=key) runner.run() assert 0 < len(in_db()) < n def test_can_discard(): n = 32 @run_to_buffer def x(data): seen = set() while len(seen) < n: seen.add(hbytes(data.draw_bytes(1))) data.mark_interesting() assert len(x) == n hypothesis-python-3.44.1/tests/nocover/test_coverage.py000066400000000000000000000061461321557765100233650ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from coverage import Coverage import hypothesis.strategies as st from hypothesis import given, settings from hypothesis.core import escalate_warning from tests.common.utils import all_values from hypothesis.database import InMemoryExampleDatabase from hypothesis.internal.compat import hrange pytestmark = pytest.mark.skipif( not settings.default.use_coverage, reason='Coverage is disabled for this build.' 
) def test_tracks_and_saves_coverage(): db = InMemoryExampleDatabase() def do_nothing(): """Use in place of pass for empty branches, which don't show up under coverage.""" pass @settings(database=db) @given(st.integers()) def test_branching(i): if i < -1000: do_nothing() elif i > 1000: do_nothing() else: do_nothing() test_branching() assert len(all_values(db)) == 3 def some_function_to_test(a, b, c): result = 0 if a: result += 1 if b: result += 1 if c: result += 1 return result LINE_START = some_function_to_test.__code__.co_firstlineno with open(__file__) as i: lines = [l.strip() for l in i] LINE_END = LINE_START + lines[LINE_START:].index('') @pytest.mark.parametrize('branch', [False, True]) @pytest.mark.parametrize('timid', [False, True]) def test_achieves_full_coverage(tmpdir, branch, timid): @given(st.booleans(), st.booleans(), st.booleans()) def test(a, b, c): some_function_to_test(a, b, c) cov = Coverage( config_file=False, data_file=str(tmpdir.join('.coverage')), branch=branch, timid=timid, ) cov._warn = escalate_warning cov.start() test() cov.stop() data = cov.get_data() lines = data.lines(__file__) for i in hrange(LINE_START + 1, LINE_END + 1): assert i in lines def rnd(): import random return random.gauss(0, 1) @pytest.mark.parametrize('branch', [False, True]) @pytest.mark.parametrize('timid', [False, True]) def test_does_not_trace_files_outside_inclusion(tmpdir, branch, timid): @given(st.booleans()) def test(a): rnd() cov = Coverage( config_file=False, data_file=str(tmpdir.join('.coverage')), branch=branch, timid=timid, include=[__file__], ) cov._warn = escalate_warning cov.start() test() cov.stop() data = cov.get_data() assert len(list(data.measured_files())) == 1 hypothesis-python-3.44.1/tests/nocover/test_database_agreement.py000066400000000000000000000047361321557765100253660ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work
is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import os import shutil import tempfile import hypothesis.strategies as st from tests.common.utils import validate_deprecation from hypothesis.database import SQLiteExampleDatabase, \ InMemoryExampleDatabase, DirectoryBasedExampleDatabase from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule class DatabaseComparison(RuleBasedStateMachine): def __init__(self): super(DatabaseComparison, self).__init__() self.tempd = tempfile.mkdtemp() exampledir = os.path.join(self.tempd, 'examples') self.dbs = [ DirectoryBasedExampleDatabase(exampledir), InMemoryExampleDatabase(), DirectoryBasedExampleDatabase(exampledir), ] with validate_deprecation(): self.dbs.append(SQLiteExampleDatabase(':memory:')) keys = Bundle('keys') values = Bundle('values') @rule(target=keys, k=st.binary()) def k(self, k): return k @rule(target=values, v=st.binary()) def v(self, v): return v @rule(k=keys, v=values) def save(self, k, v): for db in self.dbs: db.save(k, v) @rule(k=keys, v=values) def delete(self, k, v): for db in self.dbs: db.delete(k, v) @rule(k1=keys, k2=keys, v=values) def move(self, k1, k2, v): for db in self.dbs: db.move(k1, k2, v) @rule(k=keys) def values_agree(self, k): last = None last_db = None for db in self.dbs: keys = set(db.fetch(k)) if last is not None: assert last == keys, (last_db, db) last = keys last_db = db def teardown(self): for d in self.dbs: d.close() shutil.rmtree(self.tempd) TestDBs = 
DatabaseComparison.TestCase hypothesis-python-3.44.1/tests/nocover/test_deferred_strategies.py000066400000000000000000000044671321557765100256100ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis import strategies as st from hypothesis import Verbosity, find, note, given, settings from hypothesis.errors import NoSuchExample, Unsatisfiable from hypothesis.internal.compat import hrange @st.composite def mutually_recursive_strategies(draw): strategies = [st.none()] def build_strategy_for_indices(base, ixs, deferred): def f(): return base(*[strategies[i] for i in ixs]) f.__name__ = '%s([%s])' % ( base.__name__, ', '.join( 'strategies[%d]' % (i,) for i in ixs )) if deferred: return st.deferred(f) else: return f() n_strategies = draw(st.integers(1, 5)) for i in hrange(n_strategies): base = draw(st.sampled_from((st.one_of, st.tuples))) indices = draw(st.lists( st.integers(0, n_strategies), min_size=1)) if all(j <= i for j in indices): deferred = draw(st.booleans()) else: deferred = True strategies.append(build_strategy_for_indices(base, indices, deferred)) return strategies @settings(deadline=None) @given(mutually_recursive_strategies()) def test_arbitrary_recursion(strategies): for i, s in enumerate(strategies): if i > 0: note('strategies[%d]=%r' % (i, s)) s.validate() try: find(s, 
lambda x: True, settings=settings( max_shrinks=0, database=None, verbosity=Verbosity.quiet, max_examples=1, max_iterations=10, )) except (Unsatisfiable, NoSuchExample): pass hypothesis-python-3.44.1/tests/nocover/test_fixtures.py000066400000000000000000000016561321557765100234440ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import time from tests.common import TIME_INCREMENT def test_time_consistently_increments_in_tests(): x = time.time() y = time.time() z = time.time() assert y == x + TIME_INCREMENT assert z == y + TIME_INCREMENT hypothesis-python-3.44.1/tests/nocover/test_float_shrinking.py000066400000000000000000000040371321557765100247500ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
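`test_time_consistently_increments_in_tests` above relies on the test suite's patched clock advancing by a fixed `TIME_INCREMENT` on every call. A stdlib sketch of such a fake clock (the class name and defaults here are illustrative, not the suite's actual fixture):

```python
class FakeClock:
    """time.time() stand-in that advances by a fixed increment on
    every call, making timing-dependent tests deterministic."""

    def __init__(self, start=0.0, increment=0.0001):
        self.now = start
        self.increment = increment

    def __call__(self) -> float:
        self.now += self.increment
        return self.now
```

Substituting a clock like this for `time.time` removes wall-clock jitter, so assertions such as `y == x + TIME_INCREMENT` hold exactly.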
# # END HEADER from __future__ import division, print_function, absolute_import from random import Random import pytest import hypothesis.strategies as st from hypothesis import Verbosity, given, assume, example, settings from tests.common.debug import minimal from hypothesis.internal.compat import ceil def test_shrinks_to_simple_floats(): assert minimal(st.floats(), lambda x: x > 1) == 2.0 assert minimal(st.floats(), lambda x: x > 0) == 1.0 @pytest.mark.parametrize('n', [1, 2, 3, 8, 10]) def test_can_shrink_in_variable_sized_context(n): x = minimal(st.lists(st.floats(), min_size=n), any) assert len(x) == n assert x.count(0.0) == n - 1 assert 1 in x @example(1.5) @given(st.floats(min_value=0, allow_infinity=False, allow_nan=False)) @settings(use_coverage=False, deadline=None, perform_health_check=False) def test_shrinks_downwards_to_integers(f): g = minimal( st.floats(), lambda x: x >= f, random=Random(0), settings=settings(verbosity=Verbosity.quiet), ) assert g == ceil(f) @example(1) @given(st.integers(1, 2 ** 16 - 1)) @settings(use_coverage=False, deadline=None, perform_health_check=False) def test_shrinks_downwards_to_integers_when_fractional(b): g = minimal( st.floats(), lambda x: assume((0 < x < (2 ** 53)) and int(x) != x) and x >= b, random=Random(0), settings=settings(verbosity=Verbosity.quiet), ) assert g == b + 0.5 hypothesis-python-3.44.1/tests/nocover/test_floating.py000066400000000000000000000066031321557765100233730ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """Tests for being able to generate weird and wonderful floating point numbers.""" from __future__ import division, print_function, absolute_import import sys import math from hypothesis import HealthCheck, given, assume, settings from tests.common.utils import fails from hypothesis.strategies import data, lists, floats TRY_HARDER = settings( max_examples=1000, max_iterations=2000, suppress_health_check=[HealthCheck.filter_too_much] ) @given(floats()) @TRY_HARDER def test_is_float(x): assert isinstance(x, float) @fails @given(floats()) @TRY_HARDER def test_inversion_is_imperfect(x): assume(x != 0.0) y = 1.0 / x assert x * y == 1.0 @given(floats(-sys.float_info.max, sys.float_info.max)) def test_largest_range(x): assert not math.isinf(x) @given(floats()) @TRY_HARDER def test_negation_is_self_inverse(x): assume(not math.isnan(x)) y = -x assert -y == x @fails @given(lists(floats())) def test_is_not_nan(xs): assert not any(math.isnan(x) for x in xs) @fails @given(floats()) @TRY_HARDER def test_is_not_positive_infinite(x): assume(x > 0) assert not math.isinf(x) @fails @given(floats()) @TRY_HARDER def test_is_not_negative_infinite(x): assume(x < 0) assert not math.isinf(x) @fails @given(floats()) @TRY_HARDER def test_is_int(x): assume(not (math.isinf(x) or math.isnan(x))) assert x == int(x) @fails @given(floats()) @TRY_HARDER def test_is_not_int(x): assume(not (math.isinf(x) or math.isnan(x))) assert x != int(x) @fails @given(floats()) @TRY_HARDER def test_is_in_exact_int_range(x): assume(not (math.isinf(x) or math.isnan(x))) assert x + 1 != x # Tests whether we can represent subnormal floating point numbers. # This is essentially a function of how the python interpreter # was compiled. 
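`test_floats_are_in_range` above sorts its two endpoints before drawing, so the bounds passed to `floats(x, y)` are always ordered. The same normalisation can be sketched with the stdlib (this is an illustration of the bound handling, not how Hypothesis generates floats internally):

```python
import random

def draw_in_range(a: float, b: float, rng=None) -> float:
    """Sort the endpoints, then draw a float in [lo, hi] - mirroring
    how the test above normalises its bounds before drawing.
    The clamp guards against random.uniform's documented rounding,
    which can land a hair outside [lo, hi]."""
    lo, hi = sorted((a, b))
    rng = rng or random.Random(0)
    return min(hi, max(lo, rng.uniform(lo, hi)))
```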
# Everything is terrible if math.ldexp(0.25, -1022) > 0: REALLY_SMALL_FLOAT = sys.float_info.min else: REALLY_SMALL_FLOAT = sys.float_info.min * 2 @fails @given(floats()) @TRY_HARDER def test_can_generate_really_small_positive_floats(x): assume(x > 0) assert x >= REALLY_SMALL_FLOAT @fails @given(floats()) @TRY_HARDER def test_can_generate_really_small_negative_floats(x): assume(x < 0) assert x <= -REALLY_SMALL_FLOAT @fails @given(floats()) @TRY_HARDER def test_can_find_floats_that_do_not_round_trip_through_strings(x): assert float(str(x)) == x @fails @given(floats()) @TRY_HARDER def test_can_find_floats_that_do_not_round_trip_through_reprs(x): assert float(repr(x)) == x finite_floats = floats(allow_infinity=False, allow_nan=False) @settings(deadline=None) @given(finite_floats, finite_floats, data()) def test_floats_are_in_range(x, y, data): x, y = sorted((x, y)) assume(x < y) t = data.draw(floats(x, y)) assert x <= t <= y hypothesis-python-3.44.1/tests/nocover/test_large_examples.py000066400000000000000000000015671321557765100245640ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
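The `REALLY_SMALL_FLOAT` computation above probes whether the interpreter was compiled with subnormal float support: if `ldexp` can produce a positive value below the smallest normal float, subnormals work. Factored into a function for clarity:

```python
import math
import sys

def smallest_expected_positive() -> float:
    """sys.float_info.min if subnormals are representable (ldexp can
    go below the smallest normal float and stay positive), otherwise
    twice that - the same probe used in the test module above."""
    if math.ldexp(0.25, -1022) > 0:
        return sys.float_info.min
    return sys.float_info.min * 2
```

On almost every modern platform the first branch is taken; the fallback exists for interpreters built with flush-to-zero semantics.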
# # END HEADER from __future__ import division, print_function, absolute_import import hypothesis.strategies as st from tests.common.debug import find_any def test_can_generate_large_lists_with_min_size(): find_any(st.lists(st.integers(), min_size=400)) hypothesis-python-3.44.1/tests/nocover/test_pretty_repr.py000066400000000000000000000064351321557765100241520ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis.errors import InvalidArgument
from hypothesis.control import reject
from hypothesis.internal.compat import OrderedDict


def foo(x):
    pass


def bar(x):
    pass


def baz(x):
    pass


fns = [
    foo, bar, baz
]


def builds_ignoring_invalid(target, *args, **kwargs):
    def splat(value):
        try:
            result = target(*value[0], **value[1])
            result.validate()
            return result
        except InvalidArgument:
            reject()
    return st.tuples(
        st.tuples(*args), st.fixed_dictionaries(kwargs)).map(splat)


size_strategies = dict(
    min_size=st.integers(min_value=0, max_value=100) | st.none(),
    max_size=st.integers(min_value=0, max_value=100) | st.none(),
    average_size=st.floats(min_value=0.0, max_value=100.0) | st.none()
)


values = st.integers() | st.text(average_size=2.0)


Strategies = st.recursive(
    st.one_of(
        st.sampled_from([
            st.none(), st.booleans(), st.randoms(), st.complex_numbers(),
            st.randoms(), st.fractions(), st.decimals(),
        ]),
        st.builds(st.just, values),
        st.builds(st.sampled_from, st.lists(values, min_size=1)),
        builds_ignoring_invalid(st.floats, st.floats(), st.floats()),
    ),
    lambda x: st.one_of(
        builds_ignoring_invalid(st.lists, x, **size_strategies),
        builds_ignoring_invalid(st.sets, x, **size_strategies),
        builds_ignoring_invalid(
            lambda v: st.tuples(*v), st.lists(x, average_size=2.0)),
        builds_ignoring_invalid(
            lambda v: st.one_of(*v),
            st.lists(x, average_size=2.0, min_size=1)),
        builds_ignoring_invalid(
            st.dictionaries, x, x,
            dict_class=st.sampled_from([dict, OrderedDict]),
            min_size=st.integers(min_value=0, max_value=100) | st.none(),
            max_size=st.integers(min_value=0, max_value=100) | st.none(),
            average_size=st.floats(
                min_value=0.0, max_value=100.0) | st.none()
        ),
        st.builds(lambda s, f: s.map(f), x, st.sampled_from(fns)),
    )
)

strategy_globals = dict(
    (k, getattr(st, k))
    for k in dir(st)
)

strategy_globals['OrderedDict'] = OrderedDict
strategy_globals['inf'] = float('inf')
strategy_globals['nan'] = float('nan')
strategy_globals['foo'] = foo
strategy_globals['bar'] = bar
strategy_globals['baz'] = baz


@given(Strategies)
@settings(max_examples=2000)
def test_repr_evals_to_thing_with_same_repr(strategy):
    r = repr(strategy)
    via_eval = eval(r, strategy_globals)
    r2 = repr(via_eval)
    assert r == r2


--- hypothesis-python-3.44.1/tests/nocover/test_recursive.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from random import Random

from flaky import flaky

import hypothesis.strategies as st
from hypothesis import find, given, example, settings
from tests.common.debug import find_any
from hypothesis.internal.compat import integer_types


def test_can_generate_with_large_branching():
    def flatten(x):
        if isinstance(x, list):
            return sum(map(flatten, x), [])
        else:
            return [x]

    xs = find(
        st.recursive(
            st.integers(), lambda x: st.lists(x, average_size=50),
            max_leaves=100),
        lambda x: isinstance(x, list) and len(flatten(x)) >= 50
    )
    assert flatten(xs) == [0] * 50


def test_can_generate_some_depth_with_large_branching():
    def depth(x):
        if x and isinstance(x, list):
            return 1 + max(map(depth, x))
        else:
            return 1

    xs = find(
        st.recursive(st.integers(), lambda x: st.lists(x, average_size=100)),
        lambda x: depth(x) > 1
    )
    assert xs in ([0], [[]])


def test_can_find_quite_broad_lists():
    def breadth(x):
        if isinstance(x, list):
            return sum(map(breadth, x))
        else:
            return 1

    broad = find(
        st.recursive(st.booleans(), lambda x: st.lists(x, max_size=10)),
        lambda x: breadth(x) >= 20,
        settings=settings(max_examples=10000)
    )
    assert breadth(broad) == 20


def test_drawing_many_near_boundary():
    ls = find(
        st.lists(st.recursive(
            st.booleans(),
            lambda x: st.lists(x, min_size=8, max_size=10).map(tuple),
            max_leaves=9)),
        lambda x: len(set(x)) >= 5,
        settings=settings(max_examples=10000, database=None, max_shrinks=2000)
    )
    assert len(ls) == 5


@given(st.randoms())
@settings(
    max_examples=50, max_shrinks=0, perform_health_check=False,
    deadline=None
)
@example(Random(-1363972488426139))
@example(Random(-4))
def test_can_use_recursive_data_in_sets(rnd):
    nested_sets = st.recursive(
        st.booleans(),
        lambda js: st.frozensets(js, average_size=2.0),
        max_leaves=10
    )
    find_any(nested_sets, random=rnd)

    def flatten(x):
        if isinstance(x, bool):
            return frozenset((x,))
        else:
            result = frozenset()
            for t in x:
                result |= flatten(t)
                if len(result) == 2:
                    break
            return result

    assert rnd is not None
    x = find(
        nested_sets, lambda x: len(flatten(x)) == 2, random=rnd,
        settings=settings(database=None, max_shrinks=1000, max_examples=1000))
    assert x in (
        frozenset((False, True)),
        frozenset((False, frozenset((True,)))),
        frozenset((frozenset((False, True)),))
    )


@flaky(max_runs=2, min_passes=1)
def test_can_form_sets_of_recursive_data():
    trees = st.sets(st.recursive(
        st.booleans(),
        lambda x: st.lists(x, min_size=5).map(tuple),
        max_leaves=20))
    xs = find(trees, lambda x: len(x) >= 5, settings=settings(
        database=None, max_shrinks=1000, max_examples=1000
    ))
    assert len(xs) == 5


@given(st.randoms())
@settings(
    max_examples=50, max_shrinks=0, perform_health_check=False,
    deadline=None
)
def test_can_flatmap_to_recursive_data(rnd):
    stuff = st.lists(st.integers(), min_size=1).flatmap(
        lambda elts: st.recursive(
            st.sampled_from(elts), lambda x: st.lists(x, average_size=25),
            max_leaves=25
        ))

    def flatten(x):
        if isinstance(x, integer_types):
            return [x]
        else:
            return sum(map(flatten, x), [])

    tree = find(
        stuff, lambda x: sum(flatten(x)) >= 100,
        settings=settings(
            database=None, max_shrinks=2000, max_examples=1000,
        ), random=rnd
    )
    flat = flatten(tree)
    assert (sum(flat) == 1000) or (len(set(flat)) == 1)


--- hypothesis-python-3.44.1/tests/nocover/test_regex.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import re
import string

import hypothesis.strategies as st
from hypothesis import given, assume, reject
from hypothesis.searchstrategy.regex import base_regex_strategy


@st.composite
def charset(draw):
    negated = draw(st.booleans())
    chars = draw(st.text(string.ascii_letters + string.digits, min_size=1))
    if negated:
        return u"[^%s]" % (chars,)
    else:
        return u"[%s]" % (chars,)


COMBINED_MATCHER = re.compile(u"[?+*]{2}")


@st.composite
def conservative_regex(draw):
    result = draw(st.one_of(
        st.just(u"."),
        charset(),
        CONSERVATIVE_REGEX.map(lambda s: u"(%s)" % (s,)),
        CONSERVATIVE_REGEX.map(lambda s: s + u'+'),
        CONSERVATIVE_REGEX.map(lambda s: s + u'?'),
        CONSERVATIVE_REGEX.map(lambda s: s + u'*'),
        st.lists(CONSERVATIVE_REGEX, min_size=1, max_size=3).map(u"|".join),
        st.lists(CONSERVATIVE_REGEX, min_size=1, max_size=3).map(u"".join),
    ))
    assume(COMBINED_MATCHER.search(result) is None)
    control = sum(
        result.count(c) for c in '?+*'
    )
    assume(control <= 3)
    return result


CONSERVATIVE_REGEX = conservative_regex()


@given(st.data())
def test_conservative_regex_are_correct_by_construction(data):
    pattern = re.compile(data.draw(CONSERVATIVE_REGEX))
    pattern = re.compile(pattern)
    result = data.draw(base_regex_strategy(pattern))
    assert pattern.search(result) is not None


@given(st.data())
def test_fuzz_stuff(data):
    pattern = data.draw(
        st.text(min_size=1, max_size=5) |
        st.binary(min_size=1, max_size=5) |
        CONSERVATIVE_REGEX.filter(bool)
    )
    try:
        regex = re.compile(pattern)
    except re.error:
        reject()

    ex = data.draw(st.from_regex(regex))
    assert regex.search(ex)


--- hypothesis-python-3.44.1/tests/nocover/test_strategy_state.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import math
import hashlib
from random import Random

from hypothesis import Verbosity, seed, given, assume, settings, unlimited
from hypothesis.errors import FailedHealthCheck
from hypothesis.database import ExampleDatabase
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule
from hypothesis.strategies import data, just, none, text, lists, binary, \
    floats, tuples, booleans, decimals, integers, fractions, \
    float_to_int, int_to_float, sampled_from, complex_numbers
from hypothesis.internal.compat import PYPY

AVERAGE_LIST_LENGTH = 2


def clamp(lower, value, upper):
    """Given a value and optional lower/upper bounds, 'clamp' the value so
    that it satisfies lower <= value <= upper."""
    if (lower is not None) and (upper is not None) and (lower > upper):
        raise ValueError(
            'Cannot clamp with lower > upper: %r > %r' % (lower, upper))
    if lower is not None:
        value = max(lower, value)
    if upper is not None:
        value = min(value, upper)
    return value


class HypothesisSpec(RuleBasedStateMachine):

    def __init__(self):
        super(HypothesisSpec, self).__init__()
        self.database = None

    strategies = Bundle(u'strategy')
    strategy_tuples = Bundle(u'tuples')
    objects = Bundle(u'objects')
    basic_data = Bundle(u'basic')
    varied_floats = Bundle(u'varied_floats')

    def teardown(self):
        self.clear_database()

    @rule()
    def clear_database(self):
        if self.database is not None:
            self.database.close()
            self.database = None

    @rule()
    def set_database(self):
        self.teardown()
        self.database = ExampleDatabase()
    @rule(strat=strategies, r=integers(), max_shrinks=integers(0, 100))
    def find_constant_failure(self, strat, r, max_shrinks):
        with settings(
            verbosity=Verbosity.quiet, max_examples=1,
            min_satisfying_examples=0, database=self.database,
            max_shrinks=max_shrinks,
        ):
            @given(strat)
            @seed(r)
            def test(x):
                assert False

            try:
                test()
            except (AssertionError, FailedHealthCheck):
                pass

    @rule(
        strat=strategies, r=integers(), p=floats(0, 1),
        max_examples=integers(1, 10), max_shrinks=integers(1, 100)
    )
    def find_weird_failure(self, strat, r, max_examples, p, max_shrinks):
        with settings(
            verbosity=Verbosity.quiet, max_examples=max_examples,
            min_satisfying_examples=0, database=self.database,
            max_shrinks=max_shrinks,
        ):
            @given(strat)
            @seed(r)
            def test(x):
                assert Random(
                    hashlib.md5(repr(x).encode(u'utf-8')).digest()
                ).random() <= p

            try:
                test()
            except (AssertionError, FailedHealthCheck):
                pass

    @rule(target=strategies, spec=sampled_from((
        integers(), booleans(), floats(), complex_numbers(),
        fractions(), decimals(), text(), binary(), none(), tuples(),
    )))
    def strategy(self, spec):
        return spec

    @rule(target=strategies, values=lists(integers() | text(), min_size=1))
    def sampled_from_strategy(self, values):
        return sampled_from(values)

    @rule(target=strategies, spec=strategy_tuples)
    def strategy_for_tupes(self, spec):
        return tuples(*spec)

    @rule(
        target=strategies,
        source=strategies,
        level=integers(1, 10),
        mixer=text())
    def filtered_strategy(s, source, level, mixer):
        def is_good(x):
            return bool(Random(
                hashlib.md5((mixer + repr(x)).encode(u'utf-8')).digest()
            ).randint(0, level))
        return source.filter(is_good)

    @rule(target=strategies, elements=strategies)
    def list_strategy(self, elements):
        return lists(elements, average_size=AVERAGE_LIST_LENGTH)

    @rule(target=strategies, left=strategies, right=strategies)
    def or_strategy(self, left, right):
        return left | right

    @rule(target=varied_floats, source=floats())
    def float(self, source):
        return source

    @rule(
        target=varied_floats, source=varied_floats,
        offset=integers(-100, 100))
    def adjust_float(self, source, offset):
        return int_to_float(clamp(
            0, float_to_int(source) + offset, 2 ** 64 - 1
        ))

    @rule(
        target=strategies, left=varied_floats, right=varied_floats
    )
    def float_range(self, left, right):
        for f in (math.isnan, math.isinf):
            for x in (left, right):
                assume(not f(x))
        left, right = sorted((left, right))
        assert left <= right
        return floats(left, right)

    @rule(
        target=strategies, source=strategies,
        result1=strategies, result2=strategies,
        mixer=text(), p=floats(0, 1))
    def flatmapped_strategy(self, source, result1, result2, mixer, p):
        assume(result1 is not result2)

        def do_map(value):
            rep = repr(value)
            random = Random(
                hashlib.md5((mixer + rep).encode(u'utf-8')).digest()
            )
            if random.random() <= p:
                return result1
            else:
                return result2
        return source.flatmap(do_map)

    @rule(target=strategies, value=objects)
    def just_strategy(self, value):
        return just(value)

    @rule(target=strategy_tuples, source=strategies)
    def single_tuple(self, source):
        return (source,)

    @rule(target=strategy_tuples, left=strategy_tuples, right=strategy_tuples)
    def cat_tuples(self, left, right):
        return left + right

    @rule(target=objects, strat=strategies, data=data())
    def get_example(self, strat, data):
        data.draw(strat)

    @rule(target=strategies, left=integers(), right=integers())
    def integer_range(self, left, right):
        left, right = sorted((left, right))
        return integers(left, right)

    @rule(strat=strategies)
    def repr_is_good(self, strat):
        assert u' at 0x' not in repr(strat)


MAIN = __name__ == u'__main__'

TestHypothesis = HypothesisSpec.TestCase

TestHypothesis.settings = settings(
    TestHypothesis.settings,
    stateful_step_count=10 if PYPY else 50,
    max_shrinks=500,
    timeout=unlimited,
    min_satisfying_examples=0,
    verbosity=max(TestHypothesis.settings.verbosity, Verbosity.verbose),
    max_examples=10000 if MAIN else 200,
)

if MAIN:
    TestHypothesis().runTest()
--- hypothesis-python-3.44.1/tests/nocover/test_streams.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from itertools import islice

from hypothesis import HealthCheck, given, settings
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.strategies import integers, streaming
from hypothesis.internal.compat import integer_types


@checks_deprecated_behaviour
def test_streams_are_arbitrarily_long():
    @settings(suppress_health_check=[HealthCheck.too_slow])
    @given(streaming(integers()))
    def test(ss):
        for i in islice(ss, 100):
            assert isinstance(i, integer_types)
    test()


--- hypothesis-python-3.44.1/tests/nocover/test_target_selector.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0.
# If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis.stateful import RuleBasedStateMachine, rule, precondition
from hypothesis.internal.compat import hrange
from tests.cover.test_target_selector import FakeConjectureData, \
    fake_randoms
from hypothesis.internal.conjecture.engine import TargetSelector, universal


class TargetSelectorMachine(RuleBasedStateMachine):
    def __init__(self):
        super(TargetSelectorMachine, self).__init__()
        self.target_selector = None
        self.data = []
        self.tags = set()
        self.tag_intersections = None

    @precondition(lambda self: self.target_selector is None)
    @rule(rnd=fake_randoms())
    def initialize(self, rnd):
        self.target_selector = TargetSelector(rnd)

    @precondition(lambda self: self.target_selector is not None)
    @rule(
        data=st.builds(FakeConjectureData, st.frozensets(st.integers(0, 10))))
    def add_data(self, data):
        self.target_selector.add(data)
        self.data.append(data)
        self.tags.update(data.tags)
        if self.tag_intersections is None:
            self.tag_intersections = data.tags
        else:
            self.tag_intersections &= data.tags

    @precondition(lambda self: self.data)
    @rule()
    def select_target(self):
        tag, data = self.target_selector.select()
        assert self.target_selector.has_tag(tag, data)
        if self.tags != self.tag_intersections:
            assert tag != universal

    @precondition(lambda self: self.data)
    @rule()
    def cycle_through_tags(self):
        seen = set()
        for _ in hrange(
            (2 * len(self.tags) + 1) *
            (1 + self.target_selector.mutation_counts)
        ):
            _, data = self.target_selector.select()
            seen.update(data.tags)
            if seen == self.tags:
                break
        else:
            assert False


TestSelector = TargetSelectorMachine.TestCase
--- hypothesis-python-3.44.1/tests/numpy/ ---

--- hypothesis-python-3.44.1/tests/numpy/__init__.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

--- hypothesis-python-3.44.1/tests/numpy/test_argument_validation.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

import hypothesis.strategies as st
import hypothesis.extra.numpy as nps
from hypothesis.errors import InvalidArgument


def e(a, **kwargs):
    return (a, kwargs)


@pytest.mark.parametrize(
    ('function', 'kwargs'),
    [
        e(nps.array_dtypes, min_size=2, max_size=1),
        e(nps.array_dtypes, min_size=-1),
        e(nps.array_shapes, min_side=2, max_side=1),
        e(nps.array_shapes, min_dims=3, max_dims=2),
        e(nps.array_shapes, min_dims=0),
        e(nps.array_shapes, min_side=0),
        e(nps.arrays, dtype=float, shape=(0.5,)),
        e(nps.arrays, dtype=object, shape=1),
        e(nps.arrays, dtype=object, shape=(), elements=st.none()),
        e(nps.arrays, dtype=float, shape=1, fill=3),
        e(nps.byte_string_dtypes, min_len=-1),
        e(nps.byte_string_dtypes, min_len=2, max_len=1),
        e(nps.datetime64_dtypes, max_period=11),
        e(nps.datetime64_dtypes, min_period=11),
        e(nps.datetime64_dtypes, min_period='Y', max_period='M'),
        e(nps.timedelta64_dtypes, max_period=11),
        e(nps.timedelta64_dtypes, min_period=11),
        e(nps.timedelta64_dtypes, min_period='Y', max_period='M'),
        e(nps.unicode_string_dtypes, min_len=-1),
        e(nps.unicode_string_dtypes, min_len=2, max_len=1),
        e(nps.unsigned_integer_dtypes, endianness=3),
        e(nps.unsigned_integer_dtypes, sizes=()),
        e(nps.unsigned_integer_dtypes, sizes=(3,)),
    ]
)
def test_raise_invalid_argument(function, kwargs):
    with pytest.raises(InvalidArgument):
        function(**kwargs).example()


--- hypothesis-python-3.44.1/tests/numpy/test_fill_values.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
from hypothesis import given
from tests.common.debug import minimal, find_any
from hypothesis.extra.numpy import arrays


@given(arrays(object, 100, st.lists(max_size=0)))
def test_generated_lists_are_distinct(ls):
    assert len(set(map(id, ls))) == len(ls)


@st.composite
def distinct_integers(draw):
    used = draw(st.shared(st.builds(set), key='distinct_integers.used'))
    i = draw(st.integers(0, 2 ** 64 - 1).filter(lambda x: x not in used))
    used.add(i)
    return i


@given(arrays('uint64', 10, distinct_integers()))
def test_does_not_reuse_distinct_integers(arr):
    assert len(set(arr)) == len(arr)


def test_may_reuse_distinct_integers_if_asked():
    find_any(
        arrays('uint64', 10, distinct_integers(), fill=distinct_integers()),
        lambda x: len(set(x)) < len(x)
    )


def test_minimizes_to_fill():
    result = minimal(arrays(float, 10, fill=st.just(3.0)))
    assert (result == 3.0).all()


@given(arrays(
    dtype=float, elements=st.floats().filter(bool),
    shape=(3, 3, 3,),
    fill=st.just(1.0))
)
def test_fills_everything(x):
    assert x.all()


--- hypothesis-python-3.44.1/tests/numpy/test_gen_data.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0.
# If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import numpy as np
import pytest
from flaky import flaky

import hypothesis.strategies as st
import hypothesis.extra.numpy as nps
from hypothesis import given, assume, settings
from hypothesis.errors import InvalidArgument
from tests.common.debug import minimal, find_any
from hypothesis.searchstrategy import SearchStrategy
from hypothesis.internal.compat import text_type, binary_type

STANDARD_TYPES = list(map(np.dtype, [
    u'int8', u'int32', u'int64',
    u'float', u'float32', u'float64',
    complex,
    u'datetime64', u'timedelta64',
    bool, text_type, binary_type
]))


@given(nps.nested_dtypes())
def test_strategies_for_standard_dtypes_have_reusable_values(dtype):
    assert nps.from_dtype(dtype).has_reusable_values


@pytest.mark.parametrize(u't', STANDARD_TYPES)
def test_produces_instances(t):
    @given(nps.from_dtype(t))
    def test_is_t(x):
        assert isinstance(x, t.type)
        assert x.dtype.kind == t.kind
    test_is_t()


@given(nps.arrays(float, ()))
def test_empty_dimensions_are_scalars(x):
    assert isinstance(x, np.dtype(float).type)


@given(nps.arrays(float, (1, 0, 1)))
def test_can_handle_zero_dimensions(x):
    assert x.shape == (1, 0, 1)


@given(nps.arrays(u'uint32', (5, 5)))
def test_generates_unsigned_ints(x):
    assert (x >= 0).all()


@given(nps.arrays(int, (1,)))
def test_assert_fits_in_machine_size(x):
    pass


def test_generates_and_minimizes():
    assert (minimal(nps.arrays(float, (2, 2))) == np.zeros(shape=(2, 2))).all()


def test_can_minimize_large_arrays():
    x = minimal(
        nps.arrays(u'uint32', 100),
        lambda x: np.any(x) and not np.all(x),
        timeout_after=60
    )
    assert np.logical_or(x == 0, x == 1).all()
    assert np.count_nonzero(x) in (1, len(x) - 1)


@flaky(max_runs=50, min_passes=1)
def test_can_minimize_float_arrays():
    x = minimal(nps.arrays(float, 50), lambda t: t.sum() >= 1.0)
    assert x.sum() in (1, 50)


class Foo(object):
    pass


foos = st.tuples().map(lambda _: Foo())


def test_can_create_arrays_of_composite_types():
    arr = minimal(nps.arrays(object, 100, foos))
    for x in arr:
        assert isinstance(x, Foo)


def test_can_create_arrays_of_tuples():
    arr = minimal(
        nps.arrays(object, 10, st.tuples(st.integers(), st.integers())),
        lambda x: all(t0 != t1 for t0, t1 in x))
    assert all(a in ((1, 0), (0, 1)) for a in arr)


@given(nps.arrays(object, (2, 2), st.tuples(st.integers())))
def test_does_not_flatten_arrays_of_tuples(arr):
    assert isinstance(arr[0][0], tuple)


@given(
    nps.arrays(object, (2, 2), st.lists(st.integers(), min_size=1, max_size=1))
)
def test_does_not_flatten_arrays_of_lists(arr):
    assert isinstance(arr[0][0], list)


@given(nps.array_shapes())
def test_can_generate_array_shapes(shape):
    assert isinstance(shape, tuple)
    assert all(isinstance(i, int) for i in shape)


@settings(deadline=None)
@given(st.integers(1, 10), st.integers(0, 9), st.integers(1), st.integers(0))
def test_minimise_array_shapes(min_dims, dim_range, min_side, side_range):
    smallest = minimal(nps.array_shapes(min_dims, min_dims + dim_range,
                                        min_side, min_side + side_range))
    assert len(smallest) == min_dims and all(k == min_side for k in smallest)


@given(nps.scalar_dtypes())
def test_can_generate_scalar_dtypes(dtype):
    assert isinstance(dtype, np.dtype)


@given(nps.nested_dtypes())
def test_can_generate_compound_dtypes(dtype):
    assert isinstance(dtype, np.dtype)


@given(nps.nested_dtypes(max_itemsize=settings.default.buffer_size // 10),
       st.data())
def test_infer_strategy_from_dtype(dtype, data):
    # Given a dtype
    assert isinstance(dtype, np.dtype)
    # We can infer a strategy
    strat = nps.from_dtype(dtype)
    assert isinstance(strat, SearchStrategy)
    # And use it to fill an array of that dtype
    data.draw(nps.arrays(dtype, 10, strat))


@given(nps.nested_dtypes())
def test_np_dtype_is_idempotent(dtype):
    assert dtype == np.dtype(dtype)


def test_minimise_scalar_dtypes():
    assert minimal(nps.scalar_dtypes()) == np.dtype(u'bool')


def test_minimise_nested_types():
    assert minimal(nps.nested_dtypes()) == np.dtype(u'bool')


def test_minimise_array_strategy():
    smallest = minimal(nps.arrays(
        nps.nested_dtypes(max_itemsize=settings.default.buffer_size // 3**3),
        nps.array_shapes(max_dims=3, max_side=3)))
    assert smallest.dtype == np.dtype(u'bool') and not smallest.any()


@given(nps.array_dtypes(allow_subarrays=False))
def test_can_turn_off_subarrays(dt):
    for field, _ in dt.fields.values():
        assert field.shape == ()


@given(nps.integer_dtypes(endianness='>'))
def test_can_restrict_endianness(dt):
    if dt.itemsize == 1:
        assert dt.byteorder == '|'
    else:
        assert dt.byteorder == '>'


@given(nps.integer_dtypes(sizes=8))
def test_can_specify_size_as_an_int(dt):
    assert dt.itemsize == 1


@given(st.data())
def test_can_draw_shapeless_from_scalars(data):
    dt = data.draw(nps.scalar_dtypes())
    result = data.draw(nps.arrays(dtype=dt, shape=()))
    assert isinstance(result, dt.type)


@given(st.data())
def test_unicode_string_dtypes_generate_unicode_strings(data):
    dt = data.draw(nps.unicode_string_dtypes())
    result = data.draw(nps.from_dtype(dt))
    assert isinstance(result, text_type)


@given(st.data())
def test_byte_string_dtypes_generate_unicode_strings(data):
    dt = data.draw(nps.byte_string_dtypes())
    result = data.draw(nps.from_dtype(dt))
    assert isinstance(result, binary_type)


@given(nps.arrays(dtype='int8', shape=st.integers(0, 20), unique=True))
def test_array_values_are_unique(arr):
    assert len(set(arr)) == len(arr)


def test_may_fill_with_nan_when_unique_is_set():
    find_any(
        nps.arrays(
            dtype=float, elements=st.floats(allow_nan=False), shape=10,
            unique=True, fill=st.just(float('nan'))),
        lambda x: np.isnan(x).any()
    )


def test_is_still_unique_with_nan_fill():
    @given(nps.arrays(
        dtype=float, elements=st.floats(allow_nan=False), shape=10,
        unique=True, fill=st.just(float('nan'))))
    def test(xs):
        assert len(set(xs)) == len(xs)
    test()


def test_may_not_fill_with_non_nan_when_unique_is_set():
    @given(nps.arrays(
        dtype=float,
        elements=st.floats(allow_nan=False), shape=10,
        unique=True, fill=st.just(0.0)))
    def test(arr):
        pass

    with pytest.raises(InvalidArgument):
        test()


def test_may_not_fill_with_non_nan_when_unique_is_set_and_type_is_not_number():
    @given(nps.arrays(
        dtype=bytes, shape=10, unique=True, fill=st.just(b'')))
    def test(arr):
        pass

    with pytest.raises(InvalidArgument):
        test()


@given(st.data(),
       st.builds('{}[{}]'.format,
                 st.sampled_from(('datetime64', 'timedelta64')),
                 st.sampled_from(nps.TIME_RESOLUTIONS)
                 ).map(np.dtype)
       )
def test_inferring_from_time_dtypes_gives_same_dtype(data, dtype):
    ex = data.draw(nps.from_dtype(dtype))
    assert dtype == ex.dtype


@given(st.data(), nps.byte_string_dtypes() | nps.unicode_string_dtypes())
def test_inferred_string_strategies_roundtrip(data, dtype):
    # Check that we never generate too-long or nul-terminated strings, which
    # cannot be read back out of an array.
    arr = np.zeros(shape=1, dtype=dtype)
    ex = data.draw(nps.from_dtype(arr.dtype))
    arr[0] = ex
    assert arr[0] == ex


@given(st.data(), nps.scalar_dtypes())
def test_all_inferred_scalar_strategies_roundtrip(data, dtype):
    # We only check scalars here, because record/compound/nested dtypes always
    # give an array of np.void objects.  We're interested in whether scalar
    # values are safe, not known type coercion.
    arr = np.zeros(shape=1, dtype=dtype)
    ex = data.draw(nps.from_dtype(arr.dtype))
    assume(ex == ex)  # If not, the roundtrip test *should* fail!  (eg NaN)
    arr[0] = ex
    assert arr[0] == ex


--- hypothesis-python-3.44.1/tests/numpy/test_sampled_from.py ---

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others.
See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import numpy as np

from hypothesis import given, assume
from hypothesis.extra import numpy as npst
from tests.common.utils import checks_deprecated_behaviour
from hypothesis.strategies import data, sampled_from


@given(data(), npst.arrays(
    dtype=npst.scalar_dtypes(),
    shape=npst.array_shapes(max_dims=1)
))
def test_can_sample_1D_numpy_array_without_warning(data, arr):
    elem = data.draw(sampled_from(arr))
    try:
        assume(not np.isnan(elem))
    except TypeError:
        pass
    assert elem in arr


@checks_deprecated_behaviour
@given(data(), npst.arrays(
    dtype=npst.scalar_dtypes(),
    shape=npst.array_shapes(min_dims=2, max_dims=5)
))
def test_sampling_multi_dimensional_arrays_is_deprecated(data, arr):
    data.draw(sampled_from(arr))
hypothesis-python-3.44.1/tests/pandas/000077500000000000000000000000001321557765100177455ustar00rootroot00000000000000
hypothesis-python-3.44.1/tests/pandas/__init__.py000066400000000000000000000012001321557765100220500ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0.
If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-python-3.44.1/tests/pandas/helpers.py000066400000000000000000000033631321557765100217660ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import numpy as np

PANDAS_TIME_DTYPES = tuple(map(np.dtype, [
    'M8[ns]', '>m8[ns]',
]))


def supported_by_pandas(dtype):
    """Checks whether the dtype is one that can be correctly handled by
    Pandas."""
    # Pandas only supports a limited range of timedelta and datetime dtypes
    # compared to the full range that numpy supports, and will convert
    # everything to those types (possibly increasing precision in the course
    # of doing so, which can cause problems if this results in something which
    # does not fit into the desired word type). As a result we want to filter
    # out any timedelta or datetime dtypes that are not of the desired types.
    if dtype.kind in ('m', 'M'):
        return dtype in PANDAS_TIME_DTYPES

    # Pandas does not support non-native byte orders, and things go amusingly
    # wrong in weird places if you try to use them. See
    # https://pandas.pydata.org/pandas-docs/stable/gotchas.html#byte-ordering-issues
    if dtype.byteorder not in ('|', '='):
        return False

    return True
hypothesis-python-3.44.1/tests/pandas/test_argument_validation.py000066400000000000000000000056711321557765100254170ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import hypothesis.strategies as st
import hypothesis.extra.pandas as pdst
from tests.common.arguments import e, argument_validation_test

BAD_ARGS = [
    e(pdst.data_frames),
    e(pdst.data_frames, pdst.columns(1, dtype='not a dtype')),
    e(pdst.data_frames, pdst.columns(1, elements='not a strategy')),
    e(pdst.data_frames, pdst.columns([[]])),
    e(pdst.data_frames, [], index=[]),
    e(pdst.data_frames, [], rows=st.fixed_dictionaries({'A': st.just(1)})),
    e(pdst.data_frames, pdst.columns(1)),
    e(pdst.data_frames, pdst.columns(1, dtype=float, fill=1)),
    e(pdst.data_frames, pdst.columns(1, dtype=float, elements=1)),
    e(pdst.data_frames, pdst.columns(1, fill=1, dtype=float)),
    e(pdst.data_frames, pdst.columns(['A', 'A'], dtype=float)),
    e(pdst.data_frames, pdst.columns(1, elements=st.none(), dtype=int)),
    e(pdst.data_frames, 1),
    e(pdst.data_frames, [1]),
    e(pdst.data_frames, pdst.columns(1, dtype='category')),
    e(pdst.data_frames, pdst.columns(['A'], dtype=bool),
      rows=st.tuples(st.booleans(), st.booleans())),
    e(pdst.data_frames, pdst.columns(1, elements=st.booleans()),
      rows=st.tuples(st.booleans())),
    e(pdst.data_frames, rows=st.integers(), index=pdst.range_indexes(0, 0)),
    e(pdst.data_frames, rows=st.integers(), index=pdst.range_indexes(1, 1)),
    e(pdst.data_frames, pdst.columns(1, dtype=int), rows=st.integers()),
    e(pdst.indexes),
    e(pdst.indexes, dtype='category'),
    e(pdst.indexes, dtype='not a dtype'),
    e(pdst.indexes, elements='not a strategy'),
    e(pdst.indexes, elements=st.text(), dtype=float),
    e(pdst.indexes, elements=st.none(), dtype=int),
    e(pdst.indexes, dtype=int, max_size=0, min_size=1),
    e(pdst.indexes, dtype=int, unique='true'),
    e(pdst.indexes, dtype=int, min_size='0'),
    e(pdst.indexes, dtype=int, max_size='1'),
    e(pdst.range_indexes, 1, 0),
    e(pdst.range_indexes, min_size='0'),
    e(pdst.range_indexes, max_size='1'),
    e(pdst.series),
    e(pdst.series, dtype='not a dtype'),
    e(pdst.series, elements='not a strategy'),
    e(pdst.series, elements=st.none(), dtype=int),
    e(pdst.series, dtype='category'),
    e(pdst.series, index='not a strategy'),
]

test_raise_invalid_argument = argument_validation_test(BAD_ARGS)
hypothesis-python-3.44.1/tests/pandas/test_data_frame.py000066400000000000000000000160141321557765100234430ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import numpy as np import pytest import hypothesis.strategies as st import hypothesis.extra.numpy as npst import hypothesis.extra.pandas as pdst from hypothesis import given, reject from hypothesis.types import RandomWithSeed as Random from tests.common.debug import minimal, find_any from tests.pandas.helpers import supported_by_pandas @given(pdst.data_frames([ pdst.column('a', dtype=int), pdst.column('b', dtype=float), ])) def test_can_have_columns_of_distinct_types(df): assert df['a'].dtype == np.dtype(int) assert df['b'].dtype == np.dtype(float) @given(pdst.data_frames( [pdst.column(dtype=int)], index=pdst.range_indexes(min_size=1, max_size=5))) def test_respects_size_bounds(df): assert 1 <= len(df) <= 5 @given(pdst.data_frames(pdst.columns(['A', 'B'], dtype=float))) def test_can_specify_just_column_names(df): df['A'] df['B'] @given(pdst.data_frames(pdst.columns(2, dtype=float))) def test_can_specify_just_column_count(df): df[0] df[1] @given(pdst.data_frames( rows=st.fixed_dictionaries({'A': st.integers(1, 10), 'B': st.floats()})) ) def test_gets_the_correct_data_shape_for_just_rows(table): assert table['A'].dtype == np.dtype('int64') assert table['B'].dtype == np.dtype(float) @given(pdst.data_frames( columns=pdst.columns(['A', 'B'], dtype=int), rows=st.lists(st.integers(0, 1000), min_size=2, max_size=2).map(sorted), )) def test_can_specify_both_rows_and_columns_list(d): assert d['A'].dtype == np.dtype(int) assert d['B'].dtype == np.dtype(int) for _, r in d.iterrows(): assert r['A'] <= r['B'] @given(pdst.data_frames( columns=pdst.columns(['A', 'B'], dtype=int), rows=st.lists( st.integers(0, 1000), min_size=2, max_size=2).map(sorted).map(tuple), )) def test_can_specify_both_rows_and_columns_tuple(d): assert d['A'].dtype == np.dtype(int) assert d['B'].dtype == np.dtype(int) for _, r in d.iterrows(): assert r['A'] <= r['B'] @given(pdst.data_frames( columns=pdst.columns(['A', 'B'], 
dtype=int), rows=st.lists(st.integers(0, 1000), min_size=2, max_size=2).map( lambda x: {'A': min(x), 'B': max(x)}), )) def test_can_specify_both_rows_and_columns_dict(d): assert d['A'].dtype == np.dtype(int) assert d['B'].dtype == np.dtype(int) for _, r in d.iterrows(): assert r['A'] <= r['B'] @given( pdst.data_frames([pdst.column('A', fill=st.just(float('nan')), dtype=float, elements=st.floats(allow_nan=False))], rows=st.builds(dict))) def test_can_fill_in_missing_elements_from_dict(df): assert np.isnan(df['A']).all() subsets = ['', 'A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC'] @pytest.mark.parametrize('disable_fill', subsets) @pytest.mark.parametrize('non_standard_index', [True, False]) def test_can_minimize_based_on_two_columns_independently( disable_fill, non_standard_index ): columns = [ pdst.column( name, dtype=bool, fill=st.nothing() if name in disable_fill else None, ) for name in ['A', 'B', 'C'] ] x = minimal( pdst.data_frames( columns, index=pdst.indexes(dtype=int) if non_standard_index else None, ), lambda x: x['A'].any() and x['B'].any() and x['C'].any(), random=Random(0), ) assert len(x['A']) == 1 assert x['A'][0] == 1 assert x['B'][0] == 1 assert x['C'][0] == 1 @st.composite def column_strategy(draw): name = draw(st.none() | st.text()) dtype = draw(npst.scalar_dtypes().filter(supported_by_pandas)) pass_dtype = not draw(st.booleans()) if pass_dtype: pass_elements = not draw(st.booleans()) else: pass_elements = True if pass_elements: elements = npst.from_dtype(dtype) else: elements = None unique = draw(st.booleans()) fill = st.nothing() if draw(st.booleans()) else None return pdst.column( name=name, dtype=dtype, unique=unique, fill=fill, elements=elements) @given(pdst.data_frames(pdst.columns(1, dtype=np.dtype('= '0.19': assert index.dtype == inferred_dtype else: assert index.dtype == converted_dtype if unique: assert len(set(index.values)) == len(index) 
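Several uniqueness assertions in this test suite check `len(set(xs)) == len(xs)` even when the values include NaN (for example `test_is_still_unique_with_nan_fill` earlier, and the unique-index assertion just above). That only works because of how Python sets treat NaN, which the following stdlib-only sketch illustrates (this is illustrative commentary, not part of the archived test suite):

```python
# Python containers deduplicate by identity first, then equality.  Distinct
# NaN objects compare unequal (NaN != NaN), so a set keeps all of them;
# only the *same* NaN object deduplicates, via the identity short-circuit.
a, b = float('nan'), float('nan')
assert a != a and a != b       # NaN never compares equal, even to itself
assert len({a, b}) == 2        # two distinct NaN objects: both kept
assert len({a, a}) == 1        # one object twice: identity deduplicates
```

This is why a NaN fill value cannot violate uniqueness for float dtypes, while any non-NaN fill value can, and why the strategies reject non-NaN fills when `unique=True`.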
hypothesis-python-3.44.1/tests/pandas/test_series.py000066400000000000000000000041131321557765100226470ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import numpy as np
import pandas

import hypothesis.strategies as st
import hypothesis.extra.numpy as npst
import hypothesis.extra.pandas as pdst
from hypothesis import given, assume
from tests.common.debug import find_any
from tests.pandas.helpers import supported_by_pandas


@given(st.data())
def test_can_create_a_series_of_any_dtype(data):
    dtype = np.dtype(data.draw(npst.scalar_dtypes()))
    assume(supported_by_pandas(dtype))
    series = data.draw(pdst.series(dtype=dtype))
    assert series.dtype == pandas.Series([], dtype=dtype).dtype


@given(pdst.series(
    dtype=float, index=pdst.range_indexes(min_size=2, max_size=5)))
def test_series_respects_size_bounds(s):
    assert 2 <= len(s) <= 5


def test_can_fill_series():
    nan_backed = pdst.series(
        elements=st.floats(allow_nan=False), fill=st.just(float('nan')))
    find_any(
        nan_backed,
        lambda x: np.isnan(x).any()
    )


@given(pdst.series(dtype=int))
def test_can_generate_integral_series(s):
    assert s.dtype == np.dtype(int)


@given(pdst.series(elements=st.integers(0, 10)))
def test_will_use_dtype_of_elements(s):
    assert s.dtype == np.dtype('int64')


@given(pdst.series(elements=st.floats(allow_nan=False)))
def test_will_use_a_provided_elements_strategy(s):
    assert not np.isnan(s).any()


@given(pdst.series(dtype='int8', unique=True))
def test_unique_series_are_unique(s):
    assert len(s) == len(set(s))
hypothesis-python-3.44.1/tests/py2/000077500000000000000000000000001321557765100172115ustar00rootroot00000000000000
hypothesis-python-3.44.1/tests/py2/__init__.py000066400000000000000000000012001321557765100213130ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-python-3.44.1/tests/py2/test_destructuring.py000066400000000000000000000022701321557765100235250ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import pytest

from hypothesis import given
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import integers
from hypothesis.internal.reflection import get_pretty_function_description


def test_destructuring_lambdas():
    assert get_pretty_function_description(lambda (x, y): 1) == \
        u'lambda (x, y): '


def test_destructuring_not_allowed():
    @given(integers())
    def foo(a, (b, c)):
        pass

    with pytest.raises(InvalidArgument):
        foo()
hypothesis-python-3.44.1/tests/py3/000077500000000000000000000000001321557765100172125ustar00rootroot00000000000000
hypothesis-python-3.44.1/tests/py3/__init__.py000066400000000000000000000012001321557765100213140ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-python-3.44.1/tests/py3/test_annotations.py000066400000000000000000000066361321557765100231630ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
# # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest import hypothesis.strategies as st from hypothesis import given from hypothesis.errors import InvalidArgument from hypothesis.internal.compat import getfullargspec from hypothesis.internal.reflection import define_function_signature, \ convert_positional_arguments, get_pretty_function_description @given(st.integers()) def test_has_an_annotation(i: int): pass def universal_acceptor(*args, **kwargs): return args, kwargs def has_annotation(a: int, *b, c=2) -> None: pass @pytest.mark.parametrize('f', [ has_annotation, lambda *, a: a, lambda *, a=1: a, ]) def test_copying_preserves_argspec(f): af = getfullargspec(f) t = define_function_signature('foo', 'docstring', af)(universal_acceptor) at = getfullargspec(t) assert af.args == at.args[:len(af.args)] assert af.varargs == at.varargs assert af.varkw == at.varkw assert len(af.defaults or ()) == len(at.defaults or ()) assert af.kwonlyargs == at.kwonlyargs assert af.kwonlydefaults == at.kwonlydefaults assert af.annotations == at.annotations @pytest.mark.parametrize('lam,source', [ ((lambda *z, a: a), 'lambda *z, a: a'), ((lambda *z, a=1: a), 'lambda *z, a=1: a'), ((lambda *, a: a), 'lambda *, a: a'), ((lambda *, a=1: a), 'lambda *, a=1: a'), ]) def test_py3only_lambda_formatting(lam, source): # Testing kwonly lambdas, with and without varargs and default values assert get_pretty_function_description(lam) == source def test_given_notices_missing_kwonly_args(): with pytest.raises(InvalidArgument): @given(a=st.none()) def reqs_kwonly(*, a, b): pass def test_converter_handles_kwonly_args(): def f(*, a, b=2): pass out = convert_positional_arguments(f, (), dict(a=1)) assert out == ((), dict(a=1, b=2)) def 
test_converter_notices_missing_kwonly_args():
    def f(*, a, b=2):
        pass
    with pytest.raises(TypeError):
        assert convert_positional_arguments(f, (), dict())


def pointless_composite(draw: None, strat: bool, nothing: list) -> int:
    return 3


def return_annot() -> int:
    return 4  # per RFC 1149.5 / xkcd 221


def first_annot(draw: None):
    pass


def test_composite_edits_annotations():
    spec_comp = getfullargspec(st.composite(pointless_composite))
    assert spec_comp.annotations['return'] == int
    assert 'nothing' in spec_comp.annotations
    assert 'draw' not in spec_comp.annotations


@pytest.mark.parametrize('nargs', [1, 2, 3])
def test_given_edits_annotations(nargs):
    spec_given = getfullargspec(
        given(*(nargs * [st.none()]))(pointless_composite))
    assert spec_given.annotations.pop('return') is None
    assert len(spec_given.annotations) == 3 - nargs
hypothesis-python-3.44.1/tests/py3/test_asyncio.py000066400000000000000000000033021321557765100222660ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import asyncio
import unittest
from unittest import TestCase

import hypothesis.strategies as st
from hypothesis import given, assume


class TestAsyncio(TestCase):

    timeout = 5

    def setUp(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)

    def tearDown(self):
        self.loop.close()

    def execute_example(self, f):
        error = None

        def g():
            nonlocal error
            try:
                x = f()
                if x is not None:
                    yield from x
            except BaseException as e:
                error = e
        coro = asyncio.coroutine(g)
        future = asyncio.wait_for(coro(), timeout=self.timeout)
        self.loop.run_until_complete(future)
        if error is not None:
            raise error

    @given(st.text())
    def test_foo(self, x):
        assume(x)
        yield from asyncio.sleep(0.001)
        assert x


if __name__ == '__main__':
    unittest.main()
hypothesis-python-3.44.1/tests/py3/test_lookup.py000066400000000000000000000257721321557765100221450ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import import io import sys import enum import string import collections import pytest import hypothesis.strategies as st from hypothesis import find, given, infer, assume from hypothesis.errors import NoExamples, InvalidArgument, ResolutionFailed from hypothesis.strategies import from_type from hypothesis.searchstrategy import types from hypothesis.internal.compat import integer_types, get_type_hints typing = pytest.importorskip('typing') sentinel = object() generics = sorted((t for t in types._global_type_lookup if isinstance(t, typing.GenericMeta)), key=str) @pytest.mark.parametrize('typ', generics) def test_resolve_typing_module(typ): @given(from_type(typ)) def inner(ex): if typ in (typing.BinaryIO, typing.TextIO): assert isinstance(ex, io.IOBase) elif typ is typing.Tuple: # isinstance is incompatible with Tuple on early 3.5 assert ex == () elif isinstance(typ, typing._ProtocolMeta): pass else: try: assert isinstance(ex, typ) except TypeError: if sys.version_info[:2] < (3, 6): pytest.skip() raise inner() @pytest.mark.parametrize('typ', [typing.Any, typing.Union]) def test_does_not_resolve_special_cases(typ): with pytest.raises(InvalidArgument): from_type(typ).example() @pytest.mark.parametrize('typ,instance_of', [ (typing.Union[int, str], (int, str)), (typing.Optional[int], (int, type(None))), ]) def test_specialised_scalar_types(typ, instance_of): @given(from_type(typ)) def inner(ex): assert isinstance(ex, instance_of) inner() @pytest.mark.skipif(not hasattr(typing, 'Type'), reason='requires this attr') def test_typing_Type_int(): assert from_type(typing.Type[int]).example() is int @pytest.mark.skipif(not hasattr(typing, 'Type'), reason='requires this attr') def test_typing_Type_Union(): @given(from_type(typing.Type[typing.Union[str, list]])) def inner(ex): assert ex in (str, list) inner() @pytest.mark.parametrize('typ,coll_type,instance_of', [ (typing.Set[int], set, int), 
(typing.FrozenSet[int], frozenset, int), (typing.Dict[int, int], dict, int), (typing.KeysView[int], type({}.keys()), int), (typing.ValuesView[int], type({}.values()), int), (typing.List[int], list, int), (typing.Tuple[int], tuple, int), (typing.Tuple[int, ...], tuple, int), (typing.Iterator[int], typing.Iterator, int), (typing.Sequence[int], typing.Sequence, int), (typing.Iterable[int], typing.Iterable, int), (typing.Mapping[int, None], typing.Mapping, int), (typing.Container[int], typing.Container, int), (typing.NamedTuple('A_NamedTuple', (('elem', int),)), tuple, int), ]) def test_specialised_collection_types(typ, coll_type, instance_of): @given(from_type(typ)) def inner(ex): if sys.version_info[:2] >= (3, 6): assume(ex) assert isinstance(ex, coll_type) assert all(isinstance(elem, instance_of) for elem in ex) try: inner() except (ResolutionFailed, AssertionError): if sys.version_info[:2] < (3, 6): pytest.skip('Hard-to-reproduce bug (early version of typing?)') raise @pytest.mark.skipif(sys.version_info[:2] < (3, 6), reason='new addition') def test_36_specialised_collection_types(): @given(from_type(typing.DefaultDict[int, int])) def inner(ex): if sys.version_info[:2] >= (3, 6): assume(ex) assert isinstance(ex, collections.defaultdict) assert all(isinstance(elem, int) for elem in ex) assert all(isinstance(elem, int) for elem in ex.values()) inner() @pytest.mark.skipif(sys.version_info[:3] <= (3, 5, 1), reason='broken') def test_ItemsView(): @given(from_type(typing.ItemsView[int, int])) def inner(ex): # See https://github.com/python/typing/issues/177 if sys.version_info[:2] >= (3, 6): assume(ex) assert isinstance(ex, type({}.items())) assert all(isinstance(elem, tuple) and len(elem) == 2 for elem in ex) assert all(all(isinstance(e, int) for e in elem) for elem in ex) inner() def test_Optional_minimises_to_None(): assert find(from_type(typing.Optional[int]), lambda ex: True) is None @pytest.mark.parametrize('n', range(10)) def test_variable_length_tuples(n): type_ = 
typing.Tuple[int, ...] try: from_type(type_).filter(lambda ex: len(ex) == n).example() except NoExamples: if sys.version_info[:2] < (3, 6): pytest.skip() raise @pytest.mark.skipif(sys.version_info[:3] <= (3, 5, 1), reason='broken') def test_lookup_overrides_defaults(): sentinel = object() try: st.register_type_strategy(int, st.just(sentinel)) @given(from_type(typing.List[int])) def inner_1(ex): assert all(elem is sentinel for elem in ex) inner_1() finally: st.register_type_strategy(int, st.integers()) st.from_type.__clear_cache() @given(from_type(typing.List[int])) def inner_2(ex): assert all(isinstance(elem, int) for elem in ex) inner_2() def test_register_generic_typing_strats(): # I don't expect anyone to do this, but good to check it works as expected try: # We register sets for the abstract sequence type, which masks subtypes # from supertype resolution but not direct resolution st.register_type_strategy( typing.Sequence, types._global_type_lookup[typing.Set] ) @given(from_type(typing.Sequence[int])) def inner_1(ex): assert isinstance(ex, set) @given(from_type(typing.Container[int])) def inner_2(ex): assert not isinstance(ex, typing.Sequence) @given(from_type(typing.List[int])) def inner_3(ex): assert isinstance(ex, list) inner_1() inner_2() inner_3() finally: types._global_type_lookup.pop(typing.Sequence) st.from_type.__clear_cache() @pytest.mark.parametrize('typ', [ typing.Sequence, typing.Container, typing.Mapping, typing.Reversible, typing.SupportsBytes, typing.SupportsAbs, typing.SupportsComplex, typing.SupportsFloat, typing.SupportsInt, typing.SupportsRound, ]) def test_resolves_weird_types(typ): from_type(typ).example() def annotated_func(a: int, b: int=2, *, c: int, d: int=4): return a + b + c + d def test_issue_946_regression(): # Turned type hints into kwargs even if the required posarg was passed st.builds(annotated_func, st.integers()).example() @pytest.mark.parametrize('thing', [ annotated_func, # Works via typing.get_type_hints 
typing.NamedTuple('N', [('a', int)]), # Falls back to inspection int, # Fails; returns empty dict ]) def test_can_get_type_hints(thing): assert isinstance(get_type_hints(thing), dict) def test_force_builds_to_infer_strategies_for_default_args(): # By default, leaves args with defaults and minimises to 2+4=6 assert find(st.builds(annotated_func), lambda ex: True) == 6 # Inferring integers() for args makes it minimise to zero assert find(st.builds(annotated_func, b=infer, d=infer), lambda ex: True) == 0 def non_annotated_func(a, b=2, *, c, d=4): pass def test_cannot_pass_infer_as_posarg(): with pytest.raises(InvalidArgument): st.builds(annotated_func, infer).example() def test_cannot_force_inference_for_unannotated_arg(): with pytest.raises(InvalidArgument): st.builds(non_annotated_func, a=infer, c=st.none()).example() with pytest.raises(InvalidArgument): st.builds(non_annotated_func, a=st.none(), c=infer).example() class UnknownType(object): def __init__(self, arg): pass class UnknownAnnotatedType(object): def __init__(self, arg: int): pass @given(st.from_type(UnknownAnnotatedType)) def test_builds_for_unknown_annotated_type(ex): assert isinstance(ex, UnknownAnnotatedType) def unknown_annotated_func(a: UnknownType, b=2, *, c: UnknownType, d=4): pass def test_raises_for_arg_with_unresolvable_annotation(): with pytest.raises(ResolutionFailed): st.builds(unknown_annotated_func).example() with pytest.raises(ResolutionFailed): st.builds(unknown_annotated_func, a=st.none(), c=infer).example() @given(a=infer, b=infer) def test_can_use_type_hints(a: int, b: float): assert isinstance(a, int) and isinstance(b, float) def test_error_if_has_unresolvable_hints(): @given(a=infer) def inner(a: UnknownType): pass with pytest.raises(InvalidArgument): inner() @pytest.mark.skipif(not hasattr(typing, 'NewType'), reason='test for NewType') def test_resolves_NewType(): typ = typing.NewType('T', int) nested = typing.NewType('NestedT', typ) uni = typing.NewType('UnionT', 
typing.Optional[int]) assert isinstance(from_type(typ).example(), integer_types) assert isinstance(from_type(nested).example(), integer_types) assert isinstance(from_type(uni).example(), integer_types + (type(None),)) E = enum.Enum('E', 'a b c') @given(from_type(E)) def test_resolves_enum(ex): assert isinstance(ex, E) @pytest.mark.skipif(not hasattr(enum, 'Flag'), reason='test for Flag') @pytest.mark.parametrize('resolver', [from_type, st.sampled_from]) def test_resolves_flag_enum(resolver): # Storing all combinations takes O(2^n) memory. Using an enum of 52 # members in this test ensures that we won't try! F = enum.Flag('F', ' '.join(string.ascii_letters)) # Filter to check that we can generate compound members of enum.Flags @given(resolver(F).filter(lambda ex: ex not in tuple(F))) def inner(ex): assert isinstance(ex, F) inner() class AnnotatedTarget(object): def __init__(self, a: int, b: int): pass def method(self, a: int, b: int): pass @pytest.mark.parametrize('target', [ AnnotatedTarget, AnnotatedTarget(1, 2).method ]) @pytest.mark.parametrize('args,kwargs', [ ((), {}), ((1,), {}), ((1, 2), {}), ((), dict(a=1)), ((), dict(b=2)), ((), dict(a=1, b=2)), ]) def test_required_args(target, args, kwargs): # Mostly checking that `self` (and only self) is correctly excluded st.builds(target, *map(st.just, args), **{k: st.just(v) for k, v in kwargs.items()}).example() hypothesis-python-3.44.1/tests/py3/test_unicode_identifiers.py000066400000000000000000000020621321557765100246360ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. 
# # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.internal.reflection import proxies def test_can_copy_argspec_of_unicode_args(): def foo(μ): return μ @proxies(foo) def bar(μ): return foo(μ) assert bar(1) == 1 def test_can_copy_argspec_of_unicode_name(): def ā(): return 1 @proxies(ā) def bar(): return 2 assert bar() == 2 hypothesis-python-3.44.1/tests/pytest/000077500000000000000000000000001321557765100200275ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/pytest/test_capture.py000066400000000000000000000101641321557765100231050ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis.internal.compat import PY2, WINDOWS, hunichr, \ escape_unicode_characters pytest_plugins = str('pytester') TESTSUITE = """ from hypothesis import given, settings, Verbosity from hypothesis.strategies import integers @settings(verbosity=Verbosity.verbose) @given(integers()) def test_should_be_verbose(x): pass """ @pytest.mark.parametrize('capture,expected', [ ('no', True), ('fd', False), ]) def test_output_without_capture(testdir, capture, expected): script = testdir.makepyfile(TESTSUITE) result = testdir.runpytest(script, '--verbose', '--capture', capture) out = '\n'.join(result.stdout.lines) assert 'test_should_be_verbose' in out assert ('Trying example' in out) == expected assert result.ret == 0 UNICODE_EMITTING = """ import pytest from hypothesis import given, settings, Verbosity from hypothesis.strategies import text from hypothesis.internal.compat import PY3 import sys @settings(verbosity=Verbosity.verbose) def test_emits_unicode(): @given(text()) def test_should_emit_unicode(t): assert all(ord(c) <= 1000 for c in t) with pytest.raises(AssertionError): test_should_emit_unicode() """ @pytest.mark.xfail( WINDOWS, reason=( "Encoding issues in running the subprocess, possibly py.test's fault")) @pytest.mark.skipif( PY2, reason="Output streams don't have encodings in python 2") def test_output_emitting_unicode(testdir, monkeypatch): monkeypatch.setenv('LC_ALL', 'C') monkeypatch.setenv('LANG', 'C') script = testdir.makepyfile(UNICODE_EMITTING) result = getattr( testdir, 'runpytest_subprocess', testdir.runpytest)( script, '--verbose', '--capture=no') out = '\n'.join(result.stdout.lines) assert 'test_emits_unicode' in out assert escape_unicode_characters(hunichr(1001)) in out assert result.ret == 0 TRACEBACKHIDE_TIMEOUT = """ from hypothesis import given, settings from hypothesis.strategies import integers import time @given(integers()) @settings(timeout=1) 
def test_timeout_traceback_is_hidden(i): time.sleep(1.1) """ def get_line_num(token, result): for i, line in enumerate(result.stdout.lines): if token in line: return i assert False, 'Token %r not found' % token def test_timeout_traceback_is_hidden(testdir): script = testdir.makepyfile(TRACEBACKHIDE_TIMEOUT) result = testdir.runpytest(script, '--verbose') def_line = get_line_num('def test_timeout_traceback_is_hidden', result) timeout_line = get_line_num('Timeout: Ran out of time', result) # If __tracebackhide__ works, then the Timeout error message will be # next to the test name. If it doesn't work, then the message will be # many lines apart with source code dump between them. assert timeout_line - def_line == 1 TRACEBACKHIDE_HEALTHCHECK = """ from hypothesis import given, settings from hypothesis.strategies import integers import time @given(integers().map(lambda x: time.sleep(0.2))) def test_healthcheck_traceback_is_hidden(x): pass """ def test_healthcheck_traceback_is_hidden(testdir): script = testdir.makepyfile(TRACEBACKHIDE_HEALTHCHECK) result = testdir.runpytest(script, '--verbose') def_token = '__ test_healthcheck_traceback_is_hidden __' timeout_token = ': FailedHealthCheck' def_line = get_line_num(def_token, result) timeout_line = get_line_num(timeout_token, result) assert timeout_line - def_line == 6 hypothesis-python-3.44.1/tests/pytest/test_compat.py000066400000000000000000000016131321557765100227240ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from hypothesis import given from hypothesis.strategies import booleans @given(booleans()) @pytest.mark.parametrize('hi', (1, 2, 3)) def test_parametrize_after_given(hi, i): pass hypothesis-python-3.44.1/tests/pytest/test_doctest.py000066400000000000000000000017631321557765100231140ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import pytest_plugins = 'pytester' def test_can_run_doctests(testdir): script = testdir.makepyfile( 'def hi():\n' ' """\n' ' >>> i = 5\n' ' >>> i-1\n' ' 4"""' ) result = testdir.runpytest(script, '--doctest-modules') assert result.ret == 0 hypothesis-python-3.44.1/tests/pytest/test_fixtures.py000066400000000000000000000042331321557765100233130ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import import pytest from mock import Mock, create_autospec from hypothesis import given, example from tests.common.utils import fails from hypothesis.strategies import integers @pytest.fixture def infinity(): return float('inf') @pytest.fixture def mock_fixture(): return Mock() @pytest.fixture def spec_fixture(): class Foo(): def __init__(self): pass def bar(self): return 'baz' return create_autospec(Foo) @given(integers()) def test_can_mix_fixture_and_positional_strategy(infinity, xs): # Hypothesis fills arguments from the right, so if @given() uses # positional arguments then any strategies need to be on the right. assert xs <= infinity @given(xs=integers()) def test_can_mix_fixture_and_keyword_strategy(xs, infinity): assert xs <= infinity @example(xs=0) @given(xs=integers()) def test_can_mix_fixture_example_and_keyword_strategy(xs, infinity): assert xs <= infinity @fails @given(integers()) def test_can_inject_mock_via_fixture(mock_fixture, xs): """A negative test is better for this one - this condition uncovers a bug whereby the mock fixture is executed instead of the test body and always succeeds. If this test fails, then we know we've run the test body instead of the mock. 
""" assert False @given(integers()) def test_can_inject_autospecced_mock_via_fixture(spec_fixture, xs): spec_fixture.bar.return_value = infinity() assert xs <= spec_fixture.bar() hypothesis-python-3.44.1/tests/pytest/test_mark.py000066400000000000000000000033231321557765100223730ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import pytest_plugins = str('pytester') TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers @given(integers()) def test_foo(x): pass def test_bar(): pass """ def test_can_select_mark(testdir): script = testdir.makepyfile(TESTSUITE) result = testdir.runpytest(script, '--verbose', '--strict', '-m', 'hypothesis') out = '\n'.join(result.stdout.lines) assert '1 passed, 1 deselected' in out UNITTEST_TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers from unittest import TestCase class TestStuff(TestCase): @given(integers()) def test_foo(self, x): pass def test_bar(self): pass """ def test_can_select_mark_on_unittest(testdir): script = testdir.makepyfile(UNITTEST_TESTSUITE) result = testdir.runpytest(script, '--verbose', '--strict', '-m', 'hypothesis') out = '\n'.join(result.stdout.lines) assert '1 passed, 1 deselected' in out 
hypothesis-python-3.44.1/tests/pytest/test_profiles.py000066400000000000000000000025511321557765100232660ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.extra.pytestplugin import LOAD_PROFILE_OPTION pytest_plugins = str('pytester') CONFTEST = """ from hypothesis._settings import settings settings.register_profile("test", settings(max_examples=1)) """ TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers from hypothesis._settings import settings def test_this_one_is_ok(): assert settings().max_examples == 1 """ def test_runs_reporting_hook(testdir): script = testdir.makepyfile(TESTSUITE) testdir.makeconftest(CONFTEST) result = testdir.runpytest(script, LOAD_PROFILE_OPTION, 'test') out = '\n'.join(result.stdout.lines) assert '1 passed' in out hypothesis-python-3.44.1/tests/pytest/test_pytest_detection.py000066400000000000000000000022411321557765100250250ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. 
See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER """This module provides the core primitives of Hypothesis, such as given.""" from __future__ import division, print_function, absolute_import import sys import subprocess import hypothesis.core as core def test_is_running_under_pytest(): assert core.running_under_pytest FILE_TO_RUN = """ import hypothesis.core as core assert not core.running_under_pytest """ def test_is_not_running_under_pytest(tmpdir): pyfile = tmpdir.join('test.py') pyfile.write(FILE_TO_RUN) subprocess.check_call([sys.executable, str(pyfile)]) hypothesis-python-3.44.1/tests/pytest/test_reporting.py000066400000000000000000000024071321557765100234540ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
#
# END HEADER

from __future__ import division, print_function, absolute_import

pytest_plugins = str('pytester')

TESTSUITE = """
from hypothesis import given
from hypothesis.strategies import lists, integers

@given(integers())
def test_this_one_is_ok(x):
    pass

@given(lists(integers()))
def test_hi(xs):
    assert False
"""


def test_runs_reporting_hook(testdir):
    script = testdir.makepyfile(TESTSUITE)
    result = testdir.runpytest(script, '--verbose')
    out = '\n'.join(result.stdout.lines)
    assert 'test_this_one_is_ok' in out
    assert 'Captured stdout call' not in out
    assert 'Falsifying example' in out
    assert result.ret != 0
hypothesis-python-3.44.1/tests/pytest/test_runs.py000066400000000000000000000017021321557765100224270ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

from hypothesis import given
from tests.common.utils import fails
from hypothesis.strategies import integers


@given(integers())
def test_ints_are_ints(x):
    pass


@fails
@given(integers())
def test_ints_are_floats(x):
    assert isinstance(x, float)
hypothesis-python-3.44.1/tests/pytest/test_seeding.py000066400000000000000000000063461321557765100230660ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import re

import pytest

from hypothesis.internal.compat import hrange

pytest_plugins = str('pytester')

TEST_SUITE = """
from hypothesis import given, settings, assume
import hypothesis.strategies as st

first = None

@settings(database=None)
@given(st.integers())
def test_fails_once(some_int):
    assume(abs(some_int) > 10000)
    global first
    if first is None:
        first = some_int
    assert some_int != first
"""

CONTAINS_SEED_INSTRUCTION = re.compile(r"--hypothesis-seed=\d+", re.MULTILINE)


@pytest.mark.parametrize('seed', [0, 42, 'foo'])
def test_runs_repeatably_when_seed_is_set(seed, testdir):
    script = testdir.makepyfile(TEST_SUITE)

    results = [
        testdir.runpytest(
            script, '--verbose', '--strict', '--hypothesis-seed', str(seed)
        )
        for _ in hrange(2)
    ]

    for r in results:
        for l in r.stdout.lines:
            assert '--hypothesis-seed' not in l

    failure_lines = [
        l for r in results for l in r.stdout.lines if 'some_int=' in l
    ]

    assert len(failure_lines) == 2
    assert failure_lines[0] == failure_lines[1]


HEALTH_CHECK_FAILURE = """
import os

from hypothesis import given, strategies as st, assume, reject

RECORD_EXAMPLES = <file>

if os.path.exists(RECORD_EXAMPLES):
    target = None
    with open(RECORD_EXAMPLES, 'r') as i:
        seen = set(map(int, i.read().strip().split("\\n")))
else:
    target = open(RECORD_EXAMPLES, 'w')

@given(st.integers())
def test_failure(i):
    if target is None:
        assume(i not in seen)
    else:
        target.write("%s\\n" % (i,))
        reject()
"""


def test_repeats_healthcheck_when_following_seed_instruction(testdir, tmpdir):
    # The <file> placeholder in HEALTH_CHECK_FAILURE is substituted with a
    # per-run temporary path before the script is written out.
    health_check_test = HEALTH_CHECK_FAILURE.replace(
        '<file>', repr(str(tmpdir.join('seen'))))

    script = testdir.makepyfile(health_check_test)

    initial = testdir.runpytest(script, '--verbose', '--strict',)

    initial_output = '\n'.join(initial.stdout.lines)
    match = CONTAINS_SEED_INSTRUCTION.search(initial_output)
    assert match is not None

    rerun = testdir.runpytest(script, '--verbose', '--strict', match.group(0))
    rerun_output = '\n'.join(rerun.stdout.lines)

    assert 'FailedHealthCheck' in rerun_output
    assert '--hypothesis-seed' not in rerun_output

    rerun2 = testdir.runpytest(
        script, '--verbose', '--strict', '--hypothesis-seed=10')
    rerun2_output = '\n'.join(rerun2.stdout.lines)
    assert 'FailedHealthCheck' not in rerun2_output
hypothesis-python-3.44.1/tests/pytest/test_skipping.py000066400000000000000000000026761321557765100232750ustar00rootroot00000000000000
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
# # END HEADER from __future__ import division, print_function, absolute_import pytest_plugins = str('pytester') PYTEST_TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers import pytest @given(xs=integers()) def test_to_be_skipped(xs): if xs == 0: pytest.skip() else: assert xs == 0 """ def test_no_falsifying_example_if_pytest_skip(testdir): """If ``pytest.skip() is called during a test, Hypothesis should not continue running the test and shrink process, nor should it print anything about falsifying examples.""" script = testdir.makepyfile(PYTEST_TESTSUITE) result = testdir.runpytest(script, '--verbose', '--strict', '-m', 'hypothesis') out = '\n'.join(result.stdout.lines) assert 'Falsifying example' not in out hypothesis-python-3.44.1/tests/pytest/test_statistics.py000066400000000000000000000062361321557765100236410ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. 
# # END HEADER from __future__ import division, print_function, absolute_import from hypothesis.extra.pytestplugin import PRINT_STATISTICS_OPTION pytest_plugins = 'pytester' TESTSUITE = """ from hypothesis import given, settings, assume from hypothesis.strategies import integers import time import warnings from hypothesis.errors import HypothesisDeprecationWarning warnings.simplefilter('always', HypothesisDeprecationWarning) @given(integers()) def test_all_valid(x): pass @settings(timeout=0.2, min_satisfying_examples=1) @given(integers()) def test_slow(x): time.sleep(0.1) @settings( max_examples=1000, min_satisfying_examples=1, perform_health_check=False ) @given(integers()) def test_iterations(x): assume(x % 50 == 0) """ def test_does_not_run_statistics_by_default(testdir): script = testdir.makepyfile(TESTSUITE) result = testdir.runpytest(script) out = '\n'.join(result.stdout.lines) assert 'Hypothesis Statistics' not in out def test_prints_statistics_given_option(testdir): script = testdir.makepyfile(TESTSUITE) result = testdir.runpytest(script, PRINT_STATISTICS_OPTION) out = '\n'.join(result.stdout.lines) assert 'Hypothesis Statistics' in out assert 'timeout=0.2' in out assert 'max_examples=100' in out assert 'max_iterations=1000' in out assert 'HypothesisDeprecationWarning' in out UNITTEST_TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers from unittest import TestCase class TestStuff(TestCase): @given(integers()) def test_all_valid(self, x): pass """ def test_prints_statistics_for_unittest_tests(testdir): script = testdir.makepyfile(UNITTEST_TESTSUITE) result = testdir.runpytest(script, PRINT_STATISTICS_OPTION) out = '\n'.join(result.stdout.lines) assert 'Hypothesis Statistics' in out assert 'TestStuff::test_all_valid' in out assert 'max_examples=100' in out STATEFUL_TESTSUITE = """ from hypothesis import given from hypothesis.strategies import integers from hypothesis.stateful import GenericStateMachine class 
Stuff(GenericStateMachine): def steps(self): return integers() def execute_step(self, step): pass TestStuff = Stuff.TestCase """ def test_prints_statistics_for_stateful_tests(testdir): script = testdir.makepyfile(STATEFUL_TESTSUITE) result = testdir.runpytest(script, PRINT_STATISTICS_OPTION) out = '\n'.join(result.stdout.lines) assert 'Hypothesis Statistics' in out assert 'TestStuff::runTest' in out assert 'max_examples=100' in out hypothesis-python-3.44.1/tests/quality/000077500000000000000000000000001321557765100201675ustar00rootroot00000000000000hypothesis-python-3.44.1/tests/quality/__init__.py000066400000000000000000000012001321557765100222710ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER hypothesis-python-3.44.1/tests/quality/test_discovery_ability.py000066400000000000000000000245561321557765100253400ustar00rootroot00000000000000# coding=utf-8 # # This file is part of Hypothesis, which may be found at # https://github.com/HypothesisWorks/hypothesis-python # # Most of this work is copyright (C) 2013-2017 David R. MacIver # (david@drmaciver.com), but it contains contributions by others. See # CONTRIBUTING.rst for a full list of people who may hold copyright, and # consult the git log if you need to determine who owns an individual # contribution. # # This Source Code Form is subject to the terms of the Mozilla Public License, # v. 2.0. 
If a copy of the MPL was not distributed with this file, You can # obtain one at http://mozilla.org/MPL/2.0/. # # END HEADER # -*- coding: utf-8 -*- """Statistical tests over the forms of the distributions in the standard set of definitions. These tests all take the form of a classic hypothesis test with the null hypothesis being that the probability of some event occurring when drawing data from the distribution produced by some specifier is >= REQUIRED_P """ from __future__ import division, print_function, absolute_import import re import math import collections import hypothesis.internal.reflection as reflection from hypothesis import settings as Settings from hypothesis.errors import UnsatisfiedAssumption from hypothesis.strategies import just, sets, text, lists, floats, \ one_of, tuples, booleans, integers, sampled_from from hypothesis.internal.conjecture.data import Status from hypothesis.internal.conjecture.engine import \ ConjectureRunner as ConConjectureRunner RUNS = 100 REQUIRED_RUNS = 50 INITIAL_LAMBDA = re.compile(u'^lambda[^:]*:\\s*') def strip_lambda(s): return INITIAL_LAMBDA.sub(u'', s) class HypothesisFalsified(AssertionError): pass def define_test(specifier, predicate, condition=None): def run_test(): if condition is None: def _condition(x): return True condition_string = u'' else: _condition = condition condition_string = strip_lambda( reflection.get_pretty_function_description(condition)) def test_function(data): try: value = data.draw(specifier) except UnsatisfiedAssumption: data.mark_invalid() if not _condition(value): data.mark_invalid() if predicate(value): data.mark_interesting() successes = 0 for _ in range(RUNS): runner = ConConjectureRunner( test_function, settings=Settings( max_examples=100, max_iterations=1000, max_shrinks=0 )) runner.run() if runner.last_data.status == Status.INTERESTING: successes += 1 if successes >= REQUIRED_RUNS: return event = reflection.get_pretty_function_description(predicate) if condition is not None: event += 
'|' event += condition_string description = ( u'P(%s) ~ %d / %d = %.2f < %.2f' ) % ( event, successes, RUNS, successes / RUNS, (REQUIRED_RUNS / RUNS) ) raise HypothesisFalsified(description + u' rejected') return run_test test_can_produce_zero = define_test(integers(), lambda x: x == 0) test_can_produce_large_magnitude_integers = define_test( integers(), lambda x: abs(x) > 1000 ) test_can_produce_large_positive_integers = define_test( integers(), lambda x: x > 1000 ) test_can_produce_large_negative_integers = define_test( integers(), lambda x: x < -1000 ) def long_list(xs): return len(xs) >= 20 test_can_produce_unstripped_strings = define_test( text(), lambda x: x != x.strip() ) test_can_produce_stripped_strings = define_test( text(), lambda x: x == x.strip() ) test_can_produce_multi_line_strings = define_test( text(average_size=25.0), lambda x: u'\n' in x ) test_can_produce_ascii_strings = define_test( text(), lambda x: all(ord(c) <= 127 for c in x), ) test_can_produce_long_strings_with_no_ascii = define_test( text(min_size=5), lambda x: all(ord(c) > 127 for c in x), ) test_can_produce_short_strings_with_some_non_ascii = define_test( text(), lambda x: any(ord(c) > 127 for c in x), condition=lambda x: len(x) <= 3 ) test_can_produce_positive_infinity = define_test( floats(), lambda x: x == float(u'inf') ) test_can_produce_negative_infinity = define_test( floats(), lambda x: x == float(u'-inf') ) test_can_produce_nan = define_test( floats(), math.isnan ) test_can_produce_floats_near_left = define_test( floats(0, 1), lambda t: t < 0.2 ) test_can_produce_floats_near_right = define_test( floats(0, 1), lambda t: t > 0.8 ) test_can_produce_floats_in_middle = define_test( floats(0, 1), lambda t: 0.2 <= t <= 0.8 ) test_can_produce_long_lists = define_test( lists(integers(), average_size=25.0), long_list ) test_can_produce_short_lists = define_test( lists(integers()), lambda x: len(x) <= 10 ) test_can_produce_the_same_int_twice = define_test( tuples(lists(integers(), 
average_size=25.0), integers()), lambda t: t[0].count(t[1]) > 1 ) def distorted_value(x): c = collections.Counter(x) return min(c.values()) * 3 <= max(c.values()) def distorted(x): return distorted_value(map(type, x)) test_sampled_from_large_number_can_mix = define_test( lists(sampled_from(range(50)), min_size=50), lambda x: len(set(x)) >= 25, ) test_sampled_from_often_distorted = define_test( lists(sampled_from(range(5))), distorted_value, condition=lambda x: len(x) >= 3, ) test_non_empty_subset_of_two_is_usually_large = define_test( sets(sampled_from((1, 2))), lambda t: len(t) == 2 ) test_subset_of_ten_is_sometimes_empty = define_test( sets(integers(1, 10)), lambda t: len(t) == 0 ) test_mostly_sensible_floats = define_test( floats(), lambda t: t + 1 > t ) test_mostly_largish_floats = define_test( floats(), lambda t: t + 1 > 1, condition=lambda x: x > 0, ) test_ints_can_occasionally_be_really_large = define_test( integers(), lambda t: t >= 2 ** 63 ) test_mixing_is_sometimes_distorted = define_test( lists(booleans() | tuples(), average_size=25.0), distorted, condition=lambda x: len(set(map(type, x))) == 2, ) test_mixes_2_reasonably_often = define_test( lists(booleans() | tuples(), average_size=25.0), lambda x: len(set(map(type, x))) > 1, condition=bool, ) test_partial_mixes_3_reasonably_often = define_test( lists(booleans() | tuples() | just(u'hi'), average_size=25.0), lambda x: 1 < len(set(map(type, x))) < 3, condition=bool, ) test_mixes_not_too_often = define_test( lists(booleans() | tuples(), average_size=25.0), lambda x: len(set(map(type, x))) == 1, condition=bool, ) test_integers_are_usually_non_zero = define_test( integers(), lambda x: x != 0 ) test_integers_are_sometimes_zero = define_test( integers(), lambda x: x == 0 ) test_integers_are_often_small = define_test( integers(), lambda x: abs(x) <= 100 ) # This series of tests checks that the one_of() strategy flattens branches # correctly. 
We assert that the probability of any branch is >= 0.1, # approximately (1/8 = 0.125), regardless of how heavily nested it is in the # strategy. # This first strategy chooses an integer between 0 and 7 (inclusive). one_of_nested_strategy = one_of( just(0), one_of( just(1), just(2), one_of( just(3), just(4), one_of( just(5), just(6), just(7) ) ) ) ) for i in range(8): exec('''test_one_of_flattens_branches_%d = define_test( one_of_nested_strategy, lambda x: x == %d )''' % (i, i)) xor_nested_strategy = ( just(0) | ( just(1) | just(2) | ( just(3) | just(4) | ( just(5) | just(6) | just(7) ) ) ) ) for i in range(8): exec('''test_xor_flattens_branches_%d = define_test( xor_nested_strategy, lambda x: x == %d )''' % (i, i)) # This strategy tests interactions with `map()`. They generate integers # from the set {1, 4, 6, 16, 20, 24, 28, 32}. def double(x): return x * 2 one_of_nested_strategy_with_map = one_of( just(1), one_of( (just(2) | just(3)).map(double), one_of( (just(4) | just(5)).map(double), one_of( (just(6) | just(7) | just(8)).map(double) ) ).map(double) ) ) for i in (1, 4, 6, 16, 20, 24, 28, 32): exec('''test_one_of_flattens_map_branches_%d = define_test( one_of_nested_strategy_with_map, lambda x: x == %d )''' % (i, i)) # This strategy tests interactions with `flatmap()`. It generates lists # of length 0-7 (inclusive) in which every element is `None`. 
one_of_nested_strategy_with_flatmap = just(None).flatmap(
    lambda x: one_of(
        just([x] * 0),
        just([x] * 1),
        one_of(
            just([x] * 2),
            just([x] * 3),
            one_of(
                just([x] * 4),
                just([x] * 5),
                one_of(
                    just([x] * 6),
                    just([x] * 7),
                )
            )
        )
    )
)

for i in range(8):
    exec('''test_one_of_flattens_flatmap_branches_%d = define_test(
        one_of_nested_strategy_with_flatmap, lambda x: len(x) == %d
    )''' % (i, i))

xor_nested_strategy_with_flatmap = just(None).flatmap(
    lambda x: (
        just([x] * 0) | just([x] * 1) | (
            just([x] * 2) | just([x] * 3) | (
                just([x] * 4) | just([x] * 5) | (
                    just([x] * 6) | just([x] * 7)
                )
            )
        )
    )
)

for i in range(8):
    exec('''test_xor_flattens_flatmap_branches_%d = define_test(
        xor_nested_strategy_with_flatmap, lambda x: len(x) == %d
    )''' % (i, i))

# This strategy tests interactions with `filter()`. It generates the even
# integers {0, 2, 4, 6} in equal measure.

one_of_nested_strategy_with_filter = one_of(
    just(0),
    just(1),
    one_of(
        just(2),
        just(3),
        one_of(
            just(4),
            just(5),
            one_of(
                just(6),
                just(7),
            )
        )
    )
).filter(lambda x: x % 2 == 0)

for i in range(4):
    exec('''test_one_of_flattens_filter_branches_%d = define_test(
        one_of_nested_strategy_with_filter, lambda x: x == 2 * %d
    )''' % (i, i))

hypothesis-python-3.44.1/tests/quality/test_shrink_quality.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import sys
from fractions import Fraction
from functools import reduce

import pytest
from flaky import flaky

from hypothesis import find, assume, settings
from tests.common import parametrize
from tests.common.debug import minimal
from hypothesis.strategies import just, sets, text, lists, tuples, \
    booleans, integers, fractions, frozensets, dictionaries, \
    sampled_from
from hypothesis.internal.compat import PY3, OrderedDict, hrange


def test_integers_from_minimizes_leftwards():
    assert minimal(integers(min_value=101)) == 101


def test_minimal_fractions_1():
    assert minimal(fractions()) == Fraction(0)


def test_minimal_fractions_2():
    assert minimal(fractions(), lambda x: x >= 1) == Fraction(1)


def test_minimal_fractions_3():
    assert minimal(
        lists(fractions()), lambda s: len(s) >= 20) == [Fraction(0)] * 20


def test_minimize_string_to_empty():
    assert minimal(text()) == u''


def test_minimize_one_of():
    for _ in hrange(100):
        assert minimal(integers() | text() | booleans()) in (
            0, u'', False
        )


def test_minimize_mixed_list():
    mixed = minimal(lists(integers() | text()), lambda x: len(x) >= 10)
    assert set(mixed).issubset(set((0, u'')))


def test_minimize_longer_string():
    assert minimal(text(), lambda x: len(x) >= 10) == u'0' * 10


def test_minimize_longer_list_of_strings():
    assert minimal(lists(text()), lambda x: len(x) >= 10) == [u''] * 10


def test_minimize_3_set():
    assert minimal(sets(integers()), lambda x: len(x) >= 3) in (
        set((0, 1, 2)),
        set((-1, 0, 1)),
    )


def test_minimize_3_set_of_tuples():
    assert minimal(
        sets(tuples(integers())),
        lambda x: len(x) >= 2) == set(((0,), (1,)))


def test_minimize_sets_of_sets():
    elements = integers(1, 100)
    size = 10
    set_of_sets = minimal(
        sets(frozensets(elements)), lambda s: len(s) >= size
    )
    assert frozenset() in set_of_sets
    assert len(set_of_sets) == size
    for s in set_of_sets:
        if len(s) > 1:
            assert any(
                s != t and t.issubset(s)
                for t in set_of_sets
            )


def test_can_simplify_flatmap_with_bounded_left_hand_size():
    assert minimal(
        booleans().flatmap(lambda x: lists(just(x))),
        lambda x: len(x) >= 10) == [False] * 10


def test_can_simplify_across_flatmap_of_just():
    assert minimal(integers().flatmap(just)) == 0


def test_can_simplify_on_right_hand_strategy_of_flatmap():
    assert minimal(integers().flatmap(lambda x: lists(just(x)))) == []


@flaky(min_passes=5, max_runs=5)
def test_can_ignore_left_hand_side_of_flatmap():
    assert minimal(
        integers().flatmap(lambda x: lists(integers())),
        lambda x: len(x) >= 10
    ) == [0] * 10


def test_can_simplify_on_both_sides_of_flatmap():
    assert minimal(
        integers().flatmap(lambda x: lists(just(x))),
        lambda x: len(x) >= 10
    ) == [0] * 10


def test_flatmap_rectangles():
    lengths = integers(min_value=0, max_value=10)

    def lists_of_length(n):
        return lists(sampled_from('ab'), min_size=n, max_size=n)

    xs = find(lengths.flatmap(
        lambda w: lists(lists_of_length(w))),
        lambda x: ['a', 'b'] in x,
        settings=settings(database=None, max_examples=2000)
    )
    assert xs == [['a', 'b']]


@flaky(min_passes=5, max_runs=5)
@parametrize(u'dict_class', [dict, OrderedDict])
def test_dictionary(dict_class):
    assert minimal(dictionaries(
        keys=integers(), values=text(),
        dict_class=dict_class)) == dict_class()

    x = minimal(
        dictionaries(keys=integers(), values=text(), dict_class=dict_class),
        lambda t: len(t) >= 3)
    assert isinstance(x, dict_class)
    assert set(x.values()) == set((u'',))
    for k in x:
        if k < 0:
            assert k + 1 in x
        if k > 0:
            assert k - 1 in x


def test_minimize_single_element_in_silly_large_int_range():
    ir = integers(-(2 ** 256), 2 ** 256)
    assert minimal(ir, lambda x: x >= -(2 ** 255)) == 0


def test_minimize_multiple_elements_in_silly_large_int_range():
    desired_result = [0] * 20
    ir = integers(-(2 ** 256), 2 ** 256)
    x = minimal(
        lists(ir),
        lambda x: len(x) >= 20,
        timeout_after=20,
    )
    assert x == desired_result


def test_minimize_multiple_elements_in_silly_large_int_range_min_is_not_dupe():
    ir = integers(0, 2 ** 256)
    target = list(range(20))

    x = minimal(
        lists(ir),
        lambda x: (
            assume(len(x) >= 20) and all(x[i] >= target[i] for i in target)),
        timeout_after=60,
    )
    assert x == target


@pytest.mark.skipif(PY3, reason=u'Python 3 has better integers')
def test_minimize_long():
    assert minimal(
        integers(), lambda x: type(x).__name__ == u'long') == sys.maxint + 1


def test_find_large_union_list():
    def large_mostly_non_overlapping(xs):
        union = reduce(set.union, xs)
        return len(union) >= 30

    result = minimal(
        lists(sets(integers(), min_size=1), min_size=1),
        large_mostly_non_overlapping, timeout_after=120)
    assert len(result) == 1
    union = reduce(set.union, result)
    assert len(union) == 30
    assert max(union) == min(union) + len(union) - 1


@pytest.mark.parametrize('n', [0, 1, 10, 100, 1000])
def test_containment(n):
    iv = minimal(
        tuples(lists(integers()), integers()),
        lambda x: x[1] in x[0] and x[1] >= n,
        timeout_after=60
    )
    assert iv == ([n], n)


def test_duplicate_containment():
    ls, i = minimal(
        tuples(lists(integers()), integers()),
        lambda s: s[0].count(s[1]) > 1,
        timeout_after=100)
    assert ls == [0, 0]
    assert i == 0

hypothesis-python-3.44.1/tox.ini

[tox]
envlist = py{27,34,35,36,py}-{brief,prettyquick,full,custom,benchmark}
toxworkdir={env:TOX_WORK_DIR:.tox}
passenv=
    HOME
    LC_ALL
    COVERAGE_FILE
setenv=
    PYTHONWARNINGS={env:PYTHONWARNINGS:error::DeprecationWarning,error::FutureWarning}

[testenv]
deps =
    -rrequirements/test.txt
    benchmark: pytest-benchmark==3.0.0
whitelist_externals=
    bash
setenv=
    brief: HYPOTHESIS_PROFILE=speedy
passenv=
    HOME
    TOXENV
commands =
    full: bash scripts/basic-test.sh
    brief: python -m pytest tests/cover/test_testdecorators.py {posargs}
    prettyquick: python -m pytest tests/cover/
    custom: python -m pytest {posargs}
    benchmark: python -m pytest benchmarks

[testenv:quality]
deps=
    -rrequirements/test.txt
commands=
    python -m pytest tests/quality/

[testenv:oldpy27]
basepython=python2.7.3
deps=
    -rrequirements/test.txt
commands=
    python -m pytest tests/cover/

[testenv:py27typing]
basepython=python2.7
deps=
    -rrequirements/test.txt
    -rrequirements/typing.txt
commands=
    python -m pytest tests/cover/

[testenv:unicode]
basepython=python2.7
deps =
    unicode-nazi
setenv=
    UNICODENAZI=true
    PYTHONPATH=.
commands=
    python scripts/unicodechecker.py

[testenv:faker070]
deps =
    -rrequirements/test.txt
commands =
    pip install --no-binary :all: Faker==0.7.0
    python -m pytest tests/fakefactory

[testenv:faker-latest]
deps =
    -rrequirements/test.txt
commands =
    pip install --no-binary :all: Faker
    python -m pytest tests/fakefactory

[testenv:pandas18]
deps =
    -rrequirements/test.txt
    pandas~=0.18.1
commands =
    python -m pytest tests/pandas

[testenv:pandas19]
deps =
    -rrequirements/test.txt
    pandas~=0.19.2
commands =
    python -m pytest tests/pandas

[testenv:pandas20]
deps =
    -rrequirements/test.txt
    pandas~=0.20.3
commands =
    python -m pytest tests/pandas

[testenv:pandas21]
deps =
    -rrequirements/test.txt
    pandas~=0.21.0
commands =
    python -m pytest tests/pandas

[testenv:django18]
setenv=
    PYTHONWARNINGS={env:PYTHONWARNINGS:}
commands =
    pip install .[pytz]
    pip install django~=1.8.18
    python -m tests.django.manage test tests.django

[testenv:django110]
setenv=
    PYTHONWARNINGS={env:PYTHONWARNINGS:}
commands =
    pip install .[pytz]
    pip install --no-binary :all: .[fakefactory]
    pip install django~=1.10.8
    python -m tests.django.manage test tests.django

[testenv:django111]
commands =
    pip install .[pytz]
    pip install --no-binary :all: .[fakefactory]
    pip install django~=1.11.7
    python -m tests.django.manage test tests.django

[testenv:nose]
deps =
    nose
commands=
    nosetests tests/cover/test_testdecorators.py

[testenv:pytest30]
deps =
    -rrequirements/test.txt
commands=
    python -m pytest tests/pytest tests/cover/test_testdecorators.py

[testenv:pytest28]
deps =
    -rrequirements/test.txt
commands=
    python -m pytest tests/pytest tests/cover/test_testdecorators.py

[testenv:docs]
deps = sphinx
commands=sphinx-build -W -b html -d docs/_build/doctrees docs docs/_build/html

[testenv:coverage]
deps =
    -rrequirements/test.txt
    -rrequirements/coverage.txt
setenv=
    HYPOTHESIS_INTERNAL_COVERAGE=true
commands =
    python -m coverage --version
    python -m coverage debug sys
    python -m coverage run --rcfile=.coveragerc -m pytest -n0 --strict tests/cover tests/datetime tests/py3 tests/numpy tests/pandas --maxfail=1 --ff {posargs}
    python -m coverage report -m --fail-under=100 --show-missing
    python scripts/validate_branch_check.py

[testenv:pure-tracer]
deps =
    -rrequirements/test.txt
setenv=
    HYPOTHESIS_FORCE_PURE_TRACER=true
commands =
    python -m pytest tests/cover tests/nocover/test_coverage.py -n 0 {posargs}

[testenv:pypy-with-tracer]
setenv=
    HYPOTHESIS_PROFILE=with_coverage
basepython=pypy
deps =
    -rrequirements/test.txt
commands =
    python -m pytest tests/cover/test_testdecorators.py tests/nocover/test_coverage.py -n 0 {posargs}

[testenv:examples3]
setenv=
    HYPOTHESIS_STRICT_MODE=true
deps=
    -rrequirements/test.txt
commands=
    python -m pytest examples

[testenv:examples2]
setenv=
    HYPOTHESIS_STRICT_MODE=true
basepython=python2.7
deps=
    -rrequirements/test.txt
commands=
    python -m pytest examples

[pytest]
addopts=--strict --tb=short -vv -p pytester --runpytest=subprocess --durations=20 -n 2