pax_global_header: comment=d98af95b322d2f7a60d03a87508f0e47420e71ac
hypothesis-python-3.44.1/.coveragerc

[run]
branch = True
include =
    **/.tox/*/lib/*/site-packages/hypothesis/*.py
    **/.tox/*/lib/*/site-packages/hypothesis/**/*.py
omit =
    **/pytestplugin.py
    **/strategytests.py
    **/compat*.py
    **/extra/__init__.py
    **/.tox/*/lib/*/site-packages/hypothesis/internal/coverage.py

[report]
exclude_lines =
    @abc.abstractmethod
    @abc.abstractproperty
    NotImplementedError
    pragma: no cover
    __repr__
    __ne__
    __copy__
    __deepcopy__
hypothesis-python-3.44.1/.gitattributes

* text eol=lf

hypothesis-python-3.44.1/.gitignore

*.swo
*.swp
*.pyc
venv*
.cache
.hypothesis
docs/_build
*.egg-info
_build
.tox
.coverage
.runtimes
.idea
.vagrant
.DS_Store
deploy_key
.pypirc
secrets.tar
htmlcov
hypothesis-python-3.44.1/.isort.cfg

[settings]
known_third_party = attr, click, django, faker, flaky, numpy, pytz, scipy
hypothesis-python-3.44.1/.pyup.yml

requirements:
  - requirements/tools.txt:
      updates: all
      pin: True
  - requirements/test.txt:
      updates: all
      pin: True
  - requirements/benchmark.txt:
      updates: all
      pin: True
  - requirements/coverage.txt:
      updates: all
      pin: True

schedule: "every week on monday"
hypothesis-python-3.44.1/.travis.yml

language: c
sudo: false

os:
  - linux

branches:
  only:
    - "master"

cache:
  apt: true
  directories:
    - $HOME/.runtimes
    - $HOME/.venv
    - $HOME/.cache/pip
    - $HOME/wheelhouse
    - $HOME/.stack
    - $HOME/.local

env:
  global:
    - PYTHONDONTWRITEBYTECODE=x
    - BUILD_RUNTIMES=$HOME/.runtimes
    - FORMAT_ALL=true
  matrix:
    # Core tests that we want to run first.
    - TASK=check-pyup-yml
    - TASK=check-release-file
    - TASK=check-shellcheck
    - TASK=documentation
    - TASK=lint
    - TASK=doctest
    - TASK=check-rst
    - TASK=check-format
    - TASK=check-benchmark
    - TASK=check-coverage
    - TASK=check-requirements
    - TASK=check-pypy
    - TASK=check-py27
    - TASK=check-py36
    - TASK=check-quality
    # Less important tests that will probably
    # pass whenever the above do but are still
    # worth testing.
    - TASK=check-unicode
    - TASK=check-ancient-pip
    - TASK=check-pure-tracer
    - TASK=check-py273
    - TASK=check-py27-typing
    - TASK=check-py34
    - TASK=check-py35
    - TASK=check-nose
    - TASK=check-pytest28
    - TASK=check-faker070
    - TASK=check-faker-latest
    - TASK=check-django18
    - TASK=check-django110
    - TASK=check-django111
    - TASK=check-pandas19
    - TASK=check-pandas20
    - TASK=check-pandas21
    - TASK=deploy

script:
  - python scripts/run_travis_make_task.py

matrix:
  fast_finish: true

notifications:
  email:
    recipients:
      - david@drmaciver.com
    on_success: never
    on_failure: change

addons:
  apt:
    packages:
      - libgmp-dev
hypothesis-python-3.44.1/CITATION

Please use one of the following samples to cite the Hypothesis version (change
x.y) from this installation.

Text:

[Hypothesis] Hypothesis x.y, 2016
David R. MacIver, https://github.com/HypothesisWorks/hypothesis-python

BibTeX:

@misc{Hypothesisx.y,
  title = {{H}ypothesis x.y},
  author = {David R. MacIver},
  year = {2016},
  howpublished = {\href{https://github.com/HypothesisWorks/hypothesis-python}{\texttt{https://github.com/HypothesisWorks/hypothesis-python}}},
}

If you are unsure about which version of Hypothesis you are using, run:
`pip show hypothesis`.
hypothesis-python-3.44.1/CONTRIBUTING.rst

=============
Contributing
=============
First off: It's great that you want to contribute to Hypothesis! Thanks!
------------------
Ways to Contribute
------------------
Hypothesis is a mature yet active project. This means that there are many
ways in which you can contribute.
For example, it's super useful and highly appreciated if you do any of:
* Submit bug reports
* Submit feature requests
* Write about Hypothesis
* Give a talk about Hypothesis
* Build libraries and tools on top of Hypothesis outside the main repo
* Submit PRs
If you build a Hypothesis strategy that you would like to be more widely known
please add it to the list of external strategies by preparing a PR against
the docs/strategies.rst file.
If you find an error in the documentation, please feel free to submit a PR that
fixes the error. Spot a typo? Fix it up and send us a PR!
You can read more about how we document Hypothesis in ``guides/documentation.rst``.
The process for submitting source code PRs is generally more involved
(don't worry, we'll help you through it), so do read the rest of this document
first.
-----------------------
Copyright and Licensing
-----------------------
It's important to make sure that you own the rights to the work you are submitting.
If it is done on work time, or you have a particularly onerous contract, make sure
you've checked with your employer.
All work in Hypothesis is licensed under the terms of the
`Mozilla Public License, version 2.0 <http://mozilla.org/MPL/2.0/>`_. By
submitting a contribution you are agreeing to licence your work under those
terms.
Finally, if it is not there already, add your name (and a link to your GitHub
and email address if you want) to the list of contributors found at
the end of this document, in alphabetical order. It doesn't have to be your
"real" name (whatever that means); any sort of public identifier
is fine. In particular, a GitHub account is sufficient.
-----------------------
The actual contribution
-----------------------
OK, so you want to make a contribution and have sorted out the legalese. What now?
First off: If you're planning on implementing a new feature, talk to us
first! Come join us on IRC, or open an issue.
If it's really small, feel free to open a work-in-progress pull request sketching
out the idea, but it's best to get feedback from the Hypothesis maintainers
before sinking a bunch of work into it.
In general work-in-progress pull requests are totally welcome if you want early feedback
or help with some of the tricky details. Don't be afraid to ask for help.
In order to get merged, a pull request will have to have a green build (naturally) and
to be approved by a Hypothesis maintainer (and, depending on what it is, possibly specifically
by DRMacIver).
The review process is the same one that all changes to Hypothesis go through, regardless of
whether you're an established maintainer or entirely new to the project. It's very much
intended to be a collaborative one: It's not us telling you what we think is wrong with
your code, it's us working with you to produce something better together.
We have a lengthy check list of things we look for in a review. Feel
free to have a read of it in advance and go through it yourself if you'd like to. It's not
required, but it might speed up the process.
Once your pull request has a green build and has passed review, it will be merged to
master fairly promptly. This will immediately trigger a release! Don't be scared. If that
breaks things, that's our fault not yours - the whole point of this process is to ensure
that problems get caught before we merge rather than after.
~~~~~~~~~~~~~~~~
The Release File
~~~~~~~~~~~~~~~~
All changes to Hypothesis get released automatically when they are merged to
master.
In order to update the version and change log entry, you have to create a
release file. This is a normal restructured text file called RELEASE.rst that
lives in the root of the repository and will be used as the change log entry.
It should start with one of the following lines:

* RELEASE_TYPE: major
* RELEASE_TYPE: minor
* RELEASE_TYPE: patch
This specifies the component of the version number that should be updated, with
the meaning of each component following `semver <http://semver.org/>`_. As a
rule of thumb if it's a bug fix it's probably a patch version update, if it's
a new feature it's definitely a minor version update, and you probably
shouldn't ever need to use a major version update unless you're part of the
core team and we've discussed it a lot.
This line will be removed from the final change log entry.
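Putting this together, a hypothetical ``RELEASE.rst`` for a bug-fix release might look like the fragment below (the description text is invented for illustration; only the ``RELEASE_TYPE`` line has a fixed format):

.. code-block:: rst

    RELEASE_TYPE: patch

    This release fixes a bug where some strategies could generate
    invalid examples under rare conditions.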
~~~~~~~~~
The build
~~~~~~~~~
The build is orchestrated by a giant Makefile which handles installation of the relevant pythons.
Actually running the tests is managed by `tox <https://tox.readthedocs.io/en/latest/>`_, but the Makefile
will call out to the relevant tox environments so you mostly don't have to know anything about that
unless you want to make changes to the test config. You also mostly don't need to know anything about make
except to type ``make`` followed by the name of the task you want to run.
All of it will be checked on CI so you don't *have* to run anything locally, but you might
find it useful to do so: A full Travis run takes about twenty minutes, and there's often a queue,
so running a smaller set of tests locally can be helpful.
The Makefile should be "fairly" portable, but is currently only known to work on Linux or OS X. It *might* work
on a BSD or on Windows with Cygwin installed, but it hasn't been tried. If you try it and find it doesn't
work, please do submit patches to fix that.
Some notable commands:
``make format`` will reformat your code according to the Hypothesis coding style. Ideally you should use this
before each commit, but you only really have to use it when you want your code to be ready to merge.

You can also use ``make check-format``, which will run format and some linting and will then error if you have a
git diff. Note: This will error even if you started with a git diff, so if you've got any uncommitted changes
this will necessarily report an error.

``make check`` will run check-format and all of the tests. Warning: This will take a *very* long time. On Travis the
build currently takes more than an hour of total time (it runs in parallel on Travis so you don't have to wait
quite that long). If you've got a multi-core machine you can run ``make -j 2`` (or any higher number if you want
more) to run 2 jobs in parallel, but to be honest you're probably better off letting Travis run this step.
You can also run a number of finer grained make tasks - check ``.travis.yml`` for a short list and
the Makefile for details.
Note: The build requires a lot of different versions of python, so rather than have you install them yourself,
the makefile will install them itself in a local directory. This means that the first time you run a task you
may have to wait a while as the build downloads and installs the right version of python for you.
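As a concrete illustration, a typical local workflow before pushing might look like the following (task names are taken from the Makefile above; the first run will be slow while the build installs its Pythons):

.. code-block:: bash

    make format        # reformat code in place
    make check-format  # formatting and lint checks; fails on any git diff
    make check-py36    # run the full test suite against one Python version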
--------------------
List of Contributors
--------------------
The primary author for most of Hypothesis is David R. MacIver (me). However, the following
people have also contributed work. As well as my thanks, they also have copyright over
their individual contributions.

* Adam Johnson
* Adam Sven Johnson
* Alex Stapleton
* Alex Willmer (alex@moreati.org.uk)
* Ben Peterson (killthrush@hotmail.com)
* Charles O'Farrell
* Charlie Tanksley
* Chris Down
* Christopher Martin (ch.martin@gmail.com)
* Cory Benfield
* Cristi Cobzarenco (cristi@reinfer.io)
* David Bonner (dbonner@gmail.com)
* Derek Gustafson
* Dion Misic (dion.misic@gmail.com)
* Florian Bruhin
* follower
* Jeremy Thurgood
* JP Viljoen (froztbyte@froztbyte.net)
* Jonty Wareing (jonty@jonty.co.uk)
* jwg4
* kbara
* Lee Begg
* marekventur
* Marius Gedminas (marius@gedmin.as)
* Markus Unterwaditzer (markus@unterwaditzer.net)
* Matt Bachmann (bachmann.matt@gmail.com)
* Max Nordlund (max.nordlund@gmail.com)
* Maxim Kulkin (maxim.kulkin@gmail.com)
* mulkieran
* Nicholas Chammas
* Peadar Coyle (peadarcoyle@gmail.com)
* Richard Boulton (richard@tartarus.org)
* Sam Hames
* Saul Shanabrook (s.shanabrook@gmail.com)
* Tariq Khokhar (tariq@khokhar.net)
* Will Hall (wrsh07@gmail.com)
* Will Thompson (will@willthompson.co.uk)
* Zac Hatfield-Dodds (zac.hatfield.dodds@gmail.com)
hypothesis-python-3.44.1/LICENSE.txt

Copyright (c) 2013, David R. MacIver
All code in this repository except where explicitly noted otherwise is released
under the Mozilla Public License v 2.0. You can obtain a copy at http://mozilla.org/MPL/2.0/.
Some code in this repository comes from other projects. Where applicable, the
original copyright and license are noted and any modifications made are released
dual licensed with the original license.
hypothesis-python-3.44.1/Makefile

.PHONY: clean documentation
DEVELOPMENT_DATABASE?=postgres://whereshouldilive@localhost/whereshouldilive_dev
SPHINXBUILD = $(DEV_PYTHON) -m sphinx
SPHINX_BUILDDIR = docs/_build
ALLSPHINXOPTS = -d $(SPHINX_BUILDDIR)/doctrees docs -W
export BUILD_RUNTIMES?=$(HOME)/.cache/hypothesis-build-runtimes
export TOX_WORK_DIR=$(BUILD_RUNTIMES)/.tox
export COVERAGE_FILE=$(BUILD_RUNTIMES)/.coverage
PY27=$(BUILD_RUNTIMES)/snakepit/python2.7
PY273=$(BUILD_RUNTIMES)/snakepit/python2.7.3
PY34=$(BUILD_RUNTIMES)/snakepit/python3.4
PY35=$(BUILD_RUNTIMES)/snakepit/python3.5
PY36=$(BUILD_RUNTIMES)/snakepit/python3.6
PYPY=$(BUILD_RUNTIMES)/snakepit/pypy
BEST_PY3=$(PY36)
TOOLS=$(BUILD_RUNTIMES)/tools
TOX=$(TOOLS)/tox
SPHINX_BUILD=$(TOOLS)/sphinx-build
ISORT=$(TOOLS)/isort
FLAKE8=$(TOOLS)/flake8
PYFORMAT=$(TOOLS)/pyformat
RSTLINT=$(TOOLS)/rst-lint
PIPCOMPILE=$(TOOLS)/pip-compile
TOOL_VIRTUALENV:=$(BUILD_RUNTIMES)/virtualenvs/tools-$(shell scripts/tool-hash.py tools)
TOOL_PYTHON=$(TOOL_VIRTUALENV)/bin/python
TOOL_PIP=$(TOOL_VIRTUALENV)/bin/pip
BENCHMARK_VIRTUALENV:=$(BUILD_RUNTIMES)/virtualenvs/benchmark-$(shell scripts/tool-hash.py benchmark)
BENCHMARK_PYTHON=$(BENCHMARK_VIRTUALENV)/bin/python
FILES_TO_FORMAT=$(BEST_PY3) scripts/files-to-format.py
export PATH:=$(BUILD_RUNTIMES)/snakepit:$(TOOLS):$(PATH)
export LC_ALL=en_US.UTF-8
$(PY27):
	scripts/retry.sh scripts/install.sh 2.7

$(PY273):
	scripts/retry.sh scripts/install.sh 2.7.3

$(PY34):
	scripts/retry.sh scripts/install.sh 3.4

$(PY35):
	scripts/retry.sh scripts/install.sh 3.5

$(PY36):
	scripts/retry.sh scripts/install.sh 3.6

$(PYPY):
	scripts/retry.sh scripts/install.sh pypy

$(TOOL_VIRTUALENV): $(BEST_PY3)
	$(BEST_PY3) -m virtualenv $(TOOL_VIRTUALENV)
	$(TOOL_PIP) install -r requirements/tools.txt

$(BENCHMARK_VIRTUALENV): $(BEST_PY3)
	rm -rf $(BUILD_RUNTIMES)/virtualenvs/benchmark-*
	$(BEST_PY3) -m virtualenv $(BENCHMARK_VIRTUALENV)
	$(BENCHMARK_PYTHON) -m pip install -r requirements/benchmark.txt

$(TOOLS): $(TOOL_VIRTUALENV)
	mkdir -p $(TOOLS)

install-tools: $(TOOLS)

format: $(PYFORMAT) $(ISORT)
	$(FILES_TO_FORMAT) | xargs $(TOOL_PYTHON) scripts/enforce_header.py
	# isort will sort packages differently depending on whether they're installed
	$(FILES_TO_FORMAT) | xargs env -i PATH="$(PATH)" $(ISORT) -p hypothesis -ls -m 2 -w 75 \
		-a "from __future__ import absolute_import, print_function, division" \
		-rc src tests examples
	$(FILES_TO_FORMAT) | xargs $(PYFORMAT) -i
lint: $(FLAKE8)
	$(FLAKE8) src tests --exclude=compat.py,test_reflection.py,test_imports.py,tests/py2,test_lambda_formatting.py

check-pyup-yml: $(TOOL_VIRTUALENV)
	$(TOOL_PYTHON) scripts/validate_pyup.py

check-release-file: $(BEST_PY3)
	$(BEST_PY3) scripts/check-release-file.py

deploy: $(TOOL_VIRTUALENV)
	$(TOOL_PYTHON) scripts/deploy.py

check-format: format
	find src tests -name "*.py" | xargs $(TOOL_PYTHON) scripts/check_encoding_header.py
	git diff --exit-code

install-core: $(PY27) $(PYPY) $(BEST_PY3) $(TOX)
STACK=$(HOME)/.local/bin/stack
GHC=$(HOME)/.local/bin/ghc
SHELLCHECK=$(HOME)/.local/bin/shellcheck
$(STACK):
	mkdir -p ~/.local/bin
	curl -L https://www.stackage.org/stack/linux-x86_64 | tar xz --wildcards --strip-components=1 -C $(HOME)/.local/bin '*/stack'

$(GHC): $(STACK)
	$(STACK) setup

$(SHELLCHECK): $(GHC)
	$(STACK) install ShellCheck

check-shellcheck: $(SHELLCHECK)
	shellcheck scripts/*.sh
check-py27: $(PY27) $(TOX)
	$(TOX) --recreate -e py27-full

check-py273: $(PY273) $(TOX)
	$(TOX) --recreate -e oldpy27

check-py27-typing: $(PY27) $(TOX)
	$(TOX) --recreate -e py27typing

check-py34: $(PY34) $(TOX)
	$(TOX) --recreate -e py34-full

check-py35: $(PY35) $(TOX)
	$(TOX) --recreate -e py35-full

check-py36: $(BEST_PY3) $(TOX)
	$(TOX) --recreate -e py36-full

check-pypy: $(PYPY) $(TOX)
	$(TOX) --recreate -e pypy-full

check-pypy-with-tracer: $(PYPY) $(TOX)
	$(TOX) --recreate -e pypy-with-tracer

check-nose: $(TOX)
	$(TOX) --recreate -e nose

check-pytest30: $(TOX)
	$(TOX) --recreate -e pytest30

check-pytest28: $(TOX)
	$(TOX) --recreate -e pytest28

check-quality: $(TOX)
	$(TOX) --recreate -e quality

check-ancient-pip: $(PY273)
	scripts/check-ancient-pip.sh $(PY273)

check-pytest: check-pytest28 check-pytest30

check-faker070: $(TOX)
	$(TOX) --recreate -e faker070

check-faker-latest: $(TOX)
	$(TOX) --recreate -e faker-latest

check-django18: $(TOX)
	$(TOX) --recreate -e django18

check-django110: $(TOX)
	$(TOX) --recreate -e django110

check-django111: $(TOX)
	$(TOX) --recreate -e django111

check-django: check-django18 check-django110 check-django111

check-pandas18: $(TOX)
	$(TOX) --recreate -e pandas18

check-pandas19: $(TOX)
	$(TOX) --recreate -e pandas19

check-pandas20: $(TOX)
	$(TOX) --recreate -e pandas20

check-pandas21: $(TOX)
	$(TOX) --recreate -e pandas21

check-examples2: $(TOX) $(PY27)
	$(TOX) --recreate -e examples2

check-examples3: $(TOX)
	$(TOX) --recreate -e examples3

check-coverage: $(TOX)
	$(TOX) --recreate -e coverage

check-pure-tracer: $(TOX)
	$(TOX) --recreate -e pure-tracer

check-unicode: $(TOX) $(PY27)
	$(TOX) --recreate -e unicode

check-noformat: check-coverage check-py26 check-py27 check-py34 check-py35 check-pypy check-django check-pytest

check: check-format check-noformat

check-fast: lint $(PYPY) $(PY36) $(TOX)
	$(TOX) --recreate -e pypy-brief
	$(TOX) --recreate -e py36-prettyquick

check-rst: $(RSTLINT) $(FLAKE8)
	$(RSTLINT) CONTRIBUTING.rst README.rst
	$(RSTLINT) guides/*.rst
	$(FLAKE8) --select=W191,W291,W292,W293,W391 *.rst docs/*.rst
compile-requirements: $(PIPCOMPILE)
	$(PIPCOMPILE) requirements/benchmark.in --output-file requirements/benchmark.txt
	$(PIPCOMPILE) requirements/test.in --output-file requirements/test.txt
	$(PIPCOMPILE) requirements/tools.in --output-file requirements/tools.txt
	$(PIPCOMPILE) requirements/typing.in --output-file requirements/typing.txt
	$(PIPCOMPILE) requirements/coverage.in --output-file requirements/coverage.txt

upgrade-requirements:
	$(PIPCOMPILE) --upgrade requirements/benchmark.in --output-file requirements/benchmark.txt
	$(PIPCOMPILE) --upgrade requirements/test.in --output-file requirements/test.txt
	$(PIPCOMPILE) --upgrade requirements/tools.in --output-file requirements/tools.txt
	$(PIPCOMPILE) --upgrade requirements/typing.in --output-file requirements/typing.txt
	$(PIPCOMPILE) --upgrade requirements/coverage.in --output-file requirements/coverage.txt

check-requirements: compile-requirements
	git diff --exit-code
secrets.tar.enc: deploy_key .pypirc
	rm -f secrets.tar secrets.tar.enc
	tar -cf secrets.tar deploy_key .pypirc
	travis encrypt-file secrets.tar
	rm secrets.tar

check-benchmark: $(BENCHMARK_VIRTUALENV)
	PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --check --nruns=100

build-new-benchmark-data: $(BENCHMARK_VIRTUALENV)
	PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --skip-existing --nruns=1000

update-improved-benchmark-data: $(BENCHMARK_VIRTUALENV)
	PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --update=improved --nruns=1000

update-all-benchmark-data: $(BENCHMARK_VIRTUALENV)
	PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --update=all --nruns=1000

update-benchmark-headers: $(BENCHMARK_VIRTUALENV)
	PYTHONPATH=src $(BENCHMARK_PYTHON) scripts/benchmarks.py --only-update-headers

$(TOX): $(BEST_PY3) tox.ini $(TOOLS)
	rm -f $(TOX)
	ln -sf $(TOOL_VIRTUALENV)/bin/tox $(TOX)
	touch $(TOOL_VIRTUALENV)/bin/tox $(TOX)

$(SPHINX_BUILD): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/sphinx-build $(SPHINX_BUILD)

$(PYFORMAT): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/pyformat $(PYFORMAT)

$(ISORT): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/isort $(ISORT)

$(RSTLINT): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/rst-lint $(RSTLINT)

$(FLAKE8): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/flake8 $(FLAKE8)

$(PIPCOMPILE): $(TOOLS)
	ln -sf $(TOOL_VIRTUALENV)/bin/pip-compile $(PIPCOMPILE)

clean:
	rm -rf .tox
	rm -rf .hypothesis
	rm -rf docs/_build
	rm -rf $(TOOLS)
	rm -rf $(BUILD_RUNTIMES)/snakepit
	rm -rf $(BUILD_RUNTIMES)/virtualenvs
	find src tests -name "*.pyc" -delete
	find src tests -name "__pycache__" -delete

.PHONY: RELEASE.rst
RELEASE.rst:

documentation: $(SPHINX_BUILD) docs/*.rst RELEASE.rst
	scripts/build-documentation.sh $(SPHINX_BUILD) $(PY36)

doctest: $(SPHINX_BUILD) docs/*.rst
	PYTHONPATH=src $(SPHINX_BUILD) -W -b doctest -d docs/_build/doctrees docs docs/_build/html

fix_doctests: $(TOOL_VIRTUALENV)
	PYTHONPATH=src $(TOOL_PYTHON) scripts/fix_doctests.py
hypothesis-python-3.44.1/README.rst

==========
Hypothesis
==========
Hypothesis is an advanced testing library for Python. It lets you write tests which
are parametrized by a source of examples, and then generates simple and comprehensible
examples that make your tests fail. This lets you find more bugs in your code with less
work.
e.g.

.. code-block:: python

    @given(st.lists(
        st.floats(allow_nan=False, allow_infinity=False), min_size=1))
    def test_mean(xs):
        assert min(xs) <= mean(xs) <= max(xs)

.. code-block::

    Falsifying example: test_mean(
        xs=[1.7976321109618856e+308, 6.102390043022755e+303]
    )
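This falsifying example is not arbitrary: summing those two floats overflows the double range, so a naive ``mean`` (assumed here, for illustration, to be ``sum(xs) / len(xs)``) escapes the ``[min(xs), max(xs)]`` interval. A minimal sketch of the failure, runnable without Hypothesis:

.. code-block:: python

    import math

    # The two floats from the falsifying example above: their sum
    # exceeds the largest representable double and overflows to inf.
    xs = [1.7976321109618856e+308, 6.102390043022755e+303]

    naive_mean = sum(xs) / len(xs)
    print(math.isinf(naive_mean))  # True, so mean(xs) <= max(xs) fails

    # Dividing before summing trades a little accuracy for overflow safety.
    safe_mean = sum(x / len(xs) for x in xs)
    print(min(xs) <= safe_mean <= max(xs))  # True

This is exactly the kind of boundary case Hypothesis shrinks towards: the smallest, simplest inputs that still break the property.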
Hypothesis is extremely practical and advances the state of the art of
unit testing by some way. It's easy to use, stable, and powerful. If
you're not using Hypothesis to test your project then you're missing out.
------------------------
Quick Start/Installation
------------------------
If you just want to get started:
.. code-block::

    pip install hypothesis
-----------------
Links of interest
-----------------
The main Hypothesis site is at `hypothesis.works <https://hypothesis.works/>`_, and contains a lot
of good introductory and explanatory material.

Extensive documentation and examples of usage are `available at readthedocs <https://hypothesis.readthedocs.io/en/latest/>`_.

If you want to talk to people about using Hypothesis, we have both an IRC channel
and a mailing list.

If you want to receive occasional updates about Hypothesis, including useful tips and tricks, there's a
TinyLetter mailing list you can sign up for.

If you want to contribute to Hypothesis, instructions are in ``CONTRIBUTING.rst``.

If you want to hear from people who are already using Hypothesis, some of them have written
about it.

If you want to create a downstream package of Hypothesis, please read these guidelines for packagers.
hypothesis-python-3.44.1/Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :
# This is a trivial Vagrantfile designed to simplify development of Hypothesis on Windows,
# where the normal make based build system doesn't work, or anywhere else where you would
# prefer a clean environment for Hypothesis development. It doesn't do anything more than spin
# up a suitable local VM for use with vagrant ssh. You should then use the Makefile from within
# that VM.
PROVISION = <<-PROVISION
cd /vagrant/
make install-tools
PROVISION
Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
  end

  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "shell", inline: PROVISION, privileged: false
end
hypothesis-python-3.44.1/appveyor.yml

environment:
  global:
    # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the
    # /E:ON and /V:ON options are not enabled in the batch script interpreter
    # See: http://stackoverflow.com/a/13751649/163740
    CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\scripts\\run_with_env.cmd"
    TWINE_USERNAME: DRMacIver
    TWINE_PASSWORD:
      secure: TpmpMHwgS4xxcbbzROle2xyb3i+VPP8cT5ZL4dF/UrA=

  matrix:
    - PYTHON: "C:\\Python27-x64"
      PYTHON_VERSION: "2.7.13"
      PYTHON_ARCH: "64"

    - PYTHON: "C:\\Python35-x64"
      PYTHON_VERSION: "3.5.3"
      PYTHON_ARCH: "64"

    - PYTHON: "C:\\Python36-x64"
      PYTHON_VERSION: "3.6.1"
      PYTHON_ARCH: "64"

# This matches both branches and tags (no, I don't know why either).
# We need a match both for pushes to master, and our release tags which
# trigger wheel builds.
branches:
  only:
    - master
    - /^\d+\.\d+\.\d+$/

artifacts:
  - path: 'dist\*.whl'
    name: wheel

install:
  - ECHO "Filesystem root:"
  - ps: "ls \"C:/\""
  - ECHO "Installed SDKs:"
  - ps: "ls \"C:/Program Files/Microsoft SDKs/Windows\""

  # Install Python (from the official .msi of http://python.org) and pip when
  # not already installed.
  - "powershell ./scripts/install.ps1"

  # Prepend newly installed Python to the PATH of this build (this cannot be
  # done from inside the powershell script as it would require to restart
  # the parent CMD process).
  - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"

  # Check that we have the expected version and architecture for Python
  - "python --version"
  - "python -c \"import struct; print(struct.calcsize('P') * 8)\""

  - "%CMD_IN_ENV% python -m pip.__main__ install --upgrade setuptools pip wheel twine"
  - "%CMD_IN_ENV% python -m pip.__main__ install setuptools -rrequirements/test.txt"
  - "%CMD_IN_ENV% python -m pip.__main__ install .[all]"
  - "%CMD_IN_ENV% python setup.py bdist_wheel --dist-dir dist"

deploy_script:
  - ps: "if ($env:APPVEYOR_REPO_TAG -eq $TRUE) { python -m twine upload dist/* }"

build: false  # Not a C# project, build stuff at the test step instead.

test_script:
  # Build the compiled extension and run the project tests
  - "%CMD_IN_ENV% python -m pytest -n 0 tests/cover"
  - "%CMD_IN_ENV% python -m pytest -n 0 tests/datetime"
  - "%CMD_IN_ENV% python -m pytest -n 0 tests/fakefactory"
  - "%CMD_IN_ENV% python -m pip.__main__ uninstall flaky -y"
  - "%CMD_IN_ENV% python -m pytest -n 0 tests/pytest -p pytester --runpytest subprocess"
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=always

# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 402
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 6.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [1000 times] to 6 [1000 times] bytes.
# * Median size: 6
# * 99% of examples had at least 6 bytes
# * 99% of examples had at most 6 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWTKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTAZWGAY2UBAAR2KANAA====END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=array_average

# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "array_average".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 416
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 718.32 bytes, standard deviation: 529.53 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [6 times] to 3226 [once] bytes.
# * Median size: 824
# * 99% of examples had at least 42 bytes
# * 99% of examples had at most 1723 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3240: STARTPCOELGB3SISDODCEV7JDC5Q3AT75ZKZI4RUQZWPDNFR66LXGJOZNVGEK5ZUX5ACEEIA5J35PT57776767H5PL6RLKZ537IXGX2PTTV75JRC47PQ7FPG72KPG2CZ6YZ4O7R7NKGPPK7XUWS7XV2XP2KPWWMCQWWNDX5K5THPUZP7B5BNLKLKUCWO3OPRLWPXZBJ6ZUPG37ZX5K4TNR5OPPM7IRYUFHFQW7W36SUNQRNK3E66JXBKTCLXW67E4PNX5TBWMY3L7Z5U56UJSC5K5N42TMDMF4WSSMOURNJEULG3J6ZO2RN6ZNTE5HK4RM7WWKVIXYQB2KYEMDXD5WKLAH4OA7JUZ65MLF2IWK4TWRZ6WIB5WGT2WHDBBVAEVABRL2BMPUT3AOQJGQBNEIKLW5K43UCNBOSQ3DAK5G5MU6EORZRHGZTTAAMSOVYMBMWR4S3GRA7X36TJLFQGW4GBD24BWI2OO6WMQ62SJGAQIKPNT6ZIZTL2STOXP335ZP36AFMM3AFFCVDALYFAS7AA6ATO6AHLMEVFYIPWEGXQAKTUXA3K7WQEALGBVYOXPQDRAYXC4U77XH5LMCRGFEIH53C3VEUITCWAZSATTUYECITLENELNEUY3FGP3R3QIFTIOWBOG5ISSGU4DUILFTAZY54KJPSIITBECJTVUISO3IQCIO4YNDWBESCQ676WPGJAHDYN6UJULGENRGK4ESRL7RUYXGHZ7TYZSVPFK4JIALMNJA3G5ZTDOOFNW3N2EQUDDN5QIDWKW6O6E4V2QJLU5E2FOXBDYMKOCMLQND2KAXHG5MNQZCNB6TW5EFDIBJT6IOM6N7B3KSM7FMM5MOOU3YJVQKKGILG2VGJLJNISZDPISZD2GIUDQFCKAEQ6WBBRYQLVBKKYH5RF34RTGO2CJD5GRWHMWUCPTVKIUHVEUDVHBXGKNSI526KWRIJYFVZ6GD2DTAKE5IWHUZRB6WMN2M4R4FSHGC3KSBLFIUOMO2ANP7BODS2BPU4NYLGPURB3NJJKXUZCJZSCNBHNQ6EEG2XNFHSALLKEXTEQGNDYQY4VKAMJKRNHFMF3BXFIBBRABPGPSYSTQEHIDSUEIKHCLZ5DIIBJBTSMNPXRBAJTOH4KG2QIFZ6RJGTBYRLZKAPZ2VLACBMQJ4YTE4KX2BF5ZITSSTBWGJ4NGVR6NG4ADESGQERTCI5HATX5DF2MK7LZCZYTZQVKJIFMZXDJYFN5EWODLN6RXJOEWDRLULTCEAXRPXAXSJNICQFVELNKQ4U6VXCTSJK3NH6EZ3AIFMY5DWEKKJRO25B634VAK6EL6QFK3ZKAIZCIJZFBHDUZJSGMKW6VWNEKNURARLE3TSFFDLF2KAVPB52YDBKFQMOC3SCW33BXIRIJRTQIIRGCSBTLIUCQCVRPVYLTFCIB6CWRZX43MUXHFBVNXV6RAPIUW3K5JVEELEXCEFKI7TOE3RLVGAQ6EK2KYMC3IN6YOS7AE7FQOKPJX2QTVP3RODEKZ6F5RQNJCLMQMOQBLLT455J2TODLWDTTCQGWLO3DSO6VU6VOIXIQSWQFKOBKW4WNL3PQ25GIWCZLBFOKICMLLYDLSBZCRBMVNT5FOTZWHXWMFHZOUGKKYJZUK4M6BZ54UVXW4XBIHCCR3KS6GNWFSRK6QCQHM7MXQKRC3OAOAKGH7BZOZD5GK372J3PYJJDDU4I24VKHIIV36OSXC3YBHEDKYJNTGPZFCHU24YZQMARYMAGR76EOIB532UQPTTTSOF4MUQB2LWZHRVI4CQMZKAJMCQQ725HSKRKWFK3GHGXK2HY26HN62IKMPV3HKA46RR2GMZUQKXJYZH6FYVRMODTJDRBKVZEKR7JONN3PYRYF4K7TASJRERNJ3NS3V7N7NRWR5LTRP255LEWKWXOQ2EQ6VAHUPIGGE2DVEMBPDTDQXU2ZIITO2TQTU7LDHMOQG7FDLXMIYAB6AQZOSVU62SSSH
IXC5JRPFWRA7RT5KRLWHULBA4OOP5OQKEIUE3CWWJ7Z7A5AX3CFUGYEZHR33QMVTRV5UKH3C2HVDLSCBRTJ3JPDPUGCBJYMJIZICTZUZKSVYAUAJEQHONJ6XXIE6KXD22QIHKSQ6JXNETTXJN3VBS6JYAVJFNBLVQAQPURY33FHNS5AXR77FSELFUENWOZVBP2AZ5ZIJPYQX6SI4E723ZTQPKSP5CA2IROVGL5MMYOB5JJCZDTDVUKWGJYHA5BVKANSV7VCJFWROE25F3QPENZFUGXMYFOGWZGA2SYLDQYNTV65QJ4T2OU2FTKPTSZCZSXJI2LJNYADCETQDXDDIC7ZI73AEOGK3J6L6MZTGP6TALWP7QYIQ44NHNJGB3XHORFLQRXLHEWLZE6NKC3OZBGY5O6HZOHXIP6B2S2S6FFD5HAGO3ONE2U6AMKXZZJS5OD7BU6DK7OIYK6T2RTWXOPTMBRH4RRLVQZTGRZBTNMCZDEDJD4PZEC3LXJTFQCJ4CYKVOQSZ5RVEJN5GV7VQ5JYDX2QJUZ5IOWB6CPQ2J5EFP5IZVMN3I5UE37YCKVYWUUZISE55SVVS6QK3RRAGJCB3ANCYC67J4YUWOZWIDAD65E7UGTVIHSMLJCNJ5TDS5Y4D23EACQ25OGJEM226DUGN4NMC5VY6SRGMAMEWYDFB2KBUYZ54NM75ZQRT5CJZCMZZOWYZASGMCM45W4P725A2D3NLUY63YCD4DB7XDLXOHYUMJMH4F6SOOVNIGHSJRP6N4QNGY3WQ73Y3QX3W2QRIFTFDLQR6NZ5HPZNPHDTLULM6UEFOSDWXCVAPUYGK6KEOQNQROY5P52P64YBGZR3YNVB6GUQYRTHBYDAJ7TZACRSL5YUKTAEC6LXZVI3WASDBLFZ7JOLTWPYOT2KFOJC4HU2PLZ6DLUC5TA3HON76YVYG3GWSIJ4GU55ALGU4OM3NE5ADOYS6A5EV3WKUKXVVJ4TPEGVB22DQ7VXQI2XWLIFEOUKNIJL62TU5CQAI44R2TRLVM3MIDTLG3HGXWEI6XJL554K2M3KXYXQRMTTGAGR3DJO5OMG5THO7H5VHV4E5O3HB4GJEL6ZQFZA6N2LDWG75ZUNKXFT7P2ACZRF7URANQBIJFT7G6LGBWEYFWW43MQNHUBENBBNYA3L5XJWD27Z5FR65TZUXGGRWPDWRDH2K67GERDB3Q2JI42KY3L3IR4HX7P27L4757XH5OL4R47H77HVLZCNA====END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000007435 13215577651 0031075 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 404
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 269.65 bytes, standard deviation: 101.83 bytes
#
# Additional interesting statistics:
#
# * Ranging from 29 [3 times] to 727 [once] bytes.
# * Median size: 267
# * 99% of examples had at least 47 bytes
# * 99% of examples had at most 551 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 2848: STARTPCOFLGBZWISDODCEV7ZGH3DPCQEXBU2VCTZDJBXM6GSNBXKV7EALAQ2OI4WSYECLEIJ6Y7367T5OX347X57PZ5XVPM57X65L5XT7XKZP777PFNW6VN454Z5H5PTX2YNOM76NRUFNPZ5NH3P3NV76M6635S65V34PH7XWONTVHP25N5LWXTH5YTO7NIRXHPOVZ6Z7IBPW32V4GMXNQML55NKWS3WP22TPVUNHVKLFLXLJ5V2PSPNZGC6ZD3X5LYDF24DFWNUW7TBOXJ3WGZ45HZKX35DVP34GO2LCE7QD6P4ZUOSTZ4K36WLNRVGSLIJ7XP3D63BBQ52WLDYS2OKVJF7B5VFK6TBJORXXVCZLJVXY62OHKVYQ5LF4F74GADD5UAWR4OOQWYOOXGIJHUNEUX6E3HXTMKUM5HJU4EPWZIVSOADLLKT4F2GWRSSJFN7XJLC7N7KJVRLEZQWWBK43LSXCVEKPEXQYAI2PMPAVT5GPIJP3Q5FHXBTL627HA625TVFJLIVFX2CDWCKX3O44WU3BVNHW2OHUXWZNOQAP3YFQRVT3S2FG3QDLW6WLZG3KUMKU2VMOB5EDZEFINANFQXHVCIH7WHSV5LS6KBBJYRRX3O4E3G7CRK4WFAPS56DAXVV67JVQI5F5OTIFRK2PTLVVFIV44BT64WHSXPLEVSUTPZCYUSMFFM6XFGKNCQSNLPMNG46XXZNCM6YADR4322YFQ2KCWI6CIPVLLDAID55PMZETTL5KB3OKOCTT6TZBTVWBA2RUEPAJXVPER2D6A2H6V7Z6QCCFBXXPWTDII5OPFJBJ5NR2VVQYDEKSNTONRZVJIOJT7PLOJJJ2UOIZDPVLFPLKSOS4ECMEYRQN5DCWDRJUZ7EMAOT5USVEOOEAWBBRVJ5YKLH4AFAVQLS3VCZ7UPKUOESJL5CX7MKR4CE53ZAVFCKPPPPRIV22QQ5MQ6CFNEZVLLFVRZ6JHCGRV24FH53DXZKJHIMQ4XHEQGW6UMWYPKOGJXGAHOKBLOAQ7LCBMAUUT7FS4PIZX4CUKPXJM62DM2AFKFSF4URQYK4G5SPNX244AHTQKXYHBDCDCIR7XSYFLZFEUOERY3PBHUEUWQIVRVEL5CNJQI53RIC7BF2EEQVL3KVBAPSKCLS3DHJNN2EUPK7ECV5UQCZDWEBGSKRXNJ4DCRBQ7EWDPXSYXAGJX7RA7TRZS43ICMDEVBWQLXBRLILJWSQOHV32ALKLRX4J63G6MADORHDQ5DSSRQSOHDMLG4T7B7DWMNI6TSBU3XLCRY7LMXWPT5I7UEY2P7XPOXJWGP6RAIUERZZ24GR2BUM3XLUCMMX7MDSWQM64Q7MU73SDHSEAYSHBQPDGRUGLD573QOPLMXW5J53OJJYNKLWCKNOIUZQBHBJZ2UYVFMJWFZ5WXGMD77K2XBPBB7INSLQHA2443XARU3L3E5LSHIYMMGLEEREPFURQ7BBZSEIEUYQJYA57AR3S5DZRLLWUW75RVM3LILB5BKQJEBDPGJWKR5CIRHBIF4IDMDIJRPFUEDKBVKLA4ULJENYKQOTWIMHIS2VK6IW6N2ZRI72VJJBIWDVV47C2K2QECZPSAYIE62UZSCPBYM7QIOQGXVK5SAVSDVJPFC6AR7VNE4W2UN4XKVNQCVCSNUYQFHIAQVVQZRFVIJG6LKKTN5AIDTAIQIYLDHPRUAS2UWFCIQU2RQITDVNMJRHIYO3YZLCHL3UI3KFA4YRRRHKWECZNSZSSTFAH72X46GEYE5N2SKOACYANVWOANCQDHC4NL3TMSKPVYAPGCMOCPJGSRI2SCVVLD2I5OWOBBZ6ALY7EMVT2JZSDCQ4RYL2SCBOL5J5IYREWQNCJIXLT3HHPDEFNCBMJJLNJSFDMZAJ5LWSRNY2O66VRJSR72PYX4SVBH2RKQXPMSNTYRTLOHOQSJJGENMDKJHUAF3CNE3WFT3XRU32PZVALEFHITZSZQQVA
COZIISZQQ5I2KVK5X5V4RWJVOHAXPULQVMDHNRB6HDQCT7XAVMLJUNALQW6T4C5O5RK3SFWYHP44FQBXWKCXNKGXQIJ22GGL2ZSZYUMSA73KBC3INLFAMKPOEKCJRRCIIVGPOTZGVKIXOK66R2CAU3C6DIM2BDXT2ZE7NAKFIM4EICW3VO4WTPBYQJ4QUOAVXNDY43ERSETIH3EDQGP4V656IMA3PMYYWFCYG47FEXGXA3D7F2GHVJUWVICDYW7OAMKKYIKFIUZ4GS3JUFDKB3NC7MLBYUTCDSTQR4QRBEAFAKONSE727OM6MMDQGDBMWXX3CGR5NRGJGFZYHDGNI6JOUWU3EHHGWCCKAOAMFSAC4JCIBBYPE7UH6IDBVA3YRIASJF3Y6LKESLWYOROJ4DWNMAVHYMSZYLH7Y5G5ZEASAIBV57A2PDE5DKABT7SDN6EY76OV5YGEVEOQ4BOWQRWUBS6CEC6QFEKWDWC5GWHYMAN7M5QUBSZ2GUFTWMT2YRQWRZ4KOBYX6OWTRLOPVPGNOH4Y7IIWETDHCIUH4UPIPQKWSXR5T5XCAKMC2ZXESJEON3B4TRVIB4JURZSMGAQR462GEBR4K2KUPK57GCND5Y5MMTZZ4CYF3DIAJBR42KUMUMCPQ52DYONIGCUGN24OAQEN3IB35LVNOX75FRSAUTUOH4BTQU5MCSBC5VIZESA6JRGRM2SV7AG34KXZVHKZHMSQL43W2A6AL7AYK22TKIYAE66HMRJZJZ4CKWEBNKFEBK4BDDRYOMBLKYAKXXUOP5IWIMSOIM55GH2CDRZVFAIDVS2PBCMATHHNQ4H76H67LY6XZ6PX77XYJVCPZ337IP3YCCA=END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=never 0000664 0000000 0000000 00000010322 13215577651 0027662 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 401
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1195.67 bytes, standard deviation: 221.57 bytes
#
# Additional interesting statistics:
#
# * Ranging from 694 [once] to 2424 [once] bytes.
# * Median size: 1178
# * 99% of examples had at least 786 bytes
# * 99% of examples had at most 1936 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3288: STARTPCOF3GBZVYSDSESEV7ZFC4QJ3SL34SVDWUXGDZGSMYYHP37U64MIZD2WAKE4QIHJRO43SOP767R7O7767PV7PDZ7X37MZKPXT5PTTD6PONXX5FQZZ4KM6NCKHTLYQN5L7X6FU3L5XZV7DX3IHHPON5DZSZLOGO667PGJWNJZ23L7K6POXLR6O3XRRRW7PCEVFHC6XUXDOWX7YOJ77OXLKYVZI7GPRHJGP6TNRNVFQZMRZV7D5BLEPWEFCUR6YHE7NXLWM34W2SDD3PHZ5TBMFBKBV4L2TOBB362BMNSHIB5OHNA5P6LNY6OGTPLNHM3U4FZSTMKXVIZUZQP6TJM47NRJP23YZCJTK33NIOIUAMFIP5XSDC6MHAJ5GC5MMXV5CBFNFEQ3L6XVQWNQ2FCLKQMMR3NQ3EZYXPQGVZV5C3SEYYRWWGFTOZLA6ETNMJ4Q57HAKT3KMH7QRUFOV5UUFKEMZIQ62RRYDOZAEQWC3BO5Q4L7H3OF2I6NI4CXD4K5FNCG7XZFTCKXQGV4KI4LTR66RM2TLRP3D2EFXM4DB5ZRLC2WFVHVFZZMTTLG7WHSEAACONM7P4T3CS3YLPAFOZP5A6CHCBHZJFETH6TDYYPBYN7XBV4ON56ZLUIEO3JN6O56VTK6LLQ4ASE4IGQM5WN6EGSLSG2A6LM3K2N6RZ2OAHVBULMDEQFP4H5Y4IW2TPK2KRTQY3GQ2JPY3GNGYFKBHUXCIFTZNW4F5OQGAFMSAJAICNICTZCOATKIS4KE2NZM3W7AME5DLVV2QHODOYSNOYIU6UISTMCBPUCXRFMKOVGTSKGKEMHVERQLERBSZDHQAJDPJQRBC5NRBNCV5TFNMCE6PNXIRK4CGVXRUKPYVORFLUNLQL5S2Z4CGQ3IWOST5ZJ6BLGQIS2M6NS3SNXLDO6SBBMS4KUH2PUP27G3JNDWBMTUDCWWMKEQY33IMMTPCS2BMYBVNH2EP4LO2WYXU7ECGYQD63GVCBQFNQK2HENN6HYTFBVAFWA7UTIVK4UKCFMBRKLZC4YTGFYDF453RI3VBGHEEBJFXLGJIHOEDCG5IZBDW7KC72GEZYDV3PZWCAHJRGZNQRSHRDDBPK43AFNQNO2S7K5FFJ7FQNN6LTOSQ3FPC24DUGGIXPOJORZHXPSAG4BY2UFXCZTJQHRKUJSXWHUVMWQVF66SM2NPUGFUNF2IDP25UAEAJCVADB2VEWGYPC2XEY6XRUQOLWCAOQTRDBL5EC343C443IJUC2WGM2NKMWIDD6KEQLUWZ2N5GRSOTN2UNKBI2GSF33NNUUWB4OYZ25KYWH4IW6ANYJGK3XTILUJZXEMSAXASWWWY2CRUYNTBDXVIXU6SB7DMDH6KTKPJDPGZY7UYUKRRAKS2I7N62VSEIJTWQ4DKIBPRMW5X3F7BESXKB4JQKVL6CM3RMWIJZZOZMKXETSCDQDQTBDBLY3IU2S2X3HYOCHVTNQSGFIBHKQOEU6NA35SAJJQMJVHVN5IFFIMFNWQKOSR3XSAJ5VBA3EULFNNN3BWJF5QZAHO5C4RMIGNXQV2NVBUY46BDZZJWVR6N7JJLRNESJSHIZMT4SJPHW7HMAO7Z4BTOBGYHFZDXO6NXSCQHOYR5QVCMGYJ2ENFQWOAUKT5WFOBQKCUG4ESBN7IFZXY7VMBGOWTAPZDVQQOVGWLZUIDIAQBTRXHJVF4HVWA2MJRZCLXQVWXDV2NYGAW4HD6VWJPWBLFLKJIR2CDKDMG545HYQLV6KDTFUHNEEFGVZRG5MZCAEISL2E27GBLBGURLCCG7SRMNSRUBN4N2XOTZUWCTKQJMM4JV2CXNFEFWSWIVGLZVG2JHGHVN6ZCJNAVGKVH4AYVISGEZXOQQYNC74NKTJ2XKGTFVHHV63F7ARAXKKW7JWY4GJUV5QZBJGRW4OCVQ4CEHL2T73RUNOVE6H5HUREXFWHZAK4ACIARRZXMHK54BMOKNEWWQFPEU
JG6RW2MPVDVGXZQRSWJZAANUIXHHDTRZFW3J6SSVYMI6WYOHORLKE4XNABGYUU4RI2PCNFDM6NENTGSR3VZFVPIS6WJX6S3E3URKQIFXIEE34PRKQ63H4BOPSG7GOHTE34KALNKPUHXIWANJXOLPZBD6SFKGK6CSVSJBCJ5MERU6R2X266UGCXJ5NUDGXZR4ESYWBVYNRUMQVH4ABJXJPNGQWUV2VQV2KNNJHJQJVEWNKE773GN3I6F7ARZSFNTNZFFEPMHSBZXVWI2VVV2KLTXH2WTHPJRUYJH2F6XYGZLSIR26HMNP2NPHM43IN7THVTM7WRKMHVVGPX7M3EP34VMNAOORUWNZQCT43ITWIQRFVDEKLP2WN3ORSAHVRJ7UPNT34T4EHF2BRADX7ILHJQYEVQ57SLQFJN54LO4CUHSX3SZHUKZ3VYUHNTAST5QMZTWVTVPD5UAUEAMWVGHICRJKTX3V6WFABCXC4NYBCJ2N3FE4LFVKHGDWTJYGTJ634PCNKGMAEEATL5SRJTUZ2HIFXUWRUBAKWVNLS73Q2QPM6BKB6VR2OFD4VTIVK5T7V4Z5JL7KLSHM7FVSNIT5NLGYMTI4IGMQ5EBQPFHVAXUGCDGMHYLTRFURKOWQ4AQ77IMPJPXIIDU3LY2MOEO6TU47DMLAN46TN2SCVLY4SLCH6JQQMQHVWSNOIPVFAV65G33JJGOXIBBEKPFOPXGVBMAN33KCVBMUTONDZX4OLPHFLBBOKE7PHANVZKVEZ6CN3C5NLPCB6BE3DNNAALJWMZTEKOTCTKH2MO7IC2QP5KLSGQSST2GWWVCL362O4HBX4HGVCESFCKTCRR4SR3JUAHHMXRT6EBHYI2Z5KUX5D2YBYW45NER6C4ZJVC762MPTVWOOOZWIQGEPPXQZFB3MTIKIFBLVCLU3EWJUNEZ7CCKGVHL4NP3CZFHT5RPV2OBRFW3FOVY6XVUY3J7UTCXHK6ZCMX4M4FSHL25SHARVV6H3ID7TSYR4ZZ2GYZ5Y35TKFY5ILZFKRI2STZM3ZZX554E5P74W2GM7HMCMPYFQM2JIP2IO35XCISZJC4XET7A2CUNY3NZZMXOXQNBPD4SXJ5TUZRLNLDPR42XSSJNSTFRWPUPMBWJWSSCPDOIPUZLRNLDQXISU5IGVY53IMQTXW6HANEVWC5FDPMIQLTF4QBYVYUFQTUFSVBGPPYSSHOBPRYX44X3H4KAZHXV3S5ZAMBYVTJR3NAWWTO5PJMDS52XUZW2SESPJP5GXZ6XR7P377X5PXRY7L4W6L777AO2DEMFKEND
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000012363 13215577651 0032123 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 405
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 17180.64 bytes, standard deviation: 19739.17 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [56 times] to 128851 [once] bytes.
# * Median size: 9886
# * 99% of examples had at least 6 bytes
# * 99% of examples had at most 84297 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4328: STARTPCOE3GJRSIS3SDKEV4ZDD5Q2AQEYFZC6IUQU626ILZHQVXK5TEHTKKZZ3U676V4RECIMQTDQ7367HT3773VY6P37737OG3YRW7TG6P7WDG7726H6FP33RP75TB5Y5UR7Y3LM6WBX6TGYYUNJL42Z47W5CND67DK4J6P6KCR5PBBX73356TZ6ICH5TVNS7LSZVHSSGXVEC54OG6ELVTZOFN4W72QY75FL6JW32FJVXVP65UF7OLLNI473QZ3X6DWYNOKVPHX37GCRL6J2GJ73HLZDXLV6UIZ5XKPKHCR5FMOMV4OHWHB37NSC4EMR35TJ7ORWPPOMKZ4LL7FTJ5YRZ36O4UEH32WQGLN36FUHB673XRSYPL3HSBPPLL27IXO66F5APJPA6CWKGWR676N4HIMPDPHPVVDT24XUOXIXEP2IMF27TTR6IOVMSM5HR4OXLVTXXXZ4Z3P7NKOWDYFGW2TF45V7P6JP46P67LLUGL4KTWBOL7DDTVHZOGWOO45OZEVNVZZZ5WXZEB6K3E4I44S5P673JPTL3OI6LSGSK5X7535UZ3B7K3RZ7GSYHOFKWGKD3H7LBXCR3QUPWO2445LWJZ5M66IWEGXXF4VUVGFSE4AOUB4JZI3BXA6WO75NOCQW4AVGOSMOWGFOFMHPXDJTQ3ATJSSH4SVVAGM4ORTS4FFTP7OUDOSL2AYA5IYABWCUQFZWKR76P6HN5PWC5Q3PZ42NMYW65W6I6RNXMTBXXJ4BHL3XK7PLXFLMSELKAKKDQHMCRMW6I6XX4XLBHFQXCDZ3C23PNLQYMW33SEY76VDDHFH2ZKSPKJDTWRJFMI5WQNJHUGDTO72FKZ2SMJ5STPF4SML2UYF2757XV6KZ3V5Y5QKKVLKL57IG5BKLTHBART6U2X5OU7I74U7O6H6T2W5F5SE3VI2556J72JEFOKMOWF2NKPRE26VQXC43CVHYMXY3WNUMQ2MQ2US4MU45EJR4FFHQJQR7X6LZ2XJ4LCLQJOAWK5K6SGKTEDZN56PGGBGUCDDOOJGLEMNKV5PRBFUTMGYM4CMQVRC7SLT3VRDUG5KHLDMQHPU35MY5QSIGDASYODEL3P62L3E7KWTLQGS7WEMPK32XT5VEJ3BQZJCWOIK44BV56MJIZXXCYPV7R2RYPVVMZ2JENBZZOUDN4XG3K4ATSNRVJ6MPLHEPXCAPNDIXXIG5NFKAVK7LAONPVKUILLHUHPR2L2JB4J2O7U7ZVTIT52D2A2U7MYXPOQE2SYCZVWP4KMPUCFJMXQQIBWN5G6HDXTMV6ZTSE3MHNCYL4QDRRQDCO6QLL42BEW6CEDBWAPP3LCNKBTJNAIMDBPYBFAHE3CQJJHIXAMI4K5TPUAOB3PIJN4DY46YV3J3FZDVQIZW7TJG5OYQHTL6BZITO3GFPJGJTWLN4ECPB3ATPECEEZ7VNTEE24AQAY2PRQAPVRLBVNHCBGLXNXSX6DCAWBCQYMUEJDCW2I33O2CF6RGNOALW5VCGEISJHU5NUOHOQWFODBEDWTENSHVZIC2WW3WSN4TZPZPXZ3COQG6WTRNXTGHG2G2VTLGD6MWAAGHO2NIRB27XON3X4YC522LEQRLH6GUHKYEHYIETPWDNATQYMMVIDWZJX3QDVW4PFGGWSA6IDHSGYK7AJSGTZSCNCXOTASTQ3U4CXETHXNZXMQTZM6NEKSW7CDJKMWIK4OQS7NQZFTRCGJESGPI2L37ZGCUDRV2RGCL63WJ2KOB3GKTUAUEIQ3ERJ4H54WZUIGKR7KPHBH3NL4SK5BQ7NAL2YCWPWFA7BHK7F72PB3VYHI4Q3OPNU7UL3VMNFYLWPNFEQAWJSEVPRMRETWTLD3N2NTGJXTQ7UXQG3Y6LJE5VAPRNOY4JEVNNXLDGWFWM4CTXE6DX3G2SOYKU6AV2E4NHXFHCDV3RJ2UG4AIAFHCVYDQXZSQHRSH45VBWTRGNO3JHULOIH2Q7JRDCY
WSWKROAJIN24M4NCRIBC3UUAKGSPDXTRIWCPO2T7GBEMZBNI5WYJMQXTKFV6CJ2ES63CGBYOSPZ7RXGYTYFT2REFOFDPVUY4LL2O6C4OPW5ZGQP5HWWRETXZRZ3I2RKJDSO5HXGO3BG2LQ7MG2JHW4WS5NVESYW77PKSQZ4P62M3GWYAWUYWI2YYEAVMIAVB5X47VHIY6D3YGCXKILLJ3YVXKHJUY3G7HRUB6QDYOPJEUIDMZNWCKGXCVQ7JIJHLOPXZIHYJDMMEGHOOD4SZIXBNCMJ4QBD4UZLREIOAZP57GMQR574OXDCJSEFC7LUFTQBTSC7SGSMS3VEOTDVPPC2RNEETEUXT5MRUZA6EYMCLJ4HHWLMQMTTCKKCLTDICEWWPI3U3Q6GT6VLD2FTNTXDXCHSEUHEYLDB4SFAU4MVRLCKQUEYY5L2ZMPBTTS5NFWYRVWD2WR76CTRGFQU4RYKPZNSC42BZMP54CUHRLFOFNRVEGK5CIFHVE2E75LFX3JQ3I5JBGU2LRKBPHO5DV2NGLGLQTY3YOMYRJWIYVOIZSYDMIAKQHX5VAEW5DZ5O55CRQACCH7W2ZVTGUAELF2MQ6ISGMDSHQNGVR5QR6ZM4JIFSGI5DCDG2PRAB2TB2ZLU4IDPWJEYMGBLGMS7YZXMWKDLIYB53PPC5N6LFEJO2P4CTNEBLKO4BQ33YANJR2QNS24OWKIYXVIUBEPZYGVBLBYC475RFUOR4CNQKP7PGSO3H542MOAUDWN4CG77VLJAX2YXYH3O3RH22XXLOU6NRPUNGLGLRLOWJSOIWOZR4HBLW2UCV6LVGAPL75YLSNWG67AM6A7LBSP44CPPGNAUR2D4XW5NWPO2QHXWMH64OXDEBNU73XGPXZ3LE3F2XL3WTGX2CTLU2YYDDCRNWX7BSXL4CM3G4HAC233OEEYZHBSVIXTHZW7TZX44GQOMYNKEPDRXNDOAI7RA6MSE7KO5YCPI6VIA4VDIGCMUZGSTXQHSTAGSCMY53U6BK65QQZLBB5C3GTXU6RKMMZMAZZKSYV6OQ7OD6FB5H37JPXBG4YN2YJEF2UWUW2GLBOTHTOYXMPJEPKIWYME4QX4DBHA3EHANO3LVMGMVDM4ULBJIPIK5O6SQWCLBKB6AKTQ7LGOCO57L2DCYZUBCMH5NMSAUTBNQUBVFXTXC62KPTEMFESM22NK3ZNM252B3BXSIR2E6FEFHHOBWEPCSURINZGIEY5LVMGUT4DR6DTBLGZDRFNROJHJOHT5FICU7P3PUMHJJEE3EOPDIMGJHTQ3GDL7IW3RZ6QZ4CCNJJ3MZZLQP3NIXBJV6DJVFZBUNXGRK4I3JARXETLKN5WFEATM67DTWG6YTGGD23NAKPUDELSPMMFOGQPB7J4UA3PCY4F2U6Z3ITH5DU3DLZEHHNMQGTDNMQDF6TEJJGNZIKXGDKVYDFTKARXFLXUOIAAPSKCWWB3BCI5E33R44XVEKLU7X4LCBVH4UKBS6VFJ7NVS2I44LWLQMQBIZ66IYOWEK3GWT3Y3KRJXTZO67F5F7FOI3I3ZQDTLS6LRGOD2TMBCLD3R4AIKGMZSFXDA3UXCINRAEG6LBOIZ5XI6CJBVE4JQYASRX272V66JB6SMYXSULOWB5CA4JBOZHT7BL6VBIBRT4BW7DCGFTOA6WKSBK6DMDR6ABEX3B2JQ54E2USCJQ3KGM3PV3OMH2GKEREE7PTEOZTETWVQ5ZWZFVA65XBGQIWUMKKOMP5QUYXBXNSHBXJDX2WJKVQEGV4CIYDUFRSEX64Z6WPMFUNI2GWG7IPHJXL7VVXO2Z3HTN5ZJRUBKWODPOMJZKMBIZUM2DSEHQPSGCMZXWJVHV72AJL4PXSQ562LUQZ2URMZEBTP4YQHKMVMNCJMF264HBEL3Z5XKMAWKLOENXGZ3AJKYUVKCR2XYETNWWU7MY645BHNRRZBUUBWTV3YTBZVPSZY5RK7CYJWQPTQMUUELQY4MGWFKZN
X3F6YVTBRQXHDHTLH3NDUF3UQYV4BOTIUBWCGJGH7L44CQD3SMKULQ353X23RPA2NC34AQXP5JK2ZQ5D5EJG64G6NGF7T2LWL5XPIGNEBZYE55QFDWJXGMJGA5LODYAYJYZ7ZPD5I44K3FZSGCALO7PJ5I4UAUZQQL3JD6WIS34OQCPP4KCKT3EYBTPF3ZMEMEQJ6OMZJQ2HUH5KO26ZLJACP24YYZWM42FXSR66CN53HP7LQW3AKSIQ3QW23A4VSL3UQ6UDV3Q6FQ2GPF6DZTNNTMGKKL2UPJ4HZFAGPWCGMJQGKUGVO7775XD46P777DRR6P37677QD7HHX7XDNAY7END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=always-interesting=usually 0000664 0000000 0000000 00000003044 13215577651 0030244 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 403
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 12.25 bytes, standard deviation: 23.06 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [905 times] to 183 [once] bytes.
# * Median size: 6
# * 99% of examples had at least 6 bytes
# * 99% of examples had at most 123 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 560: STARTPCONKV6LN3BSAEH4CXF6OHDWMGE2DP2S6VLB66XWVVI77PNAXAEWNAIFHOEVI2NEJCAPPROM3C4YZS6X666IY36D7P4TJFFAGG3UU7SVMV3TE4EN44FEA3NPX5UHVH2E76SLDURRINWGTGQ2YIJR7QDUZ5G6SIBB5UBJKZQBDSW2OIDJAXCDNDUDYSN5EYJOA7X3G6C3EADDN3NJ2WJ2VXGMVOCSKFNYTBABSMTUZQXC7UYEXLHCD3NZ464MM3FLZZ3ROD55YKXI227CJXVEITDJI4CG2OJXIWRXURHPQGTEZ3HHBZZY3ULFCGUPEDLQGOGGDYSSRNCSEXDXRLPIWKAVY4UUCZW5NILYUAZZKUGXYQTS6KGITKTEU2AY6IMQOTETO62ZB5JIOTV2VXU5SKX7R5APIY44RYMPGX44HUHGGS4JQNUTZIFKFYCAUZTSMIYJ2DEP3UHHORHAUQ2XCHBUSUYRNRLUS4WMELM2KXAUSWQMWPGYSX3TDOZPXBAT3NXULFA3YD2QLWHT2L4TQDPDGLHZ77XPAOA756IF4IHRWYY=END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=usually-interesting=array_average 0000664 0000000 0000000 00000010261 13215577651 0031553 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "usually" and the interestingness condition "array_average".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 418
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 795.10 bytes, standard deviation: 578.38 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [4 times] to 2684 [once] bytes.
# * Median size: 997
# * 99% of examples had at least 27 bytes
# * 99% of examples had at most 1920 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3248: STARTPCOE3GB3SIRTOECEV7BBRGYGQDBHPL5CSCTTCZEPE6C64LXEJOQLSBQZMR32GUE7VSWEJ7736XZ7OP67H5P36XT7YR5L64RK7O57L4L6YWEP3PYNLXGWW76PWR7ZLGO7V7PPPTZW64Z5C5NNS33WX6GXX2QXLEMK7Y3BNV5WX5RU5GMXRXIB4ZPNE7TF4VDL22R3DNY4PKYPL7LALNKQ7LT5UHKOUB5HJTLLFH5LAOZ7RB2XNUDF44VZ6NJ3XZNU27F6CEX3ZNXOBZWYRNFNCFS574J56OLPEMS63H5LNFPZXSFST734LIS7MVGEPHTFGHLZYCMKBDYEICPIHA5XQSG6ZJSHRN7N4HJANOMSDYM4H26QXI6LBID3SGTD3FZHHEGW2MEUWAUROKYR5Y5S3HX4C2N6Z5O2YG24AYRHOLS34P376PYCU3XRLWKN7RUKEBLO2U5UUM3HAPBO5ARF43K6JKOHECADHO5QEDS6KW6KYAIOSQQOFVUHZQ2TW3KLLQQUWSQQS3JITAPQ3EQATH3TBAIBIRUQBDO4DJ5E5TQCHSFJ4RGQ6TUCYWGR5N5AIVMFU4QWHRXUALAZAPQEWBKUAUE3KGPUVJNQU7ARCSFBFQX7UYCJBUBCDSLTMDQI6Z7WOXJJN7F3SVIFOFH3RSANTA2TSMAVATMY3YEBIVY7MX2FVQH52GMKSLRLQULLXJYNYCKQQQADXY3IEF2SKX5ANVWMIA5XRAT67NNQGUSXBV2N2JGXWJIVVAVNLA25PAQGP2GBDXNG6YUDPNC7RIMCTF7HLL535OBNZVBHBZ3IAZIGDWKDGWICD2XPQ6BQKBFDV52CXW7OYSCRHOWHKIKWLOTZOMFTK4G3ETBCEYMYPKJIJHDZOBJXNTF4WECTZEVDQU4OTPWCZKRYWSS6BIRG6EL44JUFX3QL4CUS6JQ2E7QZKHEKUYEJEL2ACETFLX37DF6SUXBG6POUHMN5GRIE27KMXHK7UIIUDDSP4KJMAGWAUOBGR44WKIB6U2RGM2A4L2IQYAY4RNULXYA3AMECGHIG62LKQEVC6RAKSWTNABJAWXRRRPRMFDCEYXZJFEK4QYEHXRTFUZ6HB6NWASCZU3F7EAWC4LWDBQYLMLC6AUXLCRVDDVKBMY5RLMRKMPJ2BJWVLUDLKZEQT7JNHNMVCXGLTD2L3JNZIJD6YVWFXNYITEXAAUJFJQPQIJ7XBSFOETRFLJXQ22UDYLS6JXFTDHWVBL6H3OUDF2PRZLQJDA4LWEMXOBERSGDDWVKZRXHZIKG2BRVRGZYSKUL7DO5C6XMFTDOWUPJXDRQYT7HPK7R2HRHXND6SDPIDMFYF3F6PXWAMEZMAHHLFENKI4FOFYNJUIYZWE5OB4EB3CWGLPLEN6R6JLUJFMYXKUENZACTZVMMDY7FGIR3CE7Z6D6KL2IMRGTEAIKHFNB4LLEEBWGLMCR65FMWNYUJHKQC4WBPGPJBWOP5PBKD2D53NFIEUJNQ74GOXDFH2XJG3IWUYNYATDZLKAVTJ7EQH7HOKFM5OVZ3ZKXUR6D4VYQQDEE2XNC5CHKLEOXFGOVZ7D5Q4L7TYA5TIUAYWKJMNW4RZPWNCTHKBSZG26TAB5MWNZWHTGBHE2YP7BK2JTK6LFTCNBEQ57ZJZ7I3SYIEK4MWIJ4MRRJQSFXJIESK7TGYYTOEZIU2ZRFXVDC5AR6EAI62VEVMQQEVUTUKUK7CEITM3SC6O4TUQCUQQ67K24E6A7GDAWZ5UIXWNAXKHM65W64ZYAAGQTW74KU42QYHPHGPVQ4CRMCDHM3WXREUAOHS5DBTYKNGPQJ3A3EIUJR4AMMMFZ5GAUARAZU45FPFCBB2UKTL5Q6CGW3YUL64RLKHYR3RR5YJFK5ABOTQ24NZWPDBNUSUMYAEFGYIYYZXCCAHMTJLFLE2WBXLOGB74YVKI36FUMOBXDVKWXU4UDT5NKGGMR5OJSW2YAS4LEXUFBJMP
IK7E237THJJHI7YNZOKHNOYOOE3JSUA5PJ3COQKBJZKPMFCCA7SHIPNKCY3FVWFNQIO2YRMGOITCE5M2TL2YM2O5Q4OWRVHSJ27FRYUQXKVMPFV6BIUETOCCDDEYKMWZYDJ5B6DQ5JHZDZWTTALHDJ5PTEPP2UFGWJC5GHE5R73IM6E2LEOPZSEPWD3GW2UZ4EBJHFGWVAUDZXB5ZYWYVWON4MNZU4N53UGO2I7H5LARTKNFYI7AUKHE5ELIE46L4O5VGEECGNB33D5HCZMSXEA6RWWPAP5M65CDRH6DIIW55VXAYU3ADN2H7VL6IDPRCMN732YJMEYW5EM23VGWBAHLAO655RZJNB3FGFJAYXKZY7ACRGVSZYHQKAD232JKLBPLPW4RZFPFRGZ4XCZKFGOWQBSIGMQPB4BXGA6KKL2MPEMDC3T3VQKBYTO2OLELWJ5TNOQ5U55VRASD3QFSZGGVIBS23G6KOIASPNO2NQLK4FPLPRFKOZNNMYOKOS7LLWWH3ZCBHFFMAVXIFAV4M6JKMKBLOOJJFXGBR2HC6ICUG5BUZZ2DAOFIVZXHCVOG52E6O6HPOTKMWX25XZKEHEDTQXSZLDZUM5TAS3DCDBAAAMLMIY346PEH7YMIYH7XQFS7MHP6S2NT36O4RR572KMMMUVVVIZZADVV36QWKHXIZUMES3Q656D6B3ET4X75O2WPDUHMHY5DEX74S3BUBBWBHUPO7O4FSWIOEPRRUAFYDZ34LZEQRPWNDTI23ZBVWAIFAPTBXBP4ADPKBKKJXAEP7BHZMMPUJ7ZUXFSAE433JM3AM6GWDNWTEPZHO6JQTVZZCCGK464XIXY57GMGYFXDDDIEIAGVXCJZH3LENQMMRHHVQG7ZUYCYRY2G7ZVWQ6JNM43KWSWFTTBI7LPFIK57HQBJZJ5K4AW3BULADRKNEBMKL5PXIBBKVZNPLIJEV5VCVP3LWD5IRJZ42BBFOYHP7TZLUZU6CODJGZR43VJYTVNZRNQS24AEOXIPRVIT3DC3CXXSJEAH6XDMTJTPR5MP7F53Q7O7Z4IRU3XJ3JDDWVBXKP7RJRNPSCQOGDY57LKKOLVBWLZWRVYHCX3T3THH4WLU7INA3RZ4CE46UQ3ZLM3VX4ZVSB27JUFXQGLTUYIEVL5I7YSDX6BL54RZX4WSHUEKIVAOOYMP5VUG6UHIT2UXRVKC5L63VEE3RYKNNFDSJORTYKRKZ7XZPZ6727D77727K22XNP7POP4TRLMOS===END
hypothesis-python-3.44.1/benchmark-data/arrays10-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000012344 13215577651 0032320 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arrays10 [arrays(dtype='int8', shape=10)], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 406
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 18011.85 bytes, standard deviation: 22290.09 bytes
#
# Additional interesting statistics:
#
# * Ranging from 6 [44 times] to 168836 [once] bytes.
# * Median size: 8860
# * 99% of examples had at least 6 bytes
# * 99% of examples had at most 98792 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4312: STARTPCOELGB5SIWLSDMEV7ZGF3LH6ADQICK5IUQU626ILZHQVXK5TEPWUQSOZ52FOFMLAAJJSCP646H377XV547776XDN6774PWPH337PLZV37XV662O7V37P4PXJL6TZT5J7D66XXR523XV6J3L5DNVYR7X5VMKTX7GVDQZ75VN3677ELJPOD6ORC7QQW4VGN6PRDULOTBNKBV2PPXM5VL7VLRV57JEW44WC7I63KO5ZTFY7J4GT4EV4M7WP26G5FRPYXY24T5GD2VVVYMV23RXH5CCPPWOOG4H56X2XH4OEOOZWPZPR6P3YLEZ7KWDZORPB23ZHEZT53IJ6O553EUALMJLSIWJJAVN44323CD5X4LXVANF2T4IX552XSQ2ZZJXFFMH33VVTXDHP5NR64MLUKPGOQK2NE5NTPJBPNQXEAIXWTZ75Q5TF5HTKNDFGC7KW25YHN4GO55LY2R6K7YZ53IMU6V2EVK4VEGP37WG6D7I4V4PHWPP5XEK4OZK32YEYY5CPPNBLDPEATVHZSY27Z53ULHPKASOSG7A2R2357LFO5Z7VEFBSRQCJN45L6K4SZWRBZHQDPTDQSLQKXS6RXXV533HNARSVZQRFV5BZQN6IZUV6OMGSVEFJJ5TX3CDYL3UWQDYLBY3DR552LQY3YLNHVVHTBVXGEAZM2VGM4PPQS4W6TVDKXPZGW6PSQ7MPYXPEGLDNM7WGR33GCOJNL5IIEGIHHO3FDOY3A2TZ2UOB2KGCDDAZ46F5XFEVOETQYPVEKSZSTNM24FQVO3FOODX6LQCDYXFKVVNCBVONKKUXO4K65JAQLKQ5YQL3V6XJEYF2OATJDIHXIBI3YDL36M674UB5BZ5TTNGG4F7M56ABMUXAGECOLPHDLGDFPSDWV2L22N36UAEETLHISMS76WWK5TMGIX5ZONQSXHOUNF4BNZTDEQNBKLRFTRRRFLOKYX6E5S45IRTOBGPGV3ANSJSJRM5AG42MPY2W34NOSDAIZJHKUAIHSZKEEQ3USF3E2DHHYIJNDXRVAKCLMITVSJ72MYZ4Q3G2TLMMWHYTUC4G6OVO5MM5T66DTLXOAYP2FJ3ZL4GCNJHE6OAYIOO6UTT6HEQJU7AK4MHIMQGD6DQU6RLELJ2FPO4BWZ6X5TNIWWMRPTAAQS4YWL4N4NSRT6PRBEEBZZYIX7NEL6I5MRIVMA43F5O5SYARDTVFBKLR6FKXSPDQJ6GHU56BZZC7ADY57WY4ENPNSZQE2CYZUKASP7HEDVII7VFFRKH2GZIQKHJH6EKGYET66PBXECSO2G32ECKUOQTZRYZMRKRAOFA66DMCGP7GNFXL65MMM2HE4YF7N32PO4WDDMOGPR52ZSF5X4BDJ4ABOFN76Z2MSX2F2NMGMRCHV5ZOLYATOKOWI6JNW2EKIJABMEMPMZHV2OAGNUPAGETJ2EAPO67277UHXJUM7INGFYZCN6L3B7US7TVWRNBRUHPESVBLUTAC5F37OOVU5HEHWCOJGHXK7PAJJAI2GB3VRAGFWDHJF4XKOZG6E5FYOLA5OI2JKVVHIJ3BYVG62FOPHIC5RHELEKHG4QPJCRYG6E55LCFWH255Y4JNYHPBTI26F7VPCVWX5KIIOMI656SAGV2UVO5QCM3P7CHRS3NSQO2QJEMSXM3XQAZ2CPES2REH7EXS7HY43727KTOU2Q4QAPJ77FD6I34VSFZEYG4ODFXJ55AIHRTTB2Q3MUG6KREYGCS4DM2E7RGN4CREUTSO6XJVQNJCENA54XOI2QDHZZUEHEEJ3IPS5LZW3LYHH5JX4PBDTRFA625KZFTICSI3VXCDIR2S2JZBDRYWZOY5JR66ECILQURLHWHS63LW532TT734QZZABJVQNCOV2S3I27XYN2LJIQ6CM66EKILXOG6T2JA2LMJVSGBL24CMLZ6FYKNINCGOAPANGZWAO2GDPQRQB76MHABDZQMC5CFTQRSJ2UGAGTA
4JOIRJTV5TO6DYTMAQ75BFFTXSUNU3TDFYBPST745B7JG5JMLMFMFGNCM4TJN2RU5UWNUE4T7FUGJEJNKK63C6XOMC3B5TI2C5LSXPPQPOGG5HBBHCI6GIYPGGIQY23T6QG7QHMY6JVTQUQOKCLDD7BWTQK5QH3PFY7B54VH3XCGPS4AT34UUXWARF5LWM7PEUCHYISPIAYBNV555GG4ISDBYHZFVPXNBZPQSGNSNQWXAWPS7Z2MHBPSTLJB6ZT7TGNGJF7JKC26D7SEO5JMMJUCNBVQ6QY7QRTWQTMI7RCYOTKT5XAY6WR2X7KBI5X7XKVQIUTIVYGCCVO5MDIZPBKN43W6QFVG4LQV4ZBIPXHZV3O4XQP2IBBDHK5565Z3H2PMOBWIFNLLWAOXIBWI5CK3LCB6ECJUL4IV6PDCAJOD6D3JVBEZDHZTZSRTF6UJUFSPZEMAN7MODACOURZX47RHXLJ6RWL6P3AMG6EFZCGTJXPGUEYKRVHYZA6QMJY5K3G3NC4IWFEA3ZC4DSC3FTFFTWWEMR7VCADOUGNNNTE33R75WV4ACCZVWLQXZXFSFIQN3KRE42HZ6NTZQ7FSHNWABHBNCCJTHUEHVYQHMFXEXBTD7ETJ3CZLPDYNKHGRUX3ZUGQADDVMENNYEYZISE75WH47MGAXGK7AVYHV3PS3YQVNTEHUGXI3JUIA6POHYWWQPTJ7ZAJ4GU43SFTEAH43Q2KRO6HE5MBP72QRUGINHRZTE6U2CSJMJD7MENSJ5J7MMAHMZQ7D6L2D2ABOPV44ZEO5TCL5ND7D2QH3MELOAG44RJZ2CMUY5TSE5ELHPBSBQKAHGOXZEBWUTH2LWP3OQCBKES7RWK7NQMY42YEO2W6MVPGBSJTBPOCORKVRM4CRATD3RGWEK5EOEDQWKSELLO62DA3MCICMRWAQF3MU2TC6VONJVYTCTRGDXUL4ZO7MGG2UR6MOMNS6OSMJXVIKWDISQXYQ355JXVMJUS64BXGWBTIPE3ESRBXJX7XTDCRRMI3BHGHWVR2ARBI5IHWNULM4HITFKSRBG4XH2BSPN2ICWISXWR5PZVVKHYB4ZFHDQAEOGUXMELSY4JATM2CA5JP6C5TPKPVOZ7WQMAMN6P6TSWI22AHIQ2QXIYWBZQOBGDFLYGAWNWZA6B7DT53C7RD6JBBRCNCF4PFLGKE5HPJAYRYMG6BT4WNRCKAI7Q3HC43L2T4KTG7GTM3O4DCIZZPRNZCWXCXPCWLYHBWEFM2MCV65A4H2FBRAEWYESCPF6PTWOEQWBCUYRTS7G7CGA4VOQ6IXMOWX4BTV54OWINUZZ3XIMJOSIQTJK4JXCTDQ4GWMXUUD36UIPNSTW3EUNYYQMSQQB6YPMFKSUXLY6ZSL26XVVB7B5JL6743KDKZJU67QMGGL3X6MVRWY6BVD57APVSOVGKTBBPFMHRMFSB66TTOWATCUWKEIDQT6JNUTY4CWZZDBTFKHMIMP2ACPOBGBB5HPV3BNUMWHDOUCOGKUCVL5AYEQP66Z5JO3PT3MDXH65HWRKHE27C6EH7N4P72EJBTLGRUJR2FDQ5AQSIAULIUONBBIWMDUJZWDTYJ4JYML7G5SXDW5L6OOWJJ4PKYUCPW4NC7TZWABVYLHAXIEX5E7XDUHAR2JNVHG5UQH2PLGY62RQGYHO4YFETRUO7Q2JSOG5JPEMCHFRJQUXJQOWWI4HTZ6RORY6OPECHZWB3MKPXQOZ5B7GA5FBULKLPPBIFLHKMFU4XGESTP6QSNSM43W37X7VBICKQWWY5OD7FSKQDXRHSXXZWWD3LJHGDOJZI64DZWDZEHVHHCGIFFWC3ELSFEO24YQ4WJVI2SXLWT6GB43JP7F3RFWTZ3MW2W3NATL2VHA2C35WD5DSS2FAYAGIBD222DYP22BQQOON3W3OLCHLDTFOWMQCQ2FY72DNIJ6SOIMYGVTDRJ6ZQV6JC3CDVKZDQLYCGEG7LMTX6P44JVE4HOV
KYXDQGC3PW6DM2BS4HQXZTJTLRHHK547SRZM5AOPHWVRTE4Y6EOSECWEMBSXX2NMTYKK3B6WUJF2MNIXDEXKPDYOD5XACLPE6TDIPGGUYESHJ33XULSFTJAPYPLLOKPC3QGHLYQOB2C4QE6DW7OSAXNRDBLM5JHBTVW4J6BRXRUIBAG5OKGR7GWAIPGYLG2WSSODQQL3HE5UBTR6SWQN3PPCMT2ACLSZ4XME5EFIH3MGKGHWUZVH34F5SMJGROHJFLBON364H2PWNOHJUNTFZJEIYJX5GQXVR7P377XY5PHT777ZY7PVZIH77YH5DIL6XQ======END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=always 0000664 0000000 0000000 00000002165 13215577651 0030421 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 408
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [1000 times] to 2 [1000 times] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTAZWGCY2QBABUOT6OLQ====END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=array_average 0000664 0000000 0000000 00000010140 13215577651 0031721 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "array_average".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 417
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 553.41 bytes, standard deviation: 372.98 bytes
#
# Additional interesting statistics:
#
# * Ranging from 23 [once] to 2354 [once] bytes.
# * Median size: 661
# * 99% of examples had at least 65 bytes
# * 99% of examples had at most 1271 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3136: STARTPCOELGB3SKODODEEV6ZKKWAB34AHIFMXGMVXBLGMF3P533AP4AUNRLMZ75AQFDKGUM4767X36X3T6P37PX57D5KH57V7WV3P327X7WJY67P3476HX574LOGXW7PU6PXNPZO7VXSGXP7U7XDHSY7K3642Q5LDR54ZGHXI2PXPRUO7G7UN7PZB452BX6LWFXSLLVN3AXXH7L6TXZ66HSQSP5DJZS5WJ36247MN3UUPQLJMHZTXAHX7PBWNUNTZVNVUPMYTMUO5J5NCD3G6DVN4X2ENPW5DM4KFU2S5VUT2RJNTWRN376LO5DXARW52KCIDB7W3C5H3MZ6564N3D7DTUWUMCAC4UGF2MYQEQBWKKH57O7PPIQL2VBCE5LNL4K4QYEN72PTSS33FFIQBPBB6T2HWCD7KJYXN32MEOWAI3LPPI7IQLBBKOYSNLXV4AIUIN7ECVAECLASHZOFYJ7KZLYQJASZGMB63L3C4WTHHCY5EW27LSOJYFPTONJ7K73JN7RZSLSVFO4KYGQDUUFRPXPRQE6ES2YJI2BOOTWOKH5LW5IWWKGWYMIHBPENYUBTGBXQDA7O7MSQ43YEQCYMI2DK3AVV2DAFVNNTEYAPOUPT4NOUEZQQMHKII4GHRF2NKR4PYVIW4EHABIPBH7A5LLCWWKMK3UXUOAUDZJGKWGBWUJCYY2UL2BULOZBJY3YBN5NDG2V4VACPDOWCGBZB5DANVFGDMS7W25IIJFKQME5U45LMQQROXMJAUKAYA5MI53A3I3XLX6UKEZJYFLQUA6JZ2SGMJ2ACDCF3FUULVKBLU3AJ5SPYMUARQS4LLRQVBBWQTRNKRBRNHJCC4I5FY2JTQ3MUWTAO34K7YSOI3FRU5DIMVI6T33VIOULPSDX4A2CR2QEFKDXFQ76FOWQG2LUN3S652WOWBAFKIE4ZANFVFZUONUQ5FCLPIIW5ED3NSSP276CHKVGOJZPQQTB5VXSIVXQRP6VBBUP5UIISAC72XZILZVKDEUOV5KRWH5WG324CELQIWTE23FBYMTL4BKARWUC2J4NJ2B4KB37KSQQU2KYXXJZSUDJNU3NM5LE4LNZLS2QVJ2GWDNHQGA5CWPHMGHLJVKUVE3GJRRBDZKKWVC4KYENVA4SRD6CD5222TMEC2AUJJIEJYFZKIHU3MXKK3EC6XU6JCXSFFNZNFNIQWQJVKVCOF2OHMM3CVAJ2YURL6RH2MIUCBG237O62G2NFEYR4RHJM2NJICGPJAEFD4LMXSKRIZSB4C5DK5Y5Q6HX6ULJDSXBB5RSGJIYQV3IUXBXKLGMJBRJSAYRARPI2NM2LMZPE5A45AFJ5YLXGXPFRL4SEN6XKEKAXJYR6EVABGRNKNJVNTGQFEA5J3WO3MVIJ5KTOITMWE4PJJIRTGKUMY37VKITS2D63QFNEAGEKR74OLAHSPYWJ4FUU43E7YHMU7272W6FRJCKZFGIIAQ4YUWBKXR3ZE6VMVBCO4A3ETOCYRYLASJSUPYUHGEHAU5IWRBYU72T6JEOIAIRGVD7VRDJASROUPT2PR6YIMIM63LMYOEGK4HCG7GGHOWQAU4SHVTEHJ57TUTU6RXXOMHKNCG2QWRLH37LQKVTSL4FOJQIVA6WQA5NAULQBETOGKLFETWUOHUPO5SPBAVGNRBAALYD2DGBO7CU32PHU2GLGZDQJDGWKV2GBXPURTAYIEYLTGCMKVG37DDMRJRZ4C3XFN5ICYPXHWZFMEI3N6AODQK4ZZXDXW2A4FNI3MMMFKXPHUUYIZHUDCSSSG7AJ47V24BZORPLJ3NLJX3T3GANHN5L56IJ24ITSMLXXIR52DTR3H7E3IXJ4FQRDNGPUV3XKULGAYINDZAK2GZL3HFG7YCEON2WRJLQZ4OSVJQSIYQI6BZ7MWFVTYNW77RVEP6PJT4WQTWYJ5E3KMDRUZI3QF4YEYKEKTM7FX5QPHJJEBR6AEVBEA4FOCBJ5REUUR
CFRZJ6MAOY4UJCVCPUBTZHTMPKMUTQQKSSEDKOSILFY42R4O2X6QIW5BQJTAUGR5ROM3HMSECP35XYQ5DVJO5RVIKHCJIDHY2FJQHHXNW6EZDO47SYOAYEZSFSOYPUOKVCPXBQBTMOQW4NTIZWUXNRY44FRGQCKEZ3HNW3PFHDTLIQDD4RQ2MV7YGNVSDSFBEGSGMIADLSXCKAOB6NQFYG3XED4ZZM546ZIZSNQIFDXUHVZDQLEERDW4HRVPLYTTT42PKOYZUACCZQGQOL2DU2STY4YGFLVRAOVYAGRZ7EWESCPZHISTA2EJHUAQBDQMWAQPXKJDQLOMDOSJHUTC5B4SQOFN5GMYMT2J5N7F52UAYDJQG7GNXVL3HUD6KQAWPXL3T2XWXK4N4W7BSCFIFU2FJUM7MPNNFAXZMNOPSK5JKTXE5QLRL6EU2MMAOSBLQLGSQZ4BO5Y3RVZ6YG24YXCCJVTWBKIOCVW23YHNWROMGYXGBFHY7MZELFRHD2C7DIDQYGA6TSRRJDNFKH6ZB67MHLGKBPRWH7LGZYXTCTT747KXMIWELXXASTGN2JOVTTIAF6LCZ3LFIKQZRG7EUDKSZBLI3TXU3GF7UMS5GCBXQHHGM45EJZMPLCBRIPYMET7TSDNXAI7I6FC2HJKZT4ZTOMTZ3XG4V6BRRAX4DORY5ZBJNHEBIRU2AGI423TVJRNARNESBMHDI2BGEQSTMVC55WJS5OSORUZT2NSHBRRGHOFFLFEQCPJFX4EGBZ2UWOPKA44A4TAMCYJFH3QLOMWENAB4U3CF4YTVFJNK54CEDHY5XJNYLYXADOOOSB33EDPF5MYHDNSP2HJM6GOGKK5MP2CITFYYAJZN4CJM63T4RACYQ7LYNFQTYU72AIE57TJAA6ZCTYMRLIUGEKGVDL7AW44GAQ7MZYAQUL357AQQGL4JOCSWX7OEZTUWQPJ5K3YKWRN5Q3AHFWXEQRCNWNHWIWWORKWJR4G33ODF6UJ4KKXLR2PUSQYY2U6E7XTA7N42FQIIONVE575UWTIIEWYRDIGS2GAGJWC7LURIL5MV26BGGYUEOICML7NUVSMT5GIJBX22SL6VICHPRM7GNLC6LEYJG7P67H66XW5PT47776XW4PXGL775B6H4XFXHEND
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000010120 13215577651 0031426 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 410
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 845.64 bytes, standard deviation: 247.26 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [5 times] to 2546 [once] bytes.
# * Median size: 852
# * 99% of examples had at least 32 bytes
# * 99% of examples had at most 1482 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3120: STARTPCOELGB3SLWTMDCEW4ZDL4IE4IL4JW4KZOMSO4H4GK53Y55TB4AE3IV2K4JES7A2RWDP57P47X3T7X57H57X37BR7LLYPH7T6VQT45FZP7NTH56XY5U6WXXW7VXXM75Z7PL4PWTW546VH3ZMV3Z5DNLLSNMNVW55W2XLLJVF6743NO7UTSQVXWXY2GXX333VO3J2X47I27LNE5VUZVXFU54PHBOYXB7WX636YLAZVGOR7V63WLKLP3C3LUXZXFUZTYWB3VKH3LVQ7XTRJSJXHXGTQGHW43S77JUTUQVPY7I7ZFMHU3DS4S5XR3UQ6S3U4BOBLLZ53BGXXOXDIN7XJEG6DI5HFTRXACTN3SRKZYPMUAAH6DDJFNO23MYWJH6MQI64VBYTJ4XHE5OIUOTHOWWHFDS4BFWWFAY6OIJR5PPPINXKRGJTRQ5FXOJUPJGKUVBXAW6DEO42CEDPYHCKOLCYOK5Q34TQOKG3WLG7KOTLVYVBAOT3NMDS6UWDFBJFFNO5E5QCQMUW2J36FHSGYBBQZ7HBH57BUJRYRF4SPADDNS2GTXCTDLLRPX2BEWOTRFT2Y5HTY3RK5WPA4A6MFD7A3PSUO7YSH745O7ZQV26OZOO7PUC4EM4I2PICMR7WG3SWGTJ5TKM4CI5A5I6AGKSCRE33TICAN2ZDQKRUTE4YKBU3WCIKU6DW53KON5HUZKNRMC7HHZABJRQACMMWWSW4LZMLY6YB2KOQVETVTWFVEJNAT5HLK2JBKVHETTU6QERYOJUMXFPN4SYHW3MFLJQ4YIXIGTAXO5OAFIRLPV34ERXCUK5CVQOXXO5ZC4XCXOIME64UV4FUPG57RKXJDRWTFKANVZLOECF3KQ4AG3QACHXADBUEDVWYBR3JCRYDLDFFPCRBLTUJQIIXDPERGPSOM4IVSVTKLOXVFARD3NMLZLXQHNY6DJB6RMHNVU5KOAXJRVQLKJHIKH7AXA6ZE2EOLY75IUDEQKQPB4UTRGY74ZSMCZTKTNQHBYMOTVRRFTLLNVRT5HL7PWT45FRFCAAKR6FA4FRYVXUETXTIEXQYNHMCIF5SCOPCHQZ5QZAEIQSUCQDDZKPYTT4TRXHEWKXAPY7ULEH5US4R4NUCR7BND5NNU5WUPCFDWYLB4BSNZEOFHVLNIKWBFO3UXCJCNIS7QW3TJERRSNUK4LPQFSEV3IZKNLV2FP2OEIN24UP6BOFZ7IEG3LKLJSEUD4NT25Z6RMDDBKXFZEHLV3DJ2E2X6EBSTTUFGXR2SP5I3NVWRNQZVHSEIUDORTZKVIRYR52XGHNBNYFPOW2FZ4QP5U6YTLLIPRRRM7KSQUXC4O3WGKBROUJ2URDROH2ZD5KPGUOADW7IFZPAPVNOE6JB4JWJC3CLSOZSZZJI4AAI5MSDZAK2IPLDJXSEJB7DB4YYKMB4MBIK2FCRCBXSW4BYIIMGSCZ7RR327I7QAMI6CZEQSP4ONIBBTYACZGTHE72K6MPWV3DQ4KX5KGJYFKSDKXUH325QPPP5SRCAPXEIBFNGXPUCWRAAPND2T3ZM426WXZ6LE72AQLK6MRVHPDVZS5MUIBWBTK5IOPMWGSHWV2GZPE53ODIKQT4DVFCHJ6POS6PUEEX24RFLYCILAUUQGYI2K3EIBBSYMAZ4UE3ZYBYKHIUMSXV5NO6OF3XADZX37YIGIKVKCE2CIB24XBV3KQRQVAMTGZ2OAGC4KWFWISERI7JW3LIE32SVWT2VMXFLIR3RTT6ZNCHPC6LHBHLQXDUXPRBCPKMVFRTDZI4KIUUWGB27HKUV2FP4L2Z5EWDCX6QUGDKEXFGWFEMXD4BUMDRG6Y6ECUQD44DIAQ63IET4HCELIWEWFSCSPAMUI2AOTTP5Q5YOSCVTTIEJ5EFXKTF2QDPRAVWY4JQB74BAJVPWLWV4LDUKKPHFAODITWTNRETUYPDMQKCYLQKQBGARXK26AYBQFUTHM4P7DZSJEOGA
NVPON5NC3CUNAH4G56QWPHU6NTJ5NK7BRGGZHLFI3O7F4NHYHLMUHGJSFQDJYJ3B4IYTNTCUBJDBKW7JYFWD66YU2QPBQDEQ4MN7R2XACOFIMDYPCRC4XWSJVHI2VYNBSTPV45ARDTB6SEHLMY7OSNNJYTCEBV7IDPWWRGPMTNO53XNERVMAYJG4FIHKJNIGPJCQC2AT2QASCWB5MENIK3GV73UAQYG4MKFCD22P6L4PIZGJIPRQBCOMJIENNF6VYEAOIQDHXNM2CJ5DXMUXRUOX2CXIXEJECR72XWGKGFTSEQOYHUFPHRVFLUFDQWEXL3BKBSS7URVEYRSMUMLOALRWGGVOW2JV4UCZRXHKZABS753ID7CYEOGBDANTFSNTUIWMNZ2WQ2D7LUNHMQPZVEMZNRM6VZ4YC36TS6NRCATDE2U6D6M4M6HBGV26CARZRWYXMF5NMMOZVAA2AVLP33FCM7Q4JIU2PNFJT4CAQ3PPNDCUQN6VNEKNNIZZILKRMKTZXPCZHC535UM5DGIWQQMAM6CBFHZP7BLXNEYT7VAHS5S2YEXATAWE2M64E7YC54272BD5MIUMIYLPI645J6BSEZTHLJRKERDPIYPQVI65XJENGRFX4Q6SYW3QRVCR5MM4TXXQFT4LLUWC4C6VGPENAI6QLSQHAOCABXIJKEXKN2JEMPJICUD6RBNC77UKUH4TRMYG25DWLXKHPZIIR2KWNCEACHZRSV5CC6OAFQVDU2GDWKGVGTH6SY6WVTRQ4MKPZYTLAPKR5QGH3EQJ56G7L3FJG522RSBI6YRO6V4VW3NPQ3BJPGCQZFLB27EOTTL2EBSCAVDXPG6VR3UP7PAZGUIJ6LM6FSCB7ZVHOZAMGXPWCT6OGGYD3GXW6N6EEDDBZ6VPFJSMI55HTVBASY2RWEY6M6AN3Q3THM7KGIDLIDMKPLGH3O2GT3QEGJCFRSLK32EVG24CKDX3ZJHQI7AR6KJCZ65FUDYV5ZTZH4FNAASJMDEMAAGGCNPUOGGFFBRARVMP45L2EGCHPLIC7TAXBX6722SGXGB4OVQDCVG4JA7WWUW36YTIEW2WYHVTBW6652X7H67T4PT7P57775PZ5NR3NZ5774AU2JF3IM======END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=never 0000664 0000000 0000000 00000010062 13215577651 0030233 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 407
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 894.57 bytes, standard deviation: 162.49 bytes
#
# Additional interesting statistics:
#
# * Ranging from 568 [once] to 2325 [once] bytes.
# * Median size: 870
# * 99% of examples had at least 619 bytes
# * 99% of examples had at most 1405 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3096: STARTPCOFLGBZSISTODCEV7JNC5Q3LSFAJINLFDSGSDGZ4NE2DO4LPAEVI244D67RM3UJIQRFD734736OX3Y7H47X767Y7X4PH5OR3PZK7D7TZSXQ63ZTV2W4L5N55GRX67OYXQXZTX33C2H23R4DX3XIXPJT7Q47CO7O5U665DLDGGEOTNW5PNZ5Y72P3SFQPR3XJR2IPHB54PS7QZ7XHV346S43WGLMPS7LXZ6PVSM56Q3HPLWNEKT57KOPNDY2I4LNGETGNDAFVTC3JXFTE73XAPAG544OBCBDRQPJXCBDGHMV5RRSQKAUO54QT4K4HY5QMPDOYMMTW4OS7RJIHP7C47T7NTCDWPOWACYMA36HCQL2UXS3R4PY6M7WOHRJA46XSJ6U5L7FT23GEPWPJH3Y4SYDN5TU74L6R7SS2DXCGP3NAAUVADE5SSCEKP7SFTW3I4GGD5QGJLTT4XZHGALCYAG5HR7DOHEIRFLQB2FROAD57OAG7HJ5S4UDHWRVNB2RUPRKMEYTL4EHFQAMK3C3JEYV4NUJWQZIX2Y7JDHDHIIMI7QXA54GF5DAKXQGUC2QIZXJAQMJOTYNGBYLBU4BVIAESN36PGXA62AD6VXXTDPIQ3LMXBPRIMUOWZ4H3TO4W5DZQCMZJU3AXWE6SAMEGGA4CD2ACWCTKGZUVANAWYMQPNEM247JAYATTPMLDELA64YDDKGVWF5RBADU3EII6HIFDMC6YJQ7RD7SDCV4WQ6KCUCCNBQGANBCUW2CJGOUQNB55WDNCDIQ5PEGLWY64E7POM4GP5QYX5E2BB7BDS4QKH2OSDWAJPH5HSYPPLVXG2JZA2Z53HMQOHBVZBFSBJBIJ5FM5GP4QBRMXTJIA7RQIBSLXVRHGJVRTIR5OJQ2C5KKIK5SWN4ULTBNYMX4UKJTELKCQB7CVCUD5RBFZ3VWBP5NSIKTQDQMT2IDOZHTMYSJIZUCLDHBMRD5T2FDMOXGDSNN3MFSEI5DEEPNKQNPBT2AK4BDCAVEEOWUNYY2ZHBAWB72YYSP3VPS5XQGFKNUA3WRLQWYPVIIO3PZ3GEQVQXLESE3EIGWGXPI56A5CQEQSE52YGISWKRPCZKAHTPO6YFHPY3HUUGIQOXX2FJRCRQOJFFEGR75RXFGQPLXW3HUZKTNERRCXELHBI5KIRFPWM4KYZEJMRWEEWPKROJHEF7PFUENIH5CD7L25KTGWFSSM7QUOPUPAQWQ3FY4A7IY34QZEVEZYDD56E33USPIVFRXI5NAEZKMRUKC4JEHZIZUJD4IHKBD6ZVNUK6Z46CXAXJIWGSSZD7FITUBCODVJUKQMGB6Z4OYZDG5KSY7IXMCQBKE2R5B24LLY2JULVBGIMROPT5LCYLKONUS5VUVSTRP3NETCFOJSNCVERK66U6V4F7BDAMFZEOMGIS3ERHFS6IFVSRZKELU5PS2KV5MA34RGYXOKIICMKLNO4JSAIEALQKNOW6SOGTTVTNMIYITBLOKU2X6XRPAKS7USCSZQGGSXMVJBBBDAU6BNWVHKLPAV6X5SCZD2CZG6LCWBJMWCWKU2SOFO35I5ULI3JOBSAFWQNULFPTMTSWZMBEWN3XZ4J4Q6YWQF7FECBELDCMOISABBLMRSHQPRCFNFGGEX6O22PTVZ2HMBXPVGCSNUQCAHAEEA3FLNFBEOAHJH52Y5EQCZ3SR3IEZRKTFA7TDMMQZZC5VX7LW33OBJIABK3AZTPN7JMWHQYJ2EMH4I4OOJGESAQADKNEALELFQBMVILT4SU7ZNUT4AJFIW5JXVGOOL2LVA6ZXZ3LG4LPOX5MUC3JIG2RXX2L6YWIQTXZUAXNK5BU6LHWJG5DKBRQIBYKNDZKM63O3RHD4DOCJAK5ZQDPMN2YF7J3TVTUX5PHJZMD6XZMTM4YXWDWFFL5HXNKMFKBYJOULBEJWQCUOQLO5HXS4ZUKFNKJ5WSXMKYYHON7WLJJOYRZJJRAXUSSJ
FDJ333FNBJFKYDGQYTZCV3WMNGHRI4JGSEGXUIGDRCQ4W3BRFW7NXCH2HRCYBANRY2U4XJ5ONIJVZ2WJKE4FI27JCQEQR5KWHEVUSOKVSRG3MXYSWZFZRBQEPVNVM4J2EF4W4NEOA2ENIRJKTFU5GCVNKI2PTEHTXL4GZX5YA3O7HWSPZLO2NWVAHTSJPUKYLFIRE5YYFZQZ334ONPTAZWA7HIRXIL6FADA5ANC6EWMIZ5G6FP3DZ2PKK7EFQ6SBF7NXSTSFZXIAUGSN7RSRI7PRDCQ56FCIV5JAORXWJKGIUOS2YTN3FHNTVQ66QQI23AFFBUJIMREZJUB7NHDDHPASCK2EUOMA32LTBHPRC36ENAZGMM2MUFJAKYVNIJ2RI6ILJNRI55SX3JFCEZRWOMQZSBTMZJFLKSCBMTUAFNLXWVQMS7ZZROYYNPSR4UT6XDTJRPL2ZARRPT4M7TELU73X54R2XL5MYZJ7L64GDH5VURP7FJ2FKGLUIFNQG5T5LKQKLZEH6CU7JRCEPNKIDFJHHAFINFK66WY4WETOVWGGVIJQ5NWYVYIO32YZOS7QA62Z2GMMASHTZ3E4AO2VCZXQPWRPLKTK6K7WJQEWOZ6LPDWULF4VX6CGNWZFQ25DNORDV7UK5QDX7BGAERJYAQQME7UVZLJJXLXKUFX5EWHLRBZ2Z42PBVFPVN3OXDPBJTDMWYFABW2IU2ZFGVGL2724OI7MYMS5GONMETOINSEE3747LTWCZ7JEVKU5ZLTCLAYPMEROMKZ5VC3GVAODFBJS2LE4C6B3N75SJHKGOLESNWSKSSCFBDFXKZWEXS2CS6CN4XWIFWO2UUQV3XV7LQJFSNTUAKJVBE57TG3JGGVW4HIZAXMUKPBF4M35C6CFLGPRQCDZP7TMWAMEOWHZIEYZEF6UW2SUJL6NXFTKLCLAK6SSPXJHRM7Y5JRPKET46ZILP3HE3QUJOQETZN3ZIQYY7QLZAWT55FJDHDDKZPSFRRSVAW77P4QUKQNLYZ47HBCZVQS6D7F3FSIEAMEYRPUEWEQXRAQ34SN5P7VFKDUYXHQ2HA7X67D5PT7PZ4PDZ7H56PU6M3774B5CBXDM4===END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000012075 13215577651 0032473 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 411
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 12079.59 bytes, standard deviation: 17201.65 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [20 times] to 181025 [once] bytes.
# * Median size: 5220
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 78262 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4112: STARTPCOD3GBR2IS3ODMEV6ZLL4IGAQARBJFO4JZOMDI4FOZ4W5ZX7LBURKWS7P333GJBIGQLXUMY777PZ46777473Z6PH57PZQ3NTVPT63A7LO7X4HG757LQ756K73W257PLY73LLLP7XSJX3WNOV2XNOV7XGYV74RD7ZPW55V5DH46PNVPVWKS6XK4XVX7K2P7VWON7G27IS6UGGXGYXXXLMKR527PWPA6QM5D2PZKLIHMDTPKYLZWWEK25MFMWQ4ZN7FLEPGCR2FP45RH3OLU7FDP6GTZZXCTXWGD533OEPW67PNR4V327WI3MJU2PN6SOPMFOFHBMFVPL25DEP5SK6RUIXWYOWNULEIM2L7EEELYSFTSQTC3PGXFFEH5RJJLHJZ75Z7SPVGUFZ3FH5ATIZUQ6HO3B5CVXCT37G7XU6THS6TELFM4H3NO6JSTUWX6KSAWS3MFSRS2HJDDRCZAV3DXNXJQIWXHN5I224UW2AXJZ24MUTQNXOUZBYVUC6VKTOMZHRMWRMFCVRKNT7B6VVL3LFWGBZHOVLH3C3O2PAVPPU5VYXJKLNLNDSWYNMJJKFW32RS2ZJP327UTNSZNM7X3WIXZEEUPJT7TT2VPBNT6ZYXTEX2CLZRUHRJVENLVUL5OEDI6TUNGU4J24NH3VJMSAS4I3SO4QU2LDUZ6XLPYVEEKKJZCRPCCTW3LVF7CWU2CJ3JLYTIQ2VGWKPIXBA3WKBBEMXWFTIKK63UBCQUXFOXRWKHC5G4225FXLNPIK5HPKPDMBUXFHIU3X5RL3B3D4YXLC43C7YMT5OXT3FGY3NAOCKLKHTKMYRMGVKV6V2XRKAY3R4J4CAVCJ5SB6QNXR7FOPDNSPV6OUVC44FW4J2J3QTRNHHOYK5WIEUKIMHPA6DXSTSSNSI5OKFGPZMODTOQR3Z6B6XIJJIDSCAO63GRLEEWRSKQDBFSK2FN6WWUVAOIHCS34HGS7F4QILW72GYRCJD4FFH4N6AS6JFZDRDSTWYNJPR53HKIME6EN62G2HL4D2QUPPFLMTVBNZ5A6U424IRHOCMPU7OQIVEE76FTE4NTWA7YYFGI3ASBRZ66QKBAANGVV5DU4BR3SMH3QSZIZSK5RNSQDJO32ASYJDP4254QSHJNAWXHATI4SDXBBFUAN3S3KJYY3QBWSGY5DPTFFHCPN7NDJPTPMTUA23K4EZCAMAXXG7U2EB4WAEVCT6ZNSEBYYJL3GSBYWSAOUFHMRKMOS7KMBLZ5MDNCCDQ53UABQYS4YT7KJIJ66Y7SJPEQA5GZROXDY43AMQLV7RRCZVHAH2OVQRYHIGEEOGIXPQQQVDY5CHHE6R77X5R2XJLQWKA4T3OZENHAGSU5ZTHRSEDKGIGMSS5RMDFZP5GOK6NQO6Y7NNI5DVF4K3POKKFNIHMEO43UCDKJ5DTPUPEKVND3W4DQCQIKGEPUKUJZ42H2QRGDBJORX2XIPKDTFHOPGUQHXPS2FTZJDCXNZEK72QQU5YGUNKAVFUYTVZTBVXHAXBI5LHQ3N5BLN3UCUNXADABESVLWDIH6ILJ35EEMWO2XDIGL6WZZCC5MTLEGB2J6IMQKHJEXG5Q2UDVID27DPO3KRK6P5N4UOHR6SL2IR6ECQRM6NO2OEDKWANTBBC4JCDJH3WXBUGJ47B4FMCQX26YRM2DN54YEWHV6MCIJ7YCAT2RSDBLLCD54WWRLFKHEAWRQPJ2MT4JY4SIL4TSFGKSHYYMCZBI7SHRC4QET3QSJB5E4YFCL2YMKV3BXIYG2QN7CLG5OJF3IAL6D5EJJGYYFY4WEKJLX23SFXJ3QBO2Z4EI6TABUNG42LEDF66DCKW5OGTQ625MHRRFSJRQTZT3563MACVZJILQDWS5ICFEZZT6EKJSPIYGCLKDYSSOF4L3RY2NCM644QPASEDANBXEUOJCX5Q2WEBTR3J6VAHSCOWQHTPKUIP4TPNV6IJ63C5HCEPKOWA4ULM
2W2ZQRFUD3PY6KLWB667ZDSF5CCCWUJZQKBNQH224H7MVO4RKPF2CU62452LYXHEB24GINL26LIHESZIO56IYGSXWT47JMEX4NQD22EHR6BWGYR2X5HUOERU3LJKCOA44KLMRVM4D473X2EWMO6QLNGEOCV3OSDURRSOKM22NDFLWCFPN6FWOQBTYYD3ME6RZ45CVRNUCATUU22E7VC525SLDNO2LN3QUJBM52LTDQ6XBLEAYPLXINFCUUEO2IZRK6EPXI5ZE24ZBEAUWZ46CAOHTKRJLF5NGZUUYRULV23BE5MVGIHRVXHY5CDXCCRFVEVUJR6HPK3IB2Q4NWTMO3JYA6ZCCEEC6MHLFYTILUJYCQUMG5VBJALXUVBSQ3NPE3SQRNCRWJHO7WZDQQCXFIQDYHXNADQPKCUTCCJYSA3DROXM3BFMD5KK5ZPNDRC7EFQFOK7DEAJ7HNURYQDZGOTNNJ7AVMRJEFQJYKJVAZ3LIB3ZU7HTPUUK4LSDJC3XSHPL22NHOUYQ3CIDWXUUW7KFR46GNEGWUYOI6NB4XEUA3DNX4HVV7AJPHX2IHEQXY5BS4KAPYOQWVDRZUIYRUG6OTVNNRD4EXNVS5KKJSS5GPQBTBMWWMTGQ3EIBFUIAM264PMLIVTWUVAFYILMDJ2YWKEFK37YRIY2H2J2AHYGFLS5QKV2CDB6TWWUOTHJZKCY7QYZQYTLDCBQF5YLLHBP2OTP2PAJ5A2CHJDB27SAC2PRD2SO2CE5BUKRQIY4TYVTMQDKBWO6354GTJBQKBVDZ3NJOCD46KNW2SXGOHQ4FH4LUV7YYLISL3ADYASBBL4ZUMH4DCTIGJWTX2MIMNNRQ7W2MRK2AJUP2MMKHR3BGOTVD5KCW42L6A6IDHLJCGUZ7Q6FOJ5254AUZRAF5MLQIE62XOEXSG57KAGD5II2ZQUET5DZDF3X3TA4D3G2EL7RVMNTSGRVJ3PBJSA2AHWWE47IMUXKQFFQQ3BFURLRPVDFZJP6JSCLQZRO2B6EYQ7FPX6MDTYVZSSUNFGVDX5Q5KMG3PWVIMIZIZM2OR6AYBBPIJBSNWWJQ3LCEZ43PB6VPJZZ66WRVXU24HBWTFLZY42CMCCYB6GIZCHJTPROLVDEOUB7XVTXUVAZT7UXYTCI7PX5WW26D572IEPEYWOJPFVYJ3VJW3BRWI26LNGPTDDTBNIZ5CZ7SQDAMBIV3WW5DPUXUPAJ3GLRUVBUX62F6PYCZED3NVEY7VVEG72IIXOM6K4FYDOUGRR7SOYOR24STHKWMDKUZWHG47TNRLGRIS5MLWJUVDZW764KB6WMTTCUXKEN7NNDWNTDBMXUJBRS77MSXZFFR2OI5FQ6LBR5JF3KP6UXDBMGMN5HAG2ORDALOG7XRPGGSJAF2NAXPGP3LI3MBEMCI5B6B64O7UZWYRTCIMDBQWEJZLTPLD2NEC5GDQ4TVWSL4IT3GYTCCMGVVLINEQWCMP4TJAV4H6NNWU3CNBDEE2P3TW6U5FDHEOKG4TCMMORVDC7O7TYFF2GO7OUHIFQMDYTNGBMHFTJ4PYNLGA5GHMGFPZSKU2UZILPBZVUCBNRN4LKNDOKLBGGLMKYCFEWBYGSCBX6ZJDGMHKU5TYNM4WMURNQULD6SO7LRTUXFQ7JAH2WPXCFEM6ODOJTJIHVJQAJASEXDJT4CWFDR6AIEYYXYQIB2FOZ3V5ORQS3KEMJV66FZWGVVCUEO57LDRJU2SG24XG2SSWZZM73QGLK6OPLVUD6TUZO6IASNMQLGZJTV23WQ4MIGTAIXW2ZV2Q2ZNPM7D3DUUOVZMC3WKSJ3HH3PV65W2ZGQMVPE6YL6FY5Q2NOBSHDKBWYOZBVRDAHFGH2B2SZ4DRP6ZF5KPZBUAUSSEAWBSKTHRWBFAHLZ5OWPDDI342C77IAPBGHZWV5IO6UTGGAZWPT6U4XZQ3Z42XTBIJBE3S7BP4BTOURVB2TA4SHGFXZVOZED2XA3HS3HDUYZX
VIY66TGXDA53DYWA2QPOELQOSH3ZVAEMCH6M5FBV6TCHN56MJZRTYMIDMLJKRAIBIYO7TBLHDMDSL4WOUCY7D7JTHTS7XZ5PD47777H57NPT772QZP7N77YBEODVPGQ=END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=always-interesting=usually 0000664 0000000 0000000 00000003125 13215577651 0030614 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 409
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7.44 bytes, standard deviation: 18.30 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [900 times] to 138 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 100 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 576: STARTPCOLKV2JN3CCAEH4BLZNSB625T4UUNFX7CILG34J6L3RRYJQFRW5G6EIKSJBCSZPIV3QS72P3PT5POSNN7WB3AGPNSMRSLHBDMQOLX6IRJ6Y3O3ERUBCTYWIZRGHJHINKBAN4KSMDTMTVEXH4JQ5REANT5A67WPJ2J3RTFDY3FGUNB4OT2XZR4F3TJ32KVL2ATOX5MBWQAVSJFNLTNKYHHAEYDAPR5IVV72GE3NDNHFVBOSR2VGAESGCYGAKZLQ3KXL2MLT43R7VIUZPVQT7AXRAWJLJOJMVZLA6BWFMIJT47AHIEH26JDKGQTGN2MHNYBU37E5PJRBQFX2FIS52U2EAXQ57JICECFDDTBKECEKEGEMNNLXA3BZ2GJMTSBLHPB3YR3DZ4A62E2JLWQNUJILJWBCEFW3PANL4XHPXMKFTX2LYGDOIN27KXYTBIKXE7WX22AXWOUTA44RFKH4JDR4OV2BVBDKGUGTY2EZXAQTSEX24XS6E5RLLD7YGQSIWDV2YQ6FJFKSZC6VZJ762SSGJGSYX75MTRDD4ZJTNLU77N7YFZHO46LZ2QYFPQ===END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=usually-interesting=array_average 0000664 0000000 0000000 00000010211 13215577651 0032116 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "usually" and the interestingness condition "array_average".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 419
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 593.62 bytes, standard deviation: 408.75 bytes
#
# Additional interesting statistics:
#
# * Ranging from 25 [once] to 1786 [once] bytes.
# * Median size: 639
# * 99% of examples had at least 55 bytes
# * 99% of examples had at most 1410 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3176: STARTPCOD3GBZWIS3WDKEW7JND5RTJBYAF72WCTZNJBXM55E2DPMLPGJCZI3PKS76FABBSFENI737776677747H57OX577ZIWP6PZKVD776MVNPUCTT2PZH4PTNOG7L45LWTZWOZZ3H7TV7LNBH5UMPLLHI47S5VZ7H5I6P6K3HYXPNXK223HMG2K3PL2XXHVCOVTJ3OVDJ6663ZZG3GWDXRZYF7LNTEHNDTKKV35IY2LUZHNP6WU23JFE544T7M27NLNJZGNMXXZKHFZV3YR47VD4Z7POIPLWLVIIVXWVV7DYYAV2IYPOVSAY7HFV2VBLZ5MVW7VLXOPUVCVXI3I3VY2XLWFDUC6XTU2TBZLM2MLDRV3UOHFD3762W3BFCFG3F2ZJE6M4Q6IWERMW253F2X45PO2IDGBLMP5YWOGX4FHMDRVEJFRIXTC4OL5ILA263SPPQPVUKR3NIT5CMPE66Y3FM2Q24GDSOPFPTTS5DBIISJOHWQYZ5TOOOWLBPLFITDUIXO7W6MZLKUORLFRWVMVSL6K35KDBZBJH72UKERAZCBSJ5GRCRBNSLYAMZKYUQWAYRU6LRCYBGCTTGW52U63EGXPQ5B2PWGFAPUJNX2TEGXYIEHIAKSJBTKLR2TQEP3UYMPA6NRBEM3NKILDCM4IZ34ECNHZZJ32ASCXA2A4IE42ZJER3MSASASRSDNQCEQKBYAOQUFJUTRLEFD3KCYZLJM47M5BTHPMQFK5KSPDEJQXAK3CZJATQJLSLZDW2ZY5UAVWBKZ77O3C4AMECWOBTUKJV3M6INHLKUIFD2SEYEF733CFRH3APIWNIZBLHJ4TYKQV6QVIB4ORZIME465BCTTQJQZFOEZNAC3KLYW5ZKMQFIHEQVYLEQNB6KPX3XOETT5TTWQMKLFCWX45LOB5AVPSFZNQGUWTMXCZCCABSMVQWSZXME6VOTLKHVEZ7TSUATCTHEGUFEGK2DU6LUXTENXVTDFAAKI5KS3VYJSFILEYMGAJBUCSMCXO6OORFYWJARJEKSCJ4F4ZXWC4XURE5PK7SAAVLHQQBUQU6CAGJJIJAT2MEBMR32X5FPPLEYZQZ3EK6KUQG4EE2F3W7W35WCTHNHOOEUA5KJE2TGTXYYJ26MCRWGRCKCDFVU5MBYQ2Q4ZG4UUCHLX7FAQ6KO2HYNZIL2X53G7EDZ4TUT2KKDANRZLCVIIWJATUUKRJZQMEUXZCW42G3HTOWUHBYKRFWNHERIAIJVONN6WBXNKOLZKEKSKFAQKK32QDPQ7D3FGUKNZVYNDPL44PCVSPCE7FVTWL27INWEUSANXIBCX3K6J6HENYXNW2KE4IQENILU5A7VFXSRFBKL6TE2K4XQ2OPBYWLHR2GAV2FBUJOG2TLV4QUWMU7SBFBZ5NJB4AGF4AQZLBTWP3QIZUJWPOVKW36WSYV5XBWTOVQBJC2QFYRWTYNN3SJ4IYATEWZRMYV5H2O3QEPJ4FJQ4KAXVI4SAJ6ZAYMTLAYCTRATQCNI6UOMRXJSY5DTWMBCTIBNPG23IDFGSJL5MHML3OL27I6UZBATLNGUBAAVVAMBVEMF2NKLOVCPQ3LPZKG7FEKPPFQNTZTO2CKTEFRHTYK5SVMROGQQNG5GSL3XKIQEONASX3JZ26L2QSJBYSHIIDTHG7KUK6DWQZPMR5VKV4QCUO4MUYSZ55V63RWNX4W7VBG27542WBKO5NI5JSNLZG2QBAWCQCXU6P2VFPOSB6NRVNPDIKWXBMSEQ3JZSOBBMQLLMCBBYGYIONGJFGCPHXRWFVKVYRYKF5FESRYSYOPUYGE55PE4C45YRPEW6QGREHY2UBLOU2QVHOVSOAIEV3IVKZLDTQYESXSTBXUJJ4VKSVDWLPU5KOXIXD3TURAUFUBCPT3UP7T23NWLQD4UBWK4VZTPZKOIL42Q4X3QEOU6EIUJHJWO37LG3SK3S2C343AGARWBISYAK3LN4UEJ5DHX5F7ZGT
JHCXZWOY5CYIJMRIJZNTKQB7H2HOSUUKBGKYG3UCBUFUVPK5F56IL6MXWZSX7V6PSMJQV7OKVO5LOCIVYAUNBLKZEXDWWRAFVIAATH3OMVVFS6IZYYIW56MIUQEBHUZUIADFAVM33S66DISBJCSEUVBZOVF4Y6R3TBGXNM26J47FULZ2X2TFXM6TFOFAURAUALL3U3GO54DAQ2BST6YA5ITDOJI5ZQWK4FKRS65XYC54Q4PGQVQ3X2L3QXDXQZKAEFIERHANIGDWYT2IJROYSTGGFAIPLKVMUCLZAWGOD7MAM6UW6YLEL7LKYK2ORFLDGC4LTIBYK7R7EG4FK7O5IOKEOZ5QFAACXZXKAVENFNAKZKJOJ4TZVJVMFMJ33GSGAR7QZUQJREQOTYSNJ7RQ3ESL5W5NL27FOYOESE452NYJST4VSL3PIPGBULSWSWADCGVNKCC4W5EZRKZ5NBA3JJAA6VSLKAYFRU32PTOXSJHZZKWLUQ33UUS63LILFLUTUY5UMWHC2DZZP7Q6NYHOKG3LMF7VSOKIAAOAWU5BVSSIVASS3FLG54YO5XHNYJQBFJN6FTKAFZSIHV3GLHJL64NVGJ6FBQMWESGBN43RUNTU7E435B3VR4CT3VMM2UK3H227V4N2Y4V7JZT4DFCIUX6V6NKTP6WIURWRSVDPEZO345TSXBPS5ZQ5R7MK6J2SYYEBLJD3ACXPMJTTFIMUDJJUPMMGCQDD22JI7HX4ONPM3EYUJVIJX3NC2PBIOEUHV2U56TUL6F2EC2GKK4BGRQSXC3DO46ESQ4EKDKERD5C5ZWYFNBTL4W5NUNPRDVYEQXLL5GL3GMJ5ZSLEXXXOITPGGM7C7ZA5KNG7FDAOIGT3PGTGYKN577DVKDF2JDUGP3BQCRIGDVPMLZAJTQHTMQOG3Z3ZBLSUO645TIMWNGLEDUH4PTNL34UN4CUWLBTRKZRMPBCYUA6WPR2NTOXR32356LYOOEQJUVSYZUCNBXRXTUMFP3DIY7SRZKNE7QZTW7IZ6PAJ65LTYDQPD5BTYAYQNWYIB5FFQTPHZZOB33JTUX47JCVL7SBYJTSZKBMT4O3JR3MP6W3366ARFVH3NCGQILRXV2RD7SG6K7KMXCBG6XFO6JL2L2GPATDOIYPRJPI256WPT5P377766PTV7P77SKA776Z7JF7KVII=END
hypothesis-python-3.44.1/benchmark-data/arraysvar-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000012226 13215577651 0032667 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for arraysvar [arrays(dtype='int8', shape=integers(min_value=0, max_value=10))], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 412
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 14057.69 bytes, standard deviation: 19622.91 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [14 times] to 138211 [once] bytes.
# * Median size: 5749
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 89609 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4200: STARTPCOELGBROKSLWDMEV6RNUWABAGJCB6NO4J5JSN3QXSMV3PV3XM74ZLQVJCRZS72ICBUHIN7YT4P377XV56P367X46X2Y6HBT2335PDP272VJTX276O6RK6V547575CTTR774Y6H2PRLO37P5OX6NNXBT6U5N4KT5CLR6JVV366LL4O3T5KRVU636P5SV272D77TYRVH2PHX5CW4ZOOHO7LZVT57EOTL7ZXV53YLAVZBAVVV72PN2PX4YO55I6GNRWTHJ3U272EZ3PTRZLLYQYR2QFS7I6FT4Y6QZ33KTTXZHE23PH44VS63JDU73AMV42R6Q455FP6H6KDJTYPD332PA5Z6T4RAOFUYNP5H2M44MYOXP7AQDVHVWP7BU64LOIWZ46GR635ET6M7DQYPSDCFVS475T53JSERLUTE7MOHO3HLC5L2XTV4LJQD6H5O6JW7TYRL6C35WTM7HGQ6EPNF5RMLM64N5YYUC75JYOT5LA24ES2U7IAST2HUYHUEPO6WUXOMKM6EWOJUZLHLWTV22W7ELQE37LVNZLDJVD74PDW3HSFEQPHMOEOOMBYHF2JIQ2II7674G3H5PKT77OKKUY32RPFSBILMLCB3LVB2CBVCPD2D6ETNO5M2M6IGZY3IF3DROYNEVKKI524PAJLDVCTHQLVX3SDNOW45JAADP2CDHG33PF4LOBWPSFDKZ5IBBYQ5B53ZFHK4OG43TVXTYQMWPV25ZU2LVIH5YDUVN2TVKEAVZPX224VZAWDIGZHNJ2CND7HWUX2YZFVMZWPFYEMKWJORQDNMPU22GIG6NT72SIBA4KR2LOA7O64UCNGKTK55FT57BI5POER3Q3DLQR3HLY3XSIFQ5QITIMQ25Z4ABYB2K4DLAJGW5O5FR2N4TYQCQKHV7PHS64YMQV445OREQ4Z5LODEG73MOHC6MSMWHWCCA524XY3RX45FNVPVJKBVEUFJLVRSLQAMC45JYU42BANI4MEMKJFRSADPKKRAOK6N7VLLQ4PSXEFNHPM3NIXCG3EMG4XJVJ65BYLVUKMMH5RODZLHGG44D3RQFES54R3LAWRWCF7SRF3PKSI77VUCZ6ORD5BVRSB53WJMVKDLULPERWBV3FR3TFEVLMN4HAF3G5LY4M4LDB5DZAQISONPHPCHHGNVRDAKCVGIXKWUK4KGQP6TAVNNNMF3JIHOD3CIUFCG6CAPL4BVPL7A4LYVFZ253VUBZC255BDIU3XMYFOHPJFJVDGQZOTNE2RB6SGYTGOXFV5JSPZHS6FZAF7DEOUIDBGRD32QPGPKKX6LFX6TOENR333JOCP7TOFYAQGQF3ZVIJ7UE2DXK3ZY4DLI7ERP3TQKG6YTMXME34GZNKMPSQMHOLOIJHK6TIFCK6ACWXUOHGMNCWA5MV5VZEEUDDADWEOA7BCEAULWOYADHMSSPILG75ZJ4DYTVBO2A5FS2ZT4JF3XGW6ITCNB5GSFLDZZXNV4AUPDXA66HBYT7RJD3UVDLPDGXVTQ7PJ2IPSATMSDYIDM6XIYMS4DHQFHXUK3QFD57ASK5M5S7S6MLYDC73DGVI2O6HEHP2TJEYRIGUNE2N4H22K3F6N3IMWNWRQEO57PHDFJHL2USZ2BUJRSHGSIOMQHGYXCESQ5Z2JXY3SJ7F3V323JCTT2KB3PSHFCNTD36F4YR4EDRW34OUQEUWAPUNQYSRIENQROAJMSKUXF466IPKAU6N7I4RUHOYJ2GRRMJLBNVZLUVKPP3VQD5ULT6ATVLDPWNTUSWQODHB7AS5D2OSFWSRHHIJVWCFCQLUE5WZVTZ5VMCKZAGH6QOE7Z62ZUEIUG5YAQFOWE3ULL35D4LC5SWECA2VADXXCV2DXVKPHIYHZFZRRXFWI7KTHGKIBMGWOVQ6LAKOF6QDLO6AEEPI2GBROZV2AAETUC7A4KLMWICXKKAAF5FUOGDWNURRQYEG26Q2QITZOSEBK2QY3LIOYDF
YHNQJJLKNTYOHRBVIDLXDCNP3XTGK7LBMM3KR3SH4OXXFBZTPTHNMHWQU6N3QRIQXFB7ZID5FEMNIBEFJ2NDDXZEJURJPQVYXFZYLOSNHKVSSPF2R2DYZNUKICGROSL2GD54AGFW6EK5AA5LMA6YJQRXDD4ZQG2RXDJS2UBU4FFAY7A6ZATGAPXVC6VODT2VSNBRFE4DZBQQFGPDFIVFXHKWD6UBWUBBIQN2D6GUY5GY5JRKPUTLPYAEQBQG7FBDRIFERB5QGWRVDADOFZVNPNFVWFF4GSG4QBPVS665OD4XT24RMZS257MNLFKFDEGGCFW4YW4P47QE6GB3BWQLZ6H6KZ4HI2MKHPQXNH5HSYVTH54CNQK4J3W4XMD4774NL7RDMT5H7R6YNA3OATVWEU4GON3HMG5WA7XTZFFIWEKG2VYOSQYREKVP2VHFGX5F7MDUWPQOWYEFNGBBMQUBIK3KGKVNGJHEOUF5ISL2ZTL66MMOSDSMAAYGUPKN36KRO2KPPILLANGT73J747ARGAEGQE235LPT4VZHX4PXLKLDU4IPJP5OVUEE4QET3Q6TXZVEOWIUOIYEUSGOTLMQY3Z2KH6DRULDWJYDOTXUSK7XXUAT43B75A4PBK732TURSNFSVGMMARR3HS36XWI2WRJPATZFOZ7Q5GAH6YJAKKT5DDABAE2FB47THKXQ65X74NHCPTTGUJLJ6PCKMYB4PZPADFE443U3QKJHNM7M7UUQU42X634IXK7L7EZC6I7CVALAWA2ZAKDZW2IXELLNUFEHNZUWMRKAC3KC6APBHDKRHYGCGJIXXCHWAQPNOB7JCF4OHYK4EAU3TQYYHVFVTGBGOMCHUIESIU5CAHL4YQXMO4IGFCNMYPC3WJ3H4W7SYXAHMMBZ6KYI2AFSSS3TGKK2GQSMRNKGLYMADQEIUTQCJ6MN6WM7XI7H5F2TUVG4YI7AJMTOPNRX7ZPWJBPA34H3GFYYCZQZLHHC3ZSTUMQM47O2JMLZHHSCQAS4OZHAWQ6YUFOVSOHES47DWFUKZDNADKHBRCMDBNVQWP7RHLREE4BDS334SRNVWTJIOPTQ43EQ5QP4HEKVJNNJALPHQHW3K4GHQWYBLDK2FPJY6ZQRVLF4T44TZYGSPIPSO5Z7IKUQGMAGPY3GHTDMI463CWEOPD2DYV7Y27K73YXMWRRWDZWOX2P5R5L3ELK7562TI34FXPIIZILIUJ2PAUERN7Q3TGLE7WIIE7BL27I6FIXSAQICTAL26OO32YIYPTHMY6NPIROD6ODNL47ZAUVPCYHAT24OGN27BIXED352ENF3JDFPIB45ZLLSN5MAYUQWN4GIBTYEEDJOZRTSVJSNYMAXKS63QM7QCQYBQPR7KMMLBPD5SFYGS4AXHCQS64TWOZ5XBYEJMHCBUMC3RO43TTPSQTJJALZ2VBGSAVCFX7ROY63ES6BXFUHK3ZUPQ4H56DT2CFWT6MTVSXGU5AVU5CQ2MYASLY24GBCSUNSDAGAGW67C7DF6YKDLYBMIYWOCSBEDEH5XLGEDWNMUL7AXML3OH35DU2LQKNBVOM7J6TUO63SH6SMDFZDY3L6AFKN5SCL6VG4VQVMIT34NGCZVDNAREBG4374PVUHQHFETOIYZK7MLE24QDJTH6V2V5II3RREHE6FDU4YHQYNWNBPMMKU442KBHPM7RJZ5LNZ2HUAH2NHML2X4A5GUC5LKXCPZW4BUOBIGW5I5YYPM66T6SP7OTZUVZAFEOHU5G6VNVRO7VTKYOXLMMAO6HMU5RPO4EJTPGE5D47ATOL52OYY5MB2JYDLIX5YYTMKD6LIMFDYTJDE3PUIXIAWG2A7T34ZTE3ADQD2G4YNDFW7LLKGJZBG37YGPKVXJVCTSSZHGNN24COG6JZ2UGAA5UYCWOG2JWNHJ7VA4IGAPPQ36ROEM43PDHDG22FFKCVFL7KIQWNBA6IBKTW223R3MBWE2PGIX73F4OE3U7LTDTLFJ
JDU7OTO646RHAG2DGVD46EKAIIZQ6PUDVO2JIBVWKK67MUHAVL6T3PNLTEPRPBBOBQ2JS2YLMULPNXL56O2GQUIN4MA66M2BJ2DBJOBMDFFLV7NQ6BWVEDZVILAK3MHTGMWUG4FTHUWHPNQ5FGZ3X7BZQ6QH7KL6ZA7SGDG45QNCF5L2474PX67LY6XZ6P777HRS7D55776R623UM4KQ====END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=always 0000664 0000000 0000000 00000002117 13215577651 0030255 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 378
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [1000 times] to 2 [1000 times] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARXG232QBABUPE6OOQ====END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=has_duplicates 0000664 0000000 0000000 00000011601 13215577651 0031743 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "has_duplicates".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 414
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2946.87 bytes, standard deviation: 1606.30 bytes
#
# Additional interesting statistics:
#
# * Ranging from 526 [once] to 14321 [once] bytes.
# * Median size: 2554
# * 99% of examples had at least 786 bytes
# * 99% of examples had at most 8342 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3968: STARTPCOD3GB3SISDODCEV4ZLD5QYYSD37PMKIKPNNEF5TYKLVO4QB5K625DUKSZVSBEQTFEPBXY7X7775Z6X54PT7P766AZ4754XLXZ66L57EPV7XMKR37BW47NOLXHXHWNO467NP52XDSX6PNSWVX2BD6T67HGK274NKPVWH2C76MOD3WOJ6O2FUMZN6SCDLMNPKZDO3GLR6TVRSQ77DJNGH57ILWTLO3FPE63PPVQJWVB65E22XGT7M5GWN7JT4UOPYO6Z5O27IW2JP7TFUH65PVVP7QV5PUX6OHTHWEULPG23GVT2HCP2W5U6UZHY6DE5OYXO3PBYQMP62332K6CVVP2OTXM4NVFJQYH32JNVP2NHNS6FNGHTEOMM77PDVT46PLRZ5ZGHVBBIZ5PGFKHSVEUZKQTXRO4RHV645HMT7QIJ2PTHV3F44OWJX3KOF4G6UGUGHH3XA5HWK2NGNN7XE22TOHM536XUUW5FBR25I5KAKVMXWVB7TDFLL6LSE5IDE5SN4YYS5FYNM23FQIS37HBK46F2LLUOVVBYZO3DHFSETMOH5KJQWD2KKLAVAINGL25QRKDSHIEXC22LEAYFY72NUUZZMQHNA4QFZETEHJNGDGTE5DEJHZ3WR3FD5L36HVFHGP7RYLZ7OV37LX5FHRNM56TWHU4P3E6CCX66LC72CLMXLZKYAVA6R77FKZZ2EBKWQ626VO6HX2XYAVHGL3MIKEY2PAZHGB3ZL4JZVEGWYOORDCBJ6QSYNQ6Z7CZ5V4MGRNXUHSCKOGMNE7YJ3KMVQOOSPCLU2VXYMMQ6ZZN7ZTHX5WLDB7CODAC35SMD7ESD2STWSVSSXKOKRFO6URZMPFTXY5Q3IJZVBLSCFWMFAXAVRORINK2VSA5PGAINYVSLEJFEHYMR5CTL3JSAOGD6KWXFUTVZRQUHIQUEFBA4TEKCCQT7NC3UHMYOG3CAELCPSAVB7S7XAAJ4YQYGGTJJCOVCH6FNMQPVG3KKBCSIQN4UNNLV3C6SVCQBQVV75I2BWZSMOSIWUQO5UKY3MXAKFIN64Q7SSR5WUCKZUTHGT42TZFRKKKTDQL4EERKLM5VM2XVDWUF6RK6OK6JANZA3BGSVNKBKTJ2VZO4HMIVE4ER5YA3RIYIGVANWBSOQMUYH7UO43CWWQSXLQCEKBPMQBR5PL2ILHVBAWKT22UXBIT5G2ZLS4NZEKP4AKRHNIBXJNE6F2V56U6IKHEKZLPS7SKJZGGQGZKS3ZLRUWIYIISMHGUHNVF2QKEFH4XWVFOPMGRHU5MVJWBKJMFURE5PIRV44B5XQZYKSYGFIDM4XDMJG7TBHCMJ25BGKKZNIDCP2SGSMKCLHJQKKVL7HLVOPNB3A6ZIRWSRA4QDMJDU3BZWZAVATAJD5SXWAWOJIDLDZYQAKSGYOQ5SMRM4QWGVRZ5IYRFI5UQEPUPXQTVSBAVVDDIOQSG4XCIR2CQZZBVQ2LZCCGEWMKCMQJKZIGGAU4CALNA3X2CXCY2OYMEWSGCFXSF3DL5NAAEPZAOAE75GAFPEXBKP7KPUPVPOFOHIQNNKWG35RVMSCZYNN2BSKIKHIXHUF6MKZRFBCGJ7UT7MBHGT4XTLOM7HMANYE6YA5EH5NO4QDIZWIGPJRQEKX3DQ4DUKB2TRYOG5GZUHZBKA7IY3CLMLRYNW6LIKYJDKKXVHD6E66FHNNTDTHVCCNWDBVOJZNDPKAA44SJZZIOPYVRJSALSNBYKWMAPQW5MSISVB2HQGIHUIAMM5V6F4Q62GR5HWV6LZSI67NUJQ5MR5Q5AVYWQF3VLZDGJNQMJXWEAGVJWEUDBBTJ7KC35XCGKCRUWKFMA4CO3LUBIQSNPCKONS3FEUYDMNXAWK7P4OXMQLKLKDFKBXKZIDDAUGWUDQKGJT4WCJNCTIVV7ELUY3X2ADFETPGBX4RZGUUF4O6QE4XTBG6SFMXIWV3EU5F37GHM6UFYJ7QA4VVW6AO
3BD6EB5NKKSUWJF2EW3PUCYASFI2ZPCBX4B2AFHPXXOUFIOALGEAPAW2RJYYBUVY5ADNBIDJSW2PPL2Q7JQTO6LOCYFA7ZMONE763BZYM7N5JDL2DCDDAWADL5F4YHAL2PVSLHCLU3APOQDHZC6GCFY3OUDYIIBI3XARZJE53WGEZ7CHWOOOGL744YTESJHY5EYY27TZ4SZNUPKJUADUYEUJZC2TDW6DL3DBPFZI7BAMPAD4ML3PN7FEVWIKIISTVABIKPQNL7U3AEOHSMAWH3JRQ6OBBWE6YQWTKYCRJDCK5MAIRWGLO25G5FZZN35ZV5EBPL7UHF426AO3VU32MMMLAA2KPADFELHLWMSR3IBAEXQNS7EZIUBKZUNORQRMSLLJ4LD2JJBR3H7TJDFLFQOODJP62IZP72GHZYRDSZNEUCMAADXGHPUOUP6BMJDO4HXJSTBLAMQ6TRSSY43UBBCTTI6W6YLICRR3C6T6MYZDUTQJM3QDQ2IY5U55O6IVOPTO5OXHA7TAA7ZBYVG7D2VAROUNSB4XJSNWTVKYHWXGBFISBOI53FAFSMFXUQTNOENC57XAJUMECBWKQ7C2Z4FVS6O3MKPZ3JM4OQG4YGJB4NJAGY7LCPAZM44TJEGJMTS5JB3ZR5E2SZU36YDKUMAT25UYOMCA3RRNNPPNWLYELOEF6LGHIOHQOUPU6IXHQ5LV2CMOZBMQN35CZWLZPVUTGHXJXY5WCESSQVEJCSGWJEG5KLDZENUO4J2BRPADQ23NWUXUEBY6SLMMF2E443IKDWNTCBULLL2OTLN35J4E7R536WWM2BBQNEOKKHGIGX7KHY664BXXO57N3TGEMHVIYWRTW4WHRMGX72VYZNBMMJZXYG2OR7O6JNEI5GRALQAQOCXKPAHSU42UTH53Y7GRTXUJ7DAGCGGE7QF3WUG3IGIDJT3VRYYCYAPKXR6LDEAUU4G2YL3KRA7K563PA6B2VXMYU77B42TUWC2R5HSFVIYG463PVCAUUIAIYHKEMZFWIAU72ERR35YUYFMNDD46LON6YOKB7EFPJI26RY2K2NVGJAA4E2EIQDRQ4HDDWVGWITD4KSHXHDTFFGHLIENDJMR6M5A3UNL5Q3WWWREJ53XBLAS2ZHGAKK7L232PNCYUNY2UYLFKXIYERONQ4WXJW354FM63MHGMJWSGLM4Y5TENRFPW4MI2QYSG5DWW4Y3NHMEJNGCQSJMYGIJD6X5FIPK3Q64KKZGTIGCL2EBFA4YQJJ4ZX62YOCEUYFNWJOGQDXW5MJ47X3WJR5KKSB2OHRSI5A72KDENX52WQIPL5LV4I6VROUVBUBVSRBZM2YBSV6C3HAFINSO36XTHZCVTQYR44QJPYDONU2BQOMBUIL5O5ZB6PROOUAWIC3NJ73KXO4XW7WM55MOKFM7NSKM7RII6BFTBGZA446L3I72WE2FCGUJSWWQH5FPANZEEH7GG2MDE6TSGGNE4XYSCTQOMC7ST2DGUMB2RANAG53NSBH7PEBLAGDTGQH3NRQ3X373UZWQIMJV2P6YCS5VQJELI77B53WPO6ZYI3A4S6VSQKPLBZVXOGLPVZVUSZSOFHVJSPY2SWKTG7PPFSAVMPB3B4AG3DK4R4YZIJUQUGYLY46CTU64WCVLGL3LOPR4TTW7D4C64MHBM527HDWOCVTN2XJQ7JWCYE6563SJIBCRUIX5KHJBXPNBF5HVSYZ3CZ7UHUILR7OIQFUSKLM6H2MIB3GM5KH2ZIOHRH7JKXNCHOYJ4DXRNW6TOOYNIOCVABUCSV6GYSC7BYRHO3OH24YAG5D3NJ6J74N6PVUTNLXQ4RTAGXG5L757FZXVFSCFZHN6VC4C62KKQCYLAU5CXM3RRGUKSDNUUJPGJ4EIDAZRHYEQK45WU7TYVGCDDDTTDKSYWTEVAYRQD5I7MJ2XPROG6YUBJOJIYP57H77PY7XV5OXZ7P2FA3775B6AOOWE4END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000010202 13215577651 0031266 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 380
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 771.53 bytes, standard deviation: 922.53 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [6 times] to 8880 [once] bytes.
# * Median size: 529
# * 99% of examples had at least 10 bytes
# * 99% of examples had at most 6150 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3208: STARTPCOF3GB5WISLSDMEV4ZPD3BRJAIPZW5LFDSO2GDLR63RXOV3TAPYA7QKHEK5LXKVESIEQJES7XH5P357737PL67LR4P772TD7DHR63X3LX3LWG3LX3N35XWFVS63W47GXO3P3PQYN3XXM5G5RWPT7VR257XL5D655X6WF3JOFX2S5O277X2B4GVGRNLLN4I6H55HZH5FD7PLUU73JTLNDNVOAXWLZD5W47NXYNHNNV3PLUF3LXDRLQGQS6XJ7OV65WXG6VT7EEWL76X4C24PPPLP6M6X7GT6DXX5OPPV425AURQB7PLMNOT3LG7MPF6NV2LRJWHJ3JJ3C6ZDP662UJNX2I7VWG56V5OACD7LTHAAHJZHJ27LIYYXNTWW2OEV5YMUWQJLQCU7RR6V3ZFU6OQBFBANWSTE5PNNWVB6LXLQAUU35S7PFDSFACWW73NZWXJBSIHZLU2323PGHXYSJA2X4CRRQK7QVHOBZOCH2PHFXOOPWW2J4IQVC4ABCPCYYOKNPXMUJCCZSVYFK3OVYW6CVEWUJJ2XQHTTYWVVVMVOZJNHZGRM7JLVAWWDB5KNKARYXNF2BLPRJBDPVUW74UO5NHNFI5DT2HWZBXCQBHL5KF3OQXZRNELX25CXXL2FXVCEKI7DGFHHCVUPL5YRMLFNNWCZMSEO2FNN5OSAPPKVCDVX5DVU2KNKYS7BWXKBJBOWDL7IDV635YBLMZXZCYRPCV5J5EJBF3O4DCNZIRL6CRTOHMJBZRNQUWTY6WDH3QU7HJFCKGMFGNLS5VRSABKJJUQR7UPGAT5VSU3ZNVA3GSK557IAMJWMABZ2WNCL5NHFLKU3ZI4SCIKLXY6U2L2ZAJBIHOILPUNQLPBXUA6FTSVBGW5WUQKSPUE4JMLYIXWFKLFMCMJNJNODNA5TSNZHXUXEBRMTPPEJZ7BPYJDK7OJFUFWQTDY3DMEXMC2SRBRIRC6WUSGNB7AKJW4ENUHUZTXIUT57J2RZS3YKUPJROMCRJJDS6U635AMEUIBJIFPSR2CSJAWEBBYHU2Z6PR2AS6BMMZ3UMSYPLVNNZWKX2TNLBUIUL4LXX7UBNNM36VZQP2CAPPAEAUJBOTBHNSZ2JZE2TI5HIKP64ITUPLHUFJBKFRNWVXCOXO6ME6ASDTK5QQGCSJSGTJNMLMI5C5VUQREUAIP7OJWXMEF7EMMK7KX4CRJVT3UZ4MCQ2WQHDWFSN5Y2ANIIYBTWFJX2SL5JEBM4SHKC5RC47PIDFSXKENWCBW3LCYKHVZXD7WLRO7UFCSCCZMCZDDV2A5CQUEJ7FKV2UQIFTC2GYP6QXUDXX2LCJL5EPZRB2BQZMCJYO4X5YBPQLWPSMJ4OTHVDQCRDCV2Q7UBFRNHFOSS3GQFS7QPQUAEE2PWRO2II7UWGMTYZTW2CVNQOCZSESPWS32QZ4J6QEWMX6TXKZ3EENLG2TJAE2VFF7MYXIIN3LTVJJEQBS6GZPAFHLJ7DR32XVXGPZZ6WT7FHTJFVQ75GTCKWN5KVWMAQHQWAZ2XNFVKFVUYRARCPOFBU45XVCDBYTCOU6LEUXCJULST7BHLDKTKEHT3M43QBVGL5O7YWUQ6JGGZ7QXOVIH26M2CJBUAN3YFL42QSW2MKKPFPMCBIDZMXBRET3KQBA3NICREG4MMG6NVYUO2ET5B6CYTYLP7HUPVTPQY4KEBEZALELCFOUAA3JMTVHL2OJ5AOVKBN7SRBP7ABS66XKYA2RMNEORPSCRPDIALI25UNKIITY6UKT2GJSKWZEYI6JIRXKGCMEPRJMCYJBWZJV42W45TY5AM4AMQACFLOWNNM6ZOVL46IRPZM5U2S6DYSYRZPH2SMLXFJQ3MWCOZRMSTSYSJUDVJ6MKF2527AW5FRQJXHQU4TZ653ZXF6DIF5ABUFTQGW7NAL4LYHD4GNYUCGVCKLDQFGFKJC4HWRN5UK7G6NDLPBMUFVDRSB6UXI3C7DNBU23Q3U
B3NE7TF2JQYKWGIFDCN67TWIAXUEBPY7567FMSA6WO4PKTCAAMR4LL54ZXCFBD3KVKHAXADA7D4WO3BNTH6PI2PFJMPKMF7R5EVHI7ALQU5HGRHZVVI5F7T5YQ4JJNGTRYLSHA7FMNBJYJAQTJ3Y2IPOZFSDVAVJO444RKB4ZS754YCSY66OAPSXPLQRAQ2M2ESDH3DEE76QNWPBR4UYZPU4QGV2FKZNVARF475EPOXKNHXHPNJQ4QGC6WRMLXOLCEYZ6FHH4FWZKZSUISQRLPMOGXW4HCNU2HZPH7P72VWVPCRP2AHQNH3JIO37ZD3JINRTRN7VTYZMQ6DALKZAZTWMMWA3CPV4WFA4MWVW4RQ3JTTEZDAB3P3IHW5ZVHJYDQ5DIXQIKP4TBRX7BVAZ2SICBZ2PPFTZULPHAYS2ZWRIXNEEI5LZIIT2PANV3NUX4GRRE4MG4DWGQOI3YKSHTOJMJQUTDJXV2TVJQKGMGEIVQVV4YOYOB6ARTOHX3DGJUASYELC2XIHIH6BP4BCK4DXNXRT2TBJTRKMK7O5VFRYLOWNHMAVQWOWLKDCOSUQUXC3XZDCKXDTPYECHFM675TXCEPRD2SWY7KOJNCC2CDEGDUXWIKA54XVDUKTFLIV7K6T4J5CDTTXIZI6JFHIUOZB2Y5MLZJQKOU4YJBFJDTUT7V5UWQFZ36OPPML6P5PRHVQ2KOOV5DSSRK2DNMQYVIG4OZJZWORKCMBXLXHKNVO65LXZPZU6UOG6GMF7S7YBEWEMGZIJDEYKYVUNQNMEGTU6ROZVJD6ZKHZ4OPRHZIW4O4VV734QYXDPUXKEMYJRY4ABXPT63X5GPAYTBBSOBZEMD3J2RSAD44N3VBXOBJPNBGVYCIF44IL5DANIMP37AQYYSASDVA4O5SLDFLFILN4PTKNCMYJUZUXINTEPJWUWOWIFUZ7BXK7YNZ775JGAMHBKR34TWV4B7FUFOR36U4MIIUH74VXZOJ423R2BZCRW6C2KJ3KU62B7QOO5HUFD75FJLDBG2NDTZ4546HL4QCE3GLR66EMRDWZY3GL5A4II42JLXIRRZGLYFIM3UH6M6H5LHYPLTF2PYE4GHFS25RSMKO4CXZHMGMTVUSXLKPFZDWA5VQOXVM7L344KYMH7JUWEHZD37WV7KPAPPI7QXGUWCLYP3D6BXVMNI7P77XZ4PV7PL67757H2INFPTH77AII4NMOLEND
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=minsum 0000664 0000000 0000000 00000010370 13215577651 0030265 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "minsum".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 413
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1357.65 bytes, standard deviation: 694.08 bytes
#
# Additional interesting statistics:
#
# * Ranging from 721 [once] to 10775 [once] bytes.
# * Median size: 1156
# * 99% of examples had at least 775 bytes
# * 99% of examples had at most 4018 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3328: STARTPCOE3GB3CLS3UDKEW4ZDL4IEETAO7W4KZOMSO4H4GK53Y5YDU5EWUEUVVYXELYWTNA2PJX47P777WP57777PTV4PP7KFV7GXR7NGVDLL64JFO23ZH75PR5LDRW6PLNT77WYF4ZS4ZPFOKY7FXUL4W745VPCZGWSWR5V626LNGOPK6HMPO7HTWTTIFX77EEQ323WTKH7HP4RTNXJ3D63JD67QUDHNFYUU5UFXGMFN46NMVOJW4P263MO7LI223KP35S2FVGF6K4XCVHUGZ3E722AYPLGQVJ3TZ5JHFH3UGE6C62LNL3VEQA327ZK7DKLAK6XK3JNXONJLZUOO7HPPFKLWVM6S3XZNBWX3SJ26QYAWTYJKYQQU3RGXRZMTLQ25O7FSO5PHSKAT6GC2ZUE65SAE4RLDXY57CSGOS36ZEKWM2OJW5SMLXX2TMR73L6ZUKKMMCXJDNXWLR2HZULJFIMTDWF23WC2X6VGELF3Z6HHJZGKYE6RIHRNY34TGDWWE43CDWQ2BRTMYRRM6PYXJYNJWLLMAKEQ5L4Z7E6FFZMRNPGJZADKDEPP4RZINXAY4AGXIHZ4V3V6AY5V3ZPSXXHPVPJNNGJM5JTVPI5C3J4LTGNR6OODWD7OBTOACV7MOGUTDWY5VZEHD2VR2HAVZ4AC7AKVF3TUVETHZ3J5VO65LEWO6APQ5P4W4M2Z3A2Z7AGOJG7LWA2EPPXAG523VPJ55SIHGLFUAGRJN3HQ7UR3WLRNY7HHHZ5DTG3QSPWKYFGTXNO2FPPUYH722AK7JLOUIBFFUBGUB7D5CU3DFUWZ3WKJ7LW27JIP4MQJGYV4HIMLGAUXRKDNHWPGMVQJOPGBFAJSTTAMQYOVL67FPTEMCMGIMR5XKDZNZ2YOFWKMTIFKVALHBJCC7NMFKIJZKLHDIFSVB7URMXF4IGWTM4TMKRCLHZWBD5DLFBX3MZQUYTTQ6GIRPPRUI7HWGBU6WZJEHIAAUUWQ7JIRNT3OG4JZELP4WBLZHG7OFLXAKEAEMCGSJO3CTM6EWJRN3MRY7D4GUO734PHREI7F2ZXIMWFLGSZ7JAP53V6FYIHGHAT2AVKFAOWABAE6F7D3XMFDRD7G4KRAYNVBK7GADX2QAPQ6YC6CCGDWV25ZN243XCTV4G6JRWF6VNV4Q3ZHQLKZMIJD4HONUIRGIV4MYKKVODNCBREOA6OLGAD7X3OW6LM2SJVC324VMOO5S3D44F7MWC46J2PU3PDNAIK6MWKEV3JPNQSIT542KPYBKJXIQULVN7SXTYCZK6V3K5YYSVX5ODMFP3HMCNAPDCFLB7NH4OHEEQTRYOULK7WDJMM5ROREV26DQZAR6QUHREPMUTUSKEQ5NWADIBWFMFCFHQWJXRNDPXH33TTFWDG2CK4OOWUJDZJA3FJGDCEVQCFS5SFZKHTKCI2VCNFLQU7XCXUHS2PJ7EJIXFKOQET42R7ZLIW5EIIXKYHS64EHVDG7VWZPLHDCPDAXGVICZDB7EEUVFDTSRHEEYMCLPNHZGZYUS4HEHOQB5H7IGZTKCDMPQKAMBT4FKK3TB7VC2X5OTLIWP6FCEOUVM4XBGQO2PAXC3725QOZAGWXO2HFW4UF2LESBRFJ3YP3NKLULIOPC2NNDLXMUHKLGBQ3W272NDBNGKX5T26XLR7HSXKG2MBXQU3GW5X26FU6KNDYHEIFXMNFUGUJU43BZB2VQ7D4OIIZ4DIMRHCLUKYJ372KN6EQL3HSUNNV6VYKEUJ532CWOND64XRXJ3SZA3MVYDVOW2D4IDBFCLQTJW7BZMMIR2E3HC6D7GSZNEZFGVAG4PMSFFUU5GN5UD2NHBJ2527ZA7NNCDE5J4WW7IWDADXAM2FUHJCS632DRQ3JKR3IJSZEVFL66VLU5ZELVL2FP6STVQ26SYDU5SHCSD47BYVVJXKIA2QCPEYYFPOUOVUEPDBTSI5SQFQZVR6GIBR2A2HIVOHIXCU5NL
ZTZ2AOXCMZUTLGI7TV3VAPSPAUNUANPHJXSZEUAYJJUH6KUMEEDQT3KSUIBJYQCG5T6SKW35VHKPCIBOLYWZOGSBRJXI3NBUHTWFBOY5PU4W64I7QGGMNXE7QY2RKKYMW65D3VCJXUAB2ZQWNHMVQ6D3QWDJVRRRBBJXKZ7FBQMSTK2VBVJGMV3SIT6MXLJOFVMGODWDYWRE4DUXG2VEMYS7QRHKCPPJEOUUJKAIRUKWQ2RO4TWLUJDRIIX6WQFMYRI6W57AVGTWOM5FVVH42E5LR2JLN7I43R3URM4VIFLTBABN3C4PESOKD7ODBKNITRGEEHIMRVZJCKSOQYBRY7YVFJBYZD5XVHMBS5QTFE5DVJ4WE7OTIC5DWPEJYFSQAN7GNX3REUFVW7HK5SHCTSEU23XPDEKQSJ7TUI7GIOUZ6BUJPBPPCJR3XX6SCW3ADXNMRGOLUQUMJMLZD6QHXHBG36UELTMEPP6ACNO7Q5I2GA4UC5PBLQGECKY7XVGJGTFCUGN333BD7JRRWX2TT7J2IWQXQGP5SSOMVEYKBHF35LW2ER4SJYZZG7WRCEUQC2ANING4VEZSD5HVYFBE3WU76YMC65KCNVUFKP657Z6HD2EPVMMMVVZDDCG57X64OM5MNNQ3VFUP2DYXNLBNLX4MEVOVBSST5HRU36DH6A2Y3FZQSO3ROOGES5H7IPRHYZ6MV2I27LQA6SSNZWWBHKPKO4BLCOU7H2JQNA67ECMNHIE26QPLCVFXGOSLV4PBAF2KUOQT42QTD7EWQQNDM4GVHSGWG7SLJUR4L4PIFCDTI5EO7O26HEBF2OT4TCBUPFIK2YSJAUC6Y5TFOBAD73LLX6N5VUKHE7VMR6X6CTJSJ6KNRXUBXHI4FRWY3EHCKW5U3VHLQVKRKO2JGH27KTIRVPPYRY7UMQ35OBGHGQ3M45ASQFDF2WF3EUL5VYGNXM5I6FVBGPFNIDFADKOIO25IFTGQR25J4MV27VX5MQDA2IKTV6PJIZFIT4RERNPHLWCNX324MRFYQJO65Q7ZCB7NKPV6FYEOCOJXTMW5HE6YFV7PAG2IHAB2XWT7LZ4JXYFHA3MGSSJGAHDIZTLORQHDYYRVMPEVVIGFX2MPJWSKBEYF3XKY4FOOQSO3WBV26RZ6XK2MGKGXIJI77TCAOWZN73554BX272OKYEK2IYZUNW6AG4A4YG3G5GVMGKLQY25JHNFUQQIYX44BIQRJL6PPOVER6BNJJGYHZPMN7ZKKN6UCULXUF7OJN6UVHZC4FPTZWFFCONO42FGT2KNZQ3P56PLY7H37777P3L47P76YQWH5577Z6MQZ3E======END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=never 0000664 0000000 0000000 00000012043 13215577651 0030073 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 377
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 8729.52 bytes, standard deviation: 1998.71 bytes
#
# Additional interesting statistics:
#
# * Ranging from 5148 [once] to 22379 [once] bytes.
# * Median size: 8354
# * 99% of examples had at least 5762 bytes
# * 99% of examples had at most 15022 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4136: STARTPCOE3GF32JOMSDMDL5C2KWCBL5E7NPUKZOMTO4F4TFOX45YTD5TGIJ5DL4Z6P5EFARAJB7767HLT777V45PT777Y6G3VQOL67WEJH56XR4OLP7464OGK277WLC7X4VGF2LXXD6LK6YBVS37KKM76OXGVT64V77Z4MOSN7VZLCW6UV3UI7ZVM757UWDLH7Y6V672TSYXW57KQ753752NFU4U4P23M7VM7GXT5CU2TQ7GOIEDLT7PL5TWTK5NZ73443Y3VOHT5V3PPNXOZPO6NZXDHOCIXSOK4UQHB6T2XPTV7IP3VB6T46PPVAAX5LUGGKF23J7GUKOYXX562XJBGKEMOXF4KGFFPH2OH5GKH265HW43ZXUMEG537JOPVKHHCR2S2P5WVQVPB7NU7XV5DC6HDNZO3OD2E2BFGPPCPA7HFPWTDFGTDLF2OTDOC3AWAVXKMTNKQCKSLPFRWXC5HFZGRLP47V4YS55OIG3Z6NMVGBZ3RNOSTCHNIHX2O4VI452NSJJQI7HIFWOYU6JGCFE523IPFWIN3VOL65ZYJMVWTMWF2NM6BBIXEPL73QEZA5SR2G3T6L4RXLSTUBZ2I4OLLS6ZB4ICIEGESBIVDPBHQC7PZXG2BAMAXQ6NKSDC4EHXAUH5V6YTXLRCTGGOCO5GC7ZD4S2MUSHKATCCD2DEKXQBWCSKZMTVB53OMRVAPYZAZUMDZPCGLWN2J6TOSMPZTR4BDQGZPALWWHBK4EYLFYII6YGX2TEZVYWAPIBIBTMACUAFPST54PPRRQXKLYWYIADYDK24BCJ6XHBVOHPEOKVEM34INPZS27KVEDLILBTQ62Q7I3CFFKAEKEVSHNNGJYYA5TYHQUVJNUG6DXBFQMNQSOCZHIEMB547KJIODUYRSTFNNFR5N4WUS4J776WO7CWNOX237ZSNQUXBDSF2OPZQAQQSBZZLVBHSXQCXWXLPFMCZXAI4TVCVSEHNHYBIDAXLHTHS65G5WVYZJCTTYAN3TA4NGNFIY37H3RQDXRMYU24WFVPXCNBBTRUPJ3EOLGY5LKUJ2F6YAEGCUFJVRO6N5IWHJ45OINL2656CGMEHHQWUIOBTVR5Z2B52AQBLP3RQLORDFQSBMK6XDLXKEO7ZPTWUDTJS7SMABWWKJEJN7YUFFPLFNDPK7PL32FEC4YB72TQ6HAWRDDWI6KYK3THS2VZCWOWHODUMUJVAMLSI2ITPLHWUWGRY3KENBONJP6SIAIJAKQF2JJRFHOVNJCCPTV6MXGYHREPIERVBCHRJW7ZCFYAKNYZHH73OJB32TWQWEEPEKQ27QDZYOS5LJMWQICEFYLJDPBCTZDXJVKCEF42ODG5HLGPV2EWFLYDJWKNBFPRF2BLGJ2A3JLS4CIVFRBOIOXYCJSFVPVQXOPSXEKWXCTVBH2TECMU4ZYAP6CXULPLGHVKYUKILOOZODB3FCFQI2WQWDIYGGVZETDC3SYWLTKNREUMJRE4PYAKSAQQIPMST2OR3E4HL4DUUSNR3DV4YEOUAQN7KRKVDHPOKGZSUWGY3SOB4WAKSRRZX72VPK43SNJH6R3EAEKVXWKRO3EBL4LNJENEGR65B7CJRKZEGOCZQYLZYSYZDBLNLLYAMMV2BUVTI3XMOT7542IS2JFAXDNPASYV7W3Z4OXWLT4HZJX6EUFLGFIY3AVA773WURR5IWUTVYAU63UMKIZ2YOS4I25OOMSSFQUQU3A2QX5D2LDUBEKUITHC4EADIJTOFXADL3WAEZBKJEEPHBBCNBMPMLNTMJRQQRSFBWAMSKGPRH4EGMPEQXRBIAU425F5MQNURUZVGROQMWFPDMDHZMDRGWFRAIOVPQPVZWYZASYC6B3VRERCYBTA4YOEUMZFARFKHFN6RVWIIZNKGZ2OBB5J2OBZWK6VAYROPYQWXMXVGSXWISM6HXPB5VFWAPZ3GCKBTJGID3X2GOV5RWX5AXE2GFH2C2NFJYAK3M
OG4FYAL5W2AUIWTOMAYSYLMURTLHAQ7BSGDIL5I3RJ3B6FJNEPTGSFQ7UBYMUVXE7CTC54QU7QCFZJD5KJ7MMZODI7QAYQ7IO5COWVWOBUOQRWO2NGJUR6MHEJRTT6SJWWT75T3BJ4EJXDRLZP4GUYAXSNU7IBSSE5U2VBIIJ652JDKPSNUHILHUZH2QUSBUR44Q55LYKOVAUZ7WEHFKHIFUNPBLV3TFPOEBUU7QV2PYFGDCVHCUYV3PTFDBZHRLVAHHP4CAQGVBVUHTQ3OT2BOGQJLQQDGKT24KHMVGB7WFCH36PSZ5LZQRC2PGWZ5XIEASYHEYDJXNKPMQEB7ONJ2AC7M4M7RPJB3VBKZOOL2BCNRISORVZF7ZXHYOP3TSG2PHDWX35YX7V67IRD2XL2A6EVTOM32KO7OXM2OA4W74BMYOPPOYTVBCDF6TSQ2SVQWHYUI37FHUHZS2Z4DIOYV72GWT4KTFQYZILPWEIUEJOOZHVYPTILO5HYIDAOYJCQRYSRYHZRNJZFG4ANTF2JTNJ6IXM4RVJXSAD5KV723CLI3IFMBA4APKQCH6ZFPOLP3HLF2ABNJNVK6Q3H6NAFPNQONUT3F2BMYDVQHIYKC3OZ24DBCNUIJSHM75KGSFHDOHMYXUMBGFCCMC22WPOLTK2WPPQRFMCFEOZG5T4CO3NJ7ZDMCG6EP7CYHDPLVPVTAG4KVJYESCXXHPNVGPOUTC3MMG5KQP6DCCOJDF5ZJ4DBVW54XGA4YULLWGHCYFVRVZGMME7NWBOBFWQPOE72U4PDCXJ7LWG7T7UO2QX3QLFDKSQUSCCH3WUCZWDT7FS2MU5AZ2CBPWXDCASDQ7JW472ZK3C73INBT54LU3WYRYT3UG7CXSNEJITLEOL3Q3OM4LDTKFR7JAY7V7ZNUWACF3UWREPMWZMD23Q5EPJBNXUWFAHN2463PQHVIDP3L4F7S5ONTLOI2JULOKCVPLQBII6GSBEP5MYYOU4DSWGR2NN5Q5F2PEQPFEFNGKX66MGGIWDJQG6EY475D4J3Q5XSAT7L6OEIFHHD43P2LY5K5GC7CWA2ZPJOPDOT5WBEFT4K2WWW7OJ7OWLOIPC7LJ5SZUKJJIB5BZ5WU6BOCRXGLRVZJCRZTBM7POXMPAGPEMGOPR4NQE3LP62C7SK72EB6A2DKAT5CQ3FTHWVUHAMOE34TU2JOIRMDG27RSOVU4LH4FKAJ2PRLCDE54GHIMA6QDCPMCMCZFI7APIP6GGU7HNERIMKJ2B3EXL6EI54BDLWNIP6Z5THDWH7Y6LYRLIL7B76ME5F54637TCZ2LYCJTDPDOL2XXX257IM4IU6W2USDLQEU3IUQCYMU26UCFAKPOTPUQ3DITRTBMJDNAML2XMERBKDSGGJYHVGC3NPYUMQHBJQ6QQK3LXNUHTEFHRMHWLBXHVNMCUZGYGXMWPDLZ6GMJWBMZMDGXYDODSZMABNS5F3CIWCJFASPIFVZEOFQ232I7ALU4U73KVSN2J3QCRZH4VDGDF3HR3E55I7T7UQXIQEA5SULVKUOXCUQPULDFRMFJXM75346DBV5LDPH2OAQIODQ5IZAS2TYXHIJA6FYFBO763Z7I7QMRCWG53YWBEFMHN2RIA7KXCKTAF5HLBBZLODNWANENLOSUTCGJD5N34PUDVT2PWZORHADQG5SOMWJQUQJ5V5735CCCGRJILHUMY6SMIZRFY4XE7HPLFHMROJYDA3ZQG4UM5AVDEMP6AC5P5LGSBVY6DRQGW6WKQRVPUJSIZZOB37J5DWBVM2D7TFBZ25ZU5243OSX3NRHQH572MBN56GOUOOYI642S6QI6LDVT5VJQBSYYNI4OST7E55J7BMUXJRQ6RIJHKAYQGNBO3JSHB7ZUSLGYCPT2DVMMQVWMNBURT5JRHU7MCZHBLLC62ZO6SGCKMW6Q7GN5I4LB4G5Z4T6QTOZRI5G2M5TIJUGGU2Z2CREGI6BSBZP3NQBA3PFGE372A
F72LBPOMD7N3QM7PIS4WTK6DGO6DQHUFLW5EAMHEHKCKC66MMVL2WGGWTCRGRXZWGZ3BOYYQJDAIIMO5KXQRXEBDLKXXV3UIMFHXBJCPD7EXVL3P673V6P367326PH777D44PD457GP76AUV42J72===END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000010177 13215577651 0032333 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 381
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 859.48 bytes, standard deviation: 578.94 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [28 times] to 3870 [once] bytes.
# * Median size: 750
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2469 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3200: STARTPCOE3GB3SJS3SEKDW5JNC5QZMTZD6W2RZBJRXMW32PCOYXOEAHEVPTUN67XIOTBCSFAJE7777LZ5777573Z6XL5PP7G5VPV7OLCNZV2S233VOG72XFDN3VZ67P35PPR3X27XVX7P2W277D54D7KURX27WPX7O6ZO7J2O375ZR36WN3BR7K7OJRVANXXN7M7JIZHPWRVU3AR3LWE7V22X6DINZQQDL5XIKFRY6MOTXZMWNUITYU3SH37HIOZZOPL3FN53NUGAHGK2ZUN2QZPYUJDJVLIAW7PI2NJBGC24DKTDCDNXWWUNN5TASFMHDTZM4CDJT46UIQJVZKWWKTNV2ZZM3UGQLHWXXOSD5XT7AGM7HCBT53EJ3QPXWOK7JVW4G6L4M3T5I62IV3WFJD37XPEEJZRD4EBHVSTAQG7DOXOW32J4FJK5GVINIWNFMAWSZRCEILPE3JMCOOOCKKM6KVZOJVM6DOTBRINQHT323JZRJTLBLISFALHT4RQ7737XVNE7RI3LWBGEBPG4STHKQ6V3NMEHENCETRCWFGCGCPHDSLQPCSW4HWTTRWZ5ZYB3ZCVIL36EU4B6IWOG4HODV6KZXJ5ZPW7V6JNPKTQYJE7SXCMOW3HBJAHHWAFWTUV3WAYEPWOXCLV55HNCDUWFWDIITRYVDQCA4QGMWP5AK27FLAPEEJGDF6OIXEVLZ3WU7TAD7CXIKFCUQ4FDUSDANJMSHLMAU7MSN6DCJVUWZGEXTJDMQAUZG5ITSIMGUTVX4K6WF2FNTMST3CUTS7SNUORTE4ROZOQJ22BQUMJ3BIC2YL43GQRDWKQXU7R5D3KQQZIYXNTOHVIKIEKF36TUKIWTKGAZC3QSLHMDBV2THFR5M7ADKW67BMRDEBFZKJEAVSVGOHMVTID63JTUAW6NLKFD3X6IXJKR7MBSPGCVF6MZYTIY2VNM7VFDSSB7GUUMLWKADHN65YZLES3A33DUL4TCSSQQANE3F4O73COLTKSEW6V424YE6BDIMUXXCR3PM2URMMCQGZOSEZKFZQZPVOXKIRGZW7BN37ZGSWJIJNS7HKFPFIZCJ3A7MEV4EN2PBUQEJKST3PETLYBAKQRO3GQXSKQS7INPWEZ6HKPB5ALJK4KWZV4JHXCGX4BNLFN7IU4NHGVIVIFESJHSBS73NZGUTPIWYQMH7TI3TLIZVJVB5OTV3AMUQCYUBVVLRT3WFPOVUTN4RYQ3OW667O2XVQZ5FKRI7J7QTBEBVDAZIIMSCPWQD7NYD2GSL5GZ53WCW7P4NEILRBITAMMMC3GCM2ABOHFNP6WMBRFQNU457JNJOREYBM2Z2MM3TQZEU5CKV3FJP26NWJSS23FB6WVMWMPWBT6E3QJ3ECKMCL23OQACRMTFAAKEAHU2QSZYBQRMOAQS4GQHASOFFGKPPSNXEX7SOOM2CNM3WGGECAIXBARCXGUCZXRJ5SHMWEDAYAKXWZ6PRCSSDOXNV34MDWZSTHUH7Q75KWMIIJK4RLNHLM2KYKA6HZIHY7CMJ432W4WYIC7WXFM542RFWKO3SOJKVIULATAMXK336W2CBGAVRGVCQBATQKVJP244XX3VOMFJLSBV7T7AMGEG3TC2BQEZXRVGCLBOVHK7OIGUSPSV3ML2VIGHZMVEG32YJBRIRGKYKDKEMX6BW5RHRVDD5KYDPSXN4C4CYS7PXT5BWET5GB3NZ2PI2Z5D4TCU3B2LDYYK32XJTXMZSEV4KV63SUV2IEJ3VZYKCVB5KZTHDEMS26O5TEBMLPEDQEV5C6L66VYMJ4DBSD64JDYOSU4OXXI5YIGSNYOPW3M7GCTJG7DXK22JX5LGS2FB5MCCSDG4EEQXZWGHX2SLBZS43DPVWBONCQ4PE6LMZAC6OQC55BABMVBMZRO5JGUBDRVVOY3IKPBCSIALMIPPMYHJS3TU5AKZPIHCLWE4FLJLFEZ3SLHGVWY2LYTAGE6IVO2JYDKI7S4G
HN5K3FKGM5OA7NFBHPD3TTWLLTDYMBEKE3JRBIG6APHJK7DF7EFJY3KXGLTSMJKIERGUJP3ZYCA2P6WA73U7P2IOENM55BGGBRV5TDJRB2G5BIC55XWQAFSXDAWLUUG3JFCABM3XFQLJY7AH34WICZSOIF6X3EWBBZWR2DEMFUR56WVXM5MORFMRAUIZHZFLBK35UTI4NMUCLVPMWRAAF7WRVZJK6SGXX7M65ODLEQ7FIIGKFHILYHSL3WAEDKEEKUXEQ7B5PFBBTVADI4PBBPOQCGWSGOBTPNRZRLWGXKE5YMO5D2E5PYVG423KCH5OCNOXWYVYAGRQPONZOV3TD3AOPLKUEGUKVXTKPWDLPUDBRN2VOGXXTPXKG6PLBLYUYKU7K5SB5EEDW6UV2LFSR7BZV2H4DMYXUZHTYSWQXXUXYPTFS5TJOQY37CUZ6H65IXOGE4ZG7BKTAFMF4GOOGAJWUNOSJN6XMDPXVWWY6F3HVDOA5FUBQ6XVIOHXA26JKBXCL6XNCSB74AWKA2NUFJ55SRHY5WCMDRJUIM27Z5GNZGEQSXO7EINUQYHQXVE6JUK2VR6IDUYYP6HEZW2VH3VOOFEUDFCFHISO7OGT7KMUTRVBSTGJSHQ7RZOPWHVOZO56D5HBIE523SPYVGSLS2PKUCL5P2YLOJYMFABBXEG6W4PE4XJPQR5LMC3JK25WU6PBOKUV5YFCPGK6Z6LJCREWJAX3IKDDCRNWMNJIY3XA4EEGEI6ITFV6A3HPOTTCTSF2Z6MAYGMNSDZHASBMCPWMDHSQC5IOLMYE7OALUOOKL5NUTU3GTT7MJV7HUSZ5P3F5NBR4PJY3MUIU2INWC2G44IWBNZLSUGK2B6ZZPO4IJJRWVNPD7HI53TNB3R4T3P2MZQP3IIP3LJD36KKNSZ3H6TAC2UMKZP7U2RFY33OFTRRZJSEFXM5HLS6QTAUTHZLFQ3DT2TMR5HFBFYGQXYKVRAIG73PBAOBOSTIIQ27PVXPYODQEKUODMTONF5NFSDO4K662ULHAC4Z245MOYZAKUM3BEUDEMZGOZFDZN7J5QS6GHWJBTXSZ3FSL2XA7AO45W347GOJPIOIYQKING5XQTL2Q626XEX3DTYBJG4OWOH2YYT7T5XQBKH4GJUAHLB4PS7F3P23NTLEPJRIWOUXGP57X77P26P57O7367W25PHH77E7R3YWUYI======END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=always-interesting=usually 0000664 0000000 0000000 00000003152 13215577651 0030453 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 379
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 34.57 bytes, standard deviation: 151.52 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [898 times] to 2029 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 731 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 632: STARTPCOMKV53OLCCADH4CWH6WKYQBACPTFKMXK4ER3JOTH6HWTGOG3DIQVZ4O6M5TAXBWAIKXVPDX3DPTY3TTLDZPYKV32DBIIG4WZCHHWBHRHOPCAUMJK3JB3T3VCLYXFGY552PVZEEUT6V5YTMAPOK64FC4CBO25FYBYYPSH5B4TFADLXN3C7QBVPDNQB6UJ57Y3A7VKC6CZJKKK7DXQQ522LJ7M4DL2CYLHTWAZITYO73TC7QDR7IDLIHLXVI4IQDWKIFYKJIDQDUYMY5WDFVUZAZR5Q5NAL2N7JFR4EL5AJXCZH4IQAJKMSKLFLJUVIO5OSQNW4V3J5BJHELXIBX7CAJIUM63ASRSHRAZJO4AEV73OQSNKXSVMKH5GOWNFIHZLPMVM2DBOZ35KBJ6OXQBMF5CQAXTN5IEXDECVS2FRFRIIIK6YMJBVVWTGBZ6ACKGZEOJAQAPQ32PXRZ44RJIOBJJDNYZO4NNJBMZUORSTVNNJQE6CE4LOZJV4GDRW4UZG2OAB6BIQK2XNLLVDJCJSGTVTSYTRF7O6UEA5XIFZOMHK6VGVJLWS5A7W2LJUJOPQINDHA4JZBEWYME25ETS7V3BXRTYTPPZM7Q2RPO7MD3LCIWPA======END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=has_duplicates-interesting=minsum 0000664 0000000 0000000 00000012324 13215577651 0031756 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "has_duplicates" and the interestingness condition "minsum".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 415
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7914.53 bytes, standard deviation: 3847.96 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2836 [once] to 29591 [once] bytes.
# * Median size: 6846
# * 99% of examples had at least 3461 bytes
# * 99% of examples had at most 21917 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4304: STARTPCOELGB5SITLSDKEV7JDD5QY7QAQQ4VPUJME626ILZHQVXK5TEHV7N44T2PK4KQSAQJJSCP647L5777265PX77724PVR727FT2736XV57X56X3FTPZ7WKXD6L727W267HN7H7VM57L52TUGPS4TX676N7PROUP7BS7XCQ7KQOX2OUHHFA5Y33UUJU7P3LNF6RDZEW4M7L3FLT5HXTH5YKXU2PYYJOIWRQLDU7YWU6GB2HH3ZDTPVY67IYF4VHH7P447532MNZZ432XP25HIHVMOTQHD5LTY4JN74SO5UKCYMDOS3X7VSIWTHRUM46257UFQBKRXK2FKFQPTSBR627DI4ZBQN2W36JPPXXKZV7RSWZDVWPXZFY7S6PEK5WZVP2SMZ3BGF7FTLB6WK44XRY5MO4GKMZRMDPO2USDOHWDTHXZNXLQEP25EB33V7IOPZLPHZ3RN5PW4DTPMIAM47OOBFNMWX6RQ5R232WSQ7MA4JAN443PEO35C4P6SYFOUUAUHENZ3AN56XNFZTCF33RLTXQR3UX35AANH6HQJ7MOCOHDVDU43S64DQRXTZZYNMHTT6N2DUJYLWO7HF4TDTWZNYTQOEBRD7HIXHRVL4IZLKNFZN77SB2ILKCPNOJHYRKTOIPTG45OTSIE4ITSOPYZ3OD56HTLOIOI6NOB34E6OF7BPIZLOPFO4K2EU2TRUT5O7DW3JVDJWU5ZBQSJ2G4H47MPTDVPNL6NZBW2HF5JUATH3PU735RDZSLTRWZU2G7M4JUWSEUBGBDGUDXIVTUL3OXRJS4TJ4A54K4APU2LO2F4E5SEG4VZRGQLAX4OTQWG5HCW7XCG76CUAUH6FS2HSPW5ZRPTU5YICPI25RMABLK6N7SGZZ7VZOLLWQE5TB3WG7VHS7KWPU4F5DTSIJVXXUSBQOLTFZTMI6NDZZYYBPWXSMD5PDUK6BBD2OELWLXIDOBXUOMTLQNSOG2ZGKQSFZNJ3AFJ4CUM5WYX3YYTNWXY7BXBOP5HMGNSOEORW4MXVEQR2A2RTXRCZY5YXLDDXTOL6VHQG46QTJCD5SACM5Y5FQ54XCAWMQ3VWD4ATMILU2MZBMMK5LVZPN5LITTQZAHRR7GNK3YB2U2RPXPGPWJVJTHG56RVXJODIU6AHT7UUCDZIWVZ6PFY75APGFMLS6OX5HP46EQGMPGDCSPJO7WN657BXVLUYMRNWBE53BQSKRY6Z3VC4ZPWZWTP64H7CQSFWEXQ65UCVOSHQTPNGD5JY7HO74I5W5BRZWZN6PJOX7PJWPVI5FSTNGJM3QOJFGZYPVBPJ3HTFK2IKG3NIKW7YPARTDLYKUJQCMNPIUE2RIOHWENBG2D36DRMV3ZY62IJMVCLJPNY3XNK3R5DADXSAEHBHG7IG4XHTIAQM2QDBK2JEDQXZJSIIENEPVLNDQXOGZ3ECVO56VHF7KQP3HSRNBASMDOAQW3OJOPXFESOSQFZQVGMO2R6WBNYC2EHP7AGEPHI55PZZUZ5GMVDXRVVDUST4QQP4AZ56QJZDSNH3QCVQXJSOE3UPG6FCGZGXACANZ32QRLSDXQ2JUJD4QC6USFPQLUPU4MZQRSIQNS7GWORI5SQ3YU7DGH4DPKD3UDOZIUMJZB2COOA4KEKO3KMCIYXWY4YWAFLH67X6FBLNVTDRGH6MQHAEFEYSA3GR6VUNBVNVHBCZRTFMWAA6QSMZGPIRU43AP4O766BTFAAIHFYKOJ5FZWEZALF3ERT7SVNNAPDMLCSZ6P5T2XHAP2GO5PMBRRKPZXU6JAUZLJTHOET6GQ3TAOGS2JXT6M2GRX7UXPPGGA6OG7DMUZ3IFLDJIDJPV5Q45FNPH42G5I4F6SG6EG6QRKPPQTEIYWMZI7HY57ZZLQPTW4J2G57NMWZEDS6VJ4CZRMQGFRTCVS7K4GEOJ3IOCRBJNRMHJXVR3ZFSJSZOQ7M5INQRDQGBB3HLQ3FSEDVJ6QOENJINM
QCJV6QFRVPGRKQOKGALDQ5YOBBMMCA2GCPCFEPM3SFQKC5NBFUPURVB5LIOLXDL7HSGIF57JKRSJOSKSCEQNIO6CGYI65XG3JHCWGLMZBLRSKF5JY5ZRJGVNC2QTSEMRHCYJO72XL6QFFJCOTQ2DBV3QR7ZSHNFBJHIG7DUOMMQGLFYID7TCBDNHINLQKBFFBY3EUC6VXFADME45TB4PTZ2MD6KKW5PAILKO6HBU2WSXXAIHHPWPV4BTA4QZVG2M3AFAHT4E33PG2I53GGOPZ3JBUIL6WVNLWBM22EZY7QGYTPXV2HTYSDZJXQ3GSD4D5ISSIOBNXBXRDSQFA5BN5ZSBWV6ALKKZIBXWMIYQW377B3TIYDSADFLU55M4A7JZI5FAWM6EOFROIAIOLQHOTANB4MGL73P2ZCX6B5JB3WLRH6WIDYUHZXRTYGI4SNJRKPROEQLHDNMRQK4R276VSSCTAP6IZDGBKLRAES5UECAOHGXRRU2QOTXH7OAJUNMML36K6EXPGTFI3UZ3JFQ6R7ZT3KRTBF7BN6CRM7ZBJ3AEWNVKOQAJ6Y24DAPMMFXHA4B4HJPZY23UUHES67AJT4TKPNYWUMRX7A5X2NN6QVEYITNTXF7WXYD4B4MUTWZZRM245HQJRKU6RNHTXPIOUGQFFH2RZFSBRLIVV4GV2IQDFLBWRDMEORBRUBQIVZL6U7MQM4YDNKPTJ3ZIUD4HABZOI5BJFB7RJGUOYQMC6FFWTIKQMX4N7GHT3MZMWYTMTQWCFWKRB7VFY3J3JDK2KGMFNIVHRYCVAQ2YSLPQB7A6RWH2VCQRBDKZLBDWKEWIGBZS5P5FNLWO7OKWZYF7MGQMMVTRBGZ7CCWNWIJ4Z3USUZ4XQNLRLGRT7XUYPYNFQRUCM22ADYHE2NLZRW2AJQCER32XZGADXIRGDWML4SFTOQKYKJLUEAUS2JHE4CWF43R6Z576BT7IBTY6QCXHACZ4FS3JUYT4J4N5TAVBQWNLULEWBHF2GRA3WMQUNAYVLSXQO27KEF3RLCEDNLNTJVJQQO4T2LLCENI7ONV5GOY5UBYNN6GPFFAKW6FZZNVCTU5NPBNG54IVPZQH7WGX5PUYL4HKYHA7JBXIZV2BG4IN45STLBRWKBERPZEJGAJLHLZDCAMLY2SYC53KEXDRK4XDWGMYBX3MLWB4W4HTYJZU7CPTTMMNJDQEYZ6UBERQOEY3Q6YMOTD2XPV6DTMNYNJM6SDXSRQKWG7P3DLGEJVOX2W52Z3JA4VPGN2JT326J6UYHTBRDPDW2OJ7JW7C5S2FYEFV3NDRQF3XUPLMPUZGTNSIFCGMSRKOSIYRVV7ZBQSEUMDA2DEYNB63DVRN3QH52FPKOL2VGSOMHO2MYDXY4H6GRGHT37VYLKBTQ3T7GHEM6YJO5U4T6W3D42HWDX24OVURA3IHRLOYNZ33QYGQJ3JRB3F4PNML3UGFJSFR2HA33GCLWUTDYMO3J5LQ7OOKJ56576K43DSECFCV4PTSUTL5NXFIKR5EGVODTNGPYYVSXLJYS6G4PQTLD2Z6ZKMGAZ3W653P7QXANERRJ6KQGGIMIITZVR32IYDWHD5RMEZ6FY37IFRTEEBGQOSAZV3WRL5BFZETOM7FRXCO5XVMC2LQBE2NXJLD3N5KMXVISFVOTDTFLIESA4EZRHKEXVPV6WD6P7II5KNB7F4CGYKSKUAH6XQMZTO45XU2J6R2CGXPREH7APMEVD6T6BZR3SM4QD7KQFPASKLFD5BZTABNL3CQDB3SRE7E5S7IAWI3LTRG6JWXPT6B4XFJVEYXC7W4RJS5EEWKHAQEKX4DUZV24GAY3CK62I5YEIVENIWXK4BYTXTR7HJ5PEKFGGFJSTHEQKZF4V7SOIAZRIZPGCO23NP3QKTBSTA6ZNQPULBWVPZQ3FIEDHIJYMQR6OCSPBWVDYRY7C3VDFEBHRYVQDZSBOVCJFXFWN4XPSZBNUMMNSDOAUSU6TAYR2KQ
J4NUPUIYVP353YXIQF56CONQZ72THK2EL4BS7XCPTWVO62PA4L7RXMOFUQ3U3CAGFYRDPAAGV7XNHDXN7XP5QHQIUJZPNDHH5XRLJMZYBSPUKDUTTOEPOPKRL3XF2BFR4LVAYC64GP3NQWCMU5MZVRLNTENTV6U2DF75UM4OBK3LQECFRR3L7Y7ASOAD7LSFSMSK525M55M3U3S7FRBTAE4KZSH3IQF4EA4RTUS6ZPWFUPIPAS3H6MYC5TH2ZWZ2LNYA36L33R7BE7TALHDGU3J347H56X57737OXL37735MHSP3577QAUW3R5DQ====END
hypothesis-python-3.44.1/benchmark-data/intlists-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000010460 13215577651 0032524 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for intlists [lists(elements=integers())], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 382
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 959.35 bytes, standard deviation: 668.30 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [17 times] to 5024 [once] bytes.
# * Median size: 817
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2788 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3376: STARTPCOELGF3SIS3SDKEP6SWH3BWJCIOAY37IUQU6Y6IDZHRX67PMKPCI3Z3G65KX6AACKEQIUD7775PHX777X7PH25PV577LGG7L6V6535PTI537P5ST6PTV2XX27WPD7IVPWOPPNNO6N2I44P6FRT6SXHHV5O3U5XSNSXPGG7ZWZLB5RZ2A7WDRK5SSXX26VWOTMGG3LZ2GN2OMOKZJ7L5TP3UJBNWO6O53IOWPT4445TY3TPJPLJYU3S23K2HKU3K2PXE3FU43NLD726HHQVWHSE3WP7FZVM272OOPFGROVENLIWWOVCORZG5MTWJZT73S3TXD2ZBE4T64Y3QAZPM4CYCA7LN5TTPKS66UU3GB54RR674V6ZWKREUPRUPJSH3WJ3E4K5L3QZ3XYAZCVHXIRMXC76B6UMF5HTD5XMRGY55PNNEFFDNYXTDXQMEIELWPZFGBZOOAQSMCO22OCE6YVC4LPTXIXK2JMMHBXM2DXMBUQONZ4GW3JTQH7D6VMHNG4ZFXJSFGGPRI5KCRPNVQUNHMKYUUYX2PJNXOTVBVBNXUAVQAIYXQNZWOY2NRM364BMA2T5X52UNQKZIXOBBXXL6MDVNMZLT5PY7CTH3267MZKQJHRRTZKEB6PRN7PKAEDLUNKBBJ5VQFW4I4J2RCRIVTOAMUXME2OZVV5B366R4DIKV3ZVRYFMSBRVIR57H6FIK4LFQB3Y4FMDLDG4XGARAG2OJJEIBM5U4HUBWCXWZID3TIELYKNU24BMIFEPMRZHDIFFNFG5REZZGKGXYUUI5QLCZNTLSZ3M5UUHLMXWIEHO2VC2D7EGBFNSWXG4EZVCOGVDZ3OF6FGFSZWXYJS2JTFDDUXJBHC7C3VRLVS636BBYBGVL24V4JNELMBEAXRFG33OJQUUE7CRCKEC3E4C6RIPTC25QC6Y2UT5WB2KQJPSO5M4NJZCLRISQXGNTR3A33WQLFRAM5JS4ZTKGCAIHMTVEDGEDFVBQITO6ZQ7EI4HBDRMLRHNG6ODGPGA3LXOCOWUFKP5JGABE5CCNATBNH6FCZJ4ARPWO44D6ILGIUIOZ3LFSZWCZMTVJIF4O6JPPATPW45GW2HPELR3WJYPTKJARQRPGLEDAYLHPSQJRAS22AYYVSA557O57TKFBQRPTWMUNZO7FIPBHIA5ZTAV6YZTL7LR3X5JZY53NPCHFMUB6UVCWSJMDB55VALRLOUTZX2R3QONGLBUK5HHEHC2BRCVSDS5SOGCUZG766OJFTGAQGMZDOJXSMOC24AY4FU6PUU4453BWT7RYT4K37WYTHQYWC2QNKQ3AK3LWWT3RAOMNDBHSUZYSUVZIXJXBVBX7XFURIOGHD7WAVCW3XYOSLTEPTOSKPHJI3DJ2ZCZF3EB6NJUEPI7B3HQTAGI2D4MJZXRXZEU5GFIMMEHTIP4MNA3XOTUIIRPD2XKULILWAUEHJCLHIJ4U2ECDPNCIOUCO3SSB4237FEHCM4WWZV5KEBCQC3PU6X6KVZQVLFEADMLCG54FMNUVTZSL2ZLLLQITS6PH3LHUTZPVB3LCU5DAUMNRINCXT4XDJXN3WSTHHBGGCSGZNYTJ5FVESIASWLUU4U5BZGJCW57RXGUKXOZYQW5N74S42GMJAX2TKHOPWU2M5DJYMMKRDWM45E5AIIAEU32ISOKCQGQ5KTPY62Z5FHF3APLPFDTYXQOJ73YSJKS2T26JLW2SGZTVXVZOXKPC2YVA24YKZ33VPFBPCWUL72BHJIQ55C5336AWTMCLWMXPQPOIRH3KRF6ZBHATGY5JWRQNRDL62QH7FM4ZKGQTS5JKOWI5XA4S5J5IZAHE444FEAXXVQ34DCBCULHCD6PSFH3SOKHHYSKKVO4S5MXSKCFEGG4F5GXPDTUWE27VDSYNALAO7I6H2O5TTLB2MBIR4CWDDYHPUJPFX6CA7X27Z5U3V75UUKS357UACUUS2NQLQUHK25FYURK4LEYM
OXVAQLPENXI4PTLZGOIT45VWR4ZBMHVKOMSWTKG65ZOUI44YITI2EV3K4PY3UVIFGNGDJJB4PXINKEQROJP7VK4RGF26S32XOOZ3BH3HSOEYALFHQKOZBIMBYBDHKG5X25PE4IIUQXHGTTSPCVYV2QXYJC36KSSPBNG36F3PUVKZUMS7UTTSINDX3DZXJQ2OY3U2J3NFV44TSD7TC272KKGSNDCE2ZKEPPX7C46NPTLRZM5AAP23JCI2KSBLUP2MOI45FKUVN6FDDXNM4K3QLKR7NWOD3JZX5IEA2J42TSFZXR5L3FNSODHSJYAVSUUBVU6G5SGXDX6TBLWUJ4VVAEPGMPM6WSBHIMKEEUQZGAWCDYQ4AMLVHBSHBW3UYCO55G5GCRDARNJKMYILQOGHAXDSVMWQ6BBIBHJCOAADHS2BZQTBWED2OO3SQIPKGA25MFTWJMJ6THA6UZXGDPCFBDXD6GTTZEUNO4SGMGCSCBSR4MTRFOEKHV7JTZIUHYAAVJDKGNC7JL74ZCTHGTI2DOXJRCLHJQ5NQXOK3H6646S4C6S5NNEFIB7NGVKGEOSBAVEW4XHXZCNTFXYVVMHY6BSPYIWQTFMSHGA2TORHARNSAGGLAXL6FYYCSB35MOLCAYUKQLHYAGO2OWMEHXEKVQSEYLTGFMSDMCH75Y7TKFQ6IEZRR5524VXAEJHLMQJ3XWPR4TV2YZZ4KQ4HS3HZDICTUTG6BJABXLP3OQ3QGV3QFRWDO2442QNDEGFTKFEVTYCSGVDSYJ7K6TZ375TCLY7YTF4J3VDUUY7GQICQNDGZSUOLNUSF5CV2C5RWF4JAQCMXA6Z43ZKWUB3L7PHUIRPNAXZP36JICE3XE6PFS5PCCGWTOTTXXDPHTY2CQSLJ2VHVEA6GYPCEH54W4WTOF5RXDN2M6E2VMKPASE46IMMIKP5W2DH3VE6QCEPHCPOCEL5NQQ3SF2IZ6KM65LVDNX2AFT36HUSDR4HIYW6NENEDFSWFXCUYQI34MDYBOEIKWQNUGBRYZ5XOQ6L5DXF626S764UX4J62K5XETZEH5IIAY5X5BHNHJ4XVOV2Z2QSY64WWTCG4OG2L4BXPOT7ON6DLJEHUKPRE6K2P76MGKQKIZIPENWVMKIHHUQOHBPV3PLWPFBQ7IAM66KYPO4HVLFNI5XTZXXGKBPLHFP5Y34DQCFANNWRT3MLVSRQOWROBYYVDDOY5FPGYDVINWZ4JVFMYJHSYPK2NMPVBW7FSUCZCFVUD7BCL6WKJC7PHIET5V27ZFDKPUPBLPHSYT6IWVMU6F6K23RA5IYVVHOLLJN6UGQUEQ6CEFBIQZX4IANQWQN25E7OXP57X77P26P57O7367V2UBDT77YH5ZB666E======END
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=always 0000664 0000000 0000000 00000002112 13215577651 0027354 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 372
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 32.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 32 [1000 times] to 32 [1000 times] bytes.
# * Median size: 32
# * 99% of examples had at least 32 bytes
# * 99% of examples had at most 32 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 104: STARTPCOO3V5BBUACADAAYFKZU2QUCUSKYQTQKSQOWIHMBZNWAXW4CC3TLZXS2AVM24QSAAAAAAAA6BJU7IXBH3PNJLPEOMAQKEF23Y======END
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000006702 13215577651 0030404 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 374
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 5697.52 bytes, standard deviation: 241.36 bytes
#
# Additional interesting statistics:
#
# * Ranging from 4768 [once] to 6640 [once] bytes.
# * Median size: 5712
# * 99% of examples had at least 5136 bytes
# * 99% of examples had at most 6256 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 2520: STARTPCOGLGB5SLWDMDEEV6ZLL4IL7AX6UXOF4XGBWOG6ZQXN7XJDPQ6UFT3MUJJHCNBA3BUDIAH67P4735Z7L7PZ7PZ772MKXLS7D7Z5ZY7RHTNH4PHHXKPOXDHVCLVSHHWHWUZON5PMWXO57LDRPV7ZZV6KDDSVV2OX7MXNP6WZV2KRD35M5PLSGLNQN6FXOLHUWB3IMJ33Y6ZV2PWULBU3XJDP4HEYY57W3LQSCPVPCL7KZ6FVU633ZVZYGUEPADNHYABB54TM6EVSPUSJ4MN7MXPBZ5WHS5W6YODYOC3RUI4OHYBMS6O5LYVE3DKO5K64U4UC37S3COJZ3X5PL3RFPEAPARDOAA3C5TBBS6PKK7HHXGXH6EPARL3RLFNQLLDGPPBZB2EDCXLEBCJIJSYCLPX3H255BM2TNPP3SM3WPIPBSHP6UXEJUZLRMYT45DQ35OK66YLGVQICSMEUK7QQ6OYW4MNXX5NQA3P7BAMTCZ4HL7P2QI6HYABRXSBJ6YHFTTA464NUWRR4Y5GM7FPFSFTWQTMHVTZPROQPESB3DKL44H2ZGMWDPRPTO3T4F2RLVUE4T4ABYRAAGC6KGW4G2DCEIPBP6VKMEU4CS5W2YWVVK246IXNLFETPSFD7URVRGBXIBJDHF2UNNQ7MVTGB2MCUMSHZWV5LYVFPQI4UO3DPVCJN6ZNWFCYFFFDETXQKZGJP7RKPMWGWOPIMNHEUS6HIZIR5KIROCGXWNXGD3M7JEG3T3Y7IYEZ6UDE2PKM4OGRR7N6WVVVN6S7NZVTSYS7LFSF5TXKZ4724FMX6WF4WEAQ7RBI3ARJ4MFDWK3SJ7WD3PLLGUZIC7FCTEO4S36ZFEPXFTT2ZUSCGIRBA3R5CVADLEMYUEVOM3IVD244THIRD5VHERFREDPXMHMRV4OCAUYUJVZUDK5N6ZRHOVPGZWM5KB3RCJONWSPZ5GMSRXEGKUIA4POAT6OOLBI5VV3DUKKIMOE7CAKVHYT5VNIS46IU4K5RD5LP66LGWTPWJLLYDA5WUXZJDZ2XYJATJAHNQ25M2CCQSLYUQPQLK35YZOPQEM7DGSPAX4ZLK3BSVAF4WQ6SOGRK5SSZCRQH43XVFGZKKRFIVD6HHLK7ALDBT4PCUITHHNVY5FEMLTVQSCPM6ZWMJAU7MU27HT2WJ3WJBUCCNDKZ4WGZOVIA2K7PJKNQY7CRLPUWPVQTM63QEB2X5I6JRQTNNPXV3H3PMLB3NDSYO4BKG4GQ3EJTCYQXMIXM22YZHPBHNXLR2EPESN3WNIKGLIR2XCIHMXVV3HCEAPQ2VKPBMTJJHK2U4OWBWCU45YDHN4AFUT45LHLDGLFX6GWWXVV5QOVLFAHOYF5J4YNHHUWQPVRJZZGJ3FCPOVZWO7ZGZ3RTR5PJDGT5P75FAOXVSXO7PXIG6OLOFTJRKXTW37KRVKEOYRY4CT76WN4RYFSN3ZWYRSJLKHJIPVESLEQZ6PI3ULXEDUXSMQA46VSZ4FBC2LSJY2A5VDRMQPJNOGPKWYWZ5RUMCT3VOGDBHAZFP5CKWRWRJXTROQXFGTGXJYWAGQFQ5VCFV2MRKRWWU3ZLXYBRBU3JN5HX5DBOQVGJCHRDWVO6PE2WLG65TP2PJV3TTTR52E32W4L7XHATQUVJI5DDXGS3NOL47G57XJYBPSNO427FHRZYFF4FEPN2JKFW2I7HALCT6OJQA3KYCZLLOXP3DA4DUU777V4V3ZYSCQ443FVWANU2NT6YQTUCW5B5OZZ5E6Z3CXBKWUVCXLGUXY5Z5TGMSSKZZ7TGOEHZ5EQODXLMPKJSE7K5NEOZUMUNQHZPZLJNEQ2EFDB6FZZVDDO4INHKIW2I56WHWS53EX4WTJCGLWA7M3GNAV222P76TXKOW3U4QGOPWPM22U37HAZQNAIVVLKERPCVGPNTKUJ5GIU4YMJKJHE7MBCJN37HUXOLWK5ZPMSPW4OKKPLV5KFWN2SW4XNHJ
E27K6VRPOTJZF5KIJKXBZN2ZJHHXV42THZEGD2BRPQBDKJPRDHAWOK3VYL2BQPQ52TSI6VZ5YV5GM34ME5QO552X7DPYVHJCE4BPCCTL6LZUZ5GEL252TUC45SNEHVNIU3YMNNBTE4KHD5L454KZ6DHMG5LV6VFXPPPJLLEAMSM5O2ZJYY3CMU7XM7HAL2X5PJLCXO2FPGGH3KMTTIWGYFXTU75KEI3HFPLDL26ZPWKPVK4R3WFRBNXIX72KW5GC6G67U3OPXD4FE464TE25C2X5OMVBTWOBBLHZ3PPIBPVUY3PM5ZBMV3CKFAHEZ25URUHT52JIL3LIT6PNKAZTW7B7BUA36AMQSJ4E5FMMJC37XNDFW4RNE3KT24L5NLOUFDY23WBUBJN2QTLW3PTHOEJ3E6QOPHMP6XVH54YDRPVLISOENKRON3U5M7KHV5X4J6HFSTIU3NTWMN6P4WKT367QM523N6ZXZUBCKJ4OXCT42G5NH57X27D6P56735PZ7ORR7Y5774ASNGTJ4Y======END
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=never 0000664 0000000 0000000 00000002135 13215577651 0027200 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 371
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1600.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 1600 [1000 times] to 1600 [1000 times] bytes.
# * Median size: 1600
# * 99% of examples had at least 1600 bytes
# * 99% of examples had at most 1600 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 112: STARTPCOO3RFRBGADAEAAYBKZ5L2TEQEAUWKF5RGGDHKOOF3XPMF6FPXMS6O5MNTI7PNNWWLLA3O3WZW5XNTN3P7T6SXEDTR4YHWL23PA7D6RHHFQ====END
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000003004 13215577651 0031426 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 375
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1063.74 bytes, standard deviation: 743.83 bytes
#
# Additional interesting statistics:
#
# * Ranging from 32 [342 times] to 1600 [658 times] bytes.
# * Median size: 1600
# * 99% of examples had at least 32 bytes
# * 99% of examples had at most 1600 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 528: STARTPCOLKVN3B3BDADH4SUUHHB2CAVEP2FOEI2D45XKA7Q5WEQCAJHWLW4Z2GQBFCB73L27OOZN6JVFR5U3Z3B36T5PMRZ676PR737L377E77QNVMBLWTN34H5YT5GZFJXYGVPZLW5QPDRTGKHEQ6O3UXLQW5OFUGOGKTA3HGWKDBINPLOUYCPKA4VD4UN65LEVGR24KMKKUDW3JGEOW7SXNKNGSLCGSRNGWGXDX3W3N4BYRZ53FRVIVIQZNDXFH3MVVYYY2H3NU7CXNN7R5T3ARUQV6YN2TMFDV7P55GSQ6YZ3RVU4YO3KHSYYJQA55TSQGMRNUKUZCGLLRNN4VGWCW4SYULHOES4JNOLHCU33UY3Q3YNQQMQIZYW7VYPP452GCMYO47ONI33ID46BP5JOKC3XCWXXOLHFCROMAX7I2PXR2RWUIXFAETMNWTOKZXAW5NW4JVPMEIX4PG7OLJ7F2QTOTXE62DLTAZY4TMYQIBNN7VWHGPF6V5OUUS5U3VZRRBWT4HYYJZ2NLNVA=END
hypothesis-python-3.44.1/benchmark-data/ints-valid=always-interesting=usually 0000664 0000000 0000000 00000002476 13215577651 0027567 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 373
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 35.44 bytes, standard deviation: 12.91 bytes
#
# Additional interesting statistics:
#
# * Ranging from 32 [910 times] to 208 [once] bytes.
# * Median size: 32
# * 99% of examples had at least 32 bytes
# * 99% of examples had at most 80 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 352: STARTPCOKWVRKZ2WEULKWWJJIQNRW2JIYAYJTCMCJWEEGVHC2NBY4ONENKR5MTH2MB5FUN6UIMEJZOZNBRYGWI6XPWDKNJRIDKGYZLBIM7RME2QHUJKET4JXFZ3RDI2OJIWBCG3L5GKV4ZA2ZOGD56THKD6GCOKYKIN2C52AOK5DCZQMYR5ACOOR2DGIBOUWKIZVVQ6OHTKMRABEC2BAGEIRGQZPWIDDSXNJKDRNIMP5BJQ4BYMBF7GEJFMAB5GSWL6RGY5PYHLISDZGG4JBVX4HZLZR3HXZTPOCJRZWDDNK3M74MZJDG36BJAXJIDYS55MSYDUC2LYWU2QKGA5653DOLQFQAQW2L3AI=END
hypothesis-python-3.44.1/benchmark-data/ints-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000005171 13215577651 0031633 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for ints [integers()], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 376
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1227.23 bytes, standard deviation: 810.66 bytes
#
# Additional interesting statistics:
#
# * Ranging from 32 [280 times] to 1984 [once] bytes.
# * Median size: 1744
# * 99% of examples had at least 32 bytes
# * 99% of examples had at most 1888 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 1672: STARTPCOHLGFRSLNTADCEP7C6GOUFITJRI5N7SJEZOK2SL6LUZ7R5WFEAE32BXI4I3DUSIBQACLEW7ZZ776XV7PZ6X7TR7O7B6OL7XPWUPWK626R7Q7NKV7V6X2JD7W72WW7NO7623V5V3DJWI56H6XHHTXPHNXMOQVWLLFUG73GV33MM3EXPYY73LV74HOJXQ5PN337M6ZVH3PSC5FY7IV3CVZXFYPPGBVBHXR5RND6AHWYDWYUA46HCRX5YT2O5K7N3V3WYWCM5FNFD425ZWAWF3KYFHNMDZ7P3M3F37WTLGWCNU5ZJF53GRSDXGSH2KJ7QCWUJSZYNZ7UE6XZ5PWC3T2V3U3WOWF4GDX73ZIJWQBV4TU6TTYVUMT37F7N53F7WZ3UZDGNFKEBU7K7BRLVBU6GEHV3255K5O5MDXET6Y4PDWBODQKFA5NMHZTCB5ZLTEJ44FB6X2OBNLFXTM4QZNCF34PRPCAND32RWOWWUWDKRN4LJLXI7ZNOKF5KTDZ4O46LGLL7LLXLKHXVROPDJ43J5XH7WAE6HHNMHHHYMBFKB5YRAD7SUL7ZJLOABXUBDMOSW5EUZL2U5L4RRPC24J3QIKESMPGRMBITHHL6SQHCBCKTNA63HLKQ4A22454HYCCEKXMGTKNMQJ2ARDQBD2E35XD7BXTOIWHBJNXA5PFKEF34FAJZYDS43FGXTTME5GIOFSKGN2WHC7SVGUG7ODUU65O6GBEJTLBSA66MSWTDNNU6UPHC6OLSZQHJYOHUCE7AAAY27SYHPX53JVTCM54GMUBKEUS7NGM4QMKPYIMHSKMRQ7E6CWFZ4YHMQFLY5N7PS2MRUMLWLX3RJ3JP7GITHGCMJBKCLRWFDKKAFM7XRE4LQL5TM4DZVCRVAXXRWL5FFI2BSIWDRMHCVKA6Z5DSKZN44V33KCGJUTWOV6XEDIE5SJ7W2MSMMPBDMLTHTSPZYW4KESKBFEFJ5DRKBQ5HPGNGOY5YZL4UTN3M3ELPZLLPKBNWBLGHU4DPQ2MAJSKB74MGIGQU47IVOMNEHN4F4YC3IT42ZCAOLWEFCEVEE5MHXJTCQU6BMAQTUF5LHXSBQCCCEKKG3NY5E6KTYOXKXEOBAREY4DGZUNMSVRUNAOA3YJ7MZ4RFKMPSEXQJJPIU6ZYREUPDZ7ACFT643CWG5TJFHBRVMOTJMV65NGBOQXVMY25YFZU6R5DKEFLZZDBA2Z7I34YUWREGFKAMUOHQKHUTYKMGEPA4FIJOUT74BDRMPLI7HA5JU5O4N3JYVV4EI3ESHDGW33GL22CNU46S5T2WA7ZQNM467YR6EVK6UPFBYYLOV73IJZEUPKHCQOGQTWM6WVIFG2QFNMSY6Z2RZ2DBQUB3CSBZPRXVNCYNU4S5GQLYBI4NU7SMULJH6YRJKG3RJYCAKA6YTPYZJTY22DTJEZ6EGLGBNTQFTYHS6P6PQKGEMGMUVXFK2H2D67WU2J3PPT4XDLQRAZQWO3GFUVKWG5AGUCACEHLCQHQNVXCLMMKXNQZKHZCG2FNJTXGXGLKC2HSLOR2HBVV4LSCIKLQIG5JCRL7DRBEPDTOATB7OS4TS7WDX57G3SLHQGLS2LZ54AAQ4A6XJNFSEKB3QV6RLFBSHLOOHSVLM4P56LXXN7HY7X7XR7N2H2H7H5A6GK7XHIEND
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=always 0000664 0000000 0000000 00000002240 13215577651 0031311 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 384
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [1000 times] to 2 [1000 times] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARWW4Y2QBABUO76ONA====END
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000011670 13215577651 0032337 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 386
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 6083.48 bytes, standard deviation: 2640.32 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [2 times] to 62752 [once] bytes.
# * Median size: 6097
# * 99% of examples had at least 385 bytes
# * 99% of examples had at most 9697 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3944: STARTPCOE3GF3SISTODSEP6SWH3BW7DAIHVFPFDSGSDGZ4PW4N7X3EIH2V3WIXHIVCXKFQJACEM6B777PR5OPP57P56XRY7LZ6MPNPZ74LDHZ7W2SZP57ZYO6P66L2KRR532VR7B4PUVP257WGHNP36I6H66LGPVVHN2ZZ463NXXXKO76447LZWUF7O5MGB3232J33BVGTNLDWWHMLUB3X225UCP5LJUP3HFB765P5HQ5ZETKUTJ4JPHU3T3T36WURJRJVKWXR3V4YRPW7WTS5U2XILS7PIZVZEOHDSXDO6ISXEKAHMJNGPRNO2RXKLHXS77MPI74PUXJKIE4W7PU43FD5C76B36Z4GOEUCQAMOPCXVFQNN566R2O3ZEXNRNL3KHNINUZ3LSUU3UWK555O2LI52TLFFIFG4VT5LZSTSKKHU77Y53I7O2DKFHUERICPRCOTEC4LKZHO2KPGTHWE4FJX6TIPCPVVOJ5HJ27VKKPONM73PLTFDVKG3LTI66PL5LDI5P23JCSZL6LSOWQWH46FR4XCBKYC25VJHEAHDAFCWP6Y2BA4FWI5HXLVWBF6ZJIV7LPJPLLS5524IQBT7M3TPCH5NWKVHCB4TBGVPOPTKBYU67NLCFYCAJBYSD7MOVOOTTAFQZGBT55DKNLQL2WJKZKWXYNJC2GPICKWCB5FHEPAX2BVSJKJZ2D2CXJAKMWZIYW5LJY26SGI5EC6H5JRPBYQJPOEDFAOB2BYOSJLTPLCJEPAAVOQLWQK2SXPEVEWTSV42RPG2XDRFPMUHCMYANMPRY2ESSWTUUJIN5HY6XT25KM4C3KTOBSDF5DGJ3ZLOFXRR5EIOKT3MC4OXKDVNX26D23WXDHRW6K4LRA2LK6OUGOWNMWE2W72P4HCBL5UYAH4ARMAXICN43WDVXFDND3O6RGTCZA3JFWZ5AAQRIVTIXYADT2R6E3GD62V3D6STCKFCXT62ANHHEWORKDNKAJH4UBEB3SIKZRW6T7RKFHFK4LHS5JYFBPIV2QK3PHPWVKB7YWBJUS6PNJSLVTORLSAZ7CUTQIAPFCSVRAFK6R7A65G4ISVQOXKS5DHVB2QUIH2G2Q3LO6WXO7MLJJU4J67PHCGMC5GVXKIGWOQKTBINE4JYFWAERTAG2SRIUOVAABOHDX5BT3QIENT7QJYUHCUKI27SHWKTWRLU6K65WELBSJOHD2PUMCVX3KD2NI2DKN77JECJTIF6MK24JWVVLKNFKEKGDRSRSJF4AZFCEYFCQNY654F4O6ODIHLXJHWRDIBUGYWAOSIWCIQQIODDYBJ4KET27X6C7P523UVFP3UJZYK2SUPBMYPR23XWIBADZBT5ABLGVTNIDDWJSRK5MCTE5BT7BXLIS4ZGW4RWCEDTITCCVB45WRZQOHYSYPFUYUECM3BI6XLXQZDVT7HKF6IS47RLKTDEGIY4SRXB66D5UFBKEVPZ4G2URRAFUVUBAMNGPHU65W2NNB5QHRWXHU3WAKYU5M5KXUT37L54C5IFVMX3C5KFJFMA4KQA2AL4BM7PVL7HZJCHGSJHNU6KDUVJPUWDLBGUO2MECWN63O5JFAP4B72QSBBJTM3MS24Y2AEHDZUQJAOTUDWIJWTLOX4GLPW2TKEVGIWDNI6ZPJFJEUDS4CETGHFKXJAMKZURT674O2COB3443FIMKFG4GOBNBCODN6XOQOVXEJG3DVFUIWMQYKDGNLYYWWNCKXFFFEM2DCHZWSI3U5HEOQECHB4KBJOYSVUNZSDCOJW7QHIKILDMRIYNM27XQ7OQEPZFKKRWLWKOLTPPJFFDJ6D3SQ36KPIT2ZSQJW62IMHUUCBYNRLLLUD6QADGHNWQIB5OE6GNQ47UACXDPF2FUWR2SVWLD2DD633ZIPVBNC3MCAB3N25AAIG5BPPDNNV7KY4MADWNGR3HXDAPC4TFKUYGPO4HUQD6T6QAJMOYME2BZXNW5EYP56YCGTSQ4WSJ4E
J46TQTQU2HQIBDL3GBLIHYHHIYOCMFLZXQTLSDME5AEAFCJCPLZTBLRIYBY3GVDWB4X3DROZANZEJGQ2UQJ46PM3MQO5TAKAN5VWJO6NO3YITWYW5Y3RLCKN42N7HKC2GFCREFOMARRQO7CO5GTJ4VGGEELEUBBFOXIRFG355CLFV6FXIV7YYVQPBN333IHWUHCDQZX3TR3P3B3FCDMZQH5WITKIUFLRBXNNBOYNOFZQKWSQYVLYRTEEDOTRSG22WOS6H5OO5M4AUE4EQAHZVXKFVAAAEXMHDYX2MEZ3M42HF5TEMOQCP7RV7XINIGNAPQSTUOXJWQTBB7G7XCDWCJH3MPNL3TI5GMDYTONHU4MUK6ZAHZDYTLS6J55MBWWQL524FRHUC5JVAHFGUVV6XVPVEKXA67MVEYXPYPVLBG2XHUVFQZVGBDJKGPW64HL7EAXLQWE5CVQBIZDMEH54B2YHENKGMGBGK2G5MRTBGAYN5J5AVPN7WV32PYJ6M3TSY65MAZIGDIXG4CAZG5DBT3CR4GCXLO22EJUOX4LIW7AYA5JQQIDAGJITMIOVNYYU5LNZQPMLYORDO6AZNOVYXIQ4DLKVTDDK7BRSUTZ3BPKEE64XESBUD6HNGI2TTY6VPLTB7QZE3XP7NDNYANPDVFG4G62YUZQDXD5NC2KGWYGRELYTGVXHQPIKE4HGPX6D4RMN4APWU3HBRW7W64SQKYCWTE7NRLHCO2TAZNEAJGUQNZCGW77MW6HIMC6AIHIRYNYQIWDIFUMPJU2KXJYGFKGRUCK3ZA7N3WGKWLLRQYBTEBDQ4546TQA7UO5ODKABGXUETQTDIXDEYF2BLV5RNFIRVYIXXMN32CDQD3NO5LU65MNE3QTFJSWR7ZTPW5Q3RXUGOCR3WARPHHWFHMILI7ZMAMPDAZ5LGDWEEZNNU4KDV4VZBVI4YZZEJJDW5LJNYNRY557LVCFKKD5GDNDO3AIFMDTXTBN5M2KZ55KJE6WTE6REY6PMNZRSYQMW7GYQN5HZ4TAUXCGUBDW2VKZ6NIK5B5SFCRBWNDQEWQ5REP6HYRH7GUXACZZ4HGWVXAZJYMVYKWRNNYFL6FVD5F5EBY6SAOATEAG3ENQWTBF7ARKISB247LVLIUK6HUOUCALO7EMQT64CYSSDEB6IKRWPSOY6VZP5VKMAPBT7Z2M3VJSGMHKG6P3K3FPVNUF2TYUIPCUYJNOKAZYCHJOWTZOQ6APMGYOZFF3MSBZTKUD5UPC5VENGJDFRRQ63WV2QH3R675BWTHJ32ZTUATLMIDAZXWMO626D2FR44PKX3LKOCEW3XH2WAUNPZIQTMTWS52UKMUP3RB7SBXW3QGJKSETKDYPUDVIRRQYKBXXND52LRAGE5FQ3ROCXINGZC5ARTTLYOC4E3UX55X6RRILM2HR34IJ34XWZVCMTVOMWEXOMB7XPQYCOQ3RDTDUEL3IIZJCEVJM5VEUZFMTYSRXWNRO2JJHSJZGAWR6MTZBGGN5VEKDGJGABF2UI4WYXL45RNJFBKOSWVUTPPXJLNBMOCL3RKVNRC5ZB23MPIFPBXSDOLM5CDFP7HTMYAUWME2RJWMYZQ5YDB6FJ5AVRQNUZSS5BGNRZ3TQK4Y3USFQ4UY7LKVMWHFEOOZTUNARPYN6AGAM76RGLZLKL5XLRMRJQJTCVHLXM7EV6TDI6325YT5VL64E4BHNZUGOE243HZ7E6WLLAIANHAZSCJ56SWNKWL4VOR6BLLL2GPYY3UR3L7YQZXGFXYBTW3N6YXAWF4T3KZ3IKINXB2FZ3YGAB6YECUH4MZZUAIUFNWM4KQJF6UMIYYLFACYL5XNJHK2QUHKTA54TNSRRRPG5SXGYQT3DSWUGCXEZE3DL3ZL4N7MPKML323PTGSR7GTS7H6CEIRXJTHSTGDO2B2735P767R5PT47PT77XY3IT75537VSYWGUI=END
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=never 0000664 0000000 0000000 00000011623 13215577651 0031135 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 383
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 6323.27 bytes, standard deviation: 1171.34 bytes
#
# Additional interesting statistics:
#
# * Ranging from 3554 [once] to 15672 [once] bytes.
# * Median size: 6224
# * 99% of examples had at least 4224 bytes
# * 99% of examples had at most 9688 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3912: STARTPCOE3GB52I2LSDMDV7ZNLRQ35CDREKK7YXS4YGZY33GC5X65YSBZS545ORG7K5FLFEJAIQH5463T777V547757HNN6775HXTT3PX7XJ5XBLPN5PF5JVZ54P3V5LVHP7HVO7N5XP7T257U5443X3U7TGZK7PMVIP347G2G73VZVFH5R6QTJ6OL3O367T6X27NQX5PSOT5RVKUWK7E2ZLZ5L2PFOD25GN2LZOZ7YZ4TNTY5YQJV2SQQ44HKZ53H3YU6HKBTCV7OZUBPPCVLMV5JVHPOHMALXZ22FIUKLGFTS6PL5H3IT2ZHGN4CX3SVHRVWRJXNVG35DVXFHBHCQZZSWKT3GUDX5MVVPWO3IZPJ2W5NX2SPY3DXLKSPYDFSCTW5IWA37O3D7KAUV7ZHGR3JTTY3LXVGDC5WX7PPTNLGXPSQ2VBJSTOVEQVPJ7FDNI5VQZZ6YWIJFC7I3VSY642SHIVPGVHXPTRVFQBBIOWFF3D2UQIDH24RYJFVPFNKOO22GNYV2RGHMNKT6FVZ3MZHKXJLN47ZFVKOPT6LN3SSVOTNZMETVPHEUWKLDKNDP3VSSM5ECQV6GZ44W4KE7LQLMIRPKZPJTCQGWP2GPLLCPQSGG32M2IJHJP44QIX5KUEZG6EFV2ZUU5J4BPKQSV75LNIKLULU25RAIO3DF3WG5ANOFUW5ZLO2ORTAV7MRWAL7KBDXF4VKXSXRZDHNCTRTKSFF35O4CSX7VOULPQ6L7OGBI4MDEIHLVDXUCSWH3W52VKY36UGK2NWA6YR77LDUWPDWHBCI3ZAWD4PPUDXZJ47SGP2XXTOO6PNEUHCRUSXN2BK47M5RWBACFBHPXLUWKVLMPISSSLXFDNHH2MFS72JOOZXZQPNZZ7XLSJYUJG4IKLQTJN5EFSYAJ43ACULFPW6BRRTJKS3W57POHTBFI5PCCBZXHCSO6PJAL5DODTMWPE4YIY3OMGCUAXS2MECCJ6TZQAD4QJDKVELVT4Y5GOW5MA2ZKPCU6476CJUTL3N6JYCP7NYZR2YPTHHC6V6PATKOMNDSP7PKHJYIVOUPWYW5QRT6AT5OU3EL3GNGF25PX7WTBJOSFRDWYUGNC4IA7HFME4A7F6MJT4IUKU7EE6MOBS54KIFZYE4XU3UFVAXJGP4ECKNXL34XJJH2ABCGP7BJEZPGZJTOZ6S4TALEY34QZ5VMOLIBHISYY56Z5G4AFJ7DVUDBWJL33FE4JSH5K3KKHAR3MKB27RMCS5ASHXQNFO5OAOXCNUG3HU2CW6VJUWWL3YHQHTIV226CZ56IXWE63AKOOVOSFE4MQGCMPL2DHAUJTKTXLPPJYHUZWGYUMJ3K4VDSSXVA4JQ2YBCQBCGKGXW23GAT7BEMV6WHRYL6XE5A47DDVYN7CMWRO5NOV62ZMDZFOSB3B22H6INSKSWFBR3DJT2FW27OJTVWKT3OMWS7X7OA4FKLFSA7JPTO4L5STTKWGUALXVA67SXLXVXTONLWAXN2M5IW6AKLWBZ66YFOD5WAQVKQA3DKAWBX67NEDXEPV43XOLRFDHQT2VMQ3YPNEYKWTFBWG3576CZR2RNPIJRN2W7IL2RQP422VHQM2VCGVL3UB2RHJ5AJFLPRUMDQCRQUB6RL7VET5GLETIPKKIUJUPQDA4UD7VCNIS5YJSRLXRWCT52B5C2ACTXKO5UGH25KBCELCR4LBARFDTNQTJTLLB754H36NOECKDNY6E67WB6OFS5WHD7UC7T4SSDPJ4Y4G2PYMHNUNJTMTHYYBEZVWYX3JFP7PXTVXTNPJK4ST35HILQAUHBAZKMTD6KEYVO7PCVSONKMZU7BAEKTUHYXIQU6IPALN3U3Y3NLUI5LTINTVRV6Y5ZNO7PF5TXFQEPPHWFYMBMCTCUHZNHO3OYCKHTVGO35THY6J7662YKJNVX2VLY3POYDUVOGLMKAXVJ34H543RTQJNUBKONNEQLHAZFJ2ONWAPN
Z7VSQJ5VZHAHBDIOBJEA7ZRAG3S3SJIOK4GE4JROCGSN2ZHFWPQGC6REXQHOVE6WKZ6FJQK2J6GPIJRO7WEFL5Q5UAC5O6TBBT5P3IGL53OJYZOP634DVPIJVPR62G254LPDIV4AS2CHWD74N4TMCBJF6HHSM6RNPIHYKRYUB7JHKMMOEQSS7BUSP4RWN7AYBZNQPWWDYVXJPDSIIM5IXNU7QCXSQWB55RWPSGJWM3ID7FYZDGYP2FPAZJDNM5RZF6W4XWFQCZUUE2EHQUFBALAIDXB5UYWAGPNYGB2WKXZNAL3ZNCQ4TWMLBVJ7D3RDSW3DEF2DZQAHM2EJITJC5NR3RHA4E4ZQLBZ7QY3QLDA73PLZXKJW5M4SD4VLZLHERGV7F5DDHRHNQK5JERPGNKCW4V2YUGUWE4MAVZRPB3ZRYALN2CAUG33LJAQXDXDWKJ3O4FBVZTRNGEZEJHDY7NDTQRX7I35NREV2IMEJB55PKMZPC3362MLDBLR52456INKJJZ4ZTHWDXXG7TY3EB54AU475GGHO7DBG3TFGY5VJ73V7NXIOR26HP3DS4UX56TEY6AMU6CFJ7OLYONOHY47H7XESV7G2O2Q673AD7MDAUM7BRRRO6G7VUPRQ6WXRZB3BGNFAYFWTC3NXSB54FW7YCHZQJ2RBZADNSPU6CHHUKZYAZUB5BQ3TIDSLPKMSJRTJU7MZPMBMY3HYMMWOXNTXQG75AW654X32XRW5KPJBNK6K76ORB5CGIHOWCNWFRGHU36YVZWGLN37S52MNFYGC3I4HCVSDUVHXRUUCBPYRI3LRRALG6RKUT64XHYPYYIZTCHPRW3RUCT6UPB2ROE34UCIBOX7MYZ7HMUTENFJ6LAEG2BBZ6K7Q2ZS663KD6O5XFYGLCKGALHG2CXI4I2KVUQPR6DVC2GRF3E4ZID4T6LTD4H7ORBO2HZJTD32CEI4NHFXWQWNFCFZGVIBCDMFMJGQ72PQ2BDZO2PL5HYI6R6MDYU6W7INUHUJQQ2OMRM6URQAJXRN5PZYT4P347JGSJSBAFWGRFCJHF7BPHVQ7AFUTKOEUBO255SZ2RFD3GM6O3YLZF5HMP3UYTU7NRCNJHLYUDPJGPYSEBCHHE6P65KDHYOPKLWGLQ5AFDNU42Z7RZOXHD3W2KWJFMBCTVHP4LM6KA5J3SSRB7W6FYBXUZNHGWEA4NIHPWSX5TFQKM3CJRY5LTGOXTRJ4MHI2IW3RDTDF5QAGYGMUUCZBRRNNMVP2CQKR3H5TC25MBXCSZS4OIYPBKUBRDMWS3JOUNWDROBXIPH3STRA2ODB3O6VTPKDGPKUMGHKP3OOKNJH5N5GLXYELHBI4D3JTSWF2IMMP4I5H2FVY2PBLNP5Z6W7YPGPJOJTNGAZNIFGKYHPWUZWYGTOLEYU4QFNSATCXYCDIGISM426ZUBWGVCOEQJXI4O7OWSET34P53X3AGKUHLYYWSF7F35HYT2YW6WISNEQJDVQOHW4B2I3LMZY4LYORORQJ5KHCBIKAPHFLGHAODZNYAGI4HS6GZLDAWHNMCRTSLQDOCPQMNVYJA6D5UTANCUBSSJP4XMB5JBGB7RDB3MDFPDVBWZC5IRYHRFP6B4AK4IXANXQGULE64Z3YWRUFT6D52DIY6M3QFWHY2JSWR7SAWOWAQUQRKS27B6HF6PXEJA4Q6YTTHZBRTLJYHANHVSG6AAIBZLD52ZE4XAAPTLMPK6ZGPIOBBWGAUYCKH5CLW4SWWESP6MPC3B5TAO4JOA44J4DBYYKN5AZD2OWZ4IMJV5MTSM4X2OBD5O7CZCSQTPS57QWJCDQ2H2OA5B6FY2RO3PX7QHVBGSCSDSNURUHTHL3NBIGXSOCGYA6PTEFQZGFSNB5YGPVR7P777LW46P767DTW6773WVV775374ROGVCI=END
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000010712 13215577651 0033365 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 387
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1285.08 bytes, standard deviation: 1151.25 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [17 times] to 5662 [once] bytes.
# * Median size: 878
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 5322 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3448: STARTPCOE3GBZSJSDSDSEV6JFM4QKYTBK3L6S2ZNJPUDSNEZTM5Y77IBRTVKCUZC7ZYBHWE4BYDX667D267766PZ5PDZ7X37MYHG7L5TDHPV7PLF7HT7N7RP627EP2337OVYO777PF3LHLFNP77U24M6HI7KZ6MWLI5J6GZXXUJ5VKPJYGPH7PKZO53D532ZMRKITW3LJ3J3LT4BXZGAWVTICWAZ3HI5WXIYWM6W3T7UHXGO46O6OR6RS6PK6HJAHW4FK64GJ3GXWC2WM5YO567M5NY74TNOY7USCPPF6TDFO26TVNN7THVRMRJR6U7PGRT5DYT6NDZ63WXFX553VZ2FL5T7D2BXD3CCD3TMJSK5CKMX2IPFX2BHWNWHJV3JZM26BGF3OVLHIUZS35W65YGD3ZSM5ITQSY44TMXLMMJAWPKJK5BBI7DLOKWHWYZVGYXLFSSX6TT3UTUPCINBWOFKUZDH74Q3FVRKR7ZGKLCW7KM4Z4U7ABCRKO3DQ2RKHMXBZWMUXPOSMH27S7AFAYRQINRJOSDE24OMWRTUFVXPZVY6JX7BWHYXOKQTIOAC45E7IZBIXZ2CE26DD2F4VYDO63RVZN43CMQ43GAVVYBNOIURILTYTVVARBR4D36IC3XUE3FCU5FK6TOW6FLWP32PUSU76XMQJ4XOL3QUBCDSJPTVIMNZAU6G7G6JXFJEWPLGKSGFEPJY5O62ZDAVHMCXLKKJNWDBEFQESEOYTO3UQMIUUOPR2TEBF6UMIH4KSI22N6UNEXBSZCM2CN6A3XHPJ5NWLBXVD7NU634YFCDFSWKB4TIJXXPBORVS6QKWE43LF5Z4IA55ZR6ZQJME3LVIOZ6NJWM2R4PEDKKS2ZAL4R6LWLI76BD76ITAWJX635I4BC645GBPQAO6Y2QP3UYTAQFSOQJ4IKOPT4NCSQBXH276T42RZ4EU4FG7SF7A232PLNQQGGX73BZWYQTPCLHNBPNCHA3CQY6HPNMV3R7AEYBBQQPLEARASSZUCT3HWML63LKPRO3ZKEWD2RKVKKOXB2QV4SAM6ZLLXDIGCMDSQ7F4AOWVGJVQROMVFCITBTLS7BEEMCLWGD4OBRFZLBY57MVYX33EGB6KMWSZXFUBOZZQMINQVVR3C2SUZ5BPDYOUITO4BYSRO3JE5DSHBMN4RXWS74QKLJEXDFS6A6UB34SX2UD5AOI3JVLZGTTRN7ZBKECD7NYPGFIXK2WX5SMRO2PXJJ3E5FWQF6ASZJOBREXA7RXE7VKGCDFRMHKWN3ZFMTZGR27UA6OMDLG4S5C37HJOCCF2LYIXKIMPGP7NIZ2RB6F7QUKIGBSYIHIS4B3OUNAIVEEN7KLKCTSSZIUGC6CSVOPAYMGAH6V2HTMUMTYUPVSIBIEIQC2F6PJKVADDEYQKJE5B5D7ZTMMX55U3F4KCH3GSTEWZ2W2BSE7FHHCFAWJSWPSNOVMBCFO6Z6HZSWWZHQKYIOITITCKRFBHA34H3RJYW2PKIAAK2RDRGWSGNUYN56QTO3JSJJKONFFENQUXAC7IXFFHEQ5CWUPEUOJ5AXIMEI6ARRMS3QA3QJECQLCCOUKWDMAG44UMRSA2DZAK5UYMRP4GGQX4O66AWE6NKFGYP7QFG2EGGA2F4ACN23JAJIVID266KYDBBWFB3NBCNVDV6RQ66KYFYTA3OMR5EIAPZGFIL7KMQLJCL2M54I6L2M5SGDBYE4RBXA5HVR5JRDZ65NUHRCVDJWTYYTTUQVPF7OZIP4XH5REEIIHDGXLS3QGHLRID7IP6PY7C2U5NC5O3ZVGEXMMJILHA5XLXQQQCYNN2PDUNCFKM2INEY34IIM4DH2ZR4Q2TSBLFUD7QTDEKX2CSZWOWN5HYXEAWW466TFJUEK6IQDEBNMB5LCAL2UY7SYQH6QGBPAVQ42KPUYKCTPHLJ4DV3E5AAI3NCL4OB2BTNPCSRU34TCCU3AN3VKLIIPYKX63TX6FIZ
PET2NMPEMNJKA4KJJC3EDGKVLJW4DWLUHK2TVRBSXUBGU4CUDJ5GGYIOPBBC5UOYB6FRPDO3EMFFBA2GIJ4WN433KRGB6L32BBR54JBMM2P5VXBIAMCF736ICIMBF5BECIJMLJ3MBTKXRJZYZH4MSPRKHBUMM5CN2DKNELU4QRW6NDJIZZIXXSX6UQDQNFNSK6UVKMDDCQDOQPFFS3EDWYQUQ4U44U5TVRR4B6GMERTRQIMGWOVE5ABPMGUZFXOSU3U6GCKUHQ4QJOGRSHWSLC46LE4CFVUHBSETKSDH2LVL36W25VD3JYG7PI5OX4FKLPVGPWV6RQCWG3DZHQ7QHQSTQYPWUDPRDWMD27MK2FGDVJSUS2YFRIBOGUHTNUJZBCYKGAU6TBNMYFU4DJUPRWFFQFKW3WNO3KCJGK7FKVPGQEIIPD5IPNGPMN5LFSQV2ZAMUM4YTQCKZG7XCMIHWJYMGJW73DKURZTDCJRL5CQQEPGVWRVGAM2IEPKNFB3CGNQ7JHCIN2YZ3O3EBQEZ4FPO62TJ7M33PMRSYRRIR4TUTEHYP2OX5PK6GMCT4KTVO2YK22DIGPTJJIKPI533CI5SUOQS2N4BWGHRIYUONE27NSNS5XUFZTPHEXJ4K5AOSHNUM3VUG7DUJZBVS6XQZEZFNSLFNLU2JLU34WOMXEM47KVIVNVBLF7VBD77MH3YE7Y32XLZLXTXKPKERMKK4TE3YBMFKURV6WWSAO72BGLKMQPVPQ4SIJDHUZ7LNF4CURSOACLGS2VGT65A5KIPHOCUKPBTOYLTKXSW5D4LJ5UMI7ZSP6WLYU4Z26AAO3LELZHEWBFQCUEC5MTLJJPBAGCDRFRJMEGDQD55VPEBBQTNLKR23KCTH4YSTAVPA2XXWTRZGW3VEDU6V3UPMDHABN5PXIUY2B6BSYFKPPHASH224Y3QOFOBTKNWH2PBZLAJYVV3RL2ZVZO7QMDOBCEUOY37SQ37HR7JPYCTVMJFAPFYNNY3VV4T4CQ2YDG2QH2ABFGUYFKG6XTYRG6CCLMLGLQYDCM5XBKIOBV6EXCMLFGTC5GLFNUFNELGX2NRAQU2KDSBVQFOCDN24YIVKFIEWY7P3BE4X4GBJSAGVBG2KGZB5WCW4N6H5ZGNWL6RC47ZPNQC6FSI5YH4VB75XHK6BSTJBKVMYASVOYZW7RF3H2G4SIF2P7PWIMICEATY3R3MTKUF5P32RIWY3QNSXIM3YR4N7HXXWK3UTSQZKWSWJ6R4TDWPGGAZTG4KAINKH7OLBZHGGNURRXOC2LGEUIZTQNOUYMWGTR55YMUCKU3XH4VJWBICTAQBGCGDGFLY3V7VEHMGVJMVKATFCLGRFZCJYSYU6O4PO3WVH4LROYM7ZO3ZV6XIBNMMF4GH4WB2HP3WZWROWXQCZYRETLRBIG67PP56X57PZ6XZ6PXR675IMP76777BKXR6M===END
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=always-interesting=usually 0000664 0000000 0000000 00000003314 13215577651 0031512 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 385
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 80.91 bytes, standard deviation: 339.19 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [904 times] to 3362 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 1989 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 648: STARTPCOL2V2BOLBSADH4JLDOOHAQSIIPIK453SNEHT56WXJ36N6UYSHLCMHAU2O5TGJEATEWE5ZFSLHWC7H7XCHMHS7J2WPE6EISU3LTEKHMPTU4TV7CW4ABAWNPWDU424IYQAGE2NLOS7MDZEQFCZK65RGSCIQ235AQHV6GFLHN44P55JPX6C6HUTAGELMQMZEVMBCW63657WW3GB5MUEBTZV7WEA47KG3N3XL7SBORYUW665YUDHKIXSDAEYISJW2MZ2UKTK3H37UHRADDEH26CLIIG3COFDYFV5VDB644G7MOBHYBEFFA6467CSGQLVB4BFTDNJDFJY2VLOOT6TFSN2DT7Q6WU35CVCQFAVIRPBUSJYOZTC4HOE7V4KQ5XBA3PUSUNELEWUKFATOFPGX2OA2B2UKFQ5ABULPHEYJ6C4CO4ME5GL6SP6CKLMLEWD5XNYAEPWIBED4CSH3DEUPEZ3ECERL3PTYS6PKVD72H4BTZU5MGOAZFE5AOQBZDONBVPPNGVTCWOOHU256YBRVVZNGI4HN2GKETHRFTSJ57FQXKFFECRN2PW7HC34DGY3ZDQGZVCYKTCCKYXPZXCBNFJBGQY6GCJPZUETPXCYN2NEXOPUZQL2XW7NZ7EPUPT2Y3OXJR3UA=END
hypothesis-python-3.44.1/benchmark-data/sizedintlists-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000011024 13215577651 0033560 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for sizedintlists [integers(min_value=0, max_value=10).flatmap(lambda n: st.lists(st.integers(), min_size=n, max_size=n))], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 388
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 1393.57 bytes, standard deviation: 1259.92 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [18 times] to 6429 [once] bytes.
# * Median size: 1016
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 5411 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3520: STARTPCOELGF3RYSTODCEP5S3B4IEEKU2O76FODTA2HDPM3B763WWUF2A2LHO6Z2KWJJ6RLCWF77T4PLV67777T26HN5P37L3F35P2HL7PFZV3PPV7HX7RZD746L4PR74KODZY56LL35P4W47JLFN6XDLE7D66JA2Y7F5Z66BOLPUJ3JZ5OUKSMHY557PV66WOXQ247UO43W2PVM252RLG6NGG2URRYPZNPZ5W3LG73OSOP6ZROOJ5G44Q6WL34YWJBQMRYRWXW6M5PKXJESOL75DCW32GNZKTW4QFFOT4WMUFW45NZWYXIQ3H6WOK65HPLMOEAVHTYWRBMWZ6D7HGK2HM6K2XC2XSN6ZTHX4VBW2M5HJNZINT47BLE46GGZD243ZTCHHMX62DI36LW2SCB5272HH77NFOBUY4M44SUE263QDTRU2BSOLE2ATFDCWT4YS23PGZGGSUZL3XAVQSUO5X6JYUYZODVRQL343BLN5DRXU5E6BV3OTS3P2HPBGM55MHTZHF2PIEFPTFZLSJMNQ2IT2P544OWJWQUR7TUQ3I4V3UVQGAAWFWCSQEJGZ4HDZ4JO675AVAQXHJIF5DTDNQXKQEFUEXWFRIOYJIFB3PKEEOPU3CIM4UFZXLV54JLRX4TTWQB4PMSNFGBKHIBNZL76QKNGFKVXWCJYNVEKBXXF3OWZ6G5FSMEDS2XWG5YFGMI5PEGNHABACEWT3AO7KGEYK2PCGD2PGG36K6PPFWQIT4P5TRTWF4F2ZFOZGQRZ2WKT7EPX2GN3GD3BMNCVVVBFXVHJUFB6ZUBLMWPFNO5J4FN5KNE363XOPINCABDTJ3CXWPNWYMILTNNLEVB2LP6YFOLZIETUPMP3WBX6UKB7SCEDXFSOTKNYVJBZFVFOU7CQK74SYNFL5NN2TJJRCH4VVKVTATAXVOAGQ5B6IYEZNKWXVGKREG533XBMMVTMSOV4YM5R6ILP3NMRCF2IHAZPXXBXYMYQLKFS5YCZVKNATM6GH3KAYEBB2QIQYNS7CSIKTFKF37A2ZCIKYFYA4EHFCQUKWWNIARJK3PZJ4MXRYMHKBKJRN4QUOUZO6DK32DG6AYOFIQEBAXWNLZUQCZOOFVVCK2ARPI22VAUQDXL2FNVFVMG4PC2G73UHXNC3VWWGWUQ7JKN6KOIWWJXKDQAV4AYKDFRPZQRXQA5LP6EVEGKILA4JFJ4TO2D3VIMBJVSAEJBHIIMNIGE5IG3EBEYHZA6N3AK7Q5J3MWY47RIDTXA466FQVQFYQC3UJTOU353VBUBQ7ZKR46KRBXHQUV5QQAF4ROCJODHLQLQUYXOQCYE3A7OMDJO3ZIMPKPQWBWITPGERUYVP2OYDCJUQIRJJ4YTJDUX32GYKWLXXLMMIAFGM4PP4HW32WMD2MNBAYUI4PQU5QJGWAURLI5M24UAV3GAEHTGXFQQTVZTWXKU4MPALD3R6ELU4RWQHQONSUIIDAHHSVIK6HVKDVXN5VKWKVCWZSY2ARWSHIZ2HCGQEALG6OVNRUG2Q4KVERQLMRNLTNFFUSWIGA6AKPOGNDQKSVQRPHDAQAGA3M62GBDDPGRZ3APR6YAWH3DLNCL2GTIIVF22DSHJSNBKYXIJ3SNW35SVMDUVJR6D2JKJOETG3RASGDRD6ODV7J3UZHJQ2ONP4SXRDE4JVOSSANE6MEBLSNZJWL4JSRZ4VHH4TLDIOIRWDS5AE4NWJPVFV2YT7GKVHRK5PLETKNMC3FLIV5AHT3QFA52KLAYJ4RPQT367NVLLR3MY2FXRNOTCND4YYVEYPBDNXOV7AKWVT5KQYT7AQBOAICUIVQUQBM2EM2UXYXVKJDJLQJA77Q2Z2DQXNHDAFVUR3QSVYQTPL5EWJZGXUUVKKQQIER6SJCHVC4QBAJ2RZJ5EJ7KWXVECNZAQD3PS5HEBJELJE6I6KFFKS53C4QSSFYEEA3LCD3YRUJHTUDD45ZD2YWNM4VVJVNAUV7
ULWWBPGSHGIWQ3MXVBBHZ6TVDHEY3PLUMNRHDWUN5WQD2IJZMZE3XA7ADKK6VCOBIETZRSR2EAYCRMX4YT3CYIT23YQVITAIZDZHKJ4BJQXJEDXAYVR5UGKWPSKQZN6AFGMCVBGENXXHWMQGVAACNRCMPZICFFDO2F7VVAPY6QJNLTA5DC7G5TCXFMOBA3UYAJ4YRWSBDA4IEOXVKORXDUXLSEHKFGFJLTIXC3WWQKPHWRLBJIIUR3E47POKT4N6QDSED45W3NPPOJGURUTDBLTCVD36SH45OYVQB2FUJH62AWD3BUODACCVLTJEIKNB5I3ELBPKE5XO4VVNZJXQJEVCCRHPLBF5KBBGUV736HPFLR4YVDK2WMMDVWLWEVF426WAOJ6AFWWE2ECD3HSKD7D74VTYP5GZ7RIJTQNXWBA3MO4YTQ3ILUAPWJSJUICBCRVA2XCS33CXOBNG3RDWY7TIH2CCLGR53IVJZQ7GBLPGZL745FRAZCOQ7A7KJFBJVEAUSCVDVCXS2KQBWFB42ICGMFEFVHCOU277KVNSJ5A5RSDUYZDRUZBBMUK2MZLFH6GEAB5UGIZ26R262VXWROIR4OR3EQJUGBBWAICMVLI2AJMS7Y2I7ICUI3IASSPAMZZCU2HL56A26TEAWO67ETZGIS3DE6FASJJVQMIVYVWSZFKWFAML4PGVETBW5BBBINMATVJX6TQIB5U2KAECFXIWERCGKEG3RZ5NEUSBGD4EYB7IAP4UXZJG75KQZZ2VPWATMLWFOQECUURLAF4JQQKISXCDXYWGXXKGV5N47GJFSI3KPWQYBVUZVI6KVNJBV5RT4PAZFU5BO25A2CFKGTOHPY2R2BM6FCC3GIHGOEKWBGK7AKB5SHVKVS7HM5QN5V4EYZE2TBTVTQKLB3B5XO6CG7QSVCXR7RPQ37MO6HANCOOYH6RUYYOKIE6RWZROFD5YVL27FH5JJZAUO4BVPPDM47JBYTDNPW7KOOLNUPN4M7ASN2QYVPB6C4EE2662CWKVSWTSJKGOP5CPZKAU6GUYW4KTMDHCZ5KOUUHCFY2JM4UTZQDW3C4YV2UH4XIZMIJPBAWUPKMRIBJRMDIQQ24YVTRAHWW27MJ3UTHPUI4N4FAPBV5KWD2UOJXWQT5G55WMIXU7DAXEHIC6VKXNG5P2RX3YW5FMXVPUXYT5BKUQDI7W6VJNL5LVRHU3VJHQMYKNNM7TU3TGUOSEBDDEUHKMPJ2O5ZQ3AICRYQYIEBRAIYVCD3UMPRRHE27JYMB3UZDCKNBUA2N6RXDR53KLWYZ7POWUM36YJK46UA7FLZJ3ZXHIWKDXTJB6OTWJU6UTHTTZKAQBEVSAF6WDKF6Y2MB6YOQUWCXQ3EA2RKVCHJ5C7BWRPTG67GFEKAZNZ4SZMBIUMNOM2J5AZJWK5P3ONUEBAYYOMHBW4LFUQAYBIU5QRCX2CYYF4RZP6LSJHYIASN6AFYATEF4OEUKHW6KHVKSYT724P56XR5PT67767HRTPVP3X77QCEAG5D3A====END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=always 0000664 0000000 0000000 00000002067 13215577651 0027374 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 390
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [1000 times] to 2 [1000 times] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 2 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 96: STARTPCOKWVRKZ2WEULKWWJJIQNWSKEMELI3ICSG2EUJURJDNCKA2IWRWQFANNIKKXI5AKSOJVGQCNTARWWY22QBABUO26OLQ====END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000007401 13215577651 0030410 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 392
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 211.02 bytes, standard deviation: 172.69 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [3 times] to 1741 [once] bytes.
# * Median size: 164
# * 99% of examples had at least 11 bytes
# * 99% of examples had at most 815 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 2848: STARTPCOFLGB3OITDODEEV6RFFLEAAT4PIVK4Z26IDY6N5TZN2PP5AEMK2A2N7XB6CA3IGQNKB7XZ7T45PXZ7PZ7P55XRPNP6323D6TY7KYPLPZ67J654Z3TU3Q35H4RW4NNKZ7TXTMZ5GSWAT6XIPV7X3HB6GV7WT3XT6B6Z4X25R723YXT7P2YE67Y6VXVTBTNX44Y7P47RBZG5OY7IST4YYW37U3UZWD5NWSRV7T6IZV2X327BHWT7PINT5WMTVGBST2KXW2Y33FU7VWCOZX26PI3JY5XFWB3LW4Q2WZFRDRNVM63LQL3WOOOHNGP5SZE5L3S43OFTHQOTFUIKV3XM6B6XD3PF7L636PQ4UKQQAFI3TKA7DLXKN2LS2OCXC53CQOZRV44CPRUS7QHWLXXMQYMXNGJZ25HQ5FS3FDTMFOITWI2OTQLBAAIMOZZRJC3ELHLRZLMCBPACEYQ4MKS42RWLYUJSZRIDZRLDIRJGQ5LDRLVQBYOTUNDVISUFPEWKT4PSNODCC3HIS2Z5Z6MOBZ4JT4GYLZGITTQJKXXVKWUA63PBJVCOT4OXA7HMWERZXM5OYEZ2XD7IZBY33JMVXGJIRZW54OE3EMKXEBNH5NEIOC6CPLAJC5UTENJ7A4V2SKQOXBDTWVZEKMCM3BMZPVOHANHDMJFJQBEUWJ7B2A5NC4ESQDZK6X365WCDVMWAIZNLNNGPHKKWZT6U4NEUKI4NUC73QBNZEN5SZ7PIP2JQX3FAD5RIPX4ZJEYWJHRWWSBXKTI3IJPXQI7EOFM2CGQIIK4AKXFHMA2WLTGCSBQIBNXRETIZRMCF2ECGTJPSIVZZ5EMZDEKLYJWHWD3KMFYWMRISBFAXKN7PM37V2BASMZZO5SJ2KG57OFYFMCSHNAOKTUPELFGBOPBE2APASCCES7JBQFUNZTIX4RE45EKTEXGXJGZELVGDJHZTPEHT6VRGL7XCB545CLGJSXCAIFL7ZW4WLXJEFZ5ORVZ5HEB4JJZ456223RGYIURVAARP6EDLKJHRCUQAKMVIFGDI6JBKQELTW5QN5DWHQ7HLGWUQV4JUCU7JXFAVY2RMLABK3PH4WIKP6DQDXDHOONK4EIKMEEJ4UEF75FYERIWC4TA2VDAQ3SCIPJAW7SLZFEODRSF3EYMFUWXLSUTJVU3XDKAWWCBFYRTPMSK6AMYRBIFKBBRDQNUFQDEGPIGQVRERCTIV2ZFDG26NMTVSUYSS4NURP2OQOYZB7IOCSYV6OYTM5QCRCFECQY5EERDC6KSCSTQOAB4ESLAWIWBPRVAD7RTLE4FTH2XW4Y6OJCSXBLIWDIIAHQS5V6BBZA6OE4WTVSWUP2HNHU4RMNMP5YAE4ILYXUIJLK72Q3P32EVVCKJ2UYLCJQIKBMTKFAWJEMR75BBEO5BIRDIYFCCRCQGGQNOGV4UYY3FBFLZ7KBKZAKOZG2OHJSQXWS53EHMADWHKOLELSXIVTTVCTSICGEC5O5MCKJHG3T6U3JZ7GU3HFDZG2JOL5FSZJOOBEUWRYVNVUNDUDX4EI7UKAUZYK6YORPT5SSBL53NNL6LLSSLBIMO5DOSONOTCAMBUVLMUCON5FI5UVQXLUCKIS2GINQ2BMER266RBKWRFXIVONJAOOAAGOMIW36DGL54HRR3KIQWRTNFNTANKT2SDC5NVUMUPE5DA3Z6QLTZKODIQ3YLLZ7YKDTBNRANZ6KBITZE6Z4EMRB5PK5TM2PKTQLNPGYWRI2CXIOFWFVZZSFXJEVUL5ZWKOWBDPK7ALHVYRUSIR3AKDZ7UFCBKGCJEYJFID3HBA3JFAK6QXXXIXNN6SNEVMRCFCSY7NC66APDJ63FFJKARIUNQVXNCNKCH2JZ6WRIZAHPY5FIWKRSU2RVKMZ3IWMLLNWR3TKHJCM4IB6IYVGUHWIVF5N5OC7SKBP65ODF5MJ6YF5QCPY24C5EAMVA2IIVPWFSN5VGZ3FZ5CQ3LES5I
KLVQEC72K7M2UXLF27KBSNBIQF4M7XT46XFELKMTCAFANCBJZDJFZGLX36CSLXFTFL4RCDJE2IVB2LB2J6BAMNGF2BTCTBBTLIJKWUETLDJZEFRBYKNDOFHGL5RLUZQAWGYBKWBG3WIREUHLOUGKLVXILCILWVR5FRU2BQDSOGNBJBAQHV2FLRZPHKDMYBYLOUWUEZ732VBNWKCP4ZXVBLI2M4JRODYQYSOSXC5QY66URHVKBWFHRLW4FDFVODZJ2DQUHGG6NLKUYPEYJFIOJNM2ECENU2J6HVMABX2ZQVFXIK5XVKJLJX5DVIIFCT5IXABUK53E2WIZSTFAZWFWYQD2VW5KZ6JWTDI3HNRKGY2IDK5WYQTYUE4K3JIAMTX5RJDKYWPG5HGOXKG7XSESCKG4YGRWV5GMEQUGRBAE2QERQYM6KHGCJSNLVLQNUL77BCHX4DJHI2VVY3Q4JXOPLANKIEIGQYXX3PLVAYEJTQJAS2AQ4HLWKYNED5OEISD5TINFURC6XG5KD7OZSFLEGOSQBR3KGOKU74LHP3C6G6ROKSCR3X3STN3RLHT2AGLFGM3BFBST23OLZ2POBP6TZRMSBRPBDIMVFSEX47F775UCB4OH75NKAUHBSLGKBFCMMBR7RSAFHAJJLATTVN2TYKJK7GXZJFJZRWAO3OTX664NGVOTBBFIQLG5DF6YBTFARTOMEGLN47BPSLKGMLK2BJAHZDSGTC2SJXYULZFY7IIQDEMPD2N2TBEK7PPSYMFDYJU26VP7YZM32GQJJMETLGQDFS7W7T5JIU3SLH2ODL3LPFFNKPG6XDX76COD7PR6HZ7756HR47TW7OWA7T5R7AJEMHWA====END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=never 0000664 0000000 0000000 00000010560 13215577651 0027210 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 389
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 2344.62 bytes, standard deviation: 386.42 bytes
#
# Additional interesting statistics:
#
# * Ranging from 1417 [once] to 5701 [once] bytes.
# * Median size: 2294
# * 99% of examples had at least 1694 bytes
# * 99% of examples had at most 3421 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3472: STARTPCOE3GB3CISKWEKFW4ZDD5RTJD7LZLJI4RUQZWPDJGQ32C6OUHUXCKR2VIU4RT6NTM37T36P37775T5P347777XRR4MCT7X2SFJTXTYYNM7XW27H5HXZZ6HXHTV6ZZ2Z4Z7I5SKX5GWMS6M4G5V77DRZZ2NNJT57TFT2S5J7244SLVU4TSJM6PTLTY3YHHPTPGJ5OWJ7ZN4RHPPMM4VTYY3CAPHXDXWUNDT5W5CWM3LOPYB5S3LYVTPZ5TLHEPJLOPAZ4Y2JLH45ROIKO25P5PA7DXF4P4JFRZDTDMFBTQOXMHGOX4MWX4ZAK3D563PH3EQ5K7WWT5JWF5N5U24HGL6YKCYCDDUP2HMI3M7SHRVQUUMNZK3ZZUX4W7ZNK2U4JAGI7ZJZFXRFB6EWVS23U74X2LZHSFXD4NFGE3X4APFU6DDDJZZYU535MHN5TM4UIVHAOO5XS6FYYLZRBPH7HJU5WNZE6WE3C5H3BEMOZV6Z2EZLJCUOHHBYRTKTYO5XOS7B2BNP6WS7GLZZ5LCIOSHO2PL4PKTCASRAY4HCXEJWYVDSZW3WBMSWJ3A56QMWHNPRK4E3J6TIUMK446WJRYFK7TQTKJ3OO7ESAGBLPC2KDAHPYYZ552PFIRTO6Q2MRB7Q73B445XKPSPMRHB55DII27Z63QGCALDBFHWW2Y4WW76VVJIW2Z474U7NQU6BJSZMVAXVKTPXIPOZFFSK2E5R5YNWZK2U7IQNV5LAEZTP5LVMKY5DLWEDZCB32OAYFMUXCLWW62B7A4E7EABG2EJ5FNXBEKZLZ25MNLZWRAVDCH6LLLULZCKP6Q3LR4FF4KL5D2CYZCN2VZNHHJ3KL3D5JL2ESTC6ERF4L7UQTW5JRCMHW7FURZVCXTOO2JZLRFSDPTWEGNI66K4DRIWG62AVFASSNGJMGSYJDWLNKMNTXYBJTEK3PE5IGZ7DO4JI33HLTJK2UFEYJ43ML62BPOKC4SADC6BMO6INZSC5QB6DQKQGWVLNG7F6RT2CPTXBMFMJ3PAUOOLYT5XAHUMCWIVNZT26VOXBLVEWJGI6GEXEXH7HUK5ODEEN7GBZZZVZOBNOBY4ENVBPNFQN3L37I3E5QKOCB7VJSII5NQ3VW4JO2VUNVQ22FUKYB5V4LLBC2CPJLH5VBFPRM6JREPDKOGWVQAJRF7GT7FFPX5M2XR2PQFQ7JO4ZTETPVELXMK6VXLDGNOBCYQZXCC5YG5APFKD4BNZFVMRW5VJ6LWKGMY36NV7LUFMY53OWFQ2QEGTHCWD5IGTLMJ4FN6VAVAWNMR23Z5BHNPS427MGPEMFOIMVFTXHIR7FHRNXI57RQY54RJFVIXZJKYRJQIPPRE33TWVBYDB6LJJ7OIL7EJZL5GGE3HGAUIZAHO4FKWM4ESS7V7JSLOT7MVSTEO4D3I43TDRKGMPS7CUMK5QW76F4RC5IJ36B55RO6I3C6V55NNUNLZR66ZX7IRGPZVXOT7ZOHK7NKHBZP4RQSNTANB342XT4G4255ZH4N6OKXYLOJJISMH3NSWC236XYU2VKXQJ62KNSQ5K5BTNDJLR6ZOJBPF6JYVZHVBBTQENM42J26EWNF2AJ2C6GI37VI7VP3PO6UQ4UHU6YEPKVPJKZLATOYSDKEXU27GQVIXJXOO6F4WPZCZ3G22WUHTJ3HJCW2XEKTQNMQXOQUFPH6SKESBGMJBEZ7DHM67PUYZNVLG24XAC3OLNIS4CUXONRLCXPU6XRL4AQJNF6BB34ORRUS5XHLPUXQZB4KUZFI3XC6QRCGNJ3GLYQZ5C3I6QVT2NBIZCZDIVVPKLP7WYHYO5SFKDCNIQZHTOOK5LVXCU7WW7PK2DMLVZ7BM27VDCFAGWEMEJG3OZNBS6FGCT2E52UVVIHS2UD2MT3RLVHT5RM5VQKAJDZDZH22HLVIU4EDKC3TK5J2PNRKBKRSYBMMWKRWSETMOUZJJA2I7S5EZZP5PLMWSAZ
X7EDEWOIOCU52B6FWIZW5VPQAS6Y2LAEXQ2OYPXVNNWUHK5MVPTY3JZI4LN5W3KR7Z2OC6TTUEDNNWWCLSIJ77IJVNDJOOKVCHVF5XUTBP5NCBQXLEDG7ZWUDGTYRBE2LZ6XLFPQVAPU5W2KSX65L5XDVGZFBW6XUMYWUMPDVKBKWRNJITYNTDNN7J72T43BE4ZQWC6BN6PVVNRDVN2YWWXOUFSSHVS6347ZRX7GSBA6Z3GEI7GBI6MGTDL2DRWANN72P5E5FDQ3JCOC5TSL3WR5FN2QIP3AZFK7W2UVUFTINPJTPCTEXXRMS47J453VHAC7B3JZNNZY5SUBFQNU6T3B75NJQT2ZH7L6HBZ3I7OHHVLGBUF5TZ6222ORDPW2V7RXLMZDYN5WLWDH4G66UV7STOFJP3BXPFB4LYX6TS54IESIOCXG6TWQPRC32JD3OTFG6QWP6AJFH7CGMB6BBO5TZGESUECOCTZXN6IEQ6RQC5RSVX5P23Y56FK3TSL2X7JQI2WQZ2HF76J34FXN7UJK5TOINX5GQ4UDTBBOCZ33EM73MRZ2TMQCF6OO434EZOVST5JHKNAJCLIT3CICCZPMEIZVP7JCYLCNHOYI6WSDYLGCON6GW5R23XFR5IOKYXV6ILHJZWBXVD5TLJ5XKSR7FWXDJTSX76VTJUQXPOVHVNLINFX7VWDNDQP23UFOIYBJ75X2VNR4KZFH55JRCVVLCO2FHBUK5DHCT4IQPATQ4T5732UUXCNHNJOOJKFPQL4SF6VLCQ6SUMJ6OHPBM7VHADM54CNXJ3MYSSEKAT7KKU7MTBMALIW5NYBF6ONTTS5AXO2M46G6ZFF5XOIADCCAE2DEH3OHFFK7ZQAMQNNLGOV4B4E662XMSLPB55B752USUOYTM5X473RUBLZOTUPKRLGP3TDGG3PIZL22WA75V27PNWN7H35NJUTSH25RPZVOJIYJN7G4FQFOVRLJT2ESKW2PY64KVJUYQEBH4LSUZ5E5LFKGF7OOO73NB6WX2VZLXAWATOZ22XQ3GHRP7ONJCPZW74QKE7CDN6WTSD3BCUM5XFFRYZWKNHV6ZNXVPS6DWBHHEZZT2FYIQSL6MH7VPMK6A6DXCGP3XNPZPLAX5Y25223RO67HLGY4JG5UV33L5N236H3DJQVKF6T2URA5RXXE3FDK7FK5BHLORKFN2OYTE4KH2UMLZKV73K6ZU22VTRJW62UFJQMMLSQ54Q7BPABXHJ7YRRKQLHUXZG57OA7IFAKPR3PSQFW6NHKODVJUKVHDVXE6DXIUPQMF6BJFCLIDTO2EVT2PK6C3NR35I34VFVKSN433WNSGLKPKXVWRBGJKT6M6OQ5JTUVVWQJSAZNGR4FGMO7TXS2R7XPCRPS3U2VXLI2YVXW5GQKUC5C42RUNN5DLKL5DRO6FS5OWG356YU7FC2VGTK7XP77PLY7H55P377XV6PXR6ZNO776R63VWK7FQ====END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000011654 13215577651 0031447 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 393
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7542.10 bytes, standard deviation: 8988.48 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [13 times] to 52606 [once] bytes.
# * Median size: 4092
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 36889 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4032: STARTPCOD3GB3SJSLWDKEW7JDC5Q3ETYSHX2WCTH5GGFSY6JUE6YX6KQFUTTV6UWV4EU7IQRMD7747LZ2677773Z6XL5PP5KK535P6LGO7L3T53N56XYR656H76NOK7PV7W3OH7NL5IT7777LFG726DVJP3FM2MVWZBPVRIYO735BGVY732MP63KME7L2IGONXX5KS23DWGNBPWXN4XHFXXUJV5PHSEAWTBZD6V723YHR3H67VONV7P37W3FM6F6NVYG42XT53LLX645CDI3C5WT7HDVWLLL7LAWLAIWTV3VF563WJVCZCWWMV7T4J47OKS6XYYXNP2Q6NT3H5FA37N2Z447W5ODX3ISC2XYSAKT5SRGMRVAZ4YSUHL26W267TTFDEDXMESLNW5JUCPSLYHWTLCSNMK4HLCDUSL3OQKEWYI2AMZB5DW6PKRXJYPO7KZ5Z2Y7XSZZ4OB3IULLDNMOQKJLVOQO2LZU4SCRUMIGDT3ZLMTDGTYZ7SX3JPITS4KJ57LCZIC7555WLOQ6O5VKGIAKATN67NRCVLF3LMCZCWUXCCATT6JBSDSWQ4EE4IGQMHKXN3FTRSGFIJHABML5O7GDQW3XZE32PYZDRJJ3IZNHDJAOCMPA474S6NQSK2FICZ2KVKG2X6Y7PROBZF23TAU2LS2OZP4SRXM3BYZNU36INCKN77VWCB2XTG3ANTQBLVZZH6YTB7JJPNNGUTYZFZMXUUTV3WX2B5QA4B5UZ4LEKK53N5GC7J2T7PRIYWK722QWBTZVGLDSATPNBGCDSXMXWWT2VWRGBRGWFNPGSN4KNR3RCPD6KGX6BQYMYKZGULCRZAP2H5GRDZVWGS7FNJOKOJIPSK64BI25KSJQ7DF43OTF3M775FX2V6A7FLWTR4X6D5PWABTQ3OS2JBLJZ3BEL4OXPHXCFATLBLKCSC5EC6DRGIDGBBHETYRCMTXIHDN2IEQIEI6352KUP5W4D6JFR3ZH5CNJ45QD7SHSJCVIIZIPUY4LMKHCS2XYFALHZDKNQHGRYCQNKO6V2XCEIGKOFBESBUJXS7VJESV6ZNUPFGYDC5OQGCHJCXDQBYA3ZJN2XYNQMLBXQ2KOKRSKJDZ3CMSK6T5KN3VPLAEVADL727DHXOAFELWZVP3XBRB5FC5FBEKRDWTZ53BC6NOUI3ZRXQXGGVQKLAXZUC2ALBQPHSEKPKDMDPKPPW3OQB5OFKIC3KI5PRLM4U5DPPMAUBJ27LYDIJVGMVRO3OUMFY4L3TEHHBHTGIXETWSUPBTP6NQUQMDCM3NYAY5XUPNX2BVZAUHHK6KXOOTYZAPIIAUL6IQOH6KUIE7KYKY5ZNYXB5PYZZPN2N225S6XAM4SYKIIHHPD4GGFOXXXCEPIDBH6Z7IUCZIUSXWD6ZY4QFHYTFY3KD6R2S2NY4J7W2MIU5CIVTRJUUONZNOJIGM5B5DDQOD4KXKLQC636BJIKRAXKCJ5Z6XHZGH3NIIL6Q3ZHI6VXIGMOYUBO4ROG56BG6KRIRZR6KXEEWVJKYKQPSXAYL2CFJG6ESK7YODG7PIKASZ3YBFLCOTJ4GFKNN62D5X23JKYEYWDB3SHKXBCVDBMOWOC4DXKKVGK2PJG4WWTGSQ2ACRJYSQU3F7UMYJWQL4O6KTYGW35QJZ7JK45ORI563MOAHEHPRQMCCTS3N472H2VBUN7PTAYQNSLSAGOSVPARHEFTUECCUVKRWIKCTS62IOU2VEPANAZYRQT3SPYDV3NBVOCMILIXSQX2VS2L5YVR7XKCMTTHWNESOSFA6NPEAWJX4WSHD3KIIQDT532HKECBP64EGN6AOE73436IQ6TCRWWSWK4OROUV3KYHXAQLSWZECPT5ZDWZ5IOPUGIVF5S5RXUKS2Q2CAVI4EKQF3RV4GFRL6R3AZH4QZD7U2GFZD2DZFGQF3TRCKHUBGYNS3FMXACMA354HZ6F7KXPOY5H5VGVKBNML7DZOCD3IJSK
5CAIN6QU43UFPLHOXSBB3RFILIK6KDW3IYMCS7477YQNQKBSRF3CD65WQY3UREYNWXCVDCEGK5RNCBVXGDQZQWJDNCQCFN5EVPIMORLUKZVLM57YNIA2JKO5AQGV33R2AY2KGDI6DG5Z7PKU2EL5EWWU75B5Y2FVOVPRACE5RU6WKQ2XAWUZEUKZAZPGQLY3V6PRY256C6CENGBXYWUW7LSD7BPUFIIQEI7C46SNE5WJUDKNZGJIUQECHZ2ZFRW3LI7SJTHZYOYMAIYXVVWGAK33LOE5QSCP7VEIKHJRCRGRUQVA4PJSYML2EFEE6VYJGQZFU2WUCM4WZSQNSQKTU2KB2R5UFMSHDSR4LKON6YNGEMEAF54A4MKEZCI75JCREZGZM2PEXQRAKC4SSZI74ICOIT5UK4FHAJRKGRQQVOIZYUMLE2UEQQMVJU6HUHWMHSE243JFGW6DJKS7VNRMJ5KARRZPMUO2RFRQMN6WGREMVMD3BRB4UFTYNPJH7WJPVFH2Q2N6DILF3BVZQMFDB2HND2AGUQ33R5TYKCXKZ2BBGE2KQEXUWGSYY2AORBAUGQOXJCIATQRNLDBBZWRQIVDLPJJEJ3BSBRWFC6FXE5SNANWCUWQLKAA5L264HEIUW3BX5RZAVZHWGCI4KFG3YGS2IKWM2YHIZQZZZZ4Q5L4MPMEZORKGNO2OFLGRLCTUNJXQ2ZOLVBU6TWVPNBID2X4KJZBLWBUGGPIQ34IWINCEPAH4M5A2OQOSMOFPVDOOO3S4C3AE4TK3UC6NSJW7ALMM4X2F3ESYFK6SEDJVCF6DG7JIPWUBBZXNID6RIXBL443YMI2J62IJTPW65FVQKXVI76LFJNCJQ5H4T3CHZMZWVK3MNABQRRZ6TPMGB3SBH4YFONFQ4R4MAV6N72HTNCITFDIBSKOTDGXGBY5A6VICOPLORDG4BBONXSG65I3AE7PYOXR2AP3PNUKYYYKFT4ZMTDTTMPDUJXQER6SL5NFLE3J7SQKTSOUGOI7VJKOOCML5DWKYMOEPAXLTXWQJQ36E6FGJTVGGM2DKMMOI75CGMIGX3VDF3ORMXKMHPX7QLBYXOIKCYHNRMJNA6WQARHNAL24KISJMMZWH4CO32BPS4A7ETGBN3LCWX2DXNTOREYBYAN45BSSXGXNVS3JQKVUOF3GAP27PCE4IWI2GBFI5DBWMAV7LZZJUVOMSZXYUNBDSO7V256EMXFS7Z2SDMFZN36RTA47MZCPUVOLTVDT72TAMYNK4LILHUPVKRSE63IKCRAW4GZVJXA4GDW4UQLZZT7XFI44LGUVIKW4OSFWYPAGMNVFVH4ZQTHFCHOJSTCTDRTONCFQ73HPSKYZGDMLBVOOE7TC26QLMDYDRFFRDCVUZUI2NSPIUPMP65Y4MKFKZWNWAYRD4HGVLQHSZ6WLRONA5BFIULTGNC4ERNHW6VZ6OMLXGAUDQTMQ33KK73TMEIM5QSWLVZYV5QGONMZEOLLTWGDNXNUAB44ZZYTOMXKFTNFGQRHSXI2P4SO6TGV6TTJ6STG7AY5U6TVCSA23O7M7NYAJZZ52RGMQP6ZEMEEQM7XYFG4VRG3W62AO5QTC5BDE3HOIO5IK2JDTIX64V3DEIC6WTSELKO5OT3TOJ4KMYBT55YV6SN6G7SE4SWSSBWD4OPCGPO63C5GL3CA7F2PNIFSJLWUKMNPJ34F7SUEOARW5DXH3SL77U7SXCDLRDU52FJCFPYNAX4QG5BQEEL2AZRB5QO3BAANXUG2LRKF2IGFZC6ICJJVHEHZ4256UC4YTEYB5DKOTIGOLS6RAGUKLFENVEAKBZSRZSRQCHEXRMYNTB443T7EDHGLEELNEJNQCDPTTF4IHI7XARRQDVFE7VFPSDQMAORDRQQC7A44HZXFVJNYJIXLLCIQ7LM6GWRAVS6C7ON6WC4VLQ6O7LINCV2YZQ4FOCFLY7WFYYY7UAYWMOPI4PXJV25T5NMEH6LXDVEDWASBID
GXBS7IA2GK37P57X26XT7P3777Z5OX2SJD772H4TNNIMA===END
hypothesis-python-3.44.1/benchmark-data/text-valid=always-interesting=usually 0000664 0000000 0000000 00000003027 13215577651 0027567 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 391
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 8.03 bytes, standard deviation: 23.94 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [898 times] to 290 [once] bytes.
# * Median size: 2
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 129 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 576: STARTPCOL2V2NN7BSADH5FNI44PPATCH3BPZS5W3BY5WONXJ7564CJANGFARRNNNOSVJK2Q3LM6J45BHT6735J5Z77VV5B63U5DQRTOWGD63GQQPOQWXCA6MPE32Q6VXF5LIRUBEL6RRN646QYJEZAJTBHC63HBTIMNSOPHQWMQQ6AEK6HIOWYLOT22JLF753BMDNWDO2RMZERN44FK2RREWZBVUDUQXV4TQTNOLY7UG4OM6UZIDD2JDGRTTD6QSIS5HMDYGB56HHUFVHQGFIJPFKVV55NQGHVRUE2GIVUCMXIKT73M3ZGMFTJMD46X34VAPLIEA2W6HLK5KHZSH2QGQBBA6CVJAYMHDJFWGE2FPVVEAFLKJG4B2KH7P76SK4AEPBHKOBOVZCR3NVWG75WGXAAEPMEVZGCWYHNFBC6RH5NWZNVZP3JW43ZAYXQEJEJ6YFEYHQ4JKHRR2KIF3BRTAARLL3AYQ3MLFZQ7RCTN5J32CXWGGULZZ7D7TWIRODAXO26ALU26HIQVZ2QS6HBWDCMBIHOUXA2J5TCX3HZXF2P2PKNT7FH4BXV6H5AN7NGC4NEND
hypothesis-python-3.44.1/benchmark-data/text-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000011756 13215577651 0031650 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text [text()], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 394
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 8751.32 bytes, standard deviation: 10378.56 bytes
#
# Additional interesting statistics:
#
# * Ranging from 2 [24 times] to 67837 [once] bytes.
# * Median size: 4892
# * 99% of examples had at least 2 bytes
# * 99% of examples had at most 48101 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 4096: STARTPCOD3GCNSIO3SDMEV6RNAWQLSL4CCMKXOHGM5M7QPJ3HN6HOIZ7CQWIRPKX3XOUKASA4YRFC73Z7H3377X527P3773Y6GH635PWF6PZS3R7P3WFZ56XR7NTC5PTZHL575Z2V34WVL7LW77X2CFI7254PCO7X5HCK5ND7NPWZ25HNUWK2VR7X567VRL3LIVHUTXQWWHO5SLNCR4TJ6HWSW5L7764JM3R5WFH6TN3T7IH26XDXIXOX4MXPHN5GJ5YOLZL3XHW5NMOQ6YR3UGC43KSXPNFTC257NXNTPEPLSKP34VCFDRI33EP32JTWNL7QRYPAXVZ2Z5JGDPFLZ7VSZJ5HXQ724DTWZP5U6ZXGXXOZOFTNOSPD552WKZVHHVUOJ5LZMDQRM6CWW56XXELSSOQK4GM4VUFH2Z5IOLOOFHO7UXS6UWBDGVPUEL7HJAQIVW2PX4C2H2ETO5R4DYWAVKGK6WMRKUFHWCPXS45AP27JP3D5WX7LFV5RLDMFF6XTNFI5PDSUFBBLW53OLFI7PWFOXVTHTBP2WRR6M6J2TQUTHGOUMEI3Y5VVJUNHZK4KLOUGVCBZJDFSEW35VN3TE5BKJ3IISFY6TW56F2VTV6VYJT4WYC2RJQJ4PW4N4CJXUHZOIYJNICN4TGKLHPVOB7TWQ4WIB2ERXJHQBUBRXXHQSUFX2IMFXEM4WWXMVM3U3NPKCVPO2WT26ID7V6XGAKLUPNDSYQGG2JPJHLTGQ2AJ224KAFDK5KPOZJDQTYENOMOSPFWKXLP5MEM2O4FPKQB6HB4NMAJOVHGXX2OSEWJCY6WEFXJJWSYR3SBAMRYLNKEXOOFKGLUMKGPPELQOAWFOPCKCYN2FDZXYBNYBPMOSCKC43OPNIYPO2OHCU2M7BIPE7HFRYRDXCQ2MBTA5HFM4I4MLXKBJEHDRGHHDFKVD5O7YMRG6ZBBHEDK47HJ5TLVKME4EXDKPISHRJ4AFGJUPBGT72N7UQNS5MRTW4ZM5RNE7USSQWKLWTMLNWHHTAZIXO2QUWUEUZLLGLKYILCIHYTAS35Q4757ZYF7YM4RGS6YZTFNMUCSYN465DFINZS3QVVWUJOEQH4D6LOKT45IFX6JEYSV6STBU2XH2VCW3QRU7LQMGYWMVDBOTQNDKBSASLU55CNUXWOPUCBLIXTO7JUYN2S5BGNDKSOXVCJBBDUVPAMPVIKSSAHSJCSXZKFDNAMMLBXHLJX2FOA5F6F7AOB7AI6DPQHYSF4BWLUAS27YI6NZEQ4KJFSEOVAVFBIUHD7GU3ST6MBI2V7QPKNS4QUJYPPBCCE5JTM67F2L4NJF7NEOGXFCLJERLM3SDGHBCMRFLND53STX4NBLIFT2NCTSD4WUV3F2BGJNI3YTFEVH3AA7TKCHRUIGM5REHWFFV4LK6EE76X54VLJZSI7NW7YE3CJZBCKETWSKAAUNQJH62URRB2NMIZIFSLBHLN2DENHIQXO3UNIOENNO62XQE3YQBWC45KLEZEHWRFGQITP6RDLU7NUZ5TVEOQXVBEOZ6IIPZU5Z2PE7K74SY3AZVX5FGN3YGYM5QZUD2X5CCXFLIUFDY3WCB2MQF6NY5KBZIRJP6OXEGMF2RPDMYHDEGBTRKNQWT7KGW5D2KT4A3VHSPIDOFEAI6Q64PIMT4JCJWRBEALIJ2E3LESRKB2LEHBYRGNXBBEIUKMSAFUWFUA6RBGYC26D3OPBZK7YTHF5QG26PRDO2JJWD3JBIYBEGC5BVASDPU7VQTHSTC7AOVA5RZYKATIYLEVPR5IUUGKMY62UJDT7JSJOLOQWLNE7ZMDWHRF62TZXQJQQXKWR4QHGXPDK3EMR57TTOVG5NZTCSFBGQONUVLCFVJYAOEMFZUXOQ2RTKGNR3EEXSTMKOL4E6F264TEBI3FTIHGMQE66F3SS5KT6ZVLU2TZPRLNGW4SLZTHSFCN6UGTVQAX4FMPR7W47QG7SHTPO2HVJAJ
NIWQP3C2WGOQQEVXLRQLBDJHXHJVGC7IJSESMQ6LQDMGFD73WEEFPMKJ3M2PPOYGA422A7DDDQLO3WHQGZD2XESLUNL3IYHCAAJSOXKBCREZOF4RQ32QBAYOE2FSI55Q4GJ34CQ5EUGZPPW7O7JIP42DBSVKS4V2BSH4GRH4UDG4TWWSFJQ53A6W445RZ5DPQCD7S2SMCZLTNZGBWMWNAFBE5F6XIOKI3ORKHCLHDHJNPO4IVZKRBXR3ZF4LV27247WETHUP34U4LWSYYGWBMR2UPXRGFAMBY2BJI572F6R4JDA2MPSE7W7FFFO2XPDWLAFWKHUPJMFUX7CYUE3PUHL62R2BE3KDSVNFXR4BAA7URUFTBT2ZSCLFPXQV4VQXQOIBDKRVLYO6DTM6EPKNDFARLTPZHVGTDTL7RXRYUOK5EYCHBSND4XCKSN3NAGYKPPKU47CSMCBPZBT7LUF4SUV4ME2Z3KQTJQOD53PVSODQAKUMC6D4ZVWMAQTJ4O7XXYGWFWV4RVMHIS4KL5RT4WSSCH6S6LOKKGREXGCALR3V6VRND5RY2YL5CDMSHKHNIINAF6PJFIA4R4NVGW6XI7FLKCH672VGFATYQ2JEKCENSEQOVJDCHEZGRIYKN2W4KOMWIHW5XFQI3CKASJ7MMTVAB5GS4HNZDBO7BIGNTYIXFDHI4PFT4GECQ6LTHJCRSHDVRWW6PPFJAY6ONTNC5TNU4FBSPNX4TWTTZFGGF6HDCMMPLL35STAUJASWUJDQ4REPBCHP4QOGTIADSGHDN63YKS4YX62BVJHSSMZ7O6NWI2HD7ISWAOXPMFJGHGADBVXM5CCIVQNAC2IHQGZCHC2RQPGA6DKH5EXNM2OLU6G62YKWYYF3MEIZ43AKRMLJDXWGONGUGDWI3KYYVQUU4XHIOIOLXFRFHNDODV4LZ7FVZH3JL47Y3GKTV7ZU2EWHB7I67XYYLFO7MRYKAA5DNQRQQON3ODSNCZCJENWDAC2L6FVPZDNIVD73AWOGW4JWYOVKG22FRSRVO3MDY7OTCF7E63M4UZWGLKR45U4TNKOBHFQGDA3MAMDZRLOS544AP5ZSSPMO23VSG6WF6ARLKXS7VPPWXXAXRYBXKQIE5XG5A74AAMAIOK7TC3AZJYYOLHZ4QJZPZQBB2576MYFW4P4AJEZYJOIT7GS5GNN7C2TPSQ2SBQC3UKKP4YQVT4CCQNU23ZYLSLWASMHZEV4C3U2WL3AJPKHUHTEARPGHIIKNHQU62FSAMAUMQZKI5KLOQZHUMWYELSI2G7657V2AZG5GSPB7FZPUQHHUIEZHEW62GOVU2NNWILIEHROFHCYXSEM7KSQGDC5EH2Z6HFWI3UN5VDIRQND6RRBZN32YBOW2ZTOH4LTGEWG7GQAZKEATTRGPRBSZ2VCIIDHJOA6ZXLWJFFXSMSFJ73NJAG22RJF4HYGYGPAEEO4WSZR5ADQCIDZNQBXOLV2N5RGUEY2TD2UMJ7BDU3KRXXS674NGQX3UVSXWDWS2P7PRMKTCHRLOKN6CAMMHUYGG7YKWRJ7GWALUBUIRSEZYRZA5SY4OCNHJBTU4GFZEZAADG62GHMLRR7ZOFXN4LRCPEN5PSABKHLKXHKA4KGIGNLFVEGQLW2OP4J5YSHC47J4NTUJURDIIF4WYSF3IMPD4PKSSJODJGGNCTLEKV4IUMGQZGOJ6CTCEJYJ2PACQM2RETSFROTT2TMZRHDEA7ZUAECDAW4J4K6DZWFB2OMKH5IGUXY34GUFQK2S6OHCJXI7ZA63KEHPW57V7OS2R7DQWJOT3R3XDS2SF4VBVJ742QMPOQIKKKQWD6Y7NKB54IBMD2JDPGMXRA2AOXHQACHRXOT7CFXSBR7WHTNBCMQJ7GOMYCZY5UXQ3S6J3QJYVCNSZCV3QJY4PHS4WVN36D56N4TTU4GCRUBXS6O6XC3MC5DUDSMYXQ7YXD33Z7UN4HCVSWR5VFVICAQZ3BRXX7KUNXRLF
JLBBJUTKHMTOVACRFFTZDQ2Z2CBHPPRO2V5ERT7QGH5BHZLI75DGOMU5WIDS7TQS7AR4Q4F57PWJ7P367T6PXL577347GD55ST4677YA3JSFULI=END
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=always 0000664 0000000 0000000 00000002127 13215577651 0027456 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "always".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 421
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 296.00 bytes, standard deviation: 0.00 bytes
#
# Additional interesting statistics:
#
# * Ranging from 296 [1000 times] to 296 [1000 times] bytes.
# * Median size: 296
# * 99% of examples had at least 296 bytes
# * 99% of examples had at most 296 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 104: STARTPCOO3RFBBWADAEAAYBKT5L3LNAEASXMF4CUEBV4VWA5VHYHOYQ6TT3WZI63DR2V6SWICISMSEREZF5DDM6ERZPK73FRK3S73AHUZLJKIEND
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=lower_bound 0000664 0000000 0000000 00000011314 13215577651 0030473 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 398
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 5284.78 bytes, standard deviation: 917.40 bytes
#
# Additional interesting statistics:
#
# * Ranging from 392 [4 times] to 11932 [once] bytes.
# * Median size: 5137
# * 99% of examples had at least 3847 bytes
# * 99% of examples had at most 8235 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3800: STARTPCOE3GB32IODODEEV6RFFLEAB46EQX6F4XGAUHBLWPFXON7Q6XWPYTVWWZTGOSIQNB2DP6GP65PX77P56PL7PX567XXHXLY737GON7HRZXOW47OPKOPTCV6U44MLXHSEM36TVTXOO7LKQJ7X227XFT36T3T7A7VPGDTOTVN432GBZ7L55K72Z3FTHZZUZKZHGOOTK67XXT6WNXD647YGWLKKET6XSTXWR2WJNI76CV3IQ2TXHQRXU7LDWU67W4455D27WXE6HP5L7UIRF2PCYP755PJL72TT377K5GB5K37RVN37TXCONO57N7RDS57WNKFKPX5R3RCLKOZHNEO5LWXE2B5J3RNP25D27KJ46POVQJ5X6VMUNLITPY5PFW7ZWQOWITZRV5NG2H7LDBXK73DT63UU4GBLCMWRTS7IDDLCKESONLEXTSMS7MS4HHINM7S5ZTXX7Z27DZQY7XHDK2VBXJH5M3B4XCG554IESETXT2EM2UJ6VO5JNFTX4NZUORUDWHL6IGWF653ACKSGUDH5JJUL23MMH2RML2SN3Q5BYETP2HD7ESLI23PXHZJGQWOZRGA3TAIA4KOJQZDMZW6PI477DYPVNOOD4SRFXBRVWBRI7BXQAPNHNCBEBEWOIFRKYAKV3DVH63O6W7BUCB6Q5MMJJPCKCXPU2JNCTDAHSJVLRUXNU2XQPLZP5WNUK6LSWRQILTWM5PV3EN2DLRFGWDPPXVU3CSVFXUGXUKGA37DKTJ7JGSAKSFSAHOHFELNM4QE5MUJ3TWURRKYAWTMTMRWOP2HVQNJQY4QYUYTF5ADR6CSKWME3GBZKWCP2ZVMKCBSKIDLRSXXBAAWT762W7NJKHTXQOHVLEGQPWWLQCZHS2D7XED22WPPS7QQAIXO2Y5S4KXXFGF6INKCU324MVO3OQU64A3U4TJAFPNGU72VV7O75KWFURMM5K57W5WZ2RUITJSHEKTMNXSAQ2IJZXOG2U72AAJEHMPVJXDC2CCQDALVZRQZDLAAXFBIIK4RSC3VDXXQE6C4OZ3B6Z3VIT3XXODQY6OHDGTWTOBSRQYRYWS5RAQT7XF263LHR4NT5AKN5K5ZCEV4XQEP2P64KU5APWVIX6S7NDFATTTIUGCBBYPT5FAZ2GXUKECZKZ6GZYCNJVT5SKDQOJ2DAQ3ALN5PTIH25SHME2G7VXJUEHHKSNOPNDJZIYT3ESD6IHSO77GDEUC4AWWQCWF2EJEQTL5KODMQXXUHX47A7EKDQO3VPLSSJFRGW34SXCIRNDNKO2QJVVBMPJLHKTMWVMDQGHQSFNTUJ5AXU5OUUM7IPFMG4LAP5NKSPGKMS7WP3B2VN7GIJZHFU32W3PP4WDNJZCCUI4OJHOGJP6ORDIISJDAPGEWWYQ4FJBBUCPQ3HSODT42MJGBZN3HCQTMUHWXPRAGNUTNHTFBHOIDR4PBR6BSOPAHK6BRYWWUAJHRNFMRX45DYYC36ZHVMFEQ5LS2RGW6A2Q4QWAPTZDHOV2K4YJZIGWFYRMWDLSGG6QFOOIJWNLSYDV3AJQVFHENGTB42EGXIA5J4DRKGOHA4ERKCB4NBKRB6RR4F4EHIRM2ABJQ7LXPU332WDF5R3G7EVRM4ENWPEJV7L4FYPEI7OFJSPNZ2GSDHQCFJGRQG5QJWJFWRE2NZBYILOI6IK7HKJ3PAZ3RPD2HQ7AYF6EELTKYGHAVHXJZ2ZB4MOBQNOMQSZEVX2P4Y7DPKTXLQVPK2QHCVGUHBLES5QSZ275A7OPN4O7AQ7OBPF2I45JLLMIUA45FBL675QGMBQZEFFZKZ4GS35S2RD6TRK2ZOVQS7QINKBPLKHXOFTQYNNR2W3ALMW5N6G45SJ5WCRMWHHFUYVTDYBOSYCC2YDP3VKJLZLGBY34SY7D4DWEL46YX7UHXMNWJGNSUSWL26G7WBCJRJ5LC37AXXILSQ2HF33BZ6JL2G7SCDYGKNTHC6D7NONUEML
LK5TX7ILB5IJDPOEOX55G7P36Z7YWDJGSWK4LG3PJVHY5F6HB3GKZPOQWIIEPOMMM6VOH4BH7UCLGJULKRKEJY4KJL53RANB7AIFLB2THI5F6KQSFXSQDPEXTFAUELLIHQHPRPRNFMR7BEBFRVZCLIPTFVCZDBD4E5BI2QNYJW6QTEFLUWAO3FPFUFCVI5UKCYBCZ3RZVZCP4S3YW6U334RFU6KRFVMGFLCH7GBLIR5ZEJUHYKFZMMC4JWM4LHXDSUOYIVDNFJFVF3UNEYIBS3EAUAZON7TE5SI5ZBOMDJRDRNJ6AU2ESLQUBCWBESVO6KVJRHHLAEKBASZKEU4FYGIBRCDZVCTAAZZ4RJAFEGPDUO6WQPZTEY2UR4E45HGFTHJSB34JMCVB3BZKIUWS4AGH3IMHKLGJLUXR42CT4VM3RKUSAJ7USANSWO5VCQTG75VG4AIVELFUH5JHOPF2LVWMSYHYKQGJ4WEZKMMATSBDTRVT3MRMDT4RKGJLLZFEGU3GEAI6IPA3TIEEOVS5NCHEJIROMBYP4V5ISR53IPW7HKKF7T4BRIWRKUVQAYVIEHOYXFLGZCUJNYTULCAMDTDZDAHBJA7VHLKUBDDOQO3LD7N23THRZZA2UM2QLZAHFDETIYOGMYL7BQODOKYOET2TNLCERGTOFYXDU65WOLGRP56RFSPOIC72MLF72SOK3U4BHA4HA4D4BV6VASFVNBQUCMVMDNKOS4VZHN7H3AH7HI24WTJ6DDDG7PSXZPUXRQAA5ZQ57EFPDNWJ76VVI6BTY3OEUK6ACLLR2TMOHIIULCSXY7R7CS42VU2HWMW2N3GYSQA7Z7ZXBZXAKJCG2P6JWATAXUMZBMLEHJBAX4A4763ZBGFQCOXQZ4PMYWMTTJZOKEHCO36HHFKCS4LGQMPNO42STCFGUEMF2VFEFKHEHYZSUDUOBYTYL4FFGM4UN3RPJIFBZVPITW27PQ76I2XI7PKNYRE6UV2HRD3LN5F5RVP2UL4LCXW5DEN7PKCLUCRZSFZKQRREJJQQJVCH5KMPYS6IGLOOA66K2BKJKBSV3EK4ORTWAESN64IZISZSBMSBLCADDSJQKYAEBGGFEMECTPUWMZH3Q6FRIAX67CQXSOSBCNCWRMRSHJQ6BEFWFA3PHKZLVAIO363MRAJZED5LJXSPY5DFCTIPSYASSPXRJ3OZI5NDPU2CB2TCHV4FSXTDO2QAFFLLLTRLPJN3ITLVEW4XG22HAQLMIGAV2I5UH4YYAIMPFJRAPY6G7YXWQZ7SFV2FTHEU764Q57DQON43USTEIWBGIT77MTBPWAAOE5WJ3D7S7EMYJG3KMDTC2Z6Y3S4PLYSPV4M2Z6WQC3S6FYPWCB67R6L7VICZ4JH47Y2WEU7PW754FHPZHT3MU3ER7JDSAUT7PPFZZPIK3TOW6RBJ45ODALXYR35KV7QEOUHOU25Q7ZQBNZKZBX2P73SVSVJVTYZ7GJ3SK6ZIGRISYDES2U35XXEZVDOMCDSJL3I5GIZEJWGMDC5P6YVSFUSDAY7ISK3GCXSIXZX25OG7VHGGAGOIQAXDHROUV37NIDGMLZQOTAH7TVJBCXILGOM4QGE2HS5K4B5RJYYEROBM4GRAL5ZL5THTY4C5O4OLV7PZXLHZVGQNOTBYLCQYPSTXWIUDD63WMWQZHLZKMPR26DX5EDI5TKSLS2BK6OMCBTOYM5EKXSBHLGNNY26AMLK5PICFC6WUIK3GSDMRCKUWU2BC3PYOANHE2ZMY2MNR7ZGLT2KOZHR7PZ6PX7XL47HT7P77K2WHH7X5B7MA6ZJDEND
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=never 0000664 0000000 0000000 00000011344 13215577651 0027276 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "never".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 395
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 5361.16 bytes, standard deviation: 975.56 bytes
#
# Additional interesting statistics:
#
# * Ranging from 3494 [once] to 13144 [once] bytes.
# * Median size: 5186
# * 99% of examples had at least 3950 bytes
# * 99% of examples had at most 9406 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3832: STARTPCOE3GF3SISLWDKEP5SWF3JRJDRED4X6RLRHUWSD6Z5FF2G7AWOKZ2UR2NIVDTPCANEGIJXYT5PX77XV57PX67X56X2Y6GBHX67XZD757WK5GTZ5T6Z6VV6N5LG337PPTX7P2FJ6J7764UN7FNVLZB6XTA6SG6SM2TENYWO7MZ5I7COR573TFZVMG67HZ5VXK2ZWLLGLFKL7T255U72VVEJ5OL3PYML3K57G7TIT2ZV5K3EL3XLFO4N5T7W7NM7VZD5LIR7SV6TTHJ7X7O6XXRLJPW5DRM47UO7TVJ6G27AF4MLTGJ7Z75HMTZT7JHWTPK3IZSJ6J5DWT5P37KLR3LR6UMT6P4W6P53DYI5DX7O4Y7D3Q62S4MPJLCC57XWP3I6XFXJ6YW6B3WZVPKD5TUPW5SJZ5WSWI6H7HDV5HCF45S3YUJ5G5364N7X4OOEXZVPOKRUPDHCHYZGPLFB37P7MMQYH7TK4ZHUWRWFR33N6DDC3VP3RX2YVHVMTO5OS7YVMN2H53NRLP7NMN6JTSKZ3PVJGG36R5PDF3TEZTXGR2PNTI7HXYVUNWHNWRSJRDJX5DP5VK2P6QGV3NJKMJHWM3QZ5IEBSYJNNGFF5IV6PUX3PORKRZ3ENPRGL44XXYYVAQGJAZLE5OHPV5W7LHR44BNLAOSBDEOZ2GUJRC4ZHWFPL3T4QMIOPJAF7E25MGV5JL44JJGOMLCTDSL3R5OP2LQK6DDUPGG7R2KQLQTTVYVATE4RFMC3MHMZQIWE23HRVHVLUZFKBETZD56FQVXNAXVRVFVLUVCLY5G2QBIBIHCUI3N4KQVPMYVTVKYR7N6ROOIGBHPLYLK2LAZVYMOKVV24Q2WBIGSBCTR5ABCRQ6U2D7JPWAV4HEQCEZGUDDGOOKOZUVGYBUUEDJKXPGZXVBTNQLKJFMLNG6EOWGQGZCCJ5ZUTKYAVCCMHNFRRKC5VAXEOCLHXWD5RB2CZ4XIDWIVUKBMGTEGXIF335R2DGGFODEXUBIZEDUMLGC34AOXVXIR5FAL4FI6FEP52VTVU7VS55SM73X7MQICSJ7GPNBKAIKHVV4JM6KM4FABXCBOVM2A2ZBPRB3JJURDR3FVUTJ3RP6EBUCCX2G7WG3S2V6EFEAVCSWSUCOZD3B763QADEYOMIJ42CQ42KCIJIAWUPW6QUZ5KXIS4AFSMKILBG5ISZ22SU5DRKGKWEE7TXC5GAFUGKX267WDBT6I4CA6SOJUM7RAKYAP4EEGNC3WCKUCV56EJGXFMQ23VHLFYEVTRVSBIT5TMT7AVXUAMKC5AFSHWIYJAEKSWEFDRWU4FUM5SRHEIXAENRU7EMLCISUVBJCUZDSGN72Q42TNJBZUB2TEW4SNBI7M3OMG6VQVPNNIXYAMCN47STJI6T6IVBWMWNMDKWIGFGUAWOJBWBHBE7BGDBDPBL5IQZWRSGW3DFBJ6TP7UMGW3COM2BUM5J2DM3STWHGQUL7O2XWQV4Y57NNR5SEYFP4RLQMO7SI4ZNHKFOIBBI4TWWEBLMT4FGCI32WL7I55INCAZECCVS2QS7JY3GHPCITZRSJHTC4BFDUQHJ2ILPZBJVZAFEMEAHZMT4BOUXQJDZSE2XSKWVUW2EW2A3ROMYCWLQCZ2PRAGA6US3YE24QEYBRO6CEGRHODGMARVF75YT4WZP3LVFJ6KGVPXXSV5PKQSYPBZYT2SPMWP4KENXYECKAJCZN2T6HSNK7C5OIIUHS4AHCQ7VI25YTCNGNPNLXBEC3UHZYFMSMIDTJAYEZKSFVECHVCAOUCPXDWP7QIONDSNCMAKPE4J5FIMOEQZB5WGWPAMOF7IRRKENFXKKNS56NRCJPRDGE64IYUS2JAOKVGVMFNNJPLYMBIZMO7BJ5QWOBDTDCJYTZYF3CTGPD6AQ2Z2OTHV7UVNVVB23YGQUBIESRXADWOH3WQFQJLJBNNDZB3O5UZ2D4BSQYY26AXCSN3UIBFRR
KSLJBHJ44PADLYYXW36LQ2ARN44FYVEI7OZY7BEUA5SC3CHAZT5SDL6EWN5XZ4HP624JX6KKKQQYI4XKTDKMADPKAYKEZ4MLRSQCPLJ3HHF4FZIN4WKJTBMHBGGUVNHMEISOIMNGLQVC4AYKG2OIRXHF5DFCUFODFMGVMANRHM7MKX3OCRAQ3DJHX7EPMRIGZTHQPK54VHT7GSTP6EERJLHZEWXJGGEOPTV24AVGV4DNDFXPIMVHFPMI2XM2WURES3GIMBAOTASJYWST2XZQN6W6IL5QC66BK4U56LJEJQ6CYPIBCROIORVIER5TENGEM7JJDZGTJOG3PRXX5CUGNUSLBUER4BGKBYVZXIC2XWLYX4PWLMOS5NQJ4I4SOGOHOYSAYEYIZ7WF2FPWHDIZLW5NG7TSBAITBZMWESBMR2E5T2GPPTHLOP2ITR7L2RQIYV4KAF7ECQ2UFZKHDCRVOUEXABR4G36FSM65XFJ4BBXREP3CJZXVDVEFBWRIFKEXISGCXPLQGLZISCSEEKRCANUXXIM3EM4IZQKMFVL75CSOYRCZYDDIL4RO4H3UGTXR4PUMWHFGU2HWMSR6MUSRBWP3KW3CXXNG4U2DLGYG3HEQXJ4M67JSQAIRJJI6KHODH5UVHA2MCMQ37EFE3E4UBUQ2JDB6K2VLJAUERZQUFEHVVTT3JE3GSCAJTZNG7VS2QLFRIBPTA3DQOKK77EOZOX2QWNYLLVT5IWV4VAYC42JGDSOVSJTBOTKDD62D6CVOAJNCPKQZZK4EAUQKWQNFEYVFOZUXKYLMXFKIS3SQMYIUQJSO2MSTMEQ63TRFAOIRD54T3GVYQAZMUL37PKNIICXHSTJAV2NB2YAEMGMN7JKDXM3BKDRWAC362JXYFU4QA4Q6QK2JCUQAQ2J3TQNBW67D3RFGZ6ZI2I6EEYTP52Y6QVG3USVK56FJ5PK5JV7J53R6AZLAHOR4WLLDR76VMJ4PKMEFQMZRXIMLTV3A4OSA5KJCDRST5445VUMJJLYAGQ2V3WM7RV6SPE4JF6P5QGF2STKPLQZ5BIP5OJYQ7ORLI4MWBYKZPDH6OQU6OEJJPIKZOQOCGBIDDBCDEEXPKICYLQDNAYZ4S5DLDZJ344VR7MSDQVPWINZFNL7JGBVEIRP22LJ3UGVSHGT3FQA2O44RYCS2HQ3ZE5TOTQJY2W74PY7VIEUP2GDUTJDQIQQVEKHAKLBK5RDRU2R35JK6QHDCXSRFHGGNC4XMMVON3N75FY2RA5SWZ3AFRWTWYGFJYKOYO3GGIKEGPKFP2ACQZPZOOP4H6PN4VF3QFYUK3ZKU7EXP3FP7WZZYPNE3B6SSNA42R2MVXLSMBAERIH2XQALE5H2BFJRMAESSGHP7ZUCVQYFBSNVZ5YQSFQP3NE7LJHC6V2NVESU47RK3RCYQB565F5GF2V4PILKEIBHVCWTSQNEOS6NYGC5HVW6IWC6ZP7KUW7XY4NZE2OCTC5RVQJWMEE6ZRK2XGSYQQC4HRHAM6T2J5CB2DDUC5ODYCBTP25WSPYKQSFSL5IJJJOAPN46VLFEATTFUKWUQTDCYCGEDYN4HXJ4RGQDVNUZ2L3V25HW4IT435YP6ZKD2JRDOS4DJ3CY56QWKIRIUEJSFCWGCU4ARE2JCZXISJXUQUMSRDJWYR7SORUCVPIV2RQPE75YKM3A7LQJSBRSQBPXG5ZEK3A7LUJJ7IGHXXR2XSFXVHWUFO5POKMGSEXJQ5ISX4565HDMVKNPIXJYHYVQMLGBVIS7UNEHAVOBS3KJTWHBJNNRGT45F3GVABFES2CDVA77C57X26X37P3775Z5PX4NM5775D5Z4EZ2CEND
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=size_lower_bound 0000664 0000000 0000000 00000010544 13215577651 0031531 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 399
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 6396.64 bytes, standard deviation: 8280.42 bytes
#
# Additional interesting statistics:
#
# * Ranging from 392 [333 times] to 66330 [once] bytes.
# * Median size: 2974
# * 99% of examples had at least 392 bytes
# * 99% of examples had at most 38237 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3432: STARTPCOH2WF5S3S3YDL3SU4VWTZB5KDZJPSXZFEZOLKSN6LZZPD3ADEKNRXDJG3LQ66H26LEIEQAIH77XV47P77OX547L57X37GNUKG47D26YM7T7TC2LOPR63OV5NR7VMFPP7V2GTU7XC3VV2NP37HIPFOZ7X5D5LAVLQMS2GX6TPV3CRY36XPKHVRSPZMK5WHQ675L2FH6G3PQOPA7PWGMDGYYIVWT5OCF4VIZMHWWHLXCKPR7UJ3EJU4DGFM6W53EXZ4NRVMCS555ZF3LDN4MMIUMMFK6P4KCLEUV5OKO55F3WTDOGOGH6ND6V5I2OIS256AFVK2YWRNK7Y52WTTJUPYHKZLQZVTLAY3W3BJHVCFWTMV6KTE3SGF64WEYPNQ7NXVLW56635Z3SNS5HHS3SAVJ7W5NO2S4FNIE6XDIXVUUGZVRO3CVUO4OHVLV2XFHLPOPDDFQSZQOQBEDSUX6KCROGGBKXIRIXBZSC6O7FJGFAVTHLIGM6KULXUYUSTKBKYQWOFE6VFHXXQYRJPYFJJJQLPJRFXFVGXXGWMI3YMEQKMXKIRZPM543YWKKRFSQIL6MUXK66EPDHMZLQRK6FQOHKQTGNP5DI6IPZ3ECKJCFDLILF62XMZ32RRQKU62TBG2ZHIEHOIFDCO74XVMSDOBK2LHLBFLE7TIXI47TM76NLHAH5PTYVGYO3FOTA5MG56QVLTO7UCWKLDJ3ZAKSFEDZXOX7R2WMVNBLU3Q7UKYRBM6TW2NM5N5DLNGO2SCNWERGFVNR3YSMM3J52PV2FG4ZTDQSLL4XBXFOBT26AXVMF4ZCSQ6JESQYEHQCCHGFIQEIDNTZRRC6YLEPDO532HQYY5QCZAD4RC7FKFHFGKWRIUMKIFQZGO3CWD7C57DS4OXF2WFLZG3A5WRGBPLF4Q44LXZQTYJRA6KYSZ4ZQWSBZJ56UD4SE3G2LMNFWY2BFLBBTL67SZXS2VEW3J4FNXAYIRP4IVJVDORADPT3VQQ44ELDWFTHTLMKY2Y7RUBIETHTIEQD7ONYVTQ4XAF6UKGRSBEPF3FALXUI7DOLQOFNGIDKLH276NMZVPBD344JZUFHOY5RKY7XZL76JCC7SVQTKCJIKX46PPNZFWSMJIUQUWNEEH4WYEZRFXJYNMGZMDIBR2V5GSBM5A2JNXG7XEKEX6RQDH2NTWS4II4WSHQADTYSUCW2SSJ35EFESU3PVHXTFTG55CTJKB47BSEJPLRHFE6MJU5Z5O5DXISABJJFAAZN5ZJ5IJNSKDAH7TX6RCTULMBSXACDIIDSWPBY2OMGS3ZXQLIEJVXH2C3MY2GX6WDLKXLGJBBEV3EHNDCGPCHMTH3N6R4OJQUWWBLESTZIMC3BZOYKWJMCD55UDVIIWN2R7IL7SKMUSA2SLE3DNXIEUOL3VYI6XKDHEXXF2XIJQDQ7MFTF7TIMTRLJGS2N22WY5OVJLXD7HVZ5I2T3IZM2IHJYAPY3TW2WH5VF3ROITNC67TPRCQPZ7WQRJMJV7Q3VMSBBIKMRBI2CVY3QREAVA47MIS536AUPHMLBO7TYZMKIU66JKZMQKAK343WI6JUPGWJXJXTZA34COKIO3HROH2IDBUDTILOAES3WYAG5ORFKI2W6PUIYYLHSI7ETICARP7C2C3S42UWRQ5N6KF4OC3SQOMQYOTZCXGO5NV46HXFVYVNPOVEPHKAKOCKPNOFH644GAP6N42JAMRUPSOZLK62T2KTXR6INTTMGQMVXZRAZLB2C2LVV32QNQZ63O3TMEOETW2RCMJBVMXIV43KKJOPS3C66KSWTGNOYPQA5CBQGT67ESYIOU57SWCJLVMMUD3UOI7MRTFDA7T6EXHH3PY7Y6YIYWKANGVMXAX5WOOXHRZTVW27EL4SBCDAJ4SPKGLIQAB5RANJ7M6EUIYZDD3KFJWWJM3OXXKHNWXGTAFC44WXKBQSEMDNUVF6TONN25CMXV2AM3EICEF3E
E7CNONXXGBBKFS54EEMUL3KKPWTMYKMJERGECRLOBC3SNI463IBKHGKHTBH5LCZFAZNKPYGAGSTS6MD6AWIXLFBDQ3PMQRXE4OQQ3BVBE2S332AQZYW4HFFHKJZXFAJV2DZSC55CUCVG5BJAKDRYW3GERBWHAYYPFPOYP3D2S6GLHLQYFAU7AMIU4Y2Q5WQT6WSWU7URINAYYW3525EOWNJCTPNTYPLVSZ46G6MKPQDBL3M2E33KZFEGBNTGJK3ODIWPJ7PDOSTPM3IHABF22B4C6CG4L7O3KDWNELIYSKTGSOVUBLMYA3T4I6EKROVJZGJLZS6ZH5T2I22YEIKANPX3PYGIRGRBHLD2HLSRVQK6EOTAE6E4224WH5WXOECZVOWQOYC44YMTUJFJ6OSDXCTRN7G7E2OWR6IC5LLHOANR7OQ5IQRA5RUZ54YLYHIHGDSZWZGQWUKKYZ3KCZTPIXDWXLDILEOCBFXCS44SVZARPBFWWVSKOQZNNFYXSIQZJ2GOTKHV3HVWVFQYUYYLTA3BI6J46EPRFHBYEPGI46R63DKMLZSD5QBZBL5SFKXMKLP52KYA66T2BV3SKR4WRBSDYH2RZPGA77UDWX46O2L6ASKTKDIMG77BW7C6D4NUDR6LWJTINCOFQOHNEH34XW4WCS6SWKGMPGCDI3UVULPO3T3WQXENQUC5ZUJHBJ23OB2RGJJHHI6XKNDGDXR5M6A6L6B6YOVPQC7C5O5WLAKJ2XTD6KWEFURY334SFWIPTEZDKHJIFSRWW6Y6RMWPEMDSMNDLWYGP46O2O4F5XHAKA7HGGR7TW2H45SHSHV5NHPBDQ7B7WPDJTWXXNQRFH7T4MUDA42WESHE4JVFI2FHWGG5EGVGZGHYH3SVYMIIIFNQTIU3TODKBYQXXVEXG2R6T2CWUXIIULGSTQHBSNIS57V3U3WN7DQQIZPCNTXF2IS55C5WH5F56J6PBUXKA4SJPCFNGKMLCUZWG3FZTNPVBEHA4FNYL3YKYAQK4IPA3LD3XZMJV7OLZONP35UJLOGGUTOW2FB2FXO3TE42TVUEZM32GOZTEYA5H2OUTZOH4O3Z432AH475OF6PVW3W5ECGXSQQZDUV3H5UKWSQ4CLNHFWO74RZ53YSVJXX4EVAM2MEI7R5EBF237R2ZE7QKCZS3DC3AWPWXIJOKFO3UUHWZ5335JFZOLDZRHPERIC2A3B25Z7P44L7PUAOCTNFRMJW3SOOPNYHVH2JKS5JPK7HLJDOWSS5FW4TJJTDRJ756M7FSGPE2LCXJXO3UGG4BJCRNSWVWSPCRGZJ3WEOBETOPB6T2AMYLSKEMJMNPV7MCUC4675IYKSJU3ZGLMMXDRVBESHXS6PJD3P65JXA2E5A7XLNHLZYWHPVMDLVG364AJOXP77XZ6HV47367774PKF76LL77SL45CZFGE===END
hypothesis-python-3.44.1/benchmark-data/text5-valid=always-interesting=usually 0000664 0000000 0000000 00000003425 13215577651 0027656 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "always" and the interestingness condition "usually".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 397
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 532.32 bytes, standard deviation: 428.93 bytes
#
# Additional interesting statistics:
#
# * Ranging from 392 [895 times] to 3275 [once] bytes.
# * Median size: 392
# * 99% of examples had at least 392 bytes
# * 99% of examples had at most 2290 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 808: STARTPCON2V53N3BTADH4SUQHGBQ7IXVJCXZJXI2UHZ3MFX5O7JMC2SUGUPLMZNYYWLSCRREHYHCPI7VP26D5PW5VZD7HYOJQS5B26QWAQGESN5V5NWLB6J6ZD5FHMKWNHVMR3HCFRMCLR7ELASI2NTTW2JECNMNLGQYBBU6YZW77ZWZEC3CIJYNDH5ILX25VXCW2NRMHZT3B6G7URTCGNQEFZ6WHFVAJHDA3NJXMEUDDBNMBFIRIMIETC2FQG65FBE3S67WTZRHN5RX37Q74GS3FTCDG3ZCQAEVTIUATWBC7VDHH6WOA7AVA7LH53PEYETMN4NMSMSGUXHGQGCK4WMFBHO5B63CEYPDQ2LUZWZUKFRCONOOSZ22RJVRVWWVV4SOYKEDOC3IUB6JCTRCE4HYZC7QHY3LKI2YFN3YO3JV5OIQ4WJIRONDKCJGHJNXTOAPTQYUYRS6MARJO2SC5QVYSHGNUE425WC4EMXOM5BMJI6F32J5X2ZTE5PRTJIK7ZHKUL2DYUTVNZOJNEY4LU4DA2GL5PEGAYG5WKUYOOLS2KWTONGAVLGFKTUWIGUKQF3BRXIULKLI55U4VVYSFTADD76QQS6NEXXZUHI2JJUFBWQL4HRKJKXNU7JEHBE2UYCN526YJZGRBJ2772TSE3U7JGXN36XHMTMM7CJCF6WK32QFDVY2FVHHTZND7522KCKNR3KDTWBMTJRTNBUOGLN5LR4QO6NSDSXLPTI2YASJFLGX7HAV2YFTUVX555CNXKWRGCXCW7AJ5T4HMP27F6JZDZ24X7P4AJXCBVVIA====END
hypothesis-python-3.44.1/benchmark-data/text5-valid=usually-interesting=size_lower_bound 0000664 0000000 0000000 00000011007 13215577651 0031722 0 ustar 00root root 0000000 0000000 # This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for text5 [text(min_size=5)], with the validity
# condition "usually" and the interestingness condition "size_lower_bound".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed 400
#
# Key statistics for this benchmark:
#
# * 1000 examples
# * Mean size: 7619.13 bytes, standard deviation: 10345.12 bytes
#
# Additional interesting statistics:
#
# * Ranging from 392 [303 times] to 117219 [once] bytes.
# * Median size: 3603
# * 99% of examples had at least 392 bytes
# * 99% of examples had at most 46515 bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
Data 3592: STARTPCOHKWF3OYSLWDP4CWOY2FIQEBAIF52XPRXOMDI4N5TB7753VOFNG3FVIZHENIY6G2EWOVMB7747LZ27777P3Z6XL4P772GLH477VKXIT4P65XR6H7GHXTOPR7WDH4FIHJ7PONMLVPYHL5NS25LX3WDKPCZVX47R7EI4X6AZQVDTM6RF7ZSFLO26OCXLKWSY2RN4GPRYFLZPVVLS6JAGWWB7A5HZ6YKBQNTLWFSZL6TF3AZHPG2PNVMYTBCTWY6QYQ3OEYLIV5XNVTAW74GDHOLXNXQJ2INYB3Q3D2OHKVV3NTLN3DP4YJX55DEV67HBKG3GHYO6FQDVMVGM7326T54P52J4P5EJO7O7Z7ZMLNDEW3JM7IZRVX7R3356AL2TX5GOLRLJLJXXIEWBFMWM4WGX7E7EF5XXZMRW647R2MKYZZV674MY7EGVKARBG25GGOUNTOX4DSXDIY7VGTXKHZJT27WW5HGQEHYKZKUYVPIY5YTOS3EVF4BZPGBLPRFOKUJ25DOBKLZRK5GQC4JTDKSPKUCHPT5YJRCOBBW6VIP6FFXWFAGKYUS6KP2XOCTSI3JKMGPDGI6HXI3Z3VK7X3L5NNSJNELNQPHEOMQBTZG3XNBWL6FUMBDXKEJL5PB25QCMRVXCJOM47LE3FHFV5Y544QWZI7JZQCFIEM5OUSCX5GDFM5HXSM3DYXEXD4VLVIYFVKX4SW2L27AU5OTNLU6JLJONNYBBLPZAJJVDO34KA5DIH525LQTFKIX5LVDACEHGGZVMJ4DIJHNWLCPSUX7P5L65W2HXHTT3X43I63NITEQ2ZVFYAYRY5PCKSVDAVCDLHNTXXK7336WSHWG75THIL4IGJR542MVNWXODLWEDZNBNAQYM23Q7I7LI5SMYE53HS7UGLFW3OCKNBLJVVOELM6VRRDZUTI732PN35PS74XIS2YRZIV62QF344D4IDLZTQU2N43VPNOESHRLL6AUIVY4V6I7LGJ3FN7DQGDZPNABMAMZJMIHO3PBJU3AJK752E7FYNOWKXZZTY72RK4D74URKNKZD5SWEH65TKXVO5EP5DATKXIECVEO4KWWMMFC6ZXHW25REZV33EMXEBIJA5ZAUOYAYWIEBUWFYS2XMFSLRXVPXOWBODA6FXT6IW5JQB5J4PO4DNWWYKUPKODSQSTVMLSPGYIRNNKYPCYNWRFDZBRM7EZKXTMDZIGWZBXG7BTCONTUJZYVX3X6WANFCX2IRHFZXYXJVTJVV2FZKYU7UQYBU22ZX4ELCC6T4U5WPYDYADKWM34IAJIKPGUXEFI35HWIJZ4GRJQB6WWEB7JABUWP3A3TD7OHFRMTSMMEFGZX6ADK6R6K5A53TKDUCE72MDIQKCHCQCXLIJXYMSLD2WOOGJUKHD2NMTTIPX2N5R27JSVW7EUK5Z3A2BIX6PWLTLO5WD4LMWVKKRDI3TCTHOLIOELWQBVXIPBWRT6GBG7BFYVUBWERQYR2D3MOF5A5ZCTM2EHU2ZPIEAQFJ37IFJGJVYZ2TCKU4IS7ZJCHAGFAYQ6KQV75QQNUS7WXIXPP2PYDCZGS4MPCCGMEZBJNZOYFDWRKFBECCW4564UC3NZO5GTG54FUUGQQQRASAG5RX5M5UWEECATPOVX5VGDEB3PUR3ADSI5N7YNXMURWCM2YXK2SNUWMSOS3LXBRF2E3OCQAFCJWJ3FQAFOLQVAZRLB3GWOS7Q5J62C5R655JO2DIPITWPOMHZA4K6HZCCSXRNSWCMK7BVT6Z6VQMFISUT7QP24I43URODGKLZA65SY4NRQFAMEPLVQXIMCV7L2DFYSURAELZLSBKSW2JOPKLVU3JU5DAVQ3VOOTEEPK2KIWEOCOVEVJTQKRW2M2PJKNZKPCKFFVST3ZWCSXT2F2IBZDNTUMQ4HL5POMVSBIU5Q6CM44NPQMJEFLHHBIIMV6NSO26SDMSIAL5JCNUHCRRML62PYW42AW5MTW3
IKGENV5XNZ2YKB62R6HYOFDHRDCKSJM27JDEGDXE3ZKRARQBEAGH62FXOBMQGIV2JYAP6HFSVUUUB5IT3TAYGV4WJLY2BHQLRDIYX3HRSCWIMXARUZW3S7U2BBEN7NYIMB46OIL3NKIKCP5EIWWYPLX6IJNSOVRIO6U2CW3UICDZWOWKXDUXJJI4GD5F4NEYD2XLYLXREOSJDH2J4NSURKKMTL7QADK7YJTLDFJAVHDPW65LJUTXDIUOTN5ONDSUVLD2BTEX3CCZHBS4V5O7FKU2JAM6L4YKJ4UJU676RQ4HIPRBAQKPAGZQUGNIIK2TJON4TFZFZZB4SE3L5YMEARNUC6Z7FHIYJ7G65GRVZDAUJTOOXRWW2ETKZNXDOFYNJ2EVAFAMPE7VRFT6O5SVOJLUSNOLLHYEJE2SURU4FE7FDSKNREC7F4IVMC2NMO6VK327PB5POBUC5LDHFVAMCPEFIMFGCKDMV2NAEOR5UUC7B5FEOZFIG2OQYMT3LTQUY4Q7R3PIIJ27YSYZTVQ4HF3NWGQ6VENWAK5NDY7KL4TKHVZVYVZWNRLQFJQEWJQTUJJ3BKTGDCQYCR6KRDIMLOJFU5UEINYGX34XRXRFBXJOQ3NTDG2RMGY6TUQCWZJW3JIGYTIBQCWIVCWEE5B5G23WNHTFJVAQDA2T6L2F6Y64ECZY7MKQBFZOLHVSQWSU3SRWHQCLO62JVUAFWOQNQRWIS6INORAEJUCIBOEDZZO7EGUEJ3RVF4WACZ7QKQSKX3FI7GHDMACUF5IMNOHFJQKN4JTNVWSAO32D2VAUEO4PQFXUWYTGYLPNTPTL7NGLHW5GB3GWDS4CPLV2Q4L6RVXHXVHGZD4CMBTOVTFVSUX3RGB52I2GFRSS42F4SRVU36AWSSMVKRXUOFPMUHMMMUF6RIGV5WYJJCECXVVB3GKSH646BTCYSPPNCEQYZKU3ERC762YXVPCBYME3ZXJU454BN7N7KLUUI2OWGNCLBSQSZZX63RCELG2G3FG5ZDATUZ6C2MRBKZX3T75ZHFVZ4IW2ZFD5E32QL243S7KKS7UD4TIDLNN3UXYYOJ33MIYA2EJ2YHFE667DVGKKKHJYIBM6KURUAAG5SDJFYA77OM7AHYNSM2QMIKJ7JMHNOOWKCBECRRNVRYTTBA74LZWQXFHWPEDCJF364XZJTX5OVV2M3AYWHX3G6INYU6USSGIVGIHRUTB46ARNUOQEGFCQYGHOES4YIAUYX5BVOOY2NX373A2UD3FGOK54BVKDT7OJOONJMHQIP3H6H3LQBGDVLXJSPCUHY2LKNKLOXKXLWEI577GWYC6Q5STDAJSSWTO3W6R3UX7RK7N5OQLRV56XHNHSA53JQ44XQBXODWW2CJL7ANQYVJOMFSAQESD2SBUZVS44WVMXOGKFVJW66PVUNU2T4IHZG47KV3RIAXIX6JEEJQ3DIHJMQK2QVHV37F4SGF7TH5QLTRQNUSIS3YIO3EISNK3G3KKQ4XKIM44MS2ZKHGAQZERXGWQEUY65PQDT36I6V7RL62S2OEAZMONPJJW3EON2RIFWIX757HY7X77PZ7X5Z5PX64JA7HX76AERLJM2C===END
hypothesis-python-3.44.1/circle.yml 0000664 0000000 0000000 00000001102 13215577651 0017313 0 ustar 00root root 0000000 0000000 test:
  override:
    - scripts/run_circle.py
machine:
  # Courtesy of https://pewpewthespells.com/blog/building_python_on_circleci.html
  pre:
    - export PATH=/usr/local/bin:$PATH:/Users/distiller/Library/Python/2.7/bin
    - pip install --user --ignore-installed --upgrade virtualenv
    - ln -s $HOME/Library/Python/2.7/bin/virtualenv /usr/local/bin/virtualenv
    - cd "$(brew --repository)" && git fetch && git reset --hard origin/master
    - brew update
dependencies:
  override:
    - make install-core
  cache_directories:
    - ~/.cache/hypothesis-build-runtimes
hypothesis-python-3.44.1/docs/ 0000775 0000000 0000000 00000000000 13215577651 0016265 5 ustar 00root root 0000000 0000000 hypothesis-python-3.44.1/docs/_static/ 0000775 0000000 0000000 00000000000 13215577651 0017713 5 ustar 00root root 0000000 0000000 hypothesis-python-3.44.1/docs/_static/.empty 0000664 0000000 0000000 00000000000 13215577651 0021040 0 ustar 00root root 0000000 0000000 hypothesis-python-3.44.1/docs/changes.rst 0000664 0000000 0000000 00000401263 13215577651 0020435 0 ustar 00root root 0000000 0000000 =========
Changelog
=========
This is a record of all past Hypothesis releases and what went into them,
in reverse chronological order. All previous releases should still be available
on pip.
Hypothesis APIs come in three flavours:
* Public: Hypothesis releases since 1.0 are `semantically versioned `_
with respect to these parts of the API. These will not break except between
major version bumps. All APIs mentioned in this documentation are public unless
explicitly noted otherwise.
* Semi-public: These are APIs that are considered ready to use but are not wholly
nailed down yet. They will not break in patch releases and will *usually* not break
in minor releases, but when necessary minor releases may break semi-public APIs.
* Internal: These may break at any time and you really should not use them at
all.
You should generally assume that an API is internal unless you have specific
information to the contrary.
-------------------
3.44.1 - 2017-12-18
-------------------
This release fixes :issue:`997`, in which under some circumstances the body of
tests run under Hypothesis would not show up when run under coverage even
though the tests were run and the code they called outside of the test file
would show up normally.
-------------------
3.44.0 - 2017-12-17
-------------------
This release adds a new feature: The :ref:`@reproduce_failure ` decorator,
designed to make it easy to use Hypothesis's binary format for examples to
reproduce a problem locally without having to share your example database
between machines.
This also changes when seeds are printed:
* They will no longer be printed for
normal falsifying examples, as there are now adequate ways of reproducing those
for all cases, so it just contributes noise.
* They will once again be printed when reusing examples from the database, as
health check failures should now be more reliable in this scenario so it will
almost always work in this case.
This work was funded by `Smarkets `_.
-------------------
3.43.1 - 2017-12-17
-------------------
This release fixes a bug with Hypothesis's database management - examples that
were found in the course of shrinking were saved in a way that indicated that
they had distinct causes, and so they would all be retried on the start of the
next test. The intended behaviour, which is now what is implemented, is that
only a bounded subset of these examples would be retried.
-------------------
3.43.0 - 2017-12-17
-------------------
:exc:`~hypothesis.errors.HypothesisDeprecationWarning` now inherits from
:exc:`python:FutureWarning` instead of :exc:`python:DeprecationWarning`,
as recommended by :pep:`565` for user-facing warnings (:issue:`618`).
If you have not changed the default warnings settings, you will now see
each distinct :exc:`~hypothesis.errors.HypothesisDeprecationWarning`
instead of only the first.
-------------------
3.42.2 - 2017-12-12
-------------------
This patch fixes :issue:`1017`, where instances of a list or tuple subtype
used as an argument to a strategy would be coerced to tuple.
-------------------
3.42.1 - 2017-12-10
-------------------
This release has some internal cleanup, which makes reading the code
more pleasant and may shrink large examples slightly faster.
-------------------
3.42.0 - 2017-12-09
-------------------
This release deprecates :ref:`faker-extra`, which was designed as a transition
strategy but does not support example shrinking or coverage-guided discovery.
-------------------
3.41.0 - 2017-12-06
-------------------
:func:`~hypothesis.strategies.sampled_from` can now sample from
one-dimensional numpy ndarrays. Sampling from multi-dimensional
ndarrays still results in a deprecation warning. Thanks to Charlie
Tanksley for this patch.
-------------------
3.40.1 - 2017-12-04
-------------------
This release makes two changes:
* It makes the calculation of some of the metadata that Hypothesis uses for
shrinking occur lazily. This should speed up performance of test case
generation a bit because it no longer calculates information it doesn't need.
* It improves the shrinker for certain classes of nested examples. e.g. when
shrinking lists of lists, the shrinker is now able to concatenate two
adjacent lists together into a single list. As a result of this change,
shrinking may get somewhat slower when the minimal example found is large.
-------------------
3.40.0 - 2017-12-02
-------------------
This release improves how various ways of seeding Hypothesis interact with the
example database:
* Using the example database with :func:`~hypothesis.seed` is now deprecated.
You should set ``database=None`` if you are doing that. This will only warn
if you actually load examples from the database while using ``@seed``.
* The :attr:`~hypothesis.settings.derandomize` setting will behave the same way as
``@seed``.
* Using ``--hypothesis-seed`` will disable use of the database.
* If a test used examples from the database, it will not suggest using a seed
to reproduce it, because that won't work.
This work was funded by `Smarkets `_.
-------------------
3.39.0 - 2017-12-01
-------------------
This release adds a new health check that checks if the smallest "natural"
possible example of your test case is very large - this will tend to cause
Hypothesis to generate bad examples and be quite slow.
This work was funded by `Smarkets `_.
-------------------
3.38.9 - 2017-11-29
-------------------
This is a documentation release to improve the documentation of shrinking
behaviour for Hypothesis's strategies.
-------------------
3.38.8 - 2017-11-29
-------------------
This release improves the performance of
:func:`~hypothesis.strategies.characters` when using ``blacklist_characters``
and :func:`~hypothesis.strategies.from_regex` when using negative character
classes.
The problems this fixes were found in the course of work funded by
`Smarkets `_.
-------------------
3.38.7 - 2017-11-29
-------------------
This is a patch release for :func:`~hypothesis.strategies.from_regex`, which
had a bug in handling of the :obj:`python:re.VERBOSE` flag (:issue:`992`).
Flags are now handled correctly when parsing regex.
-------------------
3.38.6 - 2017-11-28
-------------------
This patch changes a few byte-string literals from double to single quotes,
thanks to an update in :pypi:`unify`. There are no user-visible changes.
-------------------
3.38.5 - 2017-11-23
-------------------
This fixes the repr of strategies using lambda that are defined inside
decorators to include the lambda source.
This would mostly have been visible when using the
:ref:`statistics ` functionality - lambdas used for e.g. filtering
would have shown up with a ``<unknown>`` as their body. This can still happen,
but it should happen less often now.
-------------------
3.38.4 - 2017-11-22
-------------------
This release updates the reported :ref:`statistics ` so that they
show approximately what fraction of your test run time is spent in data
generation (as opposed to test execution).
This work was funded by `Smarkets `_.
-------------------
3.38.3 - 2017-11-21
-------------------
This is a documentation release, which ensures code examples are up to date
by running them as doctests in CI (:issue:`711`).
-------------------
3.38.2 - 2017-11-21
-------------------
This release changes the behaviour of the :attr:`~hypothesis.settings.deadline`
setting when used with :func:`~hypothesis.strategies.data`: Time spent inside
calls to ``data.draw`` will no longer be counted towards the deadline time.
As a side effect of some refactoring required for this work, the way flaky
tests are handled has changed slightly. You are unlikely to see much difference
from this, but some error messages will have changed.
This work was funded by `Smarkets `_.
-------------------
3.38.1 - 2017-11-21
-------------------
This patch has a variety of non-user-visible refactorings, removing various
minor warts ranging from indirect imports to typos in comments.
-------------------
3.38.0 - 2017-11-18
-------------------
This release overhauls :doc:`the health check system `
in a variety of small ways.
It adds no new features, but is nevertheless a minor release because it changes
which tests are likely to fail health checks.
The most noticeable effect is that some tests that used to fail health checks
will now pass, and some that used to pass will fail. These should all be
improvements in accuracy. In particular:
* New failures will usually be because they are now taking into account things
like use of :func:`~hypothesis.strategies.data` and
:func:`~hypothesis.assume` inside the test body.
* New failures *may* also be because for some classes of example the way data
generation performance was measured was artificially faster than real data
generation (for most examples that are hitting performance health checks the
opposite should be the case).
* Tests that used to fail health checks and now pass do so because the health
check system used to run in a way that was subtly different than the main
Hypothesis data generation and lacked some of its support for e.g. large
examples.
If your data generation is especially slow, you may also see your tests get
somewhat faster, as there is no longer a separate health check phase. This will
be particularly noticeable when rerunning test failures.
This work was funded by `Smarkets `_.
-------------------
3.37.0 - 2017-11-12
-------------------
This is a deprecation release for some health check related features.
The following are now deprecated:
* Passing :attr:`~hypothesis.HealthCheck.exception_in_generation` to
:attr:`~hypothesis.settings.suppress_health_check`. This no longer does
anything even when passed - All errors that occur during data generation
will now be immediately reraised rather than going through the health check
mechanism.
* Passing :attr:`~hypothesis.HealthCheck.random_module` to
:attr:`~hypothesis.settings.suppress_health_check`. This hasn't done anything
for a long time, but was never explicitly deprecated. Hypothesis always seeds
the random module when running @given tests, so this is no longer an error
and suppressing it doesn't do anything.
* Passing non-:class:`~hypothesis.HealthCheck` values in
:attr:`~hypothesis.settings.suppress_health_check`. This was previously
allowed but never did anything useful.
In addition, passing a non-iterable value as :attr:`~hypothesis.settings.suppress_health_check`
will now raise an error immediately (it would never have worked correctly, but
it would previously have failed later). Some validation error messages have
also been updated.
This work was funded by `Smarkets `_.
-------------------
3.36.1 - 2017-11-10
-------------------
This is a yak shaving release, mostly concerned with our own tests.
While :func:`~python:inspect.getfullargspec` was documented as deprecated
in Python 3.5, it never actually emitted a warning. Our code to silence
this (nonexistent) warning has therefore been removed.
We now run our tests with ``DeprecationWarning`` as an error, and made some
minor changes to our own tests as a result. This required similar upstream
updates to :pypi:`coverage` and :pypi:`execnet` (a test-time dependency via
:pypi:`pytest-xdist`).
There is no user-visible change in Hypothesis itself, but we encourage you
to consider enabling deprecations as errors in your own tests.
-------------------
3.36.0 - 2017-11-06
-------------------
This release adds a setting to the public API, and does some internal cleanup:
- The :attr:`~hypothesis.settings.derandomize` setting is now documented (:issue:`890`)
- Removed - and disallowed - all 'bare excepts' in Hypothesis (:issue:`953`)
- Documented the :attr:`~hypothesis.settings.strict` setting as deprecated, and
updated the build so our docs always match deprecations in the code.
-------------------
3.35.0 - 2017-11-06
-------------------
This minor release supports constraining :func:`~hypothesis.strategies.uuids`
to generate a particular version of :class:`~python:uuid.UUID` (:issue:`721`).
Thanks to Dion Misic for this feature.
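The new ``version=`` argument restricts generation to a single RFC 4122 UUID
version. A minimal sketch, using only the standard library, of the invariant
this constrains (illustrative only, not Hypothesis code):

```python
import uuid

# Every UUID produced by uuid4() reports version 4 -- the same property
# that uuids(version=4) now guarantees for every generated example.
value = uuid.uuid4()
assert isinstance(value, uuid.UUID)
assert value.version == 4
```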
-------------------
3.34.1 - 2017-11-02
-------------------
This patch updates the documentation to suggest
:func:`builds(callable) ` instead of
:func:`just(callable()) `.
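The distinction matters because ``just(callable())`` fixes one value at
definition time, while ``builds(callable)`` invokes the callable afresh for
each example. A standard-library sketch of that difference (illustrative
only, no Hypothesis involved):

```python
import itertools

counter = itertools.count()

def make():
    return next(counter)

# just(make()) style: make() runs once, and that single result is reused.
shared = make()
just_style = [shared for _ in range(3)]

# builds(make) style: make() runs again for every example.
builds_style = [make() for _ in range(3)]

assert just_style == [0, 0, 0]
assert builds_style == [1, 2, 3]
```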
-------------------
3.34.0 - 2017-11-02
-------------------
Hypothesis now emits deprecation warnings if you apply
:func:`@given ` more than once to a target.
Applying :func:`@given ` repeatedly wraps the target multiple
times. Each wrapper will search the space of possible parameters separately.
This is equivalent but will be much more inefficient than doing it with a
single call to :func:`@given `.
For example, instead of
``@given(booleans()) @given(integers())``, you could write
``@given(booleans(), integers())``
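The inefficiency comes from the fact that decorators compose: each
application adds an independent wrapper layer around the target. A
standard-library sketch of that layering (the ``wrap`` decorator here is a
toy stand-in, not the Hypothesis internals):

```python
def wrap(label):
    """A toy decorator that records which layer it belongs to."""
    def decorator(fn):
        def inner(*args):
            return fn(label, *args)
        return inner
    return decorator

@wrap("outer")
@wrap("inner")
def target(*seen):
    return seen

# Decorators apply bottom-up, and each layer acts independently -- which
# is why each stacked @given would search the parameter space on its own
# rather than jointly.
assert target() == ("inner", "outer")
```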
-------------------
3.33.1 - 2017-11-02
-------------------
This is a bugfix release:
- :func:`~hypothesis.strategies.builds` would try to infer a strategy for
required positional arguments of the target from type hints, even if they
had been given to :func:`~hypothesis.strategies.builds` as positional
arguments (:issue:`946`). Now it only infers missing required arguments.
- An internal introspection function wrongly reported ``self`` as a required
argument for bound methods, which might also have affected
:func:`~hypothesis.strategies.builds`. Now it knows better.
-------------------
3.33.0 - 2017-10-16
-------------------
This release supports strategy inference for more field types in Django
:func:`~hypothesis.extra.django.models` - you can now omit an argument for
Date, Time, Duration, Slug, IP Address, and UUID fields. (:issue:`642`)
Strategy generation for fields with grouped choices now selects choices from
each group, instead of selecting from the group names.
-------------------
3.32.2 - 2017-10-15
-------------------
This patch removes the ``mergedb`` tool, introduced in Hypothesis 1.7.1
on an experimental basis. It has never actually worked, and the new
:doc:`Hypothesis example database ` is designed to make such a
tool unnecessary.
-------------------
3.32.1 - 2017-10-13
-------------------
This patch has two improvements for strategies based on enumerations.
- :func:`~hypothesis.strategies.from_type` now handles enumerations correctly,
delegating to :func:`~hypothesis.strategies.sampled_from`. Previously it
noted that ``Enum.__init__`` has no required arguments and therefore delegated
to :func:`~hypothesis.strategies.builds`, which would subsequently fail.
- When sampling from an :class:`python:enum.Flag`, we also generate combinations
of members. E.g. for ``Flag('Permissions', 'READ, WRITE, EXECUTE')`` we can now
generate ``Permissions.READ``, ``Permissions.READ|WRITE``, and so on.
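The combinations being sampled can be enumerated with the standard library
alone; a sketch of the example space for the ``Permissions`` flag above
(illustrative, not the actual sampling code):

```python
from enum import Flag
from functools import reduce
from itertools import combinations
from operator import or_

Permissions = Flag('Permissions', 'READ, WRITE, EXECUTE')

# Build every non-empty OR-combination of the members -- i.e. the values
# that sampling from a Flag can now produce.
members = list(Permissions)
combos = [
    reduce(or_, combo)
    for r in range(1, len(members) + 1)
    for combo in combinations(members, r)
]

assert Permissions.READ in combos
assert (Permissions.READ | Permissions.WRITE) in combos
assert len(combos) == 7  # 2**3 - 1 non-empty subsets
```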
-------------------
3.32.0 - 2017-10-09
-------------------
This changes the default value of
:attr:`use_coverage=True ` to True when
running on pypy (it was already True on CPython).
It was previously set to False because we expected it to be too slow, but
recent benchmarking shows that actually performance of the feature on pypy is
fairly acceptable - sometimes it's slower than on CPython, sometimes it's
faster, but it's generally within a factor of two either way.
-------------------
3.31.6 - 2017-10-08
-------------------
This patch improves the quality of strategies inferred from Numpy dtypes:
* Integer dtypes generated examples with the upper half of their (non-sign) bits
set to zero. The inferred strategies can now produce any representable integer.
* Fixed-width unicode- and byte-string dtypes now cap the internal example
length, which should improve example and shrink quality.
* Numpy arrays can only store fixed-size strings internally, and allow shorter
strings by right-padding them with null bytes. Inferred string strategies
no longer generate such values, as they can never be retrieved from an array.
This improves shrinking performance by skipping useless values.
This has already been useful in Hypothesis - we found an overflow bug in our
Pandas support, and as a result :func:`~hypothesis.extra.pandas.indexes` and
:func:`~hypothesis.extra.pandas.range_indexes` now check that ``min_size``
and ``max_size`` are at least zero.
-------------------
3.31.5 - 2017-10-08
-------------------
This release fixes a performance problem in tests where
:attr:`~hypothesis.settings.use_coverage` is set to True.
Tests experience a slow-down proportionate to the amount of code they cover.
This is still the case, but the factor is now low enough that it should be
unnoticeable. Previously it was large and became much larger in 3.28.4.
-------------------
3.31.4 - 2017-10-08
-------------------
:func:`~hypothesis.strategies.from_type` failed with a very confusing error
if passed a :func:`~python:typing.NewType` (:issue:`901`). These pseudo-types
are now unwrapped correctly, and strategy inference works as expected.
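The unwrapping relies on a documented attribute of
:func:`~python:typing.NewType`: the callable it returns stores the wrapped
type as ``__supertype__``. A standard-library sketch:

```python
import typing

UserId = typing.NewType('UserId', int)

# NewType produces an identity callable and records the underlying type
# on __supertype__ -- which is what from_type can unwrap to infer a
# strategy for the wrapped type.
assert UserId(7) == 7
assert UserId.__supertype__ is int
```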
-------------------
3.31.3 - 2017-10-06
-------------------
This release makes some small optimisations to our use of coverage that should
reduce constant per-example overhead. This is probably only noticeable on
examples where the test itself is quite fast. On no-op tests that don't test
anything you may see up to a fourfold speed increase (which is still
significantly slower than without coverage). On more realistic tests the speed
up is likely to be less than that.
-------------------
3.31.2 - 2017-09-30
-------------------
This release fixes some formatting and small typos/grammar issues in the
documentation, specifically the page docs/settings.rst, and the inline docs
for the various settings.
-------------------
3.31.1 - 2017-09-30
-------------------
This release improves the handling of deadlines so that they act better with
the shrinking process. This fixes :issue:`892`.
This involves two changes:
1. The deadline is raised during the initial generation and shrinking, and then
lowered to the set value for final replay. This restricts our attention to
examples which exceed the deadline by a more significant margin, which
increases their reliability.
2. When despite the above a test still becomes flaky because it is
significantly faster on rerun than it was on its first run, the error
message is now more explicit about the nature of this problem, and includes
both the initial test run time and the new test run time.
In addition, this release also clarifies the documentation of the deadline
setting slightly to be more explicit about where it applies.
This work was funded by `Smarkets `_.
-------------------
3.31.0 - 2017-09-29
-------------------
This release blocks installation of Hypothesis on Python 3.3, which
:PEP:`reached its end of life date on 2017-09-29 <398>`.
This should not be of interest to anyone but downstream maintainers -
if you are affected, migrate to a secure version of Python as soon as
possible or at least seek commercial support.
-------------------
3.30.4 - 2017-09-27
-------------------
This release makes several changes:
1. It significantly improves Hypothesis's ability to use coverage information
to find interesting examples.
2. It reduces the default :attr:`~hypothesis.settings.max_examples` setting from 200 to 100. This takes
advantage of the improved algorithm meaning fewer examples are typically
needed to get the same testing and is sufficiently better at covering
interesting behaviour, and offsets some of the performance problems of
running under coverage.
3. Hypothesis will always try to start its testing with an example that is near
minimized.
The new algorithm for 1 also makes some changes to Hypothesis's low level data
generation which apply even with coverage turned off. They generally reduce the
total amount of data generated, which should improve test performance somewhat.
Between this and 3 you should see a noticeable reduction in test runtime (how
much so depends on your tests and how much example size affects their
performance). On our benchmarks, where data generation dominates, we saw up to
a factor of two performance improvement, but it's unlikely to be that large.
-------------------
3.30.3 - 2017-09-25
-------------------
This release fixes some formatting and small typos/grammar issues in the
documentation, specifically the page docs/details.rst, and some inline
docs linked from there.
-------------------
3.30.2 - 2017-09-24
-------------------
This release changes Hypothesis's caching approach for functions in
``hypothesis.strategies``. Previously it would have cached extremely
aggressively and cache entries would never be evicted. Now it adopts a
least-frequently used, least recently used key invalidation policy, and is
somewhat more conservative about which strategies it caches.
Workloads which create strategies based on dynamic values, e.g. by using
:ref:`flatmap ` or :func:`~hypothesis.strategies.composite`,
will use significantly less memory.
-------------------
3.30.1 - 2017-09-22
-------------------
This release fixes a bug where when running with
:attr:`use_coverage=True ` inside an
existing running instance of coverage, Hypothesis would frequently put files
that the coveragerc excluded in the report for the enclosing coverage.
-------------------
3.30.0 - 2017-09-20
-------------------
This release introduces two new features:
* When a test fails, either with a health check failure or a falsifying example,
Hypothesis will print out a seed that led to that failure, if the test is not
already running with a fixed seed. You can then recreate that failure using either
the :func:`@seed ` decorator or (if you are running pytest) with ``--hypothesis-seed``.
* :pypi:`pytest` users can specify a seed to use for :func:`@given ` based tests by passing
the ``--hypothesis-seed`` command line argument.
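A printed seed can be replayed with the decorator like this (a minimal sketch; the seed value ``12345`` is an arbitrary placeholder, not one Hypothesis printed):

```python
from hypothesis import given, seed, strategies as st

# @seed pins Hypothesis's randomness, so the same examples are
# generated on every run - useful for replaying a printed seed.
@seed(12345)
@given(st.integers())
def test_reproducible(n):
    assert isinstance(n, int)
```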
This work was funded by `Smarkets `_.
-------------------
3.29.0 - 2017-09-19
-------------------
This release makes Hypothesis coverage aware. Hypothesis now runs all test
bodies under coverage, and uses this information to guide its testing.
The :attr:`~hypothesis.settings.use_coverage` setting can be used to disable
this behaviour if you want to test code that is sensitive to coverage being
enabled (either because of performance or interaction with the trace function).
The main benefits of this feature are:
* Hypothesis now observes when examples it discovers cover particular lines
or branches and stores them in the database for later.
* Hypothesis will make some use of this information to guide its exploration of
the search space and improve the examples it finds (this is currently used
only very lightly and will likely improve significantly in future releases).
This also has the following side-effects:
* Hypothesis now has an install time dependency on the :pypi:`coverage` package.
* Tests that are already running Hypothesis under coverage will likely get
faster.
* Tests that are not running under coverage now run their test bodies under
coverage by default.
This feature is only partially supported under pypy. It is significantly slower
than on CPython and is turned off by default as a result, but it should still
work correctly if you want to use it.
-------------------
3.28.3 - 2017-09-18
-------------------
This release is an internal change that affects how Hypothesis handles
calculating certain properties of strategies.
The primary effect of this is that it fixes a bug where use of
:func:`~hypothesis.strategies.deferred` could sometimes trigger an internal assertion
error. However the fix for this bug involved some moderately deep changes to
how Hypothesis handles certain constructs so you may notice some additional
knock-on effects.
In particular the way Hypothesis handles drawing data from strategies that
cannot generate any values has changed to bail out sooner than it previously
did. This may speed up certain tests, but it is unlikely to make much of a
difference in practice for tests that were not already failing with
Unsatisfiable.
-------------------
3.28.2 - 2017-09-18
-------------------
This is a patch release that fixes a bug in the :mod:`hypothesis.extra.pandas`
documentation where it incorrectly referred to :func:`~hypothesis.extra.pandas.column`
instead of :func:`~hypothesis.extra.pandas.columns`.
-------------------
3.28.1 - 2017-09-16
-------------------
This is a refactoring release. It moves a number of internal uses
of :func:`~python:collections.namedtuple` over to using attrs based classes, and removes a couple
of internal namedtuple classes that were no longer in use.
It should have no user visible impact.
-------------------
3.28.0 - 2017-09-15
-------------------
This release adds support for testing :pypi:`pandas` via the :ref:`hypothesis.extra.pandas `
module.
It also adds a dependency on :pypi:`attrs`.
This work was funded by `Stripe `_.
-------------------
3.27.1 - 2017-09-14
-------------------
This release fixes some formatting and broken cross-references in the
documentation, which includes editing docstrings - and thus a patch release.
-------------------
3.27.0 - 2017-09-13
-------------------
This release introduces a :attr:`~hypothesis.settings.deadline`
setting to Hypothesis.
When set this turns slow tests into errors. By default it is unset but will
warn if you exceed 200ms, which will become the default value in a future
release.
This work was funded by `Smarkets `_.
-------------------
3.26.0 - 2017-09-12
-------------------
Hypothesis now emits deprecation warnings if you are using the legacy
SQLite example database format, or the tool for merging them. These were
already documented as deprecated, so this doesn't change their deprecation
status, only that we warn about it.
-------------------
3.25.1 - 2017-09-12
-------------------
This release fixes a bug with generating :doc:`numpy datetime and timedelta types `:
When inferring the strategy from the dtype, datetime and timedelta dtypes with
sub-second precision would always produce examples with one second resolution.
Inferring a strategy from a time dtype will now always produce examples with the
same precision.
-------------------
3.25.0 - 2017-09-12
-------------------
This release changes how Hypothesis shrinks and replays examples to take into
account that it can encounter new bugs while shrinking the bug it originally
found. Previously it would end up replacing the originally found bug with the
new bug and show you only that one. Now it is (often) able to recognise when
two bugs are distinct and when it finds more than one will show both.
-------------------
3.24.2 - 2017-09-11
-------------------
This release removes the (purely internal and no longer useful)
``strategy_test_suite`` function and the corresponding strategytests module.
-------------------
3.24.1 - 2017-09-06
-------------------
This release improves the reduction of examples involving floating point
numbers to produce more human readable examples.
It also has some general purpose changes to the way the minimizer works
internally, which may improve the quality of test case reduction, and slow it
down, in cases that have nothing to do with floating point numbers.
-------------------
3.24.0 - 2017-09-05
-------------------
Hypothesis now emits deprecation warnings if you use ``some_strategy.example()`` inside a
test function or strategy definition (this was never intended to be supported,
but is sufficiently widespread that it warrants a deprecation path).
-------------------
3.23.3 - 2017-09-05
-------------------
This is a bugfix release for :func:`~hypothesis.strategies.decimals`
with the ``places`` argument.
- No longer fails health checks (:issue:`725`, due to internal filtering)
- Specifying a ``min_value`` and ``max_value`` without any decimals of
``places`` places between them gives a more useful error message.
- Works for any valid arguments, regardless of the decimal precision context.
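A sketch of the ``places`` argument in use (the bounds here are illustrative):

```python
from decimal import Decimal
from hypothesis import given, strategies as st

# places=2 restricts generated values to exactly two decimal places,
# so every example is representable as e.g. a price in cents.
@given(st.decimals(min_value="0.00", max_value="9.99", places=2))
def test_two_places(d):
    assert Decimal("0.00") <= d <= Decimal("9.99")
    assert d == d.quantize(Decimal("0.01"))
```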
-------------------
3.23.2 - 2017-09-01
-------------------
This is a small refactoring release that removes a now-unused parameter to an
internal API. It shouldn't have any user visible effect.
-------------------
3.23.1 - 2017-09-01
-------------------
Hypothesis no longer propagates the dynamic scope of settings into strategy
definitions.
This release is a small change to something that was never part of the public
API and you will almost certainly not notice any effect unless you're doing
something surprising, but for example the following code will now give a
different answer in some circumstances:
.. code-block:: python

    import hypothesis.strategies as st
    from hypothesis import settings

    CURRENT_SETTINGS = st.builds(lambda: settings.default)
(We don't actually encourage you to write code like this.)
Previously this would have generated the settings that were in effect at the
point of definition of ``CURRENT_SETTINGS``. Now it will generate the settings
that are used for the current test.
It is very unlikely to be significant enough to be visible, but you may also
notice a small performance improvement.
-------------------
3.23.0 - 2017-08-31
-------------------
This release adds a ``unique`` argument to :func:`~hypothesis.extra.numpy.arrays`
which behaves the same way as the corresponding argument for
:func:`~hypothesis.strategies.lists`, requiring all of the elements in the
generated array to be distinct.
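The corresponding :func:`~hypothesis.strategies.lists` behaviour looks like this (a minimal sketch; ``arrays`` now accepts the same keyword):

```python
from hypothesis import given, strategies as st

# unique=True forces every element of the generated list to be distinct;
# arrays(..., unique=True) applies the same constraint to array elements.
@given(st.lists(st.integers(), min_size=3, unique=True))
def test_elements_distinct(xs):
    assert len(set(xs)) == len(xs)
```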
-------------------
3.22.2 - 2017-08-29
-------------------
This release fixes an issue where Hypothesis would raise a ``TypeError`` when
using the datetime-related strategies if running with ``PYTHONOPTIMIZE=2``.
This bug was introduced in v3.20.0. (See :issue:`822`)
-------------------
3.22.1 - 2017-08-28
-------------------
Hypothesis now transparently handles problems with an internal unicode cache,
including file truncation or read-only filesystems (:issue:`767`).
Thanks to Sam Hames for the patch.
-------------------
3.22.0 - 2017-08-26
-------------------
This release provides what should be a substantial performance improvement to
numpy arrays generated using :ref:`provided numpy support `,
and adds a new ``fill_value`` argument to :func:`~hypothesis.extra.numpy.arrays`
to control this behaviour.
This work was funded by `Stripe `_.
-------------------
3.21.3 - 2017-08-26
-------------------
This release fixes some extremely specific circumstances that probably have
never occurred in the wild where users of
:func:`~hypothesis.strategies.deferred` might have seen a ``RuntimeError`` from
too much recursion, usually in cases where no valid example could have been
generated anyway.
-------------------
3.21.2 - 2017-08-25
-------------------
This release fixes some minor bugs in argument validation:
* :ref:`hypothesis.extra.numpy ` dtype strategies would raise an internal error
instead of an InvalidArgument exception when passed an invalid
endianness specification.
* :func:`~hypothesis.strategies.fractions` would raise an internal error instead of an InvalidArgument
if passed ``float("nan")`` as one of its bounds.
* The error message for passing ``float("nan")`` as a bound to various
strategies has been improved.
* Various bound arguments will now raise InvalidArgument in cases where
they would previously have raised an internal TypeError or
ValueError from the relevant conversion function.
* :func:`~hypothesis.strategies.streaming` would not have emitted a
deprecation warning when called with an invalid argument.
-------------------
3.21.1 - 2017-08-24
-------------------
This release fixes a bug where test failures that were the result of
an :func:`@example ` would print an extra stack trace before re-raising the
exception.
-------------------
3.21.0 - 2017-08-23
-------------------
This release deprecates Hypothesis's strict mode, which turned Hypothesis's
deprecation warnings into errors. Similar functionality can be achieved
by using :func:`simplefilter('error', HypothesisDeprecationWarning) `.
-------------------
3.20.0 - 2017-08-22
-------------------
This release renames the relevant arguments on the
:func:`~hypothesis.strategies.datetimes`, :func:`~hypothesis.strategies.dates`,
:func:`~hypothesis.strategies.times`, and :func:`~hypothesis.strategies.timedeltas`
strategies to ``min_value`` and ``max_value``, to make them consistent with the
other strategies in the module.
The old argument names are still supported but will emit a deprecation warning
when used explicitly as keyword arguments. Arguments passed positionally will
go to the new argument names and are not deprecated.
-------------------
3.19.3 - 2017-08-22
-------------------
This release provides a major overhaul to the internals of how Hypothesis
handles shrinking.
This should mostly be visible in terms of getting better examples for tests
which make heavy use of :func:`~hypothesis.strategies.composite`,
:func:`~hypothesis.strategies.data` or :ref:`flatmap ` where the data
drawn depends a lot on previous choices, especially where size parameters are
affected. Previously Hypothesis would have struggled to reliably produce
good examples here. Now it should do much better. Performance should also be
better for examples with a non-zero ``min_size``.
You may see slight changes to example generation (e.g. improved example
diversity) as a result of related changes to internals, but they are unlikely
to be significant enough to notice.
-------------------
3.19.2 - 2017-08-21
-------------------
This release fixes two bugs in :mod:`hypothesis.extra.numpy`:
* :func:`~hypothesis.extra.numpy.unicode_string_dtypes` didn't work at all due
to an incorrect dtype specifier. Now it does.
* Various impossible conditions would have been accepted but would error when
they failed to produce any example. Now they raise an explicit InvalidArgument
error.
-------------------
3.19.1 - 2017-08-21
-------------------
This is a bugfix release for :issue:`739`, where bounds for
:func:`~hypothesis.strategies.fractions` or floating-point
:func:`~hypothesis.strategies.decimals` were not properly converted to
integers before passing them to the integers strategy.
This excluded some values that should have been possible, and could
trigger internal errors if the bounds lay between adjacent integers.
You can now bound :func:`~hypothesis.strategies.fractions` with two
arbitrarily close fractions.
It is now an explicit error to supply a ``min_value``, ``max_value``, and
``max_denominator`` to :func:`~hypothesis.strategies.fractions` where the value
bounds do not include a fraction with denominator at most ``max_denominator``.
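A sketch of bounding with nearby fractions (the particular bounds are illustrative):

```python
from fractions import Fraction
from hypothesis import given, strategies as st

LO, HI = Fraction(1, 3), Fraction(334, 1000)  # a narrow interval

# Previously close bounds could exclude valid values or trigger
# internal errors; now generation stays within the interval.
@given(st.fractions(min_value=LO, max_value=HI))
def test_in_bounds(f):
    assert LO <= f <= HI
```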
-------------------
3.19.0 - 2017-08-20
-------------------
This release adds the :func:`~hypothesis.strategies.from_regex` strategy,
which generates strings that contain a match of a regular expression.
Thanks to Maxim Kulkin for creating the
`hypothesis-regex `_
package and then helping to upstream it! (:issue:`662`)
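A minimal sketch of the new strategy (the pattern here is illustrative):

```python
import re
from hypothesis import given, strategies as st

PATTERN = r"[A-Z]{2}[0-9]{4}"

# from_regex generates strings that *contain* a match of the pattern,
# so re.search (not re.fullmatch) is the right check here.
@given(st.from_regex(PATTERN))
def test_contains_match(s):
    assert re.search(PATTERN, s) is not None
```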
-------------------
3.18.5 - 2017-08-18
-------------------
This is a bugfix release for :func:`~hypothesis.strategies.integers`.
Previously the strategy would hit an internal assertion if passed non-integer
bounds for ``min_value`` and ``max_value`` that had no integers between them.
The strategy now raises InvalidArgument instead.
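For example (a sketch; the bounds are chosen so that no integer lies between them):

```python
from hypothesis import strategies as st
from hypothesis.errors import InvalidArgument

# There is no integer between 2.5 and 2.6, so this strategy is empty
# and Hypothesis rejects it explicitly instead of hitting an assertion.
try:
    st.integers(min_value=2.5, max_value=2.6).example()
    raised = False
except InvalidArgument:
    raised = True
```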
-------------------
3.18.4 - 2017-08-18
-------------------
This release fixes a bug where mocks could be used as test runners under
certain conditions. Specifically, if a mock was injected into a test via pytest
fixtures or patch decorators, and that mock was the first argument in the
list, Hypothesis would think it represented ``self`` and turn the mock
into a test runner. If this happened, the affected test always passed
because the mock was executed instead of the test body. Sometimes, it
would also fail health checks.
Fixes :issue:`491` and a section of :issue:`198`.
Thanks to Ben Peterson for this bug fix.
-------------------
3.18.3 - 2017-08-17
-------------------
This release should improve the performance of some tests which
experienced a slow down as a result of the 3.13.0 release.
Tests most likely to benefit from this are ones that make extensive
use of ``min_size`` parameters, but others may see some improvement
as well.
-------------------
3.18.2 - 2017-08-16
-------------------
This release fixes a bug introduced in 3.18.0. If the arguments
``whitelist_characters`` and ``blacklist_characters`` to
:func:`~hypothesis.strategies.characters` both contained elements, then an
``InvalidArgument`` exception would be raised.
Thanks to Zac Hatfield-Dodds for reporting and fixing this.
-------------------
3.18.1 - 2017-08-14
-------------------
This is a bug fix release to fix :issue:`780`, where
:func:`~hypothesis.strategies.sets` and similar would trigger health check
errors if their element strategy could only produce one element (e.g.
if it was :func:`~hypothesis.strategies.just`).
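A minimal sketch of the previously problematic case:

```python
from hypothesis import given, strategies as st

# just(0) can only ever produce one element, so the generated sets
# are either empty or {0}; this no longer trips health checks.
@given(st.sets(st.just(0)))
def test_tiny_element_space(s):
    assert s <= {0}
```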
-------------------
3.18.0 - 2017-08-13
-------------------
This is a feature release:
* :func:`~hypothesis.strategies.characters` now accepts
``whitelist_characters``, particular characters which will be added to those
it produces. (:issue:`668`)
* A bug fix for the internal function ``_union_interval_lists()``, and a rename
to ``_union_intervals()``. It now correctly handles all cases where intervals
overlap, and it always returns the result as a tuple of tuples.
Thanks to Alex Willmer for these.
-------------------
3.17.0 - 2017-08-07
-------------------
This release documents :ref:`the previously undocumented phases feature `,
making it part of the public API. It also updates how the example
database is used. Principally:
* A ``Phases.reuse`` argument will now correctly control whether examples
from the database are run (it previously did exactly the wrong thing and
controlled whether examples would be *saved*).
* Hypothesis will no longer try to rerun *all* previously failing examples.
Instead it will replay the smallest previously failing example and a
selection of other examples that are likely to trigger any other bugs that
will be found. This prevents a previous failure from dominating your tests
unnecessarily.
* As a result of the previous change, Hypothesis will be slower about clearing
out old examples from the database that are no longer failing (because it can
only clear out ones that it actually runs).
-------------------
3.16.1 - 2017-08-07
-------------------
This release makes an implementation change to how Hypothesis handles certain
internal constructs.
The main effect you should see is improvement to the behaviour and performance
of collection types, especially ones with a ``min_size`` parameter. Many cases
that would previously fail due to being unable to generate enough valid
examples will now succeed, and other cases should run slightly faster.
-------------------
3.16.0 - 2017-08-04
-------------------
This release introduces a deprecation of the timeout feature. This results in
the following changes:
* Creating a settings object with an explicit timeout will emit a deprecation
warning.
* If your test stops because it hits the timeout (and has not found a bug) then
it will emit a deprecation warning.
* There is a new value ``unlimited`` which you can import from hypothesis.
``settings(timeout=unlimited)`` will *not* cause a deprecation warning.
* There is a new health check, ``hung_test``, which will trigger after a test
has been running for five minutes if it is not suppressed.
-------------------
3.15.0 - 2017-08-04
-------------------
This release deprecates two strategies, :func:`~hypothesis.strategies.choices` and
:func:`~hypothesis.strategies.streaming`.
Both of these are somewhat confusing to use and are entirely redundant since the
introduction of the :func:`~hypothesis.strategies.data` strategy for interactive
drawing in tests, and their use should be replaced with direct use of
:func:`~hypothesis.strategies.data` instead.
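A sketch of the replacement pattern, drawing interactively inside the test body:

```python
from hypothesis import given, strategies as st

# data() lets you draw values mid-test, covering both what choices()
# (pick from a sequence) and streaming() (draw lazily) were used for.
@given(st.data())
def test_interactive_draws(data):
    n = data.draw(st.integers(min_value=0, max_value=5))
    xs = data.draw(st.lists(st.integers(), min_size=n, max_size=n))
    assert len(xs) == n
```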
-------------------
3.14.2 - 2017-08-03
-------------------
This fixes a bug where Hypothesis would not work correctly on Python 2.7 if you
had the :mod:`python:typing` module :pypi:`backport ` installed.
-------------------
3.14.1 - 2017-08-02
-------------------
This raises the maximum depth at which Hypothesis starts cutting off data
generation to a more reasonable value, one that is harder to hit by accident.
This resolves (:issue:`751`), in which some examples which previously worked
would start timing out, but it will also likely improve the data generation
quality for complex data types.
-------------------
3.14.0 - 2017-07-23
-------------------
Hypothesis now understands inline type annotations (:issue:`293`):
- If the target of :func:`~hypothesis.strategies.builds` has type annotations,
a default strategy for missing required arguments is selected based on the
type. Type-based strategy selection will only override a default if you
pass :const:`hypothesis.infer` as a keyword argument.
- If :func:`@given ` wraps a function with type annotations,
you can pass :const:`~hypothesis.infer` as a keyword argument and the
appropriate strategy will be substituted.
- You can check what strategy will be inferred for a type with the new
:func:`~hypothesis.strategies.from_type` function.
- :func:`~hypothesis.strategies.register_type_strategy` teaches Hypothesis
which strategy to infer for custom or unknown types. You can provide a
strategy, or for more complex cases a function which takes the type and
returns a strategy.
-------------------
3.13.1 - 2017-07-20
-------------------
This is a bug fix release for :issue:`514` - Hypothesis would continue running
examples after a :class:`~python:unittest.SkipTest` exception was raised,
including printing a falsifying example. Skip exceptions from the standard
:mod:`python:unittest` module, and ``pytest``, ``nose``, or ``unittest2``
modules now abort the test immediately without printing output.
-------------------
3.13.0 - 2017-07-16
-------------------
This release has two major aspects to it: The first is the introduction of
:func:`~hypothesis.strategies.deferred`, which allows more natural definition
of recursive (including mutually recursive) strategies.
The second is a number of engine changes designed to support this sort of
strategy better. These should have a knock-on effect of also improving the
performance of any existing strategies that currently generate a lot of data
or involve heavy nesting by reducing their typical example size.
-------------------
3.12.0 - 2017-07-07
-------------------
This release makes some major internal changes to how Hypothesis represents
data internally, as a prelude to some major engine changes that should improve
data quality. There are no API changes, but it's a significant enough internal
change that a minor version bump seemed warranted.
User facing impact should be fairly mild, but includes:
* All existing examples in the database will probably be invalidated. Hypothesis
handles this automatically, so you don't need to do anything, but if you see
all your examples disappear that's why.
* Almost all data distributions have changed significantly. Possibly for the better,
possibly for the worse. This may result in new bugs being found, but it may
also result in Hypothesis being unable to find bugs it previously did.
* Data generation may be somewhat faster if your existing bottleneck was in
draw_bytes (which is often the case for large examples).
* Shrinking will probably be slower, possibly significantly.
If you notice any effects you consider to be a significant regression, please
open an issue about them.
-------------------
3.11.6 - 2017-06-19
-------------------
This release involves no functionality changes, but is the first to ship wheels
as well as an sdist.
-------------------
3.11.5 - 2017-06-18
-------------------
This release provides a performance improvement to shrinking. For cases where
there is some non-trivial "boundary" value (e.g. the bug happens for all values
greater than some other value), shrinking should now be substantially faster.
Other types of bug will likely see improvements too.
This may also result in some changes to the quality of the final examples - it
may sometimes be better, but is more likely to get slightly worse in some edge
cases. If you see any examples where this happens in practice, please report
them.
-------------------
3.11.4 - 2017-06-17
-------------------
This is a bugfix release: Hypothesis now prints explicit examples when
running in verbose mode. (:issue:`313`)
-------------------
3.11.3 - 2017-06-11
-------------------
This is a bugfix release: Hypothesis no longer emits a warning if you try to
use :func:`~hypothesis.strategies.sampled_from` with
:class:`python:collections.OrderedDict`. (:issue:`688`)
-------------------
3.11.2 - 2017-06-10
-------------------
This is a documentation release. Several outdated snippets have been updated
or removed, and many cross-references are now hyperlinks.
-------------------
3.11.1 - 2017-05-28
-------------------
This is a minor ergonomics release. Tracebacks shown by pytest no longer
include Hypothesis internals for test functions decorated with :func:`@given `.
-------------------
3.11.0 - 2017-05-23
-------------------
This is a feature release, adding datetime-related strategies to the core strategies.
:func:`~hypothesis.extra.pytz.timezones` allows you to sample pytz timezones from
the Olson database. Use directly in a recipe for tz-aware datetimes, or
compose with :func:`~hypothesis.strategies.none` to allow a mix of aware and naive output.
The new :func:`~hypothesis.strategies.dates`, :func:`~hypothesis.strategies.times`,
:func:`~hypothesis.strategies.datetimes`, and :func:`~hypothesis.strategies.timedeltas`
strategies are all constrained by objects of their type.
This means that you can generate dates bounded by a single day
(i.e. a single date), or datetimes constrained to the microsecond.
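For instance, equal bounds collapse the strategy to a single possible date (the day chosen here is arbitrary):

```python
from datetime import date
from hypothesis import given, strategies as st

DAY = date(2000, 1, 1)

# With min_value == max_value, every generated example is that date.
@given(st.dates(min_value=DAY, max_value=DAY))
def test_single_day(d):
    assert d == DAY
```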
:func:`~hypothesis.strategies.times` and :func:`~hypothesis.strategies.datetimes`
take an optional ``timezones=`` argument, which
defaults to :func:`~hypothesis.strategies.none` for naive times. You can use our extra strategy
based on pytz, or roll your own timezones strategy with dateutil or even
the standard library.
The old ``dates``, ``times``, and ``datetimes`` strategies in
``hypothesis.extra.datetimes`` are deprecated in favor of the new
core strategies, which are more flexible and have no dependencies.
-------------------
3.10.0 - 2017-05-22
-------------------
Hypothesis now uses :func:`python:inspect.getfullargspec` internally.
On Python 2, there are no visible changes.
On Python 3 :func:`@given ` and :func:`@composite `
now preserve :pep:`3107` annotations on the
decorated function. Keyword-only arguments are now either handled correctly
(e.g. :func:`@composite `), or caught in validation instead of silently discarded
or raising an unrelated error later (e.g. :func:`@given `).
------------------
3.9.1 - 2017-05-22
------------------
This is a bugfix release: the default field mapping for a DateTimeField in the
Django extra now respects the ``USE_TZ`` setting when choosing a strategy.
------------------
3.9.0 - 2017-05-19
------------------
This is a feature release, expanding the capabilities of the
:func:`~hypothesis.strategies.decimals` strategy.
* The new (optional) ``places`` argument allows you to generate decimals with
a certain number of places (e.g. cents, thousandths, satoshis).
* If ``allow_infinity`` is ``None``, setting ``min_value`` no longer excludes positive
infinity and setting ``max_value`` no longer excludes negative infinity.
* All of ``NaN``, ``-NaN``, ``sNaN``, and ``-sNaN`` may now be drawn if
``allow_nan`` is ``True``, or if ``allow_nan`` is ``None`` and ``min_value`` or ``max_value`` is ``None``.
* ``min_value`` and ``max_value`` may be given as decimal strings, e.g. ``"1.234"``.
------------------
3.8.5 - 2017-05-16
------------------
Hypothesis now imports :mod:`python:sqlite3` when a SQLite database is used, rather
than at module load, improving compatibility with Python implementations
compiled without SQLite support (such as BSD or Jython).
------------------
3.8.4 - 2017-05-16
------------------
This is a compatibility bugfix release. ``sampled_from`` no longer raises
a deprecation warning when sampling from an ``Enum``, as all enums have a
reliable iteration order.
------------------
3.8.3 - 2017-05-09
------------------
This release removes a version check for older versions of pytest when using
the Hypothesis pytest plugin. The pytest plugin will now run unconditionally
on all versions of pytest. This breaks compatibility with any version of pytest
prior to 2.7.0 (which is more than two years old).
The primary reason for this change is that the version check was a frequent
source of breakage when pytest changed its versioning scheme. If you are not
working on pytest itself and are not running a very old version of it, this
release probably doesn't affect you.
------------------
3.8.2 - 2017-04-26
------------------
This is a code reorganisation release that moves some internal test helpers
out of the main source tree so as to not have changes to them trigger releases
in future.
------------------
3.8.1 - 2017-04-26
------------------
This is a documentation release. Almost all code examples are now doctests
checked in CI, eliminating stale examples.
------------------
3.8.0 - 2017-04-23
------------------
This is a feature release, adding the :func:`~hypothesis.strategies.iterables` strategy, equivalent
to ``lists(...).map(iter)`` but with a much more useful repr. You can use
this strategy to check that code doesn't accidentally depend on sequence
properties such as indexing support or repeated iteration.
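A minimal sketch of catching such a dependency:

```python
from hypothesis import given, strategies as st

# iterables() yields one-shot iterators: code that accidentally
# relies on repeated iteration will see an empty second pass.
@given(st.iterables(st.integers(), min_size=1))
def test_single_pass(it):
    first = list(it)
    assert len(first) >= 1
    assert list(it) == []  # the iterator is already exhausted
```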
------------------
3.7.4 - 2017-04-22
------------------
This is a bug fix release for a single bug:
* In 3.7.3, using :func:`@example ` and a pytest fixture in the same test could
cause the test to fail to fill the arguments, and throw a TypeError.
------------------
3.7.3 - 2017-04-21
------------------
This release should include no user visible changes and is purely a refactoring
release. This modularises the behaviour of the core :func:`~hypothesis.given` function, breaking
it up into smaller and more accessible parts, but its actual behaviour should
remain unchanged.
------------------
3.7.2 - 2017-04-21
------------------
This reverts an undocumented change in 3.7.1 which broke installation on
debian stable: The specifier for the hypothesis[django] extra\_requires had
introduced a wild card, which was not supported on the default version of pip.
------------------
3.7.1 - 2017-04-21
------------------
This is a bug fix and internal improvements release.
* In particular Hypothesis now tracks a tree of where it has already explored.
This allows it to avoid some classes of duplicate examples, and significantly
improves the performance of shrinking failing examples by allowing it to
skip some shrinks that it can determine can't possibly work.
* Hypothesis will no longer seed the global random arbitrarily unless you have
asked it to using :func:`~hypothesis.strategies.random_module`.
* Shrinking would previously have not worked correctly in some special cases
on Python 2, and would have resulted in suboptimal examples.
------------------
3.7.0 - 2017-03-20
------------------
This is a feature release.
New features:
* Rule based stateful testing now has an ``@invariant`` decorator that specifies
methods that are run after init and after every step, allowing you to
encode properties that should be true at all times. Thanks to Tom Prince for
this feature.
* The :func:`~hypothesis.strategies.decimals` strategy now supports ``allow_nan`` and ``allow_infinity`` flags.
* There are significantly more strategies available for numpy, including for
generating arbitrary data types. Thanks to Zac Hatfield Dodds for this
feature.
* When using the :func:`~hypothesis.strategies.data` strategy you can now add a label as an argument to
``draw()``, which will be printed along with the value when an example fails.
Thanks to Peter Inglesby for this feature.
Bug fixes:
* Bug fix: :func:`~hypothesis.strategies.composite` now preserves functions' docstrings.
* The build is now reproducible and doesn't depend on the path you build it
from. Thanks to Chris Lamb for this feature.
* numpy strategies for the void data type did not work correctly. Thanks to
Zac Hatfield Dodds for this fix.
There have also been a number of performance optimizations:
* The :func:`~hypothesis.strategies.permutations` strategy is now significantly faster to use for large
lists (the underlying algorithm has gone from O(n^2) to O(n)).
* Shrinking of failing test cases should have got significantly faster in
some circumstances where it was previously struggling for a long time.
* Example generation now involves less indirection, which results in a small
speedup in some cases (small enough that you won't really notice it except in
pathological cases).
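The invariant idea from the 3.7.0 feature list above can be sketched in pure Python: methods marked as invariants run after init and after every step. (This is a minimal illustration of the concept only; the names ``invariant``, ``Counter``, and ``step`` here are hypothetical, not Hypothesis's implementation.)

```python
def invariant(fn):
    # Tag a method so the state machine knows to re-check it after each step.
    fn._is_invariant = True
    return fn

class Counter:
    def __init__(self):
        self.value = 0
        self._check_invariants()  # invariants also run right after init

    def _check_invariants(self):
        for name in dir(self):
            method = getattr(self, name)
            if getattr(method, "_is_invariant", False):
                method()

    def step(self, delta):
        self.value = max(0, self.value + delta)
        self._check_invariants()  # and after every step

    @invariant
    def never_negative(self):
        assert self.value >= 0

c = Counter()
for d in [3, -5, 2]:
    c.step(d)
assert c.value == 2
```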
------------------
3.6.1 - 2016-12-20
------------------
This release fixes a dependency problem and makes some small behind the scenes
improvements.
* The fake-factory dependency was renamed to faker. If you were depending on
it through hypothesis[django] or hypothesis[fake-factory] without pinning it
yourself then it would have failed to install properly. This release changes
it so that hypothesis[fakefactory] (which can now also be installed as
hypothesis[faker]) will install the renamed faker package instead.
* This release also removed the dependency of hypothesis[django] on
hypothesis[fakefactory] - it was only being used for emails. These now use
a custom strategy that isn't from fakefactory. As a result you should also
see performance improvements of tests which generated User objects or other
things with email fields, as well as better shrinking of email addresses.
* The distribution of code using nested calls to :func:`~hypothesis.strategies.one_of` or the ``|`` operator for
combining strategies has been improved, as branches are now flattened to give
a more uniform distribution.
* Examples using :func:`~hypothesis.strategies.composite` or ``.flatmap`` should now shrink better. In particular
this will affect things which work by first generating a length and then
generating that many items, which have historically not shrunk very well.
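The branch-flattening change described above can be sketched with a toy model, where a nested ``one_of`` is represented as a nested list of alternatives (an illustration only, not Hypothesis internals): after flattening, each leaf alternative is weighted equally instead of each level of nesting halving its branches' probability.

```python
def flatten(branches):
    # Recursively splice nested alternative groups into one flat list of leaves.
    out = []
    for b in branches:
        if isinstance(b, list):  # a nested one_of, modelled here as a list
            out.extend(flatten(b))
        else:
            out.append(b)
    return out

# (a | b) | (c | d) modelled as nested branches flattens to four equal leaves:
assert flatten([["a", "b"], ["c", "d"]]) == ["a", "b", "c", "d"]
```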
------------------
3.6.0 - 2016-10-31
------------------
This release reverts Hypothesis to its old pretty printing of lambda functions
based on attempting to extract the source code rather than decompile the bytecode.
This is unfortunately slightly inferior in some cases and may result in you
occasionally seeing things like ``lambda x:`` in statistics reports and
strategy reprs.
This removes the dependencies on uncompyle6, xdis and spark-parser.
The reason for this is that the new functionality was based on uncompyle6, which
turns out to introduce a hidden GPLed dependency - it in turn depended on xdis,
and although the library was licensed under the MIT license, it contained some
GPL licensed source code and thus should have been released under the GPL.
My interpretation is that Hypothesis itself was never in violation of the GPL
(because the license it is under, the Mozilla Public License v2, is fully
compatible with being included in a GPL licensed work), but I have not consulted
a lawyer on the subject. Regardless of the answer to this question, adding a
GPLed dependency will likely cause a lot of users of Hypothesis to inadvertently
be in violation of the GPL.
As a result, if you are running Hypothesis 3.5.x you really should upgrade to
this release immediately.
------------------
3.5.3 - 2016-10-05
------------------
This is a bug fix release.
Bugs fixed:
* If the same test was running concurrently in two processes and there were
examples already in the test database which no longer failed, Hypothesis
would sometimes fail with a FileNotFoundError (IOError on Python 2) because
an example it was trying to read was deleted before it was read. (:issue:`372`).
* Drawing from an :func:`~hypothesis.strategies.integers` strategy with both a min_value and a max_value
would reject too many examples needlessly. Now it repeatedly redraws until
satisfied. (:pull:`366`. Thanks to Calen Pennington for the contribution).
------------------
3.5.2 - 2016-09-24
------------------
This is a bug fix release.
* The Hypothesis pytest plugin broke pytest support for doctests. Now it doesn't.
------------------
3.5.1 - 2016-09-23
------------------
This is a bug fix release.
* Hypothesis now runs cleanly in -B and -BB modes, avoiding mixing bytes and unicode.
* :class:`python:unittest.TestCase` tests would not have shown up in the new statistics mode. Now they
do.
* Similarly, stateful tests would not have shown up in statistics and now they do.
* Statistics now print with pytest node IDs (the names you'd get in pytest verbose mode).
------------------
3.5.0 - 2016-09-22
------------------
This is a feature release.
* :func:`~hypothesis.strategies.fractions` and :func:`~hypothesis.strategies.decimals` strategies now support min_value and max_value
parameters. Thanks go to Anne Mulhern for the development of this feature.
* The Hypothesis pytest plugin now supports a --hypothesis-show-statistics parameter
that gives detailed statistics about the tests that were run. Huge thanks to
Jean-Louis Fuchs and Adfinis-SyGroup for funding the development of this feature.
* There is a new :func:`~hypothesis.event` function that can be used to add custom statistics.
Additionally there have been some minor bug fixes:
* In some cases Hypothesis should produce fewer duplicate examples (this will mostly
only affect cases with a single parameter).
* py.test command line parameters are now under an option group for Hypothesis (thanks
to David Keijser for fixing this)
* Hypothesis would previously error if you used :pep:`3107` function annotations on your tests under
Python 3.4.
* The repr of many strategies using lambdas has been improved to include the lambda body
(this was previously supported in many but not all cases).
------------------
3.4.2 - 2016-07-13
------------------
This is a bug fix release, fixing a number of problems with the settings system:
* Test functions defined using ``@given`` can now be called from other threads
(:issue:`337`)
* Attempting to delete a settings property would previously have silently done
the wrong thing. Now it raises an AttributeError.
* Creating a settings object with a custom database_file parameter was silently
getting ignored and the default was being used instead. Now it's not.
------------------
3.4.1 - 2016-07-07
------------------
This is a bug fix release for a single bug:
* On Windows when running two Hypothesis processes in parallel (e.g. using
pytest-xdist) they could race with each other and one would raise an exception
due to the non-atomic nature of file renaming on Windows and the fact that you
can't rename over an existing file. This is now fixed.
------------------
3.4.0 - 2016-05-27
------------------
This release is entirely provided by Lucas Wiman:
Strategies constructed by :func:`~hypothesis.extra.django.models` will now respect much more of
Django's validations out of the box. Wherever possible full_clean() should
succeed.
In particular:
* The max_length, blank and choices kwargs are now respected.
* Add support for DecimalField.
* If a field includes validators, the list of validators are used to filter the field strategy.
------------------
3.3.0 - 2016-05-27
------------------
This release went wrong and is functionally equivalent to 3.2.0. Ignore it.
------------------
3.2.0 - 2016-05-19
------------------
This is a small single-feature release:
* All tests using ``@given`` now fix the global random seed. This removes the health
check for that. If a non-zero seed is required for the final falsifying
example, it will be reported. Otherwise Hypothesis will assume randomization
was not a significant factor for the test and be silent on the subject. If you
use :func:`~hypothesis.strategies.random_module` this will continue to work and will always
display the seed.
------------------
3.1.3 - 2016-05-01
------------------
Single bug fix release
* Another charmap problem. In 3.1.2 :func:`~hypothesis.strategies.text` and
:func:`~hypothesis.strategies.characters` would break on systems
which had ``/tmp`` mounted on a different partition than the Hypothesis storage
directory (usually in home). This fixes that.
------------------
3.1.2 - 2016-04-30
------------------
Single bug fix release:
* Anything which used a :func:`~hypothesis.strategies.text` or
:func:`~hypothesis.strategies.characters` strategy was broken on Windows
and I hadn't updated appveyor to use the new repository location so I didn't
notice. This is now fixed and Windows support should work correctly.
------------------
3.1.1 - 2016-04-29
------------------
Minor bug fix release.
* Fix concurrency issue when running tests that use :func:`~hypothesis.strategies.text` from multiple
processes at once (:issue:`302`, thanks to Alex Chan).
* Improve performance of code using :func:`~hypothesis.strategies.lists` with max_size (thanks to
Cristi Cobzarenco).
* Fix install on Python 2 with ancient versions of pip so that it installs the
enum34 backport (thanks to Donald Stufft for telling me how to do this).
* Remove duplicated __all__ exports from hypothesis.strategies (thanks to
Piët Delport).
* Update headers to point to new repository location.
* Allow use of strategies that can't be used in :func:`~hypothesis.find`
(e.g. :func:`~hypothesis.strategies.choices`) in stateful testing.
------------------
3.1.0 - 2016-03-06
------------------
* Add a :func:`~hypothesis.strategies.nothing` strategy that never successfully generates values.
* :func:`~hypothesis.strategies.sampled_from` and :func:`~hypothesis.strategies.one_of`
can both now be called with an empty argument
list, in which case they also never generate any values.
* :func:`~hypothesis.strategies.one_of` may now be called with a single argument that is a collection of strategies
as well as with varargs.
* Add a :func:`~hypothesis.strategies.runner` strategy which returns the instance of the current test object
if there is one.
* 'Bundle' for RuleBasedStateMachine is now a normal(ish) strategy and can be used
as such.
* Tests using RuleBasedStateMachine should now shrink significantly better.
* Hypothesis now uses a pretty-printing library internally, compatible with IPython's
pretty printing protocol (actually using the same code). This may improve the quality
of output in some cases.
* Add a 'phases' setting that allows more fine grained control over which parts of the
process Hypothesis runs
* Add a suppress_health_check setting which allows you to turn off specific health checks
in a fine grained manner.
* Fix a bug where lists of non fixed size would always draw one more element than they
included. This mostly didn't matter, but it would cause problems with empty strategies
or ones with side effects.
* Add a mechanism to the Django model generator to allow you to explicitly request the
default value (thanks to Jeremy Thurgood for this one).
------------------
3.0.5 - 2016-02-25
------------------
* Fix a bug where Hypothesis would error on py.test development versions.
------------------
3.0.4 - 2016-02-24
------------------
* Fix a bug where Hypothesis would error when running on Python 2.7.3 or
earlier because it was trying to pass a :class:`python:bytearray` object
to :func:`python:struct.unpack` (which is only supported since 2.7.4).
------------------
3.0.3 - 2016-02-23
------------------
* Fix version parsing of py.test to work with py.test release candidates
* More general handling of the health check problem where things could fail because
of a cache miss - now one "free" example is generated before the start of the
health check run.
------------------
3.0.2 - 2016-02-18
------------------
* Under certain circumstances, strategies involving :func:`~hypothesis.strategies.text` buried inside some
other strategy (e.g. ``text().filter(...)`` or ``recursive(text(), ...)``) would cause
a test to fail its health checks the first time it ran. This was caused by having
to compute some related data and cache it to disk. On travis or anywhere else
where the ``.hypothesis`` directory was recreated this would have caused the tests
to fail their health check on every run. This is now fixed for all the known cases,
although there could be others lurking.
------------------
3.0.1 - 2016-02-18
------------------
* Fix a case where it was possible to trigger an "Unreachable" assertion when
running certain flaky stateful tests.
* Improve shrinking of large stateful tests by eliminating a case where it was
hard to delete early steps.
* Improve efficiency of drawing ``binary(min_size=n, max_size=n)`` significantly by
providing a custom implementation for fixed size blocks that can bypass a lot
of machinery.
* Set default home directory based on the current working directory at the
point Hypothesis is imported, not whenever the function first happens to be
called.
------------------
3.0.0 - 2016-02-17
------------------
Codename: This really should have been 2.1.
Externally this looks like a very small release. It has one small breaking change
that probably doesn't affect anyone at all (some behaviour that never really worked
correctly is now outright forbidden) but necessitated a major version bump and one
visible new feature.
Internally this is a complete rewrite. Almost nothing other than the public API is
the same.
New features:
* Addition of :func:`~hypothesis.strategies.data` strategy which allows you to draw arbitrary data interactively
within the test.
* New "exploded" database format which allows you to more easily check the example
database into a source repository while supporting merging.
* Better management of how examples are saved in the database.
* Health checks will now raise as errors when they fail. It was too easy to have
the warnings be swallowed entirely.
New limitations:
* :func:`~hypothesis.strategies.choices` and :func:`~hypothesis.strategies.streaming`
strategies may no longer be used with :func:`~hypothesis.find`. Neither may
:func:`~hypothesis.strategies.data` (this is the change that necessitated a major version bump).
Feature removal:
* The ForkingTestCase executor has gone away. It may return in some more working
form at a later date.
Performance improvements:
* A new model which allows flatmap, composite strategies and stateful testing to
perform *much* better. They should also be more reliable.
* Filtering may in some circumstances have improved significantly. This will
help especially in cases where you have lots of values with individual filters
on them, such as lists(x.filter(...)).
* Modest performance improvements to the general test runner by avoiding expensive
operations
In general your tests should have got faster. If they've instead got significantly
slower, I'm interested in hearing about it.
Data distribution:
The data distribution should have changed significantly. This may uncover bugs the
previous version missed. It may also miss bugs the previous version could have
uncovered. Hypothesis is now producing less strongly correlated data than it used
to, but the correlations are extended over more of the structure.
Shrinking:
Shrinking quality should have improved. In particular Hypothesis can now perform
simultaneous shrinking of separate examples within a single test (previously it
was only able to do this for elements of a single collection). In some cases
performance will have improved, in some cases it will have got worse but generally
shouldn't have by much.
------------------
2.0.0 - 2016-01-10
------------------
Codename: A new beginning
This release cleans up all of the legacy that accrued in the course of
Hypothesis 1.0. These are mostly things that were emitting deprecation warnings
in 1.19.0, but there were a few additional changes.
In particular:
* non-strategy values will no longer be converted to strategies when used in
given or find.
* FailedHealthCheck is now an error and not a warning.
* Handling of non-ascii reprs in user types have been simplified by using raw
strings in more places in Python 2.
* given no longer allows mixing positional and keyword arguments.
* given no longer works with functions with defaults.
* given no longer turns provided arguments into defaults - they will not appear
in the argspec at all.
* the basic() strategy no longer exists.
* the n_ary_tree strategy no longer exists.
* the average_list_length setting no longer exists. Note: If you're using
recursive() this will cause you a significant slow down. You should
pass explicit average_size parameters to collections in recursive calls.
* @rule can no longer be applied to the same method twice.
* Python 2.6 and 3.3 are no longer officially supported, although in practice
they still work fine.
This also includes two non-deprecation changes:
* given's keyword arguments no longer have to be the rightmost arguments and
can appear anywhere in the method signature.
* The max_shrinks setting would sometimes not have been respected.
-------------------
1.19.0 - 2016-01-09
-------------------
Codename: IT COMES
This release heralds the beginning of a new and terrible age of Hypothesis 2.0.
Its primary purpose is some final deprecations prior to said release. The goal
is that if your code emits no warnings under this release then it will probably run
unchanged under Hypothesis 2.0 (there are some caveats to this: 2.0 will drop
support for some Python versions, and if you're using internal APIs then as usual
that may break without warning).
It does have two new features:
* New @seed() decorator which allows you to manually seed a test. This may be
harmlessly combined with and overrides the derandomize setting.
* settings objects may now be used as a decorator to fix those settings to a
particular @given test.
API changes (old usage still works but is deprecated):
* Settings has been renamed to settings (lower casing) in order to make the
decorator usage more natural.
* Functions for the storage directory that were in hypothesis.settings are now
in a new hypothesis.configuration module.
Additional deprecations:
* the average_list_length setting has been deprecated in favour of being
explicit.
* the basic() strategy has been deprecated as it is impossible to support
it under a Conjecture based model, which will hopefully be implemented at
some point in the 2.x series.
* the n_ary_tree strategy (which was never actually part of the public API)
has been deprecated.
* Passing settings or random as keyword arguments to given is deprecated (use
the new functionality instead)
Bug fixes:
* No longer emit PendingDeprecationWarning for __iter__ and StopIteration in
streaming() values.
* When running in health check mode with non strict, don't print quite so
many errors for an exception in reify.
* When an assumption made in a test or a filter is flaky, tests will now
raise Flaky instead of UnsatisfiedAssumption.
-----------------------------------------------------------------------
1.18.1 - 2015-12-22
-----------------------------------------------------------------------
Two behind the scenes changes:
* Hypothesis will no longer write generated code to the file system. This
will improve performance on some systems (e.g. if you're using
PythonAnywhere, which is running your
code from NFS) and prevent some annoying interactions with auto-restarting
systems.
* Hypothesis will cache the creation of some strategies. This can significantly
improve performance for code that uses flatmap or composite and thus has to
instantiate strategies a lot.
-----------------------------------------------------------------------
1.18.0 - 2015-12-21
-----------------------------------------------------------------------
Features:
* Tests and find are now explicitly seeded off the global random module. This
means that if you nest one inside the other you will now get a health check
error. It also means that you can control global randomization by seeding
random.
* There is a new random_module() strategy which seeds the global random module
for you and handles things so that you don't get a health check warning if
you use it inside your tests.
* floats() now accepts two new arguments: allow\_nan and allow\_infinity. These
default to the old behaviour, but when set to False will do what the names
suggest.
Bug fixes:
* Fix a bug where tests that used text() on Python 3.4+ would not actually be
deterministic even when explicitly seeded or using the derandomize mode,
because generation depended on dictionary iteration order which was affected
by hash randomization.
* Fix a bug where with complicated strategies the timing of the initial health
check could affect the seeding of the subsequent test, which would also
render supposedly deterministic tests non-deterministic in some scenarios.
* In some circumstances flatmap() could get confused by two structurally
similar things it could generate and would produce a flaky test where the
first time it produced an error but the second time it produced the other
value, which was not an error. The same bug was presumably also possible in
composite().
* flatmap() and composite() initial generation should now be moderately faster.
This will be particularly noticeable when you have many values drawn from the
same strategy in a single run, e.g. constructs like lists(s.flatmap(f)).
Shrinking performance *may* have suffered, but this didn't actually produce
an interestingly worse result in any of the standard scenarios tested.
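The semantics of the new ``allow_nan`` and ``allow_infinity`` arguments described in the 1.18.0 feature list above can be sketched as a plain filter (an illustration of what the flags mean, not Hypothesis internals):

```python
import math

def acceptable(x, allow_nan=True, allow_infinity=True):
    # With both flags True (the old behaviour), everything passes; setting a
    # flag to False excludes the corresponding special values.
    if not allow_nan and math.isnan(x):
        return False
    if not allow_infinity and math.isinf(x):
        return False
    return True

assert acceptable(float("nan"))
assert not acceptable(float("nan"), allow_nan=False)
assert not acceptable(float("inf"), allow_infinity=False)
assert acceptable(1.5, allow_nan=False, allow_infinity=False)
```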
-----------------------------------------------------------------------
1.17.1 - 2015-12-16
-----------------------------------------------------------------------
A small bug fix release, which fixes the fact that the 'note' function could
not be used on tests which used the @example decorator to provide explicit
examples.
-----------------------------------------------------------------------
1.17.0 - 2015-12-15
-----------------------------------------------------------------------
This is actually the same release as 1.16.1, but 1.16.1 has been pulled because
it contains the following additional change that was not intended to be in a
patch release (it's perfectly stable, but is a larger change that should have
required a minor version bump):
* Hypothesis will now perform a series of "health checks" as part of running
your tests. These detect and warn about some common error conditions that
people often run into which wouldn't necessarily have caused the test to fail
but would cause e.g. degraded performance or confusing results.
-----------------------------------------------------------------------
1.16.1 - 2015-12-14
-----------------------------------------------------------------------
Note: This release has been removed.
A small bugfix release that allows bdists for Hypothesis to be built
under 2.7 - the compat3.py file which had Python 3 syntax wasn't intended
to be loaded under Python 2, but when building a bdist it was. In particular
this would break running setup.py test.
-----------------------------------------------------------------------
1.16.0 - 2015-12-08
-----------------------------------------------------------------------
There are no public API changes in this release but it includes a behaviour
change that I wasn't comfortable putting in a patch release.
* Functions from hypothesis.strategies will no longer raise InvalidArgument
on bad arguments. Instead the same errors will be raised when a test
using such a strategy is run. This may improve startup time in some
cases, but the main reason for it is so that errors in strategies
won't cause errors in loading, and it can interact correctly with things
like pytest.mark.skipif.
* Errors caused by accidentally invoking the legacy API are now much less
confusing, although still throw NotImplementedError.
* hypothesis.extra.django is 1.9 compatible.
* When tests are run with max_shrinks=0 this will now still rerun the test
on failure and will no longer print "Trying example:" before each run.
Additionally note() will now work correctly when used with max_shrinks=0.
-----------------------------------------------------------------------
1.15.0 - 2015-11-24
-----------------------------------------------------------------------
A release with two new features.
* A 'characters' strategy for more flexible generation of text with particular
character ranges and types, kindly contributed by Alexander Shorin.
* Add support for preconditions to the rule based stateful testing. Kindly
contributed by Christopher Armstrong.
-----------------------------------------------------------------------
1.14.0 - 2015-11-01
-----------------------------------------------------------------------
New features:
* Add 'note' function which lets you include additional information in the
final test run's output.
* Add 'choices' strategy which gives you a choice function that emulates
random.choice.
* Add 'uuid' strategy that generates UUIDs.
* Add 'shared' strategy that lets you create a strategy that just generates a
single shared value for each test run
Bugs:
* Using strategies of the form streaming(x.flatmap(f)) with find or in stateful
testing would have caused InvalidArgument errors when the resulting values
were used (because code that expected to only be called within a test context
would be invoked).
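The 'choices' idea from the 1.14.0 feature list above can be sketched in pure Python: the test is handed a choice function that behaves like ``random.choice`` but is driven by a seeded source, so runs are reproducible. (The ``make_choice`` helper here is hypothetical, for illustration only, not Hypothesis internals.)

```python
import random

def make_choice(seed):
    # A choice function backed by its own seeded RNG, emulating random.choice.
    rng = random.Random(seed)
    def choice(seq):
        return seq[rng.randrange(len(seq))]
    return choice

c1 = make_choice(42)
c2 = make_choice(42)
xs = ["a", "b", "c", "d"]
# The same seed reproduces the same sequence of choices:
assert [c1(xs) for _ in range(5)] == [c2(xs) for _ in range(5)]
```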
-----------------------------------------------------------------------
1.13.0 - 2015-10-29
-----------------------------------------------------------------------
This is quite a small release, but deprecates some public API functions
and removes some internal API functionality so gets a minor version bump.
* All calls to the 'strategy' function are now deprecated, even ones which
pass just a SearchStrategy instance (which is still a no-op).
* The never-documented hypothesis.extra entry_points mechanism has now been removed
(it was previously how hypothesis.extra packages were loaded and has been deprecated
and unused for some time).
* Some corner cases that could previously have produced an OverflowError when simplifying
failing cases using hypothesis.extra.datetimes (or dates or times) have now been fixed.
* Hypothesis load time for first import has been significantly reduced - it used to be
around 250ms (on my SSD laptop) and now is around 100-150ms. This almost never
matters but was slightly annoying when using it in the console.
* hypothesis.strategies.randoms was previously missing from \_\_all\_\_.
-----------------------------------------------------------------------
1.12.0 - 2015-10-18
-----------------------------------------------------------------------
* Significantly improved performance of creating strategies using the functions
from the hypothesis.strategies module by deferring the calculation of their
repr until it was needed. This is unlikely to have been a performance issue
for you unless you were using flatmap, composite or stateful testing, but for
some cases it could be quite a significant impact.
* A number of cases where the repr of strategies built from lambdas has been improved
* Add dates() and times() strategies to hypothesis.extra.datetimes
* Add new 'profiles' mechanism to the settings system
* Deprecates mutability of Settings, both the Settings.default top level property
and individual settings.
* A Settings object may now be directly initialized from a parent Settings.
* @given should now give a better error message if you attempt to use it with a
function that uses destructuring arguments (it still won't work, but it will
error more clearly).
* A number of spelling corrections in error messages
* py.test should no longer display the intermediate modules Hypothesis generates
when running in verbose mode
* Hypothesis should now correctly handle printing objects with non-ascii reprs
on python 3 when running in a locale that cannot handle ascii printing to
stdout.
* Add a unique=True argument to lists(). This is equivalent to
unique_by=lambda x: x, but offers a more convenient syntax.
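The ``unique_by`` semantics mentioned above can be sketched as follows: two elements collide when their keys compare equal, so ``unique=True`` is exactly the special case ``unique_by=lambda x: x``. (The ``is_unique_by`` helper is a hypothetical illustration of the property being enforced, not Hypothesis internals.)

```python
def is_unique_by(xs, key):
    # The list satisfies the constraint when no two elements share a key.
    keys = [key(x) for x in xs]
    return len(keys) == len(set(keys))

assert is_unique_by([1, 2, 3], key=lambda x: x)       # unique=True behaviour
assert not is_unique_by([1, 2, 2], key=lambda x: x)
# unique_by with a derived key, e.g. case-insensitive uniqueness:
assert not is_unique_by(["a", "A"], key=str.lower)
```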
-----------------------------------------------------------------------
1.11.4 - 2015-09-27
-----------------------------------------------------------------------
* Hide modifications Hypothesis needs to make to sys.path by undoing them
after we've imported the relevant modules. This is a workaround for issues
cryptography experienced on windows.
* Slightly improved performance of drawing from sampled_from on large lists
of alternatives.
* Significantly improved performance of drawing from one_of or strategies
using \| (note this includes a lot of strategies internally - floats()
and integers() both fall into this category). There turned out to be a
massive performance regression introduced in 1.10.0 affecting these which
probably would have made tests using Hypothesis significantly slower than
they should have been.
-----------------------------------------------------------------------
1.11.3 - 2015-09-23
-----------------------------------------------------------------------
* Better argument validation for datetimes() strategy - previously setting
max_year < datetime.MIN_YEAR or min_year > datetime.MAX_YEAR would not have
raised an InvalidArgument error and instead would have behaved confusingly.
* Compatibility with being run on pytest < 2.7 (achieved by disabling the
plugin).
-----------------------------------------------------------------------
1.11.2 - 2015-09-23
-----------------------------------------------------------------------
Bug fixes:
* Settings(database=my_db) would not be correctly inherited when used as a
default setting, so that newly created settings would use the database_file
setting and create an SQLite example database.
* Settings.default.database = my_db would previously have raised an error and
now works.
* Timeout could sometimes be significantly exceeded if during simplification
there were a lot of examples tried that didn't trigger the bug.
* When loading a heavily simplified example using a basic() strategy from the
database this could cause Python to trigger a recursion error.
* Remove use of a deprecated API in the pytest plugin so as not to emit a warning.
Misc:
* hypothesis-pytest is now part of hypothesis core. This should have no
externally visible consequences, but you should update your dependencies to
remove hypothesis-pytest and depend on only Hypothesis.
* Better repr for hypothesis.extra.datetimes() strategies.
* Add .close() method to abstract base class for Backend (it was already present
in the main implementation).
-----------------------------------------------------------------------
`1.11.1 `_ - 2015-09-16
-----------------------------------------------------------------------
Bug fixes:
* When running Hypothesis tests in parallel (e.g. using pytest-xdist) there was a race
condition caused by code generation.
* Example databases are now cached per thread so as to not use sqlite connections from
multiple threads. This should make Hypothesis now entirely thread safe.
* floats() with only min_value or max_value set would have had a very bad distribution.
* Running on 3.5, Hypothesis would have emitted deprecation warnings because of use of
inspect.getargspec.
-----------------------------------------------------------------------
`1.11.0 `_ - 2015-08-31
-----------------------------------------------------------------------
* text() with a non-string alphabet would have used the repr() of the alphabet
instead of its contents. This is obviously silly. It now works with any sequence
of things convertible to unicode strings.
* @given will now work on methods whose definitions contains no explicit positional
arguments, only varargs (`bug #118 `_).
This may have some knock-on effects because it means that @given no longer changes the
argspec of functions other than by adding defaults.
* Introduction of new @composite feature for more natural definition of strategies you'd
previously have used flatmap for.
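As a rough illustration of the idea behind @composite (a toy model only, not Hypothesis's actual machinery): the decorated function receives a `draw` callable and builds a compound value in straight-line code, where previously you would have chained flatmap calls.

```python
import random

def composite(build):
    """Toy model of the @composite idea: the wrapped function gets a
    'draw' callable and composes a value from sub-generators."""
    def strategy():
        def draw(gen):
            return gen()
        return build(draw)
    return strategy

@composite
def point(draw):
    x = draw(lambda: random.randint(0, 10))
    y = draw(lambda: random.randint(0, x))  # y depends on x, as with flatmap
    return (x, y)
```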
-----------------------------------------------------------------------
`1.10.6 `_ - 2015-08-26
-----------------------------------------------------------------------
Fix support for fixtures on Django 1.7.
-------------------
1.10.4 - 2015-08-21
-------------------
Tiny bug fix release:
* If the database_file setting is set to None, this would have resulted in
an error when running tests. Now it does the same as setting database to
None.
-----------------------------------------------------------------------
`1.10.3 `_ - 2015-08-19
-----------------------------------------------------------------------
Another small bug fix release.
* lists(elements, unique_by=some_function, min_size=n) would have raised a
ValidationError if n > Settings.default.average_list_length because it would
have wanted to use an average list length shorter than the minimum size of
the list, which is impossible. Now it instead defaults to twice the minimum
size in these circumstances.
* basic() strategy would have only ever produced at most ten distinct values
per run of the test (which is bad if you e.g. have it inside a list). This
was obviously silly. It will now produce a much better distribution of data,
both duplicated and non-duplicated.
-----------------------------------------------------------------------
`1.10.2 `_ - 2015-08-19
-----------------------------------------------------------------------
This is a small bug fix release:
* Star imports from hypothesis should now work correctly.
* Example quality for examples using flatmap will be better, as the way it had
previously been implemented was causing problems where Hypothesis was
erroneously labelling some examples as being duplicates.
-----------------------------------------------------------------------
`1.10.0 `_ - 2015-08-04
-----------------------------------------------------------------------
This is just a bugfix and performance release, but it changes some
semi-public APIs, hence the minor version bump.
* Significant performance improvements for strategies which are one\_of()
many branches. In particular this included recursive() strategies. This
should take the case where you use one recursive() strategy as the base
strategy of another from unusably slow (tens of seconds per generated
example) to reasonably fast.
* Better handling of just() and sampled_from() for values which have an
incorrect \_\_repr\_\_ implementation that returns non-ASCII unicode
on Python 2.
* Better performance for flatmap from changing the internal morpher API
to be significantly less general purpose.
* Introduce a new semi-public BuildContext/cleanup API. This allows
strategies to register cleanup activities that should run once the
example is complete. Note that this will interact somewhat weirdly with
find.
* Better simplification behaviour for streaming strategies.
* Don't error on lambdas which use destructuring arguments in Python 2.
* Add some better reprs for a few strategies that were missing good ones.
* The Random instances provided by randoms() are now copyable.
* Slightly more debugging information about simplify when using a debug
verbosity level.
* Support using given for functions with varargs, but not passing arguments
to it as positional.
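The cleanup idea mentioned above can be sketched as a per-example registry of callbacks that run once the example is complete (an illustrative model only; the class and method names here are hypothetical stand-ins, not the real API):

```python
class BuildContext:
    """Toy model: collect cleanup callbacks while an example runs and
    fire them (last-registered first) when the example finishes."""
    def __init__(self):
        self._tasks = []

    def cleanup(self, task):
        self._tasks.append(task)

    def close(self):
        while self._tasks:
            self._tasks.pop()()

log = []
ctx = BuildContext()
ctx.cleanup(lambda: log.append("close file"))
ctx.cleanup(lambda: log.append("drop table"))
ctx.close()  # runs callbacks in reverse registration order
```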
---------------------------------------------------------------------
`1.9.0 `_ - 2015-07-27
---------------------------------------------------------------------
Codename: The great bundling.
This release contains two fairly major changes.
The first is the deprecation of the hypothesis-extra mechanism. From
now on all the packages that were previously bundled under it, other
than hypothesis-pytest (which is a different beast and will remain
separate), are part of Hypothesis itself. The functionality remains
unchanged and you can still import them from exactly the same location;
they just are no longer separate packages.
The second is that this introduces a new way of building strategies
which lets you build up strategies recursively from other strategies.
It also contains the minor change that calling .example() on a
strategy object will give you examples that are more representative of
the actual data you'll get. There used to be some logic in there to make
the examples artificially simple but this proved to be a bad idea.
---------------------------------------------------------------------
`1.8.5 `_ - 2015-07-24
---------------------------------------------------------------------
This contains no functionality changes but fixes a mistake made with
building the previous package that would have broken installation on
Windows.
---------------------------------------------------------------------
`1.8.4 `_ - 2015-07-20
---------------------------------------------------------------------
Bugs fixed:
* When a call to floats() had endpoints which were not floats but merely
convertible to one (e.g. integers), these would be included in the generated
data which would cause it to generate non-floats.
* Splitting lambdas used in the definition of flatmap, map or filter over
multiple lines would break the repr, which would in turn break their usage.
---------------------------------------------------------------------
`1.8.3 `_ - 2015-07-20
---------------------------------------------------------------------
"Falsifying example" would not have been printed when the failure came from an
explicit example.
---------------------------------------------------------------------
`1.8.2 `_ - 2015-07-18
---------------------------------------------------------------------
Another small bugfix release:
* When using ForkingTestCase you would usually not get the falsifying example
printed if the process exited abnormally (e.g. due to os._exit).
* Improvements to the distribution of characters when using text() with a
default alphabet. In particular produces a better distribution of ascii and
whitespace in the alphabet.
------------------
1.8.1 - 2015-07-17
------------------
This is a small release that contains a workaround for people who have
bad reprs returning non-ascii text on Python 2.7. This is not a bug fix
for Hypothesis per se because that's not a thing that is actually supposed
to work, but Hypothesis leans more heavily on repr than is typical so it's
worth having a workaround for.
---------------------------------------------------------------------
`1.8.0 `_ - 2015-07-16
---------------------------------------------------------------------
New features:
* Much more sensible reprs for strategies, especially ones that come from
hypothesis.strategies. These should now have as reprs python code that
would produce the same strategy.
* lists() accepts a unique_by argument which forces the generated lists to
contain only elements that are unique according to some function key (which must
return a hashable value).
* Better error messages from flaky tests to help you debug things.
Mostly invisible implementation details that may result in finding new bugs
in your code:
* Sets and dictionary generation should now produce a better range of results.
* floats with bounds now focus more on 'critical values', trying to produce
values at edge cases.
* flatmap should now have better simplification for complicated cases, as well
as generally being (I hope) more reliable.
Bug fixes:
* You could not previously use assume() if you were using the forking executor.
---------------------------------------------------------------------
`1.7.2 `_ - 2015-07-10
---------------------------------------------------------------------
This is purely a bug fix release:
* When using floats() with stale data in the database you could sometimes get
values in your tests that did not respect min_value or max_value.
* When getting a Flaky error from an unreliable test it would have incorrectly
displayed the example that caused it.
* The Python 2.6 dependency on backports was incorrectly specified. This would only have
caused you problems if you were building a universal wheel from Hypothesis,
which is not how Hypothesis ships, so unless you're explicitly building wheels
for your dependencies and support Python 2.6 plus a later version of Python
this probably would never have affected you.
* If you use flatmap in a way that the strategy on the right hand side depends
sensitively on the left hand side you may have occasionally seen Flaky errors
caused by producing unreliable examples when minimizing a bug. This use case
may still be somewhat fraught to be honest. This code is due a major rearchitecture
for 1.8, but in the meantime this release fixes the only source of this error that
I'm aware of.
---------------------------------------------------------------------
`1.7.1 `_ - 2015-06-29
---------------------------------------------------------------------
Codename: There is no 1.7.0.
A slight technical hitch with a premature upload means there was a yanked
1.7.0 release. Oops.
The major feature of this release is Python 2.6 support. Thanks to Jeff Meadows
for doing most of the work there.
Other minor features:
* strategies now has a permutations() function which returns a strategy
yielding permutations of values from a given collection.
* if you have a flaky test it will print the exception that it last saw before
failing with Flaky, even if you do not have verbose reporting on.
* Slightly experimental git merge script available as "python -m
hypothesis.tools.mergedbs". Instructions on how to use it in the docstring
of that file.
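For a sense of what the permutations() strategy yields, it can be approximated with the stdlib (illustration only, not the library's generator):

```python
import random

def draw_permutation(values, rng=random):
    """Return one random permutation of the given collection,
    leaving the original untouched."""
    result = list(values)
    rng.shuffle(result)
    return result
```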
Bug fixes:
* Better performance from use of filter. In particular tests which involve large
numbers of heavily filtered strategies should perform a lot better.
* floats() with a negative min_value would not have worked correctly (worryingly,
it would have just silently failed to run any examples). This is now fixed.
* tests using sampled\_from would error if the number of sampled elements was smaller
than min\_satisfying\_examples.
------------------
1.6.2 - 2015-06-08
------------------
This is just a few small bug fixes:
* Size bounds were not validated for values for a binary() strategy when
reading examples from the database.
* sampled\_from is now in __all__ in hypothesis.strategies.
* floats no longer consider negative integers to be simpler than positive
non-integers.
* Small floating point intervals now correctly count members, so if you have a
floating point interval so narrow there are only a handful of values in it,
this will no longer cause an error when Hypothesis runs out of values.
------------------
1.6.1 - 2015-05-21
------------------
This is a small patch release that fixes a bug where 1.6.0 broke the use
of flatmap with the deprecated API and assumed the passed in function returned
a SearchStrategy instance rather than converting it to a strategy.
---------------------------------------------------------------------
`1.6.0 `_ - 2015-05-21
---------------------------------------------------------------------
This is a smallish release designed to fix a number of bugs and smooth out
some weird behaviours.
* Fix a critical bug in flatmap where it would reuse old strategies. If all
your flatmap code was pure you're fine. If it's not, I'm surprised it's
working at all. In particular if you want to use flatmap with django models,
you desperately need to upgrade to this version.
* flatmap simplification performance should now be better in some cases where
it previously had to redo work.
* Fix for a bug where invalid unicode data with surrogates could be generated
during simplification (it was already filtered out during actual generation).
* The Hypothesis database is now keyed off the name of the test instead of the
type of data. This makes much more sense now with the new strategies API and
is generally more robust. This means you will lose old examples on upgrade.
* The database will now not delete values which fail to deserialize correctly,
just skip them. This is to handle cases where multiple incompatible strategies
share the same key.
* find now also saves and loads values from the database, keyed off a hash of the
function you're finding from.
* Stateful tests now serialize and load values from the database. They should have
before, really. This was a bug.
* Passing a different verbosity level into a test would not have worked entirely
correctly, leaving off some messages. This is now fixed.
* Fix a bug where derandomized tests with unicode characters in the function
body would error on Python 2.7.
---------------------------------------------------------------------
`1.5.0 `_ - 2015-05-14
---------------------------------------------------------------------
Codename: Strategic withdrawal.
The purpose of this release is a radical simplification of the API for building
strategies. Instead of the old approach of @strategy.extend and things that
get converted to strategies, you just build strategies directly.
The old method of defining strategies will still work until Hypothesis 2.0,
because removing it would be a major breaking change, but it will now emit
deprecation warnings.
The new API is also a lot more powerful as the functions for defining strategies
give you a lot of dials to turn. See :doc:`the updated data section ` for
details.
Other changes:
* Mixing keyword and positional arguments in a call to @given is deprecated as well.
* There is a new setting called 'strict'. When set to True, Hypothesis will raise
warnings instead of merely printing them. Turning it on by default is inadvisable because
it means that Hypothesis minor releases can break your code, but it may be useful for
making sure you catch all uses of deprecated APIs.
* max_examples in settings is now interpreted as meaning the maximum number
of unique (ish) examples satisfying assumptions. A new setting max_iterations
which defaults to a larger value has the old interpretation.
* Example generation should be significantly faster due to a new faster parameter
selection algorithm. This will mostly show up for simple data types - for complex
ones the parameter selection is almost certainly dominated.
* Simplification has some new heuristics that will tend to cut down on cases
where it could previously take a very long time.
* timeout would previously not have been respected in cases where there were a lot
of duplicate examples. You probably wouldn't have previously noticed this because
max_examples counted duplicates, so this was very hard to hit in a way that mattered.
* A number of internal simplifications to the SearchStrategy API.
* You can now access the current Hypothesis version as hypothesis.__version__.
* A top level function is provided for running the stateful tests without the
TestCase infrastructure.
---------------------------------------------------------------------
`1.4.0 `_ - 2015-05-04
---------------------------------------------------------------------
Codename: What a state.
The *big* feature of this release is the new and slightly experimental
stateful testing API. You can read more about that in :doc:`the
appropriate section `.
Two minor features that were driven out in the course of developing this:
* You can now set settings.max_shrinks to limit the number of times
Hypothesis will try to shrink arguments to your test. If this is set to
<= 0 then Hypothesis will not rerun your test and will just raise the
failure directly. Note that due to technical limitations if max_shrinks
is <= 0 then Hypothesis will print *every* example it calls your test
with rather than just the failing one. Note also that I don't consider
setting max_shrinks to zero a sensible way to run your tests and it
should really be considered a debug feature.
* There is a new debug level of verbosity which is even *more* verbose than
verbose. You probably don't want this.
Breakage of semi-public SearchStrategy API:
* It is now a required invariant of SearchStrategy that if u simplifies to
v then it is not the case that strictly_simpler(u, v). i.e. simplifying
should not *increase* the complexity even though it is not required to
decrease it. Enforcing this invariant led to finding some bugs where
simplifying of integers, floats and sets was suboptimal.
* Integers in basic data are now required to fit into 64 bits. As a result
python integer types are now serialized as strings, and some types have
stopped using quite so needlessly large random seeds.
Hypothesis Stateful testing was then turned upon Hypothesis itself, which led
to an amazing number of minor bugs being found in Hypothesis itself.
Bugs fixed (most but not all from the result of stateful testing) include:
* Serialization of streaming examples was flaky in a way that you would
probably never notice: If you generate a template, simplify it, serialize
it, deserialize it, serialize it again and then deserialize it you would
get the original stream instead of the simplified one.
* If you reduced max_examples below the number of examples already saved in
the database, you would have got a ValueError. Additionally, if you had
more than max_examples in the database all of them would have been
considered.
* @given will no longer count duplicate examples (which it never called
your function with) towards max_examples. This may result in your tests
running slower, but that's probably just because they're trying more
examples.
* General improvements to example search which should result in better
performance and higher quality examples. In particular parameters which
have a history of producing useless results will be more aggressively
culled. This is useful both because it decreases the chance of useless
examples and also because it's much faster to not check parameters which
we were unlikely to ever pick!
* integers_from and lists of types with only one value (e.g. [None]) would
previously have had a very high duplication rate so you were probably
only getting a handful of examples. They now have a much lower
duplication rate, as well as the improvements to search making this
less of a problem in the first place.
* You would sometimes see simplification taking significantly longer than
your defined timeout. This would happen because timeout was only being
checked after each *successful* simplification, so if Hypothesis was
spending a lot of time unsuccessfully simplifying things it wouldn't
stop in time. The timeout is now applied for unsuccessful simplifications
too.
* In Python 2.7, integers_from strategies would have failed during
simplification with an OverflowError if their starting point was at or
near to the maximum size of a 64-bit integer.
* flatmap and map would have failed if called with a function without a
__name__ attribute.
* If max_examples was less than min_satisfying_examples this would always
error. Now min_satisfying_examples is capped to max_examples. Note that
if you have assumptions to satisfy here this will still cause an error.
Some minor quality improvements:
* Lists of streams, flatmapped strategies and basic strategies should now
have slightly better simplification.
---------------------------------------------------------------------
`1.3.0 `_ - 2015-05-22
---------------------------------------------------------------------
New features:
* New verbosity level API for printing intermediate results and exceptions.
* New specifier for strings generated from a specified alphabet.
* Better error messages for tests that are failing because of a lack of enough
examples.
Bug fixes:
* Fix error where use of ForkingTestCase would sometimes result in too many
open files.
* Fix error where saving a failing example that used flatmap could error.
* Implement simplification for sampled_from, which apparently never supported
it previously. Oops.
General improvements:
* Better range of examples when using one_of or sampled_from.
* Fix some pathological performance issues when simplifying lists of complex
values.
* Fix some pathological performance issues when simplifying examples that
require unicode strings with high codepoints.
* Random will now simplify to more readable examples.
---------------------------------------------------------------------
`1.2.1 `_ - 2015-04-16
---------------------------------------------------------------------
A small patch release for a bug in the new executors feature. Tests which require
doing something to their result in order to fail would have instead reported as
flaky.
---------------------------------------------------------------------
`1.2.0 `_ - 2015-04-15
---------------------------------------------------------------------
Codename: Finders keepers.
A bunch of new features and improvements.
* Provide a mechanism for customizing how your tests are executed.
* Provide a test runner that forks before running each example. This allows
better support for testing native code which might trigger a segfault or a C
level assertion failure.
* Support for using Hypothesis to find examples directly rather than as just as
a test runner.
* New streaming type which lets you generate infinite lazily loaded streams of
data - perfect for if you need a number of examples but don't know how many.
* Better support for large integer ranges. You can now use integers_in_range
with ranges of basically any size. Previously large ranges would have eaten
up all your memory and taken forever.
* Integers produce a wider range of data than before - previously they would
only rarely produce integers which didn't fit into a machine word. Now it's
much more common. This percolates to other numeric types which build on
integers.
* Better validation of arguments to @given. Some situations that would
previously have caused silently wrong behaviour will now raise an error.
* Include +/- sys.float_info.max in the set of floating point edge cases that
Hypothesis specifically tries.
* Fix some bugs in floating point ranges which happen when given
+/- sys.float_info.max as one of the endpoints... (really any two floats that
are sufficiently far apart so that x, y are finite but y - x is infinite).
This would have resulted in generating infinite values instead of ones inside
the range.
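The streaming type described above behaves like an infinite, lazily evaluated, memoised sequence; a minimal stdlib sketch of that behaviour (not the actual implementation):

```python
import itertools

class Stream:
    """Infinite lazy stream backed by a generator; values are drawn
    on demand and cached so repeated indexing is repeatable."""
    def __init__(self, source):
        self._source = source
        self._cache = []

    def __getitem__(self, i):
        while len(self._cache) <= i:
            self._cache.append(next(self._source))
        return self._cache[i]

squares = Stream(x * x for x in itertools.count())
```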
---------------------------------------------------------------------
`1.1.1 `_ - 2015-04-07
---------------------------------------------------------------------
Codename: Nothing to see here
This is just a patch release put out because it fixed some internal bugs that would
block the Django integration release but did not actually affect anything anyone could
previously have been using. It also contained a minor quality fix for floats that
I'd happened to have finished in time.
* Fix some internal bugs with object lifecycle management that were impossible to
hit with the previously released versions but broke hypothesis-django.
* Bias floating point numbers somewhat less aggressively towards very small numbers
---------------------------------------------------------------------
`1.1.0 `_ - 2015-04-06
---------------------------------------------------------------------
Codename: No-one mention the M word.
* Unicode strings are more strongly biased towards ascii characters. Previously they
would generate all over the space. This is mostly so that people who try to
shape their unicode strings with assume() have less of a bad time.
* A number of fixes to data deserialization code that could theoretically have
caused mysterious bugs when using an old version of a Hypothesis example
database with a newer version. To the best of my knowledge a change that could
have triggered this bug has never actually been seen in the wild. Certainly
no-one ever reported a bug of this nature.
* Out of the box support for Decimal and Fraction.
* New dictionary specifier for dictionaries with variable keys.
* Significantly faster and higher quality simplification, especially for
collections of data.
* New filter() and flatmap() methods on Strategy for better ways of building
strategies out of other strategies.
* New BasicStrategy class which allows you to define your own strategies from
scratch without needing an existing matching strategy or being exposed to the
full horror of the non-public nature of the SearchStrategy interface.
---------------------------------------------------------------------
`1.0.0 `_ - 2015-03-27
---------------------------------------------------------------------
Codename: Blast-off!
There are no code changes in this release. This is precisely the 0.9.2 release
with some updated documentation.
------------------
0.9.2 - 2015-03-26
------------------
Codename: T-1 days.
* floats_in_range would not actually have produced floats in the given range
unless that range happened to be (0, 1). Fix this.
------------------
0.9.1 - 2015-03-25
------------------
Codename: T-2 days.
* Fix a bug where if you defined a strategy using map on a lambda then the results would not be saved in the database.
* Significant performance improvements when simplifying examples using lists, strings or bounded integer ranges.
------------------
0.9.0 - 2015-03-23
------------------
Codename: The final countdown
This release could also be called 1.0-RC1.
It contains a teeny tiny bugfix, but the real point of this release is to declare
feature freeze. There will be zero functionality changes between 0.9.0 and 1.0 unless
something goes really really wrong. No new features will be added, no breaking API changes
will occur, etc. This is the final shakedown before I declare Hypothesis stable and ready
to use and throw a party to celebrate.
Bug bounty for any bugs found between now and 1.0: I will buy you a drink (alcoholic,
caffeinated, or otherwise) and shake your hand should we ever find ourselves in the
same city at the same time.
The one tiny bugfix:
* Under pypy, databases would fail to close correctly when garbage collected, leading to a memory leak and a confusing error message if you were repeatedly creating databases and not closing them. It is very unlikely you were doing this and the chances of you ever having noticed this bug are very low.
------------------
0.7.2 - 2015-03-22
------------------
Codename: Hygienic macros or bust
* You can now name an argument to @given 'f' and it won't break (issue #38)
* strategy_test_suite is now named strategy_test_suite as the documentation claims and not in fact strategy_test_suitee
* Settings objects can now be used as a context manager to temporarily override the default values inside their context.
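The context-manager behaviour in the last bullet can be sketched as follows (an illustrative model of temporarily overriding defaults, not the real Settings class):

```python
class Settings:
    """Toy model: entering the context installs this object as the
    default; leaving restores whatever was the default before."""
    default = None

    def __init__(self, **overrides):
        self.overrides = overrides

    def __enter__(self):
        self._previous = Settings.default
        Settings.default = self
        return self

    def __exit__(self, *exc):
        Settings.default = self._previous
        return False

with Settings(max_examples=5):
    inside = Settings.default.overrides["max_examples"]
```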
------------------
0.7.1 - 2015-03-21
------------------
Codename: Point releases go faster
* Better string generation by parametrizing by a limited alphabet
* Faster string simplification - previously if simplifying a string with high range unicode characters it would try every unicode character smaller than that. This was pretty pointless. Now it stops after it's a short range (it can still reach smaller ones through recursive calls because of other simplifying operations).
* Faster list simplification by first trying a binary chop down the middle
* Simultaneous simplification of identical elements in a list. So if a bug only triggers when you have duplicates but you drew e.g. [-17, -17], this will now simplify to [0, 0].
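The "binary chop" list simplification in the bullets above can be sketched like this (illustrative only; the real shrinker is considerably more involved):

```python
def binary_chop_shrink(xs, fails):
    """Shrink a failing list by repeatedly trying to keep only one
    half, so long as that half still triggers the failure."""
    while len(xs) > 1:
        mid = len(xs) // 2
        if fails(xs[:mid]):
            xs = xs[:mid]
        elif fails(xs[mid:]):
            xs = xs[mid:]
        else:
            break
    return xs

shrunk = binary_chop_shrink(list(range(8)), lambda l: 5 in l)
```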
-------------------
0.7.0 - 2015-03-20
-------------------
Codename: Starting to look suspiciously real
This is probably the last minor release prior to 1.0. It consists of stability
improvements, a few usability things designed to make Hypothesis easier to try
out, and filing off some final rough edges from the API.
* Significant speed and memory usage improvements
* Add an example() method to strategy objects to give an example of the sort of data that the strategy generates.
* Remove .descriptor attribute of strategies
* Rename descriptor_test_suite to strategy_test_suite
* Rename the few remaining uses of descriptor to specifier (descriptor already has a defined meaning in Python)
------------------
0.6.0 - 2015-03-13
------------------
Codename: I'm sorry, were you using that API?
This is primarily a "simplify all the weird bits of the API" release. As a result there are a lot of breaking changes. If
you just use @given with core types then you're probably fine.
In particular:
* Stateful testing has been removed from the API
* The way the database is used has been rendered less useful (sorry). The feature for reassembling values saved from other
tests doesn't currently work. This will probably be brought back in post 1.0.
* SpecificationMapper is no longer a thing. Instead there is an ExtMethod called strategy which you extend to specify how
to convert other types to strategies.
* Settings are now extensible so you can add your own for configuring a strategy
* MappedSearchStrategy no longer needs an unpack method
* Basically all the SearchStrategy internals have changed massively. If you implemented SearchStrategy directly rather than
using MappedSearchStrategy talk to me about fixing it.
* Change to the way extra packages work. You now specify the package. This
must have a load() method. Additionally any modules in the package will be
loaded in under hypothesis.extra
Bug fixes:
* Fix for a bug where calling falsify on a lambda with a non-ascii character
in its body would error.
Hypothesis Extra:
hypothesis-fakefactory\: An extension for using faker data in hypothesis. Depends
on fake-factory.
------------------
0.5.0 - 2015-02-10
------------------
Codename: Read all about it.
Core hypothesis:
* Add support back in for pypy and python 3.2
* @given functions can now be invoked with some arguments explicitly provided. If all arguments that hypothesis would have provided are passed in then no falsification is run.
* Related to the above, this means that you can now use pytest fixtures and mark.parametrize with Hypothesis without either interfering with the other.
* Breaking change: @given no longer works for functions with varargs (varkwargs are fine). This might be added back in at a later date.
* Windows is now fully supported. A limited version (just the tests with none of the extras) of the test suite is run on windows with each commit so it is now a first class citizen of the Hypothesis world.
* Fix a bug for fuzzy equality of equal complex numbers with different reprs (this can happen when one coordinate is zero). This shouldn't affect users - that feature isn't used anywhere public facing.
* Fix generation of floats on windows and 32-bit builds of python. I was using some struct.pack logic that only worked on certain word sizes.
* When a test times out and hasn't produced enough examples this now raises a Timeout subclass of Unfalsifiable.
* Small search spaces are better supported. Previously something like a @given(bool, bool) would have failed because it couldn't find enough examples. Hypothesis is now aware of the fact that these are small search spaces and will not error in this case.
* Improvements to parameter search in the case of hard to satisfy assume. Hypothesis will now spend less time exploring parameters that are unlikely to provide anything useful.
* Increase chance of generating "nasty" floats
* Fix a bug that would have caused unicode warnings if you had a sampled_from that was mixing unicode and byte strings.
* Added a standard test suite that you can use to validate a custom strategy you've defined is working correctly.
Hypothesis extra:
First off, introducing Hypothesis extra packages!
These are packages that are separated out from core Hypothesis because they have one or more dependencies. Every
hypothesis-extra package is pinned to a specific point release of Hypothesis and will have some version requirements
on its dependency. They use entry_points so you will usually not need to explicitly import them, just have them installed
on the path.
This release introduces two of them:
hypothesis-datetime:
Does what it says on the tin: Generates datetimes for Hypothesis. Just install the package and datetime support will start
working.
Depends on pytz for timezone support
hypothesis-pytest:
A very rudimentary pytest plugin. All it does right now is hook the display of falsifying examples into pytest reporting.
Depends on pytest.
------------------
0.4.3 - 2015-02-05
------------------
Codename: TIL narrow Python builds are a thing
This just fixes the one bug.
* Apparently there is such a thing as a "narrow python build" and OS X ships with these by default
for python 2.7. These are builds where you only have two bytes worth of unicode. As a result,
generating unicode was completely broken on OS X. Fix this by only generating unicode codepoints
in the range supported by the system.
------------------
0.4.2 - 2015-02-04
------------------
Codename: O(dear)
This is purely a bugfix release:
* Provide sensible external hashing for all core types. This will significantly improve
performance of tracking seen examples which happens in literally every falsification
run. For Hypothesis fixing this cut 40% off the runtime of the test suite. The behaviour
is quadratic in the number of examples so if you're running the default configuration
this will be less extreme (Hypothesis's test suite runs at a higher number of examples
than default), but you should still see a significant improvement.
* Fix a bug in formatting of complex numbers where the string could get incorrectly truncated.
------------------
0.4.1 - 2015-02-03
------------------
Codename: Cruel and unusual edge cases
This release is mostly about better test case generation.
Enhancements:
* Has a cool release name
* text_type (str in python 3, unicode in python 2) example generation now
actually produces interesting unicode instead of boring ascii strings.
* floating point numbers are generated over a much wider range, with particular
attention paid to generating nasty numbers - nan, infinity, large and small
values, etc.
* examples can be generated using pieces of examples previously saved in the
database. This allows interesting behaviour that has previously been discovered
to be propagated to other examples.
* improved parameter exploration algorithm which should allow it to more reliably
hit interesting edge cases.
* Timeout can now be disabled entirely by setting it to any value <= 0.
Bug fixes:
* The descriptor on a OneOfStrategy could be wrong if you had descriptors which
were equal but should not be coalesced. e.g. a strategy for one_of((frozenset({int}), {int}))
would have reported its descriptor as {int}. This is unlikely to have caused you
any problems
* If you had strategies that could produce NaN (which float previously couldn't but
e.g. a Just(float('nan')) could) then this would have sent hypothesis into an infinite
loop that would have only been terminated when it hit the timeout.
* Given elements that can take a long time to minimize, minimization of floats or tuples
could be quadratic or worse in that value. You should now see much better performance
for simplification, albeit at some cost in quality.
Other:
* A lot of internals have been rewritten. This shouldn't affect you at all, but
it opens the way for certain of hypothesis's oddities to be a lot more extensible by
users. Whether this is a good thing may be up for debate...
------------------
0.4.0 - 2015-01-21
------------------
FLAGSHIP FEATURE: Hypothesis now persists examples for later use. It stores
data in a local SQLite database and will reuse it for all tests of the same
type.
LICENSING CHANGE: Hypothesis is now released under the Mozilla Public License
2.0. This applies to all versions from 0.4.0 onwards until further notice.
The previous license remains applicable to all code prior to 0.4.0.
Enhancements:
* Printing of failing examples. I was finding that the pytest runner was not
doing a good job of displaying these, and that Hypothesis itself could do
much better.
* Drop dependency on six for cross-version compatibility. It was easy
enough to write the shim for the small set of features that we care about
and this lets us avoid a moderately complex dependency.
* Some improvements to the statistical distribution of selecting from small
(<= 3 element) collections.
* Improvements to parameter selection for finding examples.
Bugs fixed:
* could_have_produced for lists, dicts and other collections would not have
examined the elements and thus when using a union of different types of
list this could result in Hypothesis getting confused and passing a value
to the wrong strategy. This could potentially result in exceptions being
thrown from within simplification.
* sampled_from would not work correctly on a single element list.
* Hypothesis could get *very* confused by values which are
equal despite having different types being used in descriptors. Hypothesis
now has its own more specific version of equality it uses for descriptors
and tracking. It is always more fine grained than Python equality: Things
considered != are not considered equal by hypothesis, but some things that
are considered == are distinguished. If your test suite uses both frozenset
and set tests this bug is probably affecting you.
------------------
0.3.2 - 2015-01-16
------------------
* Fix a bug where if you specified floats_in_range with integer arguments
Hypothesis would error in example simplification.
* Improve the statistical distribution of the floats you get for the
floats_in_range strategy. I'm not sure whether this will affect users in
practice but it took my tests for various conditions from flaky to rock
solid so it at the very least improves discovery of the artificial cases
I'm looking for.
* Improved repr() for strategies and RandomWithSeed instances.
* Add detection for flaky test cases where hypothesis managed to find an
example which breaks it but on the final invocation of the test it does
not raise an error. This will typically happen with recursion depth
errors but could conceivably happen in other circumstances too.
* Provide a "derandomized" mode. This allows you to run hypothesis with
zero real randomization, making your build nice and deterministic. The
tests run with a seed calculated from the function they're testing so you
should still get a good distribution of test cases.
* Add a mechanism for more conveniently defining tests which just sample
from some collection.
* Fix for a really subtle bug deep in the internals of the strategy table.
In some circumstances if you were to define instance strategies for both
a parent class and one or more of its subclasses you would under some
circumstances get the strategy for the wrong superclass of an instance.
It is very unlikely anyone has ever encountered this in the wild, but it
is conceivably possible given that a mix of namedtuple and tuple are used
fairly extensively inside hypothesis which do exhibit this pattern of
strategy.
------------------
0.3.1 - 2015-01-13
------------------
* Support for generation of frozenset and Random values
* Correct handling of the case where a called function mutates its argument.
This involved introducing a notion of a strategies knowing how to copy
their argument. The default method should be entirely acceptable and the
worst case is that it will continue to have the old behaviour if you
don't mark your strategy as mutable, so this shouldn't break anything.
* Fix for a bug where some strategies did not correctly implement
could_have_produced. It is very unlikely that any of these would have
been seen in the wild, and the consequences if they had been would have
been minor.
* Re-export the @given decorator from the main hypothesis namespace. It's
still available at the old location too.
* Minor performance optimisation for simplifying long lists.
------------------
0.3.0 - 2015-01-12
------------------
* Complete redesign of the data generation system. Extreme breaking change
for anyone who was previously writing their own SearchStrategy
implementations. These will not work any more and you'll need to modify
them.
* New settings system allowing more global and modular control of Verifier
behaviour.
* Decouple SearchStrategy from the StrategyTable. This leads to much more
composable code which is a lot easier to understand.
* A significant amount of internal API renaming and moving. This may also
break your code.
* Expanded available descriptors, allowing for generating integers or
floats in a specific range.
* Significantly more robust. A very large number of small bug fixes, none
of which anyone is likely to have ever noticed.
* Deprecation of support for pypy and for python 3 prior to 3.3.
Supported versions are 2.7.x, 3.3.x, 3.4.x. I expect all of these to
remain officially supported for a very long time. I would not be
surprised to add pypy support back in later but I'm not going to do so
until I know someone cares about it. In the meantime it will probably
still work.
------------------
0.2.2 - 2015-01-08
------------------
* Fix an embarrassing complete failure of the installer caused by my being
bad at version control
------------------
0.2.1 - 2015-01-07
------------------
* Fix a bug in the new stateful testing feature where you could make
__init__ a @requires method. Simplification would not always work if the
prune method was able to successfully shrink the test.
------------------
0.2.0 - 2015-01-07
------------------
* It's aliiive.
* Improve python 3 support using six.
* Distinguish between byte and unicode types.
* Fix issues where FloatStrategy could raise.
* Allow stateful testing to request constructor args.
* Fix for issue where test annotations would timeout based on when the
module was loaded instead of when the test started
------------------
0.1.4 - 2013-12-14
------------------
* Make verification runs time bounded with a configurable timeout
------------------
0.1.3 - 2013-05-03
------------------
* Bugfix: Stateful testing behaved incorrectly with subclassing.
* Complex number support
* support for recursive strategies
* different error for hypotheses with unsatisfiable assumptions
------------------
0.1.2 - 2013-03-24
------------------
* Bugfix: Stateful testing was not minimizing correctly and could
throw exceptions.
* Better support for recursive strategies.
* Support for named tuples.
* Much faster integer generation.
------------------
0.1.1 - 2013-03-24
------------------
* Python 3.x support via 2to3.
* Use new style classes (oops).
------------------
0.1.0 - 2013-03-23
------------------
* Introduce stateful testing.
* Massive rewrite of internals to add flags and strategies.
------------------
0.0.5 - 2013-03-13
------------------
* No changes except trying to fix packaging
------------------
0.0.4 - 2013-03-13
------------------
* No changes except that I checked in a failing test case for 0.0.3
so had to replace the release. Doh
------------------
0.0.3 - 2013-03-13
------------------
* Improved a few internals.
* Opened up creating generators from instances as a general API.
* Test integration.
------------------
0.0.2 - 2013-03-12
------------------
* Starting to tighten up on the internals.
* Change API to allow more flexibility in configuration.
* More testing.
------------------
0.0.1 - 2013-03-10
------------------
* Initial release.
* Basic working prototype. Demonstrates idea, probably shouldn't be used.
hypothesis-python-3.44.1/docs/community.rst
=========
Community
=========
The Hypothesis community is small for the moment but is full of excellent people
who can answer your questions and help you out. Please do join us.
The two major places for community discussion are:
* `The mailing list <https://groups.google.com/forum/#!forum/hypothesis-users>`_.
* An IRC channel, #hypothesis on freenode, which is more active than the mailing list.
Feel free to use these to ask for help, provide feedback, or discuss anything remotely
Hypothesis related at all.
---------------
Code of conduct
---------------
Hypothesis's community is an inclusive space, and everyone in it is expected to abide by a code of conduct.
At the high level the code of conduct goes like this:
1. Be kind
2. Be respectful
3. Be helpful
While it is impossible to enumerate everything that is unkind, disrespectful or unhelpful, here are some specific things that are definitely against the code of conduct:
1. -isms and -phobias (e.g. racism, sexism, transphobia and homophobia) are unkind, disrespectful *and* unhelpful. Just don't.
2. All software is broken. This is not a moral failing on the part of the authors. Don't give people a hard time for bad code.
3. It's OK not to know things. Everybody was a beginner once, nobody should be made to feel bad for it.
4. It's OK not to *want* to know something. If you think someone's question is fundamentally flawed, you should still ask permission before explaining what they should actually be asking.
5. Note that "I was just joking" is not a valid defence.
What happens when this goes wrong?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For minor infractions, I'll just call people on it and ask them to apologise and not do it again. You should
feel free to do this too if you're comfortable doing so.
Major infractions and repeat offenders will be banned from the community.
Also, people who have a track record of bad behaviour outside of the Hypothesis community may be banned even
if they obey all these rules if their presence is making people uncomfortable.
At the current volume level it's not hard for me to pay attention to the whole community, but if you think I've
missed something please feel free to alert me. You can either message me as DRMacIver on freenode or send me
an email at david@drmaciver.com.
hypothesis-python-3.44.1/docs/conf.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import datetime
sys.path.append(
os.path.join(os.path.dirname(__file__), '..', 'src')
)
autodoc_member_order = 'bysource'
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.extlinks',
'sphinx.ext.viewcode',
'sphinx.ext.intersphinx',
]
templates_path = ['_templates']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Hypothesis'
copyright = u'2013-%s, David R. MacIver' % datetime.datetime.utcnow().year
author = u'David R. MacIver'
_d = {}
with open(os.path.join(os.path.dirname(__file__), '..', 'src',
'hypothesis', 'version.py')) as f:
exec(f.read(), _d)
version = _d['__version__']
release = _d['__version__']
language = None
exclude_patterns = ['_build']
pygments_style = 'sphinx'
todo_include_todos = False
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
'numpy': ('https://docs.scipy.org/doc/numpy/', None),
'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),
'pytest': ('https://docs.pytest.org/en/stable/', None),
}
autodoc_mock_imports = ['numpy', 'pandas']
doctest_global_setup = '''
# Some standard imports
from hypothesis import *
from hypothesis.strategies import *
# Run deterministically, and don't save examples
import random
random.seed(0)
doctest_settings = settings(database=None, derandomize=True)
settings.register_profile('doctests', doctest_settings)
settings.load_profile('doctests')
# Never show deprecated behaviour in code examples
import warnings
warnings.filterwarnings('error', category=DeprecationWarning)
'''
# This config value must be a dictionary of external sites, mapping unique
# short alias names to a base URL and a prefix.
# See http://sphinx-doc.org/ext/extlinks.html
_repo = 'https://github.com/HypothesisWorks/hypothesis-python/'
extlinks = {
'commit': (_repo + 'commit/%s', 'commit '),
'gh-file': (_repo + 'blob/master/%s', ''),
'gh-link': (_repo + '%s', ''),
'issue': (_repo + 'issues/%s', 'issue #'),
    'pull': (_repo + 'pull/%s', 'pull request #'),
'pypi': ('https://pypi.python.org/pypi/%s', ''),
}
# -- Options for HTML output ----------------------------------------------
if os.environ.get('READTHEDOCS', None) != 'True':
# only import and set the theme if we're building docs locally
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_static_path = ['_static']
htmlhelp_basename = 'Hypothesisdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
}
latex_documents = [
(master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',
u'David R. MacIver', 'manual'),
]
man_pages = [
(master_doc, 'hypothesis', u'Hypothesis Documentation',
[author], 1)
]
texinfo_documents = [
(master_doc, 'Hypothesis', u'Hypothesis Documentation',
author, 'Hypothesis', 'Advanced property-based testing for Python.',
'Miscellaneous'),
]
hypothesis-python-3.44.1/docs/data.rst
=============================
What you can generate and how
=============================
*Most things should be easy to generate and everything should be possible.*
To support this principle Hypothesis provides strategies for most built-in
types with arguments to constrain or adjust the output, as well as higher-order
strategies that can be composed to generate more complex types.
This document is a guide to what strategies are available for generating data
and how to build them. Strategies have a variety of other important internal
features, such as how they simplify, but the data they can generate is the only
public part of their API.
Functions for building strategies are all available in the hypothesis.strategies
module. The salient functions from it are as follows:
.. automodule:: hypothesis.strategies
:members:
.. _shrinking:
~~~~~~~~~
Shrinking
~~~~~~~~~
When using strategies it is worth thinking about how the data *shrinks*.
Shrinking is the process by which Hypothesis tries to produce human readable
examples when it finds a failure - it takes a complex example and turns it
into a simpler one.
Each strategy defines an order in which it shrinks - you won't usually need to
care about this much, but it can be worth being aware of as it can affect what
the best way to write your own strategies is.
The exact shrinking behaviour is not a guaranteed part of the API, but it
doesn't change that often and when it does it's usually because we think the
new way produces nicer examples.
Possibly the most important one to be aware of is
:func:`~hypothesis.strategies.one_of`, which has a preference for values
produced by strategies earlier in its argument list. Most of the others should
largely "do the right thing" without you having to think about it.
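To make "takes a complex example and turns it into a simpler one" concrete, here is a deliberately simplified sketch of the general idea in plain Python: repeatedly try "simpler" candidate values, keeping any candidate that still reproduces the failure. This is an illustration only - the function names are invented and it is not Hypothesis's actual shrinking algorithm:

```python
def shrink_int(value, still_fails):
    """Greedily shrink a failing integer toward zero.

    A toy illustration of shrinking, not Hypothesis's real shrinker:
    repeatedly try simpler candidate values and keep any candidate
    that still reproduces the failure.
    """
    current = value
    improved = True
    while improved:
        improved = False
        for candidate in (0, current // 2, current - 1):
            if abs(candidate) < abs(current) and still_fails(candidate):
                current = candidate
                improved = True
                break
    return current

# If a test fails for any integer >= 100, shrinking walks the original
# huge failing example down to the minimal one.
print(shrink_int(10 ** 6, lambda n: n >= 100))  # 100
```

Real shrinkers are far more sophisticated, but the shape is the same: each accepted step strictly simplifies the example, so the process terminates at something locally minimal.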
~~~~~~~~~~~~~~~~~~~
Adapting strategies
~~~~~~~~~~~~~~~~~~~
Often it is the case that a strategy doesn't produce exactly what you want it
to and you need to adapt it. Sometimes you can do this in the test, but this
hurts reuse because you then have to repeat the adaptation in every test.
Hypothesis gives you ways to build strategies from other strategies given
functions for transforming the data.
-------
Mapping
-------
``map`` is probably the easiest and most useful of these to use. If you have a
strategy ``s`` and a function ``f``, then an example ``s.map(f).example()`` is
``f(s.example())``, i.e. we draw an example from ``s`` and then apply ``f`` to it.
e.g.:
.. doctest::
>>> lists(integers()).map(sorted).example()
[-158104205405429173199472404790070005365, -131418136966037518992825706738877085689, -49279168042092131242764306881569217089, 2564476464308589627769617001898573635]
Note that many things that you might use mapping for can also be done with
:func:`~hypothesis.strategies.builds`.
.. _filtering:
---------
Filtering
---------
``filter`` lets you reject some examples. ``s.filter(f).example()`` is some
example of ``s`` such that ``f(example)`` is truthy.
.. doctest::
>>> integers().filter(lambda x: x > 11).example()
87034457550488036879331335314643907276
>>> integers().filter(lambda x: x > 11).example()
145321388071838806577381808280858991039
It's important to note that ``filter`` isn't magic and if your condition is too
hard to satisfy then this can fail:
.. doctest::
>>> integers().filter(lambda x: False).example()
Traceback (most recent call last):
...
hypothesis.errors.NoExamples: Could not find any valid examples in 20 tries
In general you should try to use ``filter`` only to avoid corner cases that you
don't want rather than attempting to cut out a large chunk of the search space.
A technique that often works well here is to use map to first transform the data
and then use ``filter`` to remove things that didn't work out. So for example if
you wanted pairs of integers (x,y) such that x < y you could do the following:
.. doctest::
>>> tuples(integers(), integers()).map(sorted).filter(lambda x: x[0] < x[1]).example()
[-145066798798423346485767563193971626126, -19139012562996970506504843426153630262]
.. _flatmap:
----------------------------
Chaining strategies together
----------------------------
Finally there is ``flatmap``. ``flatmap`` draws an example, then turns that
example into a strategy, then draws an example from *that* strategy.
It may not be obvious why you want this at first, but it turns out to be
quite useful because it lets you generate different types of data with
relationships to each other.
For example suppose we wanted to generate a list of lists of the same
length:
.. code-block:: pycon
>>> rectangle_lists = integers(min_value=0, max_value=10).flatmap(
... lambda n: lists(lists(integers(), min_size=n, max_size=n)))
>>> find(rectangle_lists, lambda x: True)
[]
>>> find(rectangle_lists, lambda x: len(x) >= 10)
[[], [], [], [], [], [], [], [], [], []]
>>> find(rectangle_lists, lambda t: len(t) >= 3 and len(t[0]) >= 3)
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]
>>> find(rectangle_lists, lambda t: sum(len(s) for s in t) >= 10)
[[0], [0], [0], [0], [0], [0], [0], [0], [0], [0]]
In this example we first choose a length for our inner lists, then we build a
strategy which generates lists containing lists of precisely that length. The
finds show what simple examples for this look like.
Most of the time you probably don't want ``flatmap``, but unlike ``filter`` and
``map`` which are just conveniences for things you could do in your tests,
``flatmap`` allows genuinely new data generation that you wouldn't otherwise be
able to easily do.
(If you know Haskell: Yes, this is more or less a monadic bind. If you don't
know Haskell, ignore everything in these parentheses. You do not need to
understand anything about monads to use this, or anything else in Hypothesis).
--------------
Recursive data
--------------
Sometimes the data you want to generate has a recursive definition. e.g. if you
wanted to generate JSON data, valid JSON is:
1. Any float, any boolean, any unicode string.
2. Any list of valid JSON data
3. Any dictionary mapping unicode strings to valid JSON data.
The problem is that you cannot call a strategy recursively and expect it to not just
blow up and eat all your memory. The other problem here is that not all unicode strings
display consistently on different machines, so we'll restrict them in our doctest.
The way Hypothesis handles this is with the :py:func:`recursive` function
which you pass in a base case and a function that, given a strategy for your data type,
returns a new strategy for it. So for example:
.. doctest::
>>> from string import printable; from pprint import pprint
>>> json = recursive(none() | booleans() | floats() | text(printable),
... lambda children: lists(children) | dictionaries(text(printable), children))
>>> pprint(json.example())
{'': 'wkP!4',
'\nLdy': None,
'"uHuds:8a{h\\:694K~{mY>a1yA:#CmDYb': {},
'#1J1': [')gnP',
inf,
['6', 11881275561.716116, "v'A?qyp_sB\n$62g", ''],
-1e-05,
'aF\rl',
[-2.599459969184803e+250, True, True, None],
[True,
'9qP\x0bnUJH5',
3.0741121405774857e-131,
None,
'',
-inf,
'L&',
1.5,
False,
None]],
'cx.': None}
>>> pprint(json.example())
[5.321430774293539e+16, [], 1.1045114769709281e-125]
>>> pprint(json.example())
{'a': []}
That is, we start with our leaf data and then we augment it by allowing lists and dictionaries of anything we can generate as JSON data.
The size control of this works by limiting the maximum number of values that can be drawn from the base strategy. So for example if
we wanted to only generate really small JSON we could do this as:
.. doctest::
>>> small_lists = recursive(booleans(), lists, max_leaves=5)
>>> small_lists.example()
True
>>> small_lists.example()
[False, False, True, True, True]
>>> small_lists.example()
True
.. _composite-strategies:
~~~~~~~~~~~~~~~~~~~~
Composite strategies
~~~~~~~~~~~~~~~~~~~~
The :func:`@composite <hypothesis.strategies.composite>` decorator lets you combine other strategies in more or less
arbitrary ways. It's probably the main thing you'll want to use for
complicated custom strategies.
The composite decorator works by giving you a function as the first argument
that you can use to draw examples from other strategies. For example, the
following gives you a list and an index into it:
.. doctest::
>>> @composite
... def list_and_index(draw, elements=integers()):
... xs = draw(lists(elements, min_size=1))
... i = draw(integers(min_value=0, max_value=len(xs) - 1))
... return (xs, i)
``draw(s)`` is a function that should be thought of as returning ``s.example()``,
except that the result is reproducible and will minimize correctly. The
decorated function has the initial argument removed from the list, but will
accept all the others in the expected order. Defaults are preserved.
.. doctest::
>>> list_and_index()
list_and_index()
>>> list_and_index().example()
([-57328788235238539257894870261848707608], 0)
>>> list_and_index(booleans())
list_and_index(elements=booleans())
>>> list_and_index(booleans()).example()
([True], 0)
Note that the repr will work exactly like it does for all the built-in
strategies: it will be a function that you can call to get the strategy in
question, with values provided only if they do not match the defaults.
You can use :func:`assume <hypothesis.assume>` inside composite functions:
.. code-block:: python
@composite
def distinct_strings_with_common_characters(draw):
    x = draw(text(min_size=1))
y = draw(text(alphabet=x))
assume(x != y)
return (x, y)
This works as :func:`assume <hypothesis.assume>` normally would, filtering out any examples for which the
passed in argument is falsey.
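Conceptually, this filtering works by rejection: an example that fails the condition is abandoned and a fresh one is drawn. A rough stdlib-only sketch of that idea (the names here are invented for illustration and are not Hypothesis's internals):

```python
import random


class UnsatisfiedAssumption(Exception):
    """Raised to abandon the current example and draw a fresh one."""


def assume(condition):
    # In this sketch, a failed assumption simply aborts the example.
    if not condition:
        raise UnsatisfiedAssumption()


def find_satisfying(draw_example, max_tries=1000):
    """Keep drawing examples until one survives every assume() call."""
    for _ in range(max_tries):
        try:
            return draw_example()
        except UnsatisfiedAssumption:
            continue
    raise RuntimeError('could not satisfy the assumption')


random.seed(0)


def distinct_pair():
    x, y = random.randint(0, 3), random.randint(0, 3)
    assume(x != y)  # reject pairs of equal values
    return (x, y)


x, y = find_satisfying(distinct_pair)
assert x != y
```

This is also why an assumption that is too hard to satisfy is a problem: rejection wastes draws, which is the same reason ``filter`` should only trim away corner cases.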
.. _interactive-draw:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drawing interactively in tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is also the :func:`~hypothesis.strategies.data` strategy, which gives you a means of using
strategies interactively. Rather than having to specify everything up front in
:func:`@given <hypothesis.given>` you can draw from strategies in the body of your test:
.. code-block:: python
@given(data())
def test_draw_sequentially(data):
x = data.draw(integers())
y = data.draw(integers(min_value=x))
assert x < y
If the test fails, each draw will be printed with the falsifying example. e.g.
the above is wrong (it has a boundary condition error), so will print:
.. code-block:: pycon
Falsifying example: test_draw_sequentially(data=data(...))
Draw 1: 0
Draw 2: 0
As you can see, data drawn this way is simplified as usual.
Test functions using the :func:`~hypothesis.strategies.data` strategy do not support explicit
:func:`@example(...) <hypothesis.example>`\ s. In this case, the best option is usually to construct
your data with :func:`@composite <hypothesis.strategies.composite>` or the explicit example, and unpack this within
the body of the test.
Optionally, you can provide a label to identify values generated by each call
to ``data.draw()``. These labels can be used to identify values in the output
of a falsifying example.
For instance:
.. code-block:: python
@given(data())
def test_draw_sequentially(data):
x = data.draw(integers(), label='First number')
y = data.draw(integers(min_value=x), label='Second number')
assert x < y
will produce the output:
.. code-block:: pycon
Falsifying example: test_draw_sequentially(data=data(...))
Draw 1 (First number): 0
Draw 2 (Second number): 0
hypothesis-python-3.44.1/docs/database.rst
===============================
The Hypothesis Example Database
===============================
When Hypothesis finds a bug it stores enough information in its database to reproduce it. This
enables you to have a classic testing workflow of find a bug, fix a bug, and be confident that
this is actually doing the right thing because Hypothesis will start by retrying the examples that
broke things last time.
-----------
Limitations
-----------
The database is best thought of as a cache that you never need to invalidate: Information may be
lost when you upgrade a Hypothesis version or change your test, so you shouldn't rely on it for
correctness - if there's an example you want to ensure occurs each time then :ref:`there's a feature for
including them in your source code <providing-explicit-examples>` - but it helps the development
workflow considerably by making sure that the examples you've just found are reproduced.
--------------
File locations
--------------
The default storage format is as a fairly opaque directory structure. Each test
corresponds to a directory, and each example to a file within that directory.
The standard location for it is .hypothesis/examples in your current working
directory. You can override this, either by setting the database\_file property on
a settings object (you probably want to specify it on settings.default) or by setting the
HYPOTHESIS\_DATABASE\_FILE environment variable.
There is also a legacy sqlite3 based format. This is mostly still supported for
compatibility reasons, and support will be dropped in some future version of
Hypothesis. If you use a database file name ending in .db, .sqlite or .sqlite3
that format will be used instead.
--------------------------------------------
Upgrading Hypothesis and changing your tests
--------------------------------------------
The design of the Hypothesis database is such that you can put arbitrary data in the database
and not get wrong behaviour. When you upgrade Hypothesis, old data *might* be invalidated, but
this should happen transparently. It should never be the case that e.g. changing the strategy
that generates an argument sometimes gives you data from the old strategy.
-----------------------------
Sharing your example database
-----------------------------

.. note::
    If specific examples are important for correctness you should use the
    :func:`@example ` decorator, as the example database may discard entries due to
    changes in your code or dependencies. For most users, we therefore
    recommend using the example database locally and possibly persisting it
    between CI builds, but not tracking it under version control.

The examples database can be shared simply by checking the directory into
version control, for example with the following ``.gitignore``::

    # Ignore files cached by Hypothesis...
    .hypothesis/

    # except for the examples directory
    !.hypothesis/examples/

Like everything under ``.hypothesis/``, the examples directory will be
transparently created on demand. Unlike the other subdirectories,
``examples/`` is designed to handle merges, deletes, etc if you just add the
directory into git, mercurial, or any similar version control system.

=============================
Details and advanced features
=============================
This is an account of slightly less common Hypothesis features that you don't need
to get started but will nevertheless make your life easier.
----------------------
Additional test output
----------------------
Normally the output of a failing test will look something like:

.. code::

    Falsifying example: test_a_thing(x=1, y="foo")

With the ``repr`` of each keyword argument being printed.
Sometimes this isn't enough, either because you have values with a ``repr`` that
isn't very descriptive or because you need to see the output of some
intermediate steps of your test. That's where the ``note`` function comes in:

.. doctest::

    >>> from hypothesis import given, note, strategies as st
    >>> @given(st.lists(st.integers()), st.randoms())
    ... def test_shuffle_is_noop(ls, r):
    ...     ls2 = list(ls)
    ...     r.shuffle(ls2)
    ...     note("Shuffle: %r" % (ls2))
    ...     assert ls == ls2
    ...
    >>> try:
    ...     test_shuffle_is_noop()
    ... except AssertionError:
    ...     print('ls != ls2')
    Falsifying example: test_shuffle_is_noop(ls=[0, 0, 1], r=RandomWithSeed(0))
    Shuffle: [0, 1, 0]
    ls != ls2

The note is printed in the final run of the test in order to include any
additional information you might need in your test.
.. _statistics:
---------------
Test Statistics
---------------
If you are using py.test you can see a number of statistics about the executed tests
by passing the command line argument ``--hypothesis-show-statistics``. This will include
some general statistics about the test:
For example if you ran the following with ``--hypothesis-show-statistics``:

.. code-block:: python

    from hypothesis import given, strategies as st

    @given(st.integers())
    def test_integers(i):
        pass

You would see:

.. code-block:: none

    test_integers:

      - 100 passing examples, 0 failing examples, 0 invalid examples
      - Typical runtimes: ~ 1ms
      - Fraction of time spent in data generation: ~ 12%
      - Stopped because settings.max_examples=100

The final "Stopped because" line is particularly important to note: It tells you the
setting value that determined when the test should stop trying new examples. This
can be useful for understanding the behaviour of your tests. Ideally you'd always want
this to be ``max_examples``.
In some cases (such as filtered and recursive strategies) you will see events mentioned
which describe some aspect of the data generation:

.. code-block:: python

    from hypothesis import given, strategies as st

    @given(st.integers().filter(lambda x: x % 2 == 0))
    def test_even_integers(i):
        pass

You would see something like:

.. code-block:: none

    test_even_integers:

      - 100 passing examples, 0 failing examples, 36 invalid examples
      - Typical runtimes: 0-1 ms
      - Fraction of time spent in data generation: ~ 16%
      - Stopped because settings.max_examples=100
      - Events:
        * 80.88%, Retried draw from integers().filter(lambda x: <unknown>) to satisfy filter
        * 26.47%, Aborted test because unable to satisfy integers().filter(lambda x: <unknown>)

You can also mark custom events in a test using the ``event`` function:
.. autofunction:: hypothesis.event

.. code:: python

    from hypothesis import given, event, strategies as st

    @given(st.integers().filter(lambda x: x % 2 == 0))
    def test_even_integers(i):
        event("i mod 3 = %d" % (i % 3,))

You will then see output like:

.. code-block:: none

    test_even_integers:

      - 100 passing examples, 0 failing examples, 38 invalid examples
      - Typical runtimes: 0-1 ms
      - Fraction of time spent in data generation: ~ 16%
      - Stopped because settings.max_examples=100
      - Events:
        * 80.43%, Retried draw from integers().filter(lambda x: <unknown>) to satisfy filter
        * 31.88%, i mod 3 = 0
        * 27.54%, Aborted test because unable to satisfy integers().filter(lambda x: <unknown>)
        * 21.74%, i mod 3 = 1
        * 18.84%, i mod 3 = 2

Arguments to ``event`` can be any hashable type, but two events will be considered the same
if they are the same when converted to a string with ``str``.
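Since events are keyed by their string form, two distinct values with the same ``str`` are counted as one event. The following is a minimal stdlib model of that bucketing; the ``record`` helper is hypothetical and not part of Hypothesis:

```python
from collections import Counter

def record(counter, event):
    # bucket events by str(event), mirroring the rule described above
    counter[str(event)] += 1

stats = Counter()
record(stats, 0)
record(stats, '0')  # the string '0' lands in the same bucket as the int 0
record(stats, 1)
assert stats['0'] == 2 and stats['1'] == 1
```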
------------------
Making assumptions
------------------
Sometimes Hypothesis doesn't give you exactly the right sort of data you want - it's
mostly of the right shape, but some examples won't work and you don't want to care about
them. You *can* just ignore these by aborting the test early, but this runs the risk of
accidentally testing a lot less than you think you are. Also it would be nice to spend
less time on bad examples - if you're running 200 examples per test (the default) and
it turns out 150 of those examples don't match your needs, that's a lot of wasted time.
.. autofunction:: hypothesis.assume
For example suppose you had the following test:

.. code:: python

    @given(floats())
    def test_negation_is_self_inverse(x):
        assert x == -(-x)

Running this gives us:

.. code::

    Falsifying example: test_negation_is_self_inverse(x=float('nan'))
    AssertionError

This is annoying. We know about NaN and don't really care about it, but as soon as Hypothesis
finds a NaN example it will get distracted by that and tell us about it. Also the test will
fail and we want it to pass.
So let's block off this particular example:

.. code:: python

    from math import isnan

    @given(floats())
    def test_negation_is_self_inverse_for_non_nan(x):
        assume(not isnan(x))
        assert x == -(-x)

And this passes without a problem.
In order to avoid the easy trap where you assume a lot more than you intended, Hypothesis
will fail a test when it can't find enough examples passing the assumption.
If we'd written:

.. code:: python

    @given(floats())
    def test_negation_is_self_inverse_for_non_nan(x):
        assume(False)
        assert x == -(-x)

Then on running we'd have got the exception:

.. code::

    Unsatisfiable: Unable to satisfy assumptions of hypothesis test_negation_is_self_inverse_for_non_nan. Only 0 examples considered satisfied assumptions

~~~~~~~~~~~~~~~~~~~
How good is assume?
~~~~~~~~~~~~~~~~~~~
Hypothesis has an adaptive exploration strategy to try to avoid things which falsify
assumptions, which should generally result in it still being able to find examples in
hard to find situations.
Suppose we had the following:

.. code:: python

    @given(lists(integers()))
    def test_sum_is_positive(xs):
        assert sum(xs) > 0

Unsurprisingly this fails and gives the falsifying example ``[]``.
Adding ``assume(xs)`` to this removes the trivial empty example and gives us ``[0]``.
Adding ``assume(all(x > 0 for x in xs))`` and it passes: the sum of a list of
positive integers is positive.
The reason that this should be surprising is not that it doesn't find a
counter-example, but that it finds enough examples at all.
In order to make sure something interesting is happening, suppose we wanted to
try this for long lists. e.g. suppose we added an ``assume(len(xs) > 10)`` to it.
This should basically never find an example: a naive strategy would find fewer
than one in a thousand examples, because if each element of the list is
negative with probability one-half, you'd have to have ten of these go the right
way by chance. In the default configuration Hypothesis gives up long before
it's tried 1000 examples (by default it tries 200).
Here's what happens if we try to run this:

.. code:: python

    @given(lists(integers()))
    def test_sum_is_positive(xs):
        assume(len(xs) > 10)
        assume(all(x > 0 for x in xs))
        print(xs)
        assert sum(xs) > 0

    In: test_sum_is_positive()
    [17, 12, 7, 13, 11, 3, 6, 9, 8, 11, 47, 27, 1, 31, 1]
    [6, 2, 29, 30, 25, 34, 19, 15, 50, 16, 10, 3, 16]
    [25, 17, 9, 19, 15, 2, 2, 4, 22, 10, 10, 27, 3, 1, 14, 17, 13, 8, 16, 9, 2...
    [17, 65, 78, 1, 8, 29, 2, 79, 28, 18, 39]
    [13, 26, 8, 3, 4, 76, 6, 14, 20, 27, 21, 32, 14, 42, 9, 24, 33, 9, 5, 15, ...
    [2, 1, 2, 2, 3, 10, 12, 11, 21, 11, 1, 16]

As you can see, Hypothesis doesn't find *many* examples here, but it finds some - enough to
keep it happy.
In general if you *can* shape your strategies better to your tests you should - for example
:py:func:`integers(1, 1000) ` is a lot better than
``assume(1 <= x <= 1000)``, but ``assume`` will take you a long way if you can't.
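The cost difference is easy to model with plain ``random``: ``assume`` behaves like rejection sampling, discarding draws until one satisfies the condition, while a shaped strategy needs exactly one draw. This sketch is illustrative only and uses none of Hypothesis's actual machinery:

```python
import random

def rejection_sample(rng, condition, max_tries=1000):
    """Toy model of assume(): redraw until the condition holds."""
    for tries in range(1, max_tries + 1):
        x = rng.randint(-1000, 1000)
        if condition(x):
            return x, tries
    raise ValueError('gave up, like an Unsatisfiable test')

rng = random.Random(0)
x, tries = rejection_sample(rng, lambda x: 1 <= x <= 1000)
assert 1 <= x <= 1000
# the shaped equivalent always succeeds on the first draw
y = rng.randint(1, 1000)
assert 1 <= y <= 1000
```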
---------------------
Defining strategies
---------------------
The type of object that is used to explore the examples given to your test
function is called a :class:`~hypothesis.SearchStrategy`.
These are created using the functions
exposed in the :mod:`hypothesis.strategies` module.
Many of these strategies expose a variety of arguments you can use to customize
generation. For example for integers you can specify ``min_value`` and ``max_value``
bounds for the integers you want.
If you want to see exactly what a strategy produces you can ask for an example:

.. doctest::

    >>> integers(min_value=0, max_value=10).example()
    9

Many strategies are built out of other strategies. For example, if you want
to define a tuple you need to say what goes in each element:

.. doctest::

    >>> from hypothesis.strategies import tuples
    >>> tuples(integers(), integers()).example()
    (-85296636193678268231691518597782489127, 68871684356256783618296489618877951982)

Further details are :doc:`available in a separate document `.
------------------------------------
The gory details of given parameters
------------------------------------
.. autofunction:: hypothesis.given
The :func:`@given ` decorator may be used
to specify which arguments of a function should
be parametrized over. You can use either positional or keyword arguments or a mixture
of the two.
For example all of the following are valid uses:

.. code:: python

    @given(integers(), integers())
    def a(x, y):
        pass

    @given(integers())
    def b(x, y):
        pass

    @given(y=integers())
    def c(x, y):
        pass

    @given(x=integers())
    def d(x, y):
        pass

    @given(x=integers(), y=integers())
    def e(x, **kwargs):
        pass

    @given(x=integers(), y=integers())
    def f(x, *args, **kwargs):
        pass

    class SomeTest(TestCase):
        @given(integers())
        def test_a_thing(self, x):
            pass

The following are not:

.. code:: python

    @given(integers(), integers(), integers())
    def g(x, y):
        pass

    @given(integers())
    def h(x, *args):
        pass

    @given(integers(), x=integers())
    def i(x, y):
        pass

    @given()
    def j(x, y):
        pass

The rules for determining what are valid uses of ``given`` are as follows:
1. You may pass any keyword argument to ``given``.
2. Positional arguments to ``given`` are equivalent to the rightmost named
arguments for the test function.
3. Positional arguments may not be used if the underlying test function has
varargs, arbitrary keywords, or keyword-only arguments.
4. Functions tested with ``given`` may not have any defaults.
The reason for the "rightmost named arguments" behaviour is so that
using :func:`@given ` with instance methods works: ``self``
will be passed to the function as normal and not be parametrized over.
The function returned by given has all the same arguments as the original
test, minus those that are filled in by ``given``.
-------------------------
Custom function execution
-------------------------
Hypothesis provides you with a hook that lets you control how it runs
examples.
This lets you do things like set up and tear down around each example, run
examples in a subprocess, transform coroutine tests into normal tests, etc.
The way this works is by introducing the concept of an executor. An executor
is essentially a function that takes a block of code and runs it. The default
executor is:

.. code:: python

    def default_executor(function):
        return function()

You define executors by defining a method execute_example on a class. Any
test methods on that class with :func:`@given ` used on them will use
``self.execute_example`` as an executor with which to run tests. For example,
the following executor runs all its code twice:

.. code:: python

    from unittest import TestCase

    class TestTryReallyHard(TestCase):
        @given(integers())
        def test_something(self, i):
            perform_some_unreliable_operation(i)

        def execute_example(self, f):
            f()
            return f()

Note: The functions you use in map, etc. will run *inside* the executor. i.e.
they will not be called until you invoke the function passed to ``execute_example``.
An executor must be able to handle being passed a function which returns None,
otherwise it won't be able to run normal test cases. So for example the following
executor is invalid:

.. code:: python

    from unittest import TestCase

    class TestRunTwice(TestCase):
        def execute_example(self, f):
            return f()()

and should be rewritten as:

.. code:: python

    from unittest import TestCase
    import inspect

    class TestRunTwice(TestCase):
        def execute_example(self, f):
            result = f()
            if inspect.isfunction(result):
                result = result()
            return result

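Pulled out as a plain function, the pattern above can be checked directly: it handles both a normal test body that returns ``None`` and a deferred body that must be called again. The names here are illustrative, not part of any API:

```python
import inspect

def execute_example(f):
    # run the example; if it handed back another function, run that too
    result = f()
    if inspect.isfunction(result):
        result = result()
    return result

def normal_case():
    return None

def deferred_case():
    def body():
        return 42
    return body

assert execute_example(normal_case) is None  # ordinary test case is fine
assert execute_example(deferred_case) == 42  # deferred body gets invoked
```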
-------------------------------
Using Hypothesis to find values
-------------------------------
You can use Hypothesis's data exploration features to find values satisfying
some predicate. This is generally useful for exploring custom strategies
defined with :func:`@composite `, or
experimenting with conditions for filtering data.
.. autofunction:: hypothesis.find

.. doctest::

    >>> from hypothesis import find
    >>> from hypothesis.strategies import sets, lists, integers
    >>> find(lists(integers()), lambda x: sum(x) >= 10)
    [10]
    >>> find(lists(integers()), lambda x: sum(x) >= 10 and len(x) >= 3)
    [0, 0, 10]
    >>> find(sets(integers()), lambda x: sum(x) >= 10 and len(x) >= 3)
    {0, 1, 9}

The first argument to :func:`~hypothesis.find` describes data in the usual way for an argument to
:func:`~hypothesis.given`, and supports :doc:`all the same data types `. The second is a
predicate it must satisfy.
Of course not all conditions are satisfiable. If you ask Hypothesis for an
example to a condition that is always false it will raise an error:

.. doctest::

    >>> find(integers(), lambda x: False)
    Traceback (most recent call last):
        ...
    hypothesis.errors.NoSuchExample: No examples of condition lambda x: <unknown>

(The ``lambda x: <unknown>`` is because Hypothesis can't retrieve the source code
of lambdas from the interactive python console. It gives a better error message
most of the time which contains the actual condition)
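Conceptually :func:`~hypothesis.find` is random search plus shrinking towards a minimal example. A toy stand-in using only the stdlib (no shrinking, and nothing from Hypothesis itself) looks like:

```python
import random

def naive_find(draw, condition, max_tries=1000):
    # draw values until one satisfies the predicate, like find() but
    # without any shrinking towards a minimal example
    rng = random.Random(0)
    for _ in range(max_tries):
        value = draw(rng)
        if condition(value):
            return value
    raise ValueError('NoSuchExample: condition never satisfied')

found = naive_find(lambda rng: rng.randint(-100, 100), lambda x: x >= 10)
assert found >= 10
```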
.. _type-inference:
-------------------
Inferred Strategies
-------------------
In some cases, Hypothesis can work out what to do when you omit arguments.
This is based on introspection, *not* magic, and therefore has well-defined
limits.
:func:`~hypothesis.strategies.builds` will check the signature of the
``target`` (using :func:`~python:inspect.getfullargspec`).
If there are required arguments with type annotations and
no strategy was passed to :func:`~hypothesis.strategies.builds`,
:func:`~hypothesis.strategies.from_type` is used to fill them in.
You can also pass the special value :const:`hypothesis.infer` as a keyword
argument, to force this inference for arguments with a default value.

.. doctest::

    >>> def func(a: int, b: str):
    ...     return [a, b]
    >>> builds(func).example()
    [72627971792323936471739212691379790782, '']

:func:`@given ` does not perform any implicit inference
for required arguments, as this would break compatibility with pytest fixtures.
:const:`~hypothesis.infer` can be used as a keyword argument to explicitly
fill in an argument from its type annotation.

.. code:: python

    @given(a=infer)
    def test(a: int): pass
    # is equivalent to
    @given(a=integers())
    def test(a): pass

~~~~~~~~~~~
Limitations
~~~~~~~~~~~
:pep:`3107` type annotations are not supported on Python 2, and Hypothesis
does not inspect :pep:`484` type comments at runtime. While
:func:`~hypothesis.strategies.from_type` will work as usual, inference in
:func:`~hypothesis.strategies.builds` and :func:`@given `
will only work if you manually create the ``__annotations__`` attribute
(e.g. by using ``@annotations(...)`` and ``@returns(...)`` decorators).
The :mod:`python:typing` module is fully supported on Python 2 if you have
the backport installed.
The :mod:`python:typing` module is provisional and has a number of internal
changes between Python 3.5.0 and 3.6.1, including at minor versions. These
are all supported on a best-effort basis, but you may encounter problems with
an old version of the module. Please report them to us, and consider
updating to a newer version of Python as a workaround.

==============================
Ongoing Hypothesis Development
==============================
Hypothesis development is managed by me, `David R. MacIver `_.
I am the primary author of Hypothesis.
*However*, I no longer do unpaid feature development on Hypothesis. My roles as leader of the project are:
1. Helping other people do feature development on Hypothesis
2. Fixing bugs and other code health issues
3. Improving documentation
4. General release management work
5. Planning the general roadmap of the project
6. Doing sponsored development on tasks that are too large or in depth for other people to take on
So all new features must either be sponsored or implemented by someone else.
That being said, the maintenance team takes an active role in shepherding pull requests and
helping people write a new feature (see :gh-file:`CONTRIBUTING.rst` for
details and :pull:`154` for an example of how the process goes). This isn't
"patches welcome", it's "we will help you write a patch".
.. _release-policy:
Release Policy
==============
Hypothesis releases follow `semantic versioning `_.
We maintain backwards-compatibility wherever possible, and use deprecation
warnings to mark features that have been superseded by a newer alternative.
If you want to detect this, you can
:mod:`upgrade warnings to errors in the usual ways `.
We use continuous deployment to ensure that you can always use our newest and
shiniest features - every change to the source tree is automatically built and
published on PyPI as soon as it's merged onto master, after code review and
passing our extensive test suite.
Project Roadmap
===============
Hypothesis does not have a long-term release plan. However some visibility
into our plans for future :doc:`compatibility ` may be useful:
- We value compatibility, and maintain it as far as practical. This generally
excludes things which are end-of-life upstream, or have an unstable API.
- We would like to drop Python 2 support when it reaches end of life in
2020. Ongoing support is likely to depend on commercial funding.
- We intend to support PyPy3 as soon as it supports a recent enough version of
Python 3. See :issue:`602`.

.. _hypothesis-django:
===========================
Hypothesis for Django users
===========================
Hypothesis offers a number of features specific for Django testing, available
in the :mod:`hypothesis[django]` :doc:`extra `. This is tested
against each supported series with mainstream or extended support -
if you're still getting security patches, you can test with Hypothesis.
Using it is quite straightforward: All you need to do is subclass
:class:`hypothesis.extra.django.TestCase` or
:class:`hypothesis.extra.django.TransactionTestCase`
and you can use :func:`@given ` as normal,
and the transactions will be per example
rather than per test function as they would be if you used :func:`@given ` with a normal
django test suite (this is important because your test function will be called
multiple times and you don't want them to interfere with each other). Test cases
on these classes that do not use
:func:`@given ` will be run as normal.
I strongly recommend not using
:class:`~hypothesis.extra.django.TransactionTestCase`
unless you really have to.
Because Hypothesis runs this in a loop the performance problems it normally has
are significantly exacerbated and your tests will be really slow.
If you are using :class:`~hypothesis.extra.django.TransactionTestCase`,
you may need to use ``@settings(suppress_health_check=[HealthCheck.too_slow])``
to avoid :doc:`errors due to slow example generation `.
In addition to the above, Hypothesis has some support for automatically
deriving strategies for your model types, which you can then customize further.
.. warning::
Hypothesis creates saved models. This will run inside your testing
transaction when using the test runner, but if you use the dev console this
will leave debris in your database.
For example, using the trivial django project I have for testing:

.. code-block:: python

    >>> from hypothesis.extra.django.models import models
    >>> from toystore.models import Customer
    >>> c = models(Customer).example()
    >>> c
    >>> c.email
    'jaime.urbina@gmail.com'
    >>> c.name
    '\U00109d3d\U000e07be\U000165f8\U0003fabf\U000c12cd\U000f1910\U00059f12\U000519b0\U0003fabf\U000f1910\U000423fb\U000423fb\U00059f12\U000e07be\U000c12cd\U000e07be\U000519b0\U000165f8\U0003fabf\U0007bc31'
    >>> c.age
    -873375803

Hypothesis has just created this with whatever the relevant type of data is.
Obviously the customer's age is implausible, so let's fix that:

.. code-block:: python

    >>> from hypothesis.strategies import integers
    >>> c = models(Customer, age=integers(min_value=0, max_value=120)).example()
    >>> c
    >>> c.age
    5

You can use this to override any fields you like. Sometimes this will be
mandatory: If you have a non-nullable field of a type Hypothesis doesn't know
how to create (e.g. a foreign key) then the models function will error unless
you explicitly pass a strategy to use there.
Foreign keys are not automatically derived. If they're nullable they will default
to always being null, otherwise you always have to specify them. e.g. suppose
we had a Shop type with a foreign key to company, we would define a strategy
for it as:

.. code:: python

    shop_strategy = models(Shop, company=models(Company))

---------------
Tips and tricks
---------------
Custom field types
==================
If you have a custom Django field type you can register it with Hypothesis's
model deriving functionality by registering a default strategy for it:

.. code-block:: python

    >>> from toystore.models import CustomishField, Customish
    >>> models(Customish).example()
    hypothesis.errors.InvalidArgument: Missing arguments for mandatory field
        customish for model Customish
    >>> from hypothesis.extra.django.models import add_default_field_mapping
    >>> from hypothesis.strategies import just
    >>> add_default_field_mapping(CustomishField, just("hi"))
    >>> x = models(Customish).example()
    >>> x.customish
    'hi'

Note that this mapping is on exact type. Subtypes will not inherit it.
Generating child models
=======================
For the moment there's no explicit support in hypothesis-django for generating
dependent models. i.e. a Company model will generate no Shops. However if you
want to generate some dependent models as well, you can emulate this by using
the *flatmap* function as follows:

.. code:: python

    from hypothesis.strategies import lists, just

    def generate_with_shops(company):
        return lists(models(Shop, company=just(company))).map(lambda _: company)

    company_with_shops_strategy = models(Company).flatmap(generate_with_shops)

Let's unpack what this is doing:
The way flatmap works is that we draw a value from the original strategy, then
apply a function to it which gives us a new strategy. We then draw a value from
*that* strategy. So in this case we're first drawing a company, and then we're
drawing a list of shops belonging to that company: The *just* strategy is a
strategy such that drawing it always produces the individual value, so
``models(Shop, company=just(company))`` is a strategy that generates a Shop belonging
to the original company.
So the following code would give us a list of shops all belonging to the same
company:

.. code:: python

    models(Company).flatmap(lambda c: lists(models(Shop, company=just(c))))

The only difference from this and the above is that we want the company, not
the shops. This is where the inner map comes in. We build the list of shops
and then throw it away, instead returning the company we started with. This
works because the models that Hypothesis generates are saved in the database,
so we're essentially running the inner strategy purely for the side effect of
creating those children in the database.
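The mechanics of ``flatmap`` (draw a value, build a new strategy from it, draw from that) can be modelled with plain functions taking a ``random.Random``. Everything here is a toy stand-in, not the Hypothesis API:

```python
import random

def flatmap(draw_a, make_draw_b):
    # draw a value, use it to build a second draw, then run that draw
    def combined(rng):
        a = draw_a(rng)
        return make_draw_b(a)(rng)
    return combined

draw_company = lambda rng: rng.randint(1, 5)  # stands in for models(Company)
# for each company, draw some shops tied to it, then keep only the company
draw = flatmap(
    draw_company,
    lambda cid: lambda rng: (cid, [cid] * rng.randint(0, 3)),
)
company, shops = draw(random.Random(0))
assert all(shop == company for shop in shops)
```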
Using default field values
==========================
Hypothesis ignores field defaults and always tries to generate values, even if
it doesn't know how to. You can tell it to use the default value for a field
instead of generating one by passing ``fieldname=default_value`` to
``models()``:

.. code:: python

    >>> from toystore.models import DefaultCustomish
    >>> models(DefaultCustomish).example()
    hypothesis.errors.InvalidArgument: Missing arguments for mandatory field
        customish for model DefaultCustomish
    >>> from hypothesis.extra.django.models import default_value
    >>> x = models(DefaultCustomish, customish=default_value).example()
    >>> x.customish
    'b'


============
Testimonials
============
This is a page for listing people who are using Hypothesis and how excited they
are about that. If that's you and your name is not on the list, `this file is in
Git `_
and I'd love it if you sent me a pull request to fix that.
---------------------------------------------------------------------------------------
`Stripe `_
---------------------------------------------------------------------------------------
At Stripe we use Hypothesis to test every piece of our machine
learning model training pipeline (powered by scikit). Before we
migrated, our tests were filled with hand-crafted pandas Dataframes
that weren't representative at all of our actual very complex
data. Because we needed to craft examples for each test, we took the
easy way out and lived with extremely low test coverage.
Hypothesis changed all that. Once we had our strategies for generating
Dataframes of features it became trivial to slightly customize each
strategy for new tests. Our coverage is now close to 90%.
Full-stop, property-based testing is profoundly more powerful - and
has caught or prevented far more bugs - than our old style of
example-based testing.
---------------------------------------------------------------------------------------
Kristian Glass - Director of Technology at `LaterPay GmbH `_
---------------------------------------------------------------------------------------
Hypothesis has been brilliant for expanding the coverage of our test cases,
and also for making them much easier to read and understand,
so we're sure we're testing the things we want in the way we want.
-----------------------------------------------
`Seth Morton `_
-----------------------------------------------
When I first heard about Hypothesis, I knew I had to include it in my two
open-source Python libraries, `natsort `_
and `fastnumbers `_ . Quite frankly,
I was a little appalled at the number of bugs and "holes" I found in the code. I can
now say with confidence that my libraries are more robust to "the wild." In
addition, Hypothesis gave me the confidence to expand these libraries to fully
support Unicode input, which I never would have had the stomach for without such
thorough testing capabilities. Thanks!
-------------------------------------------
`Sixty North `_
-------------------------------------------
At Sixty North we use Hypothesis for testing
`Segpy `_ an open source Python library for
shifting data between Python data structures and SEG Y files which contain
geophysical data from the seismic reflection surveys used in oil and gas
exploration.
This is our first experience of property-based testing – as opposed to example-based
testing. Not only are our tests more powerful, they are also much better
explanations of what we expect of the production code. In fact, the tests are much
closer to being specifications. Hypothesis has located real defects in our code
which went undetected by traditional test cases, simply because Hypothesis is more
relentlessly devious about test case generation than us mere humans! We found
Hypothesis particularly beneficial for Segpy because SEG Y is an antiquated format
that uses legacy text encodings (EBCDIC) and even a legacy floating point format
we implemented from scratch in Python.
Hypothesis is sure to find a place in most of our future Python codebases and many
existing ones too.
-------------------------------------------
`mulkieran `_
-------------------------------------------
Just found out about this excellent QuickCheck for Python implementation and
ran up a few tests for my `bytesize `_
package last night. Refuted a few hypotheses in the process.
Looking forward to using it with a bunch of other projects as well.
-----------------------------------------------
`Adam Johnson `_
-----------------------------------------------
I have written a small library to serialize ``dict``\s to MariaDB's dynamic
columns binary format,
`mariadb-dyncol `_. When I first
developed it, I thought I had tested it really well - there were hundreds of
test cases, some of them even taken from MariaDB's test suite itself. I was
ready to release.
Lucky for me, I tried Hypothesis with David at the PyCon UK sprints. Wow! It
found bug after bug after bug. Even after a first release, I thought of a way
to make the tests do more validation, which revealed a further round of bugs!
Most impressively, Hypothesis found a complicated off-by-one error in a
condition with 4095 versus 4096 bytes of data - something that I would never
have found.
Long live Hypothesis! (Or at least, property-based testing).
-------------------------------------------
`Josh Bronson `_
-------------------------------------------
Adopting Hypothesis improved `bidict `_'s
test coverage and significantly increased our ability to make changes to
the code with confidence that correct behavior would be preserved.
Thank you, David, for the great testing tool.
--------------------------------------------
`Cory Benfield `_
--------------------------------------------
Hypothesis is the single most powerful tool in my toolbox for working with
algorithmic code, or any software that produces predictable output from a wide
range of sources. When using it with
`Priority `_, Hypothesis consistently found
errors in my assumptions and extremely subtle bugs that would have taken months
of real-world use to locate. In some cases, Hypothesis found subtle deviations
from the correct output of the algorithm that may never have been noticed at
all.
When it comes to validating the correctness of your tools, nothing comes close
to the thoroughness and power of Hypothesis.
------------------------------------------
`Jon Moore `_
------------------------------------------
One extremely satisfied user here. Hypothesis is a really solid implementation
of property-based testing, adapted well to Python, and with good features
such as failure-case shrinkers. I first used it on a project where we needed
to verify that a vendor's Python and non-Python implementations of an algorithm
matched, and it found about a dozen cases that previous example-based testing
and code inspections had not. Since then I've been evangelizing for it at our firm.
--------------------------------------------
`Russel Winder `_
--------------------------------------------
I am using Hypothesis as an integral part of my Python workshops. Testing is an integral part of Python
programming and whilst unittest and, better, py.test can handle example-based testing, property-based
testing is increasingly far more important than example-based testing, and Hypothesis fits the bill.
---------------------------------------------
`Wellfire Interactive `_
---------------------------------------------
We've been using Hypothesis in a variety of client projects, from testing
Django-related functionality to domain-specific calculations. It both speeds
up and simplifies the testing process since there's so much less tedious and
error-prone work to do in identifying edge cases. Test coverage is nice but
test depth is even nicer, and it's much easier to get meaningful test depth
using Hypothesis.
--------------------------------------------------
`Cody Kochmann `_
--------------------------------------------------
Hypothesis is being used as the engine for random object generation with my
open source function fuzzer
`battle_tested `_
which maps all behaviors of a function allowing you to minimize the chance of
unexpected crashes when running code in production.
With how efficient Hypothesis is at generating the edge cases that cause
unexpected behavior to occur,
`battle_tested `_
is able to map out the entire behavior of most functions in less than a few
seconds.
Hypothesis truly is a masterpiece. I can't thank you enough for building it.
---------------------------------------------------
`Merchise Autrement `_
---------------------------------------------------
Just minutes after our first use of hypothesis `we uncovered a subtle bug`__
in one of our most used libraries. Since then, we have increasingly used
hypothesis to improve the quality of our testing in libraries and applications
as well.
__ https://github.com/merchise/xoutil/commit/0a4a0f529812fed363efb653f3ade2d2bc203945
-------------------------------------------
`Your name goes here `_
-------------------------------------------
I know there are many more, because I keep finding out about new people I'd never
even heard of using Hypothesis. If you're looking for a way to give back to a tool you
love, adding your name here only takes a moment and would really help a lot. As per
instructions at the top, just send me a pull request and I'll add you to the list.
==================
Some more examples
==================
This is a collection of examples of how to use Hypothesis in interesting ways.
It's small for now but will grow over time.
All of these examples are designed to be run under `py.test`_ (`nose`_ should probably
work too).
----------------------------------
How not to sort by a partial order
----------------------------------
The following is an example that's been extracted and simplified from a real
bug that occurred in an earlier version of Hypothesis. The real bug was a lot
harder to find.
Suppose we've got the following type:
.. code:: python

    class Node(object):
        def __init__(self, label, value):
            self.label = label
            self.value = tuple(value)

        def __repr__(self):
            return "Node(%r, %r)" % (self.label, self.value)

        def sorts_before(self, other):
            if len(self.value) >= len(other.value):
                return False
            return other.value[:len(self.value)] == self.value

Each node is a label and a sequence of some data, and we have the relationship
``sorts_before``, meaning that the data of the left node is an initial segment
of the data of the right. So e.g. a node with value ``[1, 2]`` will sort before
a node with value ``[1, 2, 3]``, but neither of ``[1, 2]`` and ``[1, 3]`` will
sort before the other.
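Concretely, the prefix relation can be sketched as a standalone function (a hypothetical free-function version of the ``sorts_before`` method, operating directly on value tuples):

```python
def sorts_before(left, right):
    # left sorts before right iff left is a strict prefix of right.
    if len(left) >= len(right):
        return False
    return right[:len(left)] == left

assert sorts_before((1, 2), (1, 2, 3))   # a strict prefix sorts before
assert not sorts_before((1, 2), (1, 3))  # neither is a prefix of the other
assert not sorts_before((1, 3), (1, 2))
```

Note that the relation is only a *partial* order: most pairs of values are simply incomparable.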
We have a list of nodes, and we want to topologically sort them with respect to
this ordering. That is, we want to arrange the list so that if ``x.sorts_before(y)``
then x appears earlier in the list than y. We naively think that the easiest way
to do this is to extend the partial order defined here to a total order by
breaking ties arbitrarily and then using a normal sorting algorithm. So we
define the following code:
.. code:: python

    from functools import total_ordering


    @total_ordering
    class TopoKey(object):
        def __init__(self, node):
            self.value = node

        def __lt__(self, other):
            if self.value.sorts_before(other.value):
                return True
            if other.value.sorts_before(self.value):
                return False
            return self.value.label < other.value.label


    def sort_nodes(xs):
        xs.sort(key=TopoKey)

This takes the order defined by ``sorts_before`` and extends it by breaking ties by
comparing the node labels.
But now we want to test that it works.
First we write a function to verify that our desired outcome holds:
.. code:: python

    def is_prefix_sorted(xs):
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if xs[j].sorts_before(xs[i]):
                    return False
        return True

This will return false if it ever finds a pair in the wrong order and
return true otherwise.
Given this function, what we want to do with Hypothesis is assert that for all
sequences of nodes, the result of calling ``sort_nodes`` on it is sorted.
First we need to define a strategy for Node:
.. code:: python

    import hypothesis.strategies as s

    NodeStrategy = s.builds(
        Node,
        s.integers(),
        s.lists(s.booleans(), average_size=5, max_size=10))

We want to generate *short* lists of values so that there's a decent chance of
one being a prefix of the other (which is also why we use booleans as the
elements). We then define a strategy which builds a node out of an integer and
one of those short lists of booleans.
We can now write a test:
.. code:: python

    from hypothesis import given

    @given(s.lists(NodeStrategy))
    def test_sorting_nodes_is_prefix_sorted(xs):
        sort_nodes(xs)
        assert is_prefix_sorted(xs)

This immediately fails with the following example:
.. code:: python

    [Node(0, (False, True)), Node(0, (True,)), Node(0, (False,))]

The reason is that neither of ``(False, True)`` and ``(True,)`` is a prefix of
the other, so the comparison falls back to the labels, and the first two nodes
compare equal because their labels are equal. This makes the whole order
non-transitive and produces basically nonsense results.
But this is pretty unsatisfying. It only works because they have the same label. Perhaps
we actually wanted our labels to be unique. Let's change the test to do that.
.. code:: python

    def deduplicate_nodes_by_label(nodes):
        table = {}
        for node in nodes:
            table[node.label] = node
        return list(table.values())

    NodeSet = s.lists(NodeStrategy).map(deduplicate_nodes_by_label)

We define a function to deduplicate nodes by labels, and then map that over a strategy
for lists of nodes to give us a strategy for lists of nodes with unique labels. We can
now rewrite the test to use that:
.. code:: python

    @given(NodeSet)
    def test_sorting_nodes_is_prefix_sorted(xs):
        sort_nodes(xs)
        assert is_prefix_sorted(xs)

Hypothesis quickly gives us an example of this *still* being wrong:
.. code:: python

    [Node(0, (False,)), Node(-1, (True,)), Node(-2, (False, False))]

Now this is a more interesting example. None of the nodes will sort equal. What is
happening here is that the first node is strictly less than the last node because
(False,) is a prefix of (False, False). This is in turn strictly less than the middle
node because neither is a prefix of the other and -2 < -1. The middle node is then
less than the first node because -1 < 0.
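We can verify this cycle directly. The snippet below restates the classes from above (and gives ``TopoKey`` an explicit ``__eq__`` so it is self-contained):

```python
from functools import total_ordering

class Node(object):
    def __init__(self, label, value):
        self.label = label
        self.value = tuple(value)

    def sorts_before(self, other):
        if len(self.value) >= len(other.value):
            return False
        return other.value[:len(self.value)] == self.value

@total_ordering
class TopoKey(object):
    def __init__(self, node):
        self.value = node

    def __eq__(self, other):
        return not (self < other) and not (other < self)

    def __lt__(self, other):
        if self.value.sorts_before(other.value):
            return True
        if other.value.sorts_before(self.value):
            return False
        return self.value.label < other.value.label

a = Node(0, (False,))
b = Node(-1, (True,))
c = Node(-2, (False, False))

# The comparison is cyclic: a < c < b < a, so no total order can satisfy it.
assert TopoKey(a) < TopoKey(c)  # (False,) is a prefix of (False, False)
assert TopoKey(c) < TopoKey(b)  # no prefix either way, so -2 < -1 decides
assert TopoKey(b) < TopoKey(a)  # no prefix either way, so -1 < 0 decides
```

With a cyclic key like this, whatever ``list.sort`` produces cannot be correct.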
So, convinced that our implementation is broken, we write a better one:
.. code:: python

    def sort_nodes(xs):
        for i in range(1, len(xs)):
            j = i - 1
            while j >= 0:
                if xs[j].sorts_before(xs[j + 1]):
                    break
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                j -= 1

This is just insertion sort slightly modified - we swap a node backwards until swapping
it further would violate the order constraints. The reason this works is because our
order is a partial order already (this wouldn't produce a valid result for a general
topological sorting - you need the transitivity).
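To convince ourselves, we can run the fixed sort on the counterexample Hypothesis found earlier. This sketch restates the pieces so that it is self-contained:

```python
class Node(object):
    def __init__(self, label, value):
        self.label = label
        self.value = tuple(value)

    def sorts_before(self, other):
        if len(self.value) >= len(other.value):
            return False
        return other.value[:len(self.value)] == self.value

def sort_nodes(xs):
    # Modified insertion sort: swap each node backwards until swapping it
    # further would violate the order constraints.
    for i in range(1, len(xs)):
        j = i - 1
        while j >= 0:
            if xs[j].sorts_before(xs[j + 1]):
                break
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
            j -= 1

def is_prefix_sorted(xs):
    # True iff no later node sorts before an earlier one.
    return not any(
        xs[j].sorts_before(xs[i])
        for i in range(len(xs))
        for j in range(i + 1, len(xs))
    )

# The counterexample from the failing run above now comes out sorted.
nodes = [Node(0, (False,)), Node(-1, (True,)), Node(-2, (False, False))]
sort_nodes(nodes)
assert is_prefix_sorted(nodes)
```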
We now run our test again and it passes, telling us that this time we've successfully
managed to sort some nodes without getting it completely wrong. Go us.
--------------------
Time zone arithmetic
--------------------
This is an example of some tests for pytz which check that various timezone
conversions behave as you would expect them to. These tests should all pass,
and are mostly a demonstration of some useful sorts of thing to test with
Hypothesis, and how the ``hypothesis.extra.pytz`` extra package works.
.. doctest::

    >>> from datetime import timedelta
    >>> from hypothesis import given
    >>> from hypothesis.strategies import datetimes
    >>> from hypothesis.extra.pytz import timezones

    >>> # The datetimes strategy is naive by default, so tell it to use timezones
    >>> aware_datetimes = datetimes(timezones=timezones())

    >>> @given(aware_datetimes, timezones(), timezones())
    ... def test_convert_via_intermediary(dt, tz1, tz2):
    ...     """Test that converting between timezones is not affected
    ...     by a detour via another timezone.
    ...     """
    ...     assert dt.astimezone(tz1).astimezone(tz2) == dt.astimezone(tz2)

    >>> @given(aware_datetimes, timezones())
    ... def test_convert_to_and_fro(dt, tz2):
    ...     """If we convert to a new timezone and back to the old one
    ...     this should leave the result unchanged.
    ...     """
    ...     tz1 = dt.tzinfo
    ...     assert dt == dt.astimezone(tz2).astimezone(tz1)

    >>> @given(aware_datetimes, timezones())
    ... def test_adding_an_hour_commutes(dt, tz):
    ...     """When converting between timezones it shouldn't matter
    ...     if we add an hour here or add an hour there.
    ...     """
    ...     an_hour = timedelta(hours=1)
    ...     assert (dt + an_hour).astimezone(tz) == dt.astimezone(tz) + an_hour

    >>> @given(aware_datetimes, timezones())
    ... def test_adding_a_day_commutes(dt, tz):
    ...     """When converting between timezones it shouldn't matter
    ...     if we add a day here or add a day there.
    ...     """
    ...     a_day = timedelta(days=1)
    ...     assert (dt + a_day).astimezone(tz) == dt.astimezone(tz) + a_day

    >>> # And we can check that our tests pass
    >>> test_convert_via_intermediary()
    >>> test_convert_to_and_fro()
    >>> test_adding_an_hour_commutes()
    >>> test_adding_a_day_commutes()

-------------------
Condorcet's Paradox
-------------------
A classic paradox in voting theory, called Condorcet's paradox, is that
majority preferences are not transitive. That is, there is a population
and a set of three candidates A, B and C such that the majority of the
population prefer A to B, B to C and C to A.
Wouldn't it be neat if we could use Hypothesis to provide an example of this?
Well as you can probably guess from the presence of this section, we can! This
is slightly surprising because it's not really obvious how we would generate an
election given the types that Hypothesis knows about.
The trick here turns out to be twofold:
1. We can generate a type that is *much larger* than an election, extract an election out of that, and rely on minimization to throw away all the extraneous detail.
2. We can use assume and rely on Hypothesis's adaptive exploration to focus on the examples that turn out to generate interesting elections.
Without further ado, here is the code:
.. code:: python

    from hypothesis import given, assume
    from hypothesis.strategies import integers, lists
    from collections import Counter


    def candidates(votes):
        return {candidate for vote in votes for candidate in vote}


    def build_election(votes):
        """
        Given a list of lists we extract an election out of this. We do this
        in two phases:

        1. First of all we work out the full set of candidates present in all
           votes and throw away any votes that do not have that whole set.
        2. We then take each vote and make it unique, keeping only the first
           instance of any candidate.

        This gives us a list of total orderings of some set. It will usually
        be a lot smaller than the starting list, but that's OK.
        """
        all_candidates = candidates(votes)
        votes = list(filter(lambda v: set(v) == all_candidates, votes))
        if not votes:
            return []
        rebuilt_votes = []
        for vote in votes:
            rv = []
            for v in vote:
                if v not in rv:
                    rv.append(v)
            assert len(rv) == len(all_candidates)
            rebuilt_votes.append(rv)
        return rebuilt_votes


    @given(lists(lists(integers(min_value=1, max_value=5))))
    def test_elections_are_transitive(election):
        election = build_election(election)
        # Small elections are unlikely to be interesting
        assume(len(election) >= 3)
        all_candidates = candidates(election)
        # Elections with fewer than three candidates certainly can't exhibit
        # intransitivity
        assume(len(all_candidates) >= 3)

        # Now we check if the election is transitive.
        # First calculate the pairwise counts of how many prefer each
        # candidate to the other
        counts = Counter()
        for vote in election:
            for i in range(len(vote)):
                for j in range(i + 1, len(vote)):
                    counts[(vote[i], vote[j])] += 1

        # Now look at which pairs of candidates one has a majority over the
        # other and store that.
        graph = {}
        all_candidates = candidates(election)
        for i in all_candidates:
            for j in all_candidates:
                if counts[(i, j)] > counts[(j, i)]:
                    graph.setdefault(i, set()).add(j)

        # Now for each triple assert that it is transitive.
        for x in all_candidates:
            for y in graph.get(x, ()):
                for z in graph.get(y, ()):
                    assert x not in graph.get(z, ())

The example Hypothesis gives me on my first run (your mileage may of course
vary) is:
.. code:: python

    [[3, 1, 4], [4, 3, 1], [1, 4, 3]]

Which does indeed do the job: The majority (votes 0 and 1) prefer 3 to 1, the
majority (votes 0 and 2) prefer 1 to 4 and the majority (votes 1 and 2) prefer
4 to 3. This is in fact basically the canonical example of the voting paradox,
modulo variations on the names of candidates.
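To see why, we can redo the test's pairwise counting by hand on this example (a self-contained sketch using the same ``Counter`` scheme as above):

```python
from collections import Counter

election = [[3, 1, 4], [4, 3, 1], [1, 4, 3]]

# Count, for every ordered pair, how many voters prefer the first
# candidate to the second.
counts = Counter()
for vote in election:
    for i in range(len(vote)):
        for j in range(i + 1, len(vote)):
            counts[(vote[i], vote[j])] += 1

# Each candidate beats the next one around the cycle 3 -> 1 -> 4 -> 3.
assert counts[(3, 1)] > counts[(1, 3)]  # votes 0 and 1 prefer 3 to 1
assert counts[(1, 4)] > counts[(4, 1)]  # votes 0 and 2 prefer 1 to 4
assert counts[(4, 3)] > counts[(3, 4)]  # votes 1 and 2 prefer 4 to 3
```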
-------------------
Fuzzing an HTTP API
-------------------
Hypothesis's support for testing HTTP services is somewhat nascent. There are
plans for some fully featured things around this, but right now they're
probably quite far down the line.
But you can do a lot yourself without any explicit support! Here's a script
I wrote to throw random data against the API for an entirely fictitious service
called Waspfinder (this is only lightly obfuscated and you can easily figure
out who I'm actually talking about, but I don't want you to run this code and
hammer their API without their permission).
All this does is use Hypothesis to generate random JSON data matching the
format their API asks for and check for 500 errors. More advanced tests which
then use the result and go on to do other things are definitely also possible.
.. code:: python

    import unittest
    from hypothesis import given, assume, settings, strategies as st
    from collections import namedtuple
    import requests
    import os
    import random
    import time
    import math

    # These tests will be quite slow because we have to talk to an external
    # service. Also we'll put in a sleep between calls so as to not hammer it.
    # As a result we reduce the number of test cases and turn off the timeout.
    settings.default.max_examples = 100
    settings.default.timeout = -1

    Goal = namedtuple("Goal", ("slug",))

    # We just pass in our API credentials via environment variables.
    waspfinder_token = os.getenv('WASPFINDER_TOKEN')
    waspfinder_user = os.getenv('WASPFINDER_USER')
    assert waspfinder_token is not None
    assert waspfinder_user is not None

    GoalData = st.fixed_dictionaries({
        'title': st.text(),
        'goal_type': st.sampled_from([
            "hustler", "biker", "gainer", "fatloser", "inboxer",
            "drinker", "custom"]),
        'goaldate': st.one_of(st.none(), st.floats()),
        'goalval': st.one_of(st.none(), st.floats()),
        'rate': st.one_of(st.none(), st.floats()),
        'initval': st.floats(),
        'panic': st.floats(),
        'secret': st.booleans(),
        'datapublic': st.booleans(),
    })

    needs2 = ['goaldate', 'goalval', 'rate']


    class WaspfinderTest(unittest.TestCase):
        @given(GoalData)
        def test_create_goal_dry_run(self, data):
            # We want slug to be unique for each run so that multiple test
            # runs don't interfere with each other. If for some reason some
            # slugs trigger an error and others don't we'll get a Flaky
            # error, but that's OK.
            slug = hex(random.getrandbits(32))[2:]

            # Use assume to guide us through validation we know about,
            # otherwise we'll spend a lot of time generating boring examples.

            # Title must not be empty
            assume(data["title"])

            # Exactly two of these values should be not None. The other will
            # be inferred by the API.
            assume(len([1 for k in needs2 if data[k] is not None]) == 2)
            for v in data.values():
                if isinstance(v, float):
                    assume(not math.isnan(v))
            data["slug"] = slug

            # The API nicely supports a dry run option, which means we don't
            # have to worry about the user account being spammed with lots of
            # fake goals. Otherwise we would have to make sure we cleaned up
            # after ourselves in this test.
            data["dryrun"] = True
            data["auth_token"] = waspfinder_token
            for d, v in data.items():
                if v is None:
                    data[d] = "null"
                else:
                    data[d] = str(v)
            result = requests.post(
                "https://waspfinder.example.com/api/v1/users/"
                "%s/goals.json" % (waspfinder_user,), data=data)

            # Let's not hammer the API too badly. This will of course make
            # the tests even slower than they otherwise would have been, but
            # that's life.
            time.sleep(1.0)

            # For the moment all we're testing is that this doesn't generate
            # an internal error. If we didn't use the dry run option we could
            # have then tried doing more with the result, but this is a good
            # start.
            self.assertNotEqual(result.status_code, 500)


    if __name__ == '__main__':
        unittest.main()

.. _py.test: https://docs.pytest.org/en/latest/
.. _nose: https://nose.readthedocs.io/en/latest/
===================
Additional packages
===================
Hypothesis itself does not have any dependencies, but there are some packages that
need additional things installed in order to work.
You can install these dependencies using the setuptools extra feature as e.g.
``pip install hypothesis[django]``. This will check installation of compatible versions.
You can also just install hypothesis into a project using them, ignore the version
constraints, and hope for the best.
In general "Which version is Hypothesis compatible with?" is a hard question to answer
and even harder to regularly test. Hypothesis is always tested against the latest
compatible version and each package will note the expected compatibility range. If
you run into a bug with any of these please specify the dependency version.
There are separate pages for :doc:`django` and :doc:`numpy`.
--------------------
hypothesis[pytz]
--------------------
.. automodule:: hypothesis.extra.pytz
    :members:

--------------------
hypothesis[datetime]
--------------------
.. automodule:: hypothesis.extra.datetime
    :members:

.. _faker-extra:
-----------------------
hypothesis[fakefactory]
-----------------------
.. note::
This extra package is deprecated. We strongly recommend using native
Hypothesis strategies, which are more effective at both finding and
shrinking failing examples for your tests.
The :func:`~hypothesis.strategies.from_regex`,
:func:`~hypothesis.strategies.text` (with some specific alphabet), and
:func:`~hypothesis.strategies.sampled_from` strategies may be particularly
useful.
:pypi:`Faker` (previously :pypi:`fake-factory`) is a Python package that
generates fake data for you. It's great for bootstrapping your database,
creating good-looking XML documents, stress-testing a database, or anonymizing
production data. However, it's not designed for automated testing - data from
Hypothesis looks less realistic, but produces minimal bug-triggering examples
and uses coverage information to check more cases.
``hypothesis.extra.fakefactory`` lets you use Faker generators to parametrize
Hypothesis tests. This was only ever meant to ease your transition to
Hypothesis, but we've improved Hypothesis enough since then that we no longer
recommend using Faker for automated tests under any circumstances.
hypothesis.extra.fakefactory defines a function fake_factory which returns a
strategy for producing text data from any Faker provider.
So for example the following will parametrize a test by an email address:
.. code-block:: pycon

    >>> fake_factory('email').example()
    'tnader@prosacco.info'
    >>> fake_factory('name').example()
    'Zbyněk Černý CSc.'

You can explicitly specify the locale (otherwise it uses any of the available
locales), either as a single locale or as several:
.. code-block:: pycon

    >>> fake_factory('name', locale='en_GB').example()
    'Antione Gerlach'
    >>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
    'Miloš Šťastný'
    >>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
    'Harm Sanford'

You can use custom Faker providers via the ``providers`` argument:
.. code-block:: pycon

    >>> from faker.providers import BaseProvider
    >>> class KittenProvider(BaseProvider):
    ...     def meows(self):
    ...         return 'meow %d' % (self.random_number(digits=10),)
    >>> fake_factory('meows', providers=[KittenProvider]).example()
    'meow 9139348419'

=============
Health checks
=============
Hypothesis tries to detect common mistakes and things that will cause difficulty
at run time in the form of a number of 'health checks'.
These include detecting and warning about:
* Strategies with very slow data generation
* Strategies which filter out too much
* Recursive strategies which branch too much
* Tests that are unlikely to complete in a reasonable amount of time.
If any of these scenarios are detected, Hypothesis will emit a warning about them.
The general goal of these health checks is to warn you about things that you are doing that might
appear to work but will either cause Hypothesis to not work correctly or to perform badly.
To selectively disable health checks, use the suppress_health_check setting.
The argument for this parameter is a list with elements drawn from any of
the class-level attributes of the HealthCheck class.
To disable all health checks, set the perform_health_check settings parameter
to False.
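For example, a hypothetical test that suppresses just the slow-generation check might look like this (a sketch, assuming the ``HealthCheck.too_slow`` attribute):

```python
from hypothesis import HealthCheck, given, settings, strategies as st

@settings(suppress_health_check=[HealthCheck.too_slow])
@given(st.lists(st.integers()))
def test_tolerates_slow_generation(xs):
    # With too_slow suppressed, slow data generation will not abort the test.
    assert isinstance(xs, list)
```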
.. module:: hypothesis
.. autoclass:: HealthCheck
    :undoc-members:
    :inherited-members:

======================
Welcome to Hypothesis!
======================
`Hypothesis `_ is a Python library for
creating unit tests which are simpler to write and more powerful when run,
finding edge cases in your code you wouldn't have thought to look for. It is
stable, powerful and easy to add to any existing test suite.
It works by letting you write tests that assert that something should be true
for every case, not just the ones you happen to think of.
Think of a normal unit test as being something like the following:
1. Set up some data.
2. Perform some operations on the data.
3. Assert something about the result.
Hypothesis lets you write tests which instead look like this:
1. For all data matching some specification.
2. Perform some operations on the data.
3. Assert something about the result.
This is often called property based testing, and was popularised by the
Haskell library `Quickcheck `_.
It works by generating random data matching your specification and checking
that your guarantee still holds in that case. If it finds an example where it doesn't,
it takes that example and cuts it down to size, simplifying it until it finds a
much smaller example that still causes the problem. It then saves that example
for later, so that once it has found a problem with your code it will not forget
it in the future.
Writing tests of this form usually consists of deciding on guarantees that
your code should make - properties that should always hold true,
regardless of what the world throws at you. Examples of such guarantees
might be:
* Your code shouldn't throw an exception, or should only throw a particular type of exception (this works particularly well if you have a lot of internal assertions).
* If you delete an object, it is no longer visible.
* If you serialize and then deserialize a value, then you get the same value back.
Now you know the basics of what Hypothesis does, the rest of this
documentation will take you through how and why. It's divided into a
number of sections, which you can see in the sidebar (or the
menu at the top if you're on mobile), but you probably want to begin with
the :doc:`Quick start guide `, which will give you a worked
example of how to use Hypothesis and a detailed outline
of the things you need to know to begin testing your code with it, or
check out some of the
`introductory articles `_.
.. toctree::
    :maxdepth: 1
    :hidden:

    quickstart
    details
    settings
    data
    extras
    django
    numpy
    healthchecks
    database
    stateful
    supported
    examples
    community
    manifesto
    endorsements
    usage
    strategies
    changes
    development
    support
    packaging
    reproducing

=========================
The Purpose of Hypothesis
=========================
What is Hypothesis for?
From the perspective of a user, the purpose of Hypothesis is to make it easier for
you to write better tests.
From my perspective as the author, that is of course also a purpose of Hypothesis,
but (if you will permit me to indulge in a touch of megalomania for a moment), the
larger purpose of Hypothesis is to drag the world kicking and screaming into a new
and terrifying age of high quality software.
Software is, as they say, eating the world. Software is also `terrible`_. It's buggy,
insecure and generally poorly thought out. This combination is clearly a recipe for
disaster.
And the state of software testing is even worse. Although it's fairly uncontroversial
at this point that you *should* be testing your code, can you really say with a straight
face that most projects you've worked on are adequately tested?
A lot of the problem here is that it's too hard to write good tests. Your tests encode
exactly the same assumptions and fallacies that you had when you wrote the code, so they
miss exactly the same bugs that you missed when you wrote the code.
Meanwhile, there are all sorts of tools for making testing better that are basically
unused. The original Quickcheck is from *1999* and the majority of developers have
not even heard of it, let alone used it. There are a bunch of half-baked implementations
for most languages, but very few of them are worth using.
The goal of Hypothesis is to bring advanced testing techniques to the masses, and to
provide an implementation that is so high quality that it is easier to use them than
it is not to use them. Where I can, I will beg, borrow and steal every good idea
I can find that someone has had to make software testing better. Where I can't, I will
invent new ones.
Quickcheck is the start, but I also plan to integrate ideas from fuzz testing (a
planned future feature is to use coverage information to drive example selection, and
the example saving database is already inspired by the workflows people use for fuzz
testing), and am open to and actively seeking out other suggestions and ideas.
The plan is to treat the social problem of people not using these ideas as a bug to
which there is a technical solution: Does property-based testing not match your workflow?
That's a bug, let's fix it by figuring out how to integrate Hypothesis into it.
Too hard to generate custom data for your application? That's a bug. Let's fix it by
figuring out how to make it easier, or how to take something you're already using to
specify your data and derive a generator from that automatically. Find the explanations
of these advanced ideas hopelessly obtuse and hard to follow? That's a bug. Let's provide
you with an easy API that lets you test your code better without a PhD in software
verification.
Grand ambitions, I know, and I expect ultimately the reality will be somewhat less
grand, but so far in about three months of development, Hypothesis has become the most
solid implementation of Quickcheck ever seen in a mainstream language (as long as we don't
count Scala as mainstream yet), and at the same time managed to
significantly push forward the state of the art, so I think there's
reason to be optimistic.
.. _terrible: https://www.youtube.com/watch?v=csyL9EC0S0c
===================================
Hypothesis for the Scientific Stack
===================================
.. _hypothesis-numpy:
-----
numpy
-----
Hypothesis offers a number of strategies for `NumPy `_ testing,
available in the :mod:`hypothesis[numpy]` :doc:`extra `.
It lives in the ``hypothesis.extra.numpy`` package.
The centerpiece is the :func:`~hypothesis.extra.numpy.arrays` strategy, which generates arrays with
any dtype, shape, and contents you can specify or give a strategy for.
To make this as useful as possible, strategies are provided to generate array
shapes and generate all kinds of fixed-size or compound dtypes.
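As a brief illustration of the sort of thing this enables (a sketch, assuming NumPy and the extra are installed; ``arrays(dtype, shape)`` is the basic call):

```python
import numpy as np
from hypothesis import given
from hypothesis.extra.numpy import arrays

@given(arrays(np.int64, (2, 3)))
def test_generated_arrays_have_requested_shape(arr):
    # Every generated example is a 2x3 array of 64-bit integers.
    assert arr.shape == (2, 3)
    assert arr.dtype == np.int64
```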
.. automodule:: hypothesis.extra.numpy
    :members:

.. _hypothesis-pandas:
------
pandas
------
Hypothesis provides strategies for several of the core pandas data types:
:class:`pandas.Index`, :class:`pandas.Series` and :class:`pandas.DataFrame`.
The general approach taken by the pandas module is that there are multiple
strategies for generating indexes, and all of the other strategies take the
number of entries they contain from their index strategy (with sensible defaults).
So e.g. a Series is specified by specifying its :class:`numpy.dtype` (and/or
a strategy for generating elements for it).
.. automodule:: hypothesis.extra.pandas
    :members:

~~~~~~~~~~~~~~~~~~
Supported Versions
~~~~~~~~~~~~~~~~~~
There is quite a lot of variation between pandas versions. We only
commit to supporting the latest version of pandas, but older minor versions are
supported on a "best effort" basis. Hypothesis is currently tested against
and confirmed working with Pandas 0.19, 0.20, and 0.21.
Releases that are not the latest patch release of their minor version are not
tested or officially supported, but will probably also work unless you hit a
pandas bug.
====================
Packaging Guidelines
====================
Downstream packagers often want to package Hypothesis. Here are some guidelines.
The primary guideline is this: If you are not prepared to keep up with the Hypothesis release schedule,
don't. You will annoy me and are doing your users a disservice.
Hypothesis has a very frequent release schedule. It's rare that it goes a week without a release,
and there are often multiple releases in a given week.
If you *are* prepared to keep up with this schedule, you might find the rest of this document useful.
----------------
Release tarballs
----------------
These are available from :gh-link:`the GitHub releases page `. The
tarballs on pypi are intended for installation from a Python tool such as pip and should not
be considered complete releases. Requests to include additional files in them will not be granted. Their absence
is not a bug.
------------
Dependencies
------------
~~~~~~~~~~~~~~~
Python versions
~~~~~~~~~~~~~~~
Hypothesis is designed to work with a range of Python versions. Currently supported are:
* pypy-2.6.1 (earlier versions of pypy *may* work)
* CPython 2.7.x
* CPython 3.4.x
* CPython 3.5.x
* CPython 3.6.x
If you feel the need to have separate Python 3 and Python 2 packages you can, but Hypothesis works unmodified
on either.
~~~~~~~~~~~~~~~~~~~~~~
Other Python libraries
~~~~~~~~~~~~~~~~~~~~~~
Hypothesis has *mandatory* dependencies on the following libraries:
* :pypi:`attrs`
* :pypi:`coverage`
* :pypi:`enum34` is required on Python 2.7
Hypothesis has *optional* dependencies on the following libraries:
* :pypi:`pytz` (almost any version should work)
* :pypi:`Faker`, version 0.7 or later
* `Django `_, all supported versions
* :pypi:`numpy`, 1.10 or later (earlier versions will probably work fine)
* :pypi:`pandas`, 0.19 or later
* :pypi:`py.test ` (2.8.0 or greater). This is a mandatory dependency for testing Hypothesis itself but optional for users.
The way this works when installing Hypothesis normally is that these features become available if the relevant
library is installed.
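This "optional extra" behaviour follows a common Python idiom; the sketch below is a minimal generic illustration of that idiom, not a description of Hypothesis's actual internals (the name ``timezone_support_available`` is invented for the example):

```python
# Generic sketch of the "optional dependency" pattern: attempt the
# import, and only enable the dependent feature if it succeeds.
# (Illustrative only -- not Hypothesis's actual code layout.)
try:
    import pytz  # optional dependency
    HAS_PYTZ = True
except ImportError:
    pytz = None
    HAS_PYTZ = False


def timezone_support_available():
    """Report whether timezone-related features can be offered."""
    return HAS_PYTZ
```

Packagers therefore don't need to declare hard dependencies on the optional libraries: the relevant features simply activate when the library is present.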
------------------
Testing Hypothesis
------------------
If you want to test Hypothesis as part of your packaging you will probably not want to use the mechanisms
Hypothesis itself uses for running its tests, because it has a lot of logic for installing and testing against
different versions of Python.
The tests must be run with py.test. A version more recent than 2.8.0 is strongly encouraged, but they may work
with earlier versions (though py.test-specific logic is disabled before 2.8.0).
Tests are organised into a number of top level subdirectories of the tests/ directory.
* cover: This is a small, reasonably fast, collection of tests designed to give 100% coverage of all but a select
subset of the files when run under Python 3.
* nocover: This is a much slower collection of tests that should not be run under coverage for performance reasons.
* py2: Tests that can only be run under Python 2
* py3: Tests that can only be run under Python 3
* datetime: This tests the subset of Hypothesis that depends on pytz
* fakefactory: This tests the subset of Hypothesis that depends on fakefactory.
* django: This tests the subset of Hypothesis that depends on django (this also depends on fakefactory).
An example invocation for running the coverage subset of these tests:
.. code-block:: bash
pip install -e .
pip install pytest # you will probably want to use your own packaging here
python -m pytest tests/cover
--------
Examples
--------
* `arch linux `_
* `fedora `_
* `gentoo `_
=================
Quick start guide
=================
This document should talk you through everything you need to get started with
Hypothesis.
----------
An example
----------
Suppose we've written a `run length encoding
`_ system and we want to test
it out.
We have the following code which I took straight from the
`Rosetta Code `_ wiki (OK, I
removed some commented out code and fixed the formatting, but there are no
functional modifications):
.. code:: python
def encode(input_string):
count = 1
prev = ''
lst = []
for character in input_string:
if character != prev:
if prev:
entry = (prev, count)
lst.append(entry)
count = 1
prev = character
else:
count += 1
else:
entry = (character, count)
lst.append(entry)
return lst
def decode(lst):
q = ''
for character, count in lst:
q += character * count
return q
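Before handing this to Hypothesis, it helps to see the encoding format on a concrete, non-empty input; the functions below are copied verbatim from the listing above:

```python
def encode(input_string):
    count = 1
    prev = ''
    lst = []
    for character in input_string:
        if character != prev:
            if prev:
                entry = (prev, count)
                lst.append(entry)
            count = 1
            prev = character
        else:
            count += 1
    else:
        entry = (character, count)
        lst.append(entry)
    return lst


def decode(lst):
    q = ''
    for character, count in lst:
        q += character * count
    return q


print(encode('aaab'))          # -> [('a', 3), ('b', 1)]
print(decode(encode('aaab')))  # -> 'aaab': the round trip holds here
```

So each run of a character becomes a ``(character, count)`` pair, and decoding expands the pairs back out.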
We want to write a test for this that will check some invariant of these
functions.
The invariant one tends to try first with this sort of encoding / decoding pair
is that if you encode something and then decode it, you get the same value back.

Let's see how you'd do that with Hypothesis:
.. code:: python
from hypothesis import given
from hypothesis.strategies import text
@given(text())
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
(For this example we'll just let pytest discover and run the test. We'll cover
other ways you could have run it later).
The text function returns what Hypothesis calls a search strategy: an object
with methods that describe how to generate and simplify certain kinds of
values. The @given decorator then takes our test function and turns it into a
parametrized one which, when called, will run the test function over a wide
range of matching data from that strategy.
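To build some intuition for what that means, here is a drastically simplified toy sketch of the idea. The names ``toy_given`` and ``short_text`` are invented for illustration; real Hypothesis does far more, including shrinking failures and replaying a database of past examples:

```python
import random


def toy_given(strategy):
    # Toy stand-in for @given: call the test on many generated inputs.
    def decorator(test):
        def wrapper():
            rnd = random.Random(0)  # fixed seed so the run is repeatable
            for _ in range(100):
                test(strategy(rnd))
        return wrapper
    return decorator


def short_text(rnd):
    # Toy stand-in for text(): short strings of lowercase ASCII letters.
    return ''.join(chr(rnd.randint(97, 122)) for _ in range(rnd.randint(0, 10)))


@toy_given(short_text)
def test_upper_lower_roundtrip(s):
    # Holds for lowercase ASCII input, so every generated case passes.
    assert s.upper().lower() == s


test_upper_lower_roundtrip()
```

The real decorator is much smarter about how it picks inputs, but the shape is the same: the strategy supplies data, and the decorated test is run against lots of it.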
Anyway, this test immediately finds a bug in the code:
.. code::
Falsifying example: test_decode_inverts_encode(s='')
UnboundLocalError: local variable 'character' referenced before assignment
Hypothesis correctly points out that this code is simply wrong if called on
an empty string.
If we fix that by just adding the following code to the beginning of the function
then Hypothesis tells us the code is correct (by doing nothing as you'd expect
a passing test to).
.. code:: python
if not input_string:
return []
If we wanted to make sure this example was always checked we could add it in
explicitly:
.. code:: python
from hypothesis import given, example
from hypothesis.strategies import text
@given(text())
@example('')
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
You don't have to do this, but it can be useful both for clarity purposes and
for reliably hitting hard-to-find examples. Also in local development
Hypothesis will just remember and reuse the examples anyway, but there's not
currently a very good workflow for sharing those in your CI.
It's also worth noting that both example and given support keyword arguments as
well as positional. The following would have worked just as well:
.. code:: python
@given(s=text())
@example(s='')
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
Suppose we had a more interesting bug and forgot to reset the count
each time. Say we missed a line in our ``encode`` method:
.. code:: python
def encode(input_string):
count = 1
prev = ''
lst = []
for character in input_string:
if character != prev:
if prev:
entry = (prev, count)
lst.append(entry)
# count = 1 # Missing reset operation
prev = character
else:
count += 1
else:
entry = (character, count)
lst.append(entry)
return lst
Hypothesis quickly informs us of the following example:
.. code::
Falsifying example: test_decode_inverts_encode(s='001')
Note that the example provided is really quite simple. Hypothesis doesn't just
find *any* counter-example to your tests, it knows how to simplify the examples
it finds to produce small, easy-to-understand ones. In this case, two identical
values are enough to set the count to a number different from one, followed by
another distinct value which should have reset the count but in this case
didn't.
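You can check that counter-example by hand against the broken version; the functions below are copied from the buggy listing above:

```python
def encode(input_string):  # the buggy version: count is never reset
    count = 1
    prev = ''
    lst = []
    for character in input_string:
        if character != prev:
            if prev:
                entry = (prev, count)
                lst.append(entry)
            # count = 1  # Missing reset operation
            prev = character
        else:
            count += 1
    else:
        entry = (character, count)
        lst.append(entry)
    return lst


def decode(lst):
    q = ''
    for character, count in lst:
        q += character * count
    return q


print(encode('001'))          # -> [('0', 2), ('1', 2)]: the stale count leaks into '1'
print(decode(encode('001')))  # -> '0011', which is not the input '001'
```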
The examples Hypothesis provides are valid Python code you can run. Any
arguments that you explicitly provide when calling the function are not
generated by Hypothesis, and if you explicitly provide *all* the arguments
Hypothesis will just call the underlying function once rather than
running it multiple times.
----------
Installing
----------
Hypothesis is :pypi:`available on pypi as "hypothesis" `. You can install it with:
.. code:: bash
pip install hypothesis
If you want to install directly from the source code (e.g. because you want to
make changes and install the changed version) you can do this with:
.. code:: bash
pip install -e .
You should probably run the tests first to make sure nothing is broken. You can
do this with:
.. code:: bash
python setup.py test
Note that if they're not already installed this will try to install the test
dependencies.
You may wish to do all of this in a `virtualenv `_. For example:
.. code:: bash
virtualenv venv
source venv/bin/activate
pip install hypothesis
Will create an isolated environment for you to try hypothesis out in without
affecting your system installed packages.
-------------
Running tests
-------------
In our example above we just let pytest discover and run our tests, but we could
also have run it explicitly ourselves:
.. code:: python
if __name__ == '__main__':
test_decode_inverts_encode()
We could also have done this as a unittest TestCase:
.. code:: python
import unittest
class TestEncoding(unittest.TestCase):
@given(text())
def test_decode_inverts_encode(self, s):
self.assertEqual(decode(encode(s)), s)
if __name__ == '__main__':
unittest.main()
A detail: This works because Hypothesis ignores any arguments it hasn't been
told to provide (positional arguments start from the right), so the self
argument to the test is simply ignored and works as normal. This also means
that Hypothesis will play nicely with other ways of parameterizing tests, e.g.
it works fine if you use pytest fixtures for some arguments and Hypothesis for
others.
-------------
Writing tests
-------------
A test in Hypothesis consists of two parts: A function that looks like a normal
test in your test framework of choice but with some additional arguments, and
a :func:`@given ` decorator that specifies
how to provide those arguments.
Here are some other examples of how you could use that:
.. code:: python
from hypothesis import given
import hypothesis.strategies as st
@given(st.integers(), st.integers())
def test_ints_are_commutative(x, y):
assert x + y == y + x
@given(x=st.integers(), y=st.integers())
def test_ints_cancel(x, y):
assert (x + y) - y == x
@given(st.lists(st.integers()))
def test_reversing_twice_gives_same_list(xs):
# This will generate lists of arbitrary length (usually between 0 and
# 100 elements) whose elements are integers.
ys = list(xs)
ys.reverse()
ys.reverse()
assert xs == ys
@given(st.tuples(st.booleans(), st.text()))
def test_look_tuples_work_too(t):
# A tuple is generated as the one you provided, with the corresponding
# types in those positions.
assert len(t) == 2
assert isinstance(t[0], bool)
assert isinstance(t[1], str)
Note that as we saw in the above example you can pass arguments to :func:`@given `
either as positional or as keywords.
--------------
Where to start
--------------
You should now know enough of the basics to write some tests for your code
using Hypothesis. The best way to learn is by doing, so go have a try.
If you're stuck for ideas for how to use this sort of test for your code, here
are some good starting points:
1. Try just calling functions with appropriate random data and see if they
crash. You may be surprised how often this works. e.g. note that the first
bug we found in the encoding example didn't even get as far as our
assertion: It crashed because it couldn't handle the data we gave it, not
because it did the wrong thing.
2. Look for duplication in your tests. Are there any cases where you're testing
the same thing with multiple different examples? Can you generalise that to
a single test using Hypothesis?
3. `This piece is designed for an F# implementation
`_, but
is still very good advice which you may find helps give you good ideas for
using Hypothesis.
If you have any trouble getting started, don't feel shy about
:doc:`asking for help `.
====================
Reproducing Failures
====================
One of the things that is often concerning for people using randomized testing
like Hypothesis is the question of how to reproduce failing test cases.
Fortunately Hypothesis has a number of features in support of this. The one you
will use most commonly when developing locally is `the example database `,
which means that you shouldn't have to think about the problem at all for local
use - test failures will just automatically reproduce without you having to do
anything.
The example database is perfectly suitable for sharing between machines, but
there currently aren't very good workflows for that, so Hypothesis provides a
number of ways to make examples reproducible by adding them to the source code
of your tests. This is particularly useful when e.g. you are trying to run an
example that has failed on your CI, or otherwise share them between machines.
.. _providing-explicit-examples:
---------------------------
Providing explicit examples
---------------------------
You can explicitly ask Hypothesis to try a particular example, using
.. autofunction:: hypothesis.example
Hypothesis will run all examples you've asked for first. If any of them fail it
will not go on to look for more examples.
It doesn't matter whether you put the example decorator before or after given.
Any permutation of the decorators in the above will do the same thing.
Note that examples can be positional or keyword based. If they're positional then
they will be filled in from the right when calling, so either of the following
styles will work as expected:
.. code:: python
@given(text())
@example("Hello world")
@example(x="Some very long string")
def test_some_code(x):
assert True
from unittest import TestCase
class TestThings(TestCase):
@given(text())
@example("Hello world")
@example(x="Some very long string")
def test_some_code(self, x):
assert True
As with ``@given``, it is not permitted for a single example to be a mix of
positional and keyword arguments.
Either is fine, and you can use one in one example and the other in another
example if for some reason you really want to, but a single example must be
consistent.
-------------------------------------
Reproducing a test run with ``@seed``
-------------------------------------
.. autofunction:: hypothesis.seed
When a test fails unexpectedly, usually due to a health check failure,
Hypothesis will print out a seed that led to that failure, if the test is not
already running with a fixed seed. You can then recreate that failure using either
the ``@seed`` decorator or (if you are running :pypi:`pytest`) with
``--hypothesis-seed``.
.. _reproduce_failure:
-------------------------------------------------------
Reproducing an example with ``@reproduce_failure``
-------------------------------------------------------
Hypothesis has an opaque binary representation that it uses for all examples it
generates. This representation is not intended to be stable across versions or
with respect to changes in the test, but can be used to reproduce failures
with the ``@reproduce_failure`` decorator.
.. autofunction:: hypothesis.reproduce_failure
The intent is that you should never write this decorator by hand, but it is
instead provided by Hypothesis.
When a test fails with a falsifying example, Hypothesis may print out a
suggestion to use ``@reproduce_failure`` on the test to recreate the problem
as follows:
.. doctest::
>>> from hypothesis import settings, given, PrintSettings
>>> import hypothesis.strategies as st
>>> @given(st.floats())
... @settings(print_blob=PrintSettings.ALWAYS)
... def test(f):
... assert f == f
...
>>> try:
... test()
... except AssertionError:
... pass
Falsifying example: test(f=nan)
You can reproduce this example by temporarily adding @reproduce_failure(..., b'AAD/8AAAAAAAAQA=') as a decorator on your test case
Adding the suggested decorator to the test should reproduce the failure (as
long as everything else is the same - changing the versions of Python or
anything else involved might of course affect the behaviour of the test! Note
that changing the version of Hypothesis will result in a different error -
each ``@reproduce_failure`` invocation is specific to a Hypothesis version).
When to do this is controlled by the :attr:`~hypothesis.settings.print_blob`
setting, which may be one of the following values:
.. autoclass:: hypothesis.PrintSettings
========
Settings
========
Hypothesis tries to have good defaults for its behaviour, but sometimes that's
not enough and you need to tweak it.
The mechanism for doing this is the :class:`~hypothesis.settings` object.
You can set up a :func:`@given ` based test to use this with a settings
decorator; a decorated :func:`@given ` invocation looks as follows:
.. code:: python
from hypothesis import given, settings
@given(integers())
@settings(max_examples=500)
def test_this_thoroughly(x):
pass
This uses a :class:`~hypothesis.settings` object which causes the test to receive a much larger
set of examples than normal.
This may be applied either before or after the given and the results are
the same. The following is exactly equivalent:
.. code:: python
from hypothesis import given, settings
@settings(max_examples=500)
@given(integers())
def test_this_thoroughly(x):
pass
------------------
Available settings
------------------
.. module:: hypothesis
.. autoclass:: settings
:members: max_examples, max_iterations, min_satisfying_examples,
max_shrinks, timeout, strict, database_file, stateful_step_count,
database, perform_health_check, suppress_health_check, buffer_size,
phases, deadline, use_coverage, derandomize
.. _phases:
~~~~~~~~~~~~~~~~~~~~~
Controlling What Runs
~~~~~~~~~~~~~~~~~~~~~
Hypothesis divides tests into four logically distinct phases:
1. Running explicit examples :ref:`provided with the @example decorator `.
2. Rerunning a selection of previously failing examples to reproduce a previously seen error
3. Generating new examples.
4. Attempting to shrink an example found in phases 2 or 3 to a more manageable
one (explicit examples cannot be shrunk).
The phases setting provides you with fine grained control over which of these run,
with each phase corresponding to a value on the :class:`~hypothesis._settings.Phase` enum:
1. ``Phase.explicit`` controls whether explicit examples are run.
2. ``Phase.reuse`` controls whether previous examples will be reused.
3. ``Phase.generate`` controls whether new examples will be generated.
4. ``Phase.shrink`` controls whether examples will be shrunk.
The phases argument accepts a collection with any subset of these. e.g.
``settings(phases=[Phase.generate, Phase.shrink])`` will generate new examples
and shrink them, but will not run explicit examples or reuse previous failures,
while ``settings(phases=[Phase.explicit])`` will only run the explicit
examples.
.. _verbose-output:
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Seeing intermediate results
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To see what's going on while Hypothesis runs your tests, you can turn
up the verbosity setting. This works with both :func:`~hypothesis.core.find`
and :func:`@given `.
.. doctest::
>>> from hypothesis import find, settings, Verbosity
>>> from hypothesis.strategies import lists, integers
>>> find(lists(integers()), any, settings=settings(verbosity=Verbosity.verbose))
Trying example []
Found satisfying example [-106641080167757791735701986170810016341,
-129665482689688858331316879188241401294,
-17902751879921353864928802351902980929,
86547910278013668694989468221154862503,
99789676068743906931733548810810835946,
-56833685188912180644827795048092269385,
-12891126493032945632804716628985598019,
57797823215504994933565345605235342532,
98214819714866425575119206029702237685]
Shrunk example to [-106641080167757791735701986170810016341,
-129665482689688858331316879188241401294,
-17902751879921353864928802351902980929,
86547910278013668694989468221154862503,
99789676068743906931733548810810835946,
-56833685188912180644827795048092269385,
-12891126493032945632804716628985598019,
57797823215504994933565345605235342532,
98214819714866425575119206029702237685]
Shrunk example to [-106641080167757791735701986170810016341,
-129665482689688858331316879188241401294,
-17902751879921353864928802351902980929,
86547910278013668694989468221154862503]
Shrunk example to [-106641080167757791735701986170810016341,
164695784672172929935660921670478470673]
Shrunk example to [164695784672172929935660921670478470673]
Shrunk example to [164695784672172929935660921670478470673]
Shrunk example to [164695784672172929935660921670478470673]
Shrunk example to [1]
[1]
The four levels are quiet, normal, verbose and debug. normal is the default,
while in quiet Hypothesis will not print anything out, even the final
falsifying example. debug is basically verbose but a bit more so. You probably
don't want it.
You can also override the default by setting the environment variable
:envvar:`HYPOTHESIS_VERBOSITY_LEVEL` to the name of the level you want. So e.g.
setting ``HYPOTHESIS_VERBOSITY_LEVEL=verbose`` will run all your tests printing
intermediate results and errors.
If you are using ``pytest``, you may also need to
:doc:`disable output capturing for passing tests `.
-------------------------
Building settings objects
-------------------------
Settings can be created by calling :class:`~hypothesis.settings` with any of the available settings
values. Any absent ones will be set to defaults:
.. doctest::
>>> from hypothesis import settings
>>> settings().max_examples
100
>>> settings(max_examples=10).max_examples
10
You can also copy settings from other settings:
.. doctest::
>>> s = settings(max_examples=10)
>>> t = settings(s, max_iterations=20)
>>> s.max_examples
10
>>> t.max_iterations
20
>>> s.max_iterations
1000
>>> s.max_shrinks
500
>>> t.max_shrinks
500
----------------
Default settings
----------------
At any given point in your program there is a current default settings object,
available as ``settings.default``. As well as being a settings object in its own
right, all newly created settings objects which are not explicitly based off
another settings are based off the default, so will inherit any values that are
not explicitly set from it.
You can change the defaults by using profiles (see next section), but you can
also override them locally by using a settings object as a :ref:`context manager `
.. doctest::
>>> with settings(max_examples=150):
... print(settings.default.max_examples)
... print(settings().max_examples)
150
150
>>> settings().max_examples
100
Note that after the block exits the default is returned to normal.
You can use this by nesting test definitions inside the context:
.. code:: python
from hypothesis import given, settings
with settings(max_examples=500):
@given(integers())
def test_this_thoroughly(x):
pass
All settings objects created or tests defined inside the block will inherit their
defaults from the settings object used as the context. You can still override them
with custom defined settings of course.
Warning: If you define test functions which don't use :func:`@given `
inside a context block, these will not use the enclosing settings. This is because the context
manager only affects the definition, not the execution of the function.
.. _settings_profiles:
~~~~~~~~~~~~~~~~~
settings Profiles
~~~~~~~~~~~~~~~~~
Depending on your environment you may want different default settings.
For example: during development you may want to lower the number of examples
to speed up the tests. However, in a CI environment you may want more examples
so you are more likely to find bugs.
Hypothesis allows you to define different settings profiles. These profiles
can be loaded at any time.
Loading a profile changes the default settings but will not change the behavior
of tests that explicitly change the settings.
.. doctest::
>>> from hypothesis import settings
>>> settings.register_profile("ci", settings(max_examples=1000))
>>> settings().max_examples
100
>>> settings.load_profile("ci")
>>> settings().max_examples
1000
Instead of loading the profile and overriding the defaults you can retrieve profiles for
specific tests.
.. doctest::
>>> with settings.get_profile("ci"):
... print(settings().max_examples)
...
1000
Optionally, you may define the environment variable to load a profile for you.
This is the suggested pattern for running your tests on CI.
The code below should run in a ``conftest.py`` or any setup/initialization section of your test suite.
If this variable is not defined the Hypothesis defined defaults will be loaded.
.. doctest::
>>> import os
>>> from hypothesis import settings, Verbosity
>>> settings.register_profile("ci", settings(max_examples=1000))
>>> settings.register_profile("dev", settings(max_examples=10))
>>> settings.register_profile("debug", settings(max_examples=10, verbosity=Verbosity.verbose))
>>> settings.load_profile(os.getenv(u'HYPOTHESIS_PROFILE', 'default'))
If you are using the hypothesis pytest plugin and your profiles are registered
by your conftest you can load one with the command line option ``--hypothesis-profile``.
.. code:: bash
$ py.test tests --hypothesis-profile <profile-name>
~~~~~~~~
Timeouts
~~~~~~~~
The ``timeout`` functionality of Hypothesis is being deprecated, and will
eventually be removed. For the moment, the timeout setting can still be set
and the old default timeout of one minute remains.

If you want to future-proof your code you can get
the future behaviour by setting it to the value ``unlimited``, which you can
import from the main Hypothesis package:
.. code:: python
from hypothesis import given, settings, unlimited
from hypothesis import strategies as st
@settings(timeout=unlimited)
@given(st.integers())
def test_something_slow(i):
...
This will cause your code to run until it hits the normal Hypothesis example
limits, regardless of how long it takes. ``timeout=unlimited`` will remain a
valid setting after the timeout functionality has been deprecated (but will
then have its own deprecation cycle).
There is however now a timing related health check which is designed to catch
tests that run for ages by accident. If you really want your test to run
forever, the following code will enable that:
.. code:: python
from hypothesis import given, settings, unlimited, HealthCheck
from hypothesis import strategies as st
@settings(timeout=unlimited, suppress_health_check=[
HealthCheck.hung_test
])
@given(st.integers())
def test_something_slow(i):
...
================
Stateful testing
================
Hypothesis offers support for a stateful style of test, where instead of
trying to produce a single data value that causes a specific test to fail, it
tries to generate a program that errors. In many ways, this sort of testing is
to classical property based testing as property based testing is to normal
example based testing.
The idea doesn't originate with Hypothesis, though Hypothesis's implementation
and approach is mostly not based on an existing implementation and should be
considered some mix of novel and independent reinventions.
This style of testing is useful both for programs which involve some sort
of mutable state and for complex APIs where there's no state per se but the
actions you perform involve e.g. taking data from one function and feeding it
into another.
The idea is that you teach Hypothesis how to interact with your program: Be it
a server, a python API, whatever. All you need is to be able to answer the
question "Given what I've done so far, what could I do now?". After that,
Hypothesis takes over and tries to find sequences of actions which cause a
test failure.
Right now the stateful testing is a bit new and experimental and should be
considered as a semi-public API: It may break between minor versions but won't
break between patch releases, and there are still some rough edges in the API
that will need to be filed off.
This shouldn't discourage you from using it. Although it's not as robust as the
rest of Hypothesis, it's still pretty robust and more importantly is extremely
powerful. I found a number of really subtle bugs in Hypothesis by turning the
stateful testing onto a subset of the Hypothesis API, and you likely will find
the same.
Enough preamble, let's see how to use it.
The first thing to note is that there are two levels of API: The low level
but more flexible API and the higher level rule based API which is both
easier to use and also produces a much better display of data due to its
greater structure. We'll start with the more structured one.
-------------------------
Rule based state machines
-------------------------
Rule based state machines are the ones you're most likely to want to use.
They're significantly more user friendly and should be good enough for most
things you'd want to do.
A rule based state machine is a collection of functions (possibly with side
effects) which may depend on both values that Hypothesis can generate and
also on values that have resulted from previous function calls.
You define a rule based state machine as follows:
.. code:: python
import unittest
from collections import namedtuple
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule
Leaf = namedtuple('Leaf', ('label',))
Split = namedtuple('Split', ('left', 'right'))
class BalancedTrees(RuleBasedStateMachine):
trees = Bundle('BinaryTree')
@rule(target=trees, x=st.integers())
def leaf(self, x):
return Leaf(x)
@rule(target=trees, left=trees, right=trees)
def split(self, left, right):
return Split(left, right)
@rule(tree=trees)
def check_balanced(self, tree):
if isinstance(tree, Leaf):
return
else:
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1
self.check_balanced(tree.left)
self.check_balanced(tree.right)
def size(self, tree):
if isinstance(tree, Leaf):
return 1
else:
return 1 + self.size(tree.left) + self.size(tree.right)
In this we declare a Bundle, which is a named collection of previously generated
values. We define two rules which put data onto this bundle - one which just
generates leaves with integer labels, the other of which takes two previously
generated values and returns a new one.
We can then integrate this into our test suite by getting a unittest TestCase
from it:
.. code:: python
TestTrees = BalancedTrees.TestCase
if __name__ == '__main__':
unittest.main()
(these will also be picked up by py.test if you prefer to use that). Running
this we get:
.. code:: bash
Step #1: v1 = leaf(x=0)
Step #2: v2 = split(left=v1, right=v1)
Step #3: v3 = split(left=v2, right=v1)
Step #4: check_balanced(tree=v3)
F
======================================================================
FAIL: runTest (hypothesis.stateful.BalancedTrees.TestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
(...)
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1
AssertionError
Note how it's printed out a very short program that will demonstrate the
problem.
...the problem of course being that we've not actually written any code to
balance this tree at *all*, so of course it's not balanced.
So let's balance some trees.
.. code:: python

    from collections import namedtuple

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule

    Leaf = namedtuple('Leaf', ('label',))
    Split = namedtuple('Split', ('left', 'right'))


    class BalancedTrees(RuleBasedStateMachine):
        trees = Bundle('BinaryTree')
        balanced_trees = Bundle('balanced BinaryTree')

        @rule(target=trees, x=st.integers())
        def leaf(self, x):
            return Leaf(x)

        @rule(target=trees, left=trees, right=trees)
        def split(self, left, right):
            return Split(left, right)

        @rule(tree=balanced_trees)
        def check_balanced(self, tree):
            if isinstance(tree, Leaf):
                return
            else:
                assert abs(self.size(tree.left) - self.size(tree.right)) <= 1, \
                    repr(tree)
                self.check_balanced(tree.left)
                self.check_balanced(tree.right)

        @rule(target=balanced_trees, tree=trees)
        def balance_tree(self, tree):
            return self.split_leaves(self.flatten(tree))

        def size(self, tree):
            if isinstance(tree, Leaf):
                return 1
            else:
                return self.size(tree.left) + self.size(tree.right)

        def flatten(self, tree):
            if isinstance(tree, Leaf):
                return (tree.label,)
            else:
                return self.flatten(tree.left) + self.flatten(tree.right)

        def split_leaves(self, leaves):
            assert leaves
            if len(leaves) == 1:
                return Leaf(leaves[0])
            else:
                mid = len(leaves) // 2
                return Split(
                    self.split_leaves(leaves[:mid]),
                    self.split_leaves(leaves[mid:]),
                )
We've now written a really noddy tree balancing implementation. This takes
trees and puts them into a new bundle of data, and we only assert that things
in the balanced_trees bundle are actually balanced.
If you run this it will sit there silently for a while (you can turn on
:ref:`verbose output ` to get slightly more information about
what's happening. debug will give you all the intermediate programs being run)
and then run, telling you your test has passed! Our balancing algorithm worked.
Now let's break it to make sure the test is still valid.
If we change the split to ``mid = max(len(leaves) // 3, 1)``, the trees should
no longer balance, which gives us the following counter-example:
.. code:: python

    v1 = leaf(x=0)
    v2 = split(left=v1, right=v1)
    v3 = balance_tree(tree=v1)
    v4 = split(left=v2, right=v2)
    v5 = balance_tree(tree=v4)
    check_balanced(tree=v5)
Note that the example could be shrunk further by deleting v3. Due to some
technical limitations, Hypothesis was unable to find that particular shrink.
In general it's rare for examples produced to be long, but they won't always be
minimal.
You can control the detailed behaviour with a settings object on the TestCase
(this is a normal hypothesis settings object using the defaults at the time
the TestCase class was first referenced). For example if you wanted to run
fewer examples with larger programs you could change the settings to:
.. code:: python

    TestTrees.settings = settings(max_examples=100, stateful_step_count=100)
This doubles the number of steps each program runs and halves the number of
runs, relative to the defaults. ``settings.timeout`` will also be respected as usual.
Preconditions
-------------
While it's possible to use :func:`~hypothesis.assume` in RuleBasedStateMachine rules, if you
use it in only a few rules you can quickly run into a situation where few or
none of your rules pass their assumptions. Thus, Hypothesis provides a
:func:`~hypothesis.stateful.precondition` decorator to avoid this problem. The :func:`~hypothesis.stateful.precondition`
decorator is used on ``rule``-decorated functions, and must be given a function
that returns True or False based on the RuleBasedStateMachine instance.
.. autofunction:: hypothesis.stateful.precondition
.. code:: python

    from hypothesis.stateful import RuleBasedStateMachine, rule, precondition

    class NumberModifier(RuleBasedStateMachine):

        num = 0

        @rule()
        def add_one(self):
            self.num += 1

        @precondition(lambda self: self.num != 0)
        @rule()
        def divide_with_one(self):
            self.num = 1 / self.num
By using :func:`~hypothesis.stateful.precondition` here instead of :func:`~hypothesis.assume`, Hypothesis can filter the
inapplicable rules before running them. This makes it much more likely that a
useful sequence of steps will be generated.
Note that currently preconditions can't access bundles; if you need to use
preconditions, you should store relevant data on the instance instead.
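As a sketch of that workaround (the machine, its rules, and the ``saved_values``
attribute below are all invented for illustration, not part of the Hypothesis
API), a rule that fills a bundle can also record what it created on the
instance, so that a precondition can inspect it:

.. code:: python

    from hypothesis import strategies as st
    from hypothesis.stateful import Bundle, RuleBasedStateMachine, precondition, rule

    class ValueMachine(RuleBasedStateMachine):
        values = Bundle('values')

        def __init__(self):
            super(ValueMachine, self).__init__()
            # Mirror of the bundle's contents, kept on the instance purely so
            # that preconditions (which can't see bundles) can inspect it.
            self.saved_values = []

        @rule(target=values, x=st.integers())
        def add_value(self, x):
            self.saved_values.append(x)
            return x

        @precondition(lambda self: len(self.saved_values) >= 2)
        @rule()
        def check_at_least_two(self):
            assert len(self.saved_values) >= 2

The precondition consults ``self.saved_values`` rather than the ``values``
bundle, which it has no way to access.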
Invariant
---------
Often there are invariants that you want to ensure are met after every step in
a process. It would be possible to add these as rules that are run, but they
would be run zero or multiple times between other rules. Hypothesis provides a
decorator that marks a function to be run after every step.
.. autofunction:: hypothesis.stateful.invariant
.. code:: python

    from hypothesis.stateful import RuleBasedStateMachine, rule, invariant

    class NumberModifier(RuleBasedStateMachine):

        num = 0

        @rule()
        def add_two(self):
            self.num += 2
            if self.num > 50:
                self.num += 1

        @invariant()
        def divide_with_one(self):
            assert self.num % 2 == 0

    NumberTest = NumberModifier.TestCase
Invariants can also have :func:`~hypothesis.stateful.precondition`\ s applied to them, in which case
they will only be run if the precondition function returns true.
Note that currently invariants can't access bundles; if you need to use
invariants, you should store relevant data on the instance instead.
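As a sketch of combining the two (the ``Counter`` machine and its method names
here are invented for illustration), an invariant can be guarded by a
precondition so it is only checked once some rule has run:

.. code:: python

    from hypothesis.stateful import RuleBasedStateMachine, invariant, precondition, rule

    class Counter(RuleBasedStateMachine):
        def __init__(self):
            super(Counter, self).__init__()
            self.started = False
            self.count = 0

        @rule()
        def start(self):
            self.started = True

        @rule()
        def increment(self):
            self.count += 1

        # Only checked after start() has run at least once.
        @precondition(lambda self: self.started)
        @invariant()
        def count_is_nonnegative(self):
            assert self.count >= 0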
----------------------
Generic state machines
----------------------
The class GenericStateMachine is the underlying machinery of stateful testing
in Hypothesis. In execution it looks much like the RuleBasedStateMachine but
it allows the set of steps available to depend in essentially arbitrary
ways on what has happened so far. For example, if you wanted to
use Hypothesis to test a game, it could choose each step in the machine based
on the game to date and the set of actions the game program is telling it it
has available.
It essentially executes the following loop:
.. code:: python

    machine = MyStateMachine()
    try:
        machine.check_invariants()
        for _ in range(n_steps):
            step = machine.steps().example()
            machine.execute_step(step)
            machine.check_invariants()
    finally:
        machine.teardown()
Where ``steps`` and ``execute_step`` are methods you must implement, and
``teardown`` and ``check_invariants`` are methods you can implement if required.
``steps`` returns a strategy, which is allowed to depend arbitrarily on the
current state of the test execution. *Ideally* a good steps implementation
should be robust against minor changes in the state. Steps that change a lot
between slightly different executions will tend to produce worse quality
examples because they're hard to simplify.
The steps method *may* depend on external state, but it's not advisable and
may produce flaky tests.
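To illustrate the robustness point, here is a sketch (not from the Hypothesis
docs; the ``data`` list and strategy names are invented) of two ways a ``steps``
implementation might refer to existing state. The value-based form tends to
shrink better, because deleting earlier steps doesn't invalidate later ones:

.. code:: python

    from hypothesis.strategies import just, sampled_from, tuples

    data = ['a', 'b', 'c']

    # Fragile: steps name positions, which shift whenever earlier steps that
    # modified the data are removed during shrinking.
    fragile_steps = tuples(just('delete'), sampled_from(range(len(data))))

    # More robust: steps name values, which stay meaningful even when the
    # surrounding program changes slightly.
    robust_steps = tuples(just('delete'), sampled_from(data))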
If any of ``execute_step``, ``check_invariants`` or ``teardown`` produces an
exception, Hypothesis will try to find a minimal sequence of steps such
that the following throws an exception:
.. code:: python

    machine = MyStateMachine()
    try:
        machine.check_invariants()
        for step in steps:
            machine.execute_step(step)
            machine.check_invariants()
    finally:
        machine.teardown()
and such that at every point, the step executed is one that could plausibly
have come from a call to ``steps`` in the current state.
Here's an example of using stateful testing to test a broken implementation
of a set in terms of a list (note that you could easily do something close to
this example with rule-based testing instead, and probably should; this
is mostly for illustration purposes):
.. code:: python

    import unittest

    from hypothesis.stateful import GenericStateMachine
    from hypothesis.strategies import tuples, sampled_from, just, integers


    class BrokenSet(GenericStateMachine):
        def __init__(self):
            self.data = []

        def steps(self):
            add_strategy = tuples(just("add"), integers())
            if not self.data:
                return add_strategy
            else:
                return (
                    add_strategy |
                    tuples(just("delete"), sampled_from(self.data)))

        def execute_step(self, step):
            action, value = step
            if action == 'delete':
                try:
                    self.data.remove(value)
                except ValueError:
                    pass
                assert value not in self.data
            else:
                assert action == 'add'
                self.data.append(value)
                assert value in self.data


    TestSet = BrokenSet.TestCase

    if __name__ == '__main__':
        unittest.main()
Note that the strategy changes each time based on the data that's currently
in the state machine.
Running this gives us the following:
.. code:: bash

    Step #1: ('add', 0)
    Step #2: ('add', 0)
    Step #3: ('delete', 0)
    F
    ======================================================================
    FAIL: runTest (hypothesis.stateful.BrokenSet.TestCase)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
    (...)
        assert value not in self.data
    AssertionError
So it adds two elements, then deletes one, and raises an assertion error when
it finds out that this only deleted one of the copies of the element.
-------------------------
More fine grained control
-------------------------
If you want to bypass the TestCase infrastructure you can invoke these
manually. The stateful module exposes the function ``run_state_machine_as_test``,
which takes an arbitrary function returning a ``GenericStateMachine`` and an
optional ``settings`` parameter, and does the same as the class-based ``runTest``
provided.
In particular this may be useful if you wish to pass parameters to a custom
``__init__`` in your subclass.
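For instance, a sketch of that pattern (the ``BoundedCounter`` machine and its
``size_limit`` parameter are invented for illustration; the settings values are
arbitrary):

.. code:: python

    from hypothesis import settings
    from hypothesis.stateful import RuleBasedStateMachine, rule, run_state_machine_as_test

    class BoundedCounter(RuleBasedStateMachine):
        def __init__(self, size_limit):
            super(BoundedCounter, self).__init__()
            self.size_limit = size_limit
            self.count = 0

        @rule()
        def bump(self):
            self.count = min(self.count + 1, self.size_limit)
            assert self.count <= self.size_limit

    # The factory function lets us pass arguments that the TestCase route cannot.
    run_state_machine_as_test(
        lambda: BoundedCounter(size_limit=10),
        settings=settings(max_examples=5, stateful_step_count=10),
    )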
hypothesis-python-3.44.1/docs/strategies.rst

=============================
Projects extending Hypothesis
=============================
The following is a non-exhaustive list of open source projects that make
Hypothesis strategies available. If you're aware of any others please add them
to the list! The only inclusion criterion right now is that if it's a Python
library then it should be available on PyPI.
* `hs-dbus-signature `_ - strategy to generate arbitrary D-Bus signatures
* `hypothesis-regex `_ -
*merged into Hypothesis as the :func:`~hypothesis.strategies.from_regex` strategy.*
* `lollipop-hypothesis `_ -
strategy to generate data based on
`Lollipop `_ schema definitions.
* `hypothesis-fspaths `_ -
strategy to generate filesystem paths.
* `hypothesis-protobuf `_ -
strategy to generate data based on `Protocol Buffer `_ schema definitions.
If you're thinking about writing an extension, consider naming it
``hypothesis-{something}`` - a standard prefix makes the community more
visible and searching for extensions easier.
hypothesis-python-3.44.1/docs/support.rst

================
Help and Support
================
For questions you are happy to ask in public, the :doc:`Hypothesis community ` is a
friendly place where I or others will be more than happy to help you out. You're also welcome to
ask questions on Stack Overflow. If you do, please tag them with 'python-hypothesis' so someone
sees them.
For bugs and enhancements, please file an issue on the :issue:`GitHub issue tracker <>`.
Note that as per the :doc:`development policy `, enhancements will probably not get
implemented unless you're willing to pay for development or implement them yourself (with assistance from me). Bugs
will tend to get fixed reasonably promptly, though it is of course on a best effort basis.
To see the versions of Python, optional dependencies, test runners, and operating systems Hypothesis
supports (meaning incompatibility is treated as a bug), see :doc:`supported`.
If you need to ask questions privately or want more of a guarantee of bugs being fixed promptly, please contact me on
hypothesis-support@drmaciver.com to talk about availability of support contracts.
hypothesis-python-3.44.1/docs/supported.rst

=============
Compatibility
=============
Hypothesis does its level best to be compatible with everything you could
possibly need it to be compatible with. Generally you should just try it and
expect it to work. If it doesn't, you can be surprised and check this document
for the details.
---------------
Python versions
---------------
Hypothesis is supported and tested on CPython 2.7 and CPython 3.4+.
Hypothesis also supports PyPy2, and will support PyPy3 when there is a stable
release supporting Python 3.4+. Hypothesis does not currently work on Jython,
though could feasibly be made to do so. IronPython might work but hasn't been
tested. 32-bit and narrow builds should work, though this is currently only
tested on Windows.
In general Hypothesis does not officially support anything except the latest
patch release of any version of Python it supports. Earlier releases should work
and bugs in them will get fixed if reported, but they're not tested in CI and
no guarantees are made.
-----------------
Operating systems
-----------------
In theory Hypothesis should work anywhere that Python does. In practice it is
only known to work and regularly tested on OS X, Windows and Linux, and you may
experience issues running it elsewhere.
If you're using something else and it doesn't work, do get in touch and I'll try
to help, but unless you can come up with a way for me to run a CI server on that
operating system it probably won't stay fixed due to the inevitable march of time.
------------------
Testing frameworks
------------------
In general Hypothesis goes to quite a lot of effort to generate things that
look like normal Python test functions that behave as closely to the originals
as possible, so it should work sensibly out of the box with every test framework.
If your testing relies on doing something other than calling a function and seeing
if it raises an exception then it probably *won't* work out of the box. In particular
things like tests which return generators and expect you to do something with them
(e.g. nose's yield based tests) will not work. Use a decorator or similar to wrap the
test to take this form.
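For example, a nose-style yield test can be converted into a plain function
that Hypothesis understands. This sketch (with an invented ``check_roundtrip``
helper) shows the shape of the wrapper:

.. code:: python

    from hypothesis import given, strategies as st

    def check_roundtrip(value):
        # The per-case check that a yield-based test would have yielded.
        assert int(str(value)) == value

    # Instead of `def test_all(): for v in values: yield check_roundtrip, v`,
    # wrap the check in a single function and let Hypothesis supply the values.
    @given(st.integers())
    def test_roundtrip(value):
        check_roundtrip(value)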
In terms of what's actually *known* to work:
* Hypothesis integrates as smoothly with py.test and unittest as I can make it,
and this is verified as part of the CI.
* py.test fixtures work correctly with Hypothesis based functions, but note that
function based fixtures will only run once for the whole function, not once per
example.
* Nose works fine with Hypothesis, and this is tested as part of the CI. yield-based
tests simply won't work.
* Integration with Django's testing requires use of the :ref:`hypothesis-django` package.
The issue is that in Django's tests' normal mode of execution it will reset the
database once per test rather than once per example, which is not what you want.
Coverage works out of the box with Hypothesis (and Hypothesis has 100% branch
coverage in its own tests). However you should probably not use Coverage, Hypothesis
and PyPy together. Because Hypothesis does quite a lot of CPU heavy work compared
to normal tests, it really exacerbates the performance problems the two normally
have working together.
-----------------
Optional Packages
-----------------
The supported versions of optional packages, for strategies in ``hypothesis.extra``,
are listed in the documentation for that extra. Our general goal is to support
all versions that are supported upstream.
------------------------
Regularly verifying this
------------------------
Everything mentioned above as explicitly supported is checked on every commit
with `Travis `_ and
`Appveyor `_
and goes green before a release happens, so when I say they're supported I
really mean it.
-------------------
Hypothesis versions
-------------------
Backwards compatibility is better than backporting fixes, so we use
:ref:`semantic versioning ` and only support the most recent
version of Hypothesis. See :doc:`support` for more information.
hypothesis-python-3.44.1/docs/usage.rst

=====================================
Open Source Projects using Hypothesis
=====================================
The following is a non-exhaustive list of open source projects I know are using Hypothesis. If you're aware of
any others please add them to the list! The only inclusion criterion right now is that if it's a Python library
then it should be available on PyPI.
* `aur `_
* `argon2_cffi `_
* `attrs `_
* `axelrod `_
* `bidict `_
* `binaryornot `_
* `brotlipy `_
* :pypi:`chardet`
* `cmph-cffi `_
* `cryptography `_
* `dbus-signature-pyparsing `_
* `fastnumbers `_
* `flocker `_
* `flownetpy `_
* `funsize `_
* `fusion-index `_
* `hyper-h2 `_
* `into-dbus-python `_
* `justbases `_
* `justbytes `_
* `loris `_
* `mariadb-dyncol `_
* `mercurial `_
* `natsort `_
* `pretext `_
* `priority `_
* `PyCEbox `_
* `PyPy `_
* `pyrsistent `_
* `python-humble-utils `_
* `pyudev `_
* `qutebrowser `_
* `RubyMarshal `_
* `Segpy `_
* `simoa `_
* `srt `_
* `tchannel `_
* `vdirsyncer `_
* `wcag-contrast-ratio `_
* `yacluster `_
* `yturl `_
hypothesis-python-3.44.1/examples/README.rst

============================
Examples of Hypothesis usage
============================
This is a directory for examples of using Hypothesis that showcase its
features or demonstrate a useful way of testing something.
Right now it's a bit small and fairly algorithmically focused. Pull requests to
add more examples would be *greatly* appreciated, especially ones using e.g.
the Django integration or testing something "Businessy".
hypothesis-python-3.44.1/examples/test_binary_search.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This file demonstrates testing a binary search.
It's a useful example because the result of the binary search is so clearly
determined by the invariants it must satisfy, so we can simply test for those
invariants.
It also demonstrates the useful testing technique of testing how the answer
should change (or not) in response to movements in the underlying data.
"""
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given
def binary_search(ls, v):
"""Take a list ls and a value v such that ls is sorted and v is comparable
with the elements of ls.
Return an index i such that 0 <= i <= len(v) with the properties:
1. ls.insert(i, v) is sorted
2. ls.insert(j, v) is not sorted for j < i
"""
# Without this check we will get an index error on the next line when the
# list is empty.
if not ls:
return 0
# Without this check we will miss the case where the insertion point should
# be zero: The invariant we maintain in the next section is that lo is
# always strictly lower than the insertion point.
if v <= ls[0]:
return 0
# Invariant: There is no insertion point i with i <= lo
lo = 0
# Invariant: There is an insertion point i with i <= hi
hi = len(ls)
while lo + 1 < hi:
mid = (lo + hi) // 2
if v > ls[mid]:
# Inserting v anywhere below mid would result in an unsorted list
# because it's > the value at mid. Therefore mid is a valid new lo
lo = mid
# Uncommenting the following lines will cause this to return a valid
# insertion point which is not always minimal.
# elif v == ls[mid]:
# return mid
else:
# Either v == ls[mid] in which case mid is a valid insertion point
# or v < ls[mid], in which case all valid insertion points must be
# < hi. Either way, mid is a valid new hi.
hi = mid
assert lo + 1 == hi
# We now know that there is a valid insertion point <= hi and there is no
# valid insertion point < hi because hi - 1 is lo. Therefore hi is the
# answer we were seeking
return hi
def is_sorted(ls):
"""Is this list sorted?"""
for i in range(len(ls) - 1):
if ls[i] > ls[i + 1]:
return False
return True
Values = st.integers()
# We generate arbitrary lists and turn this into generating sorting lists
# by just sorting them.
SortedLists = st.lists(Values).map(sorted)
# We could also do it this way, but that would be a bad idea:
# SortedLists = st.lists(Values).filter(is_sorted)
# The problem is that Hypothesis will only generate long sorted lists with very
# low probability, so we are much better off post-processing values into the
# form we want than filtering them out.
@given(ls=SortedLists, v=Values)
def test_insert_is_sorted(ls, v):
"""We test the first invariant: binary_search should return an index such
that inserting the value provided at that index would result in a sorted
set."""
ls.insert(binary_search(ls, v), v)
assert is_sorted(ls)
@given(ls=SortedLists, v=Values)
def test_is_minimal(ls, v):
"""We test the second invariant: binary_search should return an index such
that no smaller index is a valid insertion point for v."""
for i in range(binary_search(ls, v)):
ls2 = list(ls)
ls2.insert(i, v)
assert not is_sorted(ls2)
@given(ls=SortedLists, v=Values)
def test_inserts_into_same_place_twice(ls, v):
"""In this we test a *consequence* of the second invariant: When we insert
a value into a list twice, the insertion point should be the same both
times. This is because we know that v is > the previous element and == the
next element.
In theory if the former passes, this should always pass. In practice,
failures are detected by this test with much higher probability because it
deliberately puts the data into a shape that is likely to trigger a
failure.
This is an instance of a good general category of test: Testing how the
function moves in responses to changes in the underlying data.
"""
i = binary_search(ls, v)
ls.insert(i, v)
assert binary_search(ls, v) == i
hypothesis-python-3.44.1/examples/test_rle.py

# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This example demonstrates testing a run length encoding scheme. That is, we
take a sequence and represent it by a shorter sequence where each 'run' of
consecutive equal elements is represented as a single element plus a count. So
e.g.
[1, 1, 1, 1, 2, 1] is represented as [[1, 4], [2, 1], [1, 1]]
This demonstrates the useful decode(encode(x)) == x invariant that is often
a fruitful source of testing with Hypothesis.
It also has an example of testing invariants in response to changes in the
underlying data.
"""
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given, assume
def run_length_encode(seq):
"""Encode a sequence as a new run-length encoded sequence."""
if not seq:
return []
# By starting off the count at zero we simplify the iteration logic
# slightly.
result = [[seq[0], 0]]
for s in seq:
if (
# If you uncomment this line this branch will be skipped and we'll
# always append a new run of length 1. Note which tests fail.
# False and
s == result[-1][0]
# Try uncommenting this line and see what problems occur:
# and result[-1][-1] < 2
):
result[-1][1] += 1
else:
result.append([s, 1])
return result
def run_length_decode(seq):
"""Take a previously encoded sequence and reconstruct the original from
it."""
result = []
for s, i in seq:
for _ in range(i):
result.append(s)
return result
# We use lists of a type that should have a relatively high duplication rate,
# otherwise we'd almost never get any runs.
Lists = st.lists(st.integers(0, 10))
@given(Lists)
def test_decodes_to_starting_sequence(ls):
"""If we encode a sequence and then decode the result, we should get the
original sequence back.
Otherwise we've done something very wrong.
"""
assert run_length_decode(run_length_encode(ls)) == ls
@given(Lists, st.integers(0, 100))
def test_duplicating_an_element_does_not_increase_length(ls, i):
"""The previous test could be passed by simply returning the input sequence
so we need something that tests the compression property of our encoding.
In this test we deliberately introduce or extend a run and assert
that this does not increase the length of our encoding, because they
should be part of the same run in the final result.
"""
# We use assume to get a valid index into the list. We could also have used
# e.g. flatmap, but this is relatively straightforward and will tend to
# perform better.
assume(i < len(ls))
ls2 = list(ls)
# duplicating the value at i right next to it guarantees they are part of
# the same run in the resulting compression.
ls2.insert(i, ls2[i])
assert len(run_length_encode(ls2)) == len(run_length_encode(ls))
hypothesis-python-3.44.1/guides/README.rst

=================================
Guides for Hypothesis Development
=================================
This is a general collection of useful documentation for people
working on Hypothesis.
It is separate from the main documentation because it is not much
use if you are merely *using* Hypothesis. It's purely for working
on it.
hypothesis-python-3.44.1/guides/api-style.rst

===============
House API Style
===============
Here are some guidelines for how to write APIs so that they "feel" like
a Hypothesis API. This is particularly focused on writing new strategies, as
that's the major place where we add APIs, but also applies more generally.
Note that it is not a guide to *code* style, only API design.
The Hypothesis style evolves over time, and earlier strategies in particular
may not be consistent with this style, and we've tried some experiments
that didn't work out, so this style guide is more normative than descriptive
and existing APIs may not match it. Where relevant, backwards compatibility is
much more important than conformance to the style.
~~~~~~~~~~~~~~~~~~
General Guidelines
~~~~~~~~~~~~~~~~~~
* When writing extras modules, consistency with Hypothesis trumps consistency
with the library you're integrating with.
* *Absolutely no subclassing as part of the public API*
* We should not strive too hard to be pythonic, but if an API seems weird to a
normal Python user we should see if we can come up with an API we like as
much but is less weird.
* Code which adds a dependency on a third party package should be put in a
hypothesis.extra module.
* Complexity should not be pushed onto the user. An easy to use API is more
important than a simple implementation.
~~~~~~~~~~~~~~~~~~~~~~~~~
Guidelines for strategies
~~~~~~~~~~~~~~~~~~~~~~~~~
* A strategy function should be somewhere between a recipe for how to build a
value and a range of valid values.
* It should not include distribution hints. The arguments should only specify
how to produce a valid value, not statistical properties of values.
* Strategies should try to paper over non-uniformity in the underlying types
as much as possible (e.g. ``hypothesis.extra.numpy`` has a number of
workarounds for numpy's odd behaviour around object arrays).
~~~~~~~~~~~~~~~~~
Argument handling
~~~~~~~~~~~~~~~~~
We have a reasonably distinctive style when it comes to handling arguments:
* Arguments must be validated to the greatest extent possible. Hypothesis
should reject bad arguments with an InvalidArgument error, not fail with an
internal exception.
* We make extensive use of default arguments. If an argument could reasonably
have a default, it should.
* Exception to the above: Strategies for collection types should *not* have a
default argument for element strategies.
* Interacting arguments (e.g. arguments that must be in a particular order, or
where at most one is valid, or where one argument restricts the valid range
of the other) are fine, but when this happens the behaviour of defaults
should automatically be adjusted. e.g. if the normal default of an argument
would become invalid, the function should still do the right thing if that
default is used.
* Where the actual default used depends on other arguments, the default parameter
should be None.
* It's worth thinking about the order of arguments: the first one or two
arguments are likely to be passed positionally, so try to put values there
where this is useful and not too confusing.
* When adding arguments to strategies, think carefully about whether the user
is likely to want that value to vary often. If so, make it a strategy instead
of a value. In particular if it's likely to be common that they would want to
write ``some_strategy.flatmap(lambda x: my_new_strategy(argument=x))`` then
it should be a strategy.
* Arguments should not be "a value or a strategy for generating that value".
If you find yourself inclined to write something like that, instead make it
take a strategy. If a user wants to pass a value they can wrap it in a call
to ``just``.
~~~~~~~~~~~~~~
Function Names
~~~~~~~~~~~~~~
We don't have any real consistency here. The rough approach we follow is:
* Names are `snake_case` as is standard in Python.
* Strategies for a particular type are typically named as a plural name for
that type. Where that type has some truncated form (e.g. int, str) we use a
longer form name.
* Other strategies have no particular common naming convention.
~~~~~~~~~~~~~~
Argument Names
~~~~~~~~~~~~~~
We should try to use the same argument names and orders across different
strategies wherever possible. In particular:
* For collection types, the element strategy (or strategies) should always be
the first arguments. Where there is only one element strategy it should be
called ``elements`` (but e.g. ``dictionaries`` has element strategies named
``keys`` and ``values`` and that's fine).
* For ordered types, the first two arguments should be a lower and an upper
bound. They should be called ``min_value`` and ``max_value``.
* Collection types should have a ``min_size`` and a ``max_size`` parameter that
controls the range of their size. ``min_size`` should default to zero and
``max_size`` to ``None`` (even if internally it is bounded).
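As a sketch of a strategy signature following these conventions
(``sorted_lists`` is a made-up example, not a real Hypothesis strategy):

.. code:: python

    import hypothesis.strategies as st
    from hypothesis.errors import InvalidArgument

    def sorted_lists(elements, min_size=0, max_size=None):
        # Hypothetical strategy illustrating the conventions above: the
        # element strategy comes first, min_size defaults to zero, max_size
        # defaults to None, and bad arguments are rejected with
        # InvalidArgument rather than an internal exception.
        if min_size < 0:
            raise InvalidArgument('min_size=%r must be non-negative' % (min_size,))
        if max_size is not None and max_size < min_size:
            raise InvalidArgument(
                'max_size=%r is smaller than min_size=%r' % (max_size, min_size))
        return st.lists(elements, min_size=min_size, max_size=max_size).map(sorted)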
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A catalogue of current violations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following are places where we currently deviate from this style. Some of
these should be considered targets for deprecation and/or improvement.
* most of the collections in ``hypothesis.strategies`` have an ``average_size``
distribution hint.
* many of the collections in ``hypothesis.strategies`` allow a default of
``None`` for their elements strategy (meaning only generate empty
collections).
* ``hypothesis.extra.numpy`` has some arguments which can be either
strategies or values.
* ``hypothesis.extra.numpy`` assumes arrays are fixed size and doesn't have
``min_size`` and ``max_size`` arguments (but this is probably OK because of
more complicated shapes of array).
* ``hypothesis.stateful`` is a great big subclassing based train wreck.
hypothesis-python-3.44.1/guides/documentation.rst

=====================================
The Hypothesis Documentation Handbook
=====================================
Good documentation can make the difference between good code and useful code -
and Hypothesis is written to be used, as widely as possible.
This is a working document-in-progress with some tips for how we try to write
our docs, with a little of the what and a bigger chunk of the how.
If you have ideas about how to improve these suggestions, meta issues or pull
requests are just as welcome as for docs or code :D
----------------------------
What docs should be written?
----------------------------
All public APIs should be comprehensively described. If the docs are
confusing to new users, incorrect or out of date, or simply incomplete - we
consider all of those to be bugs; if you see them please raise an issue and
perhaps submit a pull request.
That's not much advice, but it's what we have so far.
------------
Using Sphinx
------------
We use `the Sphinx documentation system <https://www.sphinx-doc.org/>`_ to run
doctests and convert the ``.rst`` files into HTML with formatting and
cross-references. Without repeating the docs for Sphinx, here are some tips:
- When documenting a Python object (function, class, module, etc.), you can
use autodoc to insert and interpret the docstring.
- When referencing a function, you can insert a reference to a function as
(eg) ``:func:`hypothesis.given`\ ``, which will appear as
``hypothesis.given()`` with a hyperlink to the appropriate docs. You can
show only the last part (unqualified name) by adding a tilde at the start,
like ``:func:`~hypothesis.given`\ `` -> ``given()``. Finally, you can give
it alternative link text in the usual way:
``:func:`other text <hypothesis.given>`\ `` -> ``other text``.
- For the formatting and also hyperlinks, all cross-references should use the
Sphinx cross-referencing syntax rather than plain text.
- Wherever possible, example code should be written as a doctest. This
ensures that if the example raises deprecation warnings, or simply breaks,
it will be flagged in CI and can be fixed immediately.
-----------------
Changelog Entries
-----------------
`Hypothesis does continuous deployment `_,
where every pull request that touches ``./src`` results in a new release.
That means every contributor gets to write their changelog!
A changelog entry should be written in a new ``RELEASE.rst`` file in
the repository root, and:
- concisely describe what changed and why
- use Sphinx cross-references to any functions or classes mentioned
- if closing an issue, mention it with the issue role to generate a link
- finish with a note of thanks from the maintainers:
"Thanks to <your name> for this bug fix / feature / contribution"
(depending on which it is). If this is your first contribution,
don't forget to add yourself to contributors.rst!
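For illustration, a complete ``RELEASE.rst`` might look something like the
following. The ``RELEASE_TYPE:`` marker tells the release machinery which
version component to bump; the issue number and contributor name here are
invented:

```rst
RELEASE_TYPE: patch

This release fixes a bug where :func:`~hypothesis.given` could report a
misleading error message (:issue:`1234`).

Thanks to Jane Doe for this bug fix!
```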
-----------------
Updating Doctests
-----------------
We use the Sphinx ``doctest`` builder to ensure that all example code snippets
are kept up to date. To make this less tedious, you can run
``scripts/fix_doctests.py`` (under Python 3) to... fix failing doctests.
The script is pretty good, but doesn't handle ``+ELLIPSIS`` or
``+NORMALIZE_WHITESPACE`` options. Check that output is stable (running
it again should give "All doctests are OK"), then review the diff before
committing.
hypothesis-python-3.44.1/guides/review.rst

===================================
The Hypothesis Code Review Handbook
===================================
Hypothesis has a process of reviewing every change, internal or external.
This is a document outlining that process. It's partly descriptive, partly
prescriptive, and entirely prone to change in response to circumstance
and need. We're still figuring this thing out!
----------------
How Review Works
----------------
All changes to Hypothesis must be signed off by at least one person with
write access to the repo other than the author of the change. Once the
build is green and a reviewer has approved the change, anyone on the
maintainer team may merge the request.
More than one maintainer *may* review a change if they wish to, but it's
not required. Any maintainer may block a pull request by requesting changes.
Consensus on a review is best but not required. If some reviewers have
approved a pull request and some have requested changes, ideally you
would try to address all of the changes, but it is OK to dismiss dissenting
reviews if you feel it appropriate.
We've not tested the case of differing opinions much in practice yet, so
we may grow firmer guidelines on what to do there over time.
------------
Review Goals
------------
At a high level, the two things we're looking for in review are answers
to the following questions:
1. Is this change going to make users' lives worse?
2. Is this change going to make the maintainers' lives worse?
Code review is a collaborative process between the author and the
reviewer to try to ensure that the answer to both of those questions
is no.
Ideally of course the change should also make one or both of the users'
and our lives *better*, but it's OK for changes to be mostly neutral.
The author should be presumed to have a good reason for submitting the
change in the first place, so neutral is good enough!
--------------
Social Factors
--------------
* Always thank external contributors. Thank maintainers too, ideally!
* Remember that the `Code of Conduct `_
applies to pull requests and issues too. Feel free to throw your weight
around to enforce this if necessary.
* Anyone, maintainer or not, is welcome to do a code review. Only official
maintainers have the ability to actually approve and merge a pull
request, but outside review is also welcome.
------------
Requirements
------------
The rest of this document outlines specific things reviewers should
focus on in aid of this, broken up by sections according to their area
of applicability.
All of these conditions must be satisfied for merge. Where the reviewer
thinks this conflicts with the above higher level goals, they may make
an exception if both the author and another maintainer agree.
~~~~~~~~~~~~~~~~~~~~
General Requirements
~~~~~~~~~~~~~~~~~~~~
The following are required for almost every change:
1. Changes must be of reasonable size. If a change could logically
be broken up into several smaller changes that could be reviewed
separately on their own merits, it should be.
2. The motivation for each change should be clearly explained (this
doesn't have to be an essay, especially for small changes, but
at least a sentence of explanation is usually required).
3. The likely consequences of a change should be outlined (again,
this doesn't have to be an essay, and the change may be sufficiently
self-explanatory that the motivation section covers it).
~~~~~~~~~~~~~~~~~~~~~
Functionality Changes
~~~~~~~~~~~~~~~~~~~~~
This section applies to any changes in Hypothesis's behaviour, regardless
of their nature. A good rule of thumb is that if it touches a file in
``src`` then it counts.
1. The code should be clear in its intent and behaviour.
2. Behaviour changes should come with appropriate tests to demonstrate
the new behaviour.
3. Hypothesis must never be *flaky*. Flakiness here is
defined as anything where a test fails and this does not indicate
a bug in Hypothesis or in the way the user wrote the code or the test.
4. The version number must be kept up to date, following
`Semantic Versioning <https://semver.org/>`_ conventions: The third (patch)
number increases for things that don't change public facing functionality,
the second (minor) for things that do but are backwards compatible, and
the first (major) changes for things that aren't backwards compatible.
See the section on API changes for the latter two.
5. The changelog should be kept up to date by creating a RELEASE.rst file in
the root of the repository. Make sure you build the documentation and
manually inspect the resulting changelog to see that it looks good - there
are a lot of syntax mistakes possible in RST that don't result in a
compilation error.
~~~~~~~~~~~
API Changes
~~~~~~~~~~~
Public API changes require the most careful scrutiny of all reviews,
because they are the ones we are stuck with for the longest: Hypothesis
follows semantic versioning, and we don't release new major versions
very often.
Public API changes must satisfy the following:
1. All public API changes must be well documented. If it's not documented,
it doesn't count as public API!
2. Changes must be backwards compatible. Where this is not possible, they
must first introduce a deprecation warning, then once the major version
is bumped the deprecation warning and the functionality may be removed.
3. If an API is deprecated, the deprecation warning must make it clear
how the user should modify their code to adapt to this change (
possibly by referring to documentation).
4. If it is likely that we will want to make backwards incompatible changes
to an API later, to whatever extent possible these should be made immediately
when it is introduced instead.
5. APIs should give clear and helpful error messages in response to invalid inputs.
In particular error messages should always display
the value that triggered the error, and ideally be specific about the
relevant feature of it that caused this failure (e.g. the type).
6. Incorrect usage should never "fail silently" - when a user accidentally
misuses an API this should result in an explicit error.
7. Functionality should be limited to that which is easy to support in the
long-term. In particular functionality which is very tied to the
current Hypothesis internals should be avoided.
8. `DRMacIver <https://github.com/DRMacIver>`_ must approve the changes,
though other maintainers are welcome and likely to chip in to review as
well.
9. We have a separate guide for `house API style `_ which should
be followed.
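Point 2's deprecation path follows a standard Python pattern: the old name
keeps working and delegates to its replacement, but emits a warning that
tells the user how to fix their code. A generic sketch (the function names
here are invented, not Hypothesis APIs):

```python
import warnings


def new_api(value):
    return value * 2


def old_api(value):
    # Deprecated alias: warn, explain the fix, then delegate to the
    # replacement so behaviour stays backwards compatible until the
    # next major version removes this function entirely.
    warnings.warn(
        "old_api(value) is deprecated; use new_api(value) instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_api(value)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(3)

assert result == 6  # behaviour is unchanged
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```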
~~~~~~~~~
Bug Fixes
~~~~~~~~~
1. All bug fixes must come with a test that demonstrates the bug on master and
which is fixed in this branch. An exception *may* be made here if the submitter
can convincingly argue that testing this would be prohibitively difficult.
2. Where possible, a fix that makes it impossible for similar bugs to occur is
better.
3. Where possible, a test that will catch both this bug and a more general class
of bug that contains it is better.
~~~~~~~~~~~~~~~~
Settings Changes
~~~~~~~~~~~~~~~~
It is tempting to use the Hypothesis settings object as a dumping ground for
anything and everything that you can think of to control Hypothesis. This
rapidly gets confusing for users and should be carefully avoided.
New settings should:
1. Be something that the user can meaningfully have an opinion on. Many of the
settings that have been added to Hypothesis are just cases where Hypothesis
is abdicating responsibility to do the right thing to the user.
2. Make sense without reference to Hypothesis internals.
3. Correspond to behaviour which can meaningfully differ between tests - either
between two different tests or between two different runs of the same test
(e.g. one use case is the profile system, where you might want to run Hypothesis
differently in CI and development). If you would never expect a test suite to
have more than one value for a setting across any of its runs, it should be
some sort of global configuration, not a setting.
Removing settings is not something we have done so far, so the exact process
is still up in the air, but it should involve a careful deprecation path where
the default behaviour does not change without first introducing warnings.
~~~~~~~~~~~~~~
Engine Changes
~~~~~~~~~~~~~~
Engine changes are anything that change a "fundamental" of how Hypothesis
works. A good rule of thumb is that an engine change is anything that touches
a file in ``hypothesis.internal.conjecture``.
All such changes should:
1. Be approved (or authored) by DRMacIver.
2. Be approved (or authored) by someone who *isn't* DRMacIver (a major problem
with this section of the code is that there is too much that only DRMacIver
understands properly and we want to fix this).
3. If appropriate, come with a test in test_discovery_ability.py showing new
examples that were previously hard to discover.
4. If appropriate, come with a test in test_shrink_quality.py showing how they
improve the shrinker.
~~~~~~~~~~~~~~~~~~~~~~
Non-Blocking Questions
~~~~~~~~~~~~~~~~~~~~~~
These questions should *not* block merge, but may result in additional
issues or changes being opened, either by the original author or by the
reviewer.
1. Is this change well covered by the review items and is there
anything that could usefully be added to the guidelines to improve
that?
2. Were any of the review items confusing or annoying when reviewing this
change? Could they be improved?
3. Are there any more general changes suggested by this, and do they have
appropriate issues and/or pull requests associated with them?
hypothesis-python-3.44.1/notebooks/Designing a better simplifier.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Designing a better simplifier\n",
"\n",
"This is a notebook talking through some of the considerations in the design of Hypothesis's approach to simplification.\n",
"\n",
"It doesn't perfectly mirror what actually happens in Hypothesis, but it should give some consideration to the sort of things that Hypothesis does and why it takes a particular approach.\n",
"\n",
"In order to simplify the scope of this document we are only going to\n",
"concern ourselves with lists of integers. There are a number of API considerations involved in expanding beyond that point, however most of the algorithmic considerations are the same.\n",
"\n",
"The big difference between lists of integers and the general case is that integers can never be too complex. In particular we will rapidly get to the point where individual elements can be simplified in usually only log(n) calls. When dealing with e.g. lists of lists this is a much more complicated proposition. That may be covered in another notebook.\n",
"\n",
"Our objective here is to minimize the number of times we check the condition. We won't be looking at actual timing performance, because usually the speed of the condition is the bottleneck there (and where it's not, everything is fast enough that we need not worry)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def greedy_shrink(ls, constraint, shrink):\n",
" \"\"\"\n",
" This is the \"classic\" QuickCheck algorithm which takes a shrink function\n",
" which will iterate over simpler versions of an example. We are trying\n",
    "    to find a local minimum: that is, an example ls such that constraint(ls)\n",
    "    is True but constraint(t) is False for each t in shrink(ls).\n",
" \"\"\"\n",
" while True:\n",
" for s in shrink(ls):\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink1(ls):\n",
" \"\"\"\n",
" This is our prototype shrink function. It is very bad. It makes the\n",
" mistake of only making very small changes to an example each time.\n",
" \n",
" Most people write something like this the first time they come to\n",
" implement example shrinking. In particular early Hypothesis very much\n",
" made this mistake.\n",
" \n",
" What this does:\n",
" \n",
" For each index, if the value of the index is non-zero we try\n",
" decrementing it by 1.\n",
" \n",
" We then (regardless of if it's zero) try the list with the value at\n",
" that index deleted.\n",
" \"\"\"\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" if s[i] > 0:\n",
" s[i] -= 1\n",
" yield list(s)\n",
" del s[i]\n",
" yield list(s)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def show_trace(start, constraint, simplifier):\n",
" \"\"\"\n",
" This is a debug function. You shouldn't concern yourself with\n",
" its implementation too much.\n",
" \n",
" What it does is print out every intermediate step in applying a\n",
" simplifier (a function of the form (list, constraint) -> list)\n",
" along with whether it is a successful shrink or not.\n",
" \"\"\"\n",
" if start is None:\n",
" while True:\n",
" start = gen_list()\n",
" if constraint(start):\n",
" break\n",
"\n",
" shrinks = [0]\n",
" tests = [0]\n",
"\n",
" def print_shrink(ls):\n",
" tests[0] += 1\n",
" if constraint(ls):\n",
" shrinks[0] += 1\n",
" print(\"✓\", ls)\n",
" return True\n",
" else:\n",
" print(\"✗\", ls)\n",
" return False\n",
" print(\"✓\", start)\n",
" simplifier(start, print_shrink)\n",
" print()\n",
" print(\"%d shrinks with %d function calls\" % (\n",
" shrinks[0], tests[0]))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from functools import partial"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5]\n",
"✓ [4, 5]\n",
"✓ [3, 5]\n",
"✓ [2, 5]\n",
"✓ [1, 5]\n",
"✓ [0, 5]\n",
"✗ [5]\n",
"✓ [0, 4]\n",
"✗ [4]\n",
"✓ [0, 3]\n",
"✗ [3]\n",
"✓ [0, 2]\n",
"✗ [2]\n",
"✓ [0, 1]\n",
"✗ [1]\n",
"✓ [0, 0]\n",
"✗ [0]\n",
"✗ [0]\n",
"\n",
"10 shrinks with 17 function calls\n"
]
}
],
"source": [
"show_trace([5, 5], lambda x: len(x) >= 2, partial(greedy_shrink, shrink=shrink1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That worked reasonably well, but it sure was a lot of function calls for such a small amount of shrinking. What would have happened if we'd started with [100, 100]?"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink2(ls):\n",
" \"\"\"\n",
" Here is an improved shrink function. We first try deleting each element\n",
" and then we try making each element smaller, but we do so from the left\n",
" hand side instead of the right. This means we will always find the\n",
" smallest value that can go in there, but we will do so much sooner.\n",
" \"\"\"\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" del s[i]\n",
" yield list(s)\n",
" \n",
" for i in range(len(ls)):\n",
" for x in range(ls[i]):\n",
" s = list(ls)\n",
" s[i] = x\n",
" yield s"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5]\n",
"✗ [5]\n",
"✗ [5]\n",
"✓ [0, 5]\n",
"✗ [5]\n",
"✗ [0]\n",
"✓ [0, 0]\n",
"✗ [0]\n",
"✗ [0]\n",
"\n",
"2 shrinks with 8 function calls\n"
]
}
],
"source": [
"show_trace([5, 5], lambda x: len(x) >= 2, partial(\n",
" greedy_shrink, shrink=shrink2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This did indeed reduce the number of function calls significantly - we immediately determine that the value in the cell doesn't matter and we can just put zero there. \n",
"\n",
"But what would have happened if the value *did* matter?"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [1000]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [3]\n",
"✗ [4]\n",
"✗ [5]\n",
"✗ [6]\n",
"✗ [7]\n",
"✗ [8]\n",
"✗ [9]\n",
"✗ [10]\n",
"✗ [11]\n",
"✗ [12]\n",
"✗ [13]\n",
"✗ [14]\n",
"✗ [15]\n",
"✗ [16]\n",
"✗ [17]\n",
"✗ [18]\n",
"✗ [19]\n",
"✗ [20]\n",
"✗ [21]\n",
"✗ [22]\n",
"✗ [23]\n",
"✗ [24]\n",
"✗ [25]\n",
"✗ [26]\n",
"✗ [27]\n",
"✗ [28]\n",
"✗ [29]\n",
"✗ [30]\n",
"✗ [31]\n",
"✗ [32]\n",
"✗ [33]\n",
"✗ [34]\n",
"✗ [35]\n",
"✗ [36]\n",
"✗ [37]\n",
"✗ [38]\n",
"✗ [39]\n",
"✗ [40]\n",
"✗ [41]\n",
"✗ [42]\n",
"✗ [43]\n",
"✗ [44]\n",
"✗ [45]\n",
"✗ [46]\n",
"✗ [47]\n",
"✗ [48]\n",
"✗ [49]\n",
"✗ [50]\n",
"✗ [51]\n",
"✗ [52]\n",
"✗ [53]\n",
"✗ [54]\n",
"✗ [55]\n",
"✗ [56]\n",
"✗ [57]\n",
"✗ [58]\n",
"✗ [59]\n",
"✗ [60]\n",
"✗ [61]\n",
"✗ [62]\n",
"✗ [63]\n",
"✗ [64]\n",
"✗ [65]\n",
"✗ [66]\n",
"✗ [67]\n",
"✗ [68]\n",
"✗ [69]\n",
"✗ [70]\n",
"✗ [71]\n",
"✗ [72]\n",
"✗ [73]\n",
"✗ [74]\n",
"✗ [75]\n",
"✗ [76]\n",
"✗ [77]\n",
"✗ [78]\n",
"✗ [79]\n",
"✗ [80]\n",
"✗ [81]\n",
"✗ [82]\n",
"✗ [83]\n",
"✗ [84]\n",
"✗ [85]\n",
"✗ [86]\n",
"✗ [87]\n",
"✗ [88]\n",
"✗ [89]\n",
"✗ [90]\n",
"✗ [91]\n",
"✗ [92]\n",
"✗ [93]\n",
"✗ [94]\n",
"✗ [95]\n",
"✗ [96]\n",
"✗ [97]\n",
"✗ [98]\n",
"✗ [99]\n",
"✗ [100]\n",
"✗ [101]\n",
"✗ [102]\n",
"✗ [103]\n",
"✗ [104]\n",
"✗ [105]\n",
"✗ [106]\n",
"✗ [107]\n",
"✗ [108]\n",
"✗ [109]\n",
"✗ [110]\n",
"✗ [111]\n",
"✗ [112]\n",
"✗ [113]\n",
"✗ [114]\n",
"✗ [115]\n",
"✗ [116]\n",
"✗ [117]\n",
"✗ [118]\n",
"✗ [119]\n",
"✗ [120]\n",
"✗ [121]\n",
"✗ [122]\n",
"✗ [123]\n",
"✗ [124]\n",
"✗ [125]\n",
"✗ [126]\n",
"✗ [127]\n",
"✗ [128]\n",
"✗ [129]\n",
"✗ [130]\n",
"✗ [131]\n",
"✗ [132]\n",
"✗ [133]\n",
"✗ [134]\n",
"✗ [135]\n",
"✗ [136]\n",
"✗ [137]\n",
"✗ [138]\n",
"✗ [139]\n",
"✗ [140]\n",
"✗ [141]\n",
"✗ [142]\n",
"✗ [143]\n",
"✗ [144]\n",
"✗ [145]\n",
"✗ [146]\n",
"✗ [147]\n",
"✗ [148]\n",
"✗ [149]\n",
"✗ [150]\n",
"✗ [151]\n",
"✗ [152]\n",
"✗ [153]\n",
"✗ [154]\n",
"✗ [155]\n",
"✗ [156]\n",
"✗ [157]\n",
"✗ [158]\n",
"✗ [159]\n",
"✗ [160]\n",
"✗ [161]\n",
"✗ [162]\n",
"✗ [163]\n",
"✗ [164]\n",
"✗ [165]\n",
"✗ [166]\n",
"✗ [167]\n",
"✗ [168]\n",
"✗ [169]\n",
"✗ [170]\n",
"✗ [171]\n",
"✗ [172]\n",
"✗ [173]\n",
"✗ [174]\n",
"✗ [175]\n",
"✗ [176]\n",
"✗ [177]\n",
"✗ [178]\n",
"✗ [179]\n",
"✗ [180]\n",
"✗ [181]\n",
"✗ [182]\n",
"✗ [183]\n",
"✗ [184]\n",
"✗ [185]\n",
"✗ [186]\n",
"✗ [187]\n",
"✗ [188]\n",
"✗ [189]\n",
"✗ [190]\n",
"✗ [191]\n",
"✗ [192]\n",
"✗ [193]\n",
"✗ [194]\n",
"✗ [195]\n",
"✗ [196]\n",
"✗ [197]\n",
"✗ [198]\n",
"✗ [199]\n",
"✗ [200]\n",
"✗ [201]\n",
"✗ [202]\n",
"✗ [203]\n",
"✗ [204]\n",
"✗ [205]\n",
"✗ [206]\n",
"✗ [207]\n",
"✗ [208]\n",
"✗ [209]\n",
"✗ [210]\n",
"✗ [211]\n",
"✗ [212]\n",
"✗ [213]\n",
"✗ [214]\n",
"✗ [215]\n",
"✗ [216]\n",
"✗ [217]\n",
"✗ [218]\n",
"✗ [219]\n",
"✗ [220]\n",
"✗ [221]\n",
"✗ [222]\n",
"✗ [223]\n",
"✗ [224]\n",
"✗ [225]\n",
"✗ [226]\n",
"✗ [227]\n",
"✗ [228]\n",
"✗ [229]\n",
"✗ [230]\n",
"✗ [231]\n",
"✗ [232]\n",
"✗ [233]\n",
"✗ [234]\n",
"✗ [235]\n",
"✗ [236]\n",
"✗ [237]\n",
"✗ [238]\n",
"✗ [239]\n",
"✗ [240]\n",
"✗ [241]\n",
"✗ [242]\n",
"✗ [243]\n",
"✗ [244]\n",
"✗ [245]\n",
"✗ [246]\n",
"✗ [247]\n",
"✗ [248]\n",
"✗ [249]\n",
"✗ [250]\n",
"✗ [251]\n",
"✗ [252]\n",
"✗ [253]\n",
"✗ [254]\n",
"✗ [255]\n",
"✗ [256]\n",
"✗ [257]\n",
"✗ [258]\n",
"✗ [259]\n",
"✗ [260]\n",
"✗ [261]\n",
"✗ [262]\n",
"✗ [263]\n",
"✗ [264]\n",
"✗ [265]\n",
"✗ [266]\n",
"✗ [267]\n",
"✗ [268]\n",
"✗ [269]\n",
"✗ [270]\n",
"✗ [271]\n",
"✗ [272]\n",
"✗ [273]\n",
"✗ [274]\n",
"✗ [275]\n",
"✗ [276]\n",
"✗ [277]\n",
"✗ [278]\n",
"✗ [279]\n",
"✗ [280]\n",
"✗ [281]\n",
"✗ [282]\n",
"✗ [283]\n",
"✗ [284]\n",
"✗ [285]\n",
"✗ [286]\n",
"✗ [287]\n",
"✗ [288]\n",
"✗ [289]\n",
"✗ [290]\n",
"✗ [291]\n",
"✗ [292]\n",
"✗ [293]\n",
"✗ [294]\n",
"✗ [295]\n",
"✗ [296]\n",
"✗ [297]\n",
"✗ [298]\n",
"✗ [299]\n",
"✗ [300]\n",
"✗ [301]\n",
"✗ [302]\n",
"✗ [303]\n",
"✗ [304]\n",
"✗ [305]\n",
"✗ [306]\n",
"✗ [307]\n",
"✗ [308]\n",
"✗ [309]\n",
"✗ [310]\n",
"✗ [311]\n",
"✗ [312]\n",
"✗ [313]\n",
"✗ [314]\n",
"✗ [315]\n",
"✗ [316]\n",
"✗ [317]\n",
"✗ [318]\n",
"✗ [319]\n",
"✗ [320]\n",
"✗ [321]\n",
"✗ [322]\n",
"✗ [323]\n",
"✗ [324]\n",
"✗ [325]\n",
"✗ [326]\n",
"✗ [327]\n",
"✗ [328]\n",
"✗ [329]\n",
"✗ [330]\n",
"✗ [331]\n",
"✗ [332]\n",
"✗ [333]\n",
"✗ [334]\n",
"✗ [335]\n",
"✗ [336]\n",
"✗ [337]\n",
"✗ [338]\n",
"✗ [339]\n",
"✗ [340]\n",
"✗ [341]\n",
"✗ [342]\n",
"✗ [343]\n",
"✗ [344]\n",
"✗ [345]\n",
"✗ [346]\n",
"✗ [347]\n",
"✗ [348]\n",
"✗ [349]\n",
"✗ [350]\n",
"✗ [351]\n",
"✗ [352]\n",
"✗ [353]\n",
"✗ [354]\n",
"✗ [355]\n",
"✗ [356]\n",
"✗ [357]\n",
"✗ [358]\n",
"✗ [359]\n",
"✗ [360]\n",
"✗ [361]\n",
"✗ [362]\n",
"✗ [363]\n",
"✗ [364]\n",
"✗ [365]\n",
"✗ [366]\n",
"✗ [367]\n",
"✗ [368]\n",
"✗ [369]\n",
"✗ [370]\n",
"✗ [371]\n",
"✗ [372]\n",
"✗ [373]\n",
"✗ [374]\n",
"✗ [375]\n",
"✗ [376]\n",
"✗ [377]\n",
"✗ [378]\n",
"✗ [379]\n",
"✗ [380]\n",
"✗ [381]\n",
"✗ [382]\n",
"✗ [383]\n",
"✗ [384]\n",
"✗ [385]\n",
"✗ [386]\n",
"✗ [387]\n",
"✗ [388]\n",
"✗ [389]\n",
"✗ [390]\n",
"✗ [391]\n",
"✗ [392]\n",
"✗ [393]\n",
"✗ [394]\n",
"✗ [395]\n",
"✗ [396]\n",
"✗ [397]\n",
"✗ [398]\n",
"✗ [399]\n",
"✗ [400]\n",
"✗ [401]\n",
"✗ [402]\n",
"✗ [403]\n",
"✗ [404]\n",
"✗ [405]\n",
"✗ [406]\n",
"✗ [407]\n",
"✗ [408]\n",
"✗ [409]\n",
"✗ [410]\n",
"✗ [411]\n",
"✗ [412]\n",
"✗ [413]\n",
"✗ [414]\n",
"✗ [415]\n",
"✗ [416]\n",
"✗ [417]\n",
"✗ [418]\n",
"✗ [419]\n",
"✗ [420]\n",
"✗ [421]\n",
"✗ [422]\n",
"✗ [423]\n",
"✗ [424]\n",
"✗ [425]\n",
"✗ [426]\n",
"✗ [427]\n",
"✗ [428]\n",
"✗ [429]\n",
"✗ [430]\n",
"✗ [431]\n",
"✗ [432]\n",
"✗ [433]\n",
"✗ [434]\n",
"✗ [435]\n",
"✗ [436]\n",
"✗ [437]\n",
"✗ [438]\n",
"✗ [439]\n",
"✗ [440]\n",
"✗ [441]\n",
"✗ [442]\n",
"✗ [443]\n",
"✗ [444]\n",
"✗ [445]\n",
"✗ [446]\n",
"✗ [447]\n",
"✗ [448]\n",
"✗ [449]\n",
"✗ [450]\n",
"✗ [451]\n",
"✗ [452]\n",
"✗ [453]\n",
"✗ [454]\n",
"✗ [455]\n",
"✗ [456]\n",
"✗ [457]\n",
"✗ [458]\n",
"✗ [459]\n",
"✗ [460]\n",
"✗ [461]\n",
"✗ [462]\n",
"✗ [463]\n",
"✗ [464]\n",
"✗ [465]\n",
"✗ [466]\n",
"✗ [467]\n",
"✗ [468]\n",
"✗ [469]\n",
"✗ [470]\n",
"✗ [471]\n",
"✗ [472]\n",
"✗ [473]\n",
"✗ [474]\n",
"✗ [475]\n",
"✗ [476]\n",
"✗ [477]\n",
"✗ [478]\n",
"✗ [479]\n",
"✗ [480]\n",
"✗ [481]\n",
"✗ [482]\n",
"✗ [483]\n",
"✗ [484]\n",
"✗ [485]\n",
"✗ [486]\n",
"✗ [487]\n",
"✗ [488]\n",
"✗ [489]\n",
"✗ [490]\n",
"✗ [491]\n",
"✗ [492]\n",
"✗ [493]\n",
"✗ [494]\n",
"✗ [495]\n",
"✗ [496]\n",
"✗ [497]\n",
"✗ [498]\n",
"✗ [499]\n",
"✓ [500]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [3]\n",
"✗ [4]\n",
"✗ [5]\n",
"✗ [6]\n",
"✗ [7]\n",
"✗ [8]\n",
"✗ [9]\n",
"✗ [10]\n",
"✗ [11]\n",
"✗ [12]\n",
"✗ [13]\n",
"✗ [14]\n",
"✗ [15]\n",
"✗ [16]\n",
"✗ [17]\n",
"✗ [18]\n",
"✗ [19]\n",
"✗ [20]\n",
"✗ [21]\n",
"✗ [22]\n",
"✗ [23]\n",
"✗ [24]\n",
"✗ [25]\n",
"✗ [26]\n",
"✗ [27]\n",
"✗ [28]\n",
"✗ [29]\n",
"✗ [30]\n",
"✗ [31]\n",
"✗ [32]\n",
"✗ [33]\n",
"✗ [34]\n",
"✗ [35]\n",
"✗ [36]\n",
"✗ [37]\n",
"✗ [38]\n",
"✗ [39]\n",
"✗ [40]\n",
"✗ [41]\n",
"✗ [42]\n",
"✗ [43]\n",
"✗ [44]\n",
"✗ [45]\n",
"✗ [46]\n",
"✗ [47]\n",
"✗ [48]\n",
"✗ [49]\n",
"✗ [50]\n",
"✗ [51]\n",
"✗ [52]\n",
"✗ [53]\n",
"✗ [54]\n",
"✗ [55]\n",
"✗ [56]\n",
"✗ [57]\n",
"✗ [58]\n",
"✗ [59]\n",
"✗ [60]\n",
"✗ [61]\n",
"✗ [62]\n",
"✗ [63]\n",
"✗ [64]\n",
"✗ [65]\n",
"✗ [66]\n",
"✗ [67]\n",
"✗ [68]\n",
"✗ [69]\n",
"✗ [70]\n",
"✗ [71]\n",
"✗ [72]\n",
"✗ [73]\n",
"✗ [74]\n",
"✗ [75]\n",
"✗ [76]\n",
"✗ [77]\n",
"✗ [78]\n",
"✗ [79]\n",
"✗ [80]\n",
"✗ [81]\n",
"✗ [82]\n",
"✗ [83]\n",
"✗ [84]\n",
"✗ [85]\n",
"✗ [86]\n",
"✗ [87]\n",
"✗ [88]\n",
"✗ [89]\n",
"✗ [90]\n",
"✗ [91]\n",
"✗ [92]\n",
"✗ [93]\n",
"✗ [94]\n",
"✗ [95]\n",
"✗ [96]\n",
"✗ [97]\n",
"✗ [98]\n",
"✗ [99]\n",
"✗ [100]\n",
"✗ [101]\n",
"✗ [102]\n",
"✗ [103]\n",
"✗ [104]\n",
"✗ [105]\n",
"✗ [106]\n",
"✗ [107]\n",
"✗ [108]\n",
"✗ [109]\n",
"✗ [110]\n",
"✗ [111]\n",
"✗ [112]\n",
"✗ [113]\n",
"✗ [114]\n",
"✗ [115]\n",
"✗ [116]\n",
"✗ [117]\n",
"✗ [118]\n",
"✗ [119]\n",
"✗ [120]\n",
"✗ [121]\n",
"✗ [122]\n",
"✗ [123]\n",
"✗ [124]\n",
"✗ [125]\n",
"✗ [126]\n",
"✗ [127]\n",
"✗ [128]\n",
"✗ [129]\n",
"✗ [130]\n",
"✗ [131]\n",
"✗ [132]\n",
"✗ [133]\n",
"✗ [134]\n",
"✗ [135]\n",
"✗ [136]\n",
"✗ [137]\n",
"✗ [138]\n",
"✗ [139]\n",
"✗ [140]\n",
"✗ [141]\n",
"✗ [142]\n",
"✗ [143]\n",
"✗ [144]\n",
"✗ [145]\n",
"✗ [146]\n",
"✗ [147]\n",
"✗ [148]\n",
"✗ [149]\n",
"✗ [150]\n",
"✗ [151]\n",
"✗ [152]\n",
"✗ [153]\n",
"✗ [154]\n",
"✗ [155]\n",
"✗ [156]\n",
"✗ [157]\n",
"✗ [158]\n",
"✗ [159]\n",
"✗ [160]\n",
"✗ [161]\n",
"✗ [162]\n",
"✗ [163]\n",
"✗ [164]\n",
"✗ [165]\n",
"✗ [166]\n",
"✗ [167]\n",
"✗ [168]\n",
"✗ [169]\n",
"✗ [170]\n",
"✗ [171]\n",
"✗ [172]\n",
"✗ [173]\n",
"✗ [174]\n",
"✗ [175]\n",
"✗ [176]\n",
"✗ [177]\n",
"✗ [178]\n",
"✗ [179]\n",
"✗ [180]\n",
"✗ [181]\n",
"✗ [182]\n",
"✗ [183]\n",
"✗ [184]\n",
"✗ [185]\n",
"✗ [186]\n",
"✗ [187]\n",
"✗ [188]\n",
"✗ [189]\n",
"✗ [190]\n",
"✗ [191]\n",
"✗ [192]\n",
"✗ [193]\n",
"✗ [194]\n",
"✗ [195]\n",
"✗ [196]\n",
"✗ [197]\n",
"✗ [198]\n",
"✗ [199]\n",
"✗ [200]\n",
"✗ [201]\n",
"✗ [202]\n",
"✗ [203]\n",
"✗ [204]\n",
"✗ [205]\n",
"✗ [206]\n",
"✗ [207]\n",
"✗ [208]\n",
"✗ [209]\n",
"✗ [210]\n",
"✗ [211]\n",
"✗ [212]\n",
"✗ [213]\n",
"✗ [214]\n",
"✗ [215]\n",
"✗ [216]\n",
"✗ [217]\n",
"✗ [218]\n",
"✗ [219]\n",
"✗ [220]\n",
"✗ [221]\n",
"✗ [222]\n",
"✗ [223]\n",
"✗ [224]\n",
"✗ [225]\n",
"✗ [226]\n",
"✗ [227]\n",
"✗ [228]\n",
"✗ [229]\n",
"✗ [230]\n",
"✗ [231]\n",
"✗ [232]\n",
"✗ [233]\n",
"✗ [234]\n",
"✗ [235]\n",
"✗ [236]\n",
"✗ [237]\n",
"✗ [238]\n",
"✗ [239]\n",
"✗ [240]\n",
"✗ [241]\n",
"✗ [242]\n",
"✗ [243]\n",
"✗ [244]\n",
"✗ [245]\n",
"✗ [246]\n",
"✗ [247]\n",
"✗ [248]\n",
"✗ [249]\n",
"✗ [250]\n",
"✗ [251]\n",
"✗ [252]\n",
"✗ [253]\n",
"✗ [254]\n",
"✗ [255]\n",
"✗ [256]\n",
"✗ [257]\n",
"✗ [258]\n",
"✗ [259]\n",
"✗ [260]\n",
"✗ [261]\n",
"✗ [262]\n",
"✗ [263]\n",
"✗ [264]\n",
"✗ [265]\n",
"✗ [266]\n",
"✗ [267]\n",
"✗ [268]\n",
"✗ [269]\n",
"✗ [270]\n",
"✗ [271]\n",
"✗ [272]\n",
"✗ [273]\n",
"✗ [274]\n",
"✗ [275]\n",
"✗ [276]\n",
"✗ [277]\n",
"✗ [278]\n",
"✗ [279]\n",
"✗ [280]\n",
"✗ [281]\n",
"✗ [282]\n",
"✗ [283]\n",
"✗ [284]\n",
"✗ [285]\n",
"✗ [286]\n",
"✗ [287]\n",
"✗ [288]\n",
"✗ [289]\n",
"✗ [290]\n",
"✗ [291]\n",
"✗ [292]\n",
"✗ [293]\n",
"✗ [294]\n",
"✗ [295]\n",
"✗ [296]\n",
"✗ [297]\n",
"✗ [298]\n",
"✗ [299]\n",
"✗ [300]\n",
"✗ [301]\n",
"✗ [302]\n",
"✗ [303]\n",
"✗ [304]\n",
"✗ [305]\n",
"✗ [306]\n",
"✗ [307]\n",
"✗ [308]\n",
"✗ [309]\n",
"✗ [310]\n",
"✗ [311]\n",
"✗ [312]\n",
"✗ [313]\n",
"✗ [314]\n",
"✗ [315]\n",
"✗ [316]\n",
"✗ [317]\n",
"✗ [318]\n",
"✗ [319]\n",
"✗ [320]\n",
"✗ [321]\n",
"✗ [322]\n",
"✗ [323]\n",
"✗ [324]\n",
"✗ [325]\n",
"✗ [326]\n",
"✗ [327]\n",
"✗ [328]\n",
"✗ [329]\n",
"✗ [330]\n",
"✗ [331]\n",
"✗ [332]\n",
"✗ [333]\n",
"✗ [334]\n",
"✗ [335]\n",
"✗ [336]\n",
"✗ [337]\n",
"✗ [338]\n",
"✗ [339]\n",
"✗ [340]\n",
"✗ [341]\n",
"✗ [342]\n",
"✗ [343]\n",
"✗ [344]\n",
"✗ [345]\n",
"✗ [346]\n",
"✗ [347]\n",
"✗ [348]\n",
"✗ [349]\n",
"✗ [350]\n",
"✗ [351]\n",
"✗ [352]\n",
"✗ [353]\n",
"✗ [354]\n",
"✗ [355]\n",
"✗ [356]\n",
"✗ [357]\n",
"✗ [358]\n",
"✗ [359]\n",
"✗ [360]\n",
"✗ [361]\n",
"✗ [362]\n",
"✗ [363]\n",
"✗ [364]\n",
"✗ [365]\n",
"✗ [366]\n",
"✗ [367]\n",
"✗ [368]\n",
"✗ [369]\n",
"✗ [370]\n",
"✗ [371]\n",
"✗ [372]\n",
"✗ [373]\n",
"✗ [374]\n",
"✗ [375]\n",
"✗ [376]\n",
"✗ [377]\n",
"✗ [378]\n",
"✗ [379]\n",
"✗ [380]\n",
"✗ [381]\n",
"✗ [382]\n",
"✗ [383]\n",
"✗ [384]\n",
"✗ [385]\n",
"✗ [386]\n",
"✗ [387]\n",
"✗ [388]\n",
"✗ [389]\n",
"✗ [390]\n",
"✗ [391]\n",
"✗ [392]\n",
"✗ [393]\n",
"✗ [394]\n",
"✗ [395]\n",
"✗ [396]\n",
"✗ [397]\n",
"✗ [398]\n",
"✗ [399]\n",
"✗ [400]\n",
"✗ [401]\n",
"✗ [402]\n",
"✗ [403]\n",
"✗ [404]\n",
"✗ [405]\n",
"✗ [406]\n",
"✗ [407]\n",
"✗ [408]\n",
"✗ [409]\n",
"✗ [410]\n",
"✗ [411]\n",
"✗ [412]\n",
"✗ [413]\n",
"✗ [414]\n",
"✗ [415]\n",
"✗ [416]\n",
"✗ [417]\n",
"✗ [418]\n",
"✗ [419]\n",
"✗ [420]\n",
"✗ [421]\n",
"✗ [422]\n",
"✗ [423]\n",
"✗ [424]\n",
"✗ [425]\n",
"✗ [426]\n",
"✗ [427]\n",
"✗ [428]\n",
"✗ [429]\n",
"✗ [430]\n",
"✗ [431]\n",
"✗ [432]\n",
"✗ [433]\n",
"✗ [434]\n",
"✗ [435]\n",
"✗ [436]\n",
"✗ [437]\n",
"✗ [438]\n",
"✗ [439]\n",
"✗ [440]\n",
"✗ [441]\n",
"✗ [442]\n",
"✗ [443]\n",
"✗ [444]\n",
"✗ [445]\n",
"✗ [446]\n",
"✗ [447]\n",
"✗ [448]\n",
"✗ [449]\n",
"✗ [450]\n",
"✗ [451]\n",
"✗ [452]\n",
"✗ [453]\n",
"✗ [454]\n",
"✗ [455]\n",
"✗ [456]\n",
"✗ [457]\n",
"✗ [458]\n",
"✗ [459]\n",
"✗ [460]\n",
"✗ [461]\n",
"✗ [462]\n",
"✗ [463]\n",
"✗ [464]\n",
"✗ [465]\n",
"✗ [466]\n",
"✗ [467]\n",
"✗ [468]\n",
"✗ [469]\n",
"✗ [470]\n",
"✗ [471]\n",
"✗ [472]\n",
"✗ [473]\n",
"✗ [474]\n",
"✗ [475]\n",
"✗ [476]\n",
"✗ [477]\n",
"✗ [478]\n",
"✗ [479]\n",
"✗ [480]\n",
"✗ [481]\n",
"✗ [482]\n",
"✗ [483]\n",
"✗ [484]\n",
"✗ [485]\n",
"✗ [486]\n",
"✗ [487]\n",
"✗ [488]\n",
"✗ [489]\n",
"✗ [490]\n",
"✗ [491]\n",
"✗ [492]\n",
"✗ [493]\n",
"✗ [494]\n",
"✗ [495]\n",
"✗ [496]\n",
"✗ [497]\n",
"✗ [498]\n",
"✗ [499]\n",
"\n",
"1 shrinks with 1003 function calls\n"
]
}
],
"source": [
"show_trace([1000], lambda x: sum(x) >= 500,\n",
" partial(greedy_shrink, shrink=shrink2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because we're trying every intermediate value, what we have amounts to a linear probe up to the smallest value that will work. If that smallest value is large, this will take a long time. Our shrinking is still O(n), but n is now the size of the smallest value that will work rather than the starting value. This is still pretty suboptimal.\n",
"\n",
"What we want to do is try to replace our linear probe with a binary search. What we'll get isn't exactly a binary search, but it's close enough."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
    "def shrink_integer(n):\n",
    "    \"\"\"\n",
    "    Shrinker for individual integers.\n",
    "    \n",
    "    What happens is that we start from the left, first probing upwards in powers of two.\n",
    "    \n",
    "    When this would take us past our target value we then binary chop towards it.\n",
    "    \"\"\"\n",
    "    if not n:\n",
    "        return\n",
    "    for k in range(64):\n",
    "        probe = 2 ** k\n",
    "        if probe >= n:\n",
    "            break\n",
    "        yield probe - 1\n",
    "    probe //= 2\n",
    "    while True:\n",
    "        probe = (probe + n) // 2\n",
    "        yield probe\n",
    "        if probe == n - 1:\n",
    "            break\n",
    "\n",
    "\n",
    "def shrink3(ls):\n",
    "    for i in range(len(ls)):\n",
    "        s = list(ls)\n",
    "        del s[i]\n",
    "        yield list(s)\n",
    "        for x in shrink_integer(ls[i]):\n",
    "            s = list(ls)\n",
    "            s[i] = x\n",
    "            yield s"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[0, 1, 3, 7, 15, 31, 63, 127, 255, 378, 439, 469, 484, 492, 496, 498, 499]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list(shrink_integer(500))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This gives us a reasonable distribution of O(log(n)) values in the middle while still making sure we start with 0 and finish with n - 1.\n",
"\n",
"In Hypothesis's actual implementation we also try random values in the probe region in case there's something special about things near powers of two, but we won't worry about that here."
]
},
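  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(A quick sanity check of that claim, as a minimal sketch: since the probe phase doubles towards n and the chop phase halves the remaining gap, the number of candidates `shrink_integer` yields should grow roughly like 2 * log2(n).)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# The candidate count should be roughly logarithmic in n:\n",
    "# one doubling pass of probes, then one binary chop towards n.\n",
    "for n in [10, 1000, 10 ** 6]:\n",
    "    print(n, len(list(shrink_integer(n))))"
   ]
  },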
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [1000]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✓ [511]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [383]\n",
"✗ [447]\n",
"✗ [479]\n",
"✗ [495]\n",
"✓ [503]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [379]\n",
"✗ [441]\n",
"✗ [472]\n",
"✗ [487]\n",
"✗ [495]\n",
"✗ [499]\n",
"✓ [501]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [378]\n",
"✗ [439]\n",
"✗ [470]\n",
"✗ [485]\n",
"✗ [493]\n",
"✗ [497]\n",
"✗ [499]\n",
"✓ [500]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [378]\n",
"✗ [439]\n",
"✗ [469]\n",
"✗ [484]\n",
"✗ [492]\n",
"✗ [496]\n",
"✗ [498]\n",
"✗ [499]\n",
"\n",
"4 shrinks with 79 function calls\n"
]
}
],
"source": [
"show_trace([1000], lambda x: sum(x) >= 500, partial(\n",
" greedy_shrink, shrink=shrink3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This now runs in a much more reasonable number of function calls.\n",
"\n",
    "Now we want to look at how to reduce the number of elements in the list more efficiently. We're currently making the same mistake we did with numbers: we only reduce one element at a time."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2]\n",
"✓ [2, 2, 2]\n",
"✓ [2, 2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✓ [1, 2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✗ [1]\n",
"✗ [1, 0]\n",
"✗ [1, 1]\n",
"\n",
"19 shrinks with 26 function calls\n"
]
}
],
"source": [
"show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n",
" greedy_shrink, shrink=shrink3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "We won't try too hard here, because typically our lists are not *that* long. We will just start by attempting to find a shortish initial prefix that demonstrates the behaviour:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
    "def shrink_to_prefix(ls):\n",
    "    i = 1\n",
    "    while i < len(ls):\n",
    "        yield ls[:i]\n",
    "        i *= 2\n",
    "\n",
    "\n",
    "def delete_individual_elements(ls):\n",
    "    for i in range(len(ls)):\n",
    "        s = list(ls)\n",
    "        del s[i]\n",
    "        yield list(s)\n",
    "\n",
    "\n",
    "def shrink_individual_elements(ls):\n",
    "    for i in range(len(ls)):\n",
    "        for x in shrink_integer(ls[i]):\n",
    "            s = list(ls)\n",
    "            s[i] = x\n",
    "            yield s\n",
    "\n",
    "\n",
    "def shrink4(ls):\n",
    "    yield from shrink_to_prefix(ls)\n",
    "    yield from delete_individual_elements(ls)\n",
    "    yield from shrink_individual_elements(ls)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [2]\n",
"✓ [2, 2]\n",
"✗ [2]\n",
"✗ [2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✓ [1, 2]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [1]\n",
"✗ [0, 2]\n",
"✗ [1, 0]\n",
"✗ [1, 1]\n",
"\n",
"2 shrinks with 13 function calls\n"
]
}
],
"source": [
"show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n",
" greedy_shrink, shrink=shrink4))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The problem we now want to address is the fact that when we're shrinking elements we're only shrinking them one at a time. This means that even though we're only O(log(k)) in each element, we're O(log(k)^n) in the whole list where n is the length of the list. For even very modest k this is bad.\n",
"\n",
"In general we may not be able to fix this, but in practice for a lot of common structures we can exploit similarity to try to do simultaneous shrinking.\n",
"\n",
"Here is our starting example: We start and finish with all identical values. We would like to be able to shortcut through a lot of the uninteresting intermediate examples somehow."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✓ [7, 20, 20, 20, 20]\n",
"✗ [7]\n",
"✗ [7, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✓ [5, 20, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✗ [4, 20, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✓ [5, 7, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 7]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [0, 7, 20, 20, 20]\n",
"✗ [1, 7, 20, 20, 20]\n",
"✗ [3, 7, 20, 20, 20]\n",
"✗ [4, 7, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✓ [5, 5, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [0, 5, 20, 20, 20]\n",
"✗ [1, 5, 20, 20, 20]\n",
"✗ [3, 5, 20, 20, 20]\n",
"✗ [4, 5, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✗ [5, 4, 20, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✓ [5, 5, 7, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [0, 5, 7, 20, 20]\n",
"✗ [1, 5, 7, 20, 20]\n",
"✗ [3, 5, 7, 20, 20]\n",
"✗ [4, 5, 7, 20, 20]\n",
"✗ [5, 0, 7, 20, 20]\n",
"✗ [5, 1, 7, 20, 20]\n",
"✗ [5, 3, 7, 20, 20]\n",
"✗ [5, 4, 7, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✓ [5, 5, 5, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [0, 5, 5, 20, 20]\n",
"✗ [1, 5, 5, 20, 20]\n",
"✗ [3, 5, 5, 20, 20]\n",
"✗ [4, 5, 5, 20, 20]\n",
"✗ [5, 0, 5, 20, 20]\n",
"✗ [5, 1, 5, 20, 20]\n",
"✗ [5, 3, 5, 20, 20]\n",
"✗ [5, 4, 5, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✗ [5, 5, 4, 20, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✓ [5, 5, 5, 7, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [0, 5, 5, 7, 20]\n",
"✗ [1, 5, 5, 7, 20]\n",
"✗ [3, 5, 5, 7, 20]\n",
"✗ [4, 5, 5, 7, 20]\n",
"✗ [5, 0, 5, 7, 20]\n",
"✗ [5, 1, 5, 7, 20]\n",
"✗ [5, 3, 5, 7, 20]\n",
"✗ [5, 4, 5, 7, 20]\n",
"✗ [5, 5, 0, 7, 20]\n",
"✗ [5, 5, 1, 7, 20]\n",
"✗ [5, 5, 3, 7, 20]\n",
"✗ [5, 5, 4, 7, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✓ [5, 5, 5, 5, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 20]\n",
"✗ [1, 5, 5, 5, 20]\n",
"✗ [3, 5, 5, 5, 20]\n",
"✗ [4, 5, 5, 5, 20]\n",
"✗ [5, 0, 5, 5, 20]\n",
"✗ [5, 1, 5, 5, 20]\n",
"✗ [5, 3, 5, 5, 20]\n",
"✗ [5, 4, 5, 5, 20]\n",
"✗ [5, 5, 0, 5, 20]\n",
"✗ [5, 5, 1, 5, 20]\n",
"✗ [5, 5, 3, 5, 20]\n",
"✗ [5, 5, 4, 5, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✗ [5, 5, 5, 4, 20]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 7]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 7]\n",
"✗ [1, 5, 5, 5, 7]\n",
"✗ [3, 5, 5, 5, 7]\n",
"✗ [4, 5, 5, 5, 7]\n",
"✗ [5, 0, 5, 5, 7]\n",
"✗ [5, 1, 5, 5, 7]\n",
"✗ [5, 3, 5, 5, 7]\n",
"✗ [5, 4, 5, 5, 7]\n",
"✗ [5, 5, 0, 5, 7]\n",
"✗ [5, 5, 1, 5, 7]\n",
"✗ [5, 5, 3, 5, 7]\n",
"✗ [5, 5, 4, 5, 7]\n",
"✗ [5, 5, 5, 0, 7]\n",
"✗ [5, 5, 5, 1, 7]\n",
"✗ [5, 5, 5, 3, 7]\n",
"✗ [5, 5, 5, 4, 7]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"12 shrinks with 236 function calls\n"
]
}
],
"source": [
"show_trace([20] * 7,\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink4))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
    "def shrink_shared(ls):\n",
    "    \"\"\"\n",
    "    Look for all sets of shared indices and try to perform a simultaneous shrink on\n",
    "    their value, replacing all of them at once.\n",
    "    \n",
    "    In actual Hypothesis we also try replacing only subsets of the values when there\n",
    "    are more than two shared values, but we won't worry about that here.\n",
    "    \"\"\"\n",
    "    shared_indices = {}\n",
    "    for i in range(len(ls)):\n",
    "        shared_indices.setdefault(ls[i], []).append(i)\n",
    "    for sharing in shared_indices.values():\n",
    "        if len(sharing) > 1:\n",
    "            for v in shrink_integer(ls[sharing[0]]):\n",
    "                s = list(ls)\n",
    "                for i in sharing:\n",
    "                    s[i] = v\n",
    "                yield s\n",
    "\n",
    "\n",
    "def shrink5(ls):\n",
    "    yield from shrink_to_prefix(ls)\n",
    "    yield from delete_individual_elements(ls)\n",
    "    yield from shrink_shared(ls)\n",
    "    yield from shrink_individual_elements(ls)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [7, 7, 7, 7, 7]\n",
"✗ [7]\n",
"✗ [7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"4 shrinks with 64 function calls\n"
]
}
],
"source": [
"show_trace([20] * 7,\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This achieves the desired result. We rapidly progress through all of the intermediate stages. We do still have to perform individual shrinks at the end unfortunately (this is unavoidable), but the size of the elements is much smaller now so it takes less time.\n",
"\n",
"Unfortunately while this solves the problem in this case it's almost useless, because unless you find yourself in the exact right starting position it never does anything."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 21, 22, 23, 24, 25, 26]\n",
"✗ [20]\n",
"✗ [20, 21]\n",
"✗ [20, 21, 22, 23]\n",
"✓ [21, 22, 23, 24, 25, 26]\n",
"✗ [21]\n",
"✗ [21, 22]\n",
"✗ [21, 22, 23, 24]\n",
"✓ [22, 23, 24, 25, 26]\n",
"✗ [22]\n",
"✗ [22, 23]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [22, 24, 25, 26]\n",
"✗ [22, 23, 25, 26]\n",
"✗ [22, 23, 24, 26]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✓ [7, 23, 24, 25, 26]\n",
"✗ [7]\n",
"✗ [7, 23]\n",
"✗ [7, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [7, 24, 25, 26]\n",
"✗ [7, 23, 25, 26]\n",
"✗ [7, 23, 24, 26]\n",
"✗ [7, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✓ [5, 23, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 23]\n",
"✗ [5, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 23, 25, 26]\n",
"✗ [5, 23, 24, 26]\n",
"✗ [5, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✗ [4, 23, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✓ [5, 7, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 7]\n",
"✗ [5, 7, 24, 25]\n",
"✗ [7, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 7, 24, 26]\n",
"✗ [5, 7, 24, 25]\n",
"✗ [0, 7, 24, 25, 26]\n",
"✗ [1, 7, 24, 25, 26]\n",
"✗ [3, 7, 24, 25, 26]\n",
"✗ [4, 7, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✓ [5, 5, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 24, 25]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 24, 26]\n",
"✗ [5, 5, 24, 25]\n",
"✗ [0, 0, 24, 25, 26]\n",
"✗ [1, 1, 24, 25, 26]\n",
"✗ [3, 3, 24, 25, 26]\n",
"✗ [4, 4, 24, 25, 26]\n",
"✗ [0, 5, 24, 25, 26]\n",
"✗ [1, 5, 24, 25, 26]\n",
"✗ [3, 5, 24, 25, 26]\n",
"✗ [4, 5, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✗ [5, 4, 24, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✓ [5, 5, 7, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 7, 25]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 25]\n",
"✗ [0, 0, 7, 25, 26]\n",
"✗ [1, 1, 7, 25, 26]\n",
"✗ [3, 3, 7, 25, 26]\n",
"✗ [4, 4, 7, 25, 26]\n",
"✗ [0, 5, 7, 25, 26]\n",
"✗ [1, 5, 7, 25, 26]\n",
"✗ [3, 5, 7, 25, 26]\n",
"✗ [4, 5, 7, 25, 26]\n",
"✗ [5, 0, 7, 25, 26]\n",
"✗ [5, 1, 7, 25, 26]\n",
"✗ [5, 3, 7, 25, 26]\n",
"✗ [5, 4, 7, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✓ [5, 5, 5, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 25]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 25]\n",
"✗ [0, 0, 0, 25, 26]\n",
"✗ [1, 1, 1, 25, 26]\n",
"✗ [3, 3, 3, 25, 26]\n",
"✗ [4, 4, 4, 25, 26]\n",
"✗ [0, 5, 5, 25, 26]\n",
"✗ [1, 5, 5, 25, 26]\n",
"✗ [3, 5, 5, 25, 26]\n",
"✗ [4, 5, 5, 25, 26]\n",
"✗ [5, 0, 5, 25, 26]\n",
"✗ [5, 1, 5, 25, 26]\n",
"✗ [5, 3, 5, 25, 26]\n",
"✗ [5, 4, 5, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✗ [5, 5, 4, 25, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✓ [5, 5, 5, 7, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [0, 0, 0, 7, 26]\n",
"✗ [1, 1, 1, 7, 26]\n",
"✗ [3, 3, 3, 7, 26]\n",
"✗ [4, 4, 4, 7, 26]\n",
"✗ [0, 5, 5, 7, 26]\n",
"✗ [1, 5, 5, 7, 26]\n",
"✗ [3, 5, 5, 7, 26]\n",
"✗ [4, 5, 5, 7, 26]\n",
"✗ [5, 0, 5, 7, 26]\n",
"✗ [5, 1, 5, 7, 26]\n",
"✗ [5, 3, 5, 7, 26]\n",
"✗ [5, 4, 5, 7, 26]\n",
"✗ [5, 5, 0, 7, 26]\n",
"✗ [5, 5, 1, 7, 26]\n",
"✗ [5, 5, 3, 7, 26]\n",
"✗ [5, 5, 4, 7, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✓ [5, 5, 5, 5, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 26]\n",
"✗ [1, 1, 1, 1, 26]\n",
"✗ [3, 3, 3, 3, 26]\n",
"✗ [4, 4, 4, 4, 26]\n",
"✗ [0, 5, 5, 5, 26]\n",
"✗ [1, 5, 5, 5, 26]\n",
"✗ [3, 5, 5, 5, 26]\n",
"✗ [4, 5, 5, 5, 26]\n",
"✗ [5, 0, 5, 5, 26]\n",
"✗ [5, 1, 5, 5, 26]\n",
"✗ [5, 3, 5, 5, 26]\n",
"✗ [5, 4, 5, 5, 26]\n",
"✗ [5, 5, 0, 5, 26]\n",
"✗ [5, 5, 1, 5, 26]\n",
"✗ [5, 5, 3, 5, 26]\n",
"✗ [5, 5, 4, 5, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✗ [5, 5, 5, 4, 26]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 7]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 7]\n",
"✗ [1, 1, 1, 1, 7]\n",
"✗ [3, 3, 3, 3, 7]\n",
"✗ [4, 4, 4, 4, 7]\n",
"✗ [0, 5, 5, 5, 7]\n",
"✗ [1, 5, 5, 5, 7]\n",
"✗ [3, 5, 5, 5, 7]\n",
"✗ [4, 5, 5, 5, 7]\n",
"✗ [5, 0, 5, 5, 7]\n",
"✗ [5, 1, 5, 5, 7]\n",
"✗ [5, 3, 5, 5, 7]\n",
"✗ [5, 4, 5, 5, 7]\n",
"✗ [5, 5, 0, 5, 7]\n",
"✗ [5, 5, 1, 5, 7]\n",
"✗ [5, 5, 3, 5, 7]\n",
"✗ [5, 5, 4, 5, 7]\n",
"✗ [5, 5, 5, 0, 7]\n",
"✗ [5, 5, 5, 1, 7]\n",
"✗ [5, 5, 5, 3, 7]\n",
"✗ [5, 5, 5, 4, 7]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"12 shrinks with 264 function calls\n"
]
}
],
"source": [
"show_trace([20 + i for i in range(7)],\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "So what we're going to do is try a simplification first which *creates* that exact right starting condition. Further, it's one that will be potentially very useful even if we don't actually have the situation where we have shared shrinks.\n",
    "\n",
    "What we're going to do is use values from the list as evidence for how complex things need to be. Starting from the smallest, we'll try capping the list at each individual value and see what happens.\n",
"\n",
"As well as being potentially a very rapid shrink, this creates lists with lots of duplicates, which enables the simultaneous shrinking to shine."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
    "def replace_with_simpler(ls):\n",
    "    if not ls:\n",
    "        return\n",
    "    values = set(ls)\n",
    "    values.remove(max(ls))\n",
    "    values = sorted(values)\n",
    "    for v in values:\n",
    "        yield [min(v, l) for l in ls]\n",
    "\n",
    "\n",
    "def shrink6(ls):\n",
    "    yield from shrink_to_prefix(ls)\n",
    "    yield from delete_individual_elements(ls)\n",
    "    yield from replace_with_simpler(ls)\n",
    "    yield from shrink_shared(ls)\n",
    "    yield from shrink_individual_elements(ls)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 21, 22, 23, 24, 25, 26]\n",
"✗ [20]\n",
"✗ [20, 21]\n",
"✗ [20, 21, 22, 23]\n",
"✓ [21, 22, 23, 24, 25, 26]\n",
"✗ [21]\n",
"✗ [21, 22]\n",
"✗ [21, 22, 23, 24]\n",
"✓ [22, 23, 24, 25, 26]\n",
"✗ [22]\n",
"✗ [22, 23]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [22, 24, 25, 26]\n",
"✗ [22, 23, 25, 26]\n",
"✗ [22, 23, 24, 26]\n",
"✗ [22, 23, 24, 25]\n",
"✓ [22, 22, 22, 22, 22]\n",
"✗ [22]\n",
"✗ [22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [7, 7, 7, 7, 7]\n",
"✗ [7]\n",
"✗ [7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"5 shrinks with 73 function calls\n"
]
}
],
"source": [
"show_trace([20 + i for i in range(7)],\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we're going to start looking at some numbers.\n",
"\n",
"What we'll do is we'll generate 1000 random lists satisfying some predicate, and then simplify them down to the smallest possible examples satisfying those predicates. This lets us verify that these aren't just cherry-picked examples and our methods help in the general case. We fix the set of examples per predicate so that we're comparing like for like.\n",
"\n",
"A more proper statistical treatment would probably be a good idea."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from collections import OrderedDict\n",
"\n",
    "conditions = OrderedDict([\n",
    "    (\"length >= 2\", lambda xs: len(xs) >= 2),\n",
    "    (\"sum >= 500\", lambda xs: sum(xs) >= 500),\n",
    "    (\"sum >= 3\", lambda xs: sum(xs) >= 3),\n",
    "    (\"At least 10 by 5\", lambda xs: len(\n",
    "        [t for t in xs if t >= 5]) >= 10),\n",
    "])"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[17861213645196285187,\n",
" 15609796832515195084,\n",
" 8808697621832673046,\n",
" 1013319847337885109,\n",
" 1252281976438780211,\n",
" 15526909770962854196,\n",
" 2065337703776048239,\n",
" 11654092230944134701,\n",
" 5554896851708700201,\n",
" 17485190250805381572,\n",
" 7700396730246958474,\n",
" 402840882133605445,\n",
" 5303116940477413125,\n",
" 7459257850255946545,\n",
" 10349184495871650178,\n",
" 4361155591615075311,\n",
" 15194020468024244632,\n",
" 14428821588688846242,\n",
" 5754975712549869618,\n",
" 13740966788951413307,\n",
" 15209704957418077856,\n",
" 12562588328524673262,\n",
" 8415556016795311987,\n",
" 3993098291779210741,\n",
" 16874756914619597640,\n",
" 7932421182532982309,\n",
" 1080869529149674704,\n",
" 13878842261614060122,\n",
" 229976195287031921,\n",
" 8378461140013520338,\n",
" 6189522326946191255,\n",
" 16684625600934047114,\n",
" 12533448641134015292,\n",
" 10459192142175991903,\n",
" 15688511015570391481,\n",
" 3091340728247101611,\n",
" 4034760776171697910,\n",
" 6258572097778886531,\n",
" 13555449085571665140,\n",
" 6727488149749641424,\n",
" 7125107819562430884,\n",
" 1557872425804423698,\n",
" 4810250441100696888,\n",
" 10500486959813930693,\n",
" 841300069403644975,\n",
" 9278626999406014662,\n",
" 17219731431761688449,\n",
" 15650446646901259126,\n",
" 8683172055034528265,\n",
" 5138373693056086816,\n",
" 4055877702343936882,\n",
" 5696765901584750542,\n",
" 7133363948804979946,\n",
" 988518370429658551,\n",
" 16302597472193523184,\n",
" 579078764159525857,\n",
" 10678347012503400890,\n",
" 8433836779160269996,\n",
" 13884258181758870664,\n",
" 13594877609651310055]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "import random\n",
    "\n",
    "N_EXAMPLES = 1000\n",
    "\n",
    "datasets = {}\n",
    "\n",
    "def gen_list(rnd):\n",
    "    # Use the seeded Random instance so the dataset for each\n",
    "    # condition is fixed across runs.\n",
    "    return [\n",
    "        rnd.getrandbits(64)\n",
    "        for _ in range(rnd.randint(0, 100))\n",
    "    ]\n",
    "\n",
    "def dataset_for(condition):\n",
    "    if condition in datasets:\n",
    "        return datasets[condition]\n",
    "    constraint = conditions[condition]\n",
    "    dataset = []\n",
    "    rnd = random.Random(condition)\n",
    "    while len(dataset) < N_EXAMPLES:\n",
    "        ls = gen_list(rnd)\n",
    "        if constraint(ls):\n",
    "            dataset.append(ls)\n",
    "    datasets[condition] = dataset\n",
    "    return dataset\n",
    "\n",
    "dataset_for(\"sum >= 3\")[1]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"13"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "# In order to avoid run-away cases where things will take basically forever\n",
    "# we cap at 5000 as \"you've taken too long. Stop it\". Because we're only ever\n",
    "# showing the worst case scenario we'll just display this as > 5000 if we ever\n",
    "# hit it and it won't distort statistics.\n",
    "MAX_COUNT = 5000\n",
    "\n",
    "class MaximumCountExceeded(Exception):\n",
    "    pass\n",
    "\n",
    "def call_counts(condition, simplifier):\n",
    "    constraint = conditions[condition]\n",
    "    dataset = dataset_for(condition)\n",
    "    counts = []\n",
    "\n",
    "    for ex in dataset:\n",
    "        counter = [0]\n",
    "\n",
    "        def run_and_count(ls):\n",
    "            counter[0] += 1\n",
    "            if counter[0] > MAX_COUNT:\n",
    "                raise MaximumCountExceeded()\n",
    "            return constraint(ls)\n",
    "\n",
    "        try:\n",
    "            simplifier(ex, run_and_count)\n",
    "            counts.extend(counter)\n",
    "        except MaximumCountExceeded:\n",
    "            counts.append(MAX_COUNT + 1)\n",
    "            break\n",
    "    return counts\n",
    "\n",
    "def worst_case(condition, simplifier):\n",
    "    return max(call_counts(condition, simplifier))\n",
    "\n",
    "worst_case(\n",
    "    \"length >= 2\",\n",
    "    partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
"\n",
    "def compare_simplifiers(named_simplifiers):\n",
    "    \"\"\"\n",
    "    Given a list of (name, simplifier) pairs, output a table comparing\n",
    "    the worst case performance of each on our current set of examples.\n",
    "    \"\"\"\n",
    "    html_fragments = []\n",
    "    html_fragments.append(\"<table>\\n<thead>\\n<tr>\")\n",
    "    header = [\"Condition\"]\n",
    "    header.extend(name for name, _ in named_simplifiers)\n",
    "    for h in header:\n",
    "        html_fragments.append(\"<th>%s</th>\" % (h,))\n",
    "    html_fragments.append(\"</tr>\\n</thead>\\n<tbody>\")\n",
    "\n",
    "    for name in conditions:\n",
    "        bits = [name.replace(\">\", \"&gt;\")]\n",
    "        for _, simplifier in named_simplifiers:\n",
    "            value = worst_case(name, simplifier)\n",
    "            if value <= MAX_COUNT:\n",
    "                bits.append(str(value))\n",
    "            else:\n",
    "                bits.append(\" &gt; %d\" % (MAX_COUNT,))\n",
    "        html_fragments.append(\"<tr>\")\n",
    "        html_fragments.append(' '.join(\n",
    "            \"<td>%s</td>\" % (b,) for b in bits))\n",
    "        html_fragments.append(\"</tr>\")\n",
    "    html_fragments.append(\"\\n</tbody>\\n</table>\")\n",
    "    return HTML('\\n'.join(html_fragments))"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
    "<table>\n",
    "<thead>\n",
    "<tr>\n",
    "<th>Condition</th>\n",
    "<th>2</th>\n",
    "<th>3</th>\n",
    "<th>4</th>\n",
    "<th>5</th>\n",
    "<th>6</th>\n",
    "</tr>\n",
    "</thead>\n",
    "<tbody>\n",
    "<tr>\n",
    "<td>length &gt;= 2</td> <td>106</td> <td>105</td> <td>13</td> <td>13</td> <td>13</td>\n",
    "</tr>\n",
    "<tr>\n",
    "<td>sum &gt;= 500</td> <td>1102</td> <td>178</td> <td>80</td> <td>80</td> <td>80</td>\n",
    "</tr>\n",
    "<tr>\n",
    "<td>sum &gt;= 3</td> <td>108</td> <td>107</td> <td>9</td> <td>9</td> <td>9</td>\n",
    "</tr>\n",
    "<tr>\n",
    "<td>At least 10 by 5</td> <td>535</td> <td>690</td> <td>809</td> <td>877</td> <td>144</td>\n",
    "</tr>\n",
    "\n",
    "</tbody>\n",
    "</table>"
],
"text/plain": [
    "<IPython.core.display.HTML object>"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (f.__name__[-1], partial(greedy_shrink, shrink=f))\n",
" for f in [shrink2, shrink3, shrink4, shrink5, shrink6]\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "As you can see from the above table, iterations 2 through 5 were a little ambiguous in that they helped a lot in the cases they were designed for but hurt in other cases. Iteration 6, however, is clearly the best of the lot: it is no worse than any of the others on any of the cases and is often significantly better.\n",
"\n",
"Rather than continuing to refine our shrink further, we instead look to improvements to how we use shrinking. We'll start by noting a simple optimization: If you look at our traces above, we often checked the same example twice. We're only interested in deterministic conditions, so this isn't useful to do. So we'll start by simply pruning out all duplicates. This should have exactly the same set and order of successful shrinks but will avoid a bunch of redundant work."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def greedy_shrink_with_dedupe(ls, constraint, shrink):\n",
"    seen = set()\n",
"    while True:\n",
"        for s in shrink(ls):\n",
"            key = tuple(s)\n",
"            if key in seen:\n",
"                continue\n",
"            seen.add(key)\n",
"            if constraint(s):\n",
"                ls = s\n",
"                break\n",
"        else:\n",
"            return ls"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<thead>\n",
"<tr>\n",
"<th>Condition</th>\n",
"<th>Normal</th>\n",
"<th>Deduped</th>\n",
"</tr>\n",
"</thead>\n",
"<tbody>\n",
"<tr>\n",
"<td>length >= 2</td> <td>13</td> <td>6</td>\n",
"</tr>\n",
"<tr>\n",
"<td>sum >= 500</td> <td>80</td> <td>35</td>\n",
"</tr>\n",
"<tr>\n",
"<td>sum >= 3</td> <td>9</td> <td>6</td>\n",
"</tr>\n",
"<tr>\n",
"<td>At least 10 by 5</td> <td>144</td> <td>107</td>\n",
"</tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
"    (\"Normal\", partial(greedy_shrink, shrink=shrink6)),\n",
"    (\"Deduped\", partial(greedy_shrink_with_dedupe,\n",
"                shrink=shrink6)),\n",
"\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As expected, this is a significant improvement in some cases. It is logically impossible for it to ever make things worse, and it's nice to see that it makes things better.\n",
"\n",
"So far we've only looked at conditions where the interaction between elements was fairly light - in the sum cases the values of other elements mattered a bit, but shrinking one integer could never enable other shrinks. Let's look at one where this is not the case: where our condition is that we have at least 10 distinct elements."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 101]\n",
"✗ [0, 101, 102, 103]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [0, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [0, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 102, 103]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107]\n",
"✗ [1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [0, 1, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 3, 103]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107]\n",
"✗ [1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 3, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 3, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 3, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 3, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 103]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107]\n",
"✗ [1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 2, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 2, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107]\n",
"✗ [1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 2, 3, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107]\n",
"✗ [1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 7, 7, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 7, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107]\n",
"✗ [1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 5, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 5, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107]\n",
"✗ [1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107]\n",
"✗ [1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 7, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107]\n",
"✗ [1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107]\n",
"✗ [1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107]\n",
"✗ [1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15, 15]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11, 11]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9, 9]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 848 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This does not do very well at all.\n",
"\n",
"The reason it doesn't is that we keep trying useless shrinks. e.g. none of the shrinks done by shrink\\_to\\_prefix, replace\\_with\\_simpler or shrink\\_shared will ever do anything useful here.\n",
"\n",
"So lets switch to an approach where we try shrink types until they stop working and then we move on to the next type:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink1(ls, constraint):\n",
" seen = set()\n",
" for shrink in [\n",
" shrink_to_prefix,\n",
" replace_with_simpler,\n",
" shrink_shared,\n",
" shrink_individual_elements,\n",
" ]:\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" return ls"
]
},
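{
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Aside.) The `while True: for ... else: break` pattern above is doing fixed-point iteration: each pass restarts as soon as a candidate satisfies the constraint, and the `else` branch, which runs only when the `for` loop completes without `break`ing, detects that a full pass made no progress. A minimal standalone sketch of the same idiom, using a hypothetical `drop_each` pass rather than the notebook's shrinkers:"
   ]
  },

```python
def run_pass_to_fixed_point(ls, constraint, shrink):
    # Keep re-running a single shrink pass until it stops finding
    # improvements that still satisfy the constraint.
    while True:
        for s in shrink(ls):
            if constraint(s):
                ls = s  # improvement found: restart the pass from here
                break
        else:
            return ls  # a full pass found nothing: fixed point reached

def drop_each(ls):
    # One candidate per element, with that element deleted.
    for i in range(len(ls)):
        yield ls[:i] + ls[i + 1:]

print(run_pass_to_fixed_point([1, 2, 3, 4], lambda x: sum(x) >= 5, drop_each))
# → [3, 4]
```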
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 318 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" multicourse_shrink1)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"conditions[\"10 distinct elements\"] = lambda xs: len(set(xs)) >= 10"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 4 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 34 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 5 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 58 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 320 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink1)\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So that helped, but not as much as we'd have liked. It's saved us about half the calls, when really we wanted to save 90% of the calls.\n",
"\n",
"We're on the right track though. The problem is not that our solution isn't good, it's that it didn't go far enough: We're *still* making an awful lot of useless calls. The problem is that each time we shrink the element at index i we try shrinking the elements at indexes 0 through i - 1, and this will never work. So what we want to do is to break shrinking elements into a separate shrinker for each index:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def simplify_index(i):\n",
" def accept(ls):\n",
" if i >= len(ls):\n",
" return\n",
" for v in shrink_integer(ls[i]):\n",
" s = list(ls)\n",
" s[i] = v\n",
" yield s\n",
" return accept\n",
"\n",
"def shrinkers_for(ls):\n",
" yield shrink_to_prefix\n",
" yield delete_individual_elements\n",
" yield replace_with_simpler\n",
" yield shrink_shared\n",
" for i in range(len(ls)):\n",
" yield simplify_index(i)\n",
"\n",
"def multicourse_shrink2(ls, constraint):\n",
" seen = set()\n",
" for shrink in shrinkers_for(ls):\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 75 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" multicourse_shrink2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This worked great! It saved us a huge number of function calls.\n",
"\n",
"Unfortunately it's wrong. Actually the previous one was wrong too, but this one is more obviously wrong. The problem is that shrinking later elements can unlock more shrinks for earlier elements and we'll never be able to benefit from that here:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [101, 100]\n",
"✗ [101]\n",
"✗ [100]\n",
"✗ [100, 100]\n",
"✗ [0, 100]\n",
"✗ [1, 100]\n",
"✗ [3, 100]\n",
"✗ [7, 100]\n",
"✗ [15, 100]\n",
"✗ [31, 100]\n",
"✗ [63, 100]\n",
"✗ [82, 100]\n",
"✗ [91, 100]\n",
"✗ [96, 100]\n",
"✗ [98, 100]\n",
"✗ [99, 100]\n",
"✓ [101, 0]\n",
"\n",
"1 shrinks with 16 function calls\n"
]
}
],
"source": [
"show_trace([101, 100],\n",
" lambda x: len(x) >= 2 and x[0] > x[1],\n",
" multicourse_shrink2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Armed with this example we can also show an example where the previous one is wrong because a later simplification unlocks an earlier one because shrinking values allows us to delete more elements:"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0, 0, 0]\n",
"\n",
"2 shrinks with 5 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink1)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"conditions[\"First > Second\"] = lambda xs: len(xs) >= 2 and xs[0] > xs[1]"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Note: We modify this to mask off the high bits because otherwise the probability of\n",
"# hitting the condition at random is too low.\n",
"conditions[\"Size > max & 63\"] = lambda xs: xs and len(xs) > (max(xs) & 63)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what we'll try doing is iterating this to a fixed point and see what happens:"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink3(ls, constraint):\n",
" seen = set()\n",
" while True:\n",
" old_ls = ls\n",
" for shrink in shrinkers_for(ls):\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if ls == old_ls:\n",
" return ls"
]
},
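{
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(The \"N shrinks with M function calls\" lines come from instrumentation inside `show_trace`, defined earlier in the notebook. A minimal sketch of one way such counting can be done, as a hypothetical wrapper rather than the notebook's actual helper:)"
   ]
  },

```python
def counting(constraint):
    # Wrap a constraint so the number of evaluations can be read
    # off afterwards via the .calls attribute.
    def wrapped(ls):
        wrapped.calls += 1
        return constraint(ls)
    wrapped.calls = 0
    return wrapped

cond = counting(lambda xs: len(xs) >= 2)
cond([1])
cond([1, 2])
print(cond.calls)  # → 2
```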
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [101, 100]\n",
"✗ [101]\n",
"✗ [100]\n",
"✗ [100, 100]\n",
"✗ [0, 100]\n",
"✗ [1, 100]\n",
"✗ [3, 100]\n",
"✗ [7, 100]\n",
"✗ [15, 100]\n",
"✗ [31, 100]\n",
"✗ [63, 100]\n",
"✗ [82, 100]\n",
"✗ [91, 100]\n",
"✗ [96, 100]\n",
"✗ [98, 100]\n",
"✗ [99, 100]\n",
"✓ [101, 0]\n",
"✗ [0]\n",
"✗ [0, 0]\n",
"✓ [1, 0]\n",
"✗ [1]\n",
"\n",
"2 shrinks with 20 function calls\n"
]
}
],
"source": [
"show_trace([101, 100],\n",
" lambda xs: len(xs) >= 2 and xs[0] > xs[1],\n",
" multicourse_shrink3)"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0]\n",
"✓ [0]\n",
"✗ []\n",
"\n",
"5 shrinks with 10 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So that worked. Yay!\n",
"\n",
"Lets compare how this does to our single pass implementation."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3)\n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So the answer is generally favourably but *ouch* that last one.\n",
"\n",
"What's happening there is that because later shrinks are opening up potentially very large improvements accessible to the lower shrinks, the original greedy algorithm can exploit that much better, while the multi pass algorithm spends a lot of time in the later stages with their incremental shrinks.\n",
"\n",
"Lets see another similar example before we try to fix this:"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import hashlib\n",
"\n",
"conditions[\"Messy\"] = lambda xs: hashlib.md5(repr(xs).encode('utf-8')).hexdigest()[0] == '0'"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3)\n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This one is a bit different in that the problem is not that the structure is one we're ill suited to exploiting, it's that there is no structure at all so we have no hope of exploiting it. Literally any change at all will unlock earlier shrinks we could have done.\n",
"\n",
"What we're going to try to do is hybridize the two approaches. If we notice we're performing an awful lot of shrinks we can take that as a hint that we should be trying again from earlier stages.\n",
"\n",
"Here is our first approach. We simply restart the whole process every five shrinks:"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"MAX_SHRINKS_PER_RUN = 2\n",
"\n",
"\n",
"def multicourse_shrink4(ls, constraint):\n",
" seen = set()\n",
" while True:\n",
" old_ls = ls\n",
" shrinks_this_run = 0\n",
" for shrink in shrinkers_for(ls):\n",
" while shrinks_this_run < MAX_SHRINKS_PER_RUN:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" shrinks_this_run += 1\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if ls == old_ls:\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"Multi pass with restart | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | 90 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | 396 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | 1463 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | > 5000 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | 1423 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3),\n",
" (\"Multi pass with restart\", multicourse_shrink4) \n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That works OK, but it's pretty unsatisfying as it loses us most of the benefits of the multi pass shrinking - we're now at most twice as good as the greedy one.\n",
"\n",
"So what we're going to do is bet on the multi pass working and then gradually degrade to the greedy algorithm as it fails to work."
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink5(ls, constraint):\n",
" seen = set()\n",
" max_shrinks_per_run = 10\n",
" while True:\n",
" shrinks_this_run = 0\n",
" for shrink in shrinkers_for(ls):\n",
" while shrinks_this_run < max_shrinks_per_run:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" shrinks_this_run += 1\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if max_shrinks_per_run > 1:\n",
" max_shrinks_per_run -= 2\n",
" if not shrinks_this_run:\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0]\n",
"✓ [0]\n",
"✗ []\n",
"\n",
"5 shrinks with 10 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink5)"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"Multi pass with restart | \n",
"Multi pass with variable restart | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | 90 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | 396 | 212 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | 1463 | 1168 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | > 5000 | 1002 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | 1423 | 824 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3), \n",
" (\"Multi pass with restart\", multicourse_shrink4),\n",
" (\"Multi pass with variable restart\", multicourse_shrink5) \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is now more or less the current state of the art (it's actually a bit different from the Hypothesis state of the art at the time of this writing. I'm planning to merge some of the things I figured out in the course of writing this back in). We've got something that is able to adaptively take advantage of structure where it is present, but degrades reasonably gracefully back to the more aggressive version that works better in unstructured examples.\n",
"\n",
"Surprisingly, on some examples it seems to even be best of all of them. I think that's more coincidence than truth though."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
hypothesis-python-3.44.1/requirements/ 0000775 0000000 0000000 00000000000 13215577651 0020060 5 ustar 00root root 0000000 0000000 hypothesis-python-3.44.1/requirements/benchmark.in 0000664 0000000 0000000 00000000041 13215577651 0022335 0 ustar 00root root 0000000 0000000 attrs
click
numpy
scipy
coverage
hypothesis-python-3.44.1/requirements/benchmark.txt 0000664 0000000 0000000 00000000334 13215577651 0022553 0 ustar 00root root 0000000 0000000 #
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements/benchmark.txt requirements/benchmark.in
#
attrs==17.3.0
click==6.7
coverage==4.4.2
numpy==1.13.3
scipy==1.0.0
hypothesis-python-3.44.1/requirements/coverage.in 0000664 0000000 0000000 00000000033 13215577651 0022177 0 ustar 00root root 0000000 0000000 numpy
coverage
pytz
pandas
hypothesis-python-3.44.1/requirements/coverage.txt 0000664 0000000 0000000 00000000447 13215577651 0022421 0 ustar 00root root 0000000 0000000 #
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements/coverage.txt requirements/coverage.in
#
coverage==4.4.2
numpy==1.13.3
pandas==0.21.0
python-dateutil==2.6.1 # via pandas
pytz==2017.3
six==1.11.0 # via python-dateutil
hypothesis-python-3.44.1/requirements/test.in 0000664 0000000 0000000 00000000045 13215577651 0021366 0 ustar 00root root 0000000 0000000 flaky
pytest
pytest-xdist
mock
attrs
hypothesis-python-3.44.1/requirements/test.txt 0000664 0000000 0000000 00000000772 13215577651 0021606 0 ustar 00root root 0000000 0000000 #
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements/test.txt requirements/test.in
#
apipkg==1.4 # via execnet
attrs==17.3.0
execnet==1.5.0 # via pytest-xdist
flaky==3.4.0
mock==2.0.0
pbr==3.1.1 # via mock
pluggy==0.6.0 # via pytest
py==1.5.2 # via pytest
pytest-forked==0.2 # via pytest-xdist
pytest-xdist==1.20.1
pytest==3.3.1
six==1.11.0 # via mock, pytest
hypothesis-python-3.44.1/requirements/tools.in 0000664 0000000 0000000 00000000165 13215577651 0021552 0 ustar 00root root 0000000 0000000 flake8
isort
pip-tools
pyformat
pytest
restructuredtext-lint
Sphinx
sphinx-rtd-theme
tox
twine
attrs
coverage
pyupio
hypothesis-python-3.44.1/requirements/tools.txt 0000664 0000000 0000000 00000004271 13215577651 0021765 0 ustar 00root root 0000000 0000000 #
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements/tools.txt requirements/tools.in
#
alabaster==0.7.10 # via sphinx
attrs==17.3.0
autoflake==1.0 # via pyformat
autopep8==1.3.3 # via pyformat
babel==2.5.1 # via sphinx
certifi==2017.11.5 # via requests
chardet==3.0.4 # via requests
click==6.7 # via pip-tools, pyupio, safety
coverage==4.4.2
docformatter==0.8 # via pyformat
docutils==0.14 # via restructuredtext-lint, sphinx
dparse==0.2.1 # via pyupio, safety
first==2.0.1 # via pip-tools
flake8==3.5.0
hashin-pyup==0.7.2 # via pyupio
idna==2.6 # via requests
imagesize==0.7.1 # via sphinx
isort==4.2.15
jinja2==2.10 # via pyupio, sphinx
markupsafe==1.0 # via jinja2
mccabe==0.6.1 # via flake8
packaging==16.8 # via dparse, pyupio, safety
pip-tools==1.11.0
pkginfo==1.4.1 # via twine
pluggy==0.6.0 # via pytest, tox
py==1.5.2 # via pytest, tox
pycodestyle==2.3.1 # via autopep8, flake8
pyflakes==1.6.0 # via autoflake, flake8
pyformat==0.7
pygithub==1.35 # via pyupio
pygments==2.2.0 # via sphinx
pyjwt==1.5.3 # via pygithub
pyparsing==2.2.0 # via packaging
pytest==3.3.1
python-gitlab==1.1.0 # via pyupio
pytz==2017.3 # via babel
pyupio==0.8.2
pyyaml==3.12 # via dparse, pyupio
requests-toolbelt==0.8.0 # via twine
requests==2.18.4 # via python-gitlab, pyupio, requests-toolbelt, safety, sphinx, twine
restructuredtext-lint==1.1.2
safety==1.6.1 # via pyupio
six==1.11.0 # via dparse, packaging, pip-tools, pytest, python-gitlab, pyupio, sphinx, tox
snowballstemmer==1.2.1 # via sphinx
sphinx-rtd-theme==0.2.4
sphinx==1.6.5
sphinxcontrib-websupport==1.0.1 # via sphinx
tox==2.9.1
tqdm==4.19.5 # via pyupio, twine
twine==1.9.1
unify==0.4 # via pyformat
untokenize==0.1.1 # via docformatter, unify
urllib3==1.22 # via requests
virtualenv==15.1.0 # via tox
hypothesis-python-3.44.1/requirements/typing.in 0000664 0000000 0000000 00000000007 13215577651 0021717 0 ustar 00root root 0000000 0000000 typing
hypothesis-python-3.44.1/requirements/typing.txt 0000664 0000000 0000000 00000000240 13215577651 0022127 0 ustar 00root root 0000000 0000000 #
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements/typing.txt requirements/typing.in
#
typing==3.6.2
hypothesis-python-3.44.1/scripts/ 0000775 0000000 0000000 00000000000 13215577651 0017024 5 ustar 00root root 0000000 0000000 hypothesis-python-3.44.1/scripts/basic-test.sh 0000775 0000000 0000000 00000003255 13215577651 0021426 0 ustar 00root root 0000000 0000000 #!/bin/bash
set -e -o xtrace
# We run a reduced set of tests on OSX mostly so the CI runs in vaguely
# reasonable time.
if [[ "$(uname -s)" == 'Darwin' ]]; then
DARWIN=true
else
DARWIN=false
fi
python -c '
import os
for k, v in sorted(dict(os.environ).items()):
print("%s=%s" % (k, v))
'
pip install .
PYTEST="python -m pytest"
$PYTEST tests/cover
COVERAGE_TEST_TRACER=timid $PYTEST tests/cover
if [ "$(python -c 'import sys; print(sys.version_info[0] == 2)')" = "True" ] ; then
$PYTEST tests/py2
else
$PYTEST tests/py3
fi
$PYTEST --runpytest=subprocess tests/pytest
pip install ".[datetime]"
$PYTEST tests/datetime/
pip uninstall -y pytz
if [ "$DARWIN" = true ]; then
exit 0
fi
if [ "$(python -c 'import sys; print(sys.version_info[:2] in ((2, 7), (3, 6)))')" = "False" ] ; then
exit 0
fi
for f in tests/nocover/test_*.py; do
$PYTEST "$f"
done
# fake-factory doesn't have a correct universal wheel
pip install --no-binary :all: faker
$PYTEST tests/fakefactory/
pip uninstall -y faker
if [ "$(python -c 'import platform; print(platform.python_implementation())')" != "PyPy" ]; then
if [ "$(python -c 'import sys; print(sys.version_info[0] == 2 or sys.version_info[:2] >= (3, 4))')" == "True" ] ; then
pip install .[django]
HYPOTHESIS_DJANGO_USETZ=TRUE python -m tests.django.manage test tests.django
HYPOTHESIS_DJANGO_USETZ=FALSE python -m tests.django.manage test tests.django
pip uninstall -y django
fi
if [ "$(python -c 'import sys; print(sys.version_info[:2] in ((2, 7), (3, 6)))')" = "True" ] ; then
pip install numpy
$PYTEST tests/numpy
pip install pandas
$PYTEST tests/pandas
pip uninstall -y numpy pandas
fi
fi
hypothesis-python-3.44.1/scripts/benchmarks.py 0000664 0000000 0000000 00000037764 13215577651 0021534 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
# pylint: skip-file
from __future__ import division, print_function, absolute_import
import os
import sys
import json
import zlib
import base64
import random
import hashlib
from collections import OrderedDict
import attr
import click
import numpy as np
import hypothesis.strategies as st
import hypothesis.extra.numpy as npst
from hypothesis import settings, unlimited
from hypothesis.errors import UnsatisfiedAssumption
from hypothesis.internal.conjecture.engine import ConjectureRunner
ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_DIR = os.path.join(
ROOT,
'benchmark-data',
)
BENCHMARK_SETTINGS = settings(
max_examples=100, max_iterations=1000, max_shrinks=1000,
database=None, timeout=unlimited, use_coverage=False,
perform_health_check=False,
)
BENCHMARKS = OrderedDict()
@attr.s()
class Benchmark(object):
name = attr.ib()
strategy = attr.ib()
valid = attr.ib()
interesting = attr.ib()
@attr.s()
class BenchmarkData(object):
sizes = attr.ib()
seed = attr.ib(default=0)
STRATEGIES = OrderedDict([
('ints', st.integers()),
('intlists', st.lists(st.integers())),
('sizedintlists', st.integers(0, 10).flatmap(
lambda n: st.lists(st.integers(), min_size=n, max_size=n))),
('text', st.text()),
('text5', st.text(min_size=5)),
('arrays10', npst.arrays('int8', 10)),
('arraysvar', npst.arrays('int8', st.integers(0, 10))),
])
def define_benchmark(strategy_name, valid, interesting):
name = '%s-valid=%s-interesting=%s' % (
strategy_name, valid.__name__, interesting.__name__)
assert name not in BENCHMARKS
strategy = STRATEGIES[strategy_name]
BENCHMARKS[name] = Benchmark(name, strategy, valid, interesting)
def always(seed, testdata, value):
return True
def never(seed, testdata, value):
return False
def nontrivial(seed, testdata, value):
return sum(testdata.buffer) >= 255
def sometimes(p, name=None):
def accept(seed, testdata, value):
hasher = hashlib.md5()
hasher.update(testdata.buffer)
hasher.update(seed)
return random.Random(hasher.digest()).random() <= p
accept.__name__ = name or 'sometimes(%r)' % (p,)
return accept
def array_average(seed, testdata, value):
if np.prod(value.shape) == 0:
return False
avg = random.Random(seed).randint(0, 255)
return value.mean() >= avg
def lower_bound(seed, testdata, value):
"""Benchmarking condition for testing the lexicographic minimization aspect
of test case reduction.
This lets us test for the sort of behaviour that happens when we
e.g. have a lower bound on an integer, but in more generality.
"""
# We implicitly define an infinite stream of bytes, and compare the buffer
# of the testdata object with the prefix of the stream of the same length.
# If it is >= that prefix we accept the testdata, if not we reject it.
rnd = random.Random(seed)
for b in testdata.buffer:
c = rnd.randint(0, 255)
if c < b:
return True
if c > b:
return False
return True
def size_lower_bound(seed, testdata, value):
rnd = random.Random(seed)
return len(testdata.buffer) >= rnd.randint(1, 50)
usually = sometimes(0.9, 'usually')
def minsum(seed, testdata, value):
return sum(value) >= 1000
def has_duplicates(seed, testdata, value):
return len(set(value)) < len(value)
for k in STRATEGIES:
define_benchmark(k, always, never)
define_benchmark(k, always, always)
define_benchmark(k, always, usually)
define_benchmark(k, always, lower_bound)
define_benchmark(k, always, size_lower_bound)
define_benchmark(k, usually, size_lower_bound)
define_benchmark('intlists', always, minsum)
define_benchmark('intlists', always, has_duplicates)
define_benchmark('intlists', has_duplicates, minsum)
for p in [always, usually]:
define_benchmark('arrays10', p, array_average)
define_benchmark('arraysvar', p, array_average)
def run_benchmark_for_sizes(benchmark, n_runs):
click.echo('Calculating data for %s' % (benchmark.name,))
total_sizes = []
with click.progressbar(range(n_runs)) as runs:
for _ in runs:
sizes = []
valid_seed = random.getrandbits(64).to_bytes(8, 'big')
interesting_seed = random.getrandbits(64).to_bytes(8, 'big')
def test_function(data):
try:
try:
value = data.draw(benchmark.strategy)
except UnsatisfiedAssumption:
data.mark_invalid()
if not data.frozen:
if not benchmark.valid(valid_seed, data, value):
data.mark_invalid()
if benchmark.interesting(
interesting_seed, data, value
):
data.mark_interesting()
finally:
sizes.append(len(data.buffer))
engine = ConjectureRunner(
test_function, settings=BENCHMARK_SETTINGS, random=random
)
engine.run()
assert len(sizes) > 0
total_sizes.append(sum(sizes))
return total_sizes
def benchmark_difference_p_value(existing, recent):
"""This is a bootstrapped permutation test for the difference of means.
Under the null hypothesis that the two sides come from the same
distribution, we can randomly reassign values to different populations and
see how large a difference in mean we get. This gives us a p-value for our
actual observed difference in mean by counting the fraction of times our
resampling got a value that large.
See https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests
for details.
"""
rnd = random.Random(0)
threshold = abs(np.mean(existing) - np.mean(recent))
n = len(existing)
n_runs = 1000
greater = 0
all_values = existing + recent
for _ in range(n_runs):
rnd.shuffle(all_values)
l = all_values[:n]
r = all_values[n:]
score = abs(np.mean(l) - np.mean(r))
if score >= threshold:
greater += 1
return greater / n_runs
def benchmark_file(name):
return os.path.join(DATA_DIR, name)
def have_existing_data(name):
return os.path.exists(benchmark_file(name))
EXISTING_CACHE = {}
BLOBSTART = 'START'
BLOBEND = 'END'
def existing_data(name):
try:
return EXISTING_CACHE[name]
except KeyError:
pass
fname = benchmark_file(name)
result = None
with open(fname) as i:
for l in i:
l = l.strip()
if not l:
continue
if l.startswith('#'):
continue
key, blob = l.split(': ', 1)
magic, n = key.split(' ')
assert magic == 'Data'
n = int(n)
assert blob.startswith(BLOBSTART)
assert blob.endswith(BLOBEND), blob[-len(BLOBEND) * 2:]
assert len(blob) == n + len(BLOBSTART) + len(BLOBEND)
blob = blob[len(BLOBSTART):len(blob) - len(BLOBEND)]
assert len(blob) == n
result = blob_to_data(blob)
break
assert result is not None
EXISTING_CACHE[name] = result
return result
def data_to_blob(data):
as_json = json.dumps(attr.asdict(data)).encode('utf-8')
compressed = zlib.compress(as_json)
as_base64 = base64.b32encode(compressed)
return as_base64.decode('ascii')
def blob_to_data(blob):
from_base64 = base64.b32decode(blob.encode('ascii'))
decompressed = zlib.decompress(from_base64)
parsed = json.loads(decompressed)
return BenchmarkData(**parsed)
BENCHMARK_HEADER = """
# This is an automatically generated file from Hypothesis's benchmarking
# script (scripts/benchmarks.py).
#
# Lines like this starting with a # are designed to be useful for human
# consumption when reviewing, specifically with a goal of producing
# useful diffs so that you can get a sense of the impact of a change.
#
# This benchmark is for %(strategy_name)s [%(strategy)r], with the validity
# condition "%(valid)s" and the interestingness condition "%(interesting)s".
# See the script for the exact definitions of these criteria.
#
# This benchmark was generated with seed %(seed)d
#
# Key statistics for this benchmark:
#
# * %(count)d examples
# * Mean size: %(mean).2f bytes, standard deviation: %(sd).2f bytes
#
# Additional interesting statistics:
#
# * Ranging from %(min)d [%(nmin)s] to %(max)d [%(nmax)s] bytes.
# * Median size: %(median)d
# * 99%% of examples had at least %(lo)d bytes
# * 99%% of examples had at most %(hi)d bytes
#
# All data after this point is an opaque binary blob. You are not expected
# to understand it.
""".strip()
def times(n):
assert n > 0
if n > 1:
return '%d times' % (n,)
else:
return 'once'
def write_data(name, new_data):
benchmark = BENCHMARKS[name]
strategy_name = [
k for k, v in STRATEGIES.items() if v == benchmark.strategy
][0]
sizes = new_data.sizes
with open(benchmark_file(name), 'w') as o:
o.write(BENCHMARK_HEADER % {
'strategy_name': strategy_name,
'strategy': benchmark.strategy,
'valid': benchmark.valid.__name__,
'interesting': benchmark.interesting.__name__,
'seed': new_data.seed,
'count': len(sizes),
'min': min(sizes),
'nmin': times(sizes.count(min(sizes))),
'nmax': times(sizes.count(max(sizes))),
'max': max(sizes),
'mean': np.mean(sizes),
'sd': np.std(sizes),
'median': int(np.percentile(sizes, 50, interpolation='lower')),
'lo': int(np.percentile(sizes, 1, interpolation='lower')),
'hi': int(np.percentile(sizes, 99, interpolation='higher')),
})
o.write('\n')
o.write('\n')
blob = data_to_blob(new_data)
assert '\n' not in blob
o.write('Data %d: ' % (len(blob),))
o.write(BLOBSTART)
o.write(blob)
o.write(BLOBEND)
o.write('\n')
NONE = 'none'
NEW = 'new'
ALL = 'all'
CHANGED = 'changed'
IMPROVED = 'improved'
@attr.s
class Report(object):
name = attr.ib()
p = attr.ib()
old_mean = attr.ib()
new_mean = attr.ib()
new_data = attr.ib()
new_seed = attr.ib()
def seed_by_int(i):
# Get an actually good seed from an integer, as Random() doesn't guarantee
# similar but distinct seeds giving different distributions.
as_bytes = i.to_bytes(i.bit_length() // 8 + 1, 'big')
digest = hashlib.sha1(as_bytes).digest()
seedint = int.from_bytes(digest, 'big')
random.seed(seedint)
@click.command()
@click.option(
'--nruns', default=200, type=int, help="""
Specify the number of runs of each benchmark to perform. If this is larger than
the number of stored runs then this will result in the existing data treated as
if it were non-existing. If it is smaller, the existing data will be sampled.
""")
@click.argument('benchmarks', nargs=-1)
@click.option('--check/--no-check', default=False)
@click.option('--skip-existing/--no-skip-existing', default=False)
@click.option('--fdr', default=0.0001)
@click.option('--update', type=click.Choice([
NONE, NEW, ALL, CHANGED, IMPROVED
]), default=NEW)
@click.option('--only-update-headers/--full-run', default=False)
def cli(
benchmarks, nruns, check, update, fdr, skip_existing,
only_update_headers,
):
"""This is the benchmark runner script for Hypothesis.
Rather than running benchmarks by *time* this runs benchmarks by
*amount of data*. This is the major determiner of performance in
Hypothesis (other than speed of the end user's tests) and has the
important property that we can benchmark it without reference to the
underlying system's performance.
"""
if check:
if update not in [NONE, NEW]:
raise click.UsageError('check and update cannot be used together')
if skip_existing:
raise click.UsageError(
'check and skip-existing cannot be used together')
if only_update_headers:
raise click.UsageError(
'check and rewrite-only cannot be used together')
if only_update_headers:
for name in BENCHMARKS:
if have_existing_data(name):
write_data(name, existing_data(name))
sys.exit(0)
for name in benchmarks:
if name not in BENCHMARKS:
raise click.UsageError('Invalid benchmark name %s' % (name,))
try:
os.mkdir(DATA_DIR)
except FileExistsError:
pass
last_seed = 0
for name in BENCHMARKS:
if have_existing_data(name):
last_seed = max(existing_data(name).seed, last_seed)
next_seed = last_seed + 1
reports = []
if check:
for name in benchmarks or BENCHMARKS:
if not have_existing_data(name):
click.echo('No existing data for benchmark %s' % (
name,
))
sys.exit(1)
for name in benchmarks or BENCHMARKS:
new_seed = next_seed
next_seed += 1
seed_by_int(new_seed)
if have_existing_data(name):
if skip_existing:
continue
old_data = existing_data(name)
new_data = run_benchmark_for_sizes(BENCHMARKS[name], nruns)
pp = benchmark_difference_p_value(old_data.sizes, new_data)
click.echo(
'%r -> %r. p-value for difference %.5f' % (
np.mean(old_data.sizes), np.mean(new_data), pp,))
reports.append(Report(
name, pp, np.mean(old_data.sizes), np.mean(new_data), new_data,
new_seed=new_seed,
))
if update == ALL:
write_data(name, BenchmarkData(sizes=new_data, seed=new_seed))
elif update != NONE:
new_data = run_benchmark_for_sizes(BENCHMARKS[name], nruns)
write_data(name, BenchmarkData(sizes=new_data, seed=new_seed))
if not reports:
sys.exit(0)
click.echo('Checking for different means')
# We now perform a Benjamini Hochberg test. This gives us a list of
# possibly significant differences while controlling the false discovery
# rate. https://en.wikipedia.org/wiki/False_discovery_rate
reports.sort(key=lambda x: x.p)
threshold = 0
n = len(reports)
for k, report in enumerate(reports, 1):
if report.p <= k * fdr / n:
assert report.p <= fdr
threshold = k
different = reports[:threshold]
if threshold > 0:
click.echo((
'Found %d benchmark%s with significant difference '
'at false discovery rate %r'
) % (
threshold,
's' if threshold > 1 else '',
fdr,
))
if different:
for report in different:
click.echo('Different means for %s: %.2f -> %.2f. p=%.5f' % (
report.name, report.old_mean, report.new_mean, report.p
))
if check:
sys.exit(1)
for r in different:
if update == CHANGED:
write_data(r.name, BenchmarkData(r.new_data, r.new_seed))
elif update == IMPROVED and r.new_mean < r.old_mean:
write_data(r.name, BenchmarkData(r.new_data, r.new_seed))
else:
click.echo('No significant differences')
if __name__ == '__main__':
cli()
hypothesis-python-3.44.1/scripts/build-documentation.sh 0000775 0000000 0000000 00000000546 13215577651 0023336 0 ustar 00root root 0000000 0000000 #!/usr/bin/env bash
set -e
set -u
set -x
SPHINX_BUILD=$1
PYTHON=$2
HERE="$(dirname "$0")"
cd "$HERE"/..
if [ -e RELEASE.rst ] ; then
trap "git checkout docs/changes.rst src/hypothesis/version.py" EXIT
$PYTHON scripts/update-changelog-for-docs.py
fi
export PYTHONPATH=src
$SPHINX_BUILD -W -b html -d docs/_build/doctrees docs docs/_build/html
hypothesis-python-3.44.1/scripts/check-ancient-pip.sh 0000775 0000000 0000000 00000001343 13215577651 0022646 0 ustar 00root root 0000000 0000000 #!/usr/bin/env bash
set -e
set -x
PYTHON=$1
BROKEN_VIRTUALENV=$($PYTHON -c'import tempfile; print(tempfile.mkdtemp())')
trap 'rm -rf $BROKEN_VIRTUALENV' EXIT
rm -rf tmp-dist-dir
$PYTHON setup.py sdist --dist-dir=tmp-dist-dir
$PYTHON -m pip install virtualenv
$PYTHON -m virtualenv "$BROKEN_VIRTUALENV"
"$BROKEN_VIRTUALENV"/bin/pip install -rrequirements/test.txt
# These are versions from debian stable as of 2017-04-21
# See https://packages.debian.org/stable/python/
"$BROKEN_VIRTUALENV"/bin/python -m pip install --upgrade pip==1.5.6
"$BROKEN_VIRTUALENV"/bin/pip install --upgrade setuptools==5.5.1
"$BROKEN_VIRTUALENV"/bin/pip install tmp-dist-dir/*
"$BROKEN_VIRTUALENV"/bin/python -m pytest tests/cover/test_testdecorators.py
hypothesis-python-3.44.1/scripts/check-release-file.py 0000664 0000000 0000000 00000002204 13215577651 0023004 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import hypothesistooling as tools
sys.path.append(os.path.dirname(__file__)) # noqa
if __name__ == '__main__':
if tools.has_source_changes():
if not tools.has_release():
print(
'There are source changes but no RELEASE.rst. Please create '
'one to describe your changes.'
)
sys.exit(1)
tools.parse_release_file()
hypothesis-python-3.44.1/scripts/check_encoding_header.py 0000664 0000000 0000000 00000002256 13215577651 0023636 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
VALID_STARTS = (
'# coding=utf-8',
'#!/usr/bin/env python',
)
if __name__ == '__main__':
import sys
n = max(map(len, VALID_STARTS))
bad = False
for f in sys.argv[1:]:
with open(f, 'r', encoding='utf-8') as i:
start = i.read(n)
if not any(start.startswith(s) for s in VALID_STARTS):
print(
'%s has incorrect start %r' % (f, start), file=sys.stderr)
bad = True
sys.exit(int(bad))
hypothesis-python-3.44.1/scripts/deploy.py 0000664 0000000 0000000 00000012212 13215577651 0020670 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import random
import shutil
import subprocess
from time import time, sleep
import hypothesistooling as tools
sys.path.append(os.path.dirname(__file__)) # noqa
DIST = os.path.join(tools.ROOT, 'dist')
PENDING_STATUS = ('started', 'created')
if __name__ == '__main__':
last_release = tools.latest_version()
print('Current version: %s. Latest released version: %s' % (
tools.__version__, last_release
))
HEAD = tools.hash_for_name('HEAD')
MASTER = tools.hash_for_name('origin/master')
print('Current head:', HEAD)
print('Current master:', MASTER)
on_master = tools.is_ancestor(HEAD, MASTER)
has_release = tools.has_release()
if has_release:
print('Updating changelog and version')
tools.update_for_pending_release()
print('Building an sdist...')
if os.path.exists(DIST):
shutil.rmtree(DIST)
subprocess.check_output([
sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,
])
if not on_master:
print('Not deploying due to not being on master')
sys.exit(0)
if not has_release:
print('Not deploying due to no release')
sys.exit(0)
start_time = time()
prev_pending = None
# We time out after an hour, which is a stupidly long time and it should
# never actually take that long: A full Travis run only takes about 20-30
# minutes! This is really just here as a guard in case something goes
    # wrong and we're not paying attention, so as not to be too mean to Travis.
while time() <= start_time + 60 * 60:
jobs = tools.build_jobs()
failed_jobs = [
(k, v)
for k, vs in jobs.items()
if k not in PENDING_STATUS + ('passed',)
for v in vs
]
if failed_jobs:
print('Failing this due to failure of jobs %s' % (
', '.join('%s(%s)' % (s, j) for j, s in failed_jobs),
))
sys.exit(1)
else:
pending = [j for s in PENDING_STATUS for j in jobs.get(s, ())]
try:
# This allows us to test the deploy job for a build locally.
pending.remove('deploy')
except ValueError:
pass
if pending:
still_pending = set(pending)
if prev_pending is None:
print('Waiting for the following jobs to complete:')
for p in sorted(still_pending):
print(' * %s' % (p,))
print()
else:
completed = prev_pending - still_pending
if completed:
print('%s completed since last check.' % (
', '.join(sorted(completed)),))
prev_pending = still_pending
naptime = 10.0 * (2 + random.random())
print('Waiting %.2fs for %d more job%s to complete' % (
naptime, len(pending), 's' if len(pending) > 1 else '',))
sleep(naptime)
else:
break
else:
print("We've been waiting for an hour. That seems bad. Failing now.")
sys.exit(1)
print('Looks good to release!')
if os.environ.get('TRAVIS_SECURE_ENV_VARS', None) != 'true':
print("But we don't have the keys to do it")
sys.exit(0)
print('Decrypting secrets')
# We'd normally avoid the use of shell=True, but this is more or less
# intended as an opaque string that was given to us by Travis that happens
# to be a shell command that we run, and there are a number of good reasons
# this particular instance is harmless and would be high effort to
# convert (principally: Lack of programmatic generation of the string and
# extensive use of environment variables in it), so we're making an
# exception here.
subprocess.check_call(
'openssl aes-256-cbc -K $encrypted_39cb4cc39a80_key '
'-iv $encrypted_39cb4cc39a80_iv -in secrets.tar.enc '
'-out secrets.tar -d',
shell=True
)
subprocess.check_call([
'tar', '-xvf', 'secrets.tar',
])
print('Release seems good. Pushing to github now.')
tools.create_tag_and_push()
print('Now uploading to pypi.')
subprocess.check_call([
sys.executable, '-m', 'twine', 'upload',
'--config-file', './.pypirc',
os.path.join(DIST, '*'),
])
sys.exit(0)
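The wait loop in deploy.py sleeps a randomised interval between Travis API polls. A minimal sketch of that jitter calculation (the `naptime` helper here is illustrative, not part of the script):

```python
import random


def naptime(base=10.0):
    """Jittered polling interval, mirroring deploy.py's
    ``10.0 * (2 + random.random())``: uniform over [2 * base, 3 * base),
    so pollers never sleep less than twenty seconds but also never
    synchronise exactly."""
    return base * (2 + random.random())
```

Randomising the interval spreads API requests out when several builds poll at once, while the 20-second floor keeps the request rate polite.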
hypothesis-python-3.44.1/scripts/enforce_header.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
from datetime import datetime
HEADER_FILE = 'scripts/header.py'
CURRENT_YEAR = datetime.utcnow().year
HEADER_SOURCE = open(HEADER_FILE).read().strip().format(year=CURRENT_YEAR)
def main():
rootdir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
os.chdir(rootdir)
files = sys.argv[1:]
for f in files:
print(f)
lines = []
with open(f, encoding='utf-8') as o:
shebang = None
first = True
header_done = False
for l in o.readlines():
if first:
first = False
if l[:2] == '#!':
shebang = l
continue
if 'END HEADER' in l and not header_done:
lines = []
header_done = True
else:
lines.append(l)
source = ''.join(lines).strip()
with open(f, 'w', encoding='utf-8') as o:
if shebang is not None:
o.write(shebang)
o.write('\n')
o.write(HEADER_SOURCE)
if source:
o.write('\n\n')
o.write(source)
o.write('\n')
if __name__ == '__main__':
main()
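The rewrite enforce_header.py performs can be summarised as a pure function: drop any existing header up to and including the 'END HEADER' line, keep a leading shebang, and splice in the canonical header. A sketch under that reading (`with_header` is a hypothetical helper, not part of the script):

```python
def with_header(source, header):
    """Rewrite ``source`` to start with ``header``, preserving a shebang.

    Everything up to and including the first 'END HEADER' line is
    discarded, mirroring the loop in enforce_header.py.
    """
    lines = source.splitlines(True)
    shebang = None
    if lines and lines[0][:2] == '#!':
        shebang = lines.pop(0)
    body, header_done = [], False
    for line in lines:
        if 'END HEADER' in line and not header_done:
            body, header_done = [], True  # drop the old header we accumulated
        else:
            body.append(line)
    result = header.strip() + '\n'
    if shebang is not None:
        result = shebang + '\n' + result
    rest = ''.join(body).strip()
    if rest:
        result += '\n' + rest + '\n'
    return result
```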
hypothesis-python-3.44.1/scripts/files-to-format.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import hypothesistooling as tools
sys.path.append(os.path.dirname(__file__)) # noqa
def should_format_file(path):
if os.path.basename(path) in ('header.py', 'test_lambda_formatting.py'):
return False
if 'vendor' in path.split(os.path.sep):
return False
return path.endswith('.py')
if __name__ == '__main__':
changed = tools.modified_files()
format_all = os.environ.get('FORMAT_ALL', '').lower() == 'true'
if 'scripts/header.py' in changed:
# We've changed the header, so everything needs its header updated.
format_all = True
if 'requirements/tools.txt' in changed:
# We've changed the tools, which includes a lot of our formatting
# logic, so we need to rerun formatters.
format_all = True
files = tools.all_files() if format_all else changed
for f in sorted(files):
if should_format_file(f):
print(f)
hypothesis-python-3.44.1/scripts/fix_doctests.py
#!/usr/bin/env python3
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import re
import sys
from subprocess import PIPE, run
from collections import defaultdict
import hypothesistooling as tools
class FailingExample(object):
def __init__(self, chunk):
"""Turn a chunk of text into an object representing the test."""
location, *lines = [l + '\n' for l in chunk.split('\n') if l.strip()]
self.location = location.strip()
        pattern = r'File "(.+?)", line (\d+?), in .+'
file, line = re.match(pattern, self.location).groups()
self.file = os.path.join('docs', file)
self.line = int(line) + 1
got = lines.index('Got:\n')
self.expected_lines = lines[lines.index('Expected:\n') + 1:got]
self.got_lines = lines[got + 1:]
self.checked_ok = None
self.adjust()
@property
def indices(self):
return slice(self.line, self.line + len(self.expected_lines))
def adjust(self):
with open(self.file) as f:
lines = f.readlines()
# The raw line number is the first line of *input*, so adjust to
# first line of output by skipping lines which start with a prompt
while self.line < len(lines):
if lines[self.line].strip()[:4] not in ('>>> ', '... '):
break
self.line += 1
# Sadly the filename and line number for doctests in docstrings is
# wrong - see https://github.com/sphinx-doc/sphinx/issues/4223
# Luckily, we can just cheat because they're all in one file for now!
# (good luck if this changes without an upstream fix...)
if lines[self.indices] != self.expected_lines:
self.file = 'src/hypothesis/strategies.py'
with open(self.file) as f:
lines = f.readlines()
            self.line = 0
            while self.expected_lines[0] in lines[self.line:]:
                self.line += lines[self.line:].index(self.expected_lines[0])
                if lines[self.indices] == self.expected_lines:
                    break
                self.line += 1
# Finally, set the flag for location quality
self.checked_ok = lines[self.indices] == self.expected_lines
    def __repr__(self):
        return '{}\nExpected: {!r:.60}\nGot: {!r:.60}'.format(
            self.location, ''.join(self.expected_lines),
            ''.join(self.got_lines))
def get_doctest_output():
# Return a dict of filename: list of examples, sorted from last to first
# so that replacing them in sequence works
command = run(['sphinx-build', '-b', 'doctest', 'docs', 'docs/_build'],
stdout=PIPE, stderr=PIPE, encoding='utf-8')
output = [FailingExample(c) for c in command.stdout.split('*' * 70)
if c.strip().startswith('File "')]
if not all(ex.checked_ok for ex in output):
broken = '\n'.join(ex.location for ex in output if not ex.checked_ok)
print('Could not find some tests:\n' + broken)
sys.exit(1)
tests = defaultdict(set)
for ex in output:
tests[ex.file].add(ex)
return {fname: sorted(examples, key=lambda x: x.line, reverse=True)
for fname, examples in tests.items()}
def main():
os.chdir(tools.ROOT)
failing = get_doctest_output()
if not failing:
print('All doctests are OK')
sys.exit(0)
if tools.has_uncommitted_changes('.'):
        print('Cannot fix doctests in place with uncommitted changes')
sys.exit(1)
for fname, examples in failing.items():
with open(fname) as f:
lines = f.readlines()
for ex in examples:
lines[ex.indices] = ex.got_lines
with open(fname, 'w') as f:
f.writelines(lines)
still_failing = get_doctest_output()
if still_failing:
print('Fixes failed: script broken or flaky tests.\n', still_failing)
sys.exit(1)
print('All failing doctests have been fixed.')
sys.exit(0)
if __name__ == '__main__':
main()
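The location header that FailingExample parses follows Python's standard traceback format. A quick illustration of the regex, written as a raw string (the example location below is made up):

```python
import re

# Raw-string version of the pattern FailingExample uses to pull the
# filename and line number out of a doctest failure report.
PATTERN = r'File "(.+?)", line (\d+?), in .+'

file, line = re.match(PATTERN, 'File "data.rst", line 42, in default').groups()
assert (file, int(line)) == ('data.rst', 42)
```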
hypothesis-python-3.44.1/scripts/header.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-{year} David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-python-3.44.1/scripts/hypothesistooling.py
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import re
import sys
import subprocess
from datetime import datetime, timedelta
def current_branch():
return subprocess.check_output([
'git', 'rev-parse', '--abbrev-ref', 'HEAD'
]).decode('ascii').strip()
def tags():
result = [t.decode('ascii') for t in subprocess.check_output([
'git', 'tag'
]).split(b'\n')]
assert len(set(result)) == len(result)
return set(result)
ROOT = subprocess.check_output([
'git', 'rev-parse', '--show-toplevel']).decode('ascii').strip()
SRC = os.path.join(ROOT, 'src')
assert os.path.exists(SRC)
__version__ = None
__version_info__ = None
VERSION_FILE = os.path.join(ROOT, 'src/hypothesis/version.py')
with open(VERSION_FILE) as o:
exec(o.read())
assert __version__ is not None
assert __version_info__ is not None
def latest_version():
versions = []
for t in tags():
# All versions get tags but not all tags are versions (and there are
# a large number of historic tags with a different format for versions)
# so we parse each tag as a triple of ints (MAJOR, MINOR, PATCH)
# and skip any tag that doesn't match that.
assert t == t.strip()
parts = t.split('.')
if len(parts) != 3:
continue
try:
v = tuple(map(int, parts))
except ValueError:
continue
versions.append((v, t))
_, latest = max(versions)
assert latest in tags()
return latest
def hash_for_name(name):
return subprocess.check_output([
'git', 'rev-parse', name
]).decode('ascii').strip()
def is_ancestor(a, b):
check = subprocess.call([
'git', 'merge-base', '--is-ancestor', a, b
])
assert 0 <= check <= 1
return check == 0
CHANGELOG_FILE = os.path.join(ROOT, 'docs', 'changes.rst')
def changelog():
with open(CHANGELOG_FILE) as i:
return i.read()
def merge_base(a, b):
return subprocess.check_output([
'git', 'merge-base', a, b,
]).strip()
def has_source_changes(version=None):
if version is None:
version = latest_version()
# Check where we branched off from the version. We're only interested
# in whether *we* introduced any source changes, so we check diff from
# there rather than the diff to the other side.
point_of_divergence = merge_base('HEAD', version)
return subprocess.call([
'git', 'diff', '--exit-code', point_of_divergence, 'HEAD', '--', SRC,
]) != 0
def has_uncommitted_changes(filename):
return subprocess.call([
'git', 'diff', '--exit-code', filename
]) != 0
def git(*args):
subprocess.check_call(('git',) + args)
def create_tag_and_push():
assert __version__ not in tags()
git('config', 'user.name', 'Travis CI on behalf of David R. MacIver')
git('config', 'user.email', 'david@drmaciver.com')
git('config', 'core.sshCommand', 'ssh -i deploy_key')
git(
'remote', 'add', 'ssh-origin',
'git@github.com:HypothesisWorks/hypothesis-python.git'
)
git('tag', __version__)
subprocess.check_call([
'ssh-agent', 'sh', '-c',
'chmod 0600 deploy_key && ' +
'ssh-add deploy_key && ' +
        'git push ssh-origin HEAD:master && '
'git push ssh-origin --tags'
])
def build_jobs():
"""Query the Travis API to find out what the state of the other build jobs
is.
    Note: This usage of Travis has been somewhat reverse engineered, owing
    to a certain dearth of documentation about which fields take which
    values and when.
"""
import requests
build_id = os.environ['TRAVIS_BUILD_ID']
url = 'https://api.travis-ci.org/builds/%s' % (build_id,)
data = requests.get(url, headers={
'Accept': 'application/vnd.travis-ci.2+json'
}).json()
matrix = data['jobs']
jobs = {}
for m in matrix:
name = m['config']['env'].replace('TASK=', '')
status = m['state']
jobs.setdefault(status, []).append(name)
return jobs
def modified_files():
files = set()
for command in [
['git', 'diff', '--name-only', '--diff-filter=d',
latest_version(), 'HEAD'],
['git', 'diff', '--name-only']
]:
diff_output = subprocess.check_output(command).decode('ascii')
for l in diff_output.split('\n'):
filepath = l.strip()
if filepath:
assert os.path.exists(filepath), filepath
files.add(filepath)
return files
def all_files():
return subprocess.check_output(['git', 'ls-files']).decode(
'ascii').splitlines()
RELEASE_FILE = os.path.join(ROOT, 'RELEASE.rst')
def has_release():
return os.path.exists(RELEASE_FILE)
CHANGELOG_BORDER = re.compile(r"^-+$")
CHANGELOG_HEADER = re.compile(r"^\d+\.\d+\.\d+ - \d\d\d\d-\d\d-\d\d$")
RELEASE_TYPE = re.compile(r"^RELEASE_TYPE: +(major|minor|patch)")
MAJOR = 'major'
MINOR = 'minor'
PATCH = 'patch'
VALID_RELEASE_TYPES = (MAJOR, MINOR, PATCH)
def parse_release_file():
with open(RELEASE_FILE) as i:
release_contents = i.read()
release_lines = release_contents.split('\n')
m = RELEASE_TYPE.match(release_lines[0])
if m is not None:
release_type = m.group(1)
if release_type not in VALID_RELEASE_TYPES:
print('Unrecognised release type %r' % (release_type,))
sys.exit(1)
del release_lines[0]
release_contents = '\n'.join(release_lines).strip()
else:
print(
'RELEASE.rst does not start by specifying release type. The first '
'line of the file should be RELEASE_TYPE: followed by one of '
'major, minor, or patch, to specify the type of release that '
'this is (i.e. which version number to increment). Instead the '
'first line was %r' % (release_lines[0],)
)
sys.exit(1)
return release_type, release_contents
def update_changelog_and_version():
global __version_info__
global __version__
with open(CHANGELOG_FILE) as i:
contents = i.read()
assert '\r' not in contents
lines = contents.split('\n')
assert contents == '\n'.join(lines)
for i, l in enumerate(lines):
if CHANGELOG_BORDER.match(l):
assert CHANGELOG_HEADER.match(lines[i + 1]), repr(lines[i + 1])
assert CHANGELOG_BORDER.match(lines[i + 2]), repr(lines[i + 2])
beginning = '\n'.join(lines[:i])
rest = '\n'.join(lines[i:])
assert '\n'.join((beginning, rest)) == contents
break
release_type, release_contents = parse_release_file()
new_version = list(__version_info__)
bump = VALID_RELEASE_TYPES.index(release_type)
new_version[bump] += 1
for i in range(bump + 1, len(new_version)):
new_version[i] = 0
new_version = tuple(new_version)
new_version_string = '.'.join(map(str, new_version))
__version_info__ = new_version
__version__ = new_version_string
with open(VERSION_FILE) as i:
version_lines = i.read().split('\n')
for i, l in enumerate(version_lines):
if 'version_info' in l:
version_lines[i] = '__version_info__ = %r' % (new_version,)
break
with open(VERSION_FILE, 'w') as o:
o.write('\n'.join(version_lines))
now = datetime.utcnow()
date = max([
d.strftime('%Y-%m-%d') for d in (now, now + timedelta(hours=1))
])
heading_for_new_version = ' - '.join((new_version_string, date))
border_for_new_version = '-' * len(heading_for_new_version)
new_changelog_parts = [
beginning.strip(),
'',
border_for_new_version,
heading_for_new_version,
border_for_new_version,
'',
release_contents,
'',
rest
]
with open(CHANGELOG_FILE, 'w') as o:
o.write('\n'.join(new_changelog_parts))
def update_for_pending_release():
update_changelog_and_version()
git('rm', RELEASE_FILE)
git('add', CHANGELOG_FILE, VERSION_FILE)
git(
'commit', '-m',
'Bump version to %s and update changelog\n\n[skip ci]' % (__version__,)
)
def could_affect_tests(path):
"""Does this file have any effect on test results?"""
# RST files are the input to some tests -- in particular, the
# documentation build and doctests. Both of those jobs are always run,
# so we can ignore their effect here.
#
# IPython notebooks aren't currently used in any tests.
if path.endswith(('.rst', '.ipynb')):
return False
# These files exist but have no effect on tests.
if path in ('CITATION', 'LICENSE.txt', ):
return False
# We default to marking a file "interesting" unless we know otherwise --
# it's better to run tests that could have been skipped than skip tests
# when they needed to be run.
return True
def changed_files_from_master():
"""Returns a list of files which have changed between a branch and
master."""
files = set()
command = ['git', 'diff', '--name-only', 'HEAD', 'master']
diff_output = subprocess.check_output(command).decode('ascii')
for line in diff_output.splitlines():
filepath = line.strip()
if filepath:
files.add(filepath)
return files
def should_run_ci_task(task, is_pull_request):
"""Given a task name, should we run this task?"""
if not is_pull_request:
print('We only skip tests if the job is a pull request.')
return True
# These tests are usually fast; we always run them rather than trying
# to keep up-to-date rules of exactly which changed files mean they
# should run.
if task in [
'check-pyup-yml',
'check-release-file',
'check-shellcheck',
'documentation',
'lint',
]:
print('We always run the %s task.' % task)
return True
# The remaining tasks are all some sort of test of Hypothesis
# functionality. Since it's better to run tests when we don't need to
# than skip tests when it was important, we remove any files which we
# know are safe to ignore, and run tests if there's anything left.
changed_files = changed_files_from_master()
interesting_changed_files = [
f for f in changed_files if could_affect_tests(f)
]
if interesting_changed_files:
print(
'Changes to the following files mean we need to run tests: %s' %
', '.join(interesting_changed_files)
)
return True
else:
print('There are no changes which would need a test run.')
return False
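The version arithmetic in update_changelog_and_version() — increment the component named by the release type and zero everything after it — is easy to check in isolation. A sketch (`bump_version` is a hypothetical helper extracted from that function, not part of the module):

```python
VALID_RELEASE_TYPES = ('major', 'minor', 'patch')


def bump_version(version_info, release_type):
    """Increment the component selected by ``release_type`` and reset
    every less-significant component to zero."""
    new_version = list(version_info)
    bump = VALID_RELEASE_TYPES.index(release_type)
    new_version[bump] += 1
    for i in range(bump + 1, len(new_version)):
        new_version[i] = 0
    return tuple(new_version)


assert bump_version((3, 44, 1), 'patch') == (3, 44, 2)
assert bump_version((3, 44, 1), 'minor') == (3, 45, 0)
assert bump_version((3, 44, 1), 'major') == (4, 0, 0)
```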
hypothesis-python-3.44.1/scripts/install.ps1
# Sample script to install Python and pip under Windows
# Authors: Olivier Grisel, Jonathan Helmus and Kyle Kastner
# License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
$MINICONDA_URL = "http://repo.continuum.io/miniconda/"
$BASE_URL = "https://www.python.org/ftp/python/"
$GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
$GET_PIP_PATH = "C:\get-pip.py"
function DownloadPython ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
$filename = "python-" + $python_version + $platform_suffix + ".msi"
$url = $BASE_URL + $python_version + "/" + $filename
$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}
# Download and retry up to 3 times in case of network transient errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}
function InstallPython ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = ""
} else {
$platform_suffix = ".amd64"
}
$msipath = DownloadPython $python_version $platform_suffix
Write-Host "Installing" $msipath "to" $python_home
$install_log = $python_home + ".log"
$install_args = "/qn /log $install_log /i $msipath TARGETDIR=$python_home"
$uninstall_args = "/qn /x $msipath"
RunCommand "msiexec.exe" $install_args
if (-not(Test-Path $python_home)) {
        Write-Host "Python seems to be installed elsewhere, reinstalling."
RunCommand "msiexec.exe" $uninstall_args
RunCommand "msiexec.exe" $install_args
}
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}
function RunCommand ($command, $command_args) {
Write-Host $command $command_args
Start-Process -FilePath $command -ArgumentList $command_args -Wait -Passthru
}
function InstallPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$python_path = $python_home + "\python.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$webclient = New-Object System.Net.WebClient
$webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH)
Write-Host "Executing:" $python_path $GET_PIP_PATH
Start-Process -FilePath "$python_path" -ArgumentList "$GET_PIP_PATH" -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}
function DownloadMiniconda ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
if ($python_version -eq "3.4") {
$filename = "Miniconda3-3.5.5-Windows-" + $platform_suffix + ".exe"
} else {
$filename = "Miniconda-3.5.5-Windows-" + $platform_suffix + ".exe"
}
$url = $MINICONDA_URL + $filename
$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}
# Download and retry up to 3 times in case of network transient errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}
function InstallMiniconda ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = "x86"
} else {
$platform_suffix = "x86_64"
}
$filepath = DownloadMiniconda $python_version $platform_suffix
Write-Host "Installing" $filepath "to" $python_home
$install_log = $python_home + ".log"
$args = "/S /D=$python_home"
Write-Host $filepath $args
Start-Process -FilePath $filepath -ArgumentList $args -Wait -Passthru
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}
function InstallMinicondaPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$conda_path = $python_home + "\Scripts\conda.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$args = "install --yes pip"
Write-Host $conda_path $args
Start-Process -FilePath "$conda_path" -ArgumentList $args -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}
function main () {
InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON
InstallPip $env:PYTHON
}
main
hypothesis-python-3.44.1/scripts/install.sh
#!/usr/bin/env bash
# Special license: Take literally anything you want out of this file. I don't
# care. Consider it WTFPL licensed if you like.
# Basically there's a lot of suffering encoded here that I don't want you to
# have to go through and you should feel free to use this to avoid some of
# that suffering in advance.
set -e
set -x
# OS X seems to have some weird locale problems on Travis. This attempts to
# set the locale to a known good one during the install.
env | grep UTF
# This is to guard against multiple builds in parallel. The various installers will
# tend to stomp all over each other if run concurrently before they have previously
# succeeded. We use a lock file to block progress so only one install runs at a time.
# This script should be pretty fast once files are cached, so the loss of concurrency
# is not a major problem.
# This should be using the lockfile command, but that's not available on the
# containerized travis and we can't install it without sudo.
# It is unclear if this is actually useful. I was seeing behaviour that suggested
# concurrent runs of the installer, but I can't seem to find any evidence of this lock
# ever not being acquired.
BASE=${BUILD_RUNTIMES-$PWD/.runtimes}
mkdir -p "$BASE"
LOCKFILE="$BASE/.install-lockfile"
while true; do
if mkdir "$LOCKFILE" 2>/dev/null; then
echo "Successfully acquired installer."
break
else
echo "Failed to acquire lock. Is another installer running? Waiting a bit."
fi
sleep $(( ( RANDOM % 10 ) + 1 )).$(( RANDOM % 100 ))s
if (( $(date '+%s') > 300 + $(stat --format=%X "$LOCKFILE") )); then
echo "We've waited long enough"
rm -rf "$LOCKFILE"
fi
done
trap 'rm -rf $LOCKFILE' EXIT
PYENV=$BASE/pyenv
if [ ! -d "$PYENV/.git" ]; then
rm -rf "$PYENV"
git clone https://github.com/yyuu/pyenv.git "$BASE/pyenv"
else
back=$PWD
cd "$PYENV"
git fetch || echo "Update failed to complete. Ignoring"
git reset --hard origin/master
cd "$back"
fi
SNAKEPIT=$BASE/snakepit
install () {
VERSION="$1"
ALIAS="$2"
mkdir -p "$BASE/versions"
SOURCE=$BASE/versions/$ALIAS
if [ ! -e "$SOURCE" ]; then
mkdir -p "$SNAKEPIT"
mkdir -p "$BASE/versions"
"$BASE/pyenv/plugins/python-build/bin/python-build" "$VERSION" "$SOURCE"
fi
rm -f "$SNAKEPIT/$ALIAS"
mkdir -p "$SNAKEPIT"
"$SOURCE/bin/python" -m pip.__main__ install --upgrade pip wheel virtualenv
ln -s "$SOURCE/bin/python" "$SNAKEPIT/$ALIAS"
}
for var in "$@"; do
case "${var}" in
2.7)
install 2.7.11 python2.7
;;
2.7.3)
install 2.7.3 python2.7.3
;;
3.4)
install 3.4.3 python3.4
;;
3.5)
install 3.5.1 python3.5
;;
3.6)
install 3.6.1 python3.6
;;
pypy)
install pypy2.7-5.8.0 pypy
;;
esac
done
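install.sh leans on the fact that `mkdir` either atomically creates the lock directory or fails because another process already made it. The same trick works from Python (a sketch; `try_acquire` and the lock path are illustrative):

```python
import errno
import os
import tempfile


def try_acquire(lockdir):
    """Return True if we created ``lockdir`` (lock acquired), or False if
    it already existed (someone else holds the lock). os.mkdir is atomic,
    so two racing callers can never both succeed."""
    try:
        os.mkdir(lockdir)
        return True
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False
        raise


lock = os.path.join(tempfile.mkdtemp(), 'install-lockfile')
assert try_acquire(lock)        # first caller wins
assert not try_acquire(lock)    # second caller must wait
os.rmdir(lock)                  # release, as the trap in install.sh does
```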
hypothesis-python-3.44.1/scripts/pyenv-installer
#!/usr/bin/env bash
set -e
[ -n "$PYENV_DEBUG" ] && set -x
if [ -z "$PYENV_ROOT" ]; then
PYENV_ROOT="${HOME}/.pyenv"
fi
shell="$1"
if [ -z "$shell" ]; then
shell="$(ps c -p "$PPID" -o 'ucomm=' 2>/dev/null || true)"
shell="${shell##-}"
shell="${shell%% *}"
shell="$(basename "${shell:-$SHELL}")"
fi
colorize() {
if [ -t 1 ]; then printf "\e[%sm%s\e[m" "$1" "$2"
else echo -n "$2"
fi
}
checkout() {
[ -d "$2" ] || git clone "$1" "$2"
}
if ! command -v git 1>/dev/null 2>&1; then
echo "pyenv: Git is not installed, can't continue." >&2
exit 1
fi
if [ -n "${USE_HTTPS}" ]; then
GITHUB="https://github.com"
else
GITHUB="git://github.com"
fi
checkout "${GITHUB}/yyuu/pyenv.git" "${PYENV_ROOT}"
checkout "${GITHUB}/yyuu/pyenv-doctor.git" "${PYENV_ROOT}/plugins/pyenv-doctor"
checkout "${GITHUB}/yyuu/pyenv-installer.git" "${PYENV_ROOT}/plugins/pyenv-installer"
checkout "${GITHUB}/yyuu/pyenv-pip-rehash.git" "${PYENV_ROOT}/plugins/pyenv-pip-rehash"
checkout "${GITHUB}/yyuu/pyenv-update.git" "${PYENV_ROOT}/plugins/pyenv-update"
checkout "${GITHUB}/yyuu/pyenv-virtualenv.git" "${PYENV_ROOT}/plugins/pyenv-virtualenv"
checkout "${GITHUB}/yyuu/pyenv-which-ext.git" "${PYENV_ROOT}/plugins/pyenv-which-ext"
if ! command -v pyenv 1>/dev/null; then
{ echo
colorize 1 "WARNING"
echo ": seems you still have not added 'pyenv' to the load path."
echo
} >&2
case "$shell" in
bash )
profile="~/.bash_profile"
;;
zsh )
profile="~/.zshrc"
;;
ksh )
profile="~/.profile"
;;
fish )
profile="~/.config/fish/config.fish"
;;
* )
profile="your profile"
;;
esac
{ echo "# Load pyenv automatically by adding"
echo "# the following to ${profile}:"
echo
case "$shell" in
fish )
echo "set -x PATH \"\$HOME/.pyenv/bin\" \$PATH"
echo 'status --is-interactive; and . (pyenv init -|psub)'
echo 'status --is-interactive; and . (pyenv virtualenv-init -|psub)'
;;
* )
echo "export PATH=\"\$HOME/.pyenv/bin:\$PATH\""
echo "eval \"\$(pyenv init -)\""
echo "eval \"\$(pyenv virtualenv-init -)\""
;;
esac
} >&2
fi
hypothesis-python-3.44.1/scripts/retry.sh
#!/usr/bin/env bash
for _ in $(seq 5); do
if "$@" ; then
exit 0
fi
echo "Command failed. Retrying..."
sleep $(( ( RANDOM % 10 ) + 1 )).$(( RANDOM % 100 ))s
done
echo "Command failed five times. Giving up now"
exit 1
hypothesis-python-3.44.1/scripts/run_circle.py
#!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import subprocess
from hypothesistooling import should_run_ci_task
if __name__ == '__main__':
    if (
        os.environ['CIRCLE_BRANCH'] != 'master' and
        os.environ['CI_PULL_REQUESTS'] == ''
    ):
        print('We only run CI builds on the master branch or in pull requests')
        sys.exit(0)

    is_pull_request = (os.environ['CI_PULL_REQUESTS'] != '')

    for task in ['check-pypy', 'check-py36', 'check-py27']:
        if should_run_ci_task(task=task, is_pull_request=is_pull_request):
            subprocess.check_call(['make', task])
hypothesis-python-3.44.1/scripts/run_travis_make_task.py 0000775 0000000 0000000 00000002046 13215577651 0023616 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import subprocess
from hypothesistooling import should_run_ci_task
if __name__ == '__main__':
    is_pull_request = (os.environ.get('TRAVIS_EVENT_TYPE') == 'pull_request')
    task = os.environ['TASK']
    if should_run_ci_task(task=task, is_pull_request=is_pull_request):
        subprocess.check_call(['make', task])
hypothesis-python-3.44.1/scripts/run_with_env.cmd 0000664 0000000 0000000 00000003462 13215577651 0022225 0 ustar 00root root 0000000 0000000 :: To build extensions for 64 bit Python 3, we need to configure environment
:: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1)
::
:: To build extensions for 64 bit Python 2, we need to configure environment
:: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0)
::
:: 32 bit builds do not require specific environment configurations.
::
:: Note: this script needs to be run with the /E:ON and /V:ON flags for the
:: cmd interpreter, at least for (SDK v7.0)
::
:: More details at:
:: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows
:: http://stackoverflow.com/a/13751649/163740
::
:: Author: Olivier Grisel
:: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
@ECHO OFF
SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows
SET MAJOR_PYTHON_VERSION="%PYTHON_VERSION:~0,1%"
IF %MAJOR_PYTHON_VERSION% == "2" (
    SET WINDOWS_SDK_VERSION="v7.0"
) ELSE IF %MAJOR_PYTHON_VERSION% == "3" (
    SET WINDOWS_SDK_VERSION="v7.1"
) ELSE (
    ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%"
    EXIT 1
)

IF "%PYTHON_ARCH%"=="64" (
    ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture
    SET DISTUTILS_USE_SDK=1
    SET MSSdk=1
    "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
    "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
    ECHO Executing: %COMMAND_TO_RUN%
    call %COMMAND_TO_RUN% || EXIT 1
) ELSE (
    ECHO Using default MSVC build environment for 32 bit architecture
    ECHO Executing: %COMMAND_TO_RUN%
    call %COMMAND_TO_RUN% || EXIT 1
)
hypothesis-python-3.44.1/scripts/tool-hash.py 0000775 0000000 0000000 00000002213 13215577651 0021275 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import hashlib
SCRIPTS_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(SCRIPTS_DIR)
if __name__ == '__main__':
    name = sys.argv[1]

    requirements = os.path.join(
        ROOT_DIR, 'requirements', '%s.txt' % (name,)
    )
    assert os.path.exists(requirements)

    with open(requirements, 'rb') as f:
        tools = f.read()

    print(hashlib.sha1(tools).hexdigest()[:10])
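The hash printed by tool-hash.py is simply the first ten hex characters of the SHA-1 digest of the requirements file's bytes. A minimal sketch of that core step (the `tool_hash` name is an illustrative assumption, not a function in the repository):

```python
import hashlib


def tool_hash(data):
    """Return the 10-character SHA-1 digest prefix that
    scripts/tool-hash.py prints for a requirements file's bytes."""
    return hashlib.sha1(data).hexdigest()[:10]


# e.g. tool_hash(open(path, 'rb').read()) for a requirements file
```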
hypothesis-python-3.44.1/scripts/unicodechecker.py 0000664 0000000 0000000 00000003234 13215577651 0022353 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import inspect
import warnings
from tempfile import mkdtemp
import unicodenazi
from hypothesis import settings, unlimited
from hypothesis.errors import HypothesisDeprecationWarning
from hypothesis.configuration import set_hypothesis_home_dir
warnings.filterwarnings('error', category=UnicodeWarning)
warnings.filterwarnings('error', category=HypothesisDeprecationWarning)
unicodenazi.enable()
set_hypothesis_home_dir(mkdtemp())
assert isinstance(settings, type)
settings.register_profile(
    'default', settings(timeout=unlimited)
)
settings.load_profile('default')

TESTS = [
    'test_testdecorators',
]

sys.path.append(os.path.join(
    os.path.dirname(__file__), '..', 'tests', 'cover',
))

if __name__ == '__main__':
    for t in TESTS:
        module = __import__(t)
        for k, v in sorted(module.__dict__.items(), key=lambda x: x[0]):
            if k.startswith('test_') and inspect.isfunction(v):
                print(k)
                v()
hypothesis-python-3.44.1/scripts/update-changelog-for-docs.py 0000664 0000000 0000000 00000002334 13215577651 0024321 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
sys.path.append(os.path.dirname(__file__))  # noqa
import hypothesistooling as tools

if __name__ == '__main__':
    if not tools.has_release():
        sys.exit(0)

    if tools.has_uncommitted_changes(tools.CHANGELOG_FILE):
        print(
            'Cannot build documentation with uncommitted changes to '
            'changelog and a pending release. Please commit your changes or '
            'delete your release file.')
        sys.exit(1)

    tools.update_changelog_and_version()
hypothesis-python-3.44.1/scripts/validate_branch_check.py 0000664 0000000 0000000 00000003326 13215577651 0023645 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
import json
from collections import defaultdict
if __name__ == '__main__':
    with open('branch-check') as i:
        data = [
            json.loads(l) for l in i
        ]

    checks = defaultdict(set)

    for d in data:
        checks[d['name']].add(d['value'])

    always_true = []
    always_false = []

    for c, vs in sorted(checks.items()):
        if len(vs) < 2:
            v = list(vs)[0]
            assert v in (False, True)
            if v:
                always_true.append(c)
            else:
                always_false.append(c)

    failure = always_true or always_false

    if failure:
        print('Some branches were not properly covered.')
        print()

    if always_true:
        print('The following were always True:')
        print()
        for c in always_true:
            print(' * %s' % (c,))
    if always_false:
        print('The following were always False:')
        print()
        for c in always_false:
            print(' * %s' % (c,))

    if failure:
        sys.exit(1)
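The aggregation at the heart of validate_branch_check.py — grouping recorded branch values by name and flagging checks that never varied — can be sketched as a standalone function. This is an illustrative helper (the name `find_unvaried_branches` is an assumption, not part of the repository):

```python
import json
from collections import defaultdict


def find_unvaried_branches(lines):
    """Given branch-check lines (one JSON object per line with 'name'
    and boolean 'value' keys), return the check names that were always
    True and those that were always False, as the script above reports."""
    checks = defaultdict(set)
    for line in lines:
        record = json.loads(line)
        checks[record['name']].add(record['value'])
    always_true = sorted(n for n, vs in checks.items() if vs == {True})
    always_false = sorted(n for n, vs in checks.items() if vs == {False})
    return always_true, always_false
```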
hypothesis-python-3.44.1/scripts/validate_pyup.py 0000664 0000000 0000000 00000002202 13215577651 0022240 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis, which may be found at
# https://github.com/HypothesisWorks/hypothesis-python
#
# Most of this work is copyright (C) 2013-2017 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# CONTRIBUTING.rst for a full list of people who may hold copyright, and
# consult the git log if you need to determine who owns an individual
# contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import yaml
from pyup.config import Config
from hypothesistooling import ROOT
PYUP_FILE = os.path.join(ROOT, '.pyup.yml')
if __name__ == '__main__':
    with open(PYUP_FILE, 'r') as i:
        data = yaml.safe_load(i.read())

    config = Config()
    config.update_config(data)

    if not config.is_valid_schedule():
        print('Schedule %r is invalid' % (config.schedule,))
        sys.exit(1)
hypothesis-python-3.44.1/secrets.tar.enc 0000664 0000000 0000000 00000024020 13215577651 0020257 0 ustar 00root root 0000000 0000000 [binary content: encrypted tar archive, not reproduced]