hypothesis-3.0.1/.coveragerc

[run]
branch = True
include =
    .tox/*/lib/*/site-packages/hypothesis/*.py
    .tox/*/lib/*/site-packages/hypothesis/**/*.py
omit =
    **/_settings.py
    **/pytestplugin.py
    **/strategytests.py
    **/internal/debug.py
    **/compat*.py
    **/extra/__init__.py

[report]
exclude_lines =
    @abc.abstractmethod
    @abc.abstractproperty
    NotImplementedError
    pragma: no cover
    __repr__
    __ne__
    __copy__
    __deepcopy__
hypothesis-3.0.1/.gitignore

*.swo
*.swp
*.pyc
venv*
.cache
.hypothesis
docs/_build
*.egg-info
_build
.tox
.coverage
.runtimes
hypothesis-3.0.1/.travis.yml

language: c
sudo: false

os:
  - osx
  - linux

cache:
  apt: true
  directories:
    - $HOME/.runtimes
    - $HOME/.venv
    - $HOME/.cache/pip
    - $HOME/wheelhouse

env:
  global:
    - PYTHONDONTWRITEBYTECODE=x
    - BUILD_RUNTIMES=$HOME/.runtimes
  matrix:
    - TASK=documentation
    - TASK=lint
    - TASK=check-format
    - TASK=check-coverage
    - TASK=check-unicode
    - TASK=check-pypy
    - TASK=check-py35
    - TASK=check-py27
    - TASK=check-py34
    - TASK=check-nose
    - TASK=check-pytest27
    - TASK=check-pytest26
    - TASK=check-fakefactory052
    - TASK=check-fakefactory053
    - TASK=check-django17
    - TASK=check-django18

script:
  - make $TASK

matrix:
  exclude:
    - os: osx
      env: TASK=check-unicode
    - os: osx
      env: TASK=check-fakefactory052
    - os: osx
      env: TASK=check-fakefactory053
    - os: osx
      env: TASK=check-pytest26
    - os: osx
      env: TASK=check-pytest27
    - os: osx
      env: TASK=documentation
    - os: osx
      env: TASK=check-django17
    - os: osx
      env: TASK=check-django18
    - os: osx
      env: TASK=check-examples2
    - os: osx
      env: TASK=check-examples3
    - os: osx
      env: TASK=check-coverage
    - os: osx
      env: TASK=check-format
    - os: osx
      env: TASK=lint
  fast_finish: true

notifications:
  email:
    recipients:
      - david@drmaciver.com
    on_success: never
    on_failure: change
hypothesis-3.0.1/CONTRIBUTING.rst

=============
Contributing
=============
First off: It's great that you want to contribute to Hypothesis! Thanks!
The process is a little involved (don't worry, I'll help you through it), so
do read this document first.
-----------------------
Copyright and Licensing
-----------------------
It's important to make sure that you own the rights to the work you are submitting.
If it is done on work time, or you have a particularly onerous contract, make sure
you've checked with your employer.
All work in Hypothesis is licensed under the terms of the
`Mozilla Public License, version 2.0 <http://mozilla.org/MPL/2.0/>`_. By
submitting a contribution you are agreeing to licence your work under those
terms.
Finally, if it is not there already, add your name (and a link to your GitHub
and email address if you want) to the list of contributors found at
the end of this document, in alphabetical order. It doesn't have to be your
"real" name (whatever that means), any sort of public identifier
is fine. In particular a GitHub account is sufficient.
-----------------------
The actual contribution
-----------------------
OK, so you want to make a contribution and have sorted out the legalese. What now?
First off: If you're planning on implementing a new feature, talk to me first! I'll probably
tell you to go for it, but I might have some feedback on aspects of it or tell you how it fits
into the broader scheme of things. Remember: A feature is for 1.x, not just for Christmas. Once
a feature is in, it can only be evolved in backwards compatible ways until I bump the "I can break
your code" number and release Hypothesis 2.0. This means I spend a lot of time thinking about
getting features right. It may sometimes also mean I reject your feature, or feel you need to
rethink it, so it's best to have that conversation early.
Once you've done that, feel free to ask me for help as you go. You're welcome to submit a work in
progress pull request early if you want feedback, just please mark it as such.
The review process will probably take some time, with me providing feedback on what I like about
your work and what I think could use improving. Particularly when adding features it's very unlikely
I'll accept a pull request as is, but that's not a sign that I don't like your code and you shouldn't
get discouraged.
Before it's merged your contribution will have to be:
1. Tested (the build will probably fail if it's not, but even if the build passes new work needs tests)
2. Documented
3. Compliant with the coding standard (running 'make format' locally will fix most formatting errors and 'make lint'
will tell you about the rest)
4. Otherwise passing the build
Note: If you can't figure out how to test your work, I'm happy to help. If *I* can't figure out how to
test your work, I may pass it anyway.
~~~~~~~~~
The build
~~~~~~~~~
The build is orchestrated by a giant Makefile which handles installation of the relevant pythons.
Actually running the tests is managed by `tox `_, but the Makefile
will call out to the relevant tox environments so you mostly don't have to know anything about that
unless you want to make changes to the test config. You also mostly don't need to know anything about make
except to type 'make' followed by the name of the task you want to run.
All of it will be checked on Travis so you don't *have* to run anything locally, but you might
find it useful to do so: a full Travis run takes about an hour, so running a smaller set of
tests locally can be helpful.
The makefile should be "fairly" portable, but is currently only known to work on Linux or OS X. It *might* work
on a BSD or on Windows with cygwin installed, but it probably won't.
Some notable commands:
'make format' will reformat your code according to the Hypothesis coding style. You should use this before each
commit ideally, but you only really have to use it when you want your code to be ready to merge.
You can also use 'make check-format', which will run format and some linting and will then error if you have a
git diff. Note: This will error even if you started with a git diff, so if you've got any uncommitted changes
this will necessarily report an error.
'make check' will run check-format and all of the tests. Warning: This will take a *very* long time. On Travis this
currently takes multiple hours of total build time (it runs in parallel on Travis so you don't have to wait
quite that long). If you've got a multi-core machine you can run 'make -j 2' (or any higher number if you want
more) to run 2 jobs in parallel, but to be honest you're probably better off letting Travis run this step.
You can also run a number of finer grained make tasks:
* check-fast runs a fast but reasonably comprehensive subset of make check. It's still not *that* fast, but it
takes a couple of minutes instead of a couple of hours.
* You can run the tests just for a single version of Python using one of check-py26, check-py27, check-py34,
check-py35, check-pypy.
* check-coverage will run a subset of the tests on python 3.5 and then assert that this gave 100% coverage
* lint will just run some source code checks.
* django will just run tests for the django integration
* pytest will just run tests for the pytest plugin
Note: The build requires a lot of different versions of python, so rather than have you install them yourself,
the makefile will install them itself in a local directory. This means that the first time you run a task you
may have to wait a while as the build downloads and installs the right version of python for you.
----------------------------
If Pull Requests put you off
----------------------------
If you don't feel able to contribute code to Hypothesis that's *100% OK*. There
are lots of other things you can do to help too!
For example, it's super useful and highly appreciated if you do any of:
* Submit bug reports
* Submit feature requests
* Write about Hypothesis
* Build libraries and tools on top of Hypothesis outside the main repo
Or, if you're OK with the pull request process but don't feel quite ready to touch the code, you can always
help to improve the documentation. Spot a typo? Fix it up and send me a pull request!
If you need any help with any of these, get in touch and I'll be extremely happy to provide it.
--------------------
List of Contributors
--------------------
The primary author for most of Hypothesis is David R. MacIver (me). However the following
people have also contributed work. As well as my thanks, they also have copyright over
their individual contributions.
* `Adam Johnson `_
* `Adam Sven Johnson `_
* `Alex Stapleton `_
* `Alex Willmer `_ (`alex@moreati.org.uk `_)
* `Charles O'Farrell `_
* `Chris Down `_
* `Christopher Martin `_ (`ch.martin@gmail.com `_)
* `Cory Benfield `_
* `Derek Gustafson `_
* `Florian Bruhin `_
* `follower `_
* `Jonty Wareing `_ (`jonty@jonty.co.uk `_)
* `kbara `_
* `marekventur `_
* `Marius Gedminas `_ (`marius@gedmin.as `_)
* `Matt Bachmann `_ (`bachmann.matt@gmail.com `_)
* `Nicholas Chammas `_
* `Richard Boulton `_ (`richard@tartarus.org `_)
* `Saul Shanabrook `_ (`s.shanabrook@gmail.com `_)
* `Tariq Khokhar `_ (`tariq@khokhar.net `_)
* `Will Hall `_ (`wrsh07@gmail.com `_)
* `Will Thompson `_ (`will@willthompson.co.uk `_)
hypothesis-3.0.1/LICENSE.txt

Copyright (c) 2013, David R. MacIver
All code in this repository except where explicitly noted otherwise is released
under the Mozilla Public License v 2.0. You can obtain a copy at http://mozilla.org/MPL/2.0/.
hypothesis-3.0.1/Makefile

.PHONY: clean documentation

DEVELOPMENT_DATABASE?=postgres://whereshouldilive@localhost/whereshouldilive_dev
SPHINXBUILD = $(DEV_PYTHON) -m sphinx
SPHINX_BUILDDIR = docs/_build
ALLSPHINXOPTS = -d $(SPHINX_BUILDDIR)/doctrees docs -W

BUILD_RUNTIMES?=$(PWD)/.runtimes

PY26=$(BUILD_RUNTIMES)/snakepit/python2.6
PY27=$(BUILD_RUNTIMES)/snakepit/python2.7
PY33=$(BUILD_RUNTIMES)/snakepit/python3.3
PY34=$(BUILD_RUNTIMES)/snakepit/python3.4
PY35=$(BUILD_RUNTIMES)/snakepit/python3.5
PYPY=$(BUILD_RUNTIMES)/snakepit/pypy

TOOLS=$(BUILD_RUNTIMES)/tools

TOX=$(TOOLS)/tox
SPHINX_BUILD=$(TOOLS)/sphinx-build
SPHINX_AUTOBUILD=$(TOOLS)/sphinx-autobuild
ISORT=$(TOOLS)/isort
FLAKE8=$(TOOLS)/flake8
PYFORMAT=$(TOOLS)/pyformat

TOOL_VIRTUALENV=$(BUILD_RUNTIMES)/virtualenvs/tools
ISORT_VIRTUALENV=$(BUILD_RUNTIMES)/virtualenvs/isort
TOOL_PYTHON=$(TOOL_VIRTUALENV)/bin/python
TOOL_PIP=$(TOOL_VIRTUALENV)/bin/pip
TOOL_INSTALL=$(TOOL_PIP) install --upgrade

export PATH:=$(BUILD_RUNTIMES)/snakepit:$(TOOLS):$(PATH)
export LC_ALL=C.UTF-8

$(PY26):
	scripts/retry.sh scripts/install.sh 2.6

$(PY27):
	scripts/retry.sh scripts/install.sh 2.7

$(PY33):
	scripts/retry.sh scripts/install.sh 3.3

$(PY34):
	scripts/retry.sh scripts/install.sh 3.4

$(PY35):
	scripts/retry.sh scripts/install.sh 3.5

$(PYPY):
	scripts/retry.sh scripts/install.sh pypy

$(TOOL_VIRTUALENV): $(PY34)
	$(PY34) -m virtualenv $(TOOL_VIRTUALENV)
	mkdir -p $(TOOLS)

$(TOOLS): $(TOOL_VIRTUALENV)

$(ISORT_VIRTUALENV): $(PY34)
	$(PY34) -m virtualenv $(ISORT_VIRTUALENV)

format: $(PYFORMAT) $(ISORT)
	$(TOOL_PYTHON) scripts/enforce_header.py
	# isort will sort packages differently depending on whether they're installed
	$(ISORT_VIRTUALENV)/bin/python -m pip install django pytz pytest fake-factory numpy
	env -i PATH=$(PATH) $(ISORT) -p hypothesis -ls -m 2 -w 75 \
		-a "from __future__ import absolute_import, print_function, division" \
		-rc src tests examples
	find src tests examples -name '*.py' | xargs $(PYFORMAT) -i

lint: $(FLAKE8)
	$(FLAKE8) src tests --exclude=compat.py,test_reflection.py,test_imports.py,tests/py2 --ignore=E731,E721

check-format: format
	find src tests -name "*.py" | xargs $(TOOL_PYTHON) scripts/check_encoding_header.py
	git diff --exit-code

check-py26: $(PY26) $(TOX)
	$(TOX) -e py26-full

check-py27: $(PY27) $(TOX)
	$(TOX) -e py27-full

check-py33: $(PY33) $(TOX)
	$(TOX) -e py33-full

check-py34: $(PY34) $(TOX)
	$(TOX) -e py34-full

check-py35: $(PY35) $(TOX)
	$(TOX) -e py35-full

check-pypy: $(PYPY) $(TOX)
	$(TOX) -e pypy-full

check-nose: $(TOX) $(PY35)
	$(TOX) -e nose

check-pytest27: $(TOX) $(PY35)
	$(TOX) -e pytest27

check-pytest26: $(TOX) $(PY35)
	$(TOX) -e pytest26

check-pytest: check-pytest26 check-pytest27

check-fakefactory052: $(TOX) $(PY35)
	$(TOX) -e fakefactory052

check-fakefactory053: $(TOX) $(PY35)
	$(TOX) -e fakefactory053

check-django17: $(TOX) $(PY35)
	$(TOX) -e django17

check-django18: $(TOX) $(PY35)
	$(TOX) -e django18

check-django19: $(TOX) $(PY35)
	$(TOX) -e django19

check-django: check-django17 check-django18 check-django19

check-examples2: $(TOX) $(PY27)
	$(TOX) -e examples2

check-examples3: $(TOX) $(PY35)
	$(TOX) -e examples3

check-coverage: $(TOX) $(PY35)
	$(TOX) -e coverage

check-unicode: $(TOX) $(PY27)
	$(TOX) -e unicode

check-noformat: check-coverage check-py26 check-py27 check-py33 check-py34 check-py35 check-pypy check-django check-pytest

check: check-format check-noformat

check-fast: lint $(PY26) $(PY35) $(PYPY) $(TOX)
	$(TOX) -e pypy-brief
	$(TOX) -e py35-brief
	$(TOX) -e py26-brief
	$(TOX) -e py35-prettyquick

$(TOX): $(PY35) tox.ini $(TOOLS)
	$(TOOL_INSTALL) tox
	rm -f $(TOX)
	rm -rf .tox
	ln -sf $(TOOL_VIRTUALENV)/bin/tox $(TOX)

$(SPHINX_BUILD): $(TOOL_VIRTUALENV)
	$(TOOL_PYTHON) -m pip install sphinx
	ln -sf $(TOOL_VIRTUALENV)/bin/sphinx-build $(SPHINX_BUILD)

$(SPHINX_AUTOBUILD): $(TOOL_VIRTUALENV)
	$(TOOL_PYTHON) -m pip install sphinx-autobuild
	ln -sf $(TOOL_VIRTUALENV)/bin/sphinx-autobuild $(SPHINX_AUTOBUILD)

$(PYFORMAT): $(TOOL_VIRTUALENV)
	$(TOOL_INSTALL) pyformat
	ln -sf $(TOOL_VIRTUALENV)/bin/pyformat $(PYFORMAT)

$(ISORT): $(ISORT_VIRTUALENV)
	$(ISORT_VIRTUALENV)/bin/python -m pip install isort==4.1.0
	ln -sf $(ISORT_VIRTUALENV)/bin/isort $(ISORT)

$(FLAKE8): $(TOOL_VIRTUALENV)
	$(TOOL_INSTALL) flake8
	ln -sf $(TOOL_VIRTUALENV)/bin/flake8 $(FLAKE8)

clean:
	rm -rf .tox
	rm -rf .hypothesis
	rm -rf docs/_build
	rm -rf $(TOOLS)
	rm -rf $(BUILD_RUNTIMES)/snakepit
	rm -rf $(BUILD_RUNTIMES)/virtualenvs
	find src tests -name "*.pyc" -delete
	find src tests -name "__pycache__" -delete

documentation: $(SPHINX_BUILD) docs/*.rst
	$(SPHINX_BUILD) -W -b html -d docs/_build/doctrees docs docs/_build/html
hypothesis-3.0.1/README.rst

==========
Hypothesis
==========
Hypothesis is a library for testing your Python code against a much larger range
of examples than you would ever want to write by hand. It's based on the Haskell
library QuickCheck, and is designed to integrate seamlessly into your existing
Python unit testing workflow.
Hypothesis is both extremely practical and also advances the state of the art of
unit testing by some way. It's easy to use, stable, and extremely powerful. If
you're not using Hypothesis to test your project then you're missing out.
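
As a quick taste, here is a minimal property-based test. It is only an
illustrative sketch: the test name and the property checked are invented for
this example, not taken from the Hypothesis documentation.

.. code-block:: python

    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.lists(st.integers()))
    def test_reversing_twice_gives_same_list(xs):
        # Hypothesis calls this test with many generated lists of integers,
        # and shrinks any failing example down to a minimal one.
        assert list(reversed(list(reversed(xs)))) == xs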
Hypothesis works with most widely used versions of Python. It officially supports
CPython 2.7, 3.4 and 3.5, as well as PyPy. Most other versions of Python are known
not to work.
-----------------
Links of interest
-----------------
To learn more about how to use Hypothesis, extensive documentation and
examples of usage are `available at readthedocs `_.
If you want to talk to people about using Hypothesis, `we have both an IRC channel
and a mailing list `_.
If you want to receive occasional updates about Hypothesis, including useful tips and tricks, there's a
`TinyLetter mailing list to sign up for them `_.
If you want to contribute to Hypothesis, `instructions are here `_.
If you want to hear from people who are already using Hypothesis, some of them `have written
about it `_.
If you want to create a downstream package of Hypothesis, please read `these guidelines for packagers `_.
-------------------
Ongoing Development
-------------------
Development on Hypothesis is a mix of community-provided and sponsored work. If you wish to contribute,
either financially or through code, `you can read more about the process in the documentation
`_.
hypothesis-3.0.1/appveyor.yml

environment:
  global:
    # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the
    # /E:ON and /V:ON options are not enabled in the batch script interpreter
    # See: http://stackoverflow.com/a/13751649/163740
    CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\scripts\\run_with_env.cmd"

  matrix:
    - PYTHON: "C:\\Python27"
      PYTHON_VERSION: "2.7.8"
      PYTHON_ARCH: "32"

    - PYTHON: "C:\\Python27-x64"
      PYTHON_VERSION: "2.7.8"
      PYTHON_ARCH: "64"

    - PYTHON: "C:\\Python34"
      PYTHON_VERSION: "3.4.1"
      PYTHON_ARCH: "32"

    - PYTHON: "C:\\Python34-x64"
      PYTHON_VERSION: "3.4.1"
      PYTHON_ARCH: "64"

install:
  - ECHO "Filesystem root:"
  - ps: "ls \"C:/\""

  - ECHO "Installed SDKs:"
  - ps: "ls \"C:/Program Files/Microsoft SDKs/Windows\""

  # Install Python (from the official .msi of http://python.org) and pip when
  # not already installed.
  - "powershell ./scripts/install.ps1"

  # Prepend newly installed Python to the PATH of this build (this cannot be
  # done from inside the powershell script as it would require to restart
  # the parent CMD process).
  - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"

  # Check that we have the expected version and architecture for Python
  - "python --version"
  - "python -c \"import struct; print(struct.calcsize('P') * 8)\""

  - "%CMD_IN_ENV% python -m pip install --upgrade setuptools pip"
  - "%CMD_IN_ENV% python -m pip install setuptools pytest==2.8.0 flaky"
  - "%CMD_IN_ENV% python -m pip install .[all]"

build: false # Not a C# project, build stuff at the test step instead.

test_script:
  # Build the compiled extension and run the project tests
  - "%CMD_IN_ENV% python -m pytest tests/cover"
  - "%CMD_IN_ENV% python -m pytest tests/datetime"
  - "%CMD_IN_ENV% python -m pytest tests/fakefactory"
  - "%CMD_IN_ENV% python -m pip uninstall flaky -y"
  - "%CMD_IN_ENV% python -m pytest tests/pytest -p pytester --runpytest subprocess"
hypothesis-3.0.1/benchmarks/test_strategies.py

# coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER

from __future__ import division, print_function, absolute_import

import os
import random

import pytest

import hypothesis.strategies as st
from hypothesis import find, settings, given

settings.register_profile('benchmarking', settings(
    database=None,
))


def setup_module():
    settings.load_profile('benchmarking')


def teardown_module():
    settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default'))


@st.composite
def sorted_three(draw):
    x = draw(st.integers())
    y = draw(st.integers(min_value=x))
    z = draw(st.integers(min_value=y))
    return (x, y, z)


strategies = [
    st.integers(),
    st.text(),
    st.binary(),
    st.floats(),
    st.integers().flatmap(lambda x: st.lists(st.integers(max_value=x))),
    st.integers().filter(lambda x: x % 3 == 1),
    st.tuples(st.integers(), st.integers(), st.integers(), st.integers()),
    st.text() | st.integers(),
    sorted_three(),
    st.text(average_size=20),
]

strategies.extend(list(map(st.lists, strategies)))

# Nothing special, just want a fixed seed list.
seeds = [
    17449917217797177955,
    10900658426497387440,
    3678508287585343099,
    11902419052042326073,
    8648395390016624135,
]

counter = 0
ids = []
for strat in strategies:
    for seed in range(1, 1 + len(seeds)):
        counter += 1
        ids.append('example%d-%r-seed%d' % (counter, strat, seed))


bench = pytest.mark.parametrize(
    ('strategy', 'seed'), [
        (strat, seed) for strat in strategies for seed in seeds
    ],
    ids=ids
)


@bench
def test_empty_given(benchmark, strategy, seed):
    @benchmark
    def run():
        random.seed(seed)

        @given(strategy)
        def test(s):
            pass
        test()


@bench
def test_failing_given(benchmark, strategy, seed):
    @benchmark
    def run():
        random.seed(seed)

        @given(strategy)
        def test(s):
            raise ValueError()

        with pytest.raises(ValueError):
            test()


@bench
def test_one_off_generation(benchmark, strategy, seed):
    @benchmark
    def run():
        strategy.example(random.Random(seed))


@bench
def test_minimize_to_minimal(benchmark, strategy, seed):
    @benchmark
    def run():
        find(strategy, lambda x: True, random=random.Random(seed))


@bench
def test_minimize_to_not_minimal(benchmark, strategy, seed):
    @benchmark
    def run():
        rnd = random.Random(seed)
        minimal = find(strategy, lambda x: True, random=rnd)
        find(strategy, lambda x: x != minimal, random=rnd)


@bench
def test_total_failure_to_minimize(benchmark, strategy, seed):
    @benchmark
    def run():
        rnd = random.Random(seed)
        ex = []

        def is_first(x):
            if ex:
                return x == ex[0]
            else:
                ex.append(x)
                return True

        find(strategy, is_first, random=rnd)
hypothesis-3.0.1/docs/changes.rst

=========
Changelog
=========
This is a record of all past Hypothesis releases and what went into them,
in reverse chronological order. All previous releases should still be available
on pip.
Hypothesis APIs come in three flavours:
* Public: Hypothesis releases since 1.0 are `semantically versioned `_
with respect to these parts of the API. These will not break except between
major version bumps. All APIs mentioned in this documentation are public unless
explicitly noted otherwise.
* Semi-public: These are APIs that are considered ready to use but are not wholly
nailed down yet. They will not break in patch releases and will *usually* not break
in minor releases, but when necessary minor releases may break semi-public APIs.
* Internal: These may break at any time and you really should not use them at
all.
You should generally assume that an API is internal unless you have specific
information to the contrary.
------------------
3.0.1 - 2016-02-18
------------------
* Fix a case where it was possible to trigger an "Unreachable" assertion when
running certain flaky stateful tests.
* Improve shrinking of large stateful tests by eliminating a case where it was
hard to delete early steps.
* Improve efficiency of drawing binary(min_size=n, max_size=n) significantly by
providing a custom implementation for fixed size blocks that can bypass a lot
of machinery.
* Set default home directory based on the current working directory at the
point Hypothesis is imported, not whenever the function first happens to be
called.
------------------
3.0.0 - 2016-02-17
------------------
Codename: This really should have been 2.1.
Externally this looks like a very small release. It has one small breaking change
that probably doesn't affect anyone at all (some behaviour that never really worked
correctly is now outright forbidden) but necessitated a major version bump and one
visible new feature.
Internally this is a complete rewrite. Almost nothing other than the public API is
the same.
New features:
* Addition of data() strategy which allows you to draw arbitrary data interactively
within the test (a short sketch follows this list).
* New "exploded" database format which allows you to more easily check the example
database into a source repository while supporting merging.
* Better management of how examples are saved in the database.
* Health checks will now raise as errors when they fail. It was too easy to have
the warnings be swallowed entirely.
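
As an illustrative sketch of interactive drawing with data() (the test name and
the property checked here are invented for this example):

.. code-block:: python

    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.data())
    def test_can_draw_dependent_values(data):
        # Values are drawn inside the test body, so later draws can
        # depend on earlier ones.
        x = data.draw(st.integers())
        y = data.draw(st.integers(min_value=x))
        assert y >= x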
New limitations:
* choices and streaming strategies may no longer be used with find(). Neither may
data() (this is the change that necessitated a major version bump).
Feature removal:
* The ForkingTestCase executor has gone away. It may return in some more working
form at a later date.
Performance improvements:
* A new model which allows flatmap, composite strategies and stateful testing to
perform *much* better. They should also be more reliable.
* Filtering may in some circumstances have improved significantly. This will
help especially in cases where you have lots of values with individual filters
on them, such as lists(x.filter(...)).
* Modest performance improvements to the general test runner by avoiding expensive
operations
In general your tests should have got faster. If they've instead got significantly
slower, I'm interested in hearing about it.
Data distribution:
The data distribution should have changed significantly. This may uncover bugs the
previous version missed. It may also miss bugs the previous version could have
uncovered. Hypothesis is now producing less strongly correlated data than it used
to, but the correlations are extended over more of the structure.
Shrinking:
Shrinking quality should have improved. In particular Hypothesis can now perform
simultaneous shrinking of separate examples within a single test (previously it
was only able to do this for elements of a single collection). In some cases
performance will have improved, in some cases it will have got worse but generally
shouldn't have by much.
------------------
2.0.0 - 2016-01-10
------------------
Codename: A new beginning
This release cleans up all of the legacy that accrued in the course of
Hypothesis 1.0. These are mostly things that were emitting deprecation warnings
in 1.19.0, but there were a few additional changes.
In particular:
* non-strategy values will no longer be converted to strategies when used in
given or find.
* FailedHealthCheck is now an error and not a warning.
* Handling of non-ascii reprs in user types have been simplified by using raw
strings in more places in Python 2.
* given no longer allows mixing positional and keyword arguments.
* given no longer works with functions with defaults.
* given no longer turns provided arguments into defaults - they will not appear
in the argspec at all.
* the basic() strategy no longer exists.
* the n_ary_tree strategy no longer exists.
* the average_list_length setting no longer exists. Note: If you're using
recursive() this will cause you a significant slowdown. You should
pass explicit average_size parameters to collections in recursive calls.
* @rule can no longer be applied to the same method twice.
* Python 2.6 and 3.3 are no longer officially supported, although in practice
they still work fine.
This also includes two non-deprecation changes:
* given's keyword arguments no longer have to be the rightmost arguments and
can appear anywhere in the method signature.
* The max_shrinks setting would sometimes not have been respected.
-------------------
1.19.0 - 2016-01-09
-------------------
Codename: IT COMES
This release heralds the beginning of a new and terrible age of Hypothesis 2.0.
It's primary purpose is some final deprecations prior to said release. The goal
is that if your code emits no warnings under this release then it will probably run
unchanged under Hypothesis 2.0 (there are some caveats to this: 2.0 will drop
support for some Python versions, and if you're using internal APIs then as usual
that may break without warning).
It does have two new features:
* New @seed() decorator which allows you to manually seed a test. This may be
harmlessly combined with and overrides the derandomize setting.
* settings objects may now be used as a decorator to fix those settings to a
particular @given test (a short sketch combining both features follows this list).
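
A hedged sketch of how the two features above combine (the test body and the
particular settings values are invented for this example):

.. code-block:: python

    from hypothesis import given, seed, settings
    import hypothesis.strategies as st

    @seed(12345)                 # manually seed this test
    @settings(max_examples=50)   # a settings object used as a decorator
    @given(st.integers())
    def test_integers_behave_like_integers(x):
        assert x + 1 > x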
API changes (old usage still works but is deprecated):
* Settings has been renamed to settings (lower casing) in order to make the
decorator usage more natural.
* Functions for the storage directory that were in hypothesis.settings are now
in a new hypothesis.configuration module.
Additional deprecations:
* the average_list_length setting has been deprecated in favour of being
explicit.
* the basic() strategy has been deprecated as it is impossible to support
it under a Conjecture based model, which will hopefully be implemented at
some point in the 2.x series.
* the n_ary_tree strategy (which was never actually part of the public API)
has been deprecated.
* Passing settings or random as keyword arguments to given is deprecated (use
the new functionality instead)
Bug fixes:
* No longer emit PendingDeprecationWarning for __iter__ and StopIteration in
streaming() values.
* When running in health check mode with non strict, don't print quite so
many errors for an exception in reify.
* When an assumption made in a test or a filter is flaky, tests will now
raise Flaky instead of UnsatisfiedAssumption.
-----------------------------------------------------------------------
`1.18.1 `_ - 2015-12-22
-----------------------------------------------------------------------
Two behind the scenes changes:
* Hypothesis will no longer write generated code to the file system. This
will improve performance on some systems (e.g. if you're using
`PythonAnywhere `_ which is running your
code from NFS) and prevent some annoying interactions with auto-restarting
systems.
* Hypothesis will cache the creation of some strategies. This can significantly
improve performance for code that uses flatmap or composite and thus has to
instantiate strategies a lot.
-----------------------------------------------------------------------
`1.18.0 `_ - 2015-12-21
-----------------------------------------------------------------------
Features:
* Tests and find are now explicitly seeded off the global random module. This
means that if you nest one inside the other you will now get a health check
error. It also means that you can control global randomization by seeding
random.
* There is a new random_module() strategy which seeds the global random module
for you and handles things so that you don't get a health check warning if
you use it inside your tests.
* floats() now accepts two new arguments: allow_nan and allow_infinity. These
default to the old behaviour, but when set to False will do what the names
suggest (a short sketch follows this list).
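
As an illustrative sketch of these arguments (the test itself is invented for
this example):

.. code-block:: python

    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.floats(allow_nan=False, allow_infinity=False))
    def test_only_finite_floats(x):
        # NaN would fail the reflexive comparison below, but it is excluded.
        assert x == x
        assert float('-inf') < x < float('inf')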
Bug fixes:
* Fix a bug where tests that used text() on Python 3.4+ would not actually be
deterministic even when explicitly seeded or using the derandomize mode,
because generation depended on dictionary iteration order which was affected
by hash randomization.
* Fix a bug where with complicated strategies the timing of the initial health
check could affect the seeding of the subsequent test, which would also
render supposedly deterministic tests non-deterministic in some scenarios.
* In some circumstances flatmap() could get confused by two structurally
similar things it could generate and would produce a flaky test where the
first time it produced an error but the second time it produced the other
value, which was not an error. The same bug was presumably also possible in
composite().
* flatmap() and composite() initial generation should now be moderately faster.
This will be particularly noticeable when you have many values drawn from the
same strategy in a single run, e.g. constructs like lists(s.flatmap(f)).
Shrinking performance *may* have suffered, but this didn't actually produce
an interestingly worse result in any of the standard scenarios tested.
-----------------------------------------------------------------------
`1.17.1 `_ - 2015-12-16
-----------------------------------------------------------------------
A small bug fix release, which fixes the fact that the 'note' function could
not be used on tests which used the @example decorator to provide explicit
examples.
-----------------------------------------------------------------------
`1.17.0 `_ - 2015-12-15
-----------------------------------------------------------------------
This is actually the same release as 1.16.1, but 1.16.1 has been pulled because
it contains the following additional change that was not intended to be in a
patch release (it's perfectly stable, but is a larger change that should have
required a minor version bump):
* Hypothesis will now perform a series of "health checks" as part of running
your tests. These detect and warn about some common error conditions that
people often run into which wouldn't necessarily have caused the test to fail
but would cause e.g. degraded performance or confusing results.
-----------------------------------------------------------------------
`1.16.1 `_ - 2015-12-14
-----------------------------------------------------------------------
Note: This release has been removed.
A small bugfix release that allows bdists for Hypothesis to be built
under 2.7 - the compat3.py file which had Python 3 syntax wasn't intended
to be loaded under Python 2, but when building a bdist it was. In particular
this would break running setup.py test.
-----------------------------------------------------------------------
`1.16.0 `_ - 2015-12-08
-----------------------------------------------------------------------
There are no public API changes in this release but it includes a behaviour
change that I wasn't comfortable putting in a patch release.
* Functions from hypothesis.strategies will no longer raise InvalidArgument
on bad arguments. Instead the same errors will be raised when a test
using such a strategy is run. This may improve startup time in some
cases, but the main reason for it is so that errors in strategies
won't cause errors in loading, and it can interact correctly with things
like pytest.mark.skipif.
* Errors caused by accidentally invoking the legacy API are now much less
confusing, although still throw NotImplementedError.
* hypothesis.extra.django is 1.9 compatible.
* When tests are run with max_shrinks=0 this will now still rerun the test
on failure and will no longer print "Trying example:" before each run.
Additionally note() will now work correctly when used with max_shrinks=0.
-----------------------------------------------------------------------
`1.15.0 `_ - 2015-11-24
-----------------------------------------------------------------------
A release with two new features.
* A 'characters' strategy for more flexible generation of text with particular
character ranges and types, kindly contributed by `Alexander Shorin `_.
* Add support for preconditions to the rule-based stateful testing (a short
sketch follows this list). Kindly contributed by `Christopher Armstrong `_
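
A minimal sketch of a precondition in a rule-based state machine (the machine
itself is invented for illustration):

.. code-block:: python

    import hypothesis.strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, precondition, rule

    class StackMachine(RuleBasedStateMachine):
        def __init__(self):
            super(StackMachine, self).__init__()
            self.stack = []

        @rule(value=st.integers())
        def push(self, value):
            self.stack.append(value)

        @precondition(lambda self: len(self.stack) > 0)
        @rule()
        def pop(self):
            # Only runs once something has been pushed.
            self.stack.pop()

    TestStack = StackMachine.TestCase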
-----------------------------------------------------------------------
`1.14.0 `_ - 2015-11-01
-----------------------------------------------------------------------
New features:
* Add 'note' function which lets you include additional information in the
final test run's output (a short sketch follows this list).
* Add 'choices' strategy which gives you a choice function that emulates
random.choice.
* Add 'uuid' strategy that generates UUIDs.
* Add 'shared' strategy that lets you create a strategy that just generates a
single shared value for each test run
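
A hedged sketch of the 'note' function (the test content is invented for this
example):

.. code-block:: python

    from hypothesis import given, note
    import hypothesis.strategies as st

    @given(st.lists(st.integers()))
    def test_with_extra_output(xs):
        # note() output is reported alongside the falsifying example.
        note('length of input: %d' % (len(xs),))
        assert len(set(xs)) <= len(xs)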
Bugs:
* Using strategies of the form streaming(x.flatmap(f)) with find or in stateful
testing would have caused InvalidArgument errors when the resulting values
were used (because code that expected to only be called within a test context
would be invoked).
-----------------------------------------------------------------------
`1.13.0 `_ - 2015-10-29
-----------------------------------------------------------------------
This is quite a small release, but deprecates some public API functions
and removes some internal API functionality so gets a minor version bump.
* All calls to the 'strategy' function are now deprecated, even ones which
pass just a SearchStrategy instance (which is still a no-op).
* Never documented hypothesis.extra entry_points mechanism has now been removed (
it was previously how hypothesis.extra packages were loaded and has been deprecated
and unused for some time)
* Some corner cases that could previously have produced an OverflowError when simplifying
failing cases using hypothesis.extra.datetimes (or dates or times) have now been fixed.
* Hypothesis load time for first import has been significantly reduced - it used to be
around 250ms (on my SSD laptop) and now is around 100-150ms. This almost never
matters but was slightly annoying when using it in the console.
* hypothesis.strategies.randoms was previously missing from \_\_all\_\_.
-----------------------------------------------------------------------
`1.12.0 `_ - 2015-10-18
-----------------------------------------------------------------------
* Significantly improved performance of creating strategies using the functions
from the hypothesis.strategies module by deferring the calculation of their
repr until it was needed. This is unlikely to have been a performance issue
for you unless you were using flatmap, composite or stateful testing, but for
some cases it could be quite a significant impact.
* A number of cases where the repr of strategies built from lambdas is improved
* Add dates() and times() strategies to hypothesis.extra.datetimes
* Add new 'profiles' mechanism to the settings system
* Deprecates mutability of Settings, both the Settings.default top level property
and individual settings.
* A Settings object may now be directly initialized from a parent Settings.
* @given should now give a better error message if you attempt to use it with a
function that uses destructuring arguments (it still won't work, but it will
error more clearly).
* A number of spelling corrections in error messages
* py.test should no longer display the intermediate modules Hypothesis generates
when running in verbose mode
* Hypothesis should now correctly handle printing objects with non-ascii reprs
on python 3 when running in a locale that cannot handle ascii printing to
stdout.
* Add a unique=True argument to lists(). This is equivalent to
unique_by=lambda x: x, but offers a more convenient syntax.
-----------------------------------------------------------------------
`1.11.4 `_ - 2015-09-27
-----------------------------------------------------------------------
* Hide modifications Hypothesis needs to make to sys.path by undoing them
after we've imported the relevant modules. This is a workaround for issues
cryptography experienced on windows.
* Slightly improved performance of drawing from sampled_from on large lists
of alternatives.
* Significantly improved performance of drawing from one_of or strategies
using \| (note this includes a lot of strategies internally - floats()
and integers() both fall into this category). There turned out to be a
massive performance regression introduced in 1.10.0 affecting these which
probably would have made tests using Hypothesis significantly slower than
they should have been.
-----------------------------------------------------------------------
`1.11.3 `_ - 2015-09-23
-----------------------------------------------------------------------
* Better argument validation for datetimes() strategy - previously setting
max_year < datetime.MIN_YEAR or min_year > datetime.MAX_YEAR would not have
raised an InvalidArgument error and instead would have behaved confusingly.
* Compatibility with being run on pytest < 2.7 (achieved by disabling the
plugin).
-----------------------------------------------------------------------
`1.11.2 `_ - 2015-09-23
-----------------------------------------------------------------------
Bug fixes:
* Settings(database=my_db) would not be correctly inherited when used as a
default setting, so that newly created settings would use the database_file
setting and create an SQLite example database.
* Settings.default.database = my_db would previously have raised an error and
now works.
* Timeout could sometimes be significantly exceeded if during simplification
there were a lot of examples tried that didn't trigger the bug.
* When loading a heavily simplified example using a basic() strategy from the
database this could cause Python to trigger a recursion error.
* Remove use of deprecated API in pytest plugin so as to not emit warning
Misc:
* hypothesis-pytest is now part of hypothesis core. This should have no
externally visible consequences, but you should update your dependencies to
remove hypothesis-pytest and depend on only Hypothesis.
* Better repr for hypothesis.extra.datetimes() strategies.
* Add .close() method to abstract base class for Backend (it was already present
in the main implementation).
-----------------------------------------------------------------------
`1.11.1 `_ - 2015-09-16
-----------------------------------------------------------------------
Bug fixes:
* When running Hypothesis tests in parallel (e.g. using pytest-xdist) there was a race
condition caused by code generation.
* Example databases are now cached per thread so as to not use sqlite connections from
multiple threads. This should make Hypothesis now entirely thread safe.
* floats() with only min_value or max_value set would have had a very bad distribution.
* Running on 3.5, Hypothesis would have emitted deprecation warnings because of use of
inspect.getargspec
-----------------------------------------------------------------------
`1.11.0 `_ - 2015-08-31
-----------------------------------------------------------------------
* text() with a non-string alphabet would have used the repr() of the alphabet
instead of its contents. This is obviously silly. It now works with any sequence
of things convertible to unicode strings.
* @given will now work on methods whose definitions contains no explicit positional
arguments, only varargs (`bug #118 `_).
This may have some knock on effects because it means that @given no longer changes the
argspec of functions other than by adding defaults.
* Introduction of new @composite feature for more natural definition of strategies you'd
previously have used flatmap for (a short sketch follows this list).
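
A hedged sketch of @composite (the strategy here is invented for illustration;
it mirrors the style you would otherwise express with flatmap):

.. code-block:: python

    import hypothesis.strategies as st

    @st.composite
    def ordered_pairs(draw):
        # Draw a first value, then a second one that depends on it.
        x = draw(st.integers())
        y = draw(st.integers(min_value=x))
        return (x, y)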
-----------------------------------------------------------------------
`1.10.6 `_ - 2015-08-26
-----------------------------------------------------------------------
Fix support for fixtures on Django 1.7.
-----------------------------------------------------------------------
`1.10.4 `_ - 2015-08-21
-----------------------------------------------------------------------
Tiny bug fix release:
* If the database_file setting is set to None, this would have resulted in
an error when running tests. Now it does the same as setting database to
None.
-----------------------------------------------------------------------
`1.10.3 `_ - 2015-08-19
-----------------------------------------------------------------------
Another small bug fix release.
* lists(elements, unique_by=some_function, min_size=n) would have raised a
ValidationError if n > Settings.default.average_list_length because it would
have wanted to use an average list length shorter than the minimum size of
the list, which is impossible. Now it instead defaults to twice the minimum
size in these circumstances.
* basic() strategy would have only ever produced at most ten distinct values
per run of the test (which is bad if you e.g. have it inside a list). This
was obviously silly. It will now produce a much better distribution of data,
both duplicated and non duplicated.
-----------------------------------------------------------------------
`1.10.2 `_ - 2015-08-19
-----------------------------------------------------------------------
This is a small bug fix release:
* star imports from hypothesis should now work correctly.
* example quality for examples using flatmap will be better, as the way it had
previously been implemented was causing problems where Hypothesis was
erroneously labelling some examples as being duplicates.
-----------------------------------------------------------------------
`1.10.0 `_ - 2015-08-04
-----------------------------------------------------------------------
This is just a bugfix and performance release, but it changes some
semi-public APIs, hence the minor version bump.
* Significant performance improvements for strategies which are one\_of()
many branches. In particular this included recursive() strategies. This
should take the case where you use one recursive() strategy as the base
strategy of another from unusably slow (tens of seconds per generated
example) to reasonably fast.
* Better handling of just() and sampled_from() for values which have an
incorrect \_\_repr\_\_ implementation that returns non-ASCII unicode
on Python 2.
* Better performance for flatmap from changing the internal morpher API
to be significantly less general purpose.
* Introduce a new semi-public BuildContext/cleanup API. This allows
strategies to register cleanup activities that should run once the
example is complete. Note that this will interact somewhat weirdly with
find.
* Better simplification behaviour for streaming strategies.
* Don't error on lambdas which use destructuring arguments in Python 2.
* Add some better reprs for a few strategies that were missing good ones.
* The Random instances provided by randoms() are now copyable.
* Slightly more debugging information about simplify when using a debug
verbosity level.
* Support using given for functions with varargs, but not passing arguments
to it as positional.
---------------------------------------------------------------------
`1.9.0 `_ - 2015-07-27
---------------------------------------------------------------------
Codename: The great bundling.
This release contains two fairly major changes.
The first is the deprecation of the hypothesis-extra mechanism. From
now on all the packages that were previously bundled under it other
than hypothesis-pytest (which is a different beast and will remain
separate). The functionality remains unchanged and you can still import
them from exactly the same location, they just are no longer separate
packages.
The second is that this introduces a new way of building strategies
which lets you build up strategies recursively from other strategies.
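
A hedged sketch of the recursive style this enables (the data shape is invented
for illustration):

.. code-block:: python

    import hypothesis.strategies as st

    # Arbitrarily nested lists whose leaves are integers or text.
    nested = st.recursive(
        st.integers() | st.text(),
        lambda children: st.lists(children),
    )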
It also contains the minor change that calling .example() on a
strategy object will give you examples that are more representative of
the actual data you'll get. There used to be some logic in there to make
the examples artificially simple but this proved to be a bad idea.
---------------------------------------------------------------------
`1.8.5 `_ - 2015-07-24
---------------------------------------------------------------------
This contains no functionality changes but fixes a mistake made with
building the previous package that would have broken installation on
Windows.
---------------------------------------------------------------------
`1.8.4 `_ - 2015-07-20
---------------------------------------------------------------------
Bugs fixed:
* When a call to floats() had endpoints which were not floats but merely
convertible to one (e.g. integers), these would be included in the generated
data which would cause it to generate non-floats.
* Splitting lambdas used in the definition of flatmap, map or filter over
multiple lines would break the repr, which would in turn break their usage.
---------------------------------------------------------------------
`1.8.3 `_ - 2015-07-20
---------------------------------------------------------------------
"Falsifying example" would not have been printed when the failure came from an
explicit example.
---------------------------------------------------------------------
`1.8.2 `_ - 2015-07-18
---------------------------------------------------------------------
Another small bugfix release:
* When using ForkingTestCase you would usually not get the falsifying example
printed if the process exited abnormally (e.g. due to os._exit).
* Improvements to the distribution of characters when using text() with a
default alphabet. In particular produces a better distribution of ascii and
whitespace in the alphabet.
---------------------------------------------------------------------
`1.8.1 `_ - 2015-07-17
---------------------------------------------------------------------
This is a small release that contains a workaround for people who have
bad reprs returning non ascii text on Python 2.7. This is not a bug fix
for Hypothesis per se because that's not a thing that is actually supposed
to work, but Hypothesis leans more heavily on repr than is typical so it's
worth having a workaround for.
---------------------------------------------------------------------
`1.8.0 `_ - 2015-07-16
---------------------------------------------------------------------
New features:
* Much more sensible reprs for strategies, especially ones that come from
hypothesis.strategies. These should now have as reprs python code that
would produce the same strategy.
* lists() accepts a unique_by argument which forces the generated lists to
only contain elements unique according to some function key (which must
return a hashable value); a short sketch follows this list.
* Better error messages from flaky tests to help you debug things.
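
A hedged sketch of unique_by (the key function is invented for illustration):

.. code-block:: python

    import hypothesis.strategies as st

    # Lists of pairs that are unique by their first element.
    pairs = st.lists(
        st.tuples(st.integers(), st.integers()),
        unique_by=lambda pair: pair[0],
    )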
Mostly invisible implementation details that may result in finding new bugs
in your code:
* Sets and dictionary generation should now produce a better range of results.
* floats with bounds now focus more on 'critical values', trying to produce
values at edge cases.
* flatmap should now have better simplification for complicated cases, as well
as generally being (I hope) more reliable.
Bug fixes:
* You could not previously use assume() if you were using the forking executor.
---------------------------------------------------------------------
`1.7.2 `_ - 2015-07-10
---------------------------------------------------------------------
This is purely a bug fix release:
* When using floats() with stale data in the database you could sometimes get
values in your tests that did not respect min_value or max_value.
* When getting a Flaky error from an unreliable test it would have incorrectly
displayed the example that caused it.
* 2.6 dependency on backports was incorrectly specified. This would only have
caused you problems if you were building a universal wheel from Hypothesis,
which is not how Hypothesis ships, so unless you're explicitly building wheels
for your dependencies and support Python 2.6 plus a later version of Python
this probably would never have affected you.
* If you use flatmap in a way that the strategy on the right hand side depends
sensitively on the left hand side you may have occasionally seen Flaky errors
caused by producing unreliable examples when minimizing a bug. This use case
may still be somewhat fraught to be honest. This code is due a major rearchitecture
for 1.8, but in the meantime this release fixes the only source of this error that
I'm aware of.
---------------------------------------------------------------------
`1.7.1 `_ - 2015-06-29
---------------------------------------------------------------------
Codename: There is no 1.7.0.
A slight technical hitch with a premature upload means there was a yanked
1.7.0 release. Oops.
The major feature of this release is Python 2.6 support. Thanks to Jeff Meadows
for doing most of the work there.
Other minor features
* strategies now has a permutations() function which returns a strategy
yielding permutations of values from a given collection.
* if you have a flaky test it will print the exception that it last saw before
failing with Flaky, even if you do not have verbose reporting on.
* Slightly experimental git merge script available as "python -m
hypothesis.tools.mergedbs". Instructions on how to use it in the docstring
of that file.
Bug fixes:
* Better performance from use of filter. In particular tests which involve large
numbers of heavily filtered strategies should perform a lot better.
* floats() with a negative min_value would not have worked correctly (worryingly,
it would have just silently failed to run any examples). This is now fixed.
* tests using sampled\_from would error if the number of sampled elements was smaller
than min\_satisfying\_examples.
---------------------------------------------------------------------
`1.6.2 `_ - 2015-06-08
---------------------------------------------------------------------
This is just a few small bug fixes:
* Size bounds were not validated for values for a binary() strategy when
reading examples from the database.
* sampled\_from is now in __all__ in hypothesis.strategies
* floats no longer consider negative integers to be simpler than positive
non-integers
* Small floating point intervals now correctly count members, so if you have a
floating point interval so narrow there are only a handful of values in it,
this will no longer cause an error when Hypothesis runs out of values.
---------------------------------------------------------------------
`1.6.1 `_ - 2015-05-21
---------------------------------------------------------------------
This is a small patch release that fixes a bug where 1.6.0 broke the use
of flatmap with the deprecated API and assumed the passed in function returned
a SearchStrategy instance rather than converting it to a strategy.
---------------------------------------------------------------------
`1.6.0 `_ - 2015-05-21
---------------------------------------------------------------------
This is a smallish release designed to fix a number of bugs and smooth out
some weird behaviours.
* Fix a critical bug in flatmap where it would reuse old strategies. If all
your flatmap code was pure you're fine. If it's not, I'm surprised it's
working at all. In particular if you want to use flatmap with django models,
you desperately need to upgrade to this version.
* flatmap simplification performance should now be better in some cases where
it previously had to redo work.
* Fix for a bug where invalid unicode data with surrogates could be generated
during simplification (it was already filtered out during actual generation).
* The Hypothesis database is now keyed off the name of the test instead of the
type of data. This makes much more sense now with the new strategies API and
is generally more robust. This means you will lose old examples on upgrade.
* The database will now not delete values which fail to deserialize correctly,
just skip them. This is to handle cases where multiple incompatible strategies
share the same key.
* find now also saves and loads values from the database, keyed off a hash of the
function you're finding from.
* Stateful tests now serialize and load values from the database. They should have
before, really. This was a bug.
* Passing a different verbosity level into a test would not have worked entirely
correctly, leaving off some messages. This is now fixed.
* Fix a bug where derandomized tests with unicode characters in the function
body would error on Python 2.7.
---------------------------------------------------------------------
`1.5.0 `_ - 2015-05-14
---------------------------------------------------------------------
Codename: Strategic withdrawal.
The purpose of this release is a radical simplification of the API for building
strategies. Instead of the old approach of @strategy.extend and things that
get converted to strategies, you just build strategies directly.
The old method of defining strategies will still work until Hypothesis 2.0,
because it's a major breaking change, but will now emit deprecation warnings.
The new API is also a lot more powerful as the functions for defining strategies
give you a lot of dials to turn. See :doc:`the updated data section ` for
details.
Other changes:
* Mixing keyword and positional arguments in a call to @given is deprecated as well.
* There is a new setting called 'strict'. When set to True, Hypothesis will raise
warnings instead of merely printing them. Turning it on by default is inadvisable because
it means that Hypothesis minor releases can break your code, but it may be useful for
making sure you catch all uses of deprecated APIs.
* max_examples in settings is now interpreted as meaning the maximum number
of unique (ish) examples satisfying assumptions. A new setting max_iterations
which defaults to a larger value has the old interpretation.
* Example generation should be significantly faster due to a new faster parameter
selection algorithm. This will mostly show up for simple data types - for complex
ones the parameter selection is almost certainly dominated.
* Simplification has some new heuristics that will tend to cut down on cases
where it could previously take a very long time.
* timeout would previously not have been respected in cases where there were a lot
of duplicate examples. You probably wouldn't have previously noticed this because
max_examples counted duplicates, so this was very hard to hit in a way that mattered.
* A number of internal simplifications to the SearchStrategy API.
* You can now access the current Hypothesis version as hypothesis.__version__.
* A top level function is provided for running the stateful tests without the
TestCase infrastructure.
---------------------------------------------------------------------
`1.4.0 `_ - 2015-05-04
---------------------------------------------------------------------
Codename: What a state.
The *big* feature of this release is the new and slightly experimental
stateful testing API. You can read more about that in :doc:`the
appropriate section `.
Two minor features that were driven out in the course of developing this:
* You can now set settings.max_shrinks to limit the number of times
Hypothesis will try to shrink arguments to your test. If this is set to
<= 0 then Hypothesis will not rerun your test and will just raise the
failure directly. Note that due to technical limitations if max_shrinks
is <= 0 then Hypothesis will print *every* example it calls your test
with rather than just the failing one. Note also that I don't consider
setting max_shrinks to zero a sensible way to run your tests and it
should really be considered a debug feature.
* There is a new debug level of verbosity which is even *more* verbose than
verbose. You probably don't want this.
Breakage of semi-public SearchStrategy API:
* It is now a required invariant of SearchStrategy that if u simplifies to
v then it is not the case that strictly_simpler(u, v). i.e. simplifying
should not *increase* the complexity even though it is not required to
decrease it. Enforcing this invariant led to finding some bugs where
simplifying of integers, floats and sets was suboptimal.
* Integers in basic data are now required to fit into 64 bits. As a result
python integer types are now serialized as strings, and some types have
stopped using quite so needlessly large random seeds.
Hypothesis's stateful testing was then turned upon Hypothesis itself, which led
to an amazing number of minor bugs being found.
Bugs fixed (most but not all from the result of stateful testing) include:
* Serialization of streaming examples was flaky in a way that you would
probably never notice: If you generate a template, simplify it, serialize
it, deserialize it, serialize it again and then deserialize it you would
get the original stream instead of the simplified one.
* If you reduced max_examples below the number of examples already saved in
the database, you would have got a ValueError. Additionally, if you had
more than max_examples in the database all of them would have been
considered.
* @given will no longer count duplicate examples (which it never called
your function with) towards max_examples. This may result in your tests
running slower, but that's probably just because they're trying more
examples.
* General improvements to example search which should result in better
performance and higher quality examples. In particular parameters which
have a history of producing useless results will be more aggressively
culled. This is useful both because it decreases the chance of useless
examples and also because it's much faster to not check parameters which
we were unlikely to ever pick!
* integers_from and lists of types with only one value (e.g. [None]) would
previously have had a very high duplication rate so you were probably
only getting a handful of examples. They now have a much lower
duplication rate, as well as the improvements to search making this
less of a problem in the first place.
* You would sometimes see simplification taking significantly longer than
your defined timeout. This would happen because timeout was only being
checked after each *successful* simplification, so if Hypothesis was
spending a lot of time unsuccessfully simplifying things it wouldn't
stop in time. The timeout is now applied for unsuccessful simplifications
too.
* In Python 2.7, integers_from strategies would have failed during
simplification with an OverflowError if their starting point was at or
near to the maximum size of a 64-bit integer.
* flatmap and map would have failed if called with a function without a
__name__ attribute.
* If max_examples was less than min_satisfying_examples this would always
error. Now min_satisfying_examples is capped to max_examples. Note that
if you have assumptions to satisfy here this will still cause an error.
Some minor quality improvements:
* Lists of streams, flatmapped strategies and basic strategies should now
have slightly better simplification.
---------------------------------------------------------------------
`1.3.0 `_ - 2015-04-22
---------------------------------------------------------------------
New features:
* New verbosity level API for printing intermediate results and exceptions.
* New specifier for strings generated from a specified alphabet.
* Better error messages for tests that are failing because of a lack of enough
examples.
Bug fixes:
* Fix error where use of ForkingTestCase would sometimes result in too many
open files.
* Fix error where saving a failing example that used flatmap could error.
* Implement simplification for sampled_from, which apparently never supported
it previously. Oops.
General improvements:
* Better range of examples when using one_of or sampled_from.
* Fix some pathological performance issues when simplifying lists of complex
values.
* Fix some pathological performance issues when simplifying examples that
require unicode strings with high codepoints.
* Random will now simplify to more readable examples.
---------------------------------------------------------------------
`1.2.1 `_ - 2015-04-16
---------------------------------------------------------------------
A small patch release for a bug in the new executors feature. Tests which require
doing something to their result in order to fail would have instead reported as
flaky.
---------------------------------------------------------------------
`1.2.0 `_ - 2015-04-15
---------------------------------------------------------------------
Codename: Finders keepers.
A bunch of new features and improvements.
* Provide a mechanism for customizing how your tests are executed.
* Provide a test runner that forks before running each example. This allows
better support for testing native code which might trigger a segfault or a C
level assertion failure.
* Support for using Hypothesis to find examples directly rather than just as
a test runner.
* New streaming type which lets you generate infinite lazily loaded streams of
data - perfect if you need a number of examples but don't know how many.
* Better support for large integer ranges. You can now use integers_in_range
with ranges of basically any size. Previously large ranges would have eaten
up all your memory and taken forever.
* Integers produce a wider range of data than before - previously they would
only rarely produce integers which didn't fit into a machine word. Now it's
much more common. This percolates to other numeric types which build on
integers.
* Better validation of arguments to @given. Some situations that would
previously have caused silently wrong behaviour will now raise an error.
* Include +/- sys.float_info.max in the set of floating point edge cases that
Hypothesis specifically tries.
* Fix some bugs in floating point ranges which happen when given
+/- sys.float_info.max as one of the endpoints... (really any two floats that
are sufficiently far apart so that x, y are finite but y - x is infinite).
This would have resulted in generating infinite values instead of ones inside
the range.
---------------------------------------------------------------------
`1.1.1 `_ - 2015-04-07
---------------------------------------------------------------------
Codename: Nothing to see here
This is just a patch release put out because it fixed some internal bugs that would
block the Django integration release but did not actually affect anything anyone could
previously have been using. It also contained a minor quality fix for floats that
I'd happened to have finished in time.
* Fix some internal bugs with object lifecycle management that were impossible to
hit with the previously released versions but broke hypothesis-django.
* Bias floating point numbers somewhat less aggressively towards very small numbers
---------------------------------------------------------------------
`1.1.0 `_ - 2015-04-06
---------------------------------------------------------------------
Codename: No-one mention the M word.
* Unicode strings are more strongly biased towards ascii characters. Previously they
would generate all over the space. This is mostly so that people who try to
shape their unicode strings with assume() have less of a bad time.
* A number of fixes to data deserialization code that could theoretically have
caused mysterious bugs when using an old version of a Hypothesis example
database with a newer version. To the best of my knowledge a change that could
have triggered this bug has never actually been seen in the wild. Certainly
no-one ever reported a bug of this nature.
* Out of the box support for Decimal and Fraction.
* New dictionary specifier for dictionaries with variable keys.
* Significantly faster and higher quality simplification, especially for
collections of data.
* New filter() and flatmap() methods on Strategy for better ways of building
strategies out of other strategies.
* New BasicStrategy class which allows you to define your own strategies from
scratch without needing an existing matching strategy or being exposed to the
full horror of the non-public nature of the SearchStrategy interface.
---------------------------------------------------------------------
`1.0.0 `_ - 2015-03-27
---------------------------------------------------------------------
Codename: Blast-off!
There are no code changes in this release. This is precisely the 0.9.2 release
with some updated documentation.
------------------
0.9.2 - 2015-03-26
------------------
Codename: T-1 days.
* floats_in_range would not actually have produced floats in the given range unless that
range happened to be (0, 1). Fix this.
------------------
0.9.1 - 2015-03-25
------------------
Codename: T-2 days.
* Fix a bug where if you defined a strategy using map on a lambda then the results would not be saved in the database.
* Significant performance improvements when simplifying examples using lists, strings or bounded integer ranges.
------------------
0.9.0 - 2015-03-23
------------------
Codename: The final countdown
This release could also be called 1.0-RC1.
It contains a teeny tiny bugfix, but the real point of this release is to declare
feature freeze. There will be zero functionality changes between 0.9.0 and 1.0 unless
something goes really really wrong. No new features will be added, no breaking API changes
will occur, etc. This is the final shakedown before I declare Hypothesis stable and ready
to use and throw a party to celebrate.
Bug bounty for any bugs found between now and 1.0: I will buy you a drink (alcoholic,
caffeinated, or otherwise) and shake your hand should we ever find ourselves in the
same city at the same time.
The one tiny bugfix:
* Under pypy, databases would fail to close correctly when garbage collected, leading to a memory leak and a confusing error message if you were repeatedly creating databases and not closing them. It is very unlikely you were doing this and the chances of you ever having noticed this bug are very low.
------------------
0.7.2 - 2015-03-22
------------------
Codename: Hygienic macros or bust
* You can now name an argument to @given 'f' and it won't break (issue #38)
* strategy_test_suite is now named strategy_test_suite as the documentation claims and not in fact strategy_test_suitee
* Settings objects can now be used as a context manager to temporarily override the default values inside their context.
------------------
0.7.1 - 2015-03-21
------------------
Codename: Point releases go faster
* Better string generation by parametrizing by a limited alphabet
* Faster string simplification - previously if simplifying a string with high range unicode characters it would try every unicode character smaller than that. This was pretty pointless. Now it stops after it's a short range (it can still reach smaller ones through recursive calls because of other simplifying operations).
* Faster list simplification by first trying a binary chop down the middle
* Simultaneous simplification of identical elements in a list. So if a bug only triggers when you have duplicates but you drew e.g. [-17, -17], this will now simplify to [0, 0].
-------------------
0.7.0 - 2015-03-20
-------------------
Codename: Starting to look suspiciously real
This is probably the last minor release prior to 1.0. It consists of stability
improvements, a few usability things designed to make Hypothesis easier to try
out, and filing off some final rough edges from the API.
* Significant speed and memory usage improvements
* Add an example() method to strategy objects to give an example of the sort of data that the strategy generates.
* Remove .descriptor attribute of strategies
* Rename descriptor_test_suite to strategy_test_suite
* Rename the few remaining uses of descriptor to specifier (descriptor already has a defined meaning in Python)
------------------
0.6.0 - 2015-03-13
------------------
Codename: I'm sorry, were you using that API?
This is primarily a "simplify all the weird bits of the API" release. As a result there are a lot of breaking changes. If
you just use @given with core types then you're probably fine.
In particular:
* Stateful testing has been removed from the API
* The way the database is used has been rendered less useful (sorry). The feature for reassembling values saved from other
tests doesn't currently work. This will probably be brought back in post 1.0.
* SpecificationMapper is no longer a thing. Instead there is an ExtMethod called strategy which you extend to specify how
to convert other types to strategies.
* Settings are now extensible so you can add your own for configuring a strategy
* MappedSearchStrategy no longer needs an unpack method
* Basically all the SearchStrategy internals have changed massively. If you implemented SearchStrategy directly rather than
using MappedSearchStrategy talk to me about fixing it.
* Change to the way extra packages work. You now specify the package. This
must have a load() method. Additionally any modules in the package will be
loaded in under hypothesis.extra
Bug fixes:
* Fix for a bug where calling falsify on a lambda with a non-ascii character
in its body would error.
Hypothesis Extra:
hypothesis-fakefactory\: An extension for using faker data in hypothesis. Depends
on fake-factory.
------------------
0.5.0 - 2015-02-10
------------------
Codename: Read all about it.
Core hypothesis:
* Add support back in for pypy and python 3.2
* @given functions can now be invoked with some arguments explicitly provided. If all arguments that hypothesis would have provided are passed in then no falsification is run.
* Related to the above, this means that you can now use pytest fixtures and mark.parametrize with Hypothesis without either interfering with the other.
* Breaking change: @given no longer works for functions with varargs (varkwargs are fine). This might be added back in at a later date.
* Windows is now fully supported. A limited version (just the tests with none of the extras) of the test suite is run on windows with each commit so it is now a first class citizen of the Hypothesis world.
* Fix a bug for fuzzy equality of equal complex numbers with different reprs (this can happen when one coordinate is zero). This shouldn't affect users - that feature isn't used anywhere public facing.
* Fix generation of floats on windows and 32-bit builds of python. I was using some struct.pack logic that only worked on certain word sizes.
* When a test times out and hasn't produced enough examples this now raises a Timeout subclass of Unfalsifiable.
* Small search spaces are better supported. Previously something like a @given(bool, bool) would have failed because it couldn't find enough examples. Hypothesis is now aware of the fact that these are small search spaces and will not error in this case.
* Improvements to parameter search in the case of hard to satisfy assume. Hypothesis will now spend less time exploring parameters that are unlikely to provide anything useful.
* Increase chance of generating "nasty" floats
* Fix a bug that would have caused unicode warnings if you had a sampled_from that was mixing unicode and byte strings.
* Added a standard test suite that you can use to validate a custom strategy you've defined is working correctly.
Hypothesis extra:
First off, introducing Hypothesis extra packages!
These are packages that are separated out from core Hypothesis because they have one or more dependencies. Every
hypothesis-extra package is pinned to a specific point release of Hypothesis and will have some version requirements
on its dependency. They use entry_points so you will usually not need to explicitly import them, just have them installed
on the path.
This release introduces two of them:
hypothesis-datetime:
Does what it says on the tin: Generates datetimes for Hypothesis. Just install the package and datetime support will start
working.
Depends on pytz for timezone support
hypothesis-pytest:
A very rudimentary pytest plugin. All it does right now is hook the display of falsifying examples into pytest reporting.
Depends on pytest.
------------------
0.4.3 - 2015-02-05
------------------
Codename: TIL narrow Python builds are a thing
This just fixes the one bug.
* Apparently there is such a thing as a "narrow python build" and OS X ships with these by default
for python 2.7. These are builds where you only have two bytes worth of unicode. As a result,
generating unicode was completely broken on OS X. Fix this by only generating unicode codepoints
in the range supported by the system.
------------------
0.4.2 - 2015-02-04
------------------
Codename: O(dear)
This is purely a bugfix release:
* Provide sensible external hashing for all core types. This will significantly improve
performance of tracking seen examples which happens in literally every falsification
run. For Hypothesis fixing this cut 40% off the runtime of the test suite. The behaviour
is quadratic in the number of examples so if you're running the default configuration
this will be less extreme (Hypothesis's test suite runs at a higher number of examples
than default), but you should still see a significant improvement.
* Fix a bug in formatting of complex numbers where the string could get incorrectly truncated.
------------------
0.4.1 - 2015-02-03
------------------
Codename: Cruel and unusual edge cases
This release is mostly about better test case generation.
Enhancements:
* Has a cool release name
* text_type (str in python 3, unicode in python 2) example generation now
actually produces interesting unicode instead of boring ascii strings.
* floating point numbers are generated over a much wider range, with particular
attention paid to generating nasty numbers - nan, infinity, large and small
values, etc.
* examples can be generated using pieces of examples previously saved in the
database. This allows interesting behaviour that has previously been discovered
to be propagated to other examples.
* improved parameter exploration algorithm which should allow it to more reliably
hit interesting edge cases.
* Timeout can now be disabled entirely by setting it to any value <= 0.
Bug fixes:
* The descriptor on a OneOfStrategy could be wrong if you had descriptors which
were equal but should not be coalesced. e.g. a strategy for one_of((frozenset({int}), {int}))
would have reported its descriptor as {int}. This is unlikely to have caused you
any problems.
* If you had strategies that could produce NaN (which float previously couldn't but
e.g. a Just(float('nan')) could) then this would have sent hypothesis into an infinite
loop that would have only been terminated when it hit the timeout.
* Given elements that can take a long time to minimize, minimization of floats or tuples
could be quadratic or worse in that value. You should now see much better performance
for simplification, albeit at some cost in quality.
Other:
* A lot of internals have been rewritten. This shouldn't affect you at all, but
it opens the way for certain of hypothesis's oddities to be a lot more extensible by
users. Whether this is a good thing may be up for debate...
------------------
0.4.0 - 2015-01-21
------------------
FLAGSHIP FEATURE: Hypothesis now persists examples for later use. It stores
data in a local SQLite database and will reuse it for all tests of the same
type.
LICENSING CHANGE: Hypothesis is now released under the Mozilla Public License
2.0. This applies to all versions from 0.4.0 onwards until further notice.
The previous license remains applicable to all code prior to 0.4.0.
Enhancements:
* Printing of failing examples. I was finding that the pytest runner was not
doing a good job of displaying these, and that Hypothesis itself could do
much better.
* Drop dependency on six for cross-version compatibility. It was easy
enough to write the shim for the small set of features that we care about
and this lets us avoid a moderately complex dependency.
* Some improvements to the statistical distribution of selecting from small (<=
3 element) collections.
* Improvements to parameter selection for finding examples.
Bugs fixed:
* could_have_produced for lists, dicts and other collections would not have
examined the elements and thus when using a union of different types of
list this could result in Hypothesis getting confused and passing a value
to the wrong strategy. This could potentially result in exceptions being
thrown from within simplification.
* sampled_from would not work correctly on a single element list.
* Hypothesis could get *very* confused by values which are
equal despite having different types being used in descriptors. Hypothesis
now has its own more specific version of equality it uses for descriptors
and tracking. It is always more fine grained than Python equality: Things
considered != are not considered equal by hypothesis, but some things that
are considered == are distinguished. If your test suite uses both frozenset
and set tests this bug is probably affecting you.
------------------
0.3.2 - 2015-01-16
------------------
* Fix a bug where if you specified floats_in_range with integer arguments
Hypothesis would error in example simplification.
* Improve the statistical distribution of the floats you get for the
floats_in_range strategy. I'm not sure whether this will affect users in
practice but it took my tests for various conditions from flaky to rock
solid so it at the very least improves discovery of the artificial cases
I'm looking for.
* Improved repr() for strategies and RandomWithSeed instances.
* Add detection for flaky test cases where hypothesis managed to find an
example which breaks it but on the final invocation of the test it does
not raise an error. This will typically happen with 'too much recursion'
errors but could conceivably happen in other circumstances too.
* Provide a "derandomized" mode. This allows you to run hypothesis with
zero real randomization, making your build nice and deterministic. The
tests run with a seed calculated from the function they're testing so you
should still get a good distribution of test cases.
* Add a mechanism for more conveniently defining tests which just sample
from some collection.
* Fix for a really subtle bug deep in the internals of the strategy table.
If you were to define instance strategies for both a parent class and one
or more of its subclasses, you could in some circumstances get the strategy
for the wrong superclass of an instance.
It is very unlikely anyone has ever encountered this in the wild, but it
is conceivably possible given that a mix of namedtuple and tuple, which do
exhibit this pattern of strategy definition, is used fairly extensively
inside hypothesis.
------------------
0.3.1 - 2015-01-13
------------------
* Support for generation of frozenset and Random values
* Correct handling of the case where a called function mutates its argument.
This involved introducing a notion of strategies knowing how to copy
their argument. The default method should be entirely acceptable and the
worst case is that it will continue to have the old behaviour if you
don't mark your strategy as mutable, so this shouldn't break anything.
* Fix for a bug where some strategies did not correctly implement
could_have_produced. It is very unlikely that any of these would have
been seen in the wild, and the consequences if they had been would have
been minor.
* Re-export the @given decorator from the main hypothesis namespace. It's
still available at the old location too.
* Minor performance optimisation for simplifying long lists.
------------------
0.3.0 - 2015-01-12
------------------
* Complete redesign of the data generation system. Extreme breaking change
for anyone who was previously writing their own SearchStrategy
implementations. These will not work any more and you'll need to modify
them.
* New settings system allowing more global and modular control of Verifier
behaviour.
* Decouple SearchStrategy from the StrategyTable. This leads to much more
composable code which is a lot easier to understand.
* A significant amount of internal API renaming and moving. This may also
break your code.
* Expanded available descriptors, allowing for generating integers or
floats in a specific range.
* Significantly more robust. A very large number of small bug fixes, none
of which anyone is likely to have ever noticed.
* Deprecation of support for pypy and python 3 prior to 3.3.
Supported versions are 2.7.x, 3.3.x, 3.4.x. I expect all of these to
remain officially supported for a very long time. I would not be
surprised to add pypy support back in later but I'm not going to do so
until I know someone cares about it. In the meantime it will probably
still work.
------------------
0.2.2 - 2015-01-08
------------------
* Fix an embarrassing complete failure of the installer caused by my being
bad at version control
------------------
0.2.1 - 2015-01-07
------------------
* Fix a bug in the new stateful testing feature where you could make
__init__ a @requires method. Simplification would not always work if the
prune method was able to successfully shrink the test.
------------------
0.2.0 - 2015-01-07
------------------
* It's aliiive.
* Improve python 3 support using six.
* Distinguish between byte and unicode types.
* Fix issues where FloatStrategy could raise.
* Allow stateful testing to request constructor args.
* Fix for issue where test annotations would timeout based on when the
module was loaded instead of when the test started
------------------
0.1.4 - 2013-12-14
------------------
* Make verification runs time bounded with a configurable timeout
------------------
0.1.3 - 2013-05-03
------------------
* Bugfix: Stateful testing behaved incorrectly with subclassing.
* Complex number support
* support for recursive strategies
* different error for hypotheses with unsatisfiable assumptions
------------------
0.1.2 - 2013-03-24
------------------
* Bugfix: Stateful testing was not minimizing correctly and could
throw exceptions.
* Better support for recursive strategies.
* Support for named tuples.
* Much faster integer generation.
------------------
0.1.1 - 2013-03-24
------------------
* Python 3.x support via 2to3.
* Use new style classes (oops).
------------------
0.1.0 - 2013-03-23
------------------
* Introduce stateful testing.
* Massive rewrite of internals to add flags and strategies.
------------------
0.0.5 - 2013-03-13
------------------
* No changes except trying to fix packaging
------------------
0.0.4 - 2013-03-13
------------------
* No changes except that I checked in a failing test case for 0.0.3
so had to replace the release. Doh
------------------
0.0.3 - 2013-03-13
------------------
* Improved a few internals.
* Opened up creating generators from instances as a general API.
* Test integration.
------------------
0.0.2 - 2013-03-12
------------------
* Starting to tighten up on the internals.
* Change API to allow more flexibility in configuration.
* More testing.
------------------
0.0.1 - 2013-03-10
------------------
* Initial release.
* Basic working prototype. Demonstrates idea, probably shouldn't be used.
hypothesis-3.0.1/docs/community.rst 0000664 0000000 0000000 00000005127 12661275660 0017440 0 ustar 00root root 0000000 0000000 =========
Community
=========
The Hypothesis community is small for the moment but is full of excellent people
who can answer your questions and help you out. Please do join us.
The two major places for community discussion are:
* `The mailing list `_.
* An IRC channel: #hypothesis on freenode.
Feel free to use these to ask for help, provide feedback, or discuss anything remotely
Hypothesis related at all.
The IRC channel is the more active of the two. If you don't know how to use
IRC, don't worry about it. Just `click here to sign up to IRCCloud and log in `_
(don't worry, it's free).
(IRCCloud is made by friends of mine, but that's not why I'm recommending it. I'm
recommending it because it's great).
---------------
Code of conduct
---------------
Hypothesis's community is an inclusive space, and everyone in it is expected to abide by a code of conduct.
At the high level the code of conduct goes like this:
1. Be kind
2. Be respectful
3. Be helpful
While it is impossible to enumerate everything that is unkind, disrespectful or unhelpful, here are some specific things that are definitely against the code of conduct:
1. -isms and -phobias (e.g. racism, sexism, transphobia and homophobia) are unkind, disrespectful *and* unhelpful. Just don't.
2. All software is broken. This is not a moral failing on the part of the authors. Don't give people a hard time for bad code.
3. It's OK not to know things. Everybody was a beginner once, nobody should be made to feel bad for it.
4. It's OK not to *want* to know something. If you think someone's question is fundamentally flawed, you should still ask permission before explaining what they should actually be asking.
5. Note that "I was just joking" is not a valid defence.
What happens when this goes wrong?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For minor infractions, I'll just call people on it and ask them to apologise and not do it again. You should
feel free to do this too if you're comfortable doing so.
Major infractions and repeat offenders will be banned from the community.
Also, people who have a track record of bad behaviour outside of the Hypothesis community may be banned even
if they obey all these rules if their presence is making people uncomfortable.
At the current volume level it's not hard for me to pay attention to the whole community, but if you think I've
missed something please feel free to alert me. You can either message me as DRMacIver on freenode or send me
an email at david@drmaciver.com.
hypothesis-3.0.1/docs/conf.py 0000664 0000000 0000000 00000004474 12661275660 0016165 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
# -*- coding: utf-8 -*-
# on_rtd is whether we are on readthedocs.org
import os
import sys
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append(
os.path.join(os.path.dirname(__file__), "..", "src")
)
from hypothesis import __version__
autodoc_member_order = 'bysource'
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.intersphinx',
]
templates_path = ['_templates']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Hypothesis'
copyright = u'2015, David R. MacIver'
author = u'David R. MacIver'
version = __version__
release = __version__
language = None
exclude_patterns = ['_build']
pygments_style = 'sphinx'
todo_include_todos = False
intersphinx_mapping = {
'python': ('http://docs.python.org/', None),
}
# -- Options for HTML output ----------------------------------------------
if not on_rtd: # only import and set the theme if we're building docs locally
    import sphinx_rtd_theme
    html_theme = 'sphinx_rtd_theme'
    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_static_path = ['_static']
htmlhelp_basename = 'Hypothesisdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
}
latex_documents = [
(master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',
u'David R. MacIver', 'manual'),
]
man_pages = [
(master_doc, 'hypothesis', u'Hypothesis Documentation',
[author], 1)
]
texinfo_documents = [
(master_doc, 'Hypothesis', u'Hypothesis Documentation',
author, 'Hypothesis', 'One line description of project.',
'Miscellaneous'),
]
hypothesis-3.0.1/docs/data.rst 0000664 0000000 0000000 00000030552 12661275660 0016325 0 ustar 00root root 0000000 0000000 =============================
What you can generate and how
=============================
The general philosophy of Hypothesis data generation is that everything
should be possible to generate and most things should be easy. Most things in
the standard library should be generatable out of the box; while this is more
aspirational than achieved, the state of the art is already pretty good.
This document is a guide to what strategies are available for generating data
and how to build them. Strategies have a variety of other important internal
features, such as how they simplify, but the data they can generate is the only
public part of their API.
Functions for building strategies are all available in the hypothesis.strategies
module. The salient functions from it are as follows:
.. automodule:: hypothesis.strategies
:members:
~~~~~~~~~~~~~~~~
Choices
~~~~~~~~~~~~~~~~
Sometimes you need an input to be from a known set of items. Hypothesis gives you two ways to do this: choice() and sampled_from().
Examples of how to use them both are below. First up, choice:
.. code:: python
from hypothesis import given, strategies as st

@given(user=st.text(min_size=1), service=st.text(min_size=1), choice=st.choices())
def test_tickets(user, service, choice):
    t = choice(('ST', 'LT', 'TG', 'CT'))
    # asserts go here.
This means t will randomly be one of the items in ('ST', 'LT', 'TG', 'CT'), just as if you were calling random.choice() on that sequence.
A different, and probably better, way to do this is to use sampled_from:
.. code:: python
from hypothesis import given, strategies as st

@given(
    user=st.text(min_size=1), service=st.text(min_size=1),
    t=st.sampled_from(('ST', 'LT', 'TG', 'CT')))
def test_tickets(user, service, t):
    # asserts and test code go here.
    pass
Values from sampled_from will not be copied, so you should be careful with mutable data. This makes it great for the above use case, but may not always work out.
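As a quick sketch of that caveat (the test body here is purely illustrative): because sampled_from hands you the same objects on every draw rather than copies, a mutation made during one example is still visible when the same value is drawn again.

.. code:: python

  from hypothesis import given, strategies as st

  @given(st.sampled_from(([1, 2], [3, 4])))
  def test_mutation_leaks_between_examples(xs):
      # xs is one of the two lists above, *not* a copy of it, so this append
      # will still be there the next time the same list is drawn.
      xs.append(99)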
~~~~~~~~~~~~~~~~
Infinite streams
~~~~~~~~~~~~~~~~
Sometimes you need examples of a particular type to keep your test going but
you're not sure how many you'll need in advance. For this, we have streaming
types.
.. code-block:: pycon
>>> from hypothesis.strategies import streaming, integers
>>> x = streaming(integers()).example()
>>> x
Stream(...)
>>> x[2]
209
>>> x
Stream(32, 132, 209, ...)
>>> x[10]
130
>>> x
Stream(32, 132, 209, 843, -19, 58, 141, -1046, 37, 243, 130, ...)
Think of a Stream as an infinite list where we've only evaluated as much as
we need to. As per above, you can index into it and the stream will be evaluated up to
that index and no further.
You can iterate over it too (warning: iter on a stream given to you
by Hypothesis in this way will never terminate):
.. code-block:: pycon
>>> it = iter(x)
>>> next(it)
32
>>> next(it)
132
>>> next(it)
209
>>> next(it)
843
Slicing will also work, and will give you back Streams. If you set an upper
bound then iter on those streams *will* terminate:
.. code-block:: pycon
>>> list(x[:5])
[32, 132, 209, 843, -19]
>>> y = x[1::2]
>>> y
Stream(...)
>>> y[0]
132
>>> y[1]
843
>>> y
Stream(132, 843, ...)
You can also apply a function to transform a stream:
.. code-block:: pycon
>>> t = streaming(int).example()
>>> tm = t.map(lambda n: n * 2)
>>> tm[0]
26
>>> t[0]
13
>>> tm
Stream(26, ...)
>>> t
Stream(13, ...)
map creates a new stream where each element of the stream is the function
applied to the corresponding element of the original stream. Evaluating the
new stream will force evaluating the original stream up to that index.
(Warning: This isn't the map builtin. In Python 3 the builtin map should do
more or less the right thing, but in Python 2 it will never terminate and
will just eat up all your memory as it tries to build an infinitely long list)
These are the only operations a Stream supports. There are a few more internal
ones, but you shouldn't rely on them.
~~~~~~~~~~~~~~~~~~~
Adapting strategies
~~~~~~~~~~~~~~~~~~~
Often it is the case that a strategy doesn't produce exactly what you want it
to and you need to adapt it. Sometimes you can do this in the test, but this
hurts reuse because you then have to repeat the adaptation in every test.
Hypothesis gives you ways to build strategies from other strategies given
functions for transforming the data.
-------
Mapping
-------
Map is probably the easiest and most useful of these to use. If you have a
strategy s and a function f, then an example from s.map(f) is
f(s.example()). i.e. we draw an example from s and then apply f to it.
e.g.:
.. code-block:: pycon
>>> lists(integers()).map(sorted).example()
[1, 5, 17, 21, 24, 30, 45, 82, 88, 88, 90, 96, 105]
Note that many things that you might use mapping for can also be done with the
builds function in hypothesis.strategies.
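For reference, here is a small sketch of builds in use (the Point namedtuple is purely illustrative): builds calls the target with arguments drawn from the strategies you pass it, which often reads more clearly than an equivalent map.

.. code:: python

  from collections import namedtuple

  from hypothesis.strategies import builds, integers

  Point = namedtuple('Point', ('x', 'y'))

  # Draw an integer for each argument and pass them to Point,
  # producing values such as Point(x=-7, y=104).
  points = builds(Point, x=integers(), y=integers())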
---------
Filtering
---------
filter lets you reject some examples. s.filter(f).example() is some example
x of s such that f(x) is truthy.
.. code-block:: pycon
>>> integers().filter(lambda x: x > 11).example()
1873
>>> integers().filter(lambda x: x > 11).example()
73
It's important to note that filter isn't magic and if your condition is too
hard to satisfy then this can fail:
.. code-block:: pycon
>>> integers().filter(lambda x: False).example()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/david/projects/hypothesis/src/hypothesis/searchstrategy/strategies.py", line 175, in example
'Could not find any valid examples in 20 tries'
hypothesis.errors.NoExamples: Could not find any valid examples in 20 tries
In general you should try to use filter only to avoid corner cases that you
don't want rather than attempting to cut out a large chunk of the search space.
A technique that often works well here is to use map to first transform the data
and then use filter to remove things that didn't work out. So for example if you
wanted pairs of integers (x,y) such that x < y you could do the following:
.. code-block:: pycon
>>> tuples(integers(), integers()).map(
... lambda x: tuple(sorted(x))).filter(lambda x: x[0] != x[1]).example()
(42, 1281698)
----------------------------
Chaining strategies together
----------------------------
Finally there is flatmap. Flatmap draws an example, then turns that example
into a strategy, then draws an example from *that* strategy.
It may not be obvious why you want this at first, but it turns out to be
quite useful because it lets you generate different types of data with
relationships to each other.
For example suppose we wanted to generate a list of lists of the same
length:
.. code-block:: pycon
>>> from hypothesis.strategies import integers, lists
>>> from hypothesis import find
>>> rectangle_lists = integers(min_value=0, max_value=10).flatmap(lambda n:
... lists(lists(integers(), min_size=n, max_size=n)))
>>> find(rectangle_lists, lambda x: True)
[]
>>> find(rectangle_lists, lambda x: len(x) >= 10)
[[], [], [], [], [], [], [], [], [], []]
>>> find(rectangle_lists, lambda t: len(t) >= 3 and len(t[0]) >= 3)
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]
>>> find(rectangle_lists, lambda t: sum(len(s) for s in t) >= 10)
[[0], [0], [0], [0], [0], [0], [0], [0], [0], [0]]
In this example we first choose a length for our inner lists, then we build a
strategy which generates lists containing lists precisely of that length. The
finds show what simple examples for this look like.
Most of the time you probably don't want flatmap, but unlike filter and map
which are just conveniences for things you could just do in your tests,
flatmap allows genuinely new data generation that you wouldn't otherwise be
able to easily do.
(If you know Haskell: Yes, this is more or less a monadic bind. If you don't
know Haskell, ignore everything in these parentheses. You do not need to
understand anything about monads to use this, or anything else in Hypothesis).
--------------
Recursive data
--------------
Sometimes the data you want to generate has a recursive definition. e.g. if you
wanted to generate JSON data, valid JSON is:
1. Any float, any boolean, any unicode string.
2. Any list of valid JSON data
3. Any dictionary mapping unicode strings to valid JSON data.
The problem is that you cannot call a strategy recursively and expect it to not just
blow up and eat all your memory.
The way Hypothesis handles this is with the 'recursive' function in hypothesis.strategies,
to which you pass a base case and a function that, given a strategy for your data type,
returns a new strategy for it. So for example:
.. code-block:: pycon
>>> import hypothesis.strategies as st
>>> json = st.recursive(st.floats() | st.booleans() | st.text() | st.none(),
... lambda children: st.lists(children) | st.dictionaries(st.text(), children))
>>> json.example()
{'': None, '\U000b3407\U000b3407\U000b3407': {
'': '"é""é\x11', '\x13': 1.6153068016570349e-282,
'\x00': '\x11\x11\x11"\x11"é"éé\x11""éé"\x11"éé\x11éé\x11é\x11',
'\x80': 'é\x11\x11\x11\x11\x11\x11', '\x13\x13\x00\x80\x80\x00': 4.643602465868519e-144
}, '\U000b3407': None}
>>> json.example()
[]
>>> json.example()
'\x06ě\U000d25e4H\U000d25e4\x06ě'
That is, we start with our leaf data and then we augment it by allowing lists and dictionaries of anything we can generate as JSON data.
The size control of this works by limiting the maximum number of values that can be drawn from the base strategy. So for example if
we wanted to only generate really small JSON we could do this as:
.. code-block:: pycon
>>> small_lists = st.recursive(st.booleans(), st.lists, max_leaves=5)
>>> small_lists.example()
False
>>> small_lists.example()
[[False], [], [], [], [], []]
>>> small_lists.example()
False
>>> small_lists.example()
[]
~~~~~~~~~~~~~~~~~~~~
Composite strategies
~~~~~~~~~~~~~~~~~~~~
The @composite decorator lets you combine other strategies in more or less
arbitrary ways. It's probably the main thing you'll want to use for
complicated custom strategies.
The composite decorator works by giving you a function as the first argument
that you can use to draw examples from other strategies. For example, the
following gives you a list and an index into it:
.. code-block:: python
@composite
def list_and_index(draw, elements=integers()):
    xs = draw(lists(elements, min_size=1))
    i = draw(integers(min_value=0, max_value=len(xs) - 1))
    return (xs, i)
'draw(s)' is a function that should be thought of as returning s.example(),
except that the result is reproducible and will minimize correctly. The
decorated function has the initial argument removed from the list, but will
accept all the others in the expected order. Defaults are preserved.
.. code-block:: pycon
>>> list_and_index()
list_and_index()
>>> list_and_index().example()
([5585, 4073], 1)
>>> list_and_index(booleans())
list_and_index(elements=booleans())
>>> list_and_index(booleans()).example()
([False, False, True], 1)
Note that the repr will work exactly like it does for all the built-in
strategies: It will be a function that you can call to get the strategy in
question, with values provided only if they do not match the defaults.
You can use assume inside composite functions:
.. code-block:: python
@composite
def distinct_strings_with_common_characters(draw):
    x = draw(text(min_size=1))
    y = draw(text(alphabet=x))
    assume(x != y)
    return (x, y)
This works as assume normally would, filtering out any examples for which the
passed in argument is falsey.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drawing interactively in tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is also the ``data()`` strategy, which gives you a means of using
strategies interactively. Rather than having to specify everything up front in
``@given`` you can draw from strategies in the body of your test:
.. code-block:: python
@given(data())
def test_draw_sequentially(data):
    x = data.draw(integers())
    y = data.draw(integers(min_value=x))
    assert x < y
If the test fails, each draw will be printed with the falsifying example. e.g.
the above is wrong (it has a boundary condition error), so it will print:
::
Falsifying example: test_draw_sequentially(data=data(...))
Draw 1: 0
Draw 2: 0
As you can see, data drawn this way is simplified as usual.
hypothesis-3.0.1/docs/database.rst 0000664 0000000 0000000 00000006047 12661275660 0017162 0 ustar 00root root 0000000 0000000 ===============================
The Hypothesis Example Database
===============================
When Hypothesis finds a bug it stores enough information in its database to reproduce it. This
enables you to have a classic testing workflow of find a bug, fix a bug, and be confident that
this is actually doing the right thing because Hypothesis will start by retrying the examples that
broke things last time.
-----------
Limitations
-----------
The database is best thought of as a cache that you never need to invalidate: Information may be
lost when you upgrade a Hypothesis version or change your test, so you shouldn't rely on it for
correctness - if there's an example you want to ensure occurs each time then :ref:`there's a feature for
including them in your source code ` - but it helps the development
workflow considerably by making sure that the examples you've just found are reproduced.
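For instance, the source-code feature referred to above looks roughly like this (a hedged sketch; the test itself is illustrative): the explicitly listed example always runs, whatever is or is not in the database.

.. code:: python

  from hypothesis import given, example
  from hypothesis.strategies import integers

  @given(integers())
  @example(0)  # always tried, regardless of the database contents
  def test_multiplying_by_zero(x):
      assert x * 0 == 0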
--------------
File locations
--------------
The default (and currently only) storage format is some rather weirdly unidiomatic JSON saved
in an sqlite3 database. The standard location for that is .hypothesis/examples.db in your current
working directory. You can override this, either by setting the database\_file property on
a settings object (you probably want to specify it on settings.default) or by setting the
HYPOTHESIS\_DATABASE\_FILE environment variable.
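As a sketch of the settings-object route (the profile name and path are illustrative, and this assumes the settings profile API available in this era of Hypothesis):

.. code:: python

  from hypothesis import settings

  # Register and activate a profile whose database_file points somewhere
  # other than the default .hypothesis/examples.db in the working directory.
  settings.register_profile(
      'custom_db', settings(database_file='/tmp/hypothesis-examples.db'))
  settings.load_profile('custom_db')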
Note: There are other files in .hypothesis but everything other than the examples.db will be
transparently created on demand. You don't need to and probably shouldn't check those into git.
Adding .hypothesis/eval_source to your .gitignore or equivalent is probably a good idea.
--------------------------------------------
Upgrading Hypothesis and changing your tests
--------------------------------------------
The design of the Hypothesis database is such that you can put arbitrary data in the database
and not get wrong behaviour. When you upgrade Hypothesis, old data *might* be invalidated, but
this should happen transparently. It should never be the case that e.g. changing the strategy
that generates an argument sometimes gives you data from the old strategy.
-----------------------------
Sharing your example database
-----------------------------
It may be convenient to share an example database between multiple machines - e.g. having a CI
server continually running to look for bugs, then sharing any changes it makes.
The only currently supported workflow for this (though it would be easy enough to add new ones)
is via checking the examples.db file into git. Hypothesis provides a git merge script, executable
as python -m hypothesis.tools.mergedbs.
For example, in order to make this work with the standard location:
In .gitattributes add:
.. code::
.hypothesis/examples.db merge=hypothesisdb
And in .git/config add:
.. code::
[merge "hypothesisdb"]
name = Hypothesis database files
driver = python -m hypothesis.tools.mergedbs %O %A %B
This will cause the Hypothesis merge script to be used when both sides of a merge have changed
the example database.
hypothesis-3.0.1/docs/details.rst 0000664 0000000 0000000 00000034565 12661275660 0017051 0 ustar 00root root 0000000 0000000 =============================
Details and advanced features
=============================
This is an account of slightly less common Hypothesis features that you don't need
to get started but will nevertheless make your life easier.
----------------------
Additional test output
----------------------
Normally the output of a failing test will look something like:
.. code::
Falsifying example: test_a_thing(x=1, y="foo")
With the ``repr`` of each keyword argument being printed.
Sometimes this isn't enough, either because you have values with a ``repr`` that
isn't very descriptive or because you need to see the output of some
intermediate steps of your test. That's where the ``note`` function comes in:
.. code:: pycon
>>> from hypothesis import given, note, strategies as st
>>> @given(st.lists(st.integers()), st.randoms())
... def test_shuffle_is_noop(ls, r):
... ls2 = list(ls)
... r.shuffle(ls2)
... note("Shuffle: %r" % (ls2))
... assert ls == ls2
...
>>> test_shuffle_is_noop()
Falsifying example: test_shuffle_is_noop(ls=[0, 0, 1], r=RandomWithSeed(0))
Shuffle: [0, 1, 0]
Traceback (most recent call last):
...
AssertionError
The note is printed in the final run of the test in order to include any
additional information you might need in your test.
------------------
Making assumptions
------------------
Sometimes Hypothesis doesn't give you exactly the right sort of data you want - it's
mostly of the right shape, but some examples won't work and you don't want to care about
them. You *can* just ignore these by aborting the test early, but this runs the risk of
accidentally testing a lot less than you think you are. Also it would be nice to spend
less time on bad examples - if you're running 200 examples per test (the default) and
it turns out 150 of those examples don't match your needs, that's a lot of wasted time.
The way Hypothesis handles this is to let you specify things which you *assume* to be
true. This lets you abort a test in a way that marks the example as bad rather than
failing the test. Hypothesis will use this information to try to avoid similar examples
in future.
For example suppose we had the following test:
.. code:: python
from hypothesis import given
from hypothesis.strategies import floats

@given(floats())
def test_negation_is_self_inverse(x):
    assert x == -(-x)
Running this gives us:
.. code::
Falsifying example: test_negation_is_self_inverse(x=float('nan'))
AssertionError
This is annoying. We know about NaN and don't really care about it, but as soon as Hypothesis
finds a NaN example it will get distracted by that and tell us about it. Also the test will
fail and we want it to pass.
So let's block off this particular example:
.. code:: python
from hypothesis import given, assume
from hypothesis.strategies import floats
from math import isnan

@given(floats())
def test_negation_is_self_inverse_for_non_nan(x):
    assume(not isnan(x))
    assert x == -(-x)
And this passes without a problem.
:func:`~hypothesis.core.assume` throws an exception which
terminates the test when provided with a false argument.
It's essentially an :ref:`assert `, except that
the exception it throws is one that Hypothesis
identifies as meaning that this is a bad example, not a failing test.
In order to avoid the easy trap where you assume a lot more than you intended, Hypothesis
will fail a test when it can't find enough examples passing the assumption.
If we'd written:
.. code:: python
from hypothesis import given, assume
from hypothesis.strategies import floats

@given(floats())
def test_negation_is_self_inverse_for_non_nan(x):
    assume(False)
    assert x == -(-x)
Then on running we'd have got the exception:
.. code::
Unsatisfiable: Unable to satisfy assumptions of hypothesis test_negation_is_self_inverse_for_non_nan. Only 0 examples found after 0.0791318 seconds
~~~~~~~~~~~~~~~~~~~
How good is assume?
~~~~~~~~~~~~~~~~~~~
Hypothesis has an adaptive exploration strategy to try to avoid things which falsify
assumptions, which should generally result in it still being able to find examples in
hard to find situations.
Suppose we had the following:
.. code:: python
@given(lists(integers()))
def test_sum_is_positive(xs):
    assert sum(xs) > 0
Unsurprisingly this fails and gives the falsifying example [].
Adding ``assume(xs)`` to this removes the trivial empty example and gives us [0].
Adding ``assume(all(x > 0 for x in xs))`` and it passes: A sum of a list of
positive integers is positive.
The reason that this should be surprising is not that it doesn't find a
counter-example, but that it finds enough examples at all.
In order to make sure something interesting is happening, suppose we wanted to
try this for long lists. e.g. suppose we added an assume(len(xs) > 10) to it.
This should basically never find an example: A naive strategy would find fewer
than one in a thousand examples, because if each element of the list is
negative with probability half, you'd have to have ten of these go the right
way by chance. In the default configuration Hypothesis gives up long before
it's tried 1000 examples (by default it tries 200).
Here's what happens if we try to run this:
.. code:: python
@given(lists(integers()))
def test_sum_is_positive(xs):
    assume(len(xs) > 10)
    assume(all(x > 0 for x in xs))
    print(xs)
    assert sum(xs) > 0
In: test_sum_is_positive()
[17, 12, 7, 13, 11, 3, 6, 9, 8, 11, 47, 27, 1, 31, 1]
[6, 2, 29, 30, 25, 34, 19, 15, 50, 16, 10, 3, 16]
[25, 17, 9, 19, 15, 2, 2, 4, 22, 10, 10, 27, 3, 1, 14, 17, 13, 8, 16, 9, 2...
[17, 65, 78, 1, 8, 29, 2, 79, 28, 18, 39]
[13, 26, 8, 3, 4, 76, 6, 14, 20, 27, 21, 32, 14, 42, 9, 24, 33, 9, 5, 15, ...
[2, 1, 2, 2, 3, 10, 12, 11, 21, 11, 1, 16]
As you can see, Hypothesis doesn't find *many* examples here, but it finds some - enough to
keep it happy.
In general if you *can* shape your strategies better to your tests you should - for example
``integers_in_range(1, 1000)`` is a lot better than ``assume(1 <= x <= 1000)``, but assume will take
you a long way if you can't.
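As a sketch of the difference (written with the current integers(min_value=..., max_value=...) spelling rather than the older integers_in_range name):

.. code:: python

  from hypothesis import given, assume
  from hypothesis.strategies import integers

  @given(integers(min_value=1, max_value=1000))
  def test_shaped(x):
      # every generated example is already in range, so no work is wasted
      assert 1 <= x <= 1000

  @given(integers())
  def test_assumed(x):
      # most generated examples are rejected before the test body runs
      assume(1 <= x <= 1000)
      assert 1 <= x <= 1000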
---------------------
Defining strategies
---------------------
The type of object that is used to explore the examples given to your test
function is called a :class:`~hypothesis.SearchStrategy`.
These are created using the functions
exposed in the :mod:`hypothesis.strategies` module.
Many of these strategies expose a variety of arguments you can use to customize
generation. For example for integers you can specify ``min`` and ``max`` values of
integers you want:
.. code:: python
>>> from hypothesis.strategies import integers
>>> integers()
RandomGeometricIntStrategy() | WideRangeIntStrategy()
>>> integers(min_value=0)
IntegersFromStrategy(0)
>>> integers(min_value=0, max_value=10)
BoundedIntStrategy(0, 10)
If you want to see exactly what a strategy produces you can ask for an example:
.. code:: python
>>> integers(min_value=0, max_value=10).example()
7
Many strategies are built out of other strategies. For example, if you want
to define a tuple you need to say what goes in each element:
.. code:: python
>>> from hypothesis.strategies import tuples
>>> tuples(integers(), integers()).example()
(-1953, 85733644253897814191482551773726674360154905303788466954)
Further details are :doc:`available in a separate document `.
------------------------------------
The gory details of given parameters
------------------------------------
The :func:`@given ` decorator may be used
to specify what arguments of a function should
be parametrized over. You can use either positional or keyword arguments or a mixture
of the two.
For example all of the following are valid uses:
.. code:: python
@given(integers(), integers())
def a(x, y):
pass
@given(integers())
def b(x, y):
pass
@given(y=integers())
def c(x, y):
pass
@given(x=integers())
def d(x, y):
pass
@given(x=integers(), y=integers())
def e(x, **kwargs):
pass
@given(x=integers(), y=integers())
def f(x, *args, **kwargs):
pass
class SomeTest(TestCase):
@given(integers())
def test_a_thing(self, x):
pass
The following are not:
.. code:: python
@given(integers(), integers(), integers())
def g(x, y):
pass
@given(integers())
def h(x, *args):
pass
@given(integers(), x=integers())
def i(x, y):
pass
@given()
def j(x, y):
pass
The rules for determining what are valid uses of given are as follows:
1. You may pass any keyword argument to given.
2. Positional arguments to given are equivalent to the rightmost named
arguments for the test function.
3. Positional arguments may not be used if the underlying test function has
varargs or arbitrary keywords.
4. Functions tested with given may not have any defaults.
The reason for the "rightmost named arguments" behaviour is so that
using :func:`@given ` with instance methods works: self
will be passed to the function as normal and not be parametrized over.
The function returned by given has all the arguments that the original test did, minus the ones that are being filled in by given.
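For example, here is a minimal sketch of rule 2: with a single positional strategy only the rightmost argument, ``y``, is filled in, so the wrapped test still expects ``x`` from whoever calls it (the names and the supplied value are only illustrative).
.. code:: python
from hypothesis import given
from hypothesis.strategies import integers

@given(integers())
def test_rightmost_argument_is_filled(x, y):
    # y is generated by Hypothesis; x is whatever the caller supplies.
    assert x == "supplied by the caller"

# Because only y is filled in, the returned function still takes x:
test_rightmost_argument_is_filled(x="supplied by the caller")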
-------------------------
Custom function execution
-------------------------
Hypothesis provides you with a hook that lets you control how it runs
examples.
This lets you do things like set up and tear down around each example, run
examples in a subprocess, transform coroutine tests into normal tests, etc.
The way this works is by introducing the concept of an executor. An executor
is essentially a function that takes a block of code and runs it. The default
executor is:
.. code:: python
def default_executor(function):
return function()
You define executors by defining a method execute_example on a class. Any
test methods on that class with :func:`@given ` used on them will use
``self.execute_example`` as an executor with which to run tests. For example,
the following executor runs all its code twice:
.. code:: python
from unittest import TestCase
class TestTryReallyHard(TestCase):
@given(integers())
def test_something(self, i):
perform_some_unreliable_operation(i)
def execute_example(self, f):
f()
return f()
Note: The functions you use in map, etc. will run *inside* the executor. i.e.
they will not be called until you invoke the function passed to ``execute_example``.
An executor must be able to handle being passed a function which returns None,
otherwise it won't be able to run normal test cases. So for example the following
executor is invalid:
.. code:: python
from unittest import TestCase
class TestRunTwice(TestCase):
def execute_example(self, f):
return f()()
and should be rewritten as:
.. code:: python
from unittest import TestCase
import inspect
class TestRunTwice(TestCase):
def execute_example(self, f):
result = f()
if inspect.isfunction(result):
result = result()
return result
Methods of a BasicStrategy, however, may be called at any time. This may
happen inside your executor or outside. This is why they have a "Warning you
have no control over the lifecycle of these values" attached.
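As a slightly more realistic sketch, an executor can be used to run per-example setup and teardown. This example uses a temporary scratch directory purely for illustration; substitute whatever resource your tests actually need.
.. code:: python
import shutil
import tempfile
from unittest import TestCase

from hypothesis import given
from hypothesis.strategies import text

class TestWithPerExampleSetup(TestCase):
    def execute_example(self, f):
        # Create a fresh scratch directory for every generated example
        # and clean it up afterwards, whatever the test does.
        self.scratch = tempfile.mkdtemp()
        try:
            return f()
        finally:
            shutil.rmtree(self.scratch)

    @given(text())
    def test_something_using_scratch_space(self, s):
        # self.scratch exists and is empty for each generated example.
        pass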
-------------------------------
Using Hypothesis to find values
-------------------------------
You can use Hypothesis's data exploration features to find values satisfying
some predicate:
.. code:: python
>>> from hypothesis import find
>>> from hypothesis.strategies import sets, lists, integers
>>> find(lists(integers()), lambda x: sum(x) >= 10)
[10]
>>> find(lists(integers()), lambda x: sum(x) >= 10 and len(x) >= 3)
[0, 0, 10]
>>> find(sets(integers()), lambda x: sum(x) >= 10 and len(x) >= 3)
{0, 1, 9}
The first argument to :func:`~hypothesis.find` describes data in the usual way for an argument to
given, and supports :doc:`all the same data types `. The second is a
predicate it must satisfy.
Of course not all conditions are satisfiable. If you ask Hypothesis for an
example to a condition that is always false it will raise an error:
.. code:: python
>>> find(integers(), lambda x: False)
Traceback (most recent call last):
...
hypothesis.errors.NoSuchExample: No examples of condition lambda x: <unknown>
>>> from hypothesis.strategies import booleans
>>> find(booleans(), lambda x: False)
Traceback (most recent call last):
...
hypothesis.errors.DefinitelyNoSuchExample: No examples of condition lambda x: <unknown> (all 2 considered)
(The "lambda x: <unknown>" is because Hypothesis can't retrieve the source code
of lambdas from the interactive python console. Most of the time it will give a
better error message which contains the actual condition.)
The reason for the two different types of errors is that there are only a small
number of booleans, so it is feasible for Hypothesis to enumerate all of them
and simply check that your condition is never true.
.. _providing-explicit-examples:
---------------------------
Providing explicit examples
---------------------------
You can explicitly ask Hypothesis to try a particular example as follows:
.. code:: python
from hypothesis import given, example
from hypothesis.strategies import text
@given(text())
@example("Hello world")
@example(x="Some very long string")
def test_some_code(x):
assert True
Hypothesis will run all examples you've asked for first. If any of them fail it
will not go on to look for more examples.
It doesn't matter whether you put the example decorator before or after given.
Any permutation of the decorators in the above will do the same thing.
Note that examples can be positional or keyword based. If they're positional then
they will be filled in from the right when calling, so things like the following
will also work:
.. code:: python
from unittest import TestCase
from hypothesis import given, example
from hypothesis.strategies import text
class TestThings(TestCase):
@given(text())
@example("Hello world")
@example(x="Some very long string")
def test_some_code(self, x):
assert True
It is *not* permitted for a single example to be a mix of positional and
keyword arguments. Either is fine, and you can use one in one example and the
other in another example if for some reason you really want to, but a single
example must be consistent.
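For instance, in the following sketch the two decorators that are used are fine, while the commented-out one would be rejected because it mixes the two styles (the values are only illustrative).
.. code:: python
from hypothesis import given, example
from hypothesis.strategies import text

@given(text(), text())
@example("one", "two")        # fine: all positional
@example(x="one", y="two")    # fine: all keyword
# @example("one", y="two")    # not allowed: mixes positional and keyword
def test_consistent_examples(x, y):
    pass  # the body is not the point here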
==============================
Ongoing Hypothesis Development
==============================
Hypothesis releases and development are managed by me, `David R. MacIver `_.
I am the primary author of Hypothesis.
*However*, I no longer do unpaid feature development on Hypothesis. My roles as leader of the project are:
1. Helping other people do feature development on Hypothesis
2. Fixing bugs and other code health issues
3. Improving documentation
4. General release management work
5. Planning the general roadmap of the project
6. Doing sponsored development on tasks that are too large or in depth for other people to take on
So all new features must either be sponsored or implemented by someone else. That being said, I take a fairly active
role in shepherding pull requests and helping people write a new feature (see `the
contributing guidelines `_ for
details and `this pull request
`_ for an example of how the process goes). This isn't
"patches welcome", it's "I will help you write a patch".
All enhancement tickets on GitHub are tagged with either `help-wanted `_
if I think they're viable for someone else to pick up or `for-a-modest-fee `_ if
I think they are not and that if you want them you should probably talk to me about paid development.
You are of course entirely welcome to ask for paid development on something that is marked help-wanted,
or indeed to try to tackle something marked for-a-modest-fee yourself if you're feeling ambitious. These labels
are very much intended as guidelines rather than rules.
.. _hypothesis-django:
===========================
Hypothesis for Django users
===========================
Hypothesis offers a number of features specific for Django testing, available
in the :mod:`hypothesis[django]` :doc:`extra `.
Using it is quite straightforward: All you need to do is subclass
:class:`hypothesis.extra.django.TestCase` or
:class:`hypothesis.extra.django.TransactionTestCase`
and you can use :func:`@given ` as normal,
and the transactions will be per example
rather than per test function as they would be if you used @given with a normal
django test suite (this is important because your test function will be called
multiple times and you don't want them to interfere with each other). Test cases
on these classes that do not use
:func:`@given ` will be run as normal.
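For example, a minimal sketch of such a test case (the class and test names here are purely illustrative):
.. code-block:: python
from hypothesis import given
from hypothesis.extra.django import TestCase
from hypothesis.strategies import text

class TestWidgets(TestCase):
    @given(text())
    def test_per_example_transaction(self, name):
        # Each generated example runs inside its own transaction, so
        # anything saved here is rolled back before the next example.
        pass

    def test_plain_methods_still_work(self):
        # Methods without @given run exactly once, as normal.
        pass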
I strongly recommend not using
:class:`~hypothesis.extra.django.TransactionTestCase`
unless you really have to.
Because Hypothesis runs this in a loop the performance problems it normally has
are significantly exacerbated and your tests will be really slow.
In addition to the above, Hypothesis has some limited support for automatically
deriving strategies for your model types, which you can then customize further.
.. warning::
Hypothesis creates saved models. This will run inside your testing
transaction when using the test runner, but if you use the dev console this
will leave debris in your database.
For example, using the trivial django project I have for testing:
.. code-block:: python
>>> from hypothesis.extra.django.models import models
>>> from toystore.models import Customer
>>> c = models(Customer).example()
>>> c
>>> c.email
'jaime.urbina@gmail.com'
>>> c.name
'\U00109d3d\U000e07be\U000165f8\U0003fabf\U000c12cd\U000f1910\U00059f12\U000519b0\U0003fabf\U000f1910\U000423fb\U000423fb\U00059f12\U000e07be\U000c12cd\U000e07be\U000519b0\U000165f8\U0003fabf\U0007bc31'
>>> c.age
-873375803
Hypothesis has just created this with whatever the relevant type of data is.
Obviously the customer's age is implausible, so let's fix that:
.. code-block:: python
>>> from hypothesis.strategies import integers
>>> c = models(Customer, age=integers(min_value=0, max_value=120)).example()
>>> c
>>> c.age
5
You can use this to override any fields you like. Sometimes this will be
mandatory: If you have a non-nullable field of a type Hypothesis doesn't know
how to create (e.g. a foreign key) then the models function will error unless
you explicitly pass a strategy to use there.
Foreign keys are not automatically derived. If they're nullable they will default
to always being null, otherwise you always have to specify them. e.g. suppose
we had a Shop type with a foreign key to company, we would define a strategy
for it as:
.. code:: python
shop_strategy = models(Shop, company=models(Company))
---------------
Tips and tricks
---------------
Custom field types
==================
If you have a custom Django field type you can register it with Hypothesis's
model deriving functionality by registering a default strategy for it:
.. code-block:: python
>>> from toystore.models import CustomishField, Customish
>>> models(Customish).example()
hypothesis.errors.InvalidArgument: Missing arguments for mandatory field
customish for model Customish
>>> from hypothesis.extra.django.models import add_default_field_mapping
>>> from hypothesis.strategies import just
>>> add_default_field_mapping(CustomishField, just("hi"))
>>> x = models(Customish).example()
>>> x.customish
'hi'
Note that this mapping is on exact type. Subtypes will not inherit it.
Generating child models
=======================
For the moment there's no explicit support in hypothesis-django for generating
dependent models. i.e. a Company model will generate no Shops. However if you
want to generate some dependent models as well, you can emulate this by using
the *flatmap* function as follows:
.. code:: python
from hypothesis.strategies import lists, just
def generate_with_shops(company):
return lists(models(Shop, company=just(company))).map(lambda _: company)
company_with_shops_strategy = models(Company).flatmap(generate_with_shops)
Let's unpack what this is doing:
The way flatmap works is that we draw a value from the original strategy, then
apply a function to it which gives us a new strategy. We then draw a value from
*that* strategy. So in this case we're first drawing a company, and then we're
drawing a list of shops belonging to that company: The *just* strategy is a
strategy such that drawing it always produces the individual value, so
``models(Shop, company=just(company))`` is a strategy that generates a Shop belonging
to the original company.
So the following code would give us a list of shops all belonging to the same
company:
.. code:: python
models(Company).flatmap(lambda c: lists(models(Shop, company=just(c))))
The only difference from this and the above is that we want the company, not
the shops. This is where the inner map comes in. We build the list of shops
and then throw it away, instead returning the company we started with. This
works because the models that Hypothesis generates are saved in the database,
so we're essentially running the inner strategy purely for the side effect of
creating those children in the database.
========================
Who is using Hypothesis?
========================
This is a page for listing people who are using Hypothesis and how excited they
are about that. If that's you and your name is not on the list, `this file is in
Git `_
and I'd love it if you sent me a pull request to fix that.
--------------------------------------------------------------------------------------
Kristian Glass - Director of Technology at `LaterPay GmbH `_
--------------------------------------------------------------------------------------
Hypothesis has been brilliant for expanding the coverage of our test cases,
and also for making them much easier to read and understand,
so we're sure we're testing the things we want in the way we want.
-----------------------------------------------
`Seth Morton `_
-----------------------------------------------
When I first heard about Hypothesis, I knew I had to include it in my two
open-source Python libraries, `natsort `_
and `fastnumbers `_ . Quite frankly,
I was a little appalled at the number of bugs and "holes" I found in the code. I can
now say with confidence that my libraries are more robust to "the wild." In
addition, Hypothesis gave me the confidence to expand these libraries to fully
support Unicode input, which I never would have had the stomach for without such
thorough testing capabilities. Thanks!
-------------------------------------------
`Sixty North `_
-------------------------------------------
At Sixty North we use Hypothesis for testing
`Segpy `_ an open source Python library for
shifting data between Python data structures and SEG Y files which contain
geophysical data from the seismic reflection surveys used in oil and gas
exploration.
This is our first experience of property-based testing – as opposed to example-based
testing. Not only are our tests more powerful, they are also much better
explanations of what we expect of the production code. In fact, the tests are much
closer to being specifications. Hypothesis has located real defects in our code
which went undetected by traditional test cases, simply because Hypothesis is more
relentlessly devious about test case generation than us mere humans! We found
Hypothesis particularly beneficial for Segpy because SEG Y is an antiquated format
that uses legacy text encodings (EBCDIC) and even a legacy floating point format
we implemented from scratch in Python.
Hypothesis is sure to find a place in most of our future Python codebases and many
existing ones too.
-------------------------------------------
`mulkieran `_
-------------------------------------------
Just found out about this excellent QuickCheck for Python implementation and
ran up a few tests for my `bytesize `_
package last night. Refuted a few hypotheses in the process.
Looking forward to using it with a bunch of other projects as well.
-----------------------------------------------
`Adam Johnson `_
-----------------------------------------------
I have written a small library to serialize ``dict``\s to MariaDB's dynamic
columns binary format,
`mariadb-dyncol `_. When I first
developed it, I thought I had tested it really well - there were hundreds of
test cases, some of them even taken from MariaDB's test suite itself. I was
ready to release.
Lucky for me, I tried Hypothesis with David at the PyCon UK sprints. Wow! It
found bug after bug after bug. Even after a first release, I thought of a way
to make the tests do more validation, which revealed a further round of bugs!
Most impressively, Hypothesis found a complicated off-by-one error in a
condition with 4095 versus 4096 bytes of data - something that I would never
have found.
Long live Hypothesis! (Or at least, property-based testing).
-------------------------------------------
`Josh Bronson `_
-------------------------------------------
Adopting Hypothesis improved `bidict `_'s
test coverage and significantly increased our ability to make changes to
the code with confidence that correct behavior would be preserved.
Thank you, David, for the great testing tool.
--------------------------------------------
`Cory Benfield `_
--------------------------------------------
Hypothesis is the single most powerful tool in my toolbox for working with
algorithmic code, or any software that produces predictable output from a wide
range of sources. When using it with
`Priority `_, Hypothesis consistently found
errors in my assumptions and extremely subtle bugs that would have taken months
of real-world use to locate. In some cases, Hypothesis found subtle deviations
from the correct output of the algorithm that may never have been noticed at
all.
When it comes to validating the correctness of your tools, nothing comes close
to the thoroughness and power of Hypothesis.
-------------------------------------------
`Your name goes here `_
-------------------------------------------
I know there are many more, because I keep finding out about new people I'd never
even heard of using Hypothesis. If you're looking to way to give back to a tool you
love, adding your name here only takes a moment and would really help a lot. As per
instructions at the top, just send me a pull request and I'll add you to the list.
==================
Some more examples
==================
This is a collection of examples of how to use Hypothesis in interesting ways.
It's small for now but will grow over time.
All of these examples are designed to be run under `py.test`_ (`nose`_ should probably
work too).
----------------------------------
How not to sort by a partial order
----------------------------------
The following is an example that's been extracted and simplified from a real
bug that occurred in an earlier version of Hypothesis. The real bug was a lot
harder to find.
Suppose we've got the following type:
.. code:: python
class Node(object):
def __init__(self, label, value):
self.label = label
self.value = tuple(value)
def __repr__(self):
return "Node(%r, %r)" % (self.label, self.value)
def sorts_before(self, other):
if len(self.value) >= len(other.value):
return False
return other.value[:len(self.value)] == self.value
Each node is a label and a sequence of some data, and we have the relationship
sorts_before meaning the data of the left is an initial segment of the right.
So e.g. a node with value ``[1, 2]`` will sort before a node with value ``[1, 2, 3]``,
but neither of ``[1, 2]`` nor ``[1, 3]`` will sort before the other.
We have a list of nodes, and we want to topologically sort them with respect to
this ordering. That is, we want to arrange the list so that if ``x.sorts_before(y)``
then x appears earlier in the list than y. We naively think that the easiest way
to do this is to extend the partial order defined here to a total order by
breaking ties arbitrarily and then using a normal sorting algorithm. So we
define the following code:
.. code:: python
from functools import total_ordering
@total_ordering
class TopoKey(object):
def __init__(self, node):
self.value = node
def __lt__(self, other):
if self.value.sorts_before(other.value):
return True
if other.value.sorts_before(self.value):
return False
return self.value.label < other.value.label
def sort_nodes(xs):
xs.sort(key=TopoKey)
This takes the order defined by ``sorts_before`` and extends it by breaking ties by
comparing the node labels.
But now we want to test that it works.
First we write a function to verify that our desired outcome holds:
.. code:: python
def is_prefix_sorted(xs):
for i in range(len(xs)):
for j in range(i+1, len(xs)):
if xs[j].sorts_before(xs[i]):
return False
return True
This will return false if it ever finds a pair in the wrong order and
return true otherwise.
Given this function, what we want to do with Hypothesis is assert that for all
sequences of nodes, the result of calling ``sort_nodes`` on it is sorted.
First we need to define a strategy for Node:
.. code:: python
import hypothesis.strategies as s
NodeStrategy = s.builds(
Node,
s.integers(),
s.lists(s.booleans(), average_size=5, max_size=10))
We want to generate *short* lists of values so that there's a decent chance of
one being a prefix of the other (this is also why the choice of bool as the
elements). We then define a strategy which builds a node out of an integer and
one of those short lists of booleans.
We can now write a test:
.. code:: python
from hypothesis import given
@given(s.lists(NodeStrategy))
def test_sorting_nodes_is_prefix_sorted(xs):
sort_nodes(xs)
assert is_prefix_sorted(xs)
this immediately fails with the following example:
.. code:: python
[Node(0, (False, True)), Node(0, (True,)), Node(0, (False,))]
The reason for this is that because False is not a prefix of (True, True) nor vice
versa, as far as sorting is concerned the first two nodes are equal because they have equal labels.
This makes the whole order non-transitive and produces basically nonsense results.
But this is pretty unsatisfying. It only works because they have the same label. Perhaps
we actually wanted our labels to be unique. Let's change the test to do that.
.. code:: python
def deduplicate_nodes_by_label(nodes):
table = {}
for node in nodes:
table[node.label] = node
return list(table.values())
NodeSet = s.lists(NodeStrategy).map(deduplicate_nodes_by_label)
We define a function to deduplicate nodes by labels, and then map that over a strategy
for lists of nodes to give us a strategy for lists of nodes with unique labels. We can
now rewrite the test to use that:
.. code:: python
@given(NodeSet)
def test_sorting_nodes_is_prefix_sorted(xs):
sort_nodes(xs)
assert is_prefix_sorted(xs)
Hypothesis quickly gives us an example of this *still* being wrong:
.. code:: python
[Node(0, (False,)), Node(-1, (True,)), Node(-2, (False, False))]
Now this is a more interesting example. None of the nodes will sort equal. What is
happening here is that the first node is strictly less than the last node because
(False,) is a prefix of (False, False). This is in turn strictly less than the middle
node because neither is a prefix of the other and -2 < -1. The middle node is then
less than the first node because -1 < 0.
So, convinced that our implementation is broken, we write a better one:
.. code:: python
def sort_nodes(xs):
for i in range(1, len(xs)):
j = i - 1
while j >= 0:
if xs[j].sorts_before(xs[j+1]):
break
xs[j], xs[j+1] = xs[j+1], xs[j]
j -= 1
This is just insertion sort slightly modified - we swap a node backwards until swapping
it further would violate the order constraints. The reason this works is because our
order is a partial order already (this wouldn't produce a valid result for a general
topological sorting - you need the transitivity).
We now run our test again and it passes, telling us that this time we've successfully
managed to sort some nodes without getting it completely wrong. Go us.
--------------------
Time zone arithmetic
--------------------
This is an example of some tests for pytz which check that various timezone
conversions behave as you would expect them to. These tests should all pass,
and are mostly a demonstration of some useful sorts of thing to test with
Hypothesis, and how the hypothesis-datetime extra package works.
.. code:: python
from hypothesis import given, settings
from hypothesis.extra.datetime import datetimes
from hypothesis.strategies import sampled_from
import pytz
from datetime import timedelta
ALL_TIMEZONES = list(map(pytz.timezone, pytz.all_timezones))
# There are a lot of fiddly edge cases in dates, so we run a larger number of
# examples just to be sure
with settings(max_examples=1000):
@given(
datetimes(), # datetimes generated are non-naive by default
sampled_from(ALL_TIMEZONES), sampled_from(ALL_TIMEZONES),
)
def test_convert_via_intermediary(dt, tz1, tz2):
"""
Test that converting between timezones is not affected by a detour via
another timezone.
"""
assert dt.astimezone(tz1).astimezone(tz2) == dt.astimezone(tz2)
@given(
datetimes(timezones=[]), # Now generate naive datetimes
sampled_from(ALL_TIMEZONES), sampled_from(ALL_TIMEZONES),
)
def test_convert_to_and_fro(dt, tz1, tz2):
"""
If we convert to a new timezone and back to the old one this should
leave the result unchanged.
"""
dt = tz1.localize(dt)
assert dt == dt.astimezone(tz2).astimezone(tz1)
@given(
datetimes(),
sampled_from(ALL_TIMEZONES),
)
def test_adding_an_hour_commutes(dt, tz):
"""
When converting between timezones it shouldn't matter if we add an hour
here or add an hour there.
"""
an_hour = timedelta(hours=1)
assert (dt + an_hour).astimezone(tz) == dt.astimezone(tz) + an_hour
@given(
datetimes(),
sampled_from(ALL_TIMEZONES),
)
def test_adding_a_day_commutes(dt, tz):
"""
When converting between timezones it shouldn't matter if we add a day
here or add a day there.
"""
a_day = timedelta(days=1)
assert (dt + a_day).astimezone(tz) == dt.astimezone(tz) + a_day
-------------------
Condorcet's Paradox
-------------------
A classic paradox in voting theory, called Condorcet's paradox, is that
majority preferences are not transitive. That is, there is a population
and a set of three candidates A, B and C such that the majority of the
population prefer A to B, B to C and C to A.
Wouldn't it be neat if we could use Hypothesis to provide an example of this?
Well as you can probably guess from the presence of this section, we can! This
is slightly surprising because it's not really obvious how we would generate an
election given the types that Hypothesis knows about.
The trick here turns out to be twofold:
1. We can generate a type that is *much larger* than an election, extract an election out of that, and rely on minimization to throw away all the extraneous detail.
2. We can use assume and rely on Hypothesis's adaptive exploration to focus on the examples that turn out to generate interesting elections
Without further ado, here is the code:
.. code:: python
from hypothesis import given, assume
from hypothesis.strategies import integers, lists
from collections import Counter
def candidates(votes):
return {candidate for vote in votes for candidate in vote}
def build_election(votes):
"""
Given a list of lists we extract an election out of this. We do this
in two phases:
1. First of all we work out the full set of candidates present in all
votes and throw away any votes that do not have that whole set.
2. We then take each vote and make it unique, keeping only the first
instance of any candidate.
This gives us a list of total orderings of some set. It will usually
be a lot smaller than the starting list, but that's OK.
"""
all_candidates = candidates(votes)
votes = list(filter(lambda v: set(v) == all_candidates, votes))
if not votes:
return []
rebuilt_votes = []
for vote in votes:
rv = []
for v in vote:
if v not in rv:
rv.append(v)
assert len(rv) == len(all_candidates)
rebuilt_votes.append(rv)
return rebuilt_votes
@given(lists(lists(integers(min_value=1, max_value=5))))
def test_elections_are_transitive(election):
election = build_election(election)
# Small elections are unlikely to be interesting
assume(len(election) >= 3)
all_candidates = candidates(election)
# Elections with fewer than three candidates certainly can't exhibit
# intransitivity
assume(len(all_candidates) >= 3)
# Now we check if the election is transitive
# First calculate the pairwise counts of how many prefer each candidate
# to the other
counts = Counter()
for vote in election:
for i in range(len(vote)):
for j in range(i+1, len(vote)):
counts[(vote[i], vote[j])] += 1
# Now look at which pairs of candidates one has a majority over the
# other and store that.
graph = {}
all_candidates = candidates(election)
for i in all_candidates:
for j in all_candidates:
if counts[(i, j)] > counts[(j, i)]:
graph.setdefault(i, set()).add(j)
# Now for each triple assert that it is transitive.
for x in all_candidates:
for y in graph.get(x, ()):
for z in graph.get(y, ()):
assert x not in graph.get(z, ())
The example Hypothesis gives me on my first run (your mileage may of course
vary) is:
.. code:: python
[[3, 1, 4], [4, 3, 1], [1, 4, 3]]
Which does indeed do the job: The majority (votes 0 and 1) prefer 3 to 1, the
majority (votes 0 and 2) prefer 1 to 4 and the majority (votes 1 and 2) prefer
4 to 3. This is in fact basically the canonical example of the voting paradox,
modulo variations on the names of candidates.
-------------------
Fuzzing an HTTP API
-------------------
Hypothesis's support for testing HTTP services is somewhat nascent. There are
plans for some fully featured things around this, but right now they're
probably quite far down the line.
But you can do a lot yourself without any explicit support! Here's a script
I wrote to throw random data against the API for an entirely fictitious service
called Waspfinder (this is only lightly obfuscated and you can easily figure
out who I'm actually talking about, but I don't want you to run this code and
hammer their API without their permission).
All this does is use Hypothesis to generate random JSON data matching the
format their API asks for and check for 500 errors. More advanced tests which
then use the result and go on to do other things are definitely also possible.
.. code:: python
import unittest
from hypothesis import given, assume, settings
from collections import namedtuple
import requests
import os
import random
import time
import math
from hypothesis.strategies import one_of, sampled_from, lists
# These tests will be quite slow because we have to talk to an external
# service. Also we'll put in a sleep between calls so as to not hammer it.
# As a result we reduce the number of test cases and turn off the timeout.
settings.default.max_examples = 100
settings.default.timeout = -1
Goal = namedtuple("Goal", ("slug",))
# We just pass in our API credentials via environment variables.
waspfinder_token = os.getenv('WASPFINDER_TOKEN')
waspfinder_user = os.getenv('WASPFINDER_USER')
assert waspfinder_token is not None
assert waspfinder_user is not None
GoalData = {
'title': str,
'goal_type': sampled_from((
"hustler", "biker", "gainer", "fatloser", "inboxer",
"drinker", "custom")),
'goaldate': one_of((None, float)),
'goalval': one_of((None, float)),
'rate': one_of((None, float)),
'initval': float,
'panic': float,
'secret': bool,
'datapublic': bool,
}
needs2 = ['goaldate', 'goalval', 'rate']
class WaspfinderTest(unittest.TestCase):
@given(GoalData)
def test_create_goal_dry_run(self, data):
# We want slug to be unique for each run so that multiple test runs
# don't interfere with each other. If for some reason some slugs trigger
# an error and others don't we'll get a Flaky error, but that's OK.
slug = hex(random.getrandbits(32))[2:]
# Use assume to guide us through validation we know about, otherwise
# we'll spend a lot of time generating boring examples.
# Title must not be empty
assume(data["title"])
# Exactly two of these values should be not None. The other will be
# inferred by the API.
assume(len([1 for k in needs2 if data[k] is not None]) == 2)
for v in data.values():
if isinstance(v, float):
assume(not math.isnan(v))
data["slug"] = slug
# The API nicely supports a dry run option, which means we don't have
# to worry about the user account being spammed with lots of fake goals
# Otherwise we would have to make sure we cleaned up after ourselves
# in this test.
data["dryrun"] = True
data["auth_token"] = waspfinder_token
for d, v in data.items():
if v is None:
data[d] = "null"
else:
data[d] = str(v)
result = requests.post(
"https://waspfinder.example.com/api/v1/users/"
"%s/goals.json" % (waspfinder_user,), data=data)
# Let's not hammer the API too badly. This will of course make the
# tests even slower than they otherwise would have been, but that's
# life.
time.sleep(1.0)
# For the moment all we're testing is that this doesn't generate an
# internal error. If we didn't use the dry run option we could have
# then tried doing more with the result, but this is a good start.
self.assertNotEqual(result.status_code, 500)
if __name__ == '__main__':
unittest.main()
.. _py.test: http://pytest.org/
.. _nose: https://nose.readthedocs.org/
===================
Additional packages
===================
Hypothesis itself does not have any dependencies, but there are some packages that
need additional things installed in order to work.
You can install these dependencies using the setuptools extra feature as e.g.
``pip install hypothesis[django]``. This will check installation of compatible versions.
You can also just install hypothesis into a project using them, ignore the version
constraints, and hope for the best.
In general "Which version is Hypothesis compatible with?" is a hard question to answer
and even harder to regularly test. Hypothesis is always tested against the latest
compatible version and each package will note the expected compatibility range. If
you run into a bug with any of these please specify the dependency version.
--------------------
hypothesis[datetime]
--------------------
As might be expected, this provides strategies for generating instances
of objects from the ``datetime`` module: ``datetime``\s, ``date``\s, and
``time``\s. It depends on ``pytz`` to work.
It should work with just about any version of ``pytz``. ``pytz`` has a very
stable API and Hypothesis works around a bug or two in older versions.
It lives in the ``hypothesis.extra.datetime`` package.
.. method:: datetimes(allow_naive=None, timezones=None, min_year=None, \
max_year=None)
This strategy generates ``datetime`` objects. For example:
.. code-block:: pycon
>>> from hypothesis.extra.datetime import datetimes
>>> datetimes().example()
datetime.datetime(1705, 1, 20, 0, 32, 0, 973139, tzinfo=<DstTzInfo ...>)
>>> datetimes().example()
datetime.datetime(7274, 6, 9, 23, 0, 31, 75498, tzinfo=<DstTzInfo ...>)
>>> datetimes(min_year=2001, max_year=2010).example()
datetime.datetime(2010, 7, 7, 0, 15, 0, 614034, tzinfo=<DstTzInfo ...>)
>>> datetimes(min_year=2001, max_year=2010).example()
datetime.datetime(2006, 9, 26, 22, 0, 0, 220365, tzinfo=<DstTzInfo ...>)
>>> import pytz
>>> pytz.all_timezones[:3]
['Africa/Abidjan', 'Africa/Accra', 'Africa/Addis_Ababa']
>>> datetimes(timezones=pytz.all_timezones[:3]).example()
datetime.datetime(6257, 8, 21, 13, 6, 24, 8751, tzinfo=)
>>> datetimes(timezones=pytz.all_timezones[:3]).example()
datetime.datetime(7851, 2, 3, 0, 0, 0, 767400, tzinfo=)
>>> datetimes(timezones=pytz.all_timezones[:3]).example()
datetime.datetime(8262, 6, 22, 16, 0, 0, 154235, tzinfo=)
If the set of timezones is empty you will get a naive datetime:
.. code-block:: pycon
>>> datetimes(timezones=[]).example()
datetime.datetime(918, 11, 26, 2, 0, 35, 916439)
You can also explicitly get a mix of naive and non-naive datetimes if you
want:
.. code-block:: pycon
>>> datetimes(allow_naive=True).example()
datetime.datetime(2433, 3, 20, 0, 0, 44, 460383, tzinfo=)
>>> datetimes(allow_naive=True).example()
datetime.datetime(7003, 1, 22, 0, 0, 52, 401259)
.. method:: dates(min_year=None, max_year=None)
This strategy generates ``date`` objects. For example:
.. code-block:: pycon
>>> from hypothesis.extra.datetime import dates
>>> dates().example()
datetime.date(1687, 3, 23)
>>> dates().example()
datetime.date(9565, 5, 2)
Again, you can restrict the range with the ``min_year`` and ``max_year``
arguments.
.. method:: times()
This strategy generates ``time`` objects. For example:
.. code-block:: pycon
>>> from hypothesis.extra.datetime import times
>>> times().example()
datetime.time(0, 15, 55, 188712)
>>> times().example()
datetime.time(9, 0, 47, 959374)
-----------------------
hypothesis[fakefactory]
-----------------------
`Fake-factory `_ is another Python
library for data generation. hypothesis.extra.fakefactory is a package which
lets you use fake-factory generators to parametrize tests.
The fake-factory API is extremely unstable, even between patch releases, and
Hypothesis's support for it is unlikely to work with anything except the exact
version it has been tested against.
hypothesis.extra.fakefactory defines a function fake_factory which returns a
strategy for producing text data from any FakeFactory provider.
So for example the following will parametrize a test by an email address:
.. code-block:: pycon
>>> fake_factory('email').example()
'tnader@prosacco.info'
>>> fake_factory('name').example()
'Zbyněk Černý CSc.'
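And a minimal sketch of using that same strategy inside a test (the property being checked is only illustrative):
.. code-block:: python
from hypothesis import given
from hypothesis.extra.fakefactory import fake_factory

@given(fake_factory('email'))
def test_generated_addresses_contain_an_at_sign(email):
    assert '@' in email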
You can explicitly specify the locale (otherwise it uses any of the available
locales), either as a single locale or as several:
.. code-block:: pycon
>>> fake_factory('name', locale='en_GB').example()
'Antione Gerlach'
>>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
'Miloš Šťastný'
>>> fake_factory('name', locales=['en_GB', 'cs_CZ']).example()
'Harm Sanford'
If you want to use your own FakeFactory providers you can do that too, passing them
in as a providers argument:
.. code-block:: pycon
>>> from faker.providers import BaseProvider
>>> class KittenProvider(BaseProvider):
... def meows(self):
... return 'meow %d' % (self.random_number(digits=10),)
...
>>> fake_factory('meows', providers=[KittenProvider]).example()
'meow 9139348419'
Generally you probably shouldn't do this unless you're reusing a provider you
already have - Hypothesis's facilities for strategy generation are much more
powerful and easier to use. Consider using something like BasicStrategy instead
if you want to write a strategy from scratch. This is only here to provide easy
reuse of things you already have.
------------------
hypothesis[django]
------------------
hypothesis.extra.django adds support for testing your Django models with Hypothesis.
It should be compatible with any Django since 1.7, but is only tested extensively
against 1.8.
It's large enough that it is :doc:`documented elsewhere `.
-----------------
hypothesis-pytest
-----------------
hypothesis-pytest is actually available as a separate package that is installed as
hypothesis-pytest rather than hypothesis[pytest]. This may change in future but the
package will remain for compatibility reasons if it does.
hypothesis-pytest is the world's most basic pytest plugin. Install it to get
slightly better integrated example reporting when using @given and running
under pytest.
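Installation is the usual pip invocation, for example:
.. code-block:: bash
pip install hypothesis-pytest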
It can also load :ref:`settings Profiles `.
=============
Health checks
=============
Hypothesis tries to detect common mistakes and things that will cause difficulty
at run time in the form of a number of 'health checks'.
These include detecting and warning about:
* Strategies with very slow data generation
* Strategies which filter out too much
* Recursive strategies which branch too much
* Use of the global random module
If any of these scenarios are detected, Hypothesis will emit a warning about them.
The general goal of these health checks is to warn you about things that you are doing that might
appear to work but will either cause Hypothesis to not work correctly or to perform badly.
These health checks are affected by two settings:
* If the strict setting is set to True, these will be exceptions instead of warnings.
* If the perform_health_check setting is set to False, these health checks will be skipped entirely. This is not
recommended.
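For example, a minimal sketch of adjusting those two settings on individual tests (the test bodies are only placeholders):
.. code:: python
from hypothesis import given, settings
from hypothesis.strategies import integers

@given(integers())
@settings(strict=True)
def test_where_health_check_warnings_become_errors(x):
    pass

@given(integers())
@settings(perform_health_check=False)
def test_where_health_checks_are_skipped(x):
    pass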
======================
Welcome to Hypothesis!
======================
`Hypothesis `_ is a Python library for
creating unit tests which are simpler to write and more powerful when run,
finding edge cases in your code you wouldn't have thought to look for. It is
stable, powerful and easy to add to any existing test suite.
It works by letting you write tests that assert that something should be true
for every case, not just the ones you happen to think of.
Think of a normal unit test as being something like the following:
1. Set up some data.
2. Perform some operations on the data.
3. Assert something about the result.
Hypothesis lets you write tests which instead look like this:
1. For all data matching some specification.
2. Perform some operations on the data.
3. Assert something about the result.
This is often called property based testing, and was popularised by the
Haskell library `Quickcheck `_.
It works by generating random data matching your specification and checking
that your guarantee still holds in that case. If it finds an example where it doesn't,
it takes that example and cuts it down to size, simplifying it until it finds a
much smaller example that still causes the problem. It then saves that example
for later, so that once it has found a problem with your code it will not forget
it in the future.
Writing tests of this form usually consists of deciding on guarantees that
your code should make - properties that should always hold true,
regardless of what the world throws at you. Examples of such guarantees
might be:
* Your code shouldn't throw an exception, or should only throw a particular type of exception (this works particularly well if you have a lot of internal assertions).
* If you delete an object, it is no longer visible.
* If you serialize and then deserialize a value, then you get the same value back.
Now you know the basics of what Hypothesis does, the rest of this
documentation will take you through how and why. It's divided into a
number of sections, which you can see in the sidebar (or the
menu at the top if you're on mobile), but you probably want to begin with
the :doc:`Quick start guide `, which will give you a worked
example of how to use Hypothesis and a detailed outline
of the things you need to know to begin testing your code with it.
.. toctree::
:maxdepth: 1
:hidden:
quickstart
django
details
settings
data
extras
healthchecks
database
stateful
supported
examples
community
manifesto
endorsements
usage
changes
development
support
packaging
=========================
The Purpose of Hypothesis
=========================
What is Hypothesis for?
From the perspective of a user, the purpose of Hypothesis is to make it easier for
you to write better tests.
From my perspective as the author, that is of course also a purpose of Hypothesis,
but (if you will permit me to indulge in a touch of megalomania for a moment), the
larger purpose of Hypothesis is to drag the world kicking and screaming into a new
and terrifying age of high quality software.
Software is, as they say, eating the world. Software is also `terrible`_. It's buggy,
insecure and generally poorly thought out. This combination is clearly a recipe for
disaster.
And the state of software testing is even worse. Although it's fairly uncontroversial
at this point that you *should* be testing your code, can you really say with a straight
face that most projects you've worked on are adequately tested?
A lot of the problem here is that it's too hard to write good tests. Your tests encode
exactly the same assumptions and fallacies that you had when you wrote the code, so they
miss exactly the same bugs that you missed when you wrote the code.
Meanwhile, there are all sorts of tools for making testing better that are basically
unused. The original Quickcheck is from *1999* and the majority of developers have
not even heard of it, let alone used it. There are a bunch of half-baked implementations
for most languages, but very few of them are worth using.
The goal of Hypothesis is to bring advanced testing techniques to the masses, and to
provide an implementation that is so high quality that it is easier to use them than
it is not to use them. Where I can, I will beg, borrow and steal every good idea
I can find that someone has had to make software testing better. Where I can't, I will
invent new ones.
Quickcheck is the start, but I also plan to integrate ideas from fuzz testing (a
planned future feature is to use coverage information to drive example selection, and
the example saving database is already inspired by the workflows people use for fuzz
testing), and am open to and actively seeking out other suggestions and ideas.
The plan is to treat the social problem of people not using these ideas as a bug to
which there is a technical solution: Does property-based testing not match your workflow?
That's a bug, let's fix it by figuring out how to integrate Hypothesis into it.
Too hard to generate custom data for your application? That's a bug. Let's fix it by
figuring out how to make it easier, or how to take something you're already using to
specify your data and derive a generator from that automatically. Find the explanations
of these advanced ideas hopelessly obtuse and hard to follow? That's a bug. Let's provide
you with an easy API that lets you test your code better without a PhD in software
verification.
Grand ambitions, I know, and I expect ultimately the reality will be somewhat less
grand, but so far in about three months of development, Hypothesis has become the most
solid implementation of Quickcheck ever seen in a mainstream language (as long as we don't
count Scala as mainstream yet), and at the same time managed to
significantly push forward the state of the art, so I think there's
reason to be optimistic.
.. _terrible: https://www.youtube.com/watch?v=csyL9EC0S0c
====================
Packaging Guidelines
====================
Downstream packagers often want to package Hypothesis. Here are some guidelines.
The primary guideline is this: If you are not prepared to keep up with the Hypothesis release schedule,
don't. You will annoy me and are doing your users a disservice.
Hypothesis has quite a frequent release schedule. It's very rare that it goes a month without a release,
and there are often multiple releases in a given month.
Many people not only fail to follow the release schedule but also seem inclined to package versions
which are months out of date even at the point of packaging. This will cause me to be very annoyed with you and
you will consequently get very little co-operation from me.
If you *are* prepared to keep up with the Hypothesis release schedule, the rest of this document outlines
some information you might find useful.
----------------
Release tarballs
----------------
These are available from `the GitHub releases page `_. The
tarballs on pypi are intended for installation from a Python tool such as pip or easy_install and should not
be considered complete releases. Requests to include additional files in them will not be granted. Their absence
is not a bug.
------------
Dependencies
------------
~~~~~~~~~~~~~~~
Python versions
~~~~~~~~~~~~~~~
Hypothesis is designed to work with a range of Python versions. Currently supported are:
* pypy-2.6.1 (earlier versions of pypy *may* work)
* CPython 2.6.x
* CPython 2.7.x
* CPython 3.3.x
* CPython 3.4.x
* CPython 3.5.x
If you feel the need to have separate Python 3 and Python 2 packages you can, but Hypothesis works unmodified
on either.
~~~~~~~~~~~~~~~~~~~~~~
Other Python libraries
~~~~~~~~~~~~~~~~~~~~~~
Hypothesis has *optional* dependencies on the following libraries:
* pytz (almost any version should work)
* fake-factory (0.5.2 or 0.5.3)
* Django, 1.7 through 1.9 (This requires fake-factory to be installed)
* numpy, 1.10.x (earlier versions will probably work fine)
* py.test (2.7.0 or greater). This is a mandatory dependency for testing Hypothesis itself but optional for users.
The way this works when installing Hypothesis normally is that these features become available if the relevant
library is installed.
------------------
Testing Hypothesis
------------------
If you want to test Hypothesis as part of your packaging you will probably not want to use the mechanisms
Hypothesis itself uses for running its tests, because it has a lot of logic for installing and testing against
different versions of Python.
The tests must be run with py.test. A version more recent than 2.7.0 is strongly encouraged, but it may work
with earlier versions (however py.test specific logic is disabled before 2.7.0).
Tests are organised into a number of top level subdirectories of the tests/ directory.
* cover: This is a small, reasonably fast, collection of tests designed to give 100% coverage of all but a select
subset of the files when run under Python 3.
* nocover: This is a much slower collection of tests that should not be run under coverage for performance reasons.
* py2: Tests that can only be run under Python 2
* py3: Tests that can only be run under Python 3
* datetime: This tests the subset of Hypothesis that depends on pytz
* fakefactory: This tests the subset of Hypothesis that depends on fakefactory.
* django: This tests the subset of Hypothesis that depends on django (this also depends on fakefactory).
An example invocation for running the coverage subset of these tests:
.. code-block:: bash
python setup.py install
pip install pytest # you will probably want to use your own packaging here
python -m pytest tests/cover
--------
Examples
--------
* `arch linux `_
* `gentoo `_ (slightly behind at the time of this writing)
Other distros appear to really like Hypothesis 1.11. I do not encourage following their example.
=================
Quick start guide
=================
This document should talk you through everything you need to get started with
Hypothesis.
----------
An example
----------
Suppose we've written a `run length encoding
`_ system and we want to test
it out.
We have the following code which I took straight from the
`Rosetta Code `_ wiki (OK, I
removed some commented out code and fixed the formatting, but there are no
functional modifications):
.. code:: python
def encode(input_string):
count = 1
prev = ''
lst = []
for character in input_string:
if character != prev:
if prev:
entry = (prev, count)
lst.append(entry)
count = 1
prev = character
else:
count += 1
else:
entry = (character, count)
lst.append(entry)
return lst
def decode(lst):
q = ''
for character, count in lst:
q += character * count
return q
We want to write a test for this that will check some invariant of these
functions.
The invariant one tends to try when you've got this sort of encoding /
decoding is that if you encode something and then decode it then you get the same
value back.
Let's see how you'd do that with Hypothesis:
.. code:: python
from hypothesis import given
from hypothesis.strategies import text
@given(text())
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
(For this example we'll just let pytest discover and run the test. We'll cover
other ways you could have run it later).
The text function returns what Hypothesis calls a search strategy. An object
with methods that describe how to generate and simplify certain kinds of
values. The @given decorator then takes our test function and turns it into a
parametrized one which, when called, will run the test function over a wide
range of matching data from that strategy.
Anyway, this test immediately finds a bug in the code:
.. code::
Falsifying example: test_decode_inverts_encode(s='')
UnboundLocalError: local variable 'character' referenced before assignment
Hypothesis correctly points out that this code is simply wrong if called on
an empty string.
If we fix that by just adding the following code to the beginning of the function
then Hypothesis tells us the code is correct (by doing nothing as you'd expect
a passing test to).
.. code:: python
if not input_string:
return []
If we wanted to make sure this example was always checked we could add it in
explicitly:
.. code:: python
from hypothesis import given, example
from hypothesis.strategies import text
@given(text())
@example('')
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
You don't have to do this, but it can be useful both for clarity purposes and
for reliably hitting hard to find examples. Also in local development
Hypothesis will just remember and reuse the examples anyway, but there's not
currently a very good workflow for sharing those in your CI.
It's also worth noting that both example and given support keyword arguments as
well as positional. The following would have worked just as well:
.. code:: python
@given(s=text())
@example(s='')
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
Suppose we had a more interesting bug and forgot to reset the count
each time. Say we missed a line in our ``encode`` method:
.. code:: python
def encode(input_string):
count = 1
prev = ''
lst = []
for character in input_string:
if character != prev:
if prev:
entry = (prev, count)
lst.append(entry)
# count = 1 # Missing reset operation
prev = character
else:
count += 1
else:
entry = (character, count)
lst.append(entry)
return lst
Hypothesis quickly informs us of the following example:
.. code::
Falsifying example: test_decode_inverts_encode(s='001')
Note that the example provided is really quite simple. Hypothesis doesn't just
find *any* counter-example to your tests, it knows how to simplify the examples
it finds to produce small easy to understand ones. In this case, two identical
values are enough to set the count to a number different from one, followed by
another distinct value which should have reset the count but in this case
didn't.
The examples Hypothesis provides are valid Python code you can run. Any
arguments that you explicitly provide when calling the function are not
generated by Hypothesis, and if you explicitly provide *all* the arguments
Hypothesis will just call the underlying function the once rather than
running it multiple times.
----------
Installing
----------
Hypothesis is `available on pypi as "hypothesis"
`_. You can install it with:
.. code:: bash
pip install hypothesis
or
.. code:: bash
easy_install hypothesis
If you want to install directly from the source code (e.g. because you want to
make changes and install the changed version) you can do this with:
.. code:: bash
python setup.py install
You should probably run the tests first to make sure nothing is broken. You can
do this with:
.. code:: bash
python setup.py test
Note that if they're not already installed this will try to install the test
dependencies.
You may wish to do all of this in a `virtualenv `_. For example:
.. code:: bash
virtualenv venv
source venv/bin/activate
pip install hypothesis
Will create an isolated environment for you to try hypothesis out in without
affecting your system installed packages.
-------------
Running tests
-------------
In our example above we just let pytest discover and run our tests, but we could
also have run it explicitly ourselves:
.. code:: python
if __name__ == '__main__':
test_decode_inverts_encode()
We could also have done this as a unittest TestCase:
.. code:: python
import unittest
class TestEncoding(unittest.TestCase):
@given(text())
def test_decode_inverts_encode(self, s):
self.assertEqual(decode(encode(s)), s)
if __name__ == '__main__':
unittest.main()
A detail: This works because Hypothesis ignores any arguments it hasn't been
told to provide (positional arguments start from the right), so the self
argument to the test is simply ignored and works as normal. This also means
that Hypothesis will play nicely with other ways of parameterizing tests, e.g.
it works fine if you use pytest fixtures for some arguments and Hypothesis for
others.
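As a hedged sketch of that last point (the fixture and test names here are purely
illustrative), a py.test fixture can supply one argument while @given supplies the rest:

.. code:: python

    import pytest

    from hypothesis import given
    from hypothesis.strategies import integers

    @pytest.fixture
    def prefix():
        return 'value: '

    @given(number=integers())
    def test_mixes_fixture_and_strategy(prefix, number):
        # 'prefix' comes from the py.test fixture, 'number' from Hypothesis.
        # A function scoped fixture runs once for the whole test, not once
        # per generated example.
        assert (prefix + str(number)).startswith(prefix)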
-------------
Writing tests
-------------
A test in Hypothesis consists of two parts: A function that looks like a normal
test in your test framework of choice but with some additional arguments, and
a :func:`@given ` decorator that specifies
how to provide those arguments.
Here are some other examples of how you could use that:
.. code:: python
from hypothesis import given
import hypothesis.strategies as st
@given(st.integers(), st.integers())
def test_ints_are_commutative(x, y):
assert x + y == y + x
@given(x=st.integers(), y=st.integers())
def test_ints_cancel(x, y):
assert (x + y) - y == x
@given(st.lists(st.integers()))
def test_reversing_twice_gives_same_list(xs):
# This will generate lists of arbitrary length (usually between 0 and
# 100 elements) whose elements are integers.
ys = list(xs)
ys.reverse()
ys.reverse()
assert xs == ys
@given(st.tuples(st.booleans(), st.text()))
def test_look_tuples_work_too(t):
# A tuple is generated as the one you provided, with the corresponding
# types in those positions.
assert len(t) == 2
assert isinstance(t[0], bool)
assert isinstance(t[1], str)
Note that as we saw in the above example you can pass arguments to :func:`@given `
either as positional or as keywords.
--------------
Where to start
--------------
You should now know enough of the basics to write some tests for your code
using Hypothesis. The best way to learn is by doing, so go have a try.
If you're stuck for ideas for how to use this sort of test for your code, here
are some good starting points:
1. Try just calling functions with appropriate random data and see if they
crash. You may be surprised how often this works. e.g. note that the first
bug we found in the encoding example didn't even get as far as our
assertion: It crashed because it couldn't handle the data we gave it, not
because it did the wrong thing.
2. Look for duplication in your tests. Are there any cases where you're testing
the same thing with multiple different examples? Can you generalise that to
a single test using Hypothesis?
3. `This piece is designed for an F# implementation
`_, but
is still very good advice which you may find helps give you good ideas for
using Hypothesis.
If you have any trouble getting started, don't feel shy about
:doc:`asking for help `.
hypothesis-3.0.1/docs/settings.rst 0000664 0000000 0000000 00000017243 12661275660 0017256 0 ustar 00root root 0000000 0000000 ========
settings
========
Hypothesis tries to have good defaults for its behaviour, but sometimes that's
not enough and you need to tweak it.
The mechanism for doing this is the :class:`~hypothesis.settings` object.
You can set up a @given based test to use this by adding a settings decorator
to the :func:`@given ` invocation, as follows:
.. code:: python
from hypothesis import given, settings
from hypothesis.strategies import integers
@given(integers())
@settings(max_examples=500)
def test_this_thoroughly(x):
pass
This uses a :class:`~hypothesis.settings` object which causes the test to receive a much larger
set of examples than normal.
This may be applied either before or after the given and the results are
the same. The following is exactly equivalent:
.. code:: python
from hypothesis import given, settings
from hypothesis.strategies import integers
@settings(max_examples=500)
@given(integers())
def test_this_thoroughly(x):
pass
------------------
Available settings
------------------
.. module:: hypothesis
.. autoclass:: settings
:members: max_examples, max_iterations, min_satisfying_examples,
max_shrinks, timeout, strict, database_file, stateful_step_count,
database
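As a quick illustration (a hedged sketch; the particular values are arbitrary),
several of these settings can be combined on a single test:

.. code:: python

    from hypothesis import given, settings
    from hypothesis.strategies import integers, lists

    @given(lists(integers()))
    @settings(max_examples=50, timeout=10, min_satisfying_examples=1)
    def test_with_tuned_settings(xs):
        # Reversing a list twice should give the original list back.
        assert list(reversed(list(reversed(xs)))) == xs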
.. _verbose-output:
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Seeing intermediate results
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To see what's going on while Hypothesis runs your tests, you can turn
up the verbosity setting. This works with both :func:`~hypothesis.core.find` and :func:`@given `.
(The following examples are somewhat manually truncated because the results
of verbose output are, well, verbose, but they should convey the idea).
.. code:: pycon
>>> from hypothesis import find, settings, Verbosity
>>> from hypothesis.strategies import lists, booleans
>>> find(lists(booleans()), any, settings=settings(verbosity=Verbosity.verbose))
Found satisfying example [True, True, ...
Shrunk example to [False, False, False, True, ...
Shrunk example to [False, False, True, False, False, ...
Shrunk example to [False, True, False, True, True, ...
Shrunk example to [True, True, True]
Shrunk example to [True, True]
Shrunk example to [True]
[True]
>>> from hypothesis import given
>>> from hypothesis.strategies import integers
>>> settings.default.verbosity = Verbosity.verbose
>>> @given(integers())
... def test_foo(x):
... assert x > 0
...
>>> test_foo()
Trying example: test_foo(x=-565872324465712963891750807252490657219)
Traceback (most recent call last):
...
AssertionError
Trying example: test_foo(x=565872324465712963891750807252490657219)
Trying example: test_foo(x=0)
Traceback (most recent call last):
...
AssertionError
Falsifying example: test_foo(x=0)
Traceback (most recent call last):
...
AssertionError
The four levels are quiet, normal, verbose and debug. normal is the default,
while in quiet Hypothesis will not print anything out, even the final
falsifying example. debug is basically verbose but a bit more so. You probably
don't want it.
You can also override the default by setting the environment variable
:envvar:`HYPOTHESIS_VERBOSITY_LEVEL` to the name of the level you want. So e.g.
setting ``HYPOTHESIS_VERBOSITY_LEVEL=verbose`` will run all your tests printing
intermediate results and errors.
-------------------------
Building settings objects
-------------------------
settings can be created by calling settings with any of the available settings
values. Any absent ones will be set to defaults:
.. code:: pycon
>>> from hypothesis import settings
>>> settings()
settings(average_list_length=25.0, database_file='/home/david/projects/hypothesis/.hypothesis/examples.db', derandomize=False, max_examples=200, max_iterations=1000, max_shrinks=500, min_satisfying_examples=5, stateful_step_count=50, strict=False, timeout=60, verbosity=Verbosity.normal)
>>> settings().max_examples
200
>>> settings(max_examples=10).max_examples
10
You can also copy settings off other settings:
.. code:: pycon
>>> s = settings(max_examples=10)
>>> t = settings(s, max_iterations=20)
>>> s.max_examples
10
>>> t.max_iterations
20
>>> s.max_iterations
1000
>>> s.max_shrinks
500
>>> t.max_shrinks
500
----------------
Default settings
----------------
At any given point in your program there is a current default settings,
available as settings.default. As well as being a settings object in its own
right, all newly created settings objects which are not explicitly based off
another settings are based off the default, so will inherit any values that are
not explicitly set from it.
You can change the defaults by using profiles (see next section), but you can
also override them locally by using a settings object as a :ref:`context manager `:
.. code:: pycon
>>> with settings(max_examples=150):
... print(settings.default.max_examples)
... print(settings().max_examples)
...
150
150
>>> settings().max_examples
200
Note that after the block exits the default is returned to normal.
You can use this by nesting test definitions inside the context:
.. code:: python
from hypothesis import given, settings
from hypothesis.strategies import integers
with settings(max_examples=500):
@given(integers())
def test_this_thoroughly(x):
pass
All settings objects created or tests defined inside the block will inherit their
defaults from the settings object used as the context. You can still override them
with custom defined settings of course.
Warning: If you define test functions which don't use @given inside a context
block, these will not use the enclosing settings. This is because the context
manager only affects the definition, not the execution of the function.
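As a hedged sketch of the difference (the assertion assumes the stock default of
200 examples shown earlier):

.. code:: python

    from hypothesis import given, settings
    from hypothesis.strategies import integers

    with settings(max_examples=500):
        @given(integers())
        def test_picks_up_block_settings(x):
            # Defined with @given inside the block, so it inherits
            # max_examples=500.
            pass

        def test_defined_but_not_given():
            # Only *defined* here; by the time the test runner executes it the
            # block has exited, so it sees the normal defaults again.
            assert settings.default.max_examples == 200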
.. _settings_profiles:
~~~~~~~~~~~~~~~~~
settings Profiles
~~~~~~~~~~~~~~~~~
Depending on your environment you may want different default settings.
For example: during development you may want to lower the number of examples
to speed up the tests. However, in a CI environment you may want more examples
so you are more likely to find bugs.
Hypothesis allows you to define different settings profiles. These profiles
can be loaded at any time.
Loading a profile changes the default settings but will not change the behavior
of tests that explicitly change the settings.
.. code:: pycon
>>> from hypothesis import settings
>>> settings.register_profile("ci", settings(max_examples=1000))
>>> settings().max_examples
200
>>> settings.load_profile("ci")
>>> settings().max_examples
1000
Instead of loading the profile and overriding the defaults you can retrieve profiles for
specific tests.
.. code:: pycon
>>> with settings.get_profile("ci"):
... print(settings().max_examples)
...
1000
Optionally, you may define an environment variable (``HYPOTHESIS_PROFILE`` in the
example below) to choose which profile to load.
This is the suggested pattern for running your tests on CI.
The code below should run in a ``conftest.py`` or any setup/initialization section of your test suite.
If this variable is not defined, the Hypothesis-defined defaults will be loaded.
.. code:: pycon
>>> import os
>>> from hypothesis import settings, Verbosity
>>> settings.register_profile("ci", settings(max_examples=1000))
>>> settings.register_profile("dev", settings(max_examples=10))
>>> settings.register_profile("debug", settings(max_examples=10, verbosity=Verbosity.verbose))
>>> settings.load_profile(os.getenv(u'HYPOTHESIS_PROFILE', 'default'))
If you are using the hypothesis pytest plugin and your profiles are registered
by your conftest you can load one with the command line option ``--hypothesis-profile``.
.. code:: bash
$ py.test tests --hypothesis-profile <profile-name>
hypothesis-3.0.1/docs/stateful.rst 0000664 0000000 0000000 00000033341 12661275660 0017242 0 ustar 00root root 0000000 0000000 ================
Stateful testing
================
Hypothesis offers support for a stateful style of test, where instead of
trying to produce a single data value that causes a specific test to fail, it
tries to generate a program that errors. In many ways, this sort of testing is
to classical property based testing as property based testing is to normal
example based testing.
The idea doesn't originate with Hypothesis, though Hypothesis's implementation
and approach is mostly not based on an existing implementation and should be
considered some mix of novel and independent reinventions.
This style of testing is useful both for programs which involve some sort
of mutable state and for complex APIs where there's no state per se but the
actions you perform involve e.g. taking data from one function and feeding it
into another.
The idea is that you teach Hypothesis how to interact with your program: Be it
a server, a python API, whatever. All you need is to be able to answer the
question "Given what I've done so far, what could I do now?". After that,
Hypothesis takes over and tries to find sequences of actions which cause a
test failure.
Right now the stateful testing is a bit new and experimental and should be
considered as a semi-public API: It may break between minor versions but won't
break between patch releases, and there are still some rough edges in the API
that will need to be filed off.
This shouldn't discourage you from using it. Although it's not as robust as the
rest of Hypothesis, it's still pretty robust and more importantly is extremely
powerful. I found a number of really subtle bugs in Hypothesis by turning the
stateful testing onto a subset of the Hypothesis API, and you likely will find
the same.
Enough preamble, let's see how to use it.
The first thing to note is that there are two levels of API: The low level
but more flexible API and the higher level rule based API which is both
easier to use and also produces a much better display of data due to its
greater structure. We'll start with the more structured one.
-------------------------
Rule based state machines
-------------------------
Rule based state machines are the ones you're most likely to want to use.
They're significantly more user friendly and should be good enough for most
things you'd want to do.
A rule based state machine is a collection of functions (possibly with side
effects) which may depend on both values that Hypothesis can generate and
also on values that have resulted from previous function calls.
You define a rule based state machine as follows:
.. code:: python
import unittest
from collections import namedtuple
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule
Leaf = namedtuple('Leaf', ('label',))
Split = namedtuple('Split', ('left', 'right'))
class BalancedTrees(RuleBasedStateMachine):
trees = Bundle('BinaryTree')
@rule(target=trees, x=st.integers())
def leaf(self, x):
return Leaf(x)
@rule(target=trees, left=trees, right=trees)
def split(self, left, right):
return Split(left, right)
@rule(tree=trees)
def check_balanced(self, tree):
if isinstance(tree, Leaf):
return
else:
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1
self.check_balanced(tree.left)
self.check_balanced(tree.right)
def size(self, tree):
if isinstance(tree, Leaf):
return 1
else:
return 1 + self.size(tree.left) + self.size(tree.right)
In this we declare a Bundle, which is a named collection of previously generated
values. We define two rules which put data onto this bundle - one which just
generates leaves with integer labels, the other of which takes two previously
generated values and returns a new one.
We can then integrate this into our test suite by getting a unittest TestCase
from it:
.. code:: python
TestTrees = BalancedTrees.TestCase
if __name__ == '__main__':
unittest.main()
(these will also be picked up by py.test if you prefer to use that). Running
this we get:
.. code:: bash
Step #1: v1 = leaf(x=0)
Step #2: v2 = split(left=v1, right=v1)
Step #3: v3 = split(left=v2, right=v1)
Step #4: check_balanced(tree=v3)
F
======================================================================
FAIL: runTest (hypothesis.stateful.BalancedTrees.TestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
(...)
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1
AssertionError
Note how it's printed out a very short program that will demonstrate the
problem.
...the problem of course being that we've not actually written any code to
balance this tree at *all*, so of course it's not balanced.
So let's balance some trees.
.. code:: python
from collections import namedtuple
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule
Leaf = namedtuple('Leaf', ('label',))
Split = namedtuple('Split', ('left', 'right'))
class BalancedTrees(RuleBasedStateMachine):
trees = Bundle('BinaryTree')
balanced_trees = Bundle('balanced BinaryTree')
@rule(target=trees, x=st.integers())
def leaf(self, x):
return Leaf(x)
@rule(target=trees, left=trees, right=trees)
def split(self, left, right):
return Split(left, right)
@rule(tree=balanced_trees)
def check_balanced(self, tree):
if isinstance(tree, Leaf):
return
else:
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1, \
repr(tree)
self.check_balanced(tree.left)
self.check_balanced(tree.right)
@rule(target=balanced_trees, tree=trees)
def balance_tree(self, tree):
return self.split_leaves(self.flatten(tree))
def size(self, tree):
if isinstance(tree, Leaf):
return 1
else:
return self.size(tree.left) + self.size(tree.right)
def flatten(self, tree):
if isinstance(tree, Leaf):
return (tree.label,)
else:
return self.flatten(tree.left) + self.flatten(tree.right)
def split_leaves(self, leaves):
assert leaves
if len(leaves) == 1:
return Leaf(leaves[0])
else:
mid = len(leaves) // 2
return Split(
self.split_leaves(leaves[:mid]),
self.split_leaves(leaves[mid:]),
)
We've now written a really noddy tree balancing implementation. This takes
trees and puts them into a new bundle of data, and we only assert that things
in the balanced_trees bundle are actually balanced.
If you run this it will sit there silently for a while (you can turn on
:ref:`verbose output ` to get slightly more information about
what's happening. debug will give you all the intermediate programs being run)
and then run, telling you your test has passed! Our balancing algorithm worked.
Now let's break it to make sure the test is still valid: if we change the split
to ``mid = max(len(leaves) // 3, 1)`` it should no longer balance, which gives
us the following counter-example:
.. code:: python
v1 = leaf(x=0)
v2 = split(left=v1, right=v1)
v3 = balance_tree(tree=v1)
v4 = split(left=v2, right=v2)
v5 = balance_tree(tree=v4)
check_balanced(tree=v5)
Note that the example could be shrunk further by deleting v3. Due to some
technical limitations, Hypothesis was unable to find that particular shrink.
In general it's rare for examples produced to be long, but they won't always be
minimal.
You can control the detailed behaviour with a settings object on the TestCase
(this is a normal hypothesis settings object using the defaults at the time
the TestCase class was first referenced). For example if you wanted to run
fewer examples with larger programs you could change the settings to:
.. code:: python
TestTrees.settings = settings(max_examples=100, stateful_step_count=100)
Which doubles the number of steps each program runs and halves the number of
runs relative to the defaults. settings.timeout will also be respected as usual.
Preconditions
-------------
While it's possible to use ``assume`` in RuleBasedStateMachine rules, if you
use it in only a few rules you can quickly run into a situation where few or
none of your rules pass their assumptions. Thus, Hypothesis provides a
``precondition`` decorator to avoid this problem. The ``precondition``
decorator is used on ``rule``-decorated functions, and must be given a function
that returns True or False based on the RuleBasedStateMachine instance.
.. code:: python
from hypothesis.stateful import RuleBasedStateMachine, rule, precondition
class NumberModifier(RuleBasedStateMachine):
num = 0
@rule()
def add_one(self):
self.num += 1
@precondition(lambda self: self.num != 0)
@rule()
def divide_with_one(self):
self.num = 1 / self.num
By using ``precondition`` here instead of ``assume``, Hypothesis can filter the
inapplicable rules before running them. This makes it much more likely that a
useful sequence of steps will be generated.
Note that currently preconditions can't access bundles; if you need to use
preconditions, you should store relevant data on the instance instead.
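As a hedged sketch of that workaround (the machine and rule names are invented for
illustration), you can keep a counter on the instance that mirrors what has gone
into the bundle and use it in the precondition:

.. code:: python

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule, precondition

    class TokenMachine(RuleBasedStateMachine):
        tokens = Bundle('tokens')

        def __init__(self):
            super(TokenMachine, self).__init__()
            self.token_count = 0  # instance state a precondition *can* inspect

        @rule(target=tokens, x=st.integers())
        def create_token(self, x):
            self.token_count += 1
            return x

        @precondition(lambda self: self.token_count > 0)
        @rule(token=tokens)
        def use_token(self, token):
            assert self.token_count > 0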
----------------------
Generic state machines
----------------------
The class GenericStateMachine is the underlying machinery of stateful testing
in Hypothesis. In execution it looks much like the RuleBasedStateMachine but
it allows the set of steps available to depend in essentially arbitrary
ways on what has happened so far. For example, if you wanted to
use Hypothesis to test a game, it could choose each step in the machine based
on the game to date and the set of actions the game program is telling it it
has available.
It essentially executes the following loop:
.. code:: python
machine = MyStateMachine()
try:
for _ in range(n_steps):
step = machine.steps().example()
machine.execute_step(step)
finally:
machine.teardown()
Where steps() and execute_step() are methods you must implement, and teardown
is a method you can implement if you need to clean something up at the end.
steps returns a strategy, which is allowed to depend arbitrarily on the
current state of the test execution. *Ideally* a good steps implementation
should be robust against minor changes in the state. Steps that change a lot
between slightly different executions will tend to produce worse quality
examples because they're hard to simplify.
The steps method *may* depend on external state, but it's not advisable and
may produce flaky tests.
If any of execute_step or teardown produces an error, Hypothesis will try to
find a minimal sequence of steps such that the following throws an
exception:
.. code:: python
try:
machine = MyStateMachine()
for step in steps:
machine.execute_step(step)
finally:
machine.teardown()
and such that at every point, the step executed is one that could plausibly
have come from a call to steps() in the current state.
Here's an example of using stateful testing to test a broken implementation
of a set in terms of a list (note that you could easily do something close to
this example with the rule based testing instead, and probably should. This
is mostly for illustration purposes):
.. code:: python
import unittest
from hypothesis.stateful import GenericStateMachine
from hypothesis.strategies import tuples, sampled_from, just, integers
class BrokenSet(GenericStateMachine):
def __init__(self):
self.data = []
def steps(self):
add_strategy = tuples(just("add"), integers())
if not self.data:
return add_strategy
else:
return (
add_strategy |
tuples(just("delete"), sampled_from(self.data)))
def execute_step(self, step):
action, value = step
if action == 'delete':
try:
self.data.remove(value)
except ValueError:
pass
assert value not in self.data
else:
assert action == 'add'
self.data.append(value)
assert value in self.data
TestSet = BrokenSet.TestCase
if __name__ == '__main__':
unittest.main()
Note that the strategy changes each time based on the data that's currently
in the state machine.
Running this gives us the following:
.. code:: bash
Step #1: ('add', 0)
Step #2: ('add', 0)
Step #3: ('delete', 0)
F
======================================================================
FAIL: runTest (hypothesis.stateful.BrokenSet.TestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
(...)
assert value not in self.data
AssertionError
So it adds two elements, then deletes one, and throws an assertion when it
finds out that this only deleted one of the copies of the element.
-------------------------
More fine grained control
-------------------------
If you want to bypass the TestCase infrastructure you can invoke these
manually. The stateful module exposes the function run_state_machine_as_test,
which takes an arbitrary function returning a GenericStateMachine and an
optional settings parameter and does the same as the class based runTest
provided.
In particular this may be useful if you wish to pass parameters to a custom
__init__ in your subclass.
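As a hedged sketch (reusing the ``BrokenSet`` machine from above; the settings
values are arbitrary):

.. code:: python

    from hypothesis import settings
    from hypothesis.stateful import run_state_machine_as_test

    def test_broken_set_explicitly():
        # The first argument is any callable returning a state machine, so a
        # lambda works well when your __init__ takes parameters.
        run_state_machine_as_test(
            lambda: BrokenSet(),
            settings=settings(max_examples=50, stateful_step_count=30),
        )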
hypothesis-3.0.1/docs/support.rst 0000664 0000000 0000000 00000001732 12661275660 0017126 0 ustar 00root root 0000000 0000000 ================
Help and Support
================
For questions you are happy to ask in public, the :doc:`Hypothesis community ` is a
friendly place where I or others will be more than happy to help you out. You're also welcome to
ask questions on Stack Overflow. If you do, please tag them with 'python-hypothesis' so someone
sees them.
For bugs and enhancements, please file an issue on the `GitHub issue tracker `_.
Note that as per the :doc:`development policy `, enhancements will probably not get
implemented unless you're willing to pay for development or implement them yourself (with assistance from me). Bugs
will tend to get fixed reasonably promptly, though it is of course on a best effort basis.
If you need to ask questions privately or want more of a guarantee of bugs being fixed promptly, please contact me on
hypothesis-support@drmaciver.com to talk about availability of support contracts.
hypothesis-3.0.1/docs/supported.rst 0000664 0000000 0000000 00000010344 12661275660 0017436 0 ustar 00root root 0000000 0000000 =============
Compatibility
=============
Hypothesis does its level best to be compatible with everything you could
possibly need it to be compatible with. Generally you should just try it and
expect it to work. If it doesn't, you can be surprised and check this document
for the details.
---------------
Python versions
---------------
Hypothesis is supported and tested on python 2.7
and python 3.4+. Python 3.0 through 3.2 are unsupported and definitely don't work.
It's not infeasible to make them work but would need a very good reason.
Python 2.6 and 3.3 are supported on a "best effort" basis. They probably work,
and bugs that affect them *might* get fixed.
Hypothesis also supports PyPy (PyPy3 does not work because it only runs 3.2 compatible
code, but if and when there's a 3.3 compatible version it will be supported), and
should support 32-bit and narrow builds, though this is currently only tested on Windows.
Hypothesis does not currently work on Jython (it requires sqlite), though could feasibly
be made to do so. IronPython might work but hasn't been tested.
In general Hypothesis does not officially support anything except the latest
patch release of any version of Python it supports. Earlier releases should work
and bugs in them will get fixed if reported, but they're not tested in CI and
no guarantees are made.
-----------------
Operating systems
-----------------
In theory Hypothesis should work anywhere that Python does. In practice it is
only known to work and regularly tested on OS X, Windows and Linux, and you may
experience issues running it elsewhere. For example a known issue is that FreeBSD
splits out the python-sqlite package from the main python package, and you will
need to install that in order for it to work.
If you're using something else and it doesn't work, do get in touch and I'll try
to help, but unless you can come up with a way for me to run a CI server on that
operating system it probably won't stay fixed due to the inevitable march of time.
------------------
Testing frameworks
------------------
In general Hypothesis goes to quite a lot of effort to generate things that
look like normal Python test functions that behave as closely to the originals
as possible, so it should work sensibly out of the box with every test framework.
If your testing relies on doing something other than calling a function and seeing
if it raises an exception then it probably *won't* work out of the box. In particular
things like tests which return generators and expect you to do something with them
(e.g. nose's yield based tests) will not work. Use a decorator or similar to wrap the
test to take this form.
In terms of what's actually *known* to work:
* Hypothesis integrates as smoothly with py.test and unittest as I can make it,
and this is verified as part of the CI.
* py.test fixtures work correctly with Hypothesis based functions, but note that
function based fixtures will only run once for the whole function, not once per
example.
* Nose has been tried at least once and works fine, and I'm aware of people who
use Hypothesis with Nose, but this isn't tested as part of the CI. yield based
tests simply won't work.
* Integration with Django's testing requires use of the :ref:`hypothesis-django` package.
The issue is that in Django's tests' normal mode of execution it will reset the
database once per test rather than once per example, which is not what you want.
Coverage works out of the box with Hypothesis (and Hypothesis has 100% branch
coverage in its own tests). However you should probably not use Coverage, Hypothesis
and PyPy together. Because Hypothesis does quite a lot of CPU heavy work compared
to normal tests, it really exacerbates the performance problems the two normally
have working together.
---------------
Django Versions
---------------
The Hypothesis Django integration is supported on 1.7 and 1.8. It will probably
not work on versions prior to that.
------------------------
Regularly verifying this
------------------------
Everything mentioned above as explicitly supported is checked on every commit
with `Travis `_ and `Appveyor `_
and goes green before a release happens, so when I say they're supported I really
mean it.
hypothesis-3.0.1/docs/usage.rst 0000664 0000000 0000000 00000003655 12661275660 0016524 0 ustar 00root root 0000000 0000000 =====================================
Open Source Projects using Hypothesis
=====================================
The following is a non-exhaustive list of open source projects I know are using Hypothesis. If you're aware of
any others please add them to the list! The only inclusion criterion right now is that if it's a Python library
then it should be available on pypi.
* `aur `_
* `axelrod `_
* `bidict `_
* `binaryornot `_
* `brotlipy `_
* `chardet `_
* `cmph-cffi `_
* `cryptography `_
* `fastnumbers `_
* `flocker `_
* `flownetpy `_
* `funsize `_
* `fusion-index `_
* `hyper-h2 `_
* `mariadb-dyncol `_
* `mercurial `_
* `natsort `_
* `pretext `_
* `priority `_
* `pyrsistent `_
* `pyudev `_
* `qutebrowser `_
* `RubyMarshal `_
* `Segpy `_
* `simoa `_
* `srt `_
* `tchannel `_
* `wcag-contrast-ratio `_
* `yturl `_
hypothesis-3.0.1/examples/ 0000775 0000000 0000000 00000000000 12661275660 0015543 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/examples/README.rst 0000664 0000000 0000000 00000000662 12661275660 0017236 0 ustar 00root root 0000000 0000000 ============================
Examples of Hypothesis usage
============================
This is a directory for examples of using Hypothesis that showcase its
features or demonstrate a useful way of testing something.
Right now it's a bit small and fairly algorithmically focused. Pull requests to
add more examples would be *greatly* appreciated, especially ones using e.g.
the Django integration or testing something "Businessy".
hypothesis-3.0.1/examples/test_binary_search.py 0000664 0000000 0000000 00000011567 12661275660 0021777 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This file demonstrates testing a binary search.
It's a useful example because the result of the binary search is so clearly
determined by the invariants it must satisfy, so we can simply test for those
invariants.
It also demonstrates the useful testing technique of testing how the answer
should change (or not) in response to movements in the underlying data.
"""
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given
def binary_search(ls, v):
"""Take a list ls and a value v such that ls is sorted and v is comparable
with the elements of ls.
Return an index i such that 0 <= i <= len(ls) with the properties:
1. ls.insert(i, v) is sorted
2. ls.insert(j, v) is not sorted for j < i
"""
# Without this check we will get an index error on the next line when the
# list is empty.
if not ls:
return 0
# Without this check we will miss the case where the insertion point should
# be zero: The invariant we maintain in the next section is that lo is
# always strictly lower than the insertion point.
if v <= ls[0]:
return 0
# Invariant: There is no insertion point i with i <= lo
lo = 0
# Invariant: There is an insertion point i with i <= hi
hi = len(ls)
while lo + 1 < hi:
mid = (lo + hi) // 2
if v > ls[mid]:
# Inserting v anywhere below mid would result in an unsorted list
# because it's > the value at mid. Therefore mid is a valid new lo
lo = mid
# Uncommenting the following lines will cause this to return a valid
# insertion point which is not always minimal.
# elif v == ls[mid]:
# return mid
else:
# Either v == ls[mid] in which case mid is a valid insertion point
# or v < ls[mid], in which case all valid insertion points must be
# < hi. Either way, mid is a valid new hi.
hi = mid
assert lo + 1 == hi
# We now know that there is a valid insertion point <= hi and there is no
# valid insertion point < hi because hi - 1 is lo. Therefore hi is the
# answer we were seeking
return hi
def is_sorted(ls):
"""Is this list sorted?"""
for i in range(len(ls) - 1):
if ls[i] > ls[i + 1]:
return False
return True
Values = st.integers()
# We generate arbitrary lists and turn this into generating sorting lists
# by just sorting them.
SortedLists = st.lists(Values).map(sorted)
# We could also do it this way, but that would be a bad idea:
# SortedLists = st.lists(Values).filter(is_sorted)
# The problem is that Hypothesis will only generate long sorted lists with very
# low probability, so we are much better off post-processing values into the
# form we want than filtering them out.
@given(ls=SortedLists, v=Values)
def test_insert_is_sorted(ls, v):
"""
We test the first invariant: binary_search should return an index such that
inserting the value provided at that index would result in a sorted set.
"""
ls.insert(binary_search(ls, v), v)
assert is_sorted(ls)
@given(ls=SortedLists, v=Values)
def test_is_minimal(ls, v):
"""
We test the second invariant: binary_search should return an index such
that no smaller index is a valid insertion point for v
"""
for i in range(binary_search(ls, v)):
ls2 = list(ls)
ls2.insert(i, v)
assert not is_sorted(ls2)
@given(ls=SortedLists, v=Values)
def test_inserts_into_same_place_twice(ls, v):
"""
In this we test a *consequence* of the second invariant: When we insert a
value into a list twice, the insertion point should be the same both times.
This is because we know that v is > the previous element and == the next
element.
In theory if the former passes, this should always pass. In practice,
failures are detected by this test with much higher probability because it
deliberately puts the data into a shape that is likely to trigger a
failure.
This is an instance of a good general category of test: Testing how the
function moves in responses to changes in the underlying data.
"""
i = binary_search(ls, v)
ls.insert(i, v)
assert binary_search(ls, v) == i
hypothesis-3.0.1/examples/test_rle.py 0000664 0000000 0000000 00000007015 12661275660 0017741 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This example demonstrates testing a run length encoding scheme. That is, we
take a sequence and represent it by a shorter sequence where each 'run' of
consecutive equal elements is represented as a single element plus a count. So
e.g.
[1, 1, 1, 1, 2, 1] is represented as [[1, 4], [2, 1], [1, 1]]
This demonstrates the useful decode(encode(x)) == x invariant that is often
a fruitful source of testing with Hypothesis.
It also has an example of testing invariants in response to changes in the
underlying data.
"""
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given, assume
def run_length_encode(seq):
"""Encode a sequence as a new run-length encoded sequence."""
if not seq:
return []
# By starting off the count at zero we simplify the iteration logic
# slightly.
result = [[seq[0], 0]]
for s in seq:
if (
# If you uncomment this line this branch will be skipped and we'll
# always append a new run of length 1. Note which tests fail.
# False and
s == result[-1][0]
# Try uncommenting this line and see what problems occur:
# and result[-1][-1] < 2
):
result[-1][1] += 1
else:
result.append([s, 1])
return result
def run_length_decode(seq):
"""Take a previously encoded sequence and reconstruct the original from
it."""
result = []
for s, i in seq:
for _ in range(i):
result.append(s)
return result
# We use lists of a type that should have a relatively high duplication rate,
# otherwise we'd almost never get any runs.
Lists = st.lists(st.integers(0, 10))
@given(Lists)
def test_decodes_to_starting_sequence(ls):
"""If we encode a sequence and then decode the result, we should get the
original sequence back.
Otherwise we've done something very wrong.
"""
assert run_length_decode(run_length_encode(ls)) == ls
@given(Lists, st.integers(0, 100))
def test_duplicating_an_element_does_not_increase_length(ls, i):
"""The previous test could be passed by simply returning the input sequence
so we need something that tests the compression property of our encoding.
In this test we deliberately introduce or extend a run and assert
that this does not increase the length of our encoding, because they
should be part of the same run in the final result.
"""
# We use assume to get a valid index into the list. We could also have used
# e.g. flatmap, but this is relatively straightforward and will tend to
# perform better.
assume(i < len(ls))
ls2 = list(ls)
# duplicating the value at i right next to it guarantees they are part of
# the same run in the resulting compression.
ls2.insert(i, ls2[i])
assert len(run_length_encode(ls2)) == len(run_length_encode(ls))
hypothesis-3.0.1/notebooks/ 0000775 0000000 0000000 00000000000 12661275660 0015730 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/notebooks/Designing a better simplifier.ipynb 0000664 0000000 0000000 00000471123 12661275660 0024505 0 ustar 00root root 0000000 0000000 {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Designing a better simplifier\n",
"\n",
"This is a notebook talking through some of the considerations in the design of Hypothesis's approach to simplification.\n",
"\n",
"It doesn't perfectly mirror what actually happens in Hypothesis, but it should give some consideration to the sort of things that Hypothesis does and why it takes a particular approach.\n",
"\n",
"In order to simplify the scope of this document we are only going to\n",
"concern ourselves with lists of integers. There are a number of API considerations involved in expanding beyond that point, however most of the algorithmic considerations are the same.\n",
"\n",
"The big difference between lists of integers and the general case is that integers can never be too complex. In particular we will rapidly get to the point where individual elements can be simplified in usually only log(n) calls. When dealing with e.g. lists of lists this is a much more complicated proposition. That may be covered in another notebook.\n",
"\n",
"Our objective here is to minimize the number of times we check the condition. We won't be looking at actual timing performance, because usually the speed of the condition is the bottleneck there (and where it's not, everything is fast enough that we need not worry)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def greedy_shrink(ls, constraint, shrink):\n",
" \"\"\"\n",
" This is the \"classic\" QuickCheck algorithm which takes a shrink function\n",
" which will iterate over simpler versions of an example. We are trying\n",
" to find a local minima: That is an example ls such that condition(ls)\n",
" is True but that constraint(t) is False for each t in shrink(ls).\n",
" \"\"\"\n",
" while True:\n",
" for s in shrink(ls):\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink1(ls):\n",
" \"\"\"\n",
" This is our prototype shrink function. It is very bad. It makes the\n",
" mistake of only making very small changes to an example each time.\n",
" \n",
" Most people write something like this the first time they come to\n",
" implement example shrinking. In particular early Hypothesis very much\n",
" made this mistake.\n",
" \n",
" What this does:\n",
" \n",
" For each index, if the value of the index is non-zero we try\n",
" decrementing it by 1.\n",
" \n",
" We then (regardless of if it's zero) try the list with the value at\n",
" that index deleted.\n",
" \"\"\"\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" if s[i] > 0:\n",
" s[i] -= 1\n",
" yield list(s)\n",
" del s[i]\n",
" yield list(s)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def show_trace(start, constraint, simplifier):\n",
" \"\"\"\n",
" This is a debug function. You shouldn't concern yourself with\n",
" its implementation too much.\n",
" \n",
" What it does is print out every intermediate step in applying a\n",
" simplifier (a function of the form (list, constraint) -> list)\n",
" along with whether it is a successful shrink or not.\n",
" \"\"\"\n",
" if start is None:\n",
" while True:\n",
" start = gen_list()\n",
" if constraint(start):\n",
" break\n",
"\n",
" shrinks = [0]\n",
" tests = [0]\n",
"\n",
" def print_shrink(ls):\n",
" tests[0] += 1\n",
" if constraint(ls):\n",
" shrinks[0] += 1\n",
" print(\"✓\", ls)\n",
" return True\n",
" else:\n",
" print(\"✗\", ls)\n",
" return False\n",
" print(\"✓\", start)\n",
" simplifier(start, print_shrink)\n",
" print()\n",
" print(\"%d shrinks with %d function calls\" % (\n",
" shrinks[0], tests[0]))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from functools import partial"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5]\n",
"✓ [4, 5]\n",
"✓ [3, 5]\n",
"✓ [2, 5]\n",
"✓ [1, 5]\n",
"✓ [0, 5]\n",
"✗ [5]\n",
"✓ [0, 4]\n",
"✗ [4]\n",
"✓ [0, 3]\n",
"✗ [3]\n",
"✓ [0, 2]\n",
"✗ [2]\n",
"✓ [0, 1]\n",
"✗ [1]\n",
"✓ [0, 0]\n",
"✗ [0]\n",
"✗ [0]\n",
"\n",
"10 shrinks with 17 function calls\n"
]
}
],
"source": [
"show_trace([5, 5], lambda x: len(x) >= 2, partial(greedy_shrink, shrink=shrink1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That worked reasonably well, but it sure was a lot of function calls for such a small amount of shrinking. What would have happened if we'd started with [100, 100]?"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink2(ls):\n",
" \"\"\"\n",
" Here is an improved shrink function. We first try deleting each element\n",
" and then we try making each element smaller, but we do so from the left\n",
" hand side instead of the right. This means we will always find the\n",
" smallest value that can go in there, but we will do so much sooner.\n",
" \"\"\"\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" del s[i]\n",
" yield list(s)\n",
" \n",
" for i in range(len(ls)):\n",
" for x in range(ls[i]):\n",
" s = list(ls)\n",
" s[i] = x\n",
" yield s"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5]\n",
"✗ [5]\n",
"✗ [5]\n",
"✓ [0, 5]\n",
"✗ [5]\n",
"✗ [0]\n",
"✓ [0, 0]\n",
"✗ [0]\n",
"✗ [0]\n",
"\n",
"2 shrinks with 8 function calls\n"
]
}
],
"source": [
"show_trace([5, 5], lambda x: len(x) >= 2, partial(\n",
" greedy_shrink, shrink=shrink2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This did indeed reduce the number of function calls significantly - we immediately determine that the value in the cell doesn't matter and we can just put zero there. \n",
"\n",
"But what would have happened if the value *did* matter?"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [1000]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [3]\n",
"✗ [4]\n",
"✗ [5]\n",
"✗ [6]\n",
"✗ [7]\n",
"✗ [8]\n",
"✗ [9]\n",
"✗ [10]\n",
"✗ [11]\n",
"✗ [12]\n",
"✗ [13]\n",
"✗ [14]\n",
"✗ [15]\n",
"✗ [16]\n",
"✗ [17]\n",
"✗ [18]\n",
"✗ [19]\n",
"✗ [20]\n",
"✗ [21]\n",
"✗ [22]\n",
"✗ [23]\n",
"✗ [24]\n",
"✗ [25]\n",
"✗ [26]\n",
"✗ [27]\n",
"✗ [28]\n",
"✗ [29]\n",
"✗ [30]\n",
"✗ [31]\n",
"✗ [32]\n",
"✗ [33]\n",
"✗ [34]\n",
"✗ [35]\n",
"✗ [36]\n",
"✗ [37]\n",
"✗ [38]\n",
"✗ [39]\n",
"✗ [40]\n",
"✗ [41]\n",
"✗ [42]\n",
"✗ [43]\n",
"✗ [44]\n",
"✗ [45]\n",
"✗ [46]\n",
"✗ [47]\n",
"✗ [48]\n",
"✗ [49]\n",
"✗ [50]\n",
"✗ [51]\n",
"✗ [52]\n",
"✗ [53]\n",
"✗ [54]\n",
"✗ [55]\n",
"✗ [56]\n",
"✗ [57]\n",
"✗ [58]\n",
"✗ [59]\n",
"✗ [60]\n",
"✗ [61]\n",
"✗ [62]\n",
"✗ [63]\n",
"✗ [64]\n",
"✗ [65]\n",
"✗ [66]\n",
"✗ [67]\n",
"✗ [68]\n",
"✗ [69]\n",
"✗ [70]\n",
"✗ [71]\n",
"✗ [72]\n",
"✗ [73]\n",
"✗ [74]\n",
"✗ [75]\n",
"✗ [76]\n",
"✗ [77]\n",
"✗ [78]\n",
"✗ [79]\n",
"✗ [80]\n",
"✗ [81]\n",
"✗ [82]\n",
"✗ [83]\n",
"✗ [84]\n",
"✗ [85]\n",
"✗ [86]\n",
"✗ [87]\n",
"✗ [88]\n",
"✗ [89]\n",
"✗ [90]\n",
"✗ [91]\n",
"✗ [92]\n",
"✗ [93]\n",
"✗ [94]\n",
"✗ [95]\n",
"✗ [96]\n",
"✗ [97]\n",
"✗ [98]\n",
"✗ [99]\n",
"✗ [100]\n",
"✗ [101]\n",
"✗ [102]\n",
"✗ [103]\n",
"✗ [104]\n",
"✗ [105]\n",
"✗ [106]\n",
"✗ [107]\n",
"✗ [108]\n",
"✗ [109]\n",
"✗ [110]\n",
"✗ [111]\n",
"✗ [112]\n",
"✗ [113]\n",
"✗ [114]\n",
"✗ [115]\n",
"✗ [116]\n",
"✗ [117]\n",
"✗ [118]\n",
"✗ [119]\n",
"✗ [120]\n",
"✗ [121]\n",
"✗ [122]\n",
"✗ [123]\n",
"✗ [124]\n",
"✗ [125]\n",
"✗ [126]\n",
"✗ [127]\n",
"✗ [128]\n",
"✗ [129]\n",
"✗ [130]\n",
"✗ [131]\n",
"✗ [132]\n",
"✗ [133]\n",
"✗ [134]\n",
"✗ [135]\n",
"✗ [136]\n",
"✗ [137]\n",
"✗ [138]\n",
"✗ [139]\n",
"✗ [140]\n",
"✗ [141]\n",
"✗ [142]\n",
"✗ [143]\n",
"✗ [144]\n",
"✗ [145]\n",
"✗ [146]\n",
"✗ [147]\n",
"✗ [148]\n",
"✗ [149]\n",
"✗ [150]\n",
"✗ [151]\n",
"✗ [152]\n",
"✗ [153]\n",
"✗ [154]\n",
"✗ [155]\n",
"✗ [156]\n",
"✗ [157]\n",
"✗ [158]\n",
"✗ [159]\n",
"✗ [160]\n",
"✗ [161]\n",
"✗ [162]\n",
"✗ [163]\n",
"✗ [164]\n",
"✗ [165]\n",
"✗ [166]\n",
"✗ [167]\n",
"✗ [168]\n",
"✗ [169]\n",
"✗ [170]\n",
"✗ [171]\n",
"✗ [172]\n",
"✗ [173]\n",
"✗ [174]\n",
"✗ [175]\n",
"✗ [176]\n",
"✗ [177]\n",
"✗ [178]\n",
"✗ [179]\n",
"✗ [180]\n",
"✗ [181]\n",
"✗ [182]\n",
"✗ [183]\n",
"✗ [184]\n",
"✗ [185]\n",
"✗ [186]\n",
"✗ [187]\n",
"✗ [188]\n",
"✗ [189]\n",
"✗ [190]\n",
"✗ [191]\n",
"✗ [192]\n",
"✗ [193]\n",
"✗ [194]\n",
"✗ [195]\n",
"✗ [196]\n",
"✗ [197]\n",
"✗ [198]\n",
"✗ [199]\n",
"✗ [200]\n",
"✗ [201]\n",
"✗ [202]\n",
"✗ [203]\n",
"✗ [204]\n",
"✗ [205]\n",
"✗ [206]\n",
"✗ [207]\n",
"✗ [208]\n",
"✗ [209]\n",
"✗ [210]\n",
"✗ [211]\n",
"✗ [212]\n",
"✗ [213]\n",
"✗ [214]\n",
"✗ [215]\n",
"✗ [216]\n",
"✗ [217]\n",
"✗ [218]\n",
"✗ [219]\n",
"✗ [220]\n",
"✗ [221]\n",
"✗ [222]\n",
"✗ [223]\n",
"✗ [224]\n",
"✗ [225]\n",
"✗ [226]\n",
"✗ [227]\n",
"✗ [228]\n",
"✗ [229]\n",
"✗ [230]\n",
"✗ [231]\n",
"✗ [232]\n",
"✗ [233]\n",
"✗ [234]\n",
"✗ [235]\n",
"✗ [236]\n",
"✗ [237]\n",
"✗ [238]\n",
"✗ [239]\n",
"✗ [240]\n",
"✗ [241]\n",
"✗ [242]\n",
"✗ [243]\n",
"✗ [244]\n",
"✗ [245]\n",
"✗ [246]\n",
"✗ [247]\n",
"✗ [248]\n",
"✗ [249]\n",
"✗ [250]\n",
"✗ [251]\n",
"✗ [252]\n",
"✗ [253]\n",
"✗ [254]\n",
"✗ [255]\n",
"✗ [256]\n",
"✗ [257]\n",
"✗ [258]\n",
"✗ [259]\n",
"✗ [260]\n",
"✗ [261]\n",
"✗ [262]\n",
"✗ [263]\n",
"✗ [264]\n",
"✗ [265]\n",
"✗ [266]\n",
"✗ [267]\n",
"✗ [268]\n",
"✗ [269]\n",
"✗ [270]\n",
"✗ [271]\n",
"✗ [272]\n",
"✗ [273]\n",
"✗ [274]\n",
"✗ [275]\n",
"✗ [276]\n",
"✗ [277]\n",
"✗ [278]\n",
"✗ [279]\n",
"✗ [280]\n",
"✗ [281]\n",
"✗ [282]\n",
"✗ [283]\n",
"✗ [284]\n",
"✗ [285]\n",
"✗ [286]\n",
"✗ [287]\n",
"✗ [288]\n",
"✗ [289]\n",
"✗ [290]\n",
"✗ [291]\n",
"✗ [292]\n",
"✗ [293]\n",
"✗ [294]\n",
"✗ [295]\n",
"✗ [296]\n",
"✗ [297]\n",
"✗ [298]\n",
"✗ [299]\n",
"✗ [300]\n",
"✗ [301]\n",
"✗ [302]\n",
"✗ [303]\n",
"✗ [304]\n",
"✗ [305]\n",
"✗ [306]\n",
"✗ [307]\n",
"✗ [308]\n",
"✗ [309]\n",
"✗ [310]\n",
"✗ [311]\n",
"✗ [312]\n",
"✗ [313]\n",
"✗ [314]\n",
"✗ [315]\n",
"✗ [316]\n",
"✗ [317]\n",
"✗ [318]\n",
"✗ [319]\n",
"✗ [320]\n",
"✗ [321]\n",
"✗ [322]\n",
"✗ [323]\n",
"✗ [324]\n",
"✗ [325]\n",
"✗ [326]\n",
"✗ [327]\n",
"✗ [328]\n",
"✗ [329]\n",
"✗ [330]\n",
"✗ [331]\n",
"✗ [332]\n",
"✗ [333]\n",
"✗ [334]\n",
"✗ [335]\n",
"✗ [336]\n",
"✗ [337]\n",
"✗ [338]\n",
"✗ [339]\n",
"✗ [340]\n",
"✗ [341]\n",
"✗ [342]\n",
"✗ [343]\n",
"✗ [344]\n",
"✗ [345]\n",
"✗ [346]\n",
"✗ [347]\n",
"✗ [348]\n",
"✗ [349]\n",
"✗ [350]\n",
"✗ [351]\n",
"✗ [352]\n",
"✗ [353]\n",
"✗ [354]\n",
"✗ [355]\n",
"✗ [356]\n",
"✗ [357]\n",
"✗ [358]\n",
"✗ [359]\n",
"✗ [360]\n",
"✗ [361]\n",
"✗ [362]\n",
"✗ [363]\n",
"✗ [364]\n",
"✗ [365]\n",
"✗ [366]\n",
"✗ [367]\n",
"✗ [368]\n",
"✗ [369]\n",
"✗ [370]\n",
"✗ [371]\n",
"✗ [372]\n",
"✗ [373]\n",
"✗ [374]\n",
"✗ [375]\n",
"✗ [376]\n",
"✗ [377]\n",
"✗ [378]\n",
"✗ [379]\n",
"✗ [380]\n",
"✗ [381]\n",
"✗ [382]\n",
"✗ [383]\n",
"✗ [384]\n",
"✗ [385]\n",
"✗ [386]\n",
"✗ [387]\n",
"✗ [388]\n",
"✗ [389]\n",
"✗ [390]\n",
"✗ [391]\n",
"✗ [392]\n",
"✗ [393]\n",
"✗ [394]\n",
"✗ [395]\n",
"✗ [396]\n",
"✗ [397]\n",
"✗ [398]\n",
"✗ [399]\n",
"✗ [400]\n",
"✗ [401]\n",
"✗ [402]\n",
"✗ [403]\n",
"✗ [404]\n",
"✗ [405]\n",
"✗ [406]\n",
"✗ [407]\n",
"✗ [408]\n",
"✗ [409]\n",
"✗ [410]\n",
"✗ [411]\n",
"✗ [412]\n",
"✗ [413]\n",
"✗ [414]\n",
"✗ [415]\n",
"✗ [416]\n",
"✗ [417]\n",
"✗ [418]\n",
"✗ [419]\n",
"✗ [420]\n",
"✗ [421]\n",
"✗ [422]\n",
"✗ [423]\n",
"✗ [424]\n",
"✗ [425]\n",
"✗ [426]\n",
"✗ [427]\n",
"✗ [428]\n",
"✗ [429]\n",
"✗ [430]\n",
"✗ [431]\n",
"✗ [432]\n",
"✗ [433]\n",
"✗ [434]\n",
"✗ [435]\n",
"✗ [436]\n",
"✗ [437]\n",
"✗ [438]\n",
"✗ [439]\n",
"✗ [440]\n",
"✗ [441]\n",
"✗ [442]\n",
"✗ [443]\n",
"✗ [444]\n",
"✗ [445]\n",
"✗ [446]\n",
"✗ [447]\n",
"✗ [448]\n",
"✗ [449]\n",
"✗ [450]\n",
"✗ [451]\n",
"✗ [452]\n",
"✗ [453]\n",
"✗ [454]\n",
"✗ [455]\n",
"✗ [456]\n",
"✗ [457]\n",
"✗ [458]\n",
"✗ [459]\n",
"✗ [460]\n",
"✗ [461]\n",
"✗ [462]\n",
"✗ [463]\n",
"✗ [464]\n",
"✗ [465]\n",
"✗ [466]\n",
"✗ [467]\n",
"✗ [468]\n",
"✗ [469]\n",
"✗ [470]\n",
"✗ [471]\n",
"✗ [472]\n",
"✗ [473]\n",
"✗ [474]\n",
"✗ [475]\n",
"✗ [476]\n",
"✗ [477]\n",
"✗ [478]\n",
"✗ [479]\n",
"✗ [480]\n",
"✗ [481]\n",
"✗ [482]\n",
"✗ [483]\n",
"✗ [484]\n",
"✗ [485]\n",
"✗ [486]\n",
"✗ [487]\n",
"✗ [488]\n",
"✗ [489]\n",
"✗ [490]\n",
"✗ [491]\n",
"✗ [492]\n",
"✗ [493]\n",
"✗ [494]\n",
"✗ [495]\n",
"✗ [496]\n",
"✗ [497]\n",
"✗ [498]\n",
"✗ [499]\n",
"✓ [500]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [3]\n",
"✗ [4]\n",
"✗ [5]\n",
"✗ [6]\n",
"✗ [7]\n",
"✗ [8]\n",
"✗ [9]\n",
"✗ [10]\n",
"✗ [11]\n",
"✗ [12]\n",
"✗ [13]\n",
"✗ [14]\n",
"✗ [15]\n",
"✗ [16]\n",
"✗ [17]\n",
"✗ [18]\n",
"✗ [19]\n",
"✗ [20]\n",
"✗ [21]\n",
"✗ [22]\n",
"✗ [23]\n",
"✗ [24]\n",
"✗ [25]\n",
"✗ [26]\n",
"✗ [27]\n",
"✗ [28]\n",
"✗ [29]\n",
"✗ [30]\n",
"✗ [31]\n",
"✗ [32]\n",
"✗ [33]\n",
"✗ [34]\n",
"✗ [35]\n",
"✗ [36]\n",
"✗ [37]\n",
"✗ [38]\n",
"✗ [39]\n",
"✗ [40]\n",
"✗ [41]\n",
"✗ [42]\n",
"✗ [43]\n",
"✗ [44]\n",
"✗ [45]\n",
"✗ [46]\n",
"✗ [47]\n",
"✗ [48]\n",
"✗ [49]\n",
"✗ [50]\n",
"✗ [51]\n",
"✗ [52]\n",
"✗ [53]\n",
"✗ [54]\n",
"✗ [55]\n",
"✗ [56]\n",
"✗ [57]\n",
"✗ [58]\n",
"✗ [59]\n",
"✗ [60]\n",
"✗ [61]\n",
"✗ [62]\n",
"✗ [63]\n",
"✗ [64]\n",
"✗ [65]\n",
"✗ [66]\n",
"✗ [67]\n",
"✗ [68]\n",
"✗ [69]\n",
"✗ [70]\n",
"✗ [71]\n",
"✗ [72]\n",
"✗ [73]\n",
"✗ [74]\n",
"✗ [75]\n",
"✗ [76]\n",
"✗ [77]\n",
"✗ [78]\n",
"✗ [79]\n",
"✗ [80]\n",
"✗ [81]\n",
"✗ [82]\n",
"✗ [83]\n",
"✗ [84]\n",
"✗ [85]\n",
"✗ [86]\n",
"✗ [87]\n",
"✗ [88]\n",
"✗ [89]\n",
"✗ [90]\n",
"✗ [91]\n",
"✗ [92]\n",
"✗ [93]\n",
"✗ [94]\n",
"✗ [95]\n",
"✗ [96]\n",
"✗ [97]\n",
"✗ [98]\n",
"✗ [99]\n",
"✗ [100]\n",
"✗ [101]\n",
"✗ [102]\n",
"✗ [103]\n",
"✗ [104]\n",
"✗ [105]\n",
"✗ [106]\n",
"✗ [107]\n",
"✗ [108]\n",
"✗ [109]\n",
"✗ [110]\n",
"✗ [111]\n",
"✗ [112]\n",
"✗ [113]\n",
"✗ [114]\n",
"✗ [115]\n",
"✗ [116]\n",
"✗ [117]\n",
"✗ [118]\n",
"✗ [119]\n",
"✗ [120]\n",
"✗ [121]\n",
"✗ [122]\n",
"✗ [123]\n",
"✗ [124]\n",
"✗ [125]\n",
"✗ [126]\n",
"✗ [127]\n",
"✗ [128]\n",
"✗ [129]\n",
"✗ [130]\n",
"✗ [131]\n",
"✗ [132]\n",
"✗ [133]\n",
"✗ [134]\n",
"✗ [135]\n",
"✗ [136]\n",
"✗ [137]\n",
"✗ [138]\n",
"✗ [139]\n",
"✗ [140]\n",
"✗ [141]\n",
"✗ [142]\n",
"✗ [143]\n",
"✗ [144]\n",
"✗ [145]\n",
"✗ [146]\n",
"✗ [147]\n",
"✗ [148]\n",
"✗ [149]\n",
"✗ [150]\n",
"✗ [151]\n",
"✗ [152]\n",
"✗ [153]\n",
"✗ [154]\n",
"✗ [155]\n",
"✗ [156]\n",
"✗ [157]\n",
"✗ [158]\n",
"✗ [159]\n",
"✗ [160]\n",
"✗ [161]\n",
"✗ [162]\n",
"✗ [163]\n",
"✗ [164]\n",
"✗ [165]\n",
"✗ [166]\n",
"✗ [167]\n",
"✗ [168]\n",
"✗ [169]\n",
"✗ [170]\n",
"✗ [171]\n",
"✗ [172]\n",
"✗ [173]\n",
"✗ [174]\n",
"✗ [175]\n",
"✗ [176]\n",
"✗ [177]\n",
"✗ [178]\n",
"✗ [179]\n",
"✗ [180]\n",
"✗ [181]\n",
"✗ [182]\n",
"✗ [183]\n",
"✗ [184]\n",
"✗ [185]\n",
"✗ [186]\n",
"✗ [187]\n",
"✗ [188]\n",
"✗ [189]\n",
"✗ [190]\n",
"✗ [191]\n",
"✗ [192]\n",
"✗ [193]\n",
"✗ [194]\n",
"✗ [195]\n",
"✗ [196]\n",
"✗ [197]\n",
"✗ [198]\n",
"✗ [199]\n",
"✗ [200]\n",
"✗ [201]\n",
"✗ [202]\n",
"✗ [203]\n",
"✗ [204]\n",
"✗ [205]\n",
"✗ [206]\n",
"✗ [207]\n",
"✗ [208]\n",
"✗ [209]\n",
"✗ [210]\n",
"✗ [211]\n",
"✗ [212]\n",
"✗ [213]\n",
"✗ [214]\n",
"✗ [215]\n",
"✗ [216]\n",
"✗ [217]\n",
"✗ [218]\n",
"✗ [219]\n",
"✗ [220]\n",
"✗ [221]\n",
"✗ [222]\n",
"✗ [223]\n",
"✗ [224]\n",
"✗ [225]\n",
"✗ [226]\n",
"✗ [227]\n",
"✗ [228]\n",
"✗ [229]\n",
"✗ [230]\n",
"✗ [231]\n",
"✗ [232]\n",
"✗ [233]\n",
"✗ [234]\n",
"✗ [235]\n",
"✗ [236]\n",
"✗ [237]\n",
"✗ [238]\n",
"✗ [239]\n",
"✗ [240]\n",
"✗ [241]\n",
"✗ [242]\n",
"✗ [243]\n",
"✗ [244]\n",
"✗ [245]\n",
"✗ [246]\n",
"✗ [247]\n",
"✗ [248]\n",
"✗ [249]\n",
"✗ [250]\n",
"✗ [251]\n",
"✗ [252]\n",
"✗ [253]\n",
"✗ [254]\n",
"✗ [255]\n",
"✗ [256]\n",
"✗ [257]\n",
"✗ [258]\n",
"✗ [259]\n",
"✗ [260]\n",
"✗ [261]\n",
"✗ [262]\n",
"✗ [263]\n",
"✗ [264]\n",
"✗ [265]\n",
"✗ [266]\n",
"✗ [267]\n",
"✗ [268]\n",
"✗ [269]\n",
"✗ [270]\n",
"✗ [271]\n",
"✗ [272]\n",
"✗ [273]\n",
"✗ [274]\n",
"✗ [275]\n",
"✗ [276]\n",
"✗ [277]\n",
"✗ [278]\n",
"✗ [279]\n",
"✗ [280]\n",
"✗ [281]\n",
"✗ [282]\n",
"✗ [283]\n",
"✗ [284]\n",
"✗ [285]\n",
"✗ [286]\n",
"✗ [287]\n",
"✗ [288]\n",
"✗ [289]\n",
"✗ [290]\n",
"✗ [291]\n",
"✗ [292]\n",
"✗ [293]\n",
"✗ [294]\n",
"✗ [295]\n",
"✗ [296]\n",
"✗ [297]\n",
"✗ [298]\n",
"✗ [299]\n",
"✗ [300]\n",
"✗ [301]\n",
"✗ [302]\n",
"✗ [303]\n",
"✗ [304]\n",
"✗ [305]\n",
"✗ [306]\n",
"✗ [307]\n",
"✗ [308]\n",
"✗ [309]\n",
"✗ [310]\n",
"✗ [311]\n",
"✗ [312]\n",
"✗ [313]\n",
"✗ [314]\n",
"✗ [315]\n",
"✗ [316]\n",
"✗ [317]\n",
"✗ [318]\n",
"✗ [319]\n",
"✗ [320]\n",
"✗ [321]\n",
"✗ [322]\n",
"✗ [323]\n",
"✗ [324]\n",
"✗ [325]\n",
"✗ [326]\n",
"✗ [327]\n",
"✗ [328]\n",
"✗ [329]\n",
"✗ [330]\n",
"✗ [331]\n",
"✗ [332]\n",
"✗ [333]\n",
"✗ [334]\n",
"✗ [335]\n",
"✗ [336]\n",
"✗ [337]\n",
"✗ [338]\n",
"✗ [339]\n",
"✗ [340]\n",
"✗ [341]\n",
"✗ [342]\n",
"✗ [343]\n",
"✗ [344]\n",
"✗ [345]\n",
"✗ [346]\n",
"✗ [347]\n",
"✗ [348]\n",
"✗ [349]\n",
"✗ [350]\n",
"✗ [351]\n",
"✗ [352]\n",
"✗ [353]\n",
"✗ [354]\n",
"✗ [355]\n",
"✗ [356]\n",
"✗ [357]\n",
"✗ [358]\n",
"✗ [359]\n",
"✗ [360]\n",
"✗ [361]\n",
"✗ [362]\n",
"✗ [363]\n",
"✗ [364]\n",
"✗ [365]\n",
"✗ [366]\n",
"✗ [367]\n",
"✗ [368]\n",
"✗ [369]\n",
"✗ [370]\n",
"✗ [371]\n",
"✗ [372]\n",
"✗ [373]\n",
"✗ [374]\n",
"✗ [375]\n",
"✗ [376]\n",
"✗ [377]\n",
"✗ [378]\n",
"✗ [379]\n",
"✗ [380]\n",
"✗ [381]\n",
"✗ [382]\n",
"✗ [383]\n",
"✗ [384]\n",
"✗ [385]\n",
"✗ [386]\n",
"✗ [387]\n",
"✗ [388]\n",
"✗ [389]\n",
"✗ [390]\n",
"✗ [391]\n",
"✗ [392]\n",
"✗ [393]\n",
"✗ [394]\n",
"✗ [395]\n",
"✗ [396]\n",
"✗ [397]\n",
"✗ [398]\n",
"✗ [399]\n",
"✗ [400]\n",
"✗ [401]\n",
"✗ [402]\n",
"✗ [403]\n",
"✗ [404]\n",
"✗ [405]\n",
"✗ [406]\n",
"✗ [407]\n",
"✗ [408]\n",
"✗ [409]\n",
"✗ [410]\n",
"✗ [411]\n",
"✗ [412]\n",
"✗ [413]\n",
"✗ [414]\n",
"✗ [415]\n",
"✗ [416]\n",
"✗ [417]\n",
"✗ [418]\n",
"✗ [419]\n",
"✗ [420]\n",
"✗ [421]\n",
"✗ [422]\n",
"✗ [423]\n",
"✗ [424]\n",
"✗ [425]\n",
"✗ [426]\n",
"✗ [427]\n",
"✗ [428]\n",
"✗ [429]\n",
"✗ [430]\n",
"✗ [431]\n",
"✗ [432]\n",
"✗ [433]\n",
"✗ [434]\n",
"✗ [435]\n",
"✗ [436]\n",
"✗ [437]\n",
"✗ [438]\n",
"✗ [439]\n",
"✗ [440]\n",
"✗ [441]\n",
"✗ [442]\n",
"✗ [443]\n",
"✗ [444]\n",
"✗ [445]\n",
"✗ [446]\n",
"✗ [447]\n",
"✗ [448]\n",
"✗ [449]\n",
"✗ [450]\n",
"✗ [451]\n",
"✗ [452]\n",
"✗ [453]\n",
"✗ [454]\n",
"✗ [455]\n",
"✗ [456]\n",
"✗ [457]\n",
"✗ [458]\n",
"✗ [459]\n",
"✗ [460]\n",
"✗ [461]\n",
"✗ [462]\n",
"✗ [463]\n",
"✗ [464]\n",
"✗ [465]\n",
"✗ [466]\n",
"✗ [467]\n",
"✗ [468]\n",
"✗ [469]\n",
"✗ [470]\n",
"✗ [471]\n",
"✗ [472]\n",
"✗ [473]\n",
"✗ [474]\n",
"✗ [475]\n",
"✗ [476]\n",
"✗ [477]\n",
"✗ [478]\n",
"✗ [479]\n",
"✗ [480]\n",
"✗ [481]\n",
"✗ [482]\n",
"✗ [483]\n",
"✗ [484]\n",
"✗ [485]\n",
"✗ [486]\n",
"✗ [487]\n",
"✗ [488]\n",
"✗ [489]\n",
"✗ [490]\n",
"✗ [491]\n",
"✗ [492]\n",
"✗ [493]\n",
"✗ [494]\n",
"✗ [495]\n",
"✗ [496]\n",
"✗ [497]\n",
"✗ [498]\n",
"✗ [499]\n",
"\n",
"1 shrinks with 1003 function calls\n"
]
}
],
"source": [
"show_trace([1000], lambda x: sum(x) >= 500,\n",
" partial(greedy_shrink, shrink=shrink2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because we're trying every intermediate value, what we have amounts to a linear probe up to the smallest value that will work. If that smallest value is large, this will take a long time. Our shrinking is still O(n), but n is now the size of the smallest value that will work rather than the starting value. This is still pretty suboptimal.\n",
"\n",
"What we want to do is try to replace our linear probe with a binary search. What we'll get isn't exactly a binary search, but it's close enough."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink_integer(n):\n",
" \"\"\"\n",
" Shrinker for individual integers.\n",
" \n",
" What happens is that we start from the left, first probing upwards in powers of two.\n",
" \n",
" When this would take us past our target value we then binary chop towards it.\n",
" \"\"\"\n",
" if not n:\n",
" return\n",
" for k in range(64):\n",
" probe = 2 ** k\n",
" if probe >= n:\n",
" break\n",
" yield probe - 1\n",
" probe //= 2\n",
" while True:\n",
" probe = (probe + n) // 2\n",
" yield probe\n",
" if probe == n - 1:\n",
" break\n",
"\n",
"\n",
"def shrink3(ls):\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" del s[i]\n",
" yield list(s)\n",
" for x in shrink_integer(ls[i]):\n",
" s = list(ls)\n",
" s[i] = x\n",
" yield s"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[0, 1, 3, 7, 15, 31, 63, 127, 255, 378, 439, 469, 484, 492, 496, 498, 499]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list(shrink_integer(500))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This gives us a reasonable distribution of O(log(n)) values in the middle while still making sure we start with 0 and finish with n - 1.\n",
"\n",
"In Hypothesis's actual implementation we also try random values in the probe region in case there's something special about things near powers of two, but we won't worry about that here."
]
},
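{
"cell_type": "markdown",
"metadata": {},
"source": [
"That extra randomness is easy to sketch. The following is purely an illustration and not Hypothesis's actual code (the name `with_random_probes` is made up here): it wraps any integer shrinker so that each deterministic candidate is followed by a random probe from the not-yet-tried region above it.\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def with_random_probes(shrinks, n, rnd=None):\n",
"    # Wrap an iterator of shrink candidates for n (e.g. shrink_integer(n)),\n",
"    # following each candidate with a random value strictly between it and n.\n",
"    rnd = rnd or random.Random(0)\n",
"    for candidate in shrinks:\n",
"        yield candidate\n",
"        if candidate + 1 < n:\n",
"            yield rnd.randrange(candidate + 1, n)\n",
"\n",
"# e.g. list(with_random_probes(shrink_integer(500), 500))\n",
"```"
]
},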
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [1000]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✓ [511]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [383]\n",
"✗ [447]\n",
"✗ [479]\n",
"✗ [495]\n",
"✓ [503]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [379]\n",
"✗ [441]\n",
"✗ [472]\n",
"✗ [487]\n",
"✗ [495]\n",
"✗ [499]\n",
"✓ [501]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [378]\n",
"✗ [439]\n",
"✗ [470]\n",
"✗ [485]\n",
"✗ [493]\n",
"✗ [497]\n",
"✗ [499]\n",
"✓ [500]\n",
"✗ []\n",
"✗ [0]\n",
"✗ [1]\n",
"✗ [3]\n",
"✗ [7]\n",
"✗ [15]\n",
"✗ [31]\n",
"✗ [63]\n",
"✗ [127]\n",
"✗ [255]\n",
"✗ [378]\n",
"✗ [439]\n",
"✗ [469]\n",
"✗ [484]\n",
"✗ [492]\n",
"✗ [496]\n",
"✗ [498]\n",
"✗ [499]\n",
"\n",
"4 shrinks with 79 function calls\n"
]
}
],
"source": [
"show_trace([1000], lambda x: sum(x) >= 500, partial(\n",
" greedy_shrink, shrink=shrink3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This now runs in a much more reasonable number of function calls.\n",
"\n",
"Now we want to look at how to reduce the number of elements in the list more efficiently. We're currently making the same mistake we did with n umbers. Only reducing one at a time."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2, 2]\n",
"✓ [2, 2, 2, 2]\n",
"✓ [2, 2, 2]\n",
"✓ [2, 2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✓ [1, 2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✗ [1]\n",
"✗ [1, 0]\n",
"✗ [1, 1]\n",
"\n",
"19 shrinks with 26 function calls\n"
]
}
],
"source": [
"show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n",
" greedy_shrink, shrink=shrink3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We won't try too hard here, because typically our lists are not *that* long. We will just attempt to start by finding a shortish initial prefix that demonstrates the behaviour:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink_to_prefix(ls):\n",
" i = 1\n",
" while i < len(ls):\n",
" yield ls[:i]\n",
" i *= 2\n",
"\n",
"\n",
"def delete_individual_elements(ls):\n",
" for i in range(len(ls)):\n",
" s = list(ls)\n",
" del s[i]\n",
" yield list(s)\n",
"\n",
"\n",
"def shrink_individual_elements(ls):\n",
" for i in range(len(ls)):\n",
" for x in shrink_integer(ls[i]):\n",
" s = list(ls)\n",
" s[i] = x\n",
" yield s\n",
" \n",
"def shrink4(ls):\n",
" yield from shrink_to_prefix(ls)\n",
" yield from delete_individual_elements(ls)\n",
" yield from shrink_individual_elements(ls) "
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [2]\n",
"✓ [2, 2]\n",
"✗ [2]\n",
"✗ [2]\n",
"✗ [2]\n",
"✗ [0, 2]\n",
"✓ [1, 2]\n",
"✗ [1]\n",
"✗ [2]\n",
"✗ [1]\n",
"✗ [0, 2]\n",
"✗ [1, 0]\n",
"✗ [1, 1]\n",
"\n",
"2 shrinks with 13 function calls\n"
]
}
],
"source": [
"show_trace([2] * 20, lambda x: sum(x) >= 3, partial(\n",
" greedy_shrink, shrink=shrink4))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The problem we now want to address is the fact that when we're shrinking elements we're only shrinking them one at a time. This means that even though we're only O(log(k)) in each element, we're O(log(k)^n) in the whole list where n is the length of the list. For even very modest k this is bad.\n",
"\n",
"In general we may not be able to fix this, but in practice for a lot of common structures we can exploit similarity to try to do simultaneous shrinking.\n",
"\n",
"Here is our starting example: We start and finish with all identical values. We would like to be able to shortcut through a lot of the uninteresting intermediate examples somehow."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✓ [7, 20, 20, 20, 20]\n",
"✗ [7]\n",
"✗ [7, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✓ [5, 20, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [0, 20, 20, 20, 20]\n",
"✗ [1, 20, 20, 20, 20]\n",
"✗ [3, 20, 20, 20, 20]\n",
"✗ [4, 20, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✓ [5, 7, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 7]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [7, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [0, 7, 20, 20, 20]\n",
"✗ [1, 7, 20, 20, 20]\n",
"✗ [3, 7, 20, 20, 20]\n",
"✗ [4, 7, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✓ [5, 5, 20, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 20, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [0, 5, 20, 20, 20]\n",
"✗ [1, 5, 20, 20, 20]\n",
"✗ [3, 5, 20, 20, 20]\n",
"✗ [4, 5, 20, 20, 20]\n",
"✗ [5, 0, 20, 20, 20]\n",
"✗ [5, 1, 20, 20, 20]\n",
"✗ [5, 3, 20, 20, 20]\n",
"✗ [5, 4, 20, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✓ [5, 5, 7, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 7, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [0, 5, 7, 20, 20]\n",
"✗ [1, 5, 7, 20, 20]\n",
"✗ [3, 5, 7, 20, 20]\n",
"✗ [4, 5, 7, 20, 20]\n",
"✗ [5, 0, 7, 20, 20]\n",
"✗ [5, 1, 7, 20, 20]\n",
"✗ [5, 3, 7, 20, 20]\n",
"✗ [5, 4, 7, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✓ [5, 5, 5, 20, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 20, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [0, 5, 5, 20, 20]\n",
"✗ [1, 5, 5, 20, 20]\n",
"✗ [3, 5, 5, 20, 20]\n",
"✗ [4, 5, 5, 20, 20]\n",
"✗ [5, 0, 5, 20, 20]\n",
"✗ [5, 1, 5, 20, 20]\n",
"✗ [5, 3, 5, 20, 20]\n",
"✗ [5, 4, 5, 20, 20]\n",
"✗ [5, 5, 0, 20, 20]\n",
"✗ [5, 5, 1, 20, 20]\n",
"✗ [5, 5, 3, 20, 20]\n",
"✗ [5, 5, 4, 20, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✓ [5, 5, 5, 7, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 7, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [0, 5, 5, 7, 20]\n",
"✗ [1, 5, 5, 7, 20]\n",
"✗ [3, 5, 5, 7, 20]\n",
"✗ [4, 5, 5, 7, 20]\n",
"✗ [5, 0, 5, 7, 20]\n",
"✗ [5, 1, 5, 7, 20]\n",
"✗ [5, 3, 5, 7, 20]\n",
"✗ [5, 4, 5, 7, 20]\n",
"✗ [5, 5, 0, 7, 20]\n",
"✗ [5, 5, 1, 7, 20]\n",
"✗ [5, 5, 3, 7, 20]\n",
"✗ [5, 5, 4, 7, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✓ [5, 5, 5, 5, 20]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 20]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 20]\n",
"✗ [1, 5, 5, 5, 20]\n",
"✗ [3, 5, 5, 5, 20]\n",
"✗ [4, 5, 5, 5, 20]\n",
"✗ [5, 0, 5, 5, 20]\n",
"✗ [5, 1, 5, 5, 20]\n",
"✗ [5, 3, 5, 5, 20]\n",
"✗ [5, 4, 5, 5, 20]\n",
"✗ [5, 5, 0, 5, 20]\n",
"✗ [5, 5, 1, 5, 20]\n",
"✗ [5, 5, 3, 5, 20]\n",
"✗ [5, 5, 4, 5, 20]\n",
"✗ [5, 5, 5, 0, 20]\n",
"✗ [5, 5, 5, 1, 20]\n",
"✗ [5, 5, 5, 3, 20]\n",
"✗ [5, 5, 5, 4, 20]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 7]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 7]\n",
"✗ [1, 5, 5, 5, 7]\n",
"✗ [3, 5, 5, 5, 7]\n",
"✗ [4, 5, 5, 5, 7]\n",
"✗ [5, 0, 5, 5, 7]\n",
"✗ [5, 1, 5, 5, 7]\n",
"✗ [5, 3, 5, 5, 7]\n",
"✗ [5, 4, 5, 5, 7]\n",
"✗ [5, 5, 0, 5, 7]\n",
"✗ [5, 5, 1, 5, 7]\n",
"✗ [5, 5, 3, 5, 7]\n",
"✗ [5, 5, 4, 5, 7]\n",
"✗ [5, 5, 5, 0, 7]\n",
"✗ [5, 5, 5, 1, 7]\n",
"✗ [5, 5, 5, 3, 7]\n",
"✗ [5, 5, 5, 4, 7]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"12 shrinks with 236 function calls\n"
]
}
],
"source": [
"show_trace([20] * 7,\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink4))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shrink_shared(ls):\n",
" \"\"\"\n",
" Look for all sets of shared indices and try to perform a simultaneous shrink on\n",
" their value, replacing all of them at once.\n",
" \n",
" In actual Hypothesis we also try replacing only subsets of the values when there\n",
" are more than two shared values, but we won't worry about that here.\n",
" \"\"\"\n",
" shared_indices = {}\n",
" for i in range(len(ls)):\n",
" shared_indices.setdefault(ls[i], []).append(i)\n",
" for sharing in shared_indices.values():\n",
" if len(sharing) > 1:\n",
" for v in shrink_integer(ls[sharing[0]]):\n",
" s = list(ls)\n",
" for i in sharing:\n",
" s[i] = v\n",
" yield s\n",
"\n",
"\n",
"def shrink5(ls):\n",
" yield from shrink_to_prefix(ls)\n",
" yield from delete_individual_elements(ls)\n",
" yield from shrink_shared(ls)\n",
" yield from shrink_individual_elements(ls)"
]
},
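{
"cell_type": "markdown",
"metadata": {},
"source": [
"The docstring above notes that Hypothesis also tries replacing only *subsets* of a group of shared values. Purely as an illustrative sketch (the name `shrink_shared_subsets` and the enumeration order are invented here, not taken from Hypothesis), that extension might look something like this, reusing `shrink_integer` from earlier:\n",
"\n",
"```python\n",
"from itertools import combinations\n",
"\n",
"def shrink_shared_subsets(ls):\n",
"    # For each group of indices holding the same value, try shrinking every\n",
"    # subset of at least two of them simultaneously, largest subsets first.\n",
"    shared_indices = {}\n",
"    for i in range(len(ls)):\n",
"        shared_indices.setdefault(ls[i], []).append(i)\n",
"    for sharing in shared_indices.values():\n",
"        for size in range(len(sharing), 1, -1):\n",
"            for subset in combinations(sharing, size):\n",
"                for v in shrink_integer(ls[subset[0]]):\n",
"                    s = list(ls)\n",
"                    for i in subset:\n",
"                        s[i] = v\n",
"                    yield s\n",
"```"
]
},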
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✓ [20, 20, 20, 20, 20]\n",
"✗ [20]\n",
"✗ [20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [20, 20, 20, 20]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [7, 7, 7, 7, 7]\n",
"✗ [7]\n",
"✗ [7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"4 shrinks with 64 function calls\n"
]
}
],
"source": [
"show_trace([20] * 7,\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This achieves the desired result. We rapidly progress through all of the intermediate stages. We do still have to perform individual shrinks at the end unfortunately (this is unavoidable), but the size of the elements is much smaller now so it takes less time.\n",
"\n",
"Unfortunately while this solves the problem in this case it's almost useless, because unless you find yourself in the exact right starting position it never does anything."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 21, 22, 23, 24, 25, 26]\n",
"✗ [20]\n",
"✗ [20, 21]\n",
"✗ [20, 21, 22, 23]\n",
"✓ [21, 22, 23, 24, 25, 26]\n",
"✗ [21]\n",
"✗ [21, 22]\n",
"✗ [21, 22, 23, 24]\n",
"✓ [22, 23, 24, 25, 26]\n",
"✗ [22]\n",
"✗ [22, 23]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [22, 24, 25, 26]\n",
"✗ [22, 23, 25, 26]\n",
"✗ [22, 23, 24, 26]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✓ [7, 23, 24, 25, 26]\n",
"✗ [7]\n",
"✗ [7, 23]\n",
"✗ [7, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [7, 24, 25, 26]\n",
"✗ [7, 23, 25, 26]\n",
"✗ [7, 23, 24, 26]\n",
"✗ [7, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✓ [5, 23, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 23]\n",
"✗ [5, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 23, 25, 26]\n",
"✗ [5, 23, 24, 26]\n",
"✗ [5, 23, 24, 25]\n",
"✗ [0, 23, 24, 25, 26]\n",
"✗ [1, 23, 24, 25, 26]\n",
"✗ [3, 23, 24, 25, 26]\n",
"✗ [4, 23, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✓ [5, 7, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 7]\n",
"✗ [5, 7, 24, 25]\n",
"✗ [7, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 7, 24, 26]\n",
"✗ [5, 7, 24, 25]\n",
"✗ [0, 7, 24, 25, 26]\n",
"✗ [1, 7, 24, 25, 26]\n",
"✗ [3, 7, 24, 25, 26]\n",
"✗ [4, 7, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✓ [5, 5, 24, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 24, 25]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 24, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 24, 26]\n",
"✗ [5, 5, 24, 25]\n",
"✗ [0, 0, 24, 25, 26]\n",
"✗ [1, 1, 24, 25, 26]\n",
"✗ [3, 3, 24, 25, 26]\n",
"✗ [4, 4, 24, 25, 26]\n",
"✗ [0, 5, 24, 25, 26]\n",
"✗ [1, 5, 24, 25, 26]\n",
"✗ [3, 5, 24, 25, 26]\n",
"✗ [4, 5, 24, 25, 26]\n",
"✗ [5, 0, 24, 25, 26]\n",
"✗ [5, 1, 24, 25, 26]\n",
"✗ [5, 3, 24, 25, 26]\n",
"✗ [5, 4, 24, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✓ [5, 5, 7, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 7, 25]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 7, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 25]\n",
"✗ [0, 0, 7, 25, 26]\n",
"✗ [1, 1, 7, 25, 26]\n",
"✗ [3, 3, 7, 25, 26]\n",
"✗ [4, 4, 7, 25, 26]\n",
"✗ [0, 5, 7, 25, 26]\n",
"✗ [1, 5, 7, 25, 26]\n",
"✗ [3, 5, 7, 25, 26]\n",
"✗ [4, 5, 7, 25, 26]\n",
"✗ [5, 0, 7, 25, 26]\n",
"✗ [5, 1, 7, 25, 26]\n",
"✗ [5, 3, 7, 25, 26]\n",
"✗ [5, 4, 7, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✓ [5, 5, 5, 25, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 25]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 25, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 25]\n",
"✗ [0, 0, 0, 25, 26]\n",
"✗ [1, 1, 1, 25, 26]\n",
"✗ [3, 3, 3, 25, 26]\n",
"✗ [4, 4, 4, 25, 26]\n",
"✗ [0, 5, 5, 25, 26]\n",
"✗ [1, 5, 5, 25, 26]\n",
"✗ [3, 5, 5, 25, 26]\n",
"✗ [4, 5, 5, 25, 26]\n",
"✗ [5, 0, 5, 25, 26]\n",
"✗ [5, 1, 5, 25, 26]\n",
"✗ [5, 3, 5, 25, 26]\n",
"✗ [5, 4, 5, 25, 26]\n",
"✗ [5, 5, 0, 25, 26]\n",
"✗ [5, 5, 1, 25, 26]\n",
"✗ [5, 5, 3, 25, 26]\n",
"✗ [5, 5, 4, 25, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✓ [5, 5, 5, 7, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 7, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [0, 0, 0, 7, 26]\n",
"✗ [1, 1, 1, 7, 26]\n",
"✗ [3, 3, 3, 7, 26]\n",
"✗ [4, 4, 4, 7, 26]\n",
"✗ [0, 5, 5, 7, 26]\n",
"✗ [1, 5, 5, 7, 26]\n",
"✗ [3, 5, 5, 7, 26]\n",
"✗ [4, 5, 5, 7, 26]\n",
"✗ [5, 0, 5, 7, 26]\n",
"✗ [5, 1, 5, 7, 26]\n",
"✗ [5, 3, 5, 7, 26]\n",
"✗ [5, 4, 5, 7, 26]\n",
"✗ [5, 5, 0, 7, 26]\n",
"✗ [5, 5, 1, 7, 26]\n",
"✗ [5, 5, 3, 7, 26]\n",
"✗ [5, 5, 4, 7, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✓ [5, 5, 5, 5, 26]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 26]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 26]\n",
"✗ [1, 1, 1, 1, 26]\n",
"✗ [3, 3, 3, 3, 26]\n",
"✗ [4, 4, 4, 4, 26]\n",
"✗ [0, 5, 5, 5, 26]\n",
"✗ [1, 5, 5, 5, 26]\n",
"✗ [3, 5, 5, 5, 26]\n",
"✗ [4, 5, 5, 5, 26]\n",
"✗ [5, 0, 5, 5, 26]\n",
"✗ [5, 1, 5, 5, 26]\n",
"✗ [5, 3, 5, 5, 26]\n",
"✗ [5, 4, 5, 5, 26]\n",
"✗ [5, 5, 0, 5, 26]\n",
"✗ [5, 5, 1, 5, 26]\n",
"✗ [5, 5, 3, 5, 26]\n",
"✗ [5, 5, 4, 5, 26]\n",
"✗ [5, 5, 5, 0, 26]\n",
"✗ [5, 5, 5, 1, 26]\n",
"✗ [5, 5, 5, 3, 26]\n",
"✗ [5, 5, 5, 4, 26]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 7]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 7]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 7]\n",
"✗ [1, 1, 1, 1, 7]\n",
"✗ [3, 3, 3, 3, 7]\n",
"✗ [4, 4, 4, 4, 7]\n",
"✗ [0, 5, 5, 5, 7]\n",
"✗ [1, 5, 5, 5, 7]\n",
"✗ [3, 5, 5, 5, 7]\n",
"✗ [4, 5, 5, 5, 7]\n",
"✗ [5, 0, 5, 5, 7]\n",
"✗ [5, 1, 5, 5, 7]\n",
"✗ [5, 3, 5, 5, 7]\n",
"✗ [5, 4, 5, 5, 7]\n",
"✗ [5, 5, 0, 5, 7]\n",
"✗ [5, 5, 1, 5, 7]\n",
"✗ [5, 5, 3, 5, 7]\n",
"✗ [5, 5, 4, 5, 7]\n",
"✗ [5, 5, 5, 0, 7]\n",
"✗ [5, 5, 5, 1, 7]\n",
"✗ [5, 5, 5, 3, 7]\n",
"✗ [5, 5, 5, 4, 7]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"12 shrinks with 264 function calls\n"
]
}
],
"source": [
"show_trace([20 + i for i in range(7)],\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what we're going to try to do is to try a simplification first which *creates* that exact right starting condition. Further it's one that will be potentially very useful even if we don't actually have the situation where we have shared shrinks.\n",
"\n",
"What we're going to do is we're going to use values from the list to act as evidence for how complex things need to be. Starting from the smallest, we'll try capping the array at each individual value and see what happens.\n",
"\n",
"As well as being potentially a very rapid shrink, this creates lists with lots of duplicates, which enables the simultaneous shrinking to shine."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def replace_with_simpler(ls):\n",
" if not ls:\n",
" return\n",
" values = set(ls)\n",
" values.remove(max(ls))\n",
" values = sorted(values)\n",
" for v in values:\n",
" yield [min(v, l) for l in ls]\n",
"\n",
"\n",
"def shrink6(ls):\n",
" yield from shrink_to_prefix(ls)\n",
" yield from delete_individual_elements(ls)\n",
" yield from replace_with_simpler(ls)\n",
" yield from shrink_shared(ls)\n",
" yield from shrink_individual_elements(ls)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [20, 21, 22, 23, 24, 25, 26]\n",
"✗ [20]\n",
"✗ [20, 21]\n",
"✗ [20, 21, 22, 23]\n",
"✓ [21, 22, 23, 24, 25, 26]\n",
"✗ [21]\n",
"✗ [21, 22]\n",
"✗ [21, 22, 23, 24]\n",
"✓ [22, 23, 24, 25, 26]\n",
"✗ [22]\n",
"✗ [22, 23]\n",
"✗ [22, 23, 24, 25]\n",
"✗ [23, 24, 25, 26]\n",
"✗ [22, 24, 25, 26]\n",
"✗ [22, 23, 25, 26]\n",
"✗ [22, 23, 24, 26]\n",
"✗ [22, 23, 24, 25]\n",
"✓ [22, 22, 22, 22, 22]\n",
"✗ [22]\n",
"✗ [22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [22, 22, 22, 22]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [7, 7, 7, 7, 7]\n",
"✗ [7]\n",
"✗ [7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [7, 7, 7, 7]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✓ [5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✗ [0, 0, 0, 0, 0]\n",
"✗ [1, 1, 1, 1, 1]\n",
"✗ [3, 3, 3, 3, 3]\n",
"✗ [4, 4, 4, 4, 4]\n",
"✗ [0, 5, 5, 5, 5]\n",
"✗ [1, 5, 5, 5, 5]\n",
"✗ [3, 5, 5, 5, 5]\n",
"✗ [4, 5, 5, 5, 5]\n",
"✗ [5, 0, 5, 5, 5]\n",
"✗ [5, 1, 5, 5, 5]\n",
"✗ [5, 3, 5, 5, 5]\n",
"✗ [5, 4, 5, 5, 5]\n",
"✗ [5, 5, 0, 5, 5]\n",
"✗ [5, 5, 1, 5, 5]\n",
"✗ [5, 5, 3, 5, 5]\n",
"✗ [5, 5, 4, 5, 5]\n",
"✗ [5, 5, 5, 0, 5]\n",
"✗ [5, 5, 5, 1, 5]\n",
"✗ [5, 5, 5, 3, 5]\n",
"✗ [5, 5, 5, 4, 5]\n",
"✗ [5, 5, 5, 5, 0]\n",
"✗ [5, 5, 5, 5, 1]\n",
"✗ [5, 5, 5, 5, 3]\n",
"✗ [5, 5, 5, 5, 4]\n",
"\n",
"5 shrinks with 73 function calls\n"
]
}
],
"source": [
"show_trace([20 + i for i in range(7)],\n",
" lambda x: len([t for t in x if t >= 5]) >= 5,\n",
" partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we're going to start looking at some numbers.\n",
"\n",
"What we'll do is we'll generate 1000 random lists satisfying some predicate, and then simplify them down to the smallest possible examples satisfying those predicates. This lets us verify that these aren't just cherry-picked examples and our methods help in the general case. We fix the set of examples per predicate so that we're comparing like for like.\n",
"\n",
"A more proper statistical treatment would probably be a good idea."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from collections import OrderedDict\n",
"\n",
"conditions = OrderedDict([\n",
" (\"length >= 2\", lambda xs: len(xs) >= 2),\n",
" (\"sum >= 500\", lambda xs: sum(xs) >= 500),\n",
" (\"sum >= 3\", lambda xs: sum(xs) >= 3),\n",
" (\"At least 10 by 5\", lambda xs: len(\n",
" [t for t in xs if t >= 5]) >= 10),\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[17861213645196285187,\n",
" 15609796832515195084,\n",
" 8808697621832673046,\n",
" 1013319847337885109,\n",
" 1252281976438780211,\n",
" 15526909770962854196,\n",
" 2065337703776048239,\n",
" 11654092230944134701,\n",
" 5554896851708700201,\n",
" 17485190250805381572,\n",
" 7700396730246958474,\n",
" 402840882133605445,\n",
" 5303116940477413125,\n",
" 7459257850255946545,\n",
" 10349184495871650178,\n",
" 4361155591615075311,\n",
" 15194020468024244632,\n",
" 14428821588688846242,\n",
" 5754975712549869618,\n",
" 13740966788951413307,\n",
" 15209704957418077856,\n",
" 12562588328524673262,\n",
" 8415556016795311987,\n",
" 3993098291779210741,\n",
" 16874756914619597640,\n",
" 7932421182532982309,\n",
" 1080869529149674704,\n",
" 13878842261614060122,\n",
" 229976195287031921,\n",
" 8378461140013520338,\n",
" 6189522326946191255,\n",
" 16684625600934047114,\n",
" 12533448641134015292,\n",
" 10459192142175991903,\n",
" 15688511015570391481,\n",
" 3091340728247101611,\n",
" 4034760776171697910,\n",
" 6258572097778886531,\n",
" 13555449085571665140,\n",
" 6727488149749641424,\n",
" 7125107819562430884,\n",
" 1557872425804423698,\n",
" 4810250441100696888,\n",
" 10500486959813930693,\n",
" 841300069403644975,\n",
" 9278626999406014662,\n",
" 17219731431761688449,\n",
" 15650446646901259126,\n",
" 8683172055034528265,\n",
" 5138373693056086816,\n",
" 4055877702343936882,\n",
" 5696765901584750542,\n",
" 7133363948804979946,\n",
" 988518370429658551,\n",
" 16302597472193523184,\n",
" 579078764159525857,\n",
" 10678347012503400890,\n",
" 8433836779160269996,\n",
" 13884258181758870664,\n",
" 13594877609651310055]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import random\n",
"\n",
"N_EXAMPLES = 1000\n",
"\n",
"datasets = {}\n",
"\n",
"def gen_list(rnd):\n",
" return [\n",
" random.getrandbits(64)\n",
" for _ in range(random.randint(0, 100))\n",
" ]\n",
"\n",
"def dataset_for(condition):\n",
" if condition in datasets:\n",
" return datasets[condition]\n",
" constraint = conditions[condition]\n",
" dataset = []\n",
" rnd = random.Random(condition)\n",
" while len(dataset) < N_EXAMPLES:\n",
" ls = gen_list(rnd)\n",
" if constraint(ls):\n",
" dataset.append(ls)\n",
" datasets[condition] = dataset\n",
" return dataset\n",
"\n",
"dataset_for(\"sum >= 3\")[1]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"13"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# In order to avoid run-away cases where things will take basically forever\n",
"# we cap at 5000 as \"you've taken too long. Stop it\". Because we're only ever\n",
"# showing the worst case scenario we'll just display this as > 5000 if we ever\n",
"# hit it and it won't distort statistics.\n",
"MAX_COUNT = 5000\n",
"\n",
"class MaximumCountExceeded(Exception):\n",
" pass\n",
"\n",
"def call_counts(condition, simplifier):\n",
" constraint = conditions[condition]\n",
" dataset = dataset_for(condition)\n",
" counts = []\n",
"\n",
" for ex in dataset:\n",
" counter = [0]\n",
" \n",
" def run_and_count(ls):\n",
" counter[0] += 1\n",
" if counter[0] > MAX_COUNT:\n",
" raise MaximumCountExceeded()\n",
" return constraint(ls)\n",
" \n",
" try:\n",
" simplifier(ex, run_and_count)\n",
" counts.extend(counter)\n",
" except MaximumCountExceeded:\n",
" counts.append(MAX_COUNT + 1)\n",
" break\n",
" return counts\n",
" \n",
"def worst_case(condition, simplifier):\n",
" return max(call_counts(condition, simplifier))\n",
"\n",
"worst_case(\n",
" \"length >= 2\",\n",
" partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
"\n",
"def compare_simplifiers(named_simplifiers):\n",
" \"\"\"\n",
" Given a list of (name, simplifier) pairs, output a table comparing\n",
" the worst case performance of each on our current set of examples.\n",
" \"\"\"\n",
" html_fragments = []\n",
" html_fragments.append(\"\\n\\n\")\n",
" header = [\"Condition\"]\n",
" header.extend(name for name, _ in named_simplifiers)\n",
" for h in header:\n",
" html_fragments.append(\"%s | \" % (h,))\n",
" html_fragments.append(\"
\\n\\n\")\n",
" \n",
" for name in conditions:\n",
" bits = [name.replace(\">\", \">\")] \n",
" for _, simplifier in named_simplifiers:\n",
" value = worst_case(name, simplifier)\n",
" if value <= MAX_COUNT:\n",
" bits.append(str(value))\n",
" else:\n",
" bits.append(\" > %d\" % (MAX_COUNT,))\n",
" html_fragments.append(\" \")\n",
" html_fragments.append(' '.join(\n",
" \"%s | \" % (b,) for b in bits))\n",
" html_fragments.append(\"
\")\n",
" html_fragments.append(\"\\n
\")\n",
" return HTML('\\n'.join(html_fragments))"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"2 | \n",
"3 | \n",
"4 | \n",
"5 | \n",
"6 | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 106 | 105 | 13 | 13 | 13 | \n",
"
\n",
" \n",
"sum >= 500 | 1102 | 178 | 80 | 80 | 80 | \n",
"
\n",
" \n",
"sum >= 3 | 108 | 107 | 9 | 9 | 9 | \n",
"
\n",
" \n",
"At least 10 by 5 | 535 | 690 | 809 | 877 | 144 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (f.__name__[-1], partial(greedy_shrink, shrink=f))\n",
" for f in [shrink2, shrink3, shrink4, shrink5, shrink6]\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So you can see from the above table, the iterations 2 through 5 were a little ambiguous ion that they helped a lot in the cases they were designed to help with but hurt in other cases. 6 however is clearly the best of the lot, being no worse than any of the others on any of the cases and often significantly better.\n",
"\n",
"Rather than continuing to refine our shrink further, we instead look to improvements to how we use shrinking. We'll start by noting a simple optimization: If you look at our traces above, we often checked the same example twice. We're only interested in deterministic conditions, so this isn't useful to do. So we'll start by simply pruning out all duplicates. This should have exactly the same set and order of successful shrinks but will avoid a bunch of redundant work."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def greedy_shrink_with_dedupe(ls, constraint, shrink):\n",
" seen = set()\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" return ls"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Normal | \n",
"Deduped | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 13 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 80 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 9 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 144 | 107 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Normal\", partial(greedy_shrink, shrink=shrink6)),\n",
" (\"Deduped\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
"\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As expected, this is a significant improvement in some cases. It is logically impossible that it could ever make things worse, but it's nice that it makes it better.\n",
"\n",
"So far we've only looked at things where the interaction between elements was fairly light - the sum cases the values of other elements mattered a bit, but shrinking an integer could never enable other shrinks. Lets look at one where this is not the case: Where our condition is that we have at least 10 distinct elements."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 101]\n",
"✗ [0, 101, 102, 103]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [0, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [0, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 102, 103]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107]\n",
"✗ [1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [0, 1, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 3, 103]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107]\n",
"✗ [1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 3, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 3, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 3, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 3, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 3, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 103]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107]\n",
"✗ [1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [0, 1, 2, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 2, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 103, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107]\n",
"✗ [1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 104, 104, 104, 104, 104, 104]\n",
"✗ [0, 1, 2, 3, 104, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 104, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107]\n",
"✗ [1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 7, 7, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 7, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 7, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107]\n",
"✗ [1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 5, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 5, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 5, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107]\n",
"✗ [1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 105, 105, 105, 105, 105]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 105, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107]\n",
"✗ [1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 7, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 7, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107]\n",
"✗ [1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 106, 106, 106]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 106, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107]\n",
"✗ [1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107]\n",
"✗ [1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 107, 107]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 107, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 108]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 108, 108]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15, 15]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11, 11]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9, 9]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0]\n",
"✗ [0, 1]\n",
"✗ [0, 1, 2, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7]\n",
"✗ [1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8]\n",
"✗ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n",
"✗ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n",
"✗ [0, 1, 2, 2, 2, 2, 2, 2, 2, 2]\n",
"✗ [0, 1, 2, 3, 3, 3, 3, 3, 3, 3]\n",
"✗ [0, 1, 2, 3, 4, 4, 4, 4, 4, 4]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 6, 6]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 848 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" partial(greedy_shrink, shrink=shrink6))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This does not do very well at all.\n",
"\n",
"The reason it doesn't is that we keep trying useless shrinks. e.g. none of the shrinks done by shrink\\_to\\_prefix, replace\\_with\\_simpler or shrink\\_shared will ever do anything useful here.\n",
"\n",
"So lets switch to an approach where we try shrink types until they stop working and then we move on to the next type:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink1(ls, constraint):\n",
" seen = set()\n",
" for shrink in [\n",
" shrink_to_prefix,\n",
" replace_with_simpler,\n",
" shrink_shared,\n",
" shrink_individual_elements,\n",
" ]:\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" return ls"
]
},
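{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick aside on the control flow used above (and in every variant that follows): the `else` clause attached to the inner `for` loop runs only if the loop finishes without hitting `break`, i.e. only when no candidate shrink was accepted, and that is what tells us to move on to the next type of shrink. A tiny self-contained illustration of the idiom:\n",
"\n",
"```python\n",
"for x in [1, 2, 3]:\n",
"    if x == 2:\n",
"        break\n",
"else:\n",
"    print('not printed: we broke out of the loop')\n",
"\n",
"for x in [1, 2, 3]:\n",
"    pass\n",
"else:\n",
"    print('printed: the loop completed without a break')\n",
"```"
]
},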
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 7, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 7, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 5, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 5, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 7, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 7, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 15, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 15, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 11, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 11, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 9, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 9, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 15]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 15]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 11]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 11]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 0, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 0, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 1, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 0, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 1, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 2, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 0, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 1, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 3, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 0, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 1, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 3, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 4, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 318 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" multicourse_shrink1)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"conditions[\"10 distinct elements\"] = lambda xs: len(set(xs)) >= 10"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 4 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 34 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 5 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 58 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 320 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink1)\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So that helped, but not as much as we'd have liked. It's saved us about half the calls, when really we wanted to save 90% of the calls.\n",
"\n",
"We're on the right track though. The problem is not that our solution isn't good, it's that it didn't go far enough: We're *still* making an awful lot of useless calls. The problem is that each time we shrink the element at index i we try shrinking the elements at indexes 0 through i - 1, and this will never work. So what we want to do is to break shrinking elements into a separate shrinker for each index:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def simplify_index(i):\n",
" def accept(ls):\n",
" if i >= len(ls):\n",
" return\n",
" for v in shrink_integer(ls[i]):\n",
" s = list(ls)\n",
" s[i] = v\n",
" yield s\n",
" return accept\n",
"\n",
"def shrinkers_for(ls):\n",
" yield shrink_to_prefix\n",
" yield delete_individual_elements\n",
" yield replace_with_simpler\n",
" yield shrink_shared\n",
" for i in range(len(ls)):\n",
" yield simplify_index(i)\n",
"\n",
"def multicourse_shrink2(ls, constraint):\n",
" seen = set()\n",
" for shrink in shrinkers_for(ls):\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" return ls"
]
},
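{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the new structure concrete: each `simplify_index(i)` closure only ever proposes candidates that differ from its input at position `i` (the candidate values themselves come from `shrink_integer`, defined earlier in the notebook). A small illustrative check, not part of the benchmarks:\n",
"\n",
"```python\n",
"for candidate in simplify_index(1)([8, 6, 7]):\n",
"    # Only position 1 may change; the other positions are copied verbatim.\n",
"    assert candidate[0] == 8 and candidate[2] == 7\n",
"```"
]
},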
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100]\n",
"✗ [100, 101]\n",
"✗ [100, 101, 102, 103]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107]\n",
"✗ [101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 104, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 105, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 106, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 107, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 108, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 109]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108]\n",
"✗ [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]\n",
"✗ [100, 101, 101, 101, 101, 101, 101, 101, 101, 101]\n",
"✗ [100, 101, 102, 102, 102, 102, 102, 102, 102, 102]\n",
"✗ [100, 101, 102, 103, 103, 103, 103, 103, 103, 103]\n",
"✗ [100, 101, 102, 103, 104, 104, 104, 104, 104, 104]\n",
"✗ [100, 101, 102, 103, 104, 105, 105, 105, 105, 105]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 106, 106, 106]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 107, 107]\n",
"✗ [100, 101, 102, 103, 104, 105, 106, 107, 108, 108]\n",
"✓ [0, 101, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 0, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 102, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 0, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 1, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 3, 103, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 103, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 0, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 1, 104, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 2, 104, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 0, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 1, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 3, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 7, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 5, 105, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 105, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 0, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 1, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 3, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 7, 106, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 4, 106, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 0, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 1, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 3, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 7, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 5, 107, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 107, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 0, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 1, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 3, 108, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 5, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 6, 108, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 0, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 1, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 3, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 7, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 15, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 11, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 9, 109]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 6, 109]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 1]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 3]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 15]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 11]\n",
"✓ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n",
"✗ [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]\n",
"\n",
"20 shrinks with 75 function calls\n"
]
}
],
"source": [
"show_trace([100 + i for i in range(10)],\n",
" lambda x: len(set(x)) >= 10,\n",
" multicourse_shrink2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This worked great! It saved us a huge number of function calls.\n",
"\n",
"Unfortunately it's wrong. Actually the previous one was wrong too, but this one is more obviously wrong. The problem is that shrinking later elements can unlock more shrinks for earlier elements and we'll never be able to benefit from that here:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [101, 100]\n",
"✗ [101]\n",
"✗ [100]\n",
"✗ [100, 100]\n",
"✗ [0, 100]\n",
"✗ [1, 100]\n",
"✗ [3, 100]\n",
"✗ [7, 100]\n",
"✗ [15, 100]\n",
"✗ [31, 100]\n",
"✗ [63, 100]\n",
"✗ [82, 100]\n",
"✗ [91, 100]\n",
"✗ [96, 100]\n",
"✗ [98, 100]\n",
"✗ [99, 100]\n",
"✓ [101, 0]\n",
"\n",
"1 shrinks with 16 function calls\n"
]
}
],
"source": [
"show_trace([101, 100],\n",
" lambda x: len(x) >= 2 and x[0] > x[1],\n",
" multicourse_shrink2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Armed with this example we can also show an example where the previous one is wrong because a later simplification unlocks an earlier one because shrinking values allows us to delete more elements:"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0, 0, 0]\n",
"\n",
"2 shrinks with 5 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink1)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"conditions[\"First > Second\"] = lambda xs: len(xs) >= 2 and xs[0] > xs[1]"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Note: We modify this to mask off the high bits because otherwise the probability of\n",
"# hitting the condition at random is too low.\n",
"conditions[\"Size > max & 63\"] = lambda xs: xs and len(xs) > (max(xs) & 63)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what we'll try doing is iterating this to a fixed point and see what happens:"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink3(ls, constraint):\n",
" seen = set()\n",
" while True:\n",
" old_ls = ls\n",
" for shrink in shrinkers_for(ls):\n",
" while True:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if ls == old_ls:\n",
" return ls"
]
},
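{
"cell_type": "markdown",
"metadata": {},
"source": [
"The outer `while` loop above is just fixed-point iteration: keep re-running all of the shrink passes until a full sweep makes no change at all. Stripped of the shrinking specifics, the pattern looks like this (illustrative only; the function above specialises it rather than calling it):\n",
"\n",
"```python\n",
"def iterate_to_fixed_point(step, value):\n",
"    # Apply step repeatedly until it stops changing the value.\n",
"    while True:\n",
"        new_value = step(value)\n",
"        if new_value == value:\n",
"            return value\n",
"        value = new_value\n",
"```"
]
},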
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [101, 100]\n",
"✗ [101]\n",
"✗ [100]\n",
"✗ [100, 100]\n",
"✗ [0, 100]\n",
"✗ [1, 100]\n",
"✗ [3, 100]\n",
"✗ [7, 100]\n",
"✗ [15, 100]\n",
"✗ [31, 100]\n",
"✗ [63, 100]\n",
"✗ [82, 100]\n",
"✗ [91, 100]\n",
"✗ [96, 100]\n",
"✗ [98, 100]\n",
"✗ [99, 100]\n",
"✓ [101, 0]\n",
"✗ [0]\n",
"✗ [0, 0]\n",
"✓ [1, 0]\n",
"✗ [1]\n",
"\n",
"2 shrinks with 20 function calls\n"
]
}
],
"source": [
"show_trace([101, 100],\n",
" lambda xs: len(xs) >= 2 and xs[0] > xs[1],\n",
" multicourse_shrink3)"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0]\n",
"✓ [0]\n",
"✗ []\n",
"\n",
"5 shrinks with 10 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So that worked. Yay!\n",
"\n",
"Lets compare how this does to our single pass implementation."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3)\n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So the answer is generally favourably but *ouch* that last one.\n",
"\n",
"What's happening there is that because later shrinks are opening up potentially very large improvements accessible to the lower shrinks, the original greedy algorithm can exploit that much better, while the multi pass algorithm spends a lot of time in the later stages with their incremental shrinks.\n",
"\n",
"Lets see another similar example before we try to fix this:"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import hashlib\n",
"\n",
"conditions[\"Messy\"] = lambda xs: hashlib.md5(repr(xs).encode('utf-8')).hexdigest()[0] == '0'"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3)\n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This one is a bit different in that the problem is not that the structure is one we're ill suited to exploiting, it's that there is no structure at all so we have no hope of exploiting it. Literally any change at all will unlock earlier shrinks we could have done.\n",
"\n",
"What we're going to try to do is hybridize the two approaches. If we notice we're performing an awful lot of shrinks we can take that as a hint that we should be trying again from earlier stages.\n",
"\n",
"Here is our first approach. We simply restart the whole process every five shrinks:"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"MAX_SHRINKS_PER_RUN = 2\n",
"\n",
"\n",
"def multicourse_shrink4(ls, constraint):\n",
" seen = set()\n",
" while True:\n",
" old_ls = ls\n",
" shrinks_this_run = 0\n",
" for shrink in shrinkers_for(ls):\n",
" while shrinks_this_run < MAX_SHRINKS_PER_RUN:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" shrinks_this_run += 1\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if ls == old_ls:\n",
" return ls"
]
},
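{
"cell_type": "markdown",
"metadata": {},
"source": [
"The only difference from multicourse\\_shrink3 is the budget: once MAX\\_SHRINKS\\_PER\\_RUN successful shrinks have been accepted, none of the remaining passes can accept anything more, so control falls out to the outer loop and everything restarts from the first pass. Factored out as a standalone helper (purely illustrative, not used elsewhere in this notebook), one budgeted sweep looks like this:\n",
"\n",
"```python\n",
"def budgeted_sweep(ls, constraint, seen, budget):\n",
"    # One pass over all the shrinkers, accepting at most `budget` improvements.\n",
"    accepted = 0\n",
"    for shrink in shrinkers_for(ls):\n",
"        while accepted < budget:\n",
"            for s in shrink(ls):\n",
"                key = tuple(s)\n",
"                if key in seen:\n",
"                    continue\n",
"                seen.add(key)\n",
"                if constraint(s):\n",
"                    ls = s\n",
"                    accepted += 1\n",
"                    break\n",
"            else:\n",
"                break\n",
"    return ls, accepted\n",
"```"
]
},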
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"Multi pass with restart | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | 90 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | 396 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | 1463 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | > 5000 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | 1423 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3),\n",
" (\"Multi pass with restart\", multicourse_shrink4) \n",
" \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That works OK, but it's pretty unsatisfying as it loses us most of the benefits of the multi pass shrinking - we're now at most twice as good as the greedy one.\n",
"\n",
"So what we're going to do is bet on the multi pass working and then gradually degrade to the greedy algorithm as it fails to work."
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def multicourse_shrink5(ls, constraint):\n",
" seen = set()\n",
" max_shrinks_per_run = 10\n",
" while True:\n",
" shrinks_this_run = 0\n",
" for shrink in shrinkers_for(ls):\n",
" while shrinks_this_run < max_shrinks_per_run:\n",
" for s in shrink(ls):\n",
" key = tuple(s)\n",
" if key in seen:\n",
" continue\n",
" seen.add(key)\n",
" if constraint(s):\n",
" shrinks_this_run += 1\n",
" ls = s\n",
" break\n",
" else:\n",
" break\n",
" if max_shrinks_per_run > 1:\n",
" max_shrinks_per_run -= 2\n",
" if not shrinks_this_run:\n",
" return ls"
]
},
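{
"cell_type": "markdown",
"metadata": {},
"source": [
"The per-run budget now starts generous and tightens as the run goes on: it begins at 10 and drops by 2 after every full sweep, so across successive sweeps the budget goes 10, 8, 6, 4, 2, 0 (the run can of course stop earlier, as soon as a sweep accepts nothing). A tiny illustration of that schedule, separate from the shrinker itself:\n",
"\n",
"```python\n",
"budget = 10\n",
"schedule = [budget]\n",
"while budget > 1:\n",
"    budget -= 2\n",
"    schedule.append(budget)\n",
"print(schedule)  # [10, 8, 6, 4, 2, 0]\n",
"```"
]
},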
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"✓ [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]\n",
"✗ [5]\n",
"✗ [5, 5]\n",
"✗ [5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5, 5]\n",
"✓ [5, 5, 5, 5, 5, 5]\n",
"✗ [5, 5, 5, 5, 5]\n",
"✓ [0, 0, 0, 0, 0, 0]\n",
"✓ [0]\n",
"✗ []\n",
"\n",
"5 shrinks with 10 function calls\n"
]
}
],
"source": [
"show_trace([5] * 10,\n",
" lambda x: x and len(x) > max(x),\n",
" multicourse_shrink5)"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"\n",
"Condition | \n",
"Single pass | \n",
"Multi pass | \n",
"Multi pass with restart | \n",
"Multi pass with variable restart | \n",
"
\n",
"\n",
"\n",
" \n",
"length >= 2 | 6 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"sum >= 500 | 35 | 35 | 35 | 35 | \n",
"
\n",
" \n",
"sum >= 3 | 6 | 6 | 6 | 6 | \n",
"
\n",
" \n",
"At least 10 by 5 | 107 | 73 | 90 | 73 | \n",
"
\n",
" \n",
"10 distinct elements | 623 | 131 | 396 | 212 | \n",
"
\n",
" \n",
"First > Second | 1481 | 1445 | 1463 | 1168 | \n",
"
\n",
" \n",
"Size > max & 63 | 600 | > 5000 | > 5000 | 1002 | \n",
"
\n",
" \n",
"Messy | 1032 | > 5000 | 1423 | 824 | \n",
"
\n",
"\n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"compare_simplifiers([\n",
" (\"Single pass\", partial(greedy_shrink_with_dedupe,\n",
" shrink=shrink6)),\n",
" (\"Multi pass\", multicourse_shrink3), \n",
" (\"Multi pass with restart\", multicourse_shrink4),\n",
" (\"Multi pass with variable restart\", multicourse_shrink5) \n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is now more or less the current state of the art (it's actually a bit different from the Hypothesis state of the art at the time of this writing. I'm planning to merge some of the things I figured out in the course of writing this back in). We've got something that is able to adaptively take advantage of structure where it is present, but degrades reasonably gracefully back to the more aggressive version that works better in unstructured examples.\n",
"\n",
"Surprisingly, on some examples it seems to even be best of all of them. I think that's more coincidence than truth though."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
hypothesis-3.0.1/scripts/ 0000775 0000000 0000000 00000000000 12661275660 0015414 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/scripts/basic-test.sh 0000775 0000000 0000000 00000003511 12661275660 0020011 0 ustar 00root root 0000000 0000000 #!/bin/bash
set -e -o xtrace
# We run a reduced set of tests on OSX mostly so the CI runs in vaguely
# reasonable time.
if [[ "$(uname -s)" == 'Darwin' ]]; then
DARWIN=true
else
DARWIN=false
fi
python -c '
import os
for k, v in sorted(dict(os.environ).items()):
print("%s=%s" % (k, v))
'
if [ "$(python -c 'import sys; print(sys.version_info[:2] >= (3, 5))')" = "True" ] ; then
PYTEST="python -m pytest --assert=plain"
else
PYTEST="python -m pytest"
fi
$PYTEST tests/cover
if [ "$(python -c 'import sys; print(sys.version_info[0] == 2)')" = "True" ] ; then
$PYTEST tests/py2
else
$PYTEST tests/py3
fi
$PYTEST --runpytest=subprocess tests/pytest
if [ "$DARWIN" != true ]; then
for f in tests/nocover/test_*.py; do
$PYTEST $f
done
fi
pip install .[datetime]
$PYTEST tests/datetime/
pip uninstall -y pytz
if [ "$DARWIN" = true ]; then
exit 0
fi
# fake-factory doesn't have a correct universal wheel
pip install --no-use-wheel .[fakefactory]
$PYTEST tests/fakefactory/
if [ "$(python -c 'import platform; print(platform.python_implementation())')" != "PyPy" ]; then
if [ "$(python -c 'import sys; print(sys.version_info[:2] <= (2, 6))')" != "True" ] ; then
if [ "$(python -c 'import sys; print(sys.version_info[0] == 2 or sys.version_info[:2] >= (3, 4))')" == "True" ] ; then
pip install .[django]
python -m tests.django.manage test tests.django
pip uninstall -y django fake-factory
fi
fi
if [ "$(python -c 'import sys; print(sys.version_info[:2] < (3, 5))')" = "True" ] ; then
if [ "$(python -c 'import sys; print(sys.version_info[:2] <= (2, 6))')" != "True" ] ; then
pushd $HOME
pip wheel numpy==1.9.2
popd
pip install --find-links=$HOME/wheelhouse numpy==1.9.2
else
pip install numpy==1.9.2
fi
$PYTEST tests/numpy
pip uninstall -y numpy
fi
fi
hypothesis-3.0.1/scripts/check_encoding_header.py 0000664 0000000 0000000 00000002277 12661275660 0022231 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
VALID_STARTS = (
"# coding=utf-8",
"#!/usr/bin/env python",
)
if __name__ == '__main__':
import sys
n = max(map(len, VALID_STARTS))
bad = False
for f in sys.argv[1:]:
with open(f, "r", encoding="utf-8") as i:
start = i.read(n)
if not any(start.startswith(s) for s in VALID_STARTS):
print(
"%s has incorrect start %r" % (f, start), file=sys.stderr)
bad = True
sys.exit(int(bad))
hypothesis-3.0.1/scripts/enforce_header.py 0000664 0000000 0000000 00000003074 12661275660 0020723 0 ustar 00root root 0000000 0000000 import subprocess
import os
import sys
HEADER_FILE = "scripts/header.py"
HEADER_SOURCE = open(HEADER_FILE).read().strip()
def all_python_files():
lines = subprocess.check_output([
"git", "ls-tree", "--full-tree", "-r", "HEAD",
]).decode('utf-8').split("\n")
files = [
l.split()[-1]
for l in lines
if l
]
return [
f for f in files
if f[-3:] == ".py"
]
def main():
rootdir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
print("cd %r" % (rootdir,))
os.chdir(rootdir)
if len(sys.argv) > 1:
files = sys.argv[1:]
else:
files = all_python_files()
try:
files.remove("scripts/enforce_header.py")
except ValueError:
pass
for f in files:
print(f)
lines = []
with open(f, encoding="utf-8") as o:
shebang = None
first = True
for l in o.readlines():
if first:
first = False
if l[:2] == '#!':
shebang = l
continue
if 'END HEADER' in l:
lines = []
else:
lines.append(l)
source = ''.join(lines).strip()
with open(f, "w", encoding="utf-8") as o:
if shebang is not None:
o.write(shebang)
o.write("\n")
o.write(HEADER_SOURCE)
o.write("\n\n")
o.write(source)
o.write("\n")
if __name__ == '__main__':
main()
hypothesis-3.0.1/scripts/header.py 0000664 0000000 0000000 00000001222 12661275660 0017213 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/scripts/install.ps1 0000664 0000000 0000000 00000013543 12661275660 0017515 0 ustar 00root root 0000000 0000000 # Sample script to install Python and pip under Windows
# Authors: Olivier Grisel, Jonathan Helmus and Kyle Kastner
# License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
$MINICONDA_URL = "http://repo.continuum.io/miniconda/"
$BASE_URL = "https://www.python.org/ftp/python/"
$GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
$GET_PIP_PATH = "C:\get-pip.py"
function DownloadPython ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
$filename = "python-" + $python_version + $platform_suffix + ".msi"
$url = $BASE_URL + $python_version + "/" + $filename
$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}
# Download and retry up to 3 times in case of transient network errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}
function InstallPython ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = ""
} else {
$platform_suffix = ".amd64"
}
$msipath = DownloadPython $python_version $platform_suffix
Write-Host "Installing" $msipath "to" $python_home
$install_log = $python_home + ".log"
$install_args = "/qn /log $install_log /i $msipath TARGETDIR=$python_home"
$uninstall_args = "/qn /x $msipath"
RunCommand "msiexec.exe" $install_args
if (-not(Test-Path $python_home)) {
Write-Host "Python seems to be installed else-where, reinstalling."
RunCommand "msiexec.exe" $uninstall_args
RunCommand "msiexec.exe" $install_args
}
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}
function RunCommand ($command, $command_args) {
Write-Host $command $command_args
Start-Process -FilePath $command -ArgumentList $command_args -Wait -Passthru
}
function InstallPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$python_path = $python_home + "\python.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$webclient = New-Object System.Net.WebClient
$webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH)
Write-Host "Executing:" $python_path $GET_PIP_PATH
Start-Process -FilePath "$python_path" -ArgumentList "$GET_PIP_PATH" -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}
function DownloadMiniconda ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
if ($python_version -eq "3.4") {
$filename = "Miniconda3-3.5.5-Windows-" + $platform_suffix + ".exe"
} else {
$filename = "Miniconda-3.5.5-Windows-" + $platform_suffix + ".exe"
}
$url = $MINICONDA_URL + $filename
$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}
# Download and retry up to 3 times in case of transient network errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}
function InstallMiniconda ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = "x86"
} else {
$platform_suffix = "x86_64"
}
$filepath = DownloadMiniconda $python_version $platform_suffix
Write-Host "Installing" $filepath "to" $python_home
$install_log = $python_home + ".log"
$args = "/S /D=$python_home"
Write-Host $filepath $args
Start-Process -FilePath $filepath -ArgumentList $args -Wait -Passthru
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}
function InstallMinicondaPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$conda_path = $python_home + "\Scripts\conda.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$args = "install --yes pip"
Write-Host $conda_path $args
Start-Process -FilePath "$conda_path" -ArgumentList $args -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}
function main () {
InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON
InstallPip $env:PYTHON
}
main
hypothesis-3.0.1/scripts/install.sh 0000775 0000000 0000000 00000005166 12661275660 0017431 0 ustar 00root root 0000000 0000000 #!/usr/bin/env bash
# Special license: Take literally anything you want out of this file. I don't
# care. Consider it WTFPL licensed if you like.
# Basically there's a lot of suffering encoded here that I don't want you to
# have to go through and you should feel free to use this to avoid some of
# that suffering in advance.
set -e
set -x
# This is to guard against multiple builds in parallel. The various installers will tend
# to stomp all over each other if you do this and they haven't previously completed
# successfully. We use a lock file to block progress so only one install runs at a time.
# This script should be pretty fast once files are cached, so the loss of concurrency
# is not a major problem.
# This should be using the lockfile command, but that's not available on the
# containerized travis and we can't install it without sudo.
# It is unclear if this is actually useful. I was seeing behaviour that suggested
# concurrent runs of the installer, but I can't seem to find any evidence of this lock
# ever not being acquired.
BASE=${BUILD_RUNTIMES-$PWD/.runtimes}
echo $BASE
mkdir -p $BASE
LOCKFILE="$BASE/.install-lockfile"
while true; do
if mkdir $LOCKFILE 2>/dev/null; then
echo "Successfully acquired installer."
break
else
echo "Failed to acquire lock. Is another installer running? Waiting a bit."
fi
sleep $[ ( $RANDOM % 10) + 1 ].$[ ( $RANDOM % 100) ]s
if (( $(date '+%s') > 300 + $(stat --format=%X $LOCKFILE) )); then
echo "We've waited long enough"
rm -rf $LOCKFILE
fi
done
trap "rm -rf $LOCKFILE" EXIT
PYENV=$BASE/pyenv
if [ ! -d "$PYENV/.git" ]; then
rm -rf $PYENV
git clone https://github.com/yyuu/pyenv.git $BASE/pyenv
else
back=$PWD
cd $PYENV
git fetch || echo "Update failed to complete. Ignoring"
git reset --hard origin/master
cd $back
fi
SNAKEPIT=$BASE/snakepit
install () {
VERSION="$1"
ALIAS="$2"
mkdir -p $BASE/versions
SOURCE=$BASE/versions/$ALIAS
if [ ! -e "$SOURCE" ]; then
mkdir -p $SNAKEPIT
mkdir -p $BASE/versions
$BASE/pyenv/plugins/python-build/bin/python-build $VERSION $SOURCE
fi
rm -f $SNAKEPIT/$ALIAS
mkdir -p $SNAKEPIT
$SOURCE/bin/python -m pip.__main__ install --upgrade pip wheel virtualenv
ln -s $SOURCE/bin/python $SNAKEPIT/$ALIAS
}
for var in "$@"; do
case "${var}" in
2.6)
install 2.6.9 python2.6
;;
2.7)
install 2.7.11 python2.7
;;
3.2)
install 3.2.6 python3.2
;;
3.3)
install 3.3.6 python3.3
;;
3.4)
install 3.4.3 python3.4
;;
3.5)
install 3.5.1 python3.5
;;
pypy)
install pypy-2.6.1 pypy
;;
esac
done
hypothesis-3.0.1/scripts/pyenv-installer 0000775 0000000 0000000 00000004177 12661275660 0020507 0 ustar 00root root 0000000 0000000 #!/usr/bin/env bash
set -e
[ -n "$PYENV_DEBUG" ] && set -x
if [ -z "$PYENV_ROOT" ]; then
PYENV_ROOT="${HOME}/.pyenv"
fi
shell="$1"
if [ -z "$shell" ]; then
shell="$(ps c -p "$PPID" -o 'ucomm=' 2>/dev/null || true)"
shell="${shell##-}"
shell="${shell%% *}"
shell="$(basename "${shell:-$SHELL}")"
fi
colorize() {
if [ -t 1 ]; then printf "\e[%sm%s\e[m" "$1" "$2"
else echo -n "$2"
fi
}
checkout() {
[ -d "$2" ] || git clone "$1" "$2"
}
if ! command -v git 1>/dev/null 2>&1; then
echo "pyenv: Git is not installed, can't continue." >&2
exit 1
fi
if [ -n "${USE_HTTPS}" ]; then
GITHUB="https://github.com"
else
GITHUB="git://github.com"
fi
checkout "${GITHUB}/yyuu/pyenv.git" "${PYENV_ROOT}"
checkout "${GITHUB}/yyuu/pyenv-doctor.git" "${PYENV_ROOT}/plugins/pyenv-doctor"
checkout "${GITHUB}/yyuu/pyenv-installer.git" "${PYENV_ROOT}/plugins/pyenv-installer"
checkout "${GITHUB}/yyuu/pyenv-pip-rehash.git" "${PYENV_ROOT}/plugins/pyenv-pip-rehash"
checkout "${GITHUB}/yyuu/pyenv-update.git" "${PYENV_ROOT}/plugins/pyenv-update"
checkout "${GITHUB}/yyuu/pyenv-virtualenv.git" "${PYENV_ROOT}/plugins/pyenv-virtualenv"
checkout "${GITHUB}/yyuu/pyenv-which-ext.git" "${PYENV_ROOT}/plugins/pyenv-which-ext"
if ! command -v pyenv 1>/dev/null; then
{ echo
colorize 1 "WARNING"
echo ": seems you still have not added 'pyenv' to the load path."
echo
} >&2
case "$shell" in
bash )
profile="~/.bash_profile"
;;
zsh )
profile="~/.zshrc"
;;
ksh )
profile="~/.profile"
;;
fish )
profile="~/.config/fish/config.fish"
;;
* )
profile="your profile"
;;
esac
{ echo "# Load pyenv automatically by adding"
echo "# the following to ${profile}:"
echo
case "$shell" in
fish )
echo "set -x PATH \"\$HOME/.pyenv/bin\" \$PATH"
echo 'status --is-interactive; and . (pyenv init -|psub)'
echo 'status --is-interactive; and . (pyenv virtualenv-init -|psub)'
;;
* )
echo "export PATH=\"\$HOME/.pyenv/bin:\$PATH\""
echo "eval \"\$(pyenv init -)\""
echo "eval \"\$(pyenv virtualenv-init -)\""
;;
esac
} >&2
fi
hypothesis-3.0.1/scripts/retry.sh 0000775 0000000 0000000 00000000363 12661275660 0017122 0 ustar 00root root 0000000 0000000 #!/usr/bin/env bash
for _ in $(seq 5); do
if $@ ; then
exit 0
fi
echo "Command failed. Retrying..."
sleep $[ ( $RANDOM % 10) + 1 ].$[ ( $RANDOM % 100) ]s
done
echo "Command failed five times. Giving up now"
exit 1
hypothesis-3.0.1/scripts/run_with_env.cmd 0000664 0000000 0000000 00000003462 12661275660 0020615 0 ustar 00root root 0000000 0000000 :: To build extensions for 64 bit Python 3, we need to configure environment
:: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1)
::
:: To build extensions for 64 bit Python 2, we need to configure environment
:: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0)
::
:: 32 bit builds do not require specific environment configurations.
::
:: Note: this script needs to be run with the /E:ON and /V:ON flags for the
:: cmd interpreter, at least for (SDK v7.0)
::
:: More details at:
:: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows
:: http://stackoverflow.com/a/13751649/163740
::
:: Author: Olivier Grisel
:: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
@ECHO OFF
SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows
SET MAJOR_PYTHON_VERSION="%PYTHON_VERSION:~0,1%"
IF %MAJOR_PYTHON_VERSION% == "2" (
SET WINDOWS_SDK_VERSION="v7.0"
) ELSE IF %MAJOR_PYTHON_VERSION% == "3" (
SET WINDOWS_SDK_VERSION="v7.1"
) ELSE (
ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%"
EXIT 1
)
IF "%PYTHON_ARCH%"=="64" (
ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture
SET DISTUTILS_USE_SDK=1
SET MSSdk=1
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
) ELSE (
ECHO Using default MSVC build environment for 32 bit architecture
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
)
hypothesis-3.0.1/scripts/unicodechecker.py 0000664 0000000 0000000 00000003044 12661275660 0020742 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import warnings
from tempfile import mkdtemp
import unicodenazi
warnings.filterwarnings('error', category=UnicodeWarning)
unicodenazi.enable()
from hypothesis import settings
from hypothesis.configuration import set_hypothesis_home_dir
set_hypothesis_home_dir(mkdtemp())
assert isinstance(settings, type)
settings.register_profile(
'default', settings(timeout=-1, strict=True)
)
settings.load_profile('default')
import inspect
import os
TESTS = [
'test_testdecorators',
]
import sys
sys.path.append(os.path.join(
os.path.dirname(__file__), "..", "tests", "cover",
))
if __name__ == '__main__':
for t in TESTS:
module = __import__(t)
for k, v in sorted(module.__dict__.items(), key=lambda x: x[0]):
if k.startswith("test_") and inspect.isfunction(v):
print(k)
v()
hypothesis-3.0.1/setup.cfg 0000664 0000000 0000000 00000000000 12661275660 0015534 0 ustar 00root root 0000000 0000000 hypothesis-3.0.1/setup.py 0000664 0000000 0000000 00000005332 12661275660 0015442 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from setuptools import find_packages, setup
import os
def local_file(name):
return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
SOURCE = local_file("src")
README = local_file("README.rst")
# Assignment to placate pyflakes. The actual version is from the exec that
# follows.
__version__ = None
with open(local_file("src/hypothesis/version.py")) as o:
exec(o.read())
assert __version__ is not None
extras = {
'datetime': ["pytz"],
'fakefactory': ["fake-factory>=0.5.2,<=0.5.3"],
'django': ['pytz', 'django>=1.7'],
'numpy': ['numpy>=1.9.0'],
'pytest': ['pytest>=2.7.0'],
}
extras['all'] = sum(extras.values(), [])
extras['django'].extend(extras['fakefactory'])
extras[":python_version == '2.6'"] = [
'importlib', 'ordereddict', 'Counter', 'enum34']
extras[":python_version == '2.7'"] = ['enum34']
setup(
name='hypothesis',
version=__version__,
author='David R. MacIver',
author_email='david@drmaciver.com',
packages=find_packages(SOURCE),
package_dir={"": SOURCE},
url='https://github.com/DRMacIver/hypothesis',
license='MPL v2',
description='A library for property-based testing',
zip_safe=False,
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Operating System :: Unix",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development :: Testing",
],
entry_points={
'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
},
long_description=open(README).read(),
)
hypothesis-3.0.1/src/ 0000775 0000000 0000000 00000000000 12661275660 0014514 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/ 0000775 0000000 0000000 00000000000 12661275660 0016713 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/__init__.py 0000664 0000000 0000000 00000002360 12661275660 0021025 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Hypothesis is a library for writing unit tests which are parametrized by
some source of data.
It verifies your code against a wide range of input and minimizes any
failing examples it finds.
"""
from hypothesis._settings import settings, Verbosity
from hypothesis.version import __version_info__, __version__
from hypothesis.control import assume, note, reject
from hypothesis.core import given, find, example, seed
__all__ = [
'settings',
'Verbosity',
'assume',
'reject',
'seed',
'given',
'find',
'example',
'note',
'__version__',
'__version_info__',
]
hypothesis-3.0.1/src/hypothesis/_settings.py 0000664 0000000 0000000 00000041344 12661275660 0021272 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""A module controlling settings for Hypothesis to use in falsification.
Either an explicit settings object can be used or the default object on
this module can be modified.
"""
from __future__ import division, print_function, absolute_import
import os
import inspect
import warnings
import threading
from collections import namedtuple
from hypothesis.errors import InvalidArgument, HypothesisDeprecationWarning
from hypothesis.configuration import hypothesis_home_dir
from hypothesis.utils.conventions import not_set
from hypothesis.utils.dynamicvariables import DynamicVariable
__all__ = [
'settings',
]
all_settings = {}
_db_cache = {}
class settingsProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, type=None):
if obj is None:
return self
else:
try:
return obj.__dict__[self.name]
except KeyError:
raise AttributeError(self.name)
def __set__(self, obj, value):
obj.__dict__[self.name] = value
def __delete__(self, obj):
try:
del obj.__dict__[self.name]
except KeyError:
raise AttributeError(self.name)
@property
def __doc__(self):
return '\n'.join((
all_settings[self.name].description,
'default value: %r' % (getattr(settings.default, self.name),)
))
default_variable = DynamicVariable(None)
class settingsMeta(type):
def __init__(self, *args, **kwargs):
super(settingsMeta, self).__init__(*args, **kwargs)
@property
def default(self):
v = default_variable.value
if v is not None:
return v
if hasattr(settings, '_current_profile'):
default_variable.value = settings.load_profile(
settings._current_profile
)
assert default_variable.value is not None
return default_variable.value
@default.setter
def default(self, value):
if default_variable.value is not None:
raise AttributeError('Cannot assign settings.default')
self._assign_default_internal(value)
def _assign_default_internal(self, value):
default_variable.value = value
class settings(settingsMeta('settings', (object,), {})):
"""A settings object controls a variety of parameters that are used in
falsification. These may control both the falsification strategy and the
details of the data that is generated.
Default values are picked up from the settings.default object and
changes made there will be picked up in newly created settings.
"""
_WHITELISTED_REAL_PROPERTIES = [
'_database', '_construction_complete', 'storage'
]
__definitions_are_locked = False
_profiles = {}
def __getattr__(self, name):
if name in all_settings:
d = all_settings[name].default
if inspect.isfunction(d):
d = d()
return d
else:
raise AttributeError('settings has no attribute %s' % (name,))
def __init__(
self,
parent=None,
**kwargs
):
self._construction_complete = False
self._database = kwargs.pop('database', not_set)
explicit_kwargs = list(kwargs)
defaults = parent or settings.default
if defaults is not None:
for setting in all_settings.values():
if kwargs.get(setting.name, not_set) is not_set:
kwargs[setting.name] = getattr(defaults, setting.name)
if self._database is not_set:
self._database = defaults.database
for name, value in kwargs.items():
if name not in all_settings:
raise InvalidArgument(
'Invalid argument %s' % (name,))
setattr(self, name, value)
self.storage = threading.local()
self._construction_complete = True
for k in explicit_kwargs:
deprecation = all_settings[k].deprecation
if deprecation:
note_deprecation(deprecation, self)
def defaults_stack(self):
try:
return self.storage.defaults_stack
except AttributeError:
self.storage.defaults_stack = []
return self.storage.defaults_stack
def __call__(self, test):
test._hypothesis_internal_use_settings = self
return test
@classmethod
def define_setting(
cls, name, description, default, options=None, deprecation=None,
):
"""Add a new setting.
- name is the name of the property that will be used to access the
setting. This must be a valid python identifier.
- description will appear in the property's docstring
- default is the default value. This may be a zero argument
function in which case it is evaluated and its result is stored
the first time it is accessed on any given settings object.
"""
if settings.__definitions_are_locked:
from hypothesis.errors import InvalidState
raise InvalidState(
'settings have been locked and may no longer be defined.'
)
if options is not None:
options = tuple(options)
if default not in options:
raise InvalidArgument(
'Default value %r is not in options %r' % (
default, options
)
)
all_settings[name] = Setting(
name, description.strip(), default, options, deprecation)
setattr(settings, name, settingsProperty(name))
@classmethod
def lock_further_definitions(cls):
settings.__definitions_are_locked = True
def __setattr__(self, name, value):
if name in settings._WHITELISTED_REAL_PROPERTIES:
return object.__setattr__(self, name, value)
elif name == 'database':
if self._construction_complete:
raise AttributeError(
'settings objects are immutable and may not be assigned to'
' after construction.'
)
else:
return object.__setattr__(self, '_database', value)
elif name in all_settings:
if self._construction_complete:
raise AttributeError(
'settings objects are immutable and may not be assigned to'
' after construction.'
)
else:
setting = all_settings[name]
if (
setting.options is not None and
value not in setting.options
):
raise InvalidArgument(
'Invalid %s, %r. Valid options: %r' % (
name, value, setting.options
)
)
return object.__setattr__(self, name, value)
else:
raise AttributeError('No such setting %s' % (name,))
def __repr__(self):
bits = []
for name in all_settings:
value = getattr(self, name)
bits.append('%s=%r' % (name, value))
bits.sort()
return 'settings(%s)' % ', '.join(bits)
@property
def database(self):
"""An ExampleDatabase instance to use for storage of examples. May be
None.
If this was explicitly set at settings instantiation then that
value will be used (even if it was None). If not and the
database_file setting is not None this will be lazily loaded as
an SQLite backed ExampleDatabase using that file the first time
this property is accessed on a particular thread.
"""
try:
if self._database is not_set and self.database_file is not None:
from hypothesis.database import ExampleDatabase
if self.database_file not in _db_cache:
_db_cache[self.database_file] = (
ExampleDatabase(self.database_file))
return _db_cache[self.database_file]
if self._database is not_set:
self._database = None
return self._database
except AttributeError:
import traceback
traceback.print_exc()
assert False
def __enter__(self):
default_context_manager = default_variable.with_value(self)
self.defaults_stack().append(default_context_manager)
default_context_manager.__enter__()
return self
def __exit__(self, *args, **kwargs):
default_context_manager = self.defaults_stack().pop()
return default_context_manager.__exit__(*args, **kwargs)
@staticmethod
def register_profile(name, settings):
"""registers a collection of values to be used as a settings profile.
These settings can be loaded in by name. Enable different defaults for
different settings.
- settings is a settings object
"""
settings._profiles[name] = settings
@staticmethod
def get_profile(name):
"""Return the profile with the given name.
- name is a string representing the name of the profile
to load
An InvalidArgument exception will be thrown if the
profile does not exist
"""
try:
return settings._profiles[name]
except KeyError:
raise InvalidArgument(
"Profile '{0}' has not been registered".format(
name
)
)
@staticmethod
def load_profile(name):
"""Loads in the settings defined in the profile provided If the profile
does not exist an InvalidArgument will be thrown.
Any setting not defined in the profile will be the library
defined default for that setting
"""
settings._current_profile = name
settings._assign_default_internal(settings.get_profile(name))
Setting = namedtuple(
'Setting', (
'name', 'description', 'default', 'options', 'deprecation'))
settings.define_setting(
'min_satisfying_examples',
default=5,
description="""
Raise Unsatisfiable for any tests which do not produce at least this many
values that pass all assume() calls and which have not exhaustively covered the
search space.
"""
)
settings.define_setting(
'max_examples',
default=200,
description="""
Once this many satisfying examples have been considered without finding any
counter-example, falsification will terminate.
"""
)
settings.define_setting(
'max_iterations',
default=1000,
description="""
Once this many iterations of the example loop have run, including ones which
failed to satisfy assumptions and ones which produced duplicates, falsification
will terminate.
"""
)
settings.define_setting(
'max_mutations',
default=10,
description="""
Hypothesis will try this many variations on a single example before moving on
to an entirely fresh start. If you've got hard-to-satisfy properties, raising
this might help, but you probably shouldn't touch this dial unless you really
know what you're doing.
"""
)
settings.define_setting(
'buffer_size',
default=8 * 1024,
description="""
The size of the underlying data used to generate examples. If you need to
generate really large examples you may want to increase this, but it will make
your tests slower.
"""
)
settings.define_setting(
'max_shrinks',
default=500,
description="""
Once this many successful shrinks have been performed, Hypothesis will assume
something has gone a bit wrong and give up rather than continuing to try to
shrink the example.
"""
)
settings.define_setting(
'timeout',
default=60,
description="""
Once this many seconds have passed, falsify will terminate even
if it has not found many examples. This is a soft rather than a hard
limit - Hypothesis won't e.g. interrupt execution of the called
function to stop it. If this value is <= 0 then no timeout will be
applied.
"""
)
settings.define_setting(
'derandomize',
default=False,
description="""
If this is True then hypothesis will run in deterministic mode
where each falsification uses a random number generator that is seeded
based on the hypothesis to falsify, which will be consistent across
multiple runs. This has the advantage that it will eliminate any
randomness from your tests, which may be preferable for some situations.
It does have the disadvantage of making your tests less likely to
find novel breakages.
"""
)
settings.define_setting(
'strict',
default=os.getenv('HYPOTHESIS_STRICT_MODE') == 'true',
description="""
If set to True, anything that would cause Hypothesis to issue a warning will
instead raise an error. Note that new warnings may be added at any time, so
running with strict set to True means that new Hypothesis releases may validly
break your code.
You can enable this setting temporarily by setting the HYPOTHESIS_STRICT_MODE
environment variable to the string 'true'.
"""
)
settings.define_setting(
'database_file',
default=lambda: (
os.getenv('HYPOTHESIS_DATABASE_FILE') or
os.path.join(hypothesis_home_dir(), 'examples')
),
description="""
database: An instance of hypothesis.database.ExampleDatabase that will be
used to save examples to and load previous examples from. May be None
in which case no storage will be used.
"""
)
class Verbosity(object):
def __repr__(self):
return 'Verbosity.%s' % (self.name,)
def __init__(self, name, level):
self.name = name
self.level = level
def __eq__(self, other):
return isinstance(other, Verbosity) and (
self.level == other.level
)
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return self.level
def __lt__(self, other):
return self.level < other.level
def __le__(self, other):
return self.level <= other.level
def __gt__(self, other):
return self.level > other.level
def __ge__(self, other):
return self.level >= other.level
@classmethod
def by_name(cls, key):
result = getattr(cls, key, None)
if isinstance(result, Verbosity):
return result
raise InvalidArgument('No such verbosity level %r' % (key,))
Verbosity.quiet = Verbosity('quiet', 0)
Verbosity.normal = Verbosity('normal', 1)
Verbosity.verbose = Verbosity('verbose', 2)
Verbosity.debug = Verbosity('debug', 3)
Verbosity.all = [
Verbosity.quiet, Verbosity.normal, Verbosity.verbose, Verbosity.debug
]
ENVIRONMENT_VERBOSITY_OVERRIDE = os.getenv('HYPOTHESIS_VERBOSITY_LEVEL')
if ENVIRONMENT_VERBOSITY_OVERRIDE:
DEFAULT_VERBOSITY = Verbosity.by_name(ENVIRONMENT_VERBOSITY_OVERRIDE)
else:
DEFAULT_VERBOSITY = Verbosity.normal
settings.define_setting(
'verbosity',
options=Verbosity.all,
default=DEFAULT_VERBOSITY,
description='Control the verbosity level of Hypothesis messages',
)
settings.define_setting(
name='stateful_step_count',
default=50,
description="""
Number of steps to run a stateful program for before giving up on it breaking.
"""
)
settings.define_setting(
'perform_health_check',
default=True,
description=u"""
If set to True, Hypothesis will run a preliminary health check before
attempting to actually execute your test.
"""
)
settings.lock_further_definitions()
settings.register_profile('default', settings())
settings.load_profile('default')
assert settings.default is not None
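# Illustrative sketch, not part of the library: typical use of the
# register_profile / load_profile API documented above. The profile names,
# values and environment variable below are hypothetical.
def _profile_usage_sketch():  # pragma: no cover
    settings.register_profile('ci', settings(max_examples=1000))
    settings.register_profile('dev', settings(max_examples=10))
    # Select a profile by name, falling back to the built-in default.
    settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default'))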
def note_deprecation(message, s=None):
# If *either* self or the current default is non-strict
# then this should not be an error. This is to handle e.g. the case
# where defining a new setting while non-strict updates a
# profile which is strict. This should not be an error, but
# using the profile here would cause it to be one.
if s is None:
s = settings.default
assert s is not None
strict = settings.default.strict and s.strict
verbosity = s.verbosity
warning = HypothesisDeprecationWarning(message)
if strict:
raise warning
elif verbosity > Verbosity.quiet:
warnings.warn(warning, stacklevel=3)
hypothesis-3.0.1/src/hypothesis/configuration.py 0000664 0000000 0000000 00000003060 12661275660 0022133 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
__hypothesis_home_directory_default = os.path.join(os.getcwd(), '.hypothesis')
__hypothesis_home_directory = None
def set_hypothesis_home_dir(directory):
global __hypothesis_home_directory
__hypothesis_home_directory = directory
def mkdir_p(path):
try:
os.makedirs(path)
except OSError:
pass
def hypothesis_home_dir():
global __hypothesis_home_directory
if not __hypothesis_home_directory:
__hypothesis_home_directory = os.getenv(
'HYPOTHESIS_STORAGE_DIRECTORY')
if not __hypothesis_home_directory:
__hypothesis_home_directory = __hypothesis_home_directory_default
mkdir_p(__hypothesis_home_directory)
return __hypothesis_home_directory
def storage_directory(*names):
path = os.path.join(hypothesis_home_dir(), *names)
mkdir_p(path)
return path
hypothesis-3.0.1/src/hypothesis/control.py 0000664 0000000 0000000 00000006663 12661275660 0020760 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import traceback
from hypothesis.errors import CleanupFailed, InvalidArgument, \
UnsatisfiedAssumption
from hypothesis.reporting import report
from hypothesis.utils.dynamicvariables import DynamicVariable
def reject():
raise UnsatisfiedAssumption()
def assume(condition):
"""Assert a precondition for this test.
If this is not truthy then the test will abort but not fail and
Hypothesis will make a "best effort" attempt to avoid similar
examples in future.
"""
if not condition:
raise UnsatisfiedAssumption()
return True
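# Illustrative sketch, not part of the library: how assume() is typically
# used inside a @given test to discard examples that fail a precondition.
# The test body and strategies below are hypothetical.
def _assume_usage_sketch():  # pragma: no cover
    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.integers(), st.integers())
    def test_divmod_identity(a, b):
        # Skip (rather than fail) examples with a zero divisor.
        assume(b != 0)
        assert a == (a // b) * b + a % b

    test_divmod_identity()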
_current_build_context = DynamicVariable(None)
def current_build_context():
context = _current_build_context.value
if context is None:
raise InvalidArgument(
u'No build context registered')
return context
class BuildContext(object):
def __init__(self, is_final=False, close_on_capture=True):
self.tasks = []
self.is_final = is_final
self.close_on_capture = close_on_capture
self.close_on_del = False
self.notes = []
def __enter__(self):
self.assign_variable = _current_build_context.with_value(self)
self.assign_variable.__enter__()
return self
def __exit__(self, exc_type, exc_value, tb):
self.assign_variable.__exit__(exc_type, exc_value, tb)
if self.close() and exc_type is None:
raise CleanupFailed()
def local(self):
return _current_build_context.with_value(self)
def close(self):
any_failed = False
for task in self.tasks:
try:
task()
except:
any_failed = True
report(traceback.format_exc())
return any_failed
def cleanup(teardown):
"""Register a function to be called when the current test has finished
executing. Any exceptions thrown in teardown will be printed but not
rethrown.
Inside a test this isn't very interesting, because you can just use
a finally block, but note that you can use this inside map, flatmap,
etc. in order to e.g. insist that a value is closed at the end.
"""
context = _current_build_context.value
if context is None:
raise InvalidArgument(
u'Cannot register cleanup outside of build context')
context.tasks.append(teardown)
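# Illustrative sketch, not part of the library: cleanup() is most useful
# inside strategy map/flatmap calls where a finally block is unavailable.
# The temporary-file strategy below is hypothetical.
def _cleanup_usage_sketch():  # pragma: no cover
    import tempfile
    import hypothesis.strategies as st

    def as_tracked_file(_):
        f = tempfile.TemporaryFile()
        # Close the handle once the current test case finishes executing.
        cleanup(f.close)
        return f

    return st.integers().map(as_tracked_file)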
def note(value):
"""Report this value in the final execution.
The value's string conversion function will always be called, even when
it is not printed, for consistency of execution.
"""
context = _current_build_context.value
if context is None:
raise InvalidArgument(
'Cannot make notes outside of build context')
context.notes.append(value)
if context.is_final:
report(value)
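# Illustrative sketch, not part of the library: note() attaches extra
# information that is reported alongside the final falsifying example.
# The test below is hypothetical.
def _note_usage_sketch():  # pragma: no cover
    from hypothesis import given
    import hypothesis.strategies as st

    @given(st.lists(st.integers()))
    def test_sorting_is_idempotent(xs):
        once = sorted(xs)
        note('after first sort: %r' % (once,))
        assert sorted(once) == once

    test_sorting_is_idempotent()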
hypothesis-3.0.1/src/hypothesis/core.py 0000664 0000000 0000000 00000061372 12661275660 0020226 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This module provides the core primitives of Hypothesis, assume and given."""
from __future__ import division, print_function, absolute_import
import time
import inspect
import functools
import traceback
from random import getstate as getglobalrandomstate
from random import Random
from collections import namedtuple
from hypothesis.errors import Flaky, Timeout, NoSuchExample, \
Unsatisfiable, InvalidArgument, FailedHealthCheck, \
UnsatisfiedAssumption, HypothesisDeprecationWarning
from hypothesis.control import BuildContext
from hypothesis._settings import settings as Settings
from hypothesis._settings import Verbosity
from hypothesis.executors import new_style_executor, \
default_new_style_executor
from hypothesis.reporting import report, verbose_report, current_verbosity
from hypothesis.internal.compat import getargspec, str_to_bytes
from hypothesis.internal.reflection import arg_string, impersonate, \
copy_argspec, function_digest, fully_qualified_name, \
convert_positional_arguments, get_pretty_function_description
from hypothesis.searchstrategy.strategies import SearchStrategy
def new_random():
import random
return random.Random(random.getrandbits(128))
def test_is_flaky(test, expected_repr):
@functools.wraps(test)
def test_or_flaky(*args, **kwargs):
text_repr = arg_string(test, args, kwargs)
raise Flaky(
(
'Hypothesis %s(%s) produces unreliable results: Falsified'
' on the first call but did not on a subsequent one'
) % (test.__name__, text_repr,))
return test_or_flaky
Example = namedtuple('Example', ('args', 'kwargs'))
def example(*args, **kwargs):
"""Add an explicit example called with these args and kwargs to the
test."""
if args and kwargs:
raise InvalidArgument(
'Cannot mix positional and keyword arguments for examples'
)
if not (args or kwargs):
raise InvalidArgument(
'An example must provide at least one argument'
)
def accept(test):
if not hasattr(test, 'hypothesis_explicit_examples'):
test.hypothesis_explicit_examples = []
test.hypothesis_explicit_examples.append(Example(tuple(args), kwargs))
return test
return accept
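# Illustrative sketch, not part of the library: @example adds explicit cases
# that are always run in addition to the generated ones. The test and the
# example values below are hypothetical.
def _example_usage_sketch():  # pragma: no cover
    import hypothesis.strategies as st

    @given(st.text())
    @example(u'')
    @example(u'\x00')
    def test_utf8_round_trip(s):
        assert s.encode('utf-8').decode('utf-8') == s

    test_utf8_round_trip()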
def reify_and_execute(
search_strategy, test,
print_example=False,
is_final=False,
):
def run(data):
with BuildContext(is_final=is_final):
args, kwargs = data.draw(search_strategy)
if print_example:
report(
lambda: 'Falsifying example: %s(%s)' % (
test.__name__, arg_string(test, args, kwargs)))
elif current_verbosity() >= Verbosity.verbose:
report(
lambda: 'Trying example: %s(%s)' % (
test.__name__, arg_string(test, args, kwargs)))
return test(*args, **kwargs)
return run
def seed(seed):
"""
seed: Start the test execution from a specific seed. May be any hashable
object. No exact meaning for seed is provided other than that
for a fixed seed value Hypothesis will try the same actions (insofar
as it can given external sources of non-determinism, e.g. timing and
hash randomization).
Overrides the derandomize setting if it is present.
"""
def accept(test):
test._hypothesis_internal_use_seed = seed
return test
return accept
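# Illustrative sketch, not part of the library: @seed fixes the randomness
# used for one test, which helps when reproducing a previous failure. The
# seed value and test below are hypothetical.
def _seed_usage_sketch():  # pragma: no cover
    import hypothesis.strategies as st

    @seed(12345)
    @given(st.lists(st.integers()))
    def test_double_reverse_is_identity(xs):
        assert list(reversed(list(reversed(xs)))) == xs

    test_double_reverse_is_identity()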
def given(*generator_arguments, **generator_kwargs):
"""A decorator for turning a test function that accepts arguments into a
randomized test.
This is the main entry point to Hypothesis. See the full tutorial
for details of its behaviour.
"""
def run_test_with_generator(test):
original_argspec = getargspec(test)
def invalid(message):
def wrapped_test(*arguments, **kwargs):
raise InvalidArgument(message)
return wrapped_test
if not (generator_arguments or generator_kwargs):
return invalid(
'given must be called with at least one argument')
if (
generator_arguments and (
original_argspec.varargs or original_argspec.keywords)
):
return invalid(
'varargs or keywords are not supported with positional '
'arguments to @given'
)
if (
len(generator_arguments) > len(original_argspec.args)
):
return invalid((
'Too many positional arguments for %s() (got %d but'
' expected at most %d') % (
test.__name__, len(generator_arguments),
len(original_argspec.args)))
if generator_arguments and generator_kwargs:
return invalid(
'cannot mix positional and keyword arguments to @given'
)
extra_kwargs = [
k for k in generator_kwargs if k not in original_argspec.args]
if extra_kwargs and not original_argspec.keywords:
return invalid(
'%s() got an unexpected keyword argument %r' % (
test.__name__,
extra_kwargs[0]
))
arguments = original_argspec.args
for a in arguments:
if isinstance(a, list): # pragma: no cover
return invalid((
'Cannot decorate function %s() because it has '
'destructuring arguments') % (
test.__name__,
))
if original_argspec.defaults:
return invalid(
'Cannot apply @given to a function with defaults.'
)
for name, strategy in zip(
arguments[-len(generator_arguments):], generator_arguments
):
generator_kwargs[name] = strategy
argspec = inspect.ArgSpec(
args=[a for a in arguments if a not in generator_kwargs],
keywords=original_argspec.keywords,
varargs=original_argspec.varargs,
defaults=None
)
@impersonate(test)
@copy_argspec(
test.__name__, argspec
)
def wrapped_test(*arguments, **kwargs):
settings = wrapped_test._hypothesis_internal_use_settings
if wrapped_test._hypothesis_internal_use_seed is not None:
random = Random(
wrapped_test._hypothesis_internal_use_seed)
elif settings.derandomize:
random = Random(function_digest(test))
else:
random = new_random()
import hypothesis.strategies as sd
selfy = None
arguments, kwargs = convert_positional_arguments(
wrapped_test, arguments, kwargs)
# If the test function is a method of some kind, the bound object
# will be the first named argument if there are any, otherwise the
# first vararg (if any).
if argspec.args:
selfy = kwargs.get(argspec.args[0])
elif arguments:
selfy = arguments[0]
test_runner = new_style_executor(selfy)
for example in reversed(getattr(
wrapped_test, 'hypothesis_explicit_examples', ()
)):
if example.args:
if len(example.args) > len(original_argspec.args):
raise InvalidArgument(
'example has too many arguments for test. '
'Expected at most %d but got %d' % (
len(original_argspec.args), len(example.args)))
example_kwargs = dict(zip(
original_argspec.args[-len(example.args):],
example.args
))
else:
example_kwargs = example.kwargs
example_kwargs.update(kwargs)
# Note: Test may mutate arguments and we can't rerun explicit
# examples, so we have to calculate the failure message at this
# point rather than later.
message_on_failure = 'Falsifying example: %s(%s)' % (
test.__name__, arg_string(test, arguments, example_kwargs)
)
try:
with BuildContext() as b:
test_runner(
None,
lambda data: test(*arguments, **example_kwargs)
)
except BaseException:
report(message_on_failure)
for n in b.notes:
report(n)
raise
if settings.max_examples <= 0:
return
arguments = tuple(arguments)
given_specifier = sd.tuples(
sd.just(arguments),
sd.fixed_dictionaries(generator_kwargs).map(
lambda args: dict(args, **kwargs)
)
)
def fail_health_check(message):
message += (
'\nSee http://hypothesis.readthedocs.org/en/latest/health'
'checks.html for more information about this.'
)
raise FailedHealthCheck(message)
search_strategy = given_specifier
search_strategy.validate()
perform_health_check = settings.perform_health_check
perform_health_check &= Settings.default.perform_health_check
from hypothesis.internal.conjecture.data import TestData, Status, \
StopTest
if perform_health_check:
initial_state = getglobalrandomstate()
health_check_random = Random(random.getrandbits(128))
count = 0
overruns = 0
filtered_draws = 0
start = time.time()
while (
count < 10 and time.time() < start + 1 and
filtered_draws < 50 and overruns < 20
):
try:
data = TestData(
max_length=settings.buffer_size,
draw_bytes=lambda data, n, distribution:
distribution(health_check_random, n)
)
with Settings(settings, verbosity=Verbosity.quiet):
test_runner(data, reify_and_execute(
search_strategy,
lambda *args, **kwargs: None,
))
count += 1
except UnsatisfiedAssumption:
filtered_draws += 1
except StopTest:
if data.status == Status.INVALID:
filtered_draws += 1
else:
assert data.status == Status.OVERRUN
overruns += 1
except Exception:
report(traceback.format_exc())
if test_runner is default_new_style_executor:
fail_health_check(
'An exception occurred during data '
'generation in initial health check. '
'This indicates a bug in the strategy. '
'This could either be a Hypothesis bug or '
"an error in a function yo've passed to "
'it to construct your data.'
)
else:
fail_health_check(
'An exception occurred during data '
'generation in initial health check. '
'This indicates a bug in the strategy. '
'This could either be a Hypothesis bug or '
'an error in a function you\'ve passed to '
'it to construct your data. Additionally, '
'you have a custom executor, which means '
'that this could be your executor failing '
'to handle a function which returns None. '
)
if overruns >= 20 or (
not count and overruns > 0
):
fail_health_check((
'Examples routinely exceeded the max allowable size. '
'(%d examples overran while generating %d valid ones)'
'. Generating examples this large will usually lead to'
' bad results. You should try setting average_size or '
'max_size parameters on your collections and turning '
'max_leaves down on recursive() calls.') % (
overruns, count
))
if filtered_draws >= 50 or (
not count and filtered_draws > 0
):
fail_health_check((
'It looks like your strategy is filtering out a lot '
'of data. Health check found %d filtered examples but '
'only %d good ones. This will make your tests much '
'slower, and also will probably distort the data '
'generation quite a lot. You should adapt your '
'strategy to filter less. This can also be caused by '
'a low max_leaves parameter in recursive() calls') % (
filtered_draws, count
))
runtime = time.time() - start
if runtime > 1.0 or count < 10:
fail_health_check((
'Data generation is extremely slow: Only produced '
'%d valid examples in %.2f seconds (%d invalid ones '
'and %d exceeded maximum size). Try decreasing '
"size of the data you're generating (with e.g."
'average_size or max_leaves parameters).'
) % (count, runtime, filtered_draws, overruns))
if getglobalrandomstate() != initial_state:
fail_health_check(
'Data generation depends on global random module. '
'This makes results impossible to replay, which '
'prevents Hypothesis from working correctly. '
'If you want to use methods from random, use '
'randoms() from hypothesis.strategies to get an '
'instance of Random you can use. Alternatively, you '
'can use the random_module() strategy to explicitly '
'seed the random module.'
)
last_exception = [None]
repr_for_last_exception = [None]
performed_random_check = [False]
def evaluate_test_data(data):
if perform_health_check and not performed_random_check[0]:
initial_state = getglobalrandomstate()
performed_random_check[0] = True
else:
initial_state = None
try:
result = test_runner(data, reify_and_execute(
search_strategy, test,
))
if result is not None and settings.perform_health_check:
raise FailedHealthCheck((
'Tests run under @given should return None, but '
'%s returned %r instead.'
) % (test.__name__, result), settings)
return False
except UnsatisfiedAssumption:
data.mark_invalid()
except (
HypothesisDeprecationWarning, FailedHealthCheck,
StopTest,
):
raise
except Exception:
last_exception[0] = traceback.format_exc()
verbose_report(last_exception[0])
data.mark_interesting()
finally:
if (
initial_state is not None and
getglobalrandomstate() != initial_state
):
fail_health_check(
'Your test used the global random module. '
'This is unlikely to work correctly. You should '
'consider using the randoms() strategy from '
'hypothesis.strategies instead. Alternatively, '
'you can use the random_module() strategy to '
'explicitly seed the random module.')
from hypothesis.internal.conjecture.engine import TestRunner
falsifying_example = None
database_key = str_to_bytes(fully_qualified_name(test))
start_time = time.time()
runner = TestRunner(
evaluate_test_data,
settings=settings, random=random,
database_key=database_key,
)
runner.run()
run_time = time.time() - start_time
timed_out = (
settings.timeout > 0 and
run_time >= settings.timeout
)
if runner.last_data.status == Status.INTERESTING:
falsifying_example = runner.last_data.buffer
if settings.database is not None:
settings.database.save(
database_key, falsifying_example
)
else:
if runner.valid_examples < min(
settings.min_satisfying_examples,
settings.max_examples,
):
if timed_out:
raise Timeout((
'Ran out of time before finding a satisfying '
'example for '
'%s. Only found %d examples in ' +
'%.2fs.'
) % (
get_pretty_function_description(test),
runner.valid_examples, run_time
))
else:
raise Unsatisfiable((
'Unable to satisfy assumptions of hypothesis '
'%s. Only %d examples considered '
'satisfied assumptions'
) % (
get_pretty_function_description(test),
runner.valid_examples,))
return
assert last_exception[0] is not None
try:
with settings:
test_runner(
TestData.for_buffer(falsifying_example),
reify_and_execute(
search_strategy, test,
print_example=True, is_final=True
))
except (UnsatisfiedAssumption, StopTest):
report(traceback.format_exc())
raise Flaky(
'Unreliable assumption: An example which satisfied '
'assumptions on the first run now fails it.'
)
report(
'Failed to reproduce exception. Expected: \n' +
last_exception[0],
)
filter_message = (
'Unreliable test data: Failed to reproduce a failure '
'and then when it came to recreating the example in '
'order to print the test data with a flaky result '
'the example was filtered out (by e.g. a '
'call to filter in your strategy) when we didn\'t '
'expect it to be.'
)
try:
test_runner(
TestData.for_buffer(falsifying_example),
reify_and_execute(
search_strategy,
test_is_flaky(test, repr_for_last_exception[0]),
print_example=True, is_final=True
))
except (UnsatisfiedAssumption, StopTest):
raise Flaky(filter_message)
for attr in dir(test):
if attr[0] != '_' and not hasattr(wrapped_test, attr):
setattr(wrapped_test, attr, getattr(test, attr))
wrapped_test.is_hypothesis_test = True
wrapped_test._hypothesis_internal_use_seed = getattr(
test, '_hypothesis_internal_use_seed', None
)
wrapped_test._hypothesis_internal_use_settings = getattr(
test, '_hypothesis_internal_use_settings', None
) or Settings.default
return wrapped_test
return run_test_with_generator
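# Illustrative sketch, not part of the library: the typical shape of a test
# using @given with keyword strategies, as described in the docstring above.
# The property below is hypothetical.
def _given_usage_sketch():  # pragma: no cover
    import hypothesis.strategies as st

    @given(xs=st.lists(st.integers()))
    def test_sum_is_order_independent(xs):
        assert sum(xs) == sum(reversed(xs))

    test_sum_is_order_independent()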
def find(specifier, condition, settings=None, random=None, database_key=None):
settings = settings or Settings(
max_examples=2000,
min_satisfying_examples=0,
max_shrinks=2000,
)
if database_key is None and settings.database is not None:
database_key = function_digest(condition)
if not isinstance(specifier, SearchStrategy):
raise InvalidArgument(
'Expected SearchStrategy but got %r of type %s' % (
specifier, type(specifier).__name__
))
search = specifier
random = random or new_random()
successful_examples = [0]
last_data = [None]
def template_condition(data):
with BuildContext():
try:
data.is_find = True
result = data.draw(search)
data.note(result)
success = condition(result)
except UnsatisfiedAssumption:
data.mark_invalid()
if success:
successful_examples[0] += 1
if settings.verbosity == Verbosity.verbose:
if not successful_examples[0]:
report(lambda: u'Trying example %s' % (
repr(result),
))
elif success:
if successful_examples[0] == 1:
report(lambda: u'Found satisfying example %s' % (
repr(result),
))
else:
report(lambda: u'Shrunk example to %s' % (
repr(result),
))
last_data[0] = data
if success and not data.frozen:
data.mark_interesting()
from hypothesis.internal.conjecture.engine import TestRunner
from hypothesis.internal.conjecture.data import TestData, Status
start = time.time()
runner = TestRunner(
template_condition, settings=settings, random=random,
database_key=database_key,
)
runner.run()
run_time = time.time() - start
if runner.last_data.status == Status.INTERESTING:
with BuildContext():
return TestData.for_buffer(runner.last_data.buffer).draw(search)
if runner.valid_examples <= settings.min_satisfying_examples:
if settings.timeout > 0 and run_time > settings.timeout:
raise Timeout((
'Ran out of time before finding enough valid examples for '
'%s. Only %d valid examples found in %.2f seconds.'
) % (
get_pretty_function_description(condition),
runner.valid_examples, run_time))
else:
raise Unsatisfiable((
'Unable to satisfy assumptions of '
'%s. Only %d examples considered satisfied assumptions'
) % (
get_pretty_function_description(condition),
runner.valid_examples,))
raise NoSuchExample(get_pretty_function_description(condition))
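# Illustrative sketch, not part of the library: find() returns a minimal
# example satisfying the condition, shrinking as it goes. The strategy and
# condition below are hypothetical.
def _find_usage_sketch():  # pragma: no cover
    import hypothesis.strategies as st

    # Typically shrinks towards a short list of small integers whose sum
    # just reaches the threshold.
    return find(st.lists(st.integers()), lambda xs: sum(xs) >= 10)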
hypothesis-3.0.1/src/hypothesis/database.py 0000664 0000000 0000000 00000016042 12661275660 0021034 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import re
import base64
import hashlib
import sqlite3
import binascii
import threading
from contextlib import contextmanager
SQLITE_PATH = re.compile(r"\.(db|sqlite|sqlite3)$")
def _db_for_path(path=None):
if path in (None, ':memory:'):
return InMemoryExampleDatabase()
path = str(path)
if os.path.isdir(path):
return DirectoryBasedExampleDatabase(path)
if os.path.exists(path):
return SQLiteExampleDatabase(path)
if SQLITE_PATH.search(path):
return SQLiteExampleDatabase(path)
else:
return DirectoryBasedExampleDatabase(path)
class EDMeta(type):
def __call__(self, *args, **kwargs):
if self is ExampleDatabase:
return _db_for_path(*args, **kwargs)
return super(EDMeta, self).__call__(*args, **kwargs)
class ExampleDatabase(EDMeta('ExampleDatabase', (object,), {})):
"""Interface class for storage systems.
A key -> multiple distinct values mapping.
Keys and values are binary data.
"""
def save(self, key, value):
"""save this value under this key.
If this value is already present for this key, silently do
nothing
"""
raise NotImplementedError('%s.save' % (type(self).__name__))
def delete(self, key, value):
"""Remove this value from this key.
If this value is not present, silently do nothing.
"""
raise NotImplementedError('%s.delete' % (type(self).__name__))
def fetch(self, key):
"""Return all values matching this key."""
raise NotImplementedError('%s.fetch' % (type(self).__name__))
def close(self):
"""Clear up any resources associated with this database."""
raise NotImplementedError('%s.close' % (type(self).__name__))
class InMemoryExampleDatabase(ExampleDatabase):
def __init__(self):
self.data = {}
def __repr__(self):
return 'InMemoryExampleDatabase(%r)' % (self.data,)
def fetch(self, key):
for v in self.data.get(key, ()):
yield v
def save(self, key, value):
self.data.setdefault(key, set()).add(value)
def delete(self, key, value):
self.data.get(key, set()).discard(value)
def close(self):
pass
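# Illustrative sketch, not part of the library: the ExampleDatabase interface
# described above is a key -> set-of-values mapping over binary data, with
# idempotent save and delete. The keys and values below are hypothetical.
def _example_database_usage_sketch():  # pragma: no cover
    db = InMemoryExampleDatabase()
    db.save(b'some.test.key', b'\x00\x01')
    db.save(b'some.test.key', b'\x00\x01')  # duplicate saves are no-ops
    assert list(db.fetch(b'some.test.key')) == [b'\x00\x01']
    db.delete(b'some.test.key', b'\x00\x01')
    assert list(db.fetch(b'some.test.key')) == []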
class SQLiteExampleDatabase(ExampleDatabase):
def __init__(self, path=u':memory:'):
self.path = path
self.db_created = False
self.current_connection = threading.local()
def connection(self):
if not hasattr(self.current_connection, 'connection'):
self.current_connection.connection = sqlite3.connect(self.path)
return self.current_connection.connection
def close(self):
if hasattr(self.current_connection, 'connection'):
try:
self.connection().close()
finally:
del self.current_connection.connection
def __repr__(self):
return u'%s(%s)' % (self.__class__.__name__, self.path)
@contextmanager
def cursor(self):
conn = self.connection()
cursor = conn.cursor()
try:
try:
yield cursor
finally:
cursor.close()
except:
conn.rollback()
raise
else:
conn.commit()
def save(self, key, value):
self.create_db_if_needed()
with self.cursor() as cursor:
try:
cursor.execute("""
insert into hypothesis_data_mapping(key, value)
values(?, ?)
""", (base64.b64encode(key), base64.b64encode(value)))
except sqlite3.IntegrityError:
pass
def delete(self, key, value):
self.create_db_if_needed()
with self.cursor() as cursor:
cursor.execute("""
delete from hypothesis_data_mapping
where key = ? and value = ?
""", (base64.b64encode(key), base64.b64encode(value)))
def fetch(self, key):
self.create_db_if_needed()
with self.cursor() as cursor:
cursor.execute("""
select value from hypothesis_data_mapping
where key = ?
""", (base64.b64encode(key),))
for (value,) in cursor:
try:
yield base64.b64decode(value)
except (binascii.Error, TypeError):
pass
def create_db_if_needed(self):
if self.db_created:
return
with self.cursor() as cursor:
cursor.execute("""
create table if not exists hypothesis_data_mapping(
key text,
value text,
unique(key, value)
)
""")
self.db_created = True
def mkdirp(path):
try:
os.makedirs(path)
except OSError:
pass
return path
def _hash(key):
return hashlib.sha1(key).hexdigest()[:16]
class DirectoryBasedExampleDatabase(ExampleDatabase):
def __init__(self, path):
self.path = path
self.keypaths = {}
def __repr__(self):
return 'DirectoryBasedExampleDatabase(%r)' % (self.path,)
def close(self):
pass
def _key_path(self, key):
try:
return self.keypaths[key]
except KeyError:
pass
directory = os.path.join(self.path, _hash(key))
mkdirp(directory)
self.keypaths[key] = directory
return directory
def _value_path(self, key, value):
return os.path.join(
self._key_path(key),
hashlib.sha1(value).hexdigest()[:16]
)
def fetch(self, key):
kp = self._key_path(key)
for path in os.listdir(kp):
with open(os.path.join(kp, path), 'rb') as i:
yield i.read()
def save(self, key, value):
path = self._value_path(key, value)
if not os.path.exists(path):
tmpname = path + '.' + str(binascii.hexlify(os.urandom(16)))
with open(tmpname, 'wb') as o:
o.write(value)
try:
os.rename(tmpname, path)
except OSError: # pragma: no cover
os.unlink(tmpname)
assert not os.path.exists(tmpname)
def delete(self, key, value):
try:
os.unlink(self._value_path(key, value))
except OSError:
pass
hypothesis-3.0.1/src/hypothesis/errors.py 0000664 0000000 0000000 00000012473 12661275660 0020610 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import warnings
class HypothesisException(Exception):
"""Generic parent class for exceptions thrown by Hypothesis."""
pass
class CleanupFailed(HypothesisException):
"""At least one cleanup task failed and no other exception was raised."""
class UnsatisfiedAssumption(HypothesisException):
"""An internal error raised by assume.
If you're seeing this error something has gone wrong.
"""
class BadTemplateDraw(HypothesisException):
"""An internal error raised when something unfortunate happened during
template generation and you should restart the draw, preferably with a new
parameter.
This is not an error condition internally, but if you ever see this
    in your code it's probably a Hypothesis bug.
"""
class NoSuchExample(HypothesisException):
"""The condition we have been asked to satisfy appears to be always false.
This does not guarantee that no example exists, only that we were
unable to find one.
"""
def __init__(self, condition_string, extra=''):
super(NoSuchExample, self).__init__(
'No examples found of condition %s%s' % (
condition_string, extra)
)
class DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover
"""Hypothesis used to be able to detect exhaustive coverage of a search
space and no longer can.
This exception remains for compatibility reasons for now but can
never actually be thrown.
"""
class NoExamples(HypothesisException):
"""Raised when example() is called on a strategy but we cannot find any
examples after enough tries that we really should have been able to if this
was ever going to work."""
pass
class Unsatisfiable(HypothesisException):
"""We ran out of time or examples before we could find enough examples
which satisfy the assumptions of this hypothesis.
This could be because the function is too slow. If so, try upping
the timeout. It could also be because the function is using assume
in a way that is too hard to satisfy. If so, try writing a custom
    strategy or using a better starting point (e.g. if you are requiring
    that a list has unique values you could instead filter out all
    duplicate values from the list).
"""
class Flaky(HypothesisException):
"""
This function appears to fail non-deterministically: We have seen it fail
when passed this example at least once, but a subsequent invocation did not
fail.
Common causes for this problem are:
1. The function depends on external state. e.g. it uses an external
random number generator. Try to make a version that passes all the
relevant state in from Hypothesis.
2. The function is suffering from too much recursion and its failure
depends sensitively on where it's been called from.
3. The function is timing sensitive and can fail or pass depending on
how long it takes. Try breaking it up into smaller functions which
        don't do that and testing those instead.
"""
class Timeout(Unsatisfiable):
"""We were unable to find enough examples that satisfied the preconditions
of this hypothesis in the amount of time allotted to us."""
class WrongFormat(HypothesisException, ValueError):
"""An exception indicating you have attempted to serialize a value that
does not match the type described by this format."""
class BadData(HypothesisException, ValueError):
"""The data that we got out of the database does not seem to match the data
we could have put into the database given this schema."""
class InvalidArgument(HypothesisException, TypeError):
"""Used to indicate that the arguments to a Hypothesis function were in
some manner incorrect."""
class InvalidState(HypothesisException):
"""The system is not in a state where you were allowed to do that."""
class InvalidDefinition(HypothesisException, TypeError):
"""Used to indicate that a class definition was not well put together and
has something wrong with it."""
class AbnormalExit(HypothesisException):
"""Raised when a test running in a child process exits without returning or
raising an exception."""
class FailedHealthCheck(HypothesisException, Warning):
"""Raised when a test fails a preliminary healthcheck that occurs before
execution."""
class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):
pass
warnings.simplefilter('once', HypothesisDeprecationWarning)
class Frozen(HypothesisException):
"""Raised when a mutation method has been called on a TestData object after
freeze() has been called."""
hypothesis-3.0.1/src/hypothesis/executors.py 0000664 0000000 0000000 00000004206 12661275660 0021310 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
def default_executor(function): # pragma: nocover
raise NotImplementedError() # We don't actually use this any more
def setup_teardown_executor(setup, teardown):
setup = setup or (lambda: None)
teardown = teardown or (lambda ex: None)
def execute(function):
token = None
try:
token = setup()
return function()
finally:
teardown(token)
return execute
def executor(runner):
try:
return runner.execute_example
except AttributeError:
pass
if (
hasattr(runner, 'setup_example') or
hasattr(runner, 'teardown_example')
):
return setup_teardown_executor(
getattr(runner, 'setup_example', None),
getattr(runner, 'teardown_example', None),
)
return default_executor
def default_new_style_executor(data, function):
return function(data)
class TestRunner(object):
def hypothesis_execute_example_with_data(self, data, function):
return function(data)
def new_style_executor(runner):
if runner is None:
return default_new_style_executor
if isinstance(runner, TestRunner):
return runner.hypothesis_execute_example_with_data
old_school = executor(runner)
if old_school is default_executor:
return default_new_style_executor
else:
return lambda data, function: old_school(
lambda: function(data)
)
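# Illustrative sketch (not part of the library source): any runner object
# exposing setup_example/teardown_example is wrapped by executor() above,
# and the value returned by setup_example is passed back to teardown_example.
# The class and helper names below are hypothetical.
#
#     class TransactionalRunner(object):
#         def setup_example(self):
#             return begin_transaction()   # hypothetical helper
#         def teardown_example(self, token):
#             token.rollback()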
hypothesis-3.0.1/src/hypothesis/extra/ 0000775 0000000 0000000 00000000000 12661275660 0020036 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/extra/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0022142 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/src/hypothesis/extra/datetime.py 0000664 0000000 0000000 00000010111 12661275660 0022176 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import datetime as dt
import pytz
import hypothesis.internal.conjecture.utils as cu
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import defines_strategy
from hypothesis.searchstrategy.strategies import SearchStrategy
class DatetimeStrategy(SearchStrategy):
def __init__(self, allow_naive, timezones, min_year=None, max_year=None):
self.allow_naive = allow_naive
self.timezones = timezones
self.min_year = min_year or dt.MINYEAR
self.max_year = max_year or dt.MAXYEAR
for a in ['min_year', 'max_year']:
year = getattr(self, a)
if year < dt.MINYEAR:
raise InvalidArgument(u'%s out of range: %d < %d' % (
a, year, dt.MINYEAR
))
if year > dt.MAXYEAR:
raise InvalidArgument(u'%s out of range: %d > %d' % (
a, year, dt.MAXYEAR
))
def do_draw(self, data):
while True:
try:
result = dt.datetime(
year=cu.centered_integer_range(
data, self.min_year, self.max_year, 2000
),
month=cu.integer_range(data, 1, 12),
day=cu.integer_range(data, 1, 31),
                    hour=cu.integer_range(data, 0, 23),
minute=cu.integer_range(data, 0, 59),
second=cu.integer_range(data, 0, 59),
microsecond=cu.integer_range(data, 0, 999999)
)
if (
not self.allow_naive or
(self.timezones and cu.boolean(data))
):
result = cu.choice(data, self.timezones).localize(result)
return result
except (OverflowError, ValueError):
pass
@defines_strategy
def datetimes(allow_naive=None, timezones=None, min_year=None, max_year=None):
"""Return a strategy for generating datetimes.
allow_naive=True will cause the values to sometimes be naive.
    timezones is the set of permissible timezones. If set to an empty
    collection all generated datetimes will be naive. If set to None all
    available timezones will be used.
"""
if timezones is None:
timezones = list(pytz.all_timezones)
timezones.remove(u'UTC')
timezones.insert(0, u'UTC')
timezones = [
tz if isinstance(tz, dt.tzinfo) else pytz.timezone(tz)
for tz in timezones
]
if allow_naive is None:
allow_naive = not timezones
if not (timezones or allow_naive):
raise InvalidArgument(
u'Cannot create non-naive datetimes with no timezones allowed'
)
return DatetimeStrategy(
allow_naive=allow_naive, timezones=timezones,
min_year=min_year, max_year=max_year,
)
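# Illustrative usage of the strategy above (not part of the library source):
#
#     datetimes(allow_naive=False, min_year=2000, max_year=2001).example()
#     datetimes(timezones=[]).example()   # no timezones, so always naive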
@defines_strategy
def dates(min_year=None, max_year=None):
"""Return a strategy for generating dates."""
return datetimes(
allow_naive=True, timezones=[],
min_year=min_year, max_year=max_year,
).map(datetime_to_date)
def datetime_to_date(dt):
return dt.date()
@defines_strategy
def times(allow_naive=None, timezones=None):
"""Return a strategy for generating times."""
return datetimes(
allow_naive=allow_naive, timezones=timezones,
).map(datetime_to_time)
def datetime_to_time(dt):
return dt.timetz()
hypothesis-3.0.1/src/hypothesis/extra/django/ 0000775 0000000 0000000 00000000000 12661275660 0021300 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/extra/django/__init__.py 0000664 0000000 0000000 00000002420 12661275660 0023407 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
import unittest
import django.test as dt
class HypothesisTestCase(object):
def setup_example(self):
self._pre_setup()
def teardown_example(self, example):
self._post_teardown()
def __call__(self, result=None):
testMethod = getattr(self, self._testMethodName)
if getattr(testMethod, u'is_hypothesis_test', False):
return unittest.TestCase.__call__(self, result)
else:
return dt.SimpleTestCase.__call__(self, result)
class TestCase(HypothesisTestCase, dt.TestCase):
pass
class TransactionTestCase(HypothesisTestCase, dt.TransactionTestCase):
pass
hypothesis-3.0.1/src/hypothesis/extra/django/models.py 0000664 0000000 0000000 00000007627 12661275660 0023151 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import django.db.models as dm
from django.db import IntegrityError
import hypothesis.strategies as st
import hypothesis.extra.fakefactory as ff
from hypothesis.errors import InvalidArgument
from hypothesis.extra.datetime import datetimes
from hypothesis.searchstrategy.strategies import SearchStrategy
class ModelNotSupported(Exception):
pass
def referenced_models(model, seen=None):
if seen is None:
seen = set()
for f in model._meta.concrete_fields:
if isinstance(f, dm.ForeignKey):
t = f.rel.to
if t not in seen:
seen.add(t)
referenced_models(t, seen)
return seen
__default_field_mappings = None
def field_mappings():
global __default_field_mappings
if __default_field_mappings is None:
__default_field_mappings = {
dm.SmallIntegerField: st.integers(-32768, 32767),
dm.IntegerField: st.integers(-2147483648, 2147483647),
dm.BigIntegerField:
st.integers(-9223372036854775808, 9223372036854775807),
dm.PositiveIntegerField: st.integers(0, 2147483647),
dm.PositiveSmallIntegerField: st.integers(0, 32767),
dm.BinaryField: st.binary(),
dm.BooleanField: st.booleans(),
dm.CharField: st.text(),
dm.TextField: st.text(),
dm.DateTimeField: datetimes(allow_naive=False),
dm.EmailField: ff.fake_factory(u'email'),
dm.FloatField: st.floats(),
dm.NullBooleanField: st.one_of(st.none(), st.booleans()),
}
return __default_field_mappings
def add_default_field_mapping(field_type, strategy):
field_mappings()[field_type] = strategy
def models(model, **extra):
result = {}
mappings = field_mappings()
mandatory = set()
for f in model._meta.concrete_fields:
if isinstance(f, dm.AutoField):
continue
try:
mapped = mappings[type(f)]
except KeyError:
if not f.null:
mandatory.add(f.name)
continue
if f.null:
mapped = st.one_of(st.none(), mapped)
result[f.name] = mapped
missed = {x for x in mandatory if x not in extra}
if missed:
raise InvalidArgument((
u'Missing arguments for mandatory field%s %s for model %s' % (
u's' if len(missed) > 1 else u'',
u', '.join(missed),
model.__name__,
)))
for k, v in extra.items():
if isinstance(v, SearchStrategy):
result[k] = v
else:
result[k] = st.just(v)
return ModelStrategy(model, result)
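# Illustrative sketch (not part of the library source): ``Company`` is a
# hypothetical Django model. Fields without a default mapping above must be
# supplied as keyword arguments, either as strategies or as literal values.
#
#     company_strategy = models(Company, name=st.text(min_size=1))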
class ModelStrategy(SearchStrategy):
def __init__(self, model, mappings):
super(ModelStrategy, self).__init__()
self.model = model
self.arg_strategy = st.fixed_dictionaries(mappings)
def __repr__(self):
return u'ModelStrategy(%s)' % (self.model.__name__,)
def do_draw(self, data):
try:
result, _ = self.model.objects.get_or_create(
**self.arg_strategy.do_draw(data)
)
return result
except IntegrityError:
data.mark_invalid()
hypothesis-3.0.1/src/hypothesis/extra/fakefactory.py 0000664 0000000 0000000 00000006310 12661275660 0022706 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import random as globalrandom
from random import Random
import faker
from faker.factory import AVAILABLE_LOCALES
from hypothesis.internal.compat import text_type
from hypothesis.internal.reflection import check_valid_identifier
from hypothesis.searchstrategy.strategies import SearchStrategy
def fake_factory(source, locale=None, locales=None, providers=()):
check_valid_identifier(source)
if source[0] == u'_':
raise ValueError(u'Bad source name %s' % (source,))
if locale is not None and locales is not None:
raise ValueError(u'Cannot specify both single and multiple locales')
if locale:
locales = (locale,)
elif locales:
locales = tuple(locales)
else:
locales = None
for l in (locales or ()):
if l not in AVAILABLE_LOCALES:
raise ValueError(u'Unsupported locale %r' % (l,))
def supports_source(locale):
test_faker = faker.Faker(locale)
for provider in providers:
test_faker.add_provider(provider)
return hasattr(test_faker, source)
if locales is None:
locales = list(filter(supports_source, AVAILABLE_LOCALES))
if not locales:
raise ValueError(u'No such source %r' % (source,))
else:
for l in locales:
            if not supports_source(l):
raise ValueError(u'Unsupported source %s for locale %s' % (
source, l
))
return FakeFactoryStrategy(source, providers, locales)
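# Illustrative usage (not part of the library source). The locale below is
# only an example and assumes the installed fake-factory version ships it.
#
#     fake_factory(u'email').example()
#     fake_factory(u'name', locale=u'en_GB').example()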
class FakeFactoryStrategy(SearchStrategy):
def __init__(self, source, providers, locales):
self.source = source
self.providers = tuple(providers)
self.locales = tuple(locales)
self.factories = {}
def do_draw(self, data):
seed = data.draw_bytes(4)
random = Random(bytes(seed))
return self.gen_example(random)
def factory_for(self, locale):
try:
return self.factories[locale]
except KeyError:
pass
factory = faker.Faker(locale=locale)
self.factories[locale] = factory
for p in self.providers:
factory.add_provider(p)
return factory
def gen_example(self, random):
factory = self.factory_for(random.choice(self.locales))
original = globalrandom.getstate()
seed = random.getrandbits(128)
try:
factory.seed(seed)
return text_type(getattr(factory, self.source)())
finally:
globalrandom.setstate(original)
hypothesis-3.0.1/src/hypothesis/extra/numpy.py 0000664 0000000 0000000 00000005367 12661275660 0021573 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import operator
import numpy as np
import hypothesis.strategies as st
from hypothesis.searchstrategy import SearchStrategy
from hypothesis.internal.compat import hrange, reduce, text_type, \
binary_type
def from_dtype(dtype):
if dtype.kind == u'b':
result = st.booleans()
elif dtype.kind == u'f':
result = st.floats()
elif dtype.kind == u'c':
result = st.complex_numbers()
elif dtype.kind in (u'S', u'a', u'V'):
result = st.binary()
elif dtype.kind == u'u':
result = st.integers(
            min_value=0, max_value=(1 << (8 * dtype.itemsize)) - 1)
elif dtype.kind == u'i':
        min_integer = -(1 << (8 * dtype.itemsize - 1))
result = st.integers(min_value=min_integer, max_value=-min_integer - 1)
elif dtype.kind == u'U':
result = st.text()
else:
raise NotImplementedError(
u'No strategy implementation for %r' % (dtype,)
)
return result.map(dtype.type)
class ArrayStrategy(SearchStrategy):
def __init__(self, element_strategy, shape, dtype):
self.shape = tuple(shape)
assert shape
self.array_size = reduce(operator.mul, shape)
self.dtype = dtype
self.element_strategy = element_strategy
def do_draw(self, data):
result = np.zeros(dtype=self.dtype, shape=self.array_size)
for i in hrange(self.array_size):
result[i] = self.element_strategy.do_draw(data)
return result.reshape(self.shape)
def is_scalar(spec):
return spec in (
int, bool, text_type, binary_type, float, complex
)
def arrays(dtype, shape, elements=None):
if not isinstance(dtype, np.dtype):
dtype = np.dtype(dtype)
if elements is None:
elements = from_dtype(dtype)
if isinstance(shape, int):
shape = (shape,)
shape = tuple(shape)
if not shape:
if dtype.kind != u'O':
return elements
else:
return ArrayStrategy(
shape=shape,
dtype=dtype,
element_strategy=elements
)
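# Illustrative usage (not part of the library source):
#
#     arrays(np.uint8, (2, 3)).example()                        # 2x3 uint8 array
#     arrays('float64', 5, elements=st.floats(0, 1)).example()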
hypothesis-3.0.1/src/hypothesis/extra/pytestplugin.py 0000664 0000000 0000000 00000005040 12661275660 0023156 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis.reporting import default as default_reporter
PYTEST_VERSION = tuple(map(int, pytest.__version__.split('.')[:3]))
LOAD_PROFILE_OPTION = '--hypothesis-profile'
if PYTEST_VERSION >= (2, 7, 0):
class StoringReporter(object):
def __init__(self, config):
self.config = config
self.results = []
def __call__(self, msg):
if self.config.getoption('capture', 'fd') == 'no':
default_reporter(msg)
self.results.append(msg)
def pytest_addoption(parser):
parser.addoption(
LOAD_PROFILE_OPTION,
action='store',
help='Load in a registered hypothesis.settings profile'
)
def pytest_configure(config):
from hypothesis import settings
profile = config.getoption(LOAD_PROFILE_OPTION)
if profile:
settings.load_profile(profile)
@pytest.mark.hookwrapper
def pytest_pyfunc_call(pyfuncitem):
from hypothesis.reporting import with_reporter
store = StoringReporter(pyfuncitem.config)
with with_reporter(store):
yield
if store.results:
pyfuncitem.hypothesis_report_information = list(store.results)
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
report = (yield).get_result()
if hasattr(item, 'hypothesis_report_information'):
report.sections.append((
'Hypothesis',
'\n'.join(item.hypothesis_report_information)
))
def pytest_collection_modifyitems(items):
for item in items:
if not isinstance(item, pytest.Function):
continue
if getattr(item.function, 'is_hypothesis_test', False):
item.add_marker('hypothesis')
def load():
pass
hypothesis-3.0.1/src/hypothesis/internal/ 0000775 0000000 0000000 00000000000 12661275660 0020527 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/internal/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0022633 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/src/hypothesis/internal/charmap.py 0000664 0000000 0000000 00000010247 12661275660 0022520 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import gzip
import pickle
import unicodedata
from hypothesis.configuration import storage_directory
from hypothesis.internal.compat import hunichr
def charmap_file():
return os.path.join(
storage_directory('unicodedata', unicodedata.unidata_version),
'charmap.pickle.gz'
)
_charmap = None
def charmap():
global _charmap
if _charmap is None:
f = charmap_file()
if not os.path.exists(f):
_charmap = {}
for i in range(0, sys.maxunicode + 1):
cat = unicodedata.category(hunichr(i))
rs = _charmap.setdefault(cat, [])
if rs and rs[-1][-1] == i - 1:
rs[-1][-1] += 1
else:
rs.append([i, i])
            # We explicitly set the mtime to an arbitrary value so as to get
# a stable format for our charmap.
data = sorted(
(k, tuple((map(tuple, v))))
for k, v in _charmap.items())
with gzip.GzipFile(f, 'wb', mtime=1) as o:
o.write(pickle.dumps(data, pickle.HIGHEST_PROTOCOL))
with gzip.open(f, 'rb') as i:
_charmap = dict(pickle.loads(i.read()))
assert _charmap is not None
return _charmap
_categories = None
def categories():
global _categories
if _categories is None:
cm = charmap()
_categories = sorted(
cm.keys(), key=lambda c: len(cm[c])
)
_categories.remove('Cc')
_categories.remove('Cs')
_categories.append('Cc')
_categories.append('Cs')
return _categories
def _union_interval_lists(x, y):
if not x:
return y
if not y:
return x
intervals = sorted(x + y, reverse=True)
result = [intervals.pop()]
while intervals:
u, v = intervals.pop()
a, b = result[-1]
if u <= b + 1:
result[-1] = (a, v)
else:
result.append((u, v))
return tuple(result)
category_index_cache = {
(): (),
}
def _category_key(exclude, include):
cs = categories()
if include is None:
include = set(cs)
else:
include = set(include)
exclude = set(exclude or ())
include -= exclude
result = tuple(c for c in cs if c in include)
return result
def _query_for_key(key):
try:
return category_index_cache[key]
except KeyError:
pass
assert key
cs = categories()
if len(key) == len(cs):
result = ((0, sys.maxunicode),)
else:
result = _union_interval_lists(
_query_for_key(key[:-1]), charmap()[key[-1]]
)
category_index_cache[key] = result
return result
limited_category_index_cache = {}
def query(
exclude_categories=(), include_categories=None,
min_codepoint=None, max_codepoint=None
):
if min_codepoint is None:
min_codepoint = 0
if max_codepoint is None:
max_codepoint = sys.maxunicode
catkey = _category_key(exclude_categories, include_categories)
qkey = (catkey, min_codepoint, max_codepoint)
try:
return limited_category_index_cache[qkey]
except KeyError:
pass
base = _query_for_key(catkey)
result = []
for u, v in base:
if v >= min_codepoint and u <= max_codepoint:
result.append((
max(u, min_codepoint), min(v, max_codepoint)
))
result = tuple(result)
limited_category_index_cache[qkey] = result
return result
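# Illustrative usage (not part of the library source): query() returns a tuple
# of inclusive (start, end) codepoint intervals, e.g. uppercase letters
# restricted to ASCII:
#
#     query(include_categories=('Lu',), max_codepoint=127)   # -> ((65, 90),)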
hypothesis-3.0.1/src/hypothesis/internal/classmap.py 0000664 0000000 0000000 00000002326 12661275660 0022707 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
class ClassMap(object):
def __init__(self):
self.data = {}
def all_mappings(self, key):
for c in type.mro(key):
try:
yield self.data[c]
except KeyError:
pass
def __getitem__(self, key):
try:
return self.data[key]
except KeyError:
for m in self.all_mappings(key):
return m
raise KeyError(key)
def __setitem__(self, key, value):
self.data[key] = value
hypothesis-3.0.1/src/hypothesis/internal/compat.py 0000664 0000000 0000000 00000025006 12661275660 0022367 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
# pylint: skip-file
from __future__ import division, print_function, absolute_import
import os
import re
import sys
import math
import codecs
import platform
import importlib
from decimal import Context, Decimal, Inexact
from collections import namedtuple
try:
from collections import OrderedDict, Counter
except ImportError: # pragma: no cover
from ordereddict import OrderedDict
from counter import Counter
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
PYPY = platform.python_implementation() == 'PyPy'
PY26 = sys.version_info[:2] == (2, 6)
NO_ARGSPEC = sys.version_info[:2] >= (3, 5)
HAS_SIGNATURE = sys.version_info[:2] >= (3, 3)
WINDOWS = platform.system() == 'Windows'
if PY26:
_special_floats = {
float(u'inf'): Decimal(u'Infinity'),
float(u'-inf'): Decimal(u'-Infinity'),
}
def float_to_decimal(f):
"""Convert a floating point number to a Decimal with no loss of
information."""
if f in _special_floats:
return _special_floats[f]
elif math.isnan(f):
return Decimal(u'NaN')
n, d = f.as_integer_ratio()
numerator, denominator = Decimal(n), Decimal(d)
ctx = Context(prec=60)
result = ctx.divide(numerator, denominator)
while ctx.flags[Inexact]:
ctx.flags[Inexact] = False
ctx.prec *= 2
result = ctx.divide(numerator, denominator)
return result
else:
def float_to_decimal(f):
return Decimal(f)
if PY3:
def str_to_bytes(s):
return s.encode(a_good_encoding())
def int_to_text(i):
return str(i)
text_type = str
binary_type = bytes
hrange = range
ARG_NAME_ATTRIBUTE = 'arg'
integer_types = (int,)
hunichr = chr
from functools import reduce
def unicode_safe_repr(x):
return repr(x)
def isidentifier(s):
return s.isidentifier()
def escape_unicode_characters(s):
return codecs.encode(s, 'unicode_escape').decode('ascii')
exec("""
def quiet_raise(exc):
raise exc from None
""")
def int_from_bytes(data):
return int.from_bytes(data, 'big')
def int_to_bytes(i, size):
return i.to_bytes(size, 'big')
def bytes_from_list(ls):
return bytes(ls)
def to_bytes_sequence(ls):
return bytes(ls)
def zero_byte_sequence(n):
return bytes(n)
else:
import struct
def zero_byte_sequence(n):
return b'\0' * n
def int_from_bytes(data):
assert isinstance(data, bytearray)
result = 0
i = 0
while i + 4 <= len(data):
result <<= 32
result |= struct.unpack('>I', data[i:i + 4])[0]
i += 4
while i < len(data):
result <<= 8
result |= data[i]
i += 1
return int(result)
def int_to_bytes(i, size):
assert i >= 0
result = bytearray(size)
j = size - 1
while i and j >= 0:
result[j] = i & 255
i >>= 8
j -= 1
if i:
raise OverflowError('int too big to convert')
return hbytes(result)
def bytes_from_list(ls):
return bytes(bytearray(ls))
def to_bytes_sequence(ls):
return bytearray(ls)
def str_to_bytes(s):
return s
def int_to_text(i):
return str(i).decode('ascii')
VALID_PYTHON_IDENTIFIER = re.compile(
r"^[a-zA-Z_][a-zA-Z0-9_]*$"
)
def isidentifier(s):
return VALID_PYTHON_IDENTIFIER.match(s)
def unicode_safe_repr(x):
r = repr(x)
assert isinstance(r, str)
return r.decode(a_good_encoding())
text_type = unicode
binary_type = str
def hrange(start_or_finish, finish=None, step=None):
try:
if step is None:
if finish is None:
return xrange(start_or_finish)
else:
return xrange(start_or_finish, finish)
else:
return xrange(start_or_finish, finish, step)
except OverflowError:
if step == 0:
raise ValueError(u'step argument may not be zero')
if step is None:
step = 1
if finish is not None:
start = start_or_finish
else:
start = 0
finish = start_or_finish
assert step != 0
if step > 0:
def shimrange():
i = start
while i < finish:
yield i
i += step
else:
def shimrange():
i = start
while i > finish:
yield i
i += step
return shimrange()
ARG_NAME_ATTRIBUTE = 'id'
integer_types = (int, long)
hunichr = unichr
reduce = reduce
def escape_unicode_characters(s):
return codecs.encode(s, 'string_escape')
def quiet_raise(exc):
raise exc
def a_good_encoding():
return 'utf-8'
def to_unicode(x):
if isinstance(x, text_type):
return x
else:
return x.decode(a_good_encoding())
def qualname(f):
try:
return f.__qualname__
except AttributeError:
pass
try:
return f.im_class.__name__ + '.' + f.__name__
except AttributeError:
return f.__name__
FakeArgSpec = namedtuple(
'ArgSpec', ('args', 'varargs', 'keywords', 'defaults'))
def signature_argspec(f):
from inspect import signature, Parameter, _empty
try:
if NO_ARGSPEC:
sig = signature(f, follow_wrapped=False)
else:
sig = signature(f)
except ValueError:
raise TypeError('unsupported callable')
args = list(
k
for k, v in sig.parameters.items()
if v.kind in (
Parameter.POSITIONAL_ONLY, Parameter.POSITIONAL_OR_KEYWORD))
varargs = None
keywords = None
for k, v in sig.parameters.items():
if v.kind == Parameter.VAR_POSITIONAL:
varargs = k
elif v.kind == Parameter.VAR_KEYWORD:
keywords = k
defaults = []
for a in reversed(args):
default = sig.parameters[a].default
if default is _empty:
break
else:
defaults.append(default)
if defaults:
defaults = tuple(reversed(defaults))
else:
defaults = None
return FakeArgSpec(args, varargs, keywords, defaults)
if NO_ARGSPEC:
getargspec = signature_argspec
ArgSpec = FakeArgSpec
else:
from inspect import getargspec, ArgSpec
importlib_invalidate_caches = getattr(
importlib, 'invalidate_caches', lambda: ())
if PY2:
CODE_FIELD_ORDER = [
'co_argcount',
'co_nlocals',
'co_stacksize',
'co_flags',
'co_code',
'co_consts',
'co_names',
'co_varnames',
'co_filename',
'co_name',
'co_firstlineno',
'co_lnotab',
'co_freevars',
'co_cellvars',
]
else:
CODE_FIELD_ORDER = [
'co_argcount',
'co_kwonlyargcount',
'co_nlocals',
'co_stacksize',
'co_flags',
'co_code',
'co_consts',
'co_names',
'co_varnames',
'co_filename',
'co_name',
'co_firstlineno',
'co_lnotab',
'co_freevars',
'co_cellvars',
]
def update_code_location(code, newfile, newlineno):
"""Take a code object and lie shamelessly about where it comes from.
Why do we want to do this? It's for really shallow reasons involving
hiding the hypothesis_temporary_module code from test runners like
py.test's verbose mode. This is a vastly disproportionate terrible
hack that I've done purely for vanity, and if you're reading this
code you're probably here because it's broken something and now
you're angry at me. Sorry.
"""
unpacked = [
getattr(code, name) for name in CODE_FIELD_ORDER
]
unpacked[CODE_FIELD_ORDER.index('co_filename')] = newfile
unpacked[CODE_FIELD_ORDER.index('co_firstlineno')] = newlineno
return type(code)(*unpacked)
class compatbytes(bytearray):
__name__ = 'bytes'
def __init__(self, *args, **kwargs):
bytearray.__init__(self, *args, **kwargs)
self.__hash = None
def __str__(self):
return bytearray.__str__(self)
def __repr__(self):
return 'compatbytes(b%r)' % (str(self),)
def __hash__(self):
if self.__hash is None:
self.__hash = hash(str(self))
return self.__hash
def count(self, value):
c = 0
for w in self:
if w == value:
c += 1
return c
def index(self, value):
for i, v in enumerate(self):
if v == value:
return i
raise ValueError('Value %r not in sequence %r' % (value, self))
def __add__(self, value):
return compatbytes(bytearray.__add__(self, value))
def __radd__(self, value):
return compatbytes(bytearray.__radd__(self, value))
def __mul__(self, value):
return compatbytes(bytearray.__mul__(self, value))
def __rmul__(self, value):
return compatbytes(bytearray.__rmul__(self, value))
def __getitem__(self, *args, **kwargs):
r = bytearray.__getitem__(self, *args, **kwargs)
if isinstance(r, bytearray):
return compatbytes(r)
else:
return r
__setitem__ = None
def join(self, parts):
result = bytearray()
first = True
for p in parts:
if not first:
result.extend(self)
first = False
result.extend(p)
return compatbytes(result)
def __contains__(self, value):
return any(v == value for v in self)
if PY2:
hbytes = compatbytes
reasonable_byte_type = bytearray
else:
hbytes = bytes
reasonable_byte_type = bytes
hypothesis-3.0.1/src/hypothesis/internal/conjecture/ 0000775 0000000 0000000 00000000000 12661275660 0022670 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/internal/conjecture/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0024774 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/src/hypothesis/internal/conjecture/data.py 0000664 0000000 0000000 00000012410 12661275660 0024151 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from enum import IntEnum
from hypothesis.errors import Frozen, InvalidArgument
from hypothesis.internal.compat import hbytes, text_type, int_to_bytes, \
unicode_safe_repr, reasonable_byte_type
def uniform(random, n):
return int_to_bytes(random.getrandbits(n * 8), n)
class Status(IntEnum):
OVERRUN = 0
INVALID = 1
VALID = 2
INTERESTING = 3
class StopTest(BaseException):
def __init__(self, testcounter):
super(StopTest, self).__init__(repr(testcounter))
self.testcounter = testcounter
global_test_counter = 0
class TestData(object):
@classmethod
def for_buffer(self, buffer):
return TestData(
max_length=len(buffer),
draw_bytes=lambda data, n, distribution:
buffer[data.index:data.index + n]
)
def __init__(self, max_length, draw_bytes):
self.max_length = max_length
self.is_find = False
self._draw_bytes = draw_bytes
self.overdraw = 0
self.level = 0
self.block_starts = {}
self.blocks = []
self.buffer = bytearray()
self.output = u''
self.status = Status.VALID
self.frozen = False
self.intervals_by_level = []
self.intervals = []
self.interval_stack = []
global global_test_counter
self.testcounter = global_test_counter
global_test_counter += 1
def __assert_not_frozen(self, name):
if self.frozen:
raise Frozen(
'Cannot call %s on frozen TestData' % (
name,))
@property
def index(self):
return len(self.buffer)
def note(self, value):
self.__assert_not_frozen('note')
if not isinstance(value, text_type):
value = unicode_safe_repr(value)
self.output += value
def draw(self, strategy):
if self.is_find and not strategy.supports_find:
raise InvalidArgument((
'Cannot use strategy %r within a call to find (presumably '
'because it would be invalid after the call had ended).'
) % (strategy,))
self.start_example()
try:
return strategy.do_draw(self)
finally:
if not self.frozen:
self.stop_example()
def start_example(self):
self.__assert_not_frozen('start_example')
self.interval_stack.append(self.index)
self.level += 1
def stop_example(self):
self.__assert_not_frozen('stop_example')
self.level -= 1
while self.level >= len(self.intervals_by_level):
self.intervals_by_level.append([])
k = self.interval_stack.pop()
if k != self.index:
t = (k, self.index)
self.intervals_by_level[self.level].append(t)
if not self.intervals or self.intervals[-1] != t:
self.intervals.append(t)
def freeze(self):
if self.frozen:
assert isinstance(self.buffer, hbytes)
return
self.frozen = True
# Intervals are sorted as longest first, then by interval start.
for l in self.intervals_by_level:
for i in range(len(l) - 1):
if l[i][1] == l[i + 1][0]:
self.intervals.append((l[i][0], l[i + 1][1]))
self.intervals = sorted(
set(self.intervals),
key=lambda se: (se[0] - se[1], se[0])
)
self.buffer = hbytes(self.buffer)
del self._draw_bytes
def draw_bytes(self, n, distribution=uniform):
if n == 0:
return hbytes(b'')
self.__assert_not_frozen('draw_bytes')
initial = self.index
if self.index + n > self.max_length:
self.overdraw = self.index + n - self.max_length
self.status = Status.OVERRUN
self.freeze()
raise StopTest(self.testcounter)
result = self._draw_bytes(self, n, distribution)
self.block_starts.setdefault(n, []).append(initial)
self.blocks.append((initial, initial + n))
assert len(result) == n
assert self.index == initial
self.buffer.extend(result)
self.intervals.append((initial, self.index))
return reasonable_byte_type(result)
def mark_interesting(self):
self.__assert_not_frozen('mark_interesting')
self.status = Status.INTERESTING
self.freeze()
raise StopTest(self.testcounter)
def mark_invalid(self):
self.__assert_not_frozen('mark_invalid')
self.status = Status.INVALID
self.freeze()
raise StopTest(self.testcounter)
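# Illustrative sketch (not part of the library source): replaying a fixed
# buffer through a TestData object, much as the engine does when shrinking.
#
#     data = TestData.for_buffer(hbytes([1, 2, 3, 4]))
#     first = data.draw_bytes(2)    # the first two bytes of the buffer
#     data.freeze()
#     assert data.buffer == hbytes([1, 2])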
hypothesis-3.0.1/src/hypothesis/internal/conjecture/engine.py 0000664 0000000 0000000 00000035367 12661275660 0024525 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
from random import Random, getrandbits
from hypothesis import settings as Settings
from hypothesis.reporting import debug_report
from hypothesis.internal.compat import hbytes, hrange, Counter, \
text_type, bytes_from_list, to_bytes_sequence, unicode_safe_repr
from hypothesis.internal.conjecture.data import Status, StopTest, TestData
from hypothesis.internal.conjecture.minimizer import minimize
class RunIsComplete(Exception):
pass
class TestRunner(object):
def __init__(
self, test_function, settings=None, random=None,
database_key=None,
):
self._test_function = test_function
self.settings = settings or Settings()
self.last_data = None
self.changed = 0
self.shrinks = 0
self.examples_considered = 0
self.iterations = 0
self.valid_examples = 0
self.start_time = time.time()
self.random = random or Random(getrandbits(128))
self.database_key = database_key
def new_buffer(self):
self.last_data = TestData(
max_length=self.settings.buffer_size,
draw_bytes=lambda data, n, distribution:
distribution(self.random, n)
)
self.test_function(self.last_data)
self.last_data.freeze()
self.note_for_corpus(self.last_data)
def test_function(self, data):
self.iterations += 1
try:
self._test_function(data)
data.freeze()
except StopTest as e:
if e.testcounter != data.testcounter:
self.save_buffer(data.buffer)
raise e
except:
self.save_buffer(data.buffer)
raise
if data.status >= Status.VALID:
self.valid_examples += 1
def consider_new_test_data(self, data):
# Transition rules:
# 1. Transition cannot decrease the status
# 2. Any transition which increases the status is valid
# 3. If the previous status was interesting, only shrinking
# transitions are allowed.
if self.last_data.status < data.status:
return True
if self.last_data.status > data.status:
return False
if data.status == Status.INVALID:
return data.index >= self.last_data.index
if data.status == Status.OVERRUN:
return data.overdraw <= self.last_data.overdraw
if data.status == Status.INTERESTING:
assert len(data.buffer) <= len(self.last_data.buffer)
if len(data.buffer) == len(self.last_data.buffer):
assert data.buffer < self.last_data.buffer
return True
return True
def save_buffer(self, buffer):
if (
self.settings.database is not None and
self.database_key is not None
):
self.settings.database.save(
self.database_key, hbytes(buffer)
)
def note_for_corpus(self, data):
if data.status == Status.INTERESTING:
self.save_buffer(data.buffer)
def debug(self, message):
with self.settings:
debug_report(message)
def debug_data(self, data):
self.debug(u'%d bytes %s -> %s, %s' % (
data.index,
unicode_safe_repr(list(data.buffer[:data.index])),
unicode_safe_repr(data.status),
data.output,
))
def incorporate_new_buffer(self, buffer):
assert self.last_data.status == Status.INTERESTING
if (
self.settings.timeout > 0 and
time.time() >= self.start_time + self.settings.timeout
):
raise RunIsComplete()
self.examples_considered += 1
buffer = buffer[:self.last_data.index]
if sort_key(buffer) >= sort_key(self.last_data.buffer):
return False
assert sort_key(buffer) <= sort_key(self.last_data.buffer)
data = TestData.for_buffer(buffer)
self.test_function(data)
data.freeze()
self.note_for_corpus(data)
if data.status >= self.last_data.status:
self.debug_data(data)
if self.consider_new_test_data(data):
self.shrinks += 1
self.last_data = data
if self.shrinks >= self.settings.max_shrinks:
raise RunIsComplete()
self.last_data = data
self.changed += 1
return True
return False
def run(self):
with self.settings:
try:
self._run()
except RunIsComplete:
pass
self.debug(
u'Run complete after %d examples (%d valid) and %d shrinks' % (
self.iterations, self.valid_examples, self.shrinks,
))
def _new_mutator(self):
def draw_new(data, n, distribution):
return distribution(self.random, n)
def draw_existing(data, n, distribution):
return self.last_data.buffer[data.index:data.index + n]
def draw_smaller(data, n, distribution):
existing = self.last_data.buffer[data.index:data.index + n]
r = distribution(self.random, n)
if r <= existing:
return r
return _draw_predecessor(self.random, existing)
def draw_larger(data, n, distribution):
existing = self.last_data.buffer[data.index:data.index + n]
r = distribution(self.random, n)
if r >= existing:
return r
return _draw_successor(self.random, existing)
def reuse_existing(data, n, distribution):
choices = data.block_starts.get(n, []) or \
self.last_data.block_starts.get(n, [])
if choices:
i = self.random.choice(choices)
return self.last_data.buffer[i:i + n]
else:
return distribution(self.random, n)
def flip_bit(data, n, distribution):
buf = bytearray(
self.last_data.buffer[data.index:data.index + n])
i = self.random.randint(0, n - 1)
k = self.random.randint(0, 7)
buf[i] ^= (1 << k)
return hbytes(buf)
def draw_zero(data, n, distribution):
return b'\0' * n
def draw_constant(data, n, distribution):
return bytes_from_list([
self.random.randint(0, 255)
] * n)
options = [
draw_new,
reuse_existing, reuse_existing,
draw_existing, draw_smaller, draw_larger,
flip_bit, draw_zero, draw_constant,
]
bits = [
self.random.choice(options) for _ in hrange(3)
]
def draw_mutated(data, n, distribution):
if (
data.index + n > len(self.last_data.buffer)
):
return distribution(self.random, n)
return self.random.choice(bits)(data, n, distribution)
return draw_mutated
def _run(self):
self.last_data = None
mutations = 0
start_time = time.time()
if (
self.settings.database is not None and
self.database_key is not None
):
corpus = sorted(
self.settings.database.fetch(self.database_key),
key=lambda d: (len(d), d)
)
for existing in corpus:
if self.valid_examples >= self.settings.max_examples:
return
if self.iterations >= max(
self.settings.max_iterations, self.settings.max_examples
):
return
data = TestData.for_buffer(existing)
self.test_function(data)
data.freeze()
self.last_data = data
if data.status < Status.VALID:
self.settings.database.delete(
self.database_key, existing)
elif data.status == Status.VALID:
                    # Incremental garbage collection! We store a lot of
# examples in the DB as we shrink: Those that stay
# interesting get kept, those that become invalid get
# dropped, but those that are merely valid gradually go
# away over time.
if self.random.randint(0, 2) == 0:
self.settings.database.delete(
self.database_key, existing)
else:
assert data.status == Status.INTERESTING
self.last_data = data
break
if (
self.last_data is None or
self.last_data.status < Status.INTERESTING
):
self.new_buffer()
mutator = self._new_mutator()
while self.last_data.status != Status.INTERESTING:
if self.valid_examples >= self.settings.max_examples:
return
if self.iterations >= max(
self.settings.max_iterations, self.settings.max_examples
):
return
if (
self.settings.timeout > 0 and
time.time() >= start_time + self.settings.timeout
):
return
if mutations >= self.settings.max_mutations:
mutations = 0
self.new_buffer()
mutator = self._new_mutator()
else:
data = TestData(
draw_bytes=mutator,
max_length=self.settings.buffer_size
)
self.test_function(data)
data.freeze()
self.note_for_corpus(data)
prev_data = self.last_data
if self.consider_new_test_data(data):
self.last_data = data
if data.status > prev_data.status:
mutations = 0
else:
mutator = self._new_mutator()
mutations += 1
data = self.last_data
assert isinstance(data.output, text_type)
self.debug_data(data)
if self.settings.max_shrinks <= 0:
return
if not self.last_data.buffer:
return
data = TestData.for_buffer(self.last_data.buffer)
self.test_function(data)
if data.status != Status.INTERESTING:
return
change_counter = -1
while self.changed > change_counter:
change_counter = self.changed
i = 0
while i < len(self.last_data.intervals):
u, v = self.last_data.intervals[i]
if not self.incorporate_new_buffer(
self.last_data.buffer[:u] +
self.last_data.buffer[v:]
):
i += 1
i = 0
while i < len(self.last_data.blocks):
u, v = self.last_data.blocks[i]
buf = self.last_data.buffer
block = buf[u:v]
n = v - u
            all_blocks = sorted(set([hbytes(n)] + [
buf[a:a + n]
for a in self.last_data.block_starts[n]
]))
better_blocks = all_blocks[:all_blocks.index(block)]
for b in better_blocks:
if self.incorporate_new_buffer(
buf[:u] + b + buf[v:]
):
break
i += 1
block_counter = -1
while block_counter < self.changed:
block_counter = self.changed
blocks = [
k for k, v in
Counter(
self.last_data.buffer[u:v]
for u, v in self.last_data.blocks).items()
if v > 1
]
for block in blocks:
parts = self.last_data.buffer.split(block)
assert self.last_data.buffer == block.join(parts)
minimize(
block,
lambda b: self.incorporate_new_buffer(
b.join(parts)),
self.random
)
i = 0
while i < len(self.last_data.blocks):
u, v = self.last_data.blocks[i]
minimize(
self.last_data.buffer[u:v],
lambda b: self.incorporate_new_buffer(
self.last_data.buffer[:u] + b +
self.last_data.buffer[v:],
), self.random
)
i += 1
i = 0
alternatives = None
while i < len(self.last_data.intervals):
if alternatives is None:
alternatives = sorted(set(
self.last_data.buffer[u:v]
for u, v in self.last_data.intervals), key=len)
u, v = self.last_data.intervals[i]
for a in alternatives:
buf = self.last_data.buffer
if (
len(a) < v - u or
(len(a) == (v - u) and a < buf[u:v])
):
if self.incorporate_new_buffer(buf[:u] + a + buf[v:]):
alternatives = None
break
i += 1
def _draw_predecessor(rnd, xs):
r = bytearray()
any_strict = False
for x in to_bytes_sequence(xs):
if not any_strict:
c = rnd.randint(0, x)
if c < x:
any_strict = True
else:
c = rnd.randint(0, 255)
r.append(c)
return hbytes(r)
def _draw_successor(rnd, xs):
r = bytearray()
any_strict = False
for x in to_bytes_sequence(xs):
if not any_strict:
c = rnd.randint(x, 255)
if c > x:
any_strict = True
else:
c = rnd.randint(0, 255)
r.append(c)
return hbytes(r)
def sort_key(buffer):
return (len(buffer), buffer)
hypothesis-3.0.1/src/hypothesis/internal/conjecture/minimizer.py 0000664 0000000 0000000 00000011134 12661275660 0025245 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.internal.compat import hbytes, hrange
"""
This module implements a lexicographic minimizer for blocks of bytearray.
That is, given a block of bytes of size n, and a predicate that accepts such
blocks, it tries to find a lexicographically minimal block of that size
that satisfies the predicate, starting from that initial starting point.
Assuming it is allowed to run to completion (which, due to the way we use it,
it actually often isn't) it makes the following guarantees, but it usually tries
to do better in practice:
1. The lexicographic predecessor (i.e. the largest block smaller than it) of
the answer is not a solution.
2. No individual byte in the solution may be lowered while holding the others
fixed.
"""
class Minimizer(object):
def __init__(self, initial, condition, random):
self.current = hbytes(initial)
self.size = len(self.current)
self.condition = condition
self.random = random
self.changes = 0
self.seen = set()
self.considerations = 0
self.duplicates = 0
def incorporate(self, buffer):
assert isinstance(buffer, hbytes)
assert len(buffer) == self.size
assert buffer <= self.current
self.considerations += 1
if buffer in self.seen:
self.duplicates += 1
return False
self.seen.add(buffer)
if self.condition(buffer):
self.current = buffer
self.changes += 1
return True
return False
def _shrink_index(self, i, c):
assert isinstance(self.current, hbytes)
assert 0 <= i < self.size
if self.current[i] <= c:
return False
if self.incorporate(
self.current[:i] + hbytes([c]) +
self.current[i + 1:]
):
return True
if i == self.size - 1:
return False
return self.incorporate(
self.current[:i] + hbytes([c, 255]) +
self.current[i + 2:]
) or self.incorporate(
self.current[:i] + hbytes([c]) +
hbytes([255] * (self.size - i - 1))
)
def run(self):
if not any(self.current):
return
if self.incorporate(hbytes(self.size)):
return
for c in hrange(max(self.current)):
if self.incorporate(
hbytes(min(b, c) for b in self.current)
):
break
change_counter = -1
while self.current and change_counter < self.changes:
change_counter = self.changes
for i in hrange(self.size):
t = self.current[i]
if t > 0:
ss = small_shrinks[self.current[i]]
for c in ss:
if self._shrink_index(i, c):
for c in hrange(self.current[i]):
if c in ss:
continue
if self._shrink_index(i, c):
break
break
# Table of useful small shrinks to apply to a number.
# The idea is that we use these first to see if shrinking is likely to work.
# If it is, we try a full shrink. In the best case scenario this speeds us
# up by a factor of about 25. It will occasionally cause us to miss
# shrinks that we could have succeeded with, but oh well. It doesn't fail any
# of our guarantees because we do try to shrink to b - 1 among other things.
small_shrinks = [
set(range(b)) for b in hrange(10)
]
for b in hrange(10, 256):
result = set()
result.add(0)
result.add(b - 1)
for i in hrange(8):
result.add(b ^ (1 << i))
result.discard(b)
assert len(result) <= 10
small_shrinks.append(sorted(result))
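# Worked example (illustrative, not part of the original source): for the
# byte value 12 (0b1100) the table built above contains zero, the
# predecessor 11, and every single-bit flip of 12, i.e.
#     small_shrinks[12] == [0, 4, 8, 11, 13, 14, 28, 44, 76, 140]
# Only the entries smaller than the current byte can actually shrink it; the
# larger ones are simply rejected by _shrink_index.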
def minimize(initial, condition, random=None):
m = Minimizer(initial, condition, random)
m.run()
return m.current
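# Illustrative usage sketch (not part of the original source); the concrete
# block and predicate below are hypothetical.
def _minimize_example():  # pragma: no cover
    # Ask for the lexicographically smallest 4-byte block whose first byte is
    # at least 5.  Per the guarantees in the module docstring this should be
    # hbytes([5, 0, 0, 0]): byte 0 cannot go below 5, the rest shrink to 0.
    result = minimize(
        hbytes([255, 255, 255, 255]),
        lambda block: block[0] >= 5,
    )
    assert result == hbytes([5, 0, 0, 0])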
hypothesis-3.0.1/src/hypothesis/internal/conjecture/utils.py 0000664 0000000 0000000 00000007223 12661275660 0024406 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
from hypothesis.internal.compat import hbytes, int_to_bytes, int_from_bytes
def n_byte_unsigned(data, n):
return int_from_bytes(data.draw_bytes(n))
def saturate(n):
bits = n.bit_length()
k = 1
while k < bits:
n |= (n >> k)
k *= 2
return n
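# Worked example (illustrative, not part of the original source): saturate()
# fills in every bit below the highest set bit, turning a range size into the
# smallest all-ones mask that covers it, e.g.
#     saturate(0b10100) == 0b11111 == 31
# integer_range() below uses this as a mask so that most drawn probes already
# fall inside [0, gap].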
def integer_range(data, lower, upper, center=None, distribution=None):
assert lower <= upper
if lower == upper:
return int(lower)
if center is None:
center = lower
center = min(max(center, lower), upper)
if distribution is None:
if lower < center < upper:
def distribution(random):
if random.randint(0, 1):
return random.randint(center, upper)
else:
return random.randint(lower, center)
else:
distribution = lambda random: random.randint(lower, upper)
gap = upper - lower
bits = gap.bit_length()
nbytes = bits // 8 + int(bits % 8 != 0)
mask = saturate(gap)
def byte_distribution(random, n):
assert n == nbytes
v = distribution(random)
if v >= center:
probe = v - center
else:
probe = upper - v
return int_to_bytes(probe, n)
probe = int_from_bytes(data.draw_bytes(nbytes, byte_distribution)) & mask
if probe <= gap:
if center == upper:
result = upper - probe
elif center == lower:
result = lower + probe
else:
if center + probe <= upper:
result = center + probe
else:
result = upper - probe
assert lower <= result <= upper
return int(result)
else:
data.mark_invalid()
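# Worked example (illustrative, not part of the original source): with
# lower=0, upper=10 and the default center of lower, gap == 10, bits == 4,
# nbytes == 1 and mask == saturate(10) == 15.  The probe is one masked byte
# in [0, 15]; values 11..15 are rejected via mark_invalid(), every accepted
# probe maps back into [lower, upper], and shrinking the underlying byte
# moves the result towards the center.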
def integer_range_with_distribution(data, lower, upper, nums):
return integer_range(
data, lower, upper, distribution=nums
)
def centered_integer_range(data, lower, upper, center):
return integer_range(
data, lower, upper, center=center
)
def choice(data, values):
return values[integer_range(data, 0, len(values) - 1)]
def geometric(data, p):
denom = math.log1p(-p)
n_bytes = 8
def distribution(random, n):
assert n == n_bytes
for _ in range(100):
try:
return int_to_bytes(int(
math.log1p(-random.random()) / denom), n)
# This is basically impossible to hit but is required for
# correctness
except OverflowError: # pragma: no cover
pass
# We got a one in a million chance 100 times in a row. Something is up.
assert False # pragma: no cover
return int_from_bytes(data.draw_bytes(n_bytes, distribution))
def boolean(data):
return bool(n_byte_unsigned(data, 1) & 1)
def biased_coin(data, p):
def distribution(random, n):
assert n == 1
return hbytes([int(random.random() <= p)])
return bool(
data.draw_bytes(1, distribution)[0] & 1
)
hypothesis-3.0.1/src/hypothesis/internal/debug.py 0000664 0000000 0000000 00000005312 12661275660 0022170 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import time
import signal
from hypothesis import settings as Settings
from hypothesis.core import find
from hypothesis.internal.reflection import proxies
class Timeout(BaseException):
pass
class CatchableTimeout(Exception):
pass
try:
signal.SIGALRM
# The tests here have a tendency to run away with themselves a bit if
# something goes wrong, so we use a relatively hard kill timeout.
def timeout(seconds=1, catchable=False):
def decorate(f):
@proxies(f)
def wrapped(*args, **kwargs):
start = time.time()
def handler(signum, frame):
if catchable:
raise CatchableTimeout(
u'Timed out after %.2fs' % (time.time() - start))
else:
raise Timeout(
u'Timed out after %.2fs' % (time.time() - start))
old_handler = signal.signal(signal.SIGALRM, handler)
signal.alarm(int(math.ceil(seconds)))
try:
return f(*args, **kwargs)
finally:
signal.signal(signal.SIGALRM, old_handler)
signal.alarm(0)
return wrapped
return decorate
except AttributeError:
# We're on an OS with no SIGALRM. Fall back to no timeout.
def timeout(seconds=1):
def decorate(f):
return f
return decorate
def minimal(
definition, condition=None,
settings=None, timeout_after=10, random=None
):
settings = Settings(
settings,
max_examples=50000,
max_iterations=100000,
max_shrinks=5000,
database=None,
timeout=timeout_after,
)
condition = condition or (lambda x: True)
@timeout(timeout_after * 1.20)
def run():
return find(
definition,
condition,
settings=settings,
random=random,
)
return run()
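# Illustrative usage sketch (not part of the original source); the strategy
# comes from the public hypothesis.strategies module.
def _minimal_example():  # pragma: no cover
    from hypothesis import strategies as st
    # With shrinking enabled this is expected to return exactly 10, and the
    # timeout decorator above aborts the search if it runs away.
    return minimal(st.integers(), lambda x: x >= 10)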
hypothesis-3.0.1/src/hypothesis/internal/floats.py 0000664 0000000 0000000 00000003013 12661275660 0022366 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import struct
def sign(x):
try:
return math.copysign(1.0, x)
except TypeError:
raise TypeError('Expected float but got %r of type %s' % (
x, type(x).__name__
))
def is_negative(x):
return sign(x) < 0
def count_between_floats(x, y):
assert x <= y
if is_negative(x):
if is_negative(y):
return float_to_int(x) - float_to_int(y) + 1
else:
return count_between_floats(x, -0.0) + count_between_floats(0.0, y)
else:
assert not is_negative(y)
return float_to_int(y) - float_to_int(x) + 1
def float_to_int(value):
return (
struct.unpack(b'!Q', struct.pack(b'!d', value))[0]
)
def int_to_float(value):
return (
struct.unpack(b'!d', struct.pack(b'!Q', value))[0]
)
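# Worked examples (illustrative, not part of the original source):
#   sign(-0.0) == -1.0 and is_negative(-0.0) is True, which ordinary
#   comparisons against 0.0 cannot detect;
#   int_to_float(float_to_int(2.5)) == 2.5, i.e. the conversions round-trip;
#   count_between_floats(1.0, 1.0) == 1, and counting across zero splits the
#   range at -0.0 / 0.0 so that both signed zeros are included.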
hypothesis-3.0.1/src/hypothesis/internal/intervalsets.py 0000664 0000000 0000000 00000005117 12661275660 0023630 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
class IntervalSet(object):
def __init__(self, intervals):
self.intervals = tuple(intervals)
self.offsets = [0]
for u, v in self.intervals:
self.offsets.append(
self.offsets[-1] + v - u + 1
)
self.size = self.offsets.pop()
def __len__(self):
return self.size
def __iter__(self):
for u, v in self.intervals:
for i in range(u, v + 1):
yield i
def __getitem__(self, i):
if i < 0:
i = self.size + i
if i < 0 or i >= self.size:
raise IndexError('Invalid index %d for [0, %d)' % (i, self.size))
# Want j = maximal such that offsets[j] <= i
j = len(self.intervals) - 1
if self.offsets[j] > i:
hi = j
lo = 0
# Invariant: offsets[lo] <= i < offsets[hi]
while lo + 1 < hi:
mid = (lo + hi) // 2
if self.offsets[mid] <= i:
lo = mid
else:
hi = mid
j = lo
t = i - self.offsets[j]
u, v = self.intervals[j]
r = u + t
assert r <= v
return r
def __repr__(self):
return 'IntervalSet(%r)' % (self.intervals,)
def index(self, value):
for offset, (u, v) in zip(self.offsets, self.intervals):
if u == value:
return offset
elif u > value:
raise ValueError('%d is not in list' % (value,))
if value <= v:
return offset + (value - u)
raise ValueError('%d is not in list' % (value,))
def index_above(self, value):
for offset, (u, v) in zip(self.offsets, self.intervals):
if u >= value:
return offset
if value <= v:
return offset + (value - u)
return self.size
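# Worked example (illustrative, not part of the original source): an
# IntervalSet behaves like a sorted, immutable sequence of the values covered
# by its inclusive intervals.
def _intervalset_example():  # pragma: no cover
    x = IntervalSet([(1, 3), (10, 12)])
    assert len(x) == 6
    assert list(x) == [1, 2, 3, 10, 11, 12]
    assert x[4] == 11             # indexing spans the gap between intervals
    assert x.index(10) == 3       # position of a value in the flattened view
    assert x.index_above(5) == 3  # first position holding a value >= 5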
hypothesis-3.0.1/src/hypothesis/internal/reflection.py 0000664 0000000 0000000 00000032477 12661275660 0023250 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This file can approximately be considered the collection of hypothesis going
to really unreasonable lengths to produce pretty output."""
from __future__ import division, print_function, absolute_import
import re
import ast
import types
import hashlib
import inspect
from types import ModuleType
from functools import wraps
from hypothesis.configuration import storage_directory
from hypothesis.internal.compat import hrange, qualname, getargspec, \
to_unicode, isidentifier, str_to_bytes, ARG_NAME_ATTRIBUTE, \
update_code_location
def fully_qualified_name(f):
"""Returns a unique identifier for f pointing to the module it was define
on, and an containing functions."""
if f.__module__ is not None:
return f.__module__ + '.' + qualname(f)
else:
return qualname(f)
def function_digest(function):
"""Returns a string that is stable across multiple invocations across
multiple processes and is prone to changing significantly in response to
minor changes to the function.
No guarantee of uniqueness though it usually will be.
"""
hasher = hashlib.md5()
try:
hasher.update(to_unicode(inspect.getsource(function)).encode('utf-8'))
# Different errors on different versions of python. What fun.
except (OSError, IOError, TypeError):
pass
try:
hasher.update(str_to_bytes(function.__name__))
except AttributeError:
pass
try:
hasher.update(function.__module__.encode('utf-8'))
except AttributeError:
pass
try:
hasher.update(str_to_bytes(repr(getargspec(function))))
except TypeError:
pass
return hasher.digest()
def convert_keyword_arguments(function, args, kwargs):
"""Returns a pair of a tuple and a dictionary which would be equivalent
passed as positional and keyword args to the function. Unless the function
has **kwargs, the dictionary will always be empty.
"""
argspec = getargspec(function)
new_args = []
kwargs = dict(kwargs)
defaults = {}
if argspec.defaults:
for name, value in zip(
argspec.args[-len(argspec.defaults):],
argspec.defaults
):
defaults[name] = value
n = max(len(args), len(argspec.args))
for i in hrange(n):
if i < len(args):
new_args.append(args[i])
else:
arg_name = argspec.args[i]
if arg_name in kwargs:
new_args.append(kwargs.pop(arg_name))
elif arg_name in defaults:
new_args.append(defaults[arg_name])
else:
raise TypeError('No value provided for argument %r' % (
arg_name
))
if kwargs and not argspec.keywords:
if len(kwargs) > 1:
raise TypeError('%s() got unexpected keyword arguments %s' % (
function.__name__, ', '.join(map(repr, kwargs))
))
else:
bad_kwarg = next(iter(kwargs))
raise TypeError('%s() got an unexpected keyword argument %r' % (
function.__name__, bad_kwarg
))
return tuple(new_args), kwargs
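# Illustrative example (not part of the original source): keyword arguments
# are folded back into positional form wherever the signature allows it.
def _convert_keyword_arguments_example():  # pragma: no cover
    def f(x, y=2):
        pass
    assert convert_keyword_arguments(f, (1,), {'y': 3}) == ((1, 3), {})
    assert convert_keyword_arguments(f, (1,), {}) == ((1, 2), {})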
def convert_positional_arguments(function, args, kwargs):
"""Return a tuple (new_args, new_kwargs) where all possible arguments have
been moved to kwargs.
new_args will only be non-empty if the function has a
variadic argument.
"""
argspec = getargspec(function)
kwargs = dict(kwargs)
if not argspec.keywords:
for k in kwargs.keys():
if k not in argspec.args:
raise TypeError(
'%s() got an unexpected keyword argument %r' % (
function.__name__, k
))
if len(args) < len(argspec.args):
for i in hrange(
len(args), len(argspec.args) - len(argspec.defaults or ())
):
if argspec.args[i] not in kwargs:
raise TypeError('No value provided for argument %s' % (
argspec.args[i],
))
if len(args) > len(argspec.args) and not argspec.varargs:
raise TypeError(
'%s() takes at most %d positional arguments (%d given)' % (
function.__name__, len(argspec.args), len(args)
)
)
for arg, name in zip(args, argspec.args):
if name in kwargs:
raise TypeError(
'%s() got multiple values for keyword argument %r' % (
function.__name__, name
))
else:
kwargs[name] = arg
return (
tuple(args[len(argspec.args):]),
kwargs,
)
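# Illustrative example (not part of the original source): the mirror image of
# convert_keyword_arguments -- everything that can be named moves to kwargs,
# and only arguments consumed by *args stay positional.
def _convert_positional_arguments_example():  # pragma: no cover
    def f(x, y=2, *extra):
        pass
    assert convert_positional_arguments(f, (1,), {}) == ((), {'x': 1})
    assert convert_positional_arguments(f, (1, 2, 3), {}) == (
        (3,), {'x': 1, 'y': 2})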
def extract_all_lambdas(tree):
lambdas = []
class Visitor(ast.NodeVisitor):
def visit_Lambda(self, node):
lambdas.append(node)
Visitor().visit(tree)
return lambdas
def args_for_lambda_ast(l):
return [getattr(n, ARG_NAME_ATTRIBUTE) for n in l.args.args]
LINE_CONTINUATION = re.compile(r"\\\n")
WHITESPACE = re.compile(r"\s+")
PROBABLY_A_COMMENT = re.compile("""#[^'"]*$""")
SPACE_FOLLOWS_OPEN_BRACKET = re.compile(r"\( ")
SPACE_PRECEDES_CLOSE_BRACKET = re.compile(r" \)")
def extract_lambda_source(f):
"""Extracts a single lambda expression from the string source. Returns a
string indicating an unknown body if it gets confused in any way.
This is not a good function and I am sorry for it. Forgive me my
sins, oh lord
"""
args = getargspec(f).args
arg_strings = []
# In Python 2 you can have destructuring arguments to functions. This
# results in an argspec with non-string values. I'm not very interested in
# handling these properly, but it's important to not crash on them.
bad_lambda = False
for a in args:
if isinstance(a, (tuple, list)): # pragma: no cover
arg_strings.append('(%s)' % (', '.join(a),))
bad_lambda = True
else:
assert isinstance(a, str)
arg_strings.append(a)
if_confused = 'lambda %s: <unknown>' % (', '.join(arg_strings),)
if bad_lambda: # pragma: no cover
return if_confused
try:
source = inspect.getsource(f)
except IOError:
return if_confused
source = LINE_CONTINUATION.sub(' ', source)
source = WHITESPACE.sub(' ', source)
source = source.strip()
try:
tree = ast.parse(source)
except SyntaxError:
for i in hrange(len(source) - 1, len('lambda'), -1):
prefix = source[:i]
if 'lambda' not in prefix:
return if_confused
try:
tree = ast.parse(prefix)
source = prefix
break
except SyntaxError:
continue
else:
return if_confused
all_lambdas = extract_all_lambdas(tree)
aligned_lambdas = [
l for l in all_lambdas
if args_for_lambda_ast(l) == args
]
if len(aligned_lambdas) != 1:
return if_confused
lambda_ast = aligned_lambdas[0]
assert lambda_ast.lineno == 1
source = source[lambda_ast.col_offset:].strip()
source = source[source.index('lambda'):]
for i in hrange(len(source), len('lambda'), -1): # pragma: no branch
try:
parsed = ast.parse(source[:i])
assert len(parsed.body) == 1
assert parsed.body
if not isinstance(parsed.body[0].value, ast.Lambda):
continue
source = source[:i]
break
except SyntaxError:
pass
lines = source.split('\n')
lines = [PROBABLY_A_COMMENT.sub('', l) for l in lines]
source = '\n'.join(lines)
source = WHITESPACE.sub(' ', source)
source = SPACE_FOLLOWS_OPEN_BRACKET.sub('(', source)
source = SPACE_PRECEDES_CLOSE_BRACKET.sub(')', source)
source = source.strip()
return source
def get_pretty_function_description(f):
if not hasattr(f, '__name__'):
return repr(f)
name = f.__name__
if name == '<lambda>':
result = extract_lambda_source(f)
return result
elif isinstance(f, types.MethodType):
self = f.__self__
if not (self is None or inspect.isclass(self)):
return '%r.%s' % (self, name)
return name
def nicerepr(v):
if inspect.isfunction(v):
return get_pretty_function_description(v)
elif isinstance(v, type):
return v.__name__
else:
return repr(v)
def arg_string(f, args, kwargs, reorder=True):
if reorder:
args, kwargs = convert_positional_arguments(f, args, kwargs)
argspec = getargspec(f)
bits = []
for a in argspec.args:
if a in kwargs:
bits.append('%s=%s' % (a, nicerepr(kwargs.pop(a))))
if kwargs:
for a in sorted(kwargs):
bits.append('%s=%s' % (a, nicerepr(kwargs[a])))
return ', '.join(
[nicerepr(x) for x in args] +
bits
)
def unbind_method(f):
"""Take something that might be a method or a function and return the
underlying function."""
return getattr(f, 'im_func', getattr(f, '__func__', f))
def check_valid_identifier(identifier):
if not isidentifier(identifier):
raise ValueError('%r is not a valid python identifier' %
(identifier,))
def eval_directory():
return storage_directory('eval_source')
eval_cache = {}
def source_exec_as_module(source):
try:
return eval_cache[source]
except KeyError:
pass
result = ModuleType('hypothesis_temporary_module_%s' % (
hashlib.sha1(str_to_bytes(source)).hexdigest(),
))
assert isinstance(source, str)
exec(source, result.__dict__)
eval_cache[source] = result
return result
COPY_ARGSPEC_SCRIPT = """
from hypothesis.utils.conventions import not_set
def accept(%(funcname)s):
def %(name)s(%(argspec)s):
return %(funcname)s(%(invocation)s)
return %(name)s
""".strip() + '\n'
def copy_argspec(name, argspec):
"""A decorator which sets the name and argspec of the function passed into
it."""
check_valid_identifier(name)
for a in argspec.args:
check_valid_identifier(a)
if argspec.varargs is not None:
check_valid_identifier(argspec.varargs)
if argspec.keywords is not None:
check_valid_identifier(argspec.keywords)
n_defaults = len(argspec.defaults or ())
if n_defaults:
parts = []
for a in argspec.args[:-n_defaults]:
parts.append(a)
for a in argspec.args[-n_defaults:]:
parts.append('%s=not_set' % (a,))
else:
parts = list(argspec.args)
used_names = list(argspec.args)
used_names.append(name)
def accept(f):
fargspec = getargspec(f)
must_pass_as_kwargs = []
invocation_parts = []
for a in argspec.args:
if a not in fargspec.args and not fargspec.varargs:
must_pass_as_kwargs.append(a)
else:
invocation_parts.append(a)
if argspec.varargs:
used_names.append(argspec.varargs)
parts.append('*' + argspec.varargs)
invocation_parts.append('*' + argspec.varargs)
for k in must_pass_as_kwargs:
invocation_parts.append('%(k)s=%(k)s' % {'k': k})
if argspec.keywords:
used_names.append(argspec.keywords)
parts.append('**' + argspec.keywords)
invocation_parts.append('**' + argspec.keywords)
candidate_names = ['f'] + [
'f_%d' % (i,) for i in hrange(1, len(used_names) + 2)
]
for funcname in candidate_names: # pragma: no branch
if funcname not in used_names:
break
base_accept = source_exec_as_module(
COPY_ARGSPEC_SCRIPT % {
'name': name,
'funcname': funcname,
'argspec': ', '.join(parts),
'invocation': ', '.join(invocation_parts)
}).accept
result = base_accept(f)
result.__defaults__ = argspec.defaults
return result
return accept
def impersonate(target):
"""Decorator to update the attributes of a function so that to external
introspectors it will appear to be the target function.
Note that this updates the function in place, it doesn't return a
new one.
"""
def accept(f):
f.__code__ = update_code_location(
f.__code__,
target.__code__.co_filename, target.__code__.co_firstlineno
)
f.__name__ = target.__name__
f.__module__ = target.__module__
f.__doc__ = target.__doc__
return f
return accept
def proxies(target):
def accept(proxy):
return impersonate(target)(wraps(target)(
copy_argspec(target.__name__, getargspec(target))(proxy)))
return accept
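# Illustrative sketch (not part of the original source): proxies() composes
# the helpers above, so the wrapper keeps the wrapped function's name,
# argspec and reported source location.
def _proxies_example():  # pragma: no cover
    def target(a, b=1):
        return a + b
    @proxies(target)
    def wrapper(a, b=1):
        return target(a, b) * 10
    assert wrapper.__name__ == 'target'
    assert wrapper(2) == 30
    assert getargspec(wrapper).args == ['a', 'b']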
hypothesis-3.0.1/src/hypothesis/reporting.py 0000664 0000000 0000000 00000003463 12661275660 0021304 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import inspect
from hypothesis._settings import settings, Verbosity
from hypothesis.internal.compat import escape_unicode_characters
from hypothesis.utils.dynamicvariables import DynamicVariable
def silent(value):
pass
def default(value):
try:
print(value)
except UnicodeEncodeError:
print(escape_unicode_characters(value))
reporter = DynamicVariable(default)
def current_reporter():
return reporter.value
def with_reporter(new_reporter):
return reporter.with_value(new_reporter)
def current_verbosity():
return settings.default.verbosity
def to_text(textish):
if inspect.isfunction(textish):
textish = textish()
if isinstance(textish, bytes):
textish = textish.decode('utf-8')
return textish
def verbose_report(text):
if current_verbosity() >= Verbosity.verbose:
current_reporter()(to_text(text))
def debug_report(text):
if current_verbosity() >= Verbosity.debug:
current_reporter()(to_text(text))
def report(text):
if current_verbosity() >= Verbosity.normal:
current_reporter()(to_text(text))
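# Illustrative usage sketch (not part of the original source): reporters are
# swapped via the dynamic variable above, e.g. to capture output in tests.
def _reporting_example():  # pragma: no cover
    messages = []
    with with_reporter(messages.append):
        report('Falsifying example: f(x=0)')
    # messages now holds the text that would otherwise have been printed,
    # provided the current verbosity is at least Verbosity.normal.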
hypothesis-3.0.1/src/hypothesis/searchstrategy/ 0000775 0000000 0000000 00000000000 12661275660 0021743 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/searchstrategy/__init__.py 0000664 0000000 0000000 00000001503 12661275660 0024053 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Package defining SearchStrategy, which is the core type that Hypothesis uses
to explore data."""
from .strategies import SearchStrategy
__all__ = [
'SearchStrategy',
]
hypothesis-3.0.1/src/hypothesis/searchstrategy/collections.py 0000664 0000000 0000000 00000015323 12661275660 0024637 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from collections import namedtuple
import hypothesis.internal.conjecture.utils as cu
from hypothesis.control import assume
from hypothesis.internal.compat import OrderedDict
from hypothesis.searchstrategy.strategies import SearchStrategy, \
one_of_strategies, MappedSearchStrategy
class TupleStrategy(SearchStrategy):
"""A strategy responsible for fixed length tuples based on heterogenous
strategies for each of their elements.
This also handles namedtuples
"""
def __init__(self,
strategies, tuple_type):
SearchStrategy.__init__(self)
strategies = tuple(strategies)
self.element_strategies = strategies
def validate(self):
for s in self.element_strategies:
s.validate()
def __repr__(self):
if len(self.element_strategies) == 1:
tuple_string = '%s,' % (repr(self.element_strategies[0]),)
else:
tuple_string = ', '.join(map(repr, self.element_strategies))
return 'TupleStrategy((%s))' % (
tuple_string,
)
def newtuple(self, xs):
"""Produce a new tuple of the correct type."""
return tuple(xs)
def do_draw(self, data):
return self.newtuple(
data.draw(e) for e in self.element_strategies
)
class ListStrategy(SearchStrategy):
"""A strategy for lists which takes an intended average length and a
strategy for each of its element types and generates lists containing any
of those element types.
The conditional distribution of the length is geometric, and the
conditional distribution of each parameter is whatever their
strategies define.
"""
Parameter = namedtuple(
'Parameter', ('child_parameter', 'average_length')
)
def __init__(
self,
strategies, average_length=50.0, min_size=0, max_size=float('inf')
):
SearchStrategy.__init__(self)
assert average_length > 0
self.average_length = average_length
strategies = tuple(strategies)
self.min_size = min_size or 0
self.max_size = max_size or float('inf')
self.element_strategy = one_of_strategies(strategies)
def validate(self):
self.element_strategy.validate()
def do_draw(self, data):
if self.max_size == self.min_size:
return [
data.draw(self.element_strategy)
for _ in range(self.min_size)
]
stopping_value = 1 - 1.0 / (1 + self.average_length)
result = []
while True:
data.start_example()
more = cu.biased_coin(data, stopping_value)
value = data.draw(self.element_strategy)
data.stop_example()
if not more:
if len(result) < self.min_size:
continue
else:
break
result.append(value)
if self.max_size < float('inf'):
result = result[:self.max_size]
return result
def __repr__(self):
return (
'ListStrategy(%r, min_size=%r, average_size=%r, max_size=%r)'
) % (
self.element_strategy, self.min_size, self.average_length,
self.max_size
)
class UniqueListStrategy(SearchStrategy):
def __init__(
self,
elements, min_size, max_size, average_size,
key
):
super(UniqueListStrategy, self).__init__()
assert min_size <= average_size <= max_size
self.min_size = min_size
self.max_size = max_size
self.average_size = average_size
self.element_strategy = elements
self.key = key
def validate(self):
self.element_strategy.validate()
Parameter = namedtuple(
'Parameter', ('parameter_seed', 'parameter')
)
def do_draw(self, data):
seen = set()
result = []
if self.max_size == self.min_size:
while len(result) < self.max_size:
v = data.draw(self.element_strategy)
k = self.key(v)
if k not in seen:
result.append(v)
seen.add(k)
return result
stopping_value = 1 - 1.0 / (1 + self.average_size)
duplicates = 0
while len(result) < self.max_size:
data.start_example()
if len(result) >= self.min_size:
more = cu.biased_coin(data, stopping_value)
else:
more = True
if not more:
data.stop_example()
break
value = data.draw(self.element_strategy)
data.stop_example()
k = self.key(value)
if k in seen:
duplicates += 1
assume(duplicates <= len(result))
continue
seen.add(k)
result.append(value)
assume(len(result) >= self.min_size)
return result
class FixedKeysDictStrategy(MappedSearchStrategy):
"""A strategy which produces dicts with a fixed set of keys, given a
strategy for each of their equivalent values.
e.g. {'foo' : some_int_strategy} would
generate dicts with the single key 'foo' mapping to some integer.
"""
def __init__(self, strategy_dict):
self.dict_type = type(strategy_dict)
if isinstance(strategy_dict, OrderedDict):
self.keys = tuple(strategy_dict.keys())
else:
try:
self.keys = tuple(sorted(
strategy_dict.keys(),
))
except TypeError:
self.keys = tuple(sorted(
strategy_dict.keys(), key=repr,
))
super(FixedKeysDictStrategy, self).__init__(
strategy=TupleStrategy(
(strategy_dict[k] for k in self.keys), tuple
)
)
def __repr__(self):
return 'FixedKeysDictStrategy(%r, %r)' % (
self.keys, self.mapped_strategy)
def pack(self, value):
return self.dict_type(zip(self.keys, value))
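# Illustrative note (not part of the original source): these classes back the
# public hypothesis.strategies constructors, roughly as follows.
#
#     >>> import hypothesis.strategies as st
#     >>> st.tuples(st.integers(), st.booleans())         # TupleStrategy
#     >>> st.lists(st.integers(), max_size=3)              # ListStrategy
#     >>> st.lists(st.integers(), unique_by=lambda x: x)   # UniqueListStrategy
#     >>> st.fixed_dictionaries({'a': st.integers()})      # FixedKeysDictStrategy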
hypothesis-3.0.1/src/hypothesis/searchstrategy/deferred.py 0000664 0000000 0000000 00000006705 12661275660 0024105 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import settings
from hypothesis.internal.compat import hrange, getargspec
from hypothesis.internal.reflection import arg_string, \
convert_keyword_arguments, convert_positional_arguments
from hypothesis.searchstrategy.strategies import SearchStrategy
def tupleize(x):
if isinstance(x, (tuple, list)):
return tuple(x)
else:
return x
class DeferredStrategy(SearchStrategy):
"""A strategy which is defined purely by conversion to and from another
strategy.
Its parameter and distribution come from that other strategy.
"""
def __init__(self, function, args, kwargs):
SearchStrategy.__init__(self)
self.__wrapped_strategy = None
self.__representation = None
self.__function = function
self.__args = tuple(map(tupleize, args))
self.__kwargs = dict(
(k, tupleize(v)) for k, v in kwargs.items()
)
self.__settings = settings.default or settings()
@property
def supports_find(self):
return self.wrapped_strategy.supports_find
@property
def wrapped_strategy(self):
if self.__wrapped_strategy is None:
with self.__settings:
self.__wrapped_strategy = self.__function(
*self.__args,
**self.__kwargs
)
return self.__wrapped_strategy
def validate(self):
w = self.wrapped_strategy
assert isinstance(w, SearchStrategy), \
'%r returned non-strategy %r' % (self, w)
w.validate()
def __repr__(self):
if self.__representation is None:
_args = self.__args
_kwargs = self.__kwargs
argspec = getargspec(self.__function)
defaults = {}
if argspec.defaults is not None:
for k in hrange(1, len(argspec.defaults) + 1):
defaults[argspec.args[-k]] = argspec.defaults[-k]
if len(argspec.args) > 1 or argspec.defaults:
_args, _kwargs = convert_positional_arguments(
self.__function, _args, _kwargs)
else:
_args, _kwargs = convert_keyword_arguments(
self.__function, _args, _kwargs)
kwargs_for_repr = dict(_kwargs)
for k, v in defaults.items():
if k in kwargs_for_repr and kwargs_for_repr[k] is defaults[k]:
del kwargs_for_repr[k]
self.__representation = '%s(%s)' % (
self.__function.__name__,
arg_string(
self.__function, _args, kwargs_for_repr, reorder=False),
)
return self.__representation
def do_draw(self, data):
return data.draw(self.wrapped_strategy)
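# Illustrative note (not part of the original source): the lazy public
# constructors in hypothesis.strategies are built on this class, so e.g.
# repr(st.integers(min_value=0)) reads 'integers(min_value=0)' rather than
# the repr of the underlying strategy, and that underlying strategy is not
# constructed until it is first drawn from or validated.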
hypothesis-3.0.1/src/hypothesis/searchstrategy/fixed.py 0000664 0000000 0000000 00000004113 12661275660 0023413 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.control import assume
from hypothesis.searchstrategy.strategies import SearchStrategy
class FixedStrategy(SearchStrategy):
def __init__(self, block_size):
self.block_size = block_size
def do_draw(self, data):
block = data.draw_bytes(self.block_size, self.distribution)
assert len(block) == self.block_size
value = self.from_bytes(block)
assume(self.is_acceptable(value))
return value
def distribution(self, random, n):
assert n == self.block_size
for _ in range(100):
value = self.draw_value(random)
if self.is_acceptable(value):
block = self.to_bytes(value)
assert len(block) == self.block_size
return block
raise AssertionError(
'After 100 tries was unable to draw a valid value. This is a bug '
'in the implementation of %s.' % (type(self).__name__,))
def draw_value(self, random):
raise NotImplementedError('%s.draw' % (
type(self).__name__,
))
def to_bytes(self, value):
raise NotImplementedError('%s.to_bytes' % (
type(self).__name__,
))
def from_bytes(self, value):
raise NotImplementedError('%s.from_bytes' % (
type(self).__name__,
))
def is_acceptable(self, value):
return True
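# Illustrative sketch (not part of the original source): a minimal subclass
# showing the template methods above.  The class itself is hypothetical;
# hbytes comes from hypothesis.internal.compat.
class _SmallEvenByteStrategy(FixedStrategy):
    """Draws a single byte whose value must be even."""
    def __init__(self):
        super(_SmallEvenByteStrategy, self).__init__(block_size=1)
    def draw_value(self, random):
        return random.randint(0, 255)
    def to_bytes(self, value):
        from hypothesis.internal.compat import hbytes
        return hbytes([value])
    def from_bytes(self, block):
        return block[0]
    def is_acceptable(self, value):
        return value % 2 == 0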
hypothesis-3.0.1/src/hypothesis/searchstrategy/flatmapped.py 0000664 0000000 0000000 00000003031 12661275660 0024427 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis._settings import settings
from hypothesis.internal.reflection import get_pretty_function_description
from hypothesis.searchstrategy.strategies import SearchStrategy
class FlatMapStrategy(SearchStrategy):
def __init__(
self, strategy, expand
):
super(FlatMapStrategy, self).__init__()
self.flatmapped_strategy = strategy
self.expand = expand
self.settings = settings.default
def __repr__(self):
if not hasattr(self, u'_cached_repr'):
self._cached_repr = u'%r.flatmap(%s)' % (
self.flatmapped_strategy, get_pretty_function_description(
self.expand))
return self._cached_repr
def do_draw(self, data):
source = data.draw(self.flatmapped_strategy)
return data.draw(self.expand(source))
hypothesis-3.0.1/src/hypothesis/searchstrategy/misc.py 0000664 0000000 0000000 00000004506 12661275660 0023255 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.internal.conjecture.utils as d
from hypothesis.types import RandomWithSeed
from hypothesis.searchstrategy.strategies import SearchStrategy, \
MappedSearchStrategy
class BoolStrategy(SearchStrategy):
"""A strategy that produces Booleans with a Bernoulli conditional
distribution."""
def __repr__(self):
return u'BoolStrategy()'
def do_draw(self, data):
return d.boolean(data)
class JustStrategy(SearchStrategy):
"""
A strategy which simply returns a single fixed value with probability 1.
"""
def __init__(self, value):
SearchStrategy.__init__(self)
self.value = value
def __repr__(self):
return 'JustStrategy(value=%r)' % (self.value,)
def do_draw(self, data):
return self.value
class RandomStrategy(MappedSearchStrategy):
"""A strategy which produces Random objects.
The conditional distribution is simply a RandomWithSeed seeded with
128 bits of data chosen uniformly at random.
"""
def pack(self, i):
return RandomWithSeed(i)
class SampledFromStrategy(SearchStrategy):
"""A strategy which samples from a set of elements. This is essentially
equivalent to using a OneOfStrategy over Just strategies but may be more
efficient and convenient.
The conditional distribution chooses uniformly at random from some
non-empty subset of the elements.
"""
def __init__(self, elements):
SearchStrategy.__init__(self)
self.elements = tuple(elements)
assert self.elements
def do_draw(self, data):
return d.choice(data, self.elements)
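# Illustrative note (not part of the original source): the public
# constructors for these classes live in hypothesis.strategies, e.g.
#
#     >>> import hypothesis.strategies as st
#     >>> st.booleans()                 # BoolStrategy
#     >>> st.just('fixed value')        # JustStrategy
#     >>> st.randoms()                  # RandomStrategy
#     >>> st.sampled_from([1, 2, 3])    # SampledFromStrategy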
hypothesis-3.0.1/src/hypothesis/searchstrategy/numbers.py 0000664 0000000 0000000 00000015624 12661275660 0024000 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import struct
from collections import namedtuple
import hypothesis.internal.conjecture.utils as d
from hypothesis.control import assume
from hypothesis.internal.compat import int_to_bytes, int_from_bytes, \
bytes_from_list
from hypothesis.internal.floats import sign
from hypothesis.searchstrategy.strategies import SearchStrategy, \
MappedSearchStrategy
class IntStrategy(SearchStrategy):
"""A generic strategy for integer types that provides the basic methods
other than produce.
Subclasses should provide the produce method.
"""
class IntegersFromStrategy(SearchStrategy):
def __init__(self, lower_bound, average_size=100000.0):
super(IntegersFromStrategy, self).__init__()
self.lower_bound = lower_bound
self.average_size = average_size
def __repr__(self):
return 'IntegersFromStrategy(%d)' % (self.lower_bound,)
def do_draw(self, data):
return int(
self.lower_bound + d.geometric(data, 1.0 / self.average_size))
class WideRangeIntStrategy(IntStrategy):
def __repr__(self):
return 'WideRangeIntStrategy()'
def do_draw(self, data):
size = 16
sign_mask = 2 ** (size * 8 - 1)
def distribution(random, n):
assert n == size
k = min(
random.randint(0, n * 8 - 1),
random.randint(0, n * 8 - 1),
)
if k > 0:
r = random.getrandbits(k)
else:
r = 0
if random.randint(0, 1):
r |= sign_mask
else:
r &= (~sign_mask)
return int_to_bytes(r, n)
byt = data.draw_bytes(size, distribution=distribution)
r = int_from_bytes(byt)
negative = r & sign_mask
r &= (~sign_mask)
if negative:
r = -r
return int(r)
class BoundedIntStrategy(SearchStrategy):
"""A strategy for providing integers in some interval with inclusive
endpoints."""
def __init__(self, start, end):
SearchStrategy.__init__(self)
self.start = start
self.end = end
def __repr__(self):
return 'BoundedIntStrategy(%d, %d)' % (self.start, self.end)
def do_draw(self, data):
return d.integer_range(data, self.start, self.end)
NASTY_FLOATS = [
0.0, 0.5, 1.0 / 3, 10e6, 10e-6, 1.175494351e-38, 2.2250738585072014e-308,
1.7976931348623157e+308, 3.402823466e+38, 9007199254740992, 1 - 10e-6,
2 + 10e-6, 1.192092896e-07, 2.2204460492503131e-016,
] + [float('inf'), float('nan')] * 5
NASTY_FLOATS.extend([-x for x in NASTY_FLOATS])
class FloatStrategy(SearchStrategy):
"""Generic superclass for strategies which produce floats."""
def __init__(self, allow_infinity, allow_nan):
SearchStrategy.__init__(self)
assert isinstance(allow_infinity, bool)
assert isinstance(allow_nan, bool)
self.allow_infinity = allow_infinity
self.allow_nan = allow_nan
def __repr__(self):
return '%s()' % (self.__class__.__name__,)
def permitted(self, f):
if not self.allow_infinity and math.isinf(f):
return False
if not self.allow_nan and math.isnan(f):
return False
return True
def do_draw(self, data):
def draw_float_bytes(random, n):
assert n == 8
while True:
i = random.randint(1, 10)
if i <= 4:
f = random.choice(NASTY_FLOATS)
elif i == 5:
return bytes_from_list(
random.randint(0, 255) for _ in range(8))
elif i == 6:
f = random.random() * (
random.randint(0, 1) * 2 - 1
)
elif i == 7:
f = random.gauss(0, 1)
elif i == 8:
f = float(random.randint(-2 ** 63, 2 ** 63))
else:
f = random.gauss(
random.randint(-2 ** 63, 2 ** 63), 1
)
if self.permitted(f):
return struct.pack(b'!d', f)
result = struct.unpack(b'!d', bytes(
data.draw_bytes(8, draw_float_bytes)))[0]
assume(self.permitted(result))
return result
def float_order_key(k):
return (sign(k), k)
class FixedBoundedFloatStrategy(SearchStrategy):
"""A strategy for floats distributed between two endpoints.
The conditional distribution tries to produce values clustered
closer to one of the ends.
"""
Parameter = namedtuple(
'Parameter',
('cut', 'leftwards')
)
def __init__(self, lower_bound, upper_bound):
SearchStrategy.__init__(self)
self.lower_bound = float(lower_bound)
self.upper_bound = float(upper_bound)
lb = float_order_key(self.lower_bound)
ub = float_order_key(self.upper_bound)
self.critical = [
z for z in (-0.0, 0.0)
if lb <= float_order_key(z) <= ub
]
self.critical.append(self.lower_bound)
self.critical.append(self.upper_bound)
def __repr__(self):
return 'FixedBoundedFloatStrategy(%s, %s)' % (
self.lower_bound, self.upper_bound,
)
def do_draw(self, data):
def draw_float_bytes(random, n):
assert n == 8
i = random.randint(0, 20)
if i <= 2:
f = random.choice(self.critical)
else:
f = random.random() * (
self.upper_bound - self.lower_bound
) + self.lower_bound
return struct.pack(b'!d', f)
f = struct.unpack(b'!d', bytes(
data.draw_bytes(8, draw_float_bytes)))[0]
assume(self.lower_bound <= f <= self.upper_bound)
assume(sign(self.lower_bound) <= sign(f) <= sign(self.upper_bound))
return f
class ComplexStrategy(MappedSearchStrategy):
"""A strategy over complex numbers, with real and imaginary values
distributed according to some provided strategy for floating point
numbers."""
def __repr__(self):
return 'ComplexStrategy()'
def pack(self, value):
return complex(*value)
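# Illustrative note (not part of the original source): the public integer and
# float constructors in hypothesis.strategies pick between these classes
# based on the bounds supplied, roughly as follows.
#
#     >>> import hypothesis.strategies as st
#     >>> st.integers()                          # WideRangeIntStrategy
#     >>> st.integers(min_value=0)               # IntegersFromStrategy
#     >>> st.integers(min_value=0, max_value=9)  # BoundedIntStrategy
#     >>> st.floats(0.0, 1.0)                    # FixedBoundedFloatStrategy
#     >>> st.complex_numbers()                   # ComplexStrategy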
hypothesis-3.0.1/src/hypothesis/searchstrategy/recursive.py 0000664 0000000 0000000 00000004421 12661275660 0024325 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from contextlib import contextmanager
from hypothesis.searchstrategy.wrappers import WrapperStrategy
from hypothesis.searchstrategy.strategies import OneOfStrategy, \
SearchStrategy
class LimitReached(BaseException):
pass
class LimitedStrategy(WrapperStrategy):
def __init__(self, strategy):
super(LimitedStrategy, self).__init__(strategy)
self.marker = 0
self.currently_capped = False
def do_draw(self, data):
assert self.currently_capped
if self.marker <= 0:
raise LimitReached()
self.marker -= 1
return super(LimitedStrategy, self).do_draw(data)
@contextmanager
def capped(self, max_templates):
assert not self.currently_capped
try:
self.currently_capped = True
self.marker = max_templates
yield
finally:
self.currently_capped = False
class RecursiveStrategy(SearchStrategy):
def __init__(self, base, extend, max_leaves):
self.max_leaves = max_leaves
self.base = LimitedStrategy(base)
self.extend = extend
strategies = [self.base, self.extend(self.base)]
while 2 ** len(strategies) <= max_leaves:
strategies.append(
extend(OneOfStrategy(tuple(strategies), bias=0.8)))
self.strategy = OneOfStrategy(strategies)
def do_draw(self, data):
while True:
try:
with self.base.capped(self.max_leaves):
return data.draw(self.strategy)
except LimitReached:
pass
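# Illustrative note (not part of the original source): this backs the public
# st.recursive() constructor, e.g. the canonical JSON-ish example
#
#     >>> import hypothesis.strategies as st
#     >>> json = st.recursive(
#     ...     st.none() | st.booleans() | st.floats() | st.text(),
#     ...     lambda children: st.lists(children) |
#     ...         st.dictionaries(st.text(), children),
#     ...     max_leaves=50)
#
# The LimitReached/capped machinery above is what enforces max_leaves, by
# retrying any draw that tries to use too many base-strategy leaves.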
hypothesis-3.0.1/src/hypothesis/searchstrategy/reprwrapper.py 0000664 0000000 0000000 00000002725 12661275660 0024674 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import inspect
from hypothesis.searchstrategy.wrappers import WrapperStrategy
class ReprWrapperStrategy(WrapperStrategy):
"""A strategy which is defined purely by conversion to and from another
strategy.
Its parameter and distribution come from that other strategy.
"""
def __init__(self, strategy, representation):
super(ReprWrapperStrategy, self).__init__(strategy)
if not inspect.isfunction(representation):
assert isinstance(representation, str)
self.representation = representation
def __repr__(self):
if inspect.isfunction(self.representation):
self.representation = self.representation()
assert isinstance(self.representation, str)
return self.representation
hypothesis-3.0.1/src/hypothesis/searchstrategy/shared.py 0000664 0000000 0000000 00000003061 12661275660 0023563 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.searchstrategy.wrappers import SearchStrategy
SHARED_STRATEGY_ATTRIBUTE = '_hypothesis_shared_strategies'
class SharedStrategy(SearchStrategy):
def __init__(self, base, key=None):
self.key = key
self.base = base
@property
def supports_find(self):
return self.base.supports_find
def __repr__(self):
if self.key is not None:
return 'shared(%r, key=%r)' % (self.base, self.key)
else:
return 'shared(%r)' % (self.base,)
def do_draw(self, data):
if not hasattr(data, SHARED_STRATEGY_ATTRIBUTE):
setattr(data, SHARED_STRATEGY_ATTRIBUTE, {})
sharing = getattr(data, SHARED_STRATEGY_ATTRIBUTE)
key = self.key or self
if key not in sharing:
sharing[key] = self.base.do_draw(data)
return sharing[key]
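# Illustrative note (not part of the original source): this backs the public
# st.shared() constructor.  Within a single test-case draw, every use of a
# shared strategy with the same key yields the same value, e.g.
#
#     >>> import hypothesis.strategies as st
#     >>> size = st.shared(st.integers(0, 5), key='size')
#     >>> pair = st.tuples(size, size)   # both elements are always equal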
hypothesis-3.0.1/src/hypothesis/searchstrategy/strategies.py 0000664 0000000 0000000 00000023430 12661275660 0024471 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.internal.conjecture.utils as cu
from hypothesis.errors import NoExamples, NoSuchExample, Unsatisfiable, \
UnsatisfiedAssumption
from hypothesis.control import assume, reject
from hypothesis.internal.compat import hrange
from hypothesis.internal.reflection import get_pretty_function_description
def one_of_strategies(xs):
"""Helper function for unioning multiple strategies."""
xs = tuple(xs)
if not xs:
raise ValueError('Cannot join an empty list of strategies')
if len(xs) == 1:
return xs[0]
return OneOfStrategy(xs)
class SearchStrategy(object):
"""A SearchStrategy is an object that knows how to explore data of a given
type.
Except where noted otherwise, methods on this class are not part of the
public API and their behaviour may change significantly between minor
version releases. They will generally be stable between patch releases.
With that in mind, here is how SearchStrategy works.
A search strategy is responsible for generating, simplifying and
serializing examples for saving.
In order to do this a strategy has three types (where type here is more
precise than just the class of the value. For example a tuple of ints
should be considered different from a tuple of strings):
1. The strategy parameter type
2. The strategy template type
3. The generated type
Of these, the first two should be considered to be private implementation
details of a strategy and the only valid thing to do them is to pass them
back to the search strategy. Additionally, templates may be compared for
equality and hashed.
Templates must be of quite a restricted type. A template may be any of the
following:
1. Any instance of the types bool, float, int, str (unicode on 2.7)
2. None
3. Any tuple or namedtuple of valid template types
4. Any frozenset of valid template types
This may be relaxed a bit in future, but the requirement that templates are
hashable probably won't be.
This may all seem overly complicated but it's for a fairly good reason.
For more discussion of the motivation see
http://hypothesis.readthedocs.org/en/master/internals.html
Given these, data generation happens in three phases:
1. Draw a parameter value from a random number (defined by
draw_parameter)
2. Given a parameter value and a Random, draw a random template
3. Reify a template value, deterministically turning it into a value of
the desired type.
Data simplification proceeds on template values, taking a template and
providing a generator over some examples of similar but simpler templates.
"""
supports_find = True
def example(self, random=None):
"""Provide an example of the sort of value that this strategy
generates. This is biased to be slightly simpler than is typical for
values from this strategy, for clarity purposes.
This method shouldn't be taken too seriously. It's here for interactive
exploration of the API, not for any sort of real testing.
This method is part of the public API.
"""
from hypothesis import find, settings
try:
return find(
self,
lambda x: True,
random=random,
settings=settings(
max_shrinks=0,
max_iterations=1000,
database=None
)
)
except (NoSuchExample, Unsatisfiable):
raise NoExamples(
u'Could not find any valid examples in 100 tries'
)
def map(self, pack):
"""Returns a new strategy that generates values by generating a value
from this strategy and then calling pack() on the result, giving that.
This method is part of the public API.
"""
return MappedSearchStrategy(
pack=pack, strategy=self
)
def flatmap(self, expand):
"""Returns a new strategy that generates values by generating a value
from this strategy, say x, then generating a value from
strategy(expand(x))
This method is part of the public API.
"""
from hypothesis.searchstrategy.flatmapped import FlatMapStrategy
return FlatMapStrategy(
expand=expand, strategy=self
)
def filter(self, condition):
"""Returns a new strategy that generates values from this strategy
which satisfy the provided condition. Note that if the condition is too
hard to satisfy this might result in your tests failing with
Unsatisfiable.
This method is part of the public API.
"""
return FilteredStrategy(
condition=condition,
strategy=self,
)
def __or__(self, other):
"""Return a strategy which produces values by randomly drawing from one
of this strategy or the other strategy.
This method is part of the public API.
"""
if not isinstance(other, SearchStrategy):
raise ValueError('Cannot | a SearchStrategy with %r' % (other,))
return one_of_strategies((self, other))
def validate(self):
"""Through an exception if the strategy is not valid.
This can happen due to lazy construction
"""
pass
def do_draw(self, data):
raise NotImplementedError('%s.do_draw' % (type(self).__name__,))
def __init__(self):
pass
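# Illustrative examples (not part of the original source) of the public
# combinators defined above, via the top-level hypothesis.strategies module:
#
#     >>> import hypothesis.strategies as st
#     >>> st.integers().map(lambda x: x * 2)              # MappedSearchStrategy
#     >>> st.lists(st.booleans()).filter(lambda xs: xs)   # FilteredStrategy
#     >>> st.integers() | st.none()                       # OneOfStrategy
#     >>> st.integers(0, 5).flatmap(
#     ...     lambda n: st.lists(st.just(n), max_size=n)) # FlatMapStrategy
#     >>> st.integers().example()  # a single illustrative value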
class OneOfStrategy(SearchStrategy):
"""Implements a union of strategies. Given a number of strategies this
generates values which could have come from any of them.
The conditional distribution draws uniformly at random from some non-empty
subset of these strategies and then draws from the conditional distribution
of that strategy.
"""
def __init__(self, strategies, bias=None):
SearchStrategy.__init__(self)
strategies = tuple(strategies)
self.element_strategies = list(strategies)
self.bias = bias
if bias is not None:
assert 0 < bias < 1
self.weights = [bias ** i for i in range(len(strategies))]
def do_draw(self, data):
n = len(self.element_strategies)
if self.bias is None:
i = cu.integer_range(data, 0, n - 1)
else:
def biased_i(random):
while True:
i = random.randint(0, n - 1)
if random.random() <= self.weights[i]:
return i
i = cu.integer_range_with_distribution(
data, 0, n - 1, biased_i)
return data.draw(self.element_strategies[i])
def __repr__(self):
return ' | '.join(map(repr, self.element_strategies))
def validate(self):
for e in self.element_strategies:
e.validate()
class MappedSearchStrategy(SearchStrategy):
"""A strategy which is defined purely by conversion to and from another
strategy.
Its parameter and distribution come from that other strategy.
"""
def __init__(self, strategy, pack=None):
SearchStrategy.__init__(self)
self.mapped_strategy = strategy
if pack is not None:
self.pack = pack
def __repr__(self):
if not hasattr(self, '_cached_repr'):
self._cached_repr = '%r.map(%s)' % (
self.mapped_strategy, get_pretty_function_description(
self.pack)
)
return self._cached_repr
def validate(self):
self.mapped_strategy.validate()
def pack(self, x):
"""Take a value produced by the underlying mapped_strategy and turn it
into a value suitable for outputting from this strategy."""
raise NotImplementedError(
'%s.pack()' % (self.__class__.__name__))
def do_draw(self, data):
for _ in range(3):
i = data.index
try:
return self.pack(self.mapped_strategy.do_draw(data))
except UnsatisfiedAssumption:
if data.index == i:
raise
reject()
class FilteredStrategy(SearchStrategy):
def __init__(self, strategy, condition):
super(FilteredStrategy, self).__init__()
self.condition = condition
self.filtered_strategy = strategy
def __repr__(self):
if not hasattr(self, '_cached_repr'):
self._cached_repr = '%r.filter(%s)' % (
self.filtered_strategy, get_pretty_function_description(
self.condition)
)
return self._cached_repr
def do_draw(self, data):
for _ in hrange(3):
start_index = data.index
value = data.draw(self.filtered_strategy)
if self.condition(value):
return value
else:
# This is to guard against the case where we consume no data.
# As long as we consume data, we'll eventually pass or raise.
# But if we don't this could be an infinite loop.
assume(data.index > start_index)
data.mark_invalid()
hypothesis-3.0.1/src/hypothesis/searchstrategy/streams.py 0000664 0000000 0000000 00000002366 12661275660 0024002 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.types import Stream
from hypothesis.searchstrategy.strategies import SearchStrategy
class StreamStrategy(SearchStrategy):
supports_find = False
def __init__(self, source_strategy):
super(StreamStrategy, self).__init__()
self.source_strategy = source_strategy
def __repr__(self):
return u'StreamStrategy(%r)' % (self.source_strategy,)
def do_draw(self, data):
def gen():
while True:
yield data.draw(self.source_strategy)
return Stream(gen())
hypothesis-3.0.1/src/hypothesis/searchstrategy/strings.py 0000664 0000000 0000000 00000010616 12661275660 0024012 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
from hypothesis.errors import InvalidArgument
from hypothesis.internal import charmap
from hypothesis.internal.compat import hunichr, text_type, binary_type
from hypothesis.internal.intervalsets import IntervalSet
from hypothesis.internal.conjecture.utils import integer_range
from hypothesis.searchstrategy.strategies import SearchStrategy, \
MappedSearchStrategy
class OneCharStringStrategy(SearchStrategy):
"""A strategy which generates single character strings of text type."""
specifier = text_type
zero_point = ord('0')
def __init__(self,
whitelist_categories=None,
blacklist_categories=None,
blacklist_characters=None,
min_codepoint=None,
max_codepoint=None):
intervals = charmap.query(
include_categories=whitelist_categories,
exclude_categories=blacklist_categories,
min_codepoint=min_codepoint,
max_codepoint=max_codepoint,
)
if not intervals:
raise InvalidArgument(
'No valid characters in set'
)
self.intervals = IntervalSet(intervals)
if blacklist_characters:
self.blacklist_characters = set(
b for b in blacklist_characters if ord(b) in self.intervals
)
if len(self.blacklist_characters) == len(self.intervals):
raise InvalidArgument(
'No valid characters in set'
)
else:
self.blacklist_characters = set()
self.zero_point = self.intervals.index_above(ord('0'))
self.special = []
if '\n' not in self.blacklist_characters:
n = ord('\n')
try:
self.special.append(self.intervals.index(n))
except ValueError:
pass
def do_draw(self, data):
denom = math.log1p(-1 / 127)
def d(random):
if self.special and random.randint(0, 10) == 0:
return random.choice(self.special)
if len(self.intervals) <= 256 or random.randint(0, 1):
i = random.randint(0, len(self.intervals.offsets) - 1)
u, v = self.intervals.intervals[i]
return self.intervals.offsets[i] + random.randint(0, v - u + 1)
else:
return min(
len(self.intervals) - 1,
int(math.log(random.random()) / denom))
while True:
i = integer_range(
data, 0, len(self.intervals) - 1,
center=self.zero_point, distribution=d
)
c = hunichr(self.intervals[i])
if c not in self.blacklist_characters:
return c
class StringStrategy(MappedSearchStrategy):
"""A strategy for text strings, defined in terms of a strategy for lists of
single character text strings."""
def __init__(self, list_of_one_char_strings_strategy):
super(StringStrategy, self).__init__(
strategy=list_of_one_char_strings_strategy
)
def __repr__(self):
return 'StringStrategy()'
def pack(self, ls):
return u''.join(ls)
class BinaryStringStrategy(MappedSearchStrategy):
"""A strategy for strings of bytes, defined in terms of a strategy for
lists of bytes."""
def __repr__(self):
return 'BinaryStringStrategy()'
def pack(self, x):
assert isinstance(x, list), repr(x)
ba = bytearray(x)
return binary_type(ba)
class FixedSizeBytes(SearchStrategy):
def __init__(self, size):
self.size = size
def do_draw(self, data):
return binary_type(data.draw_bytes(self.size))
hypothesis-3.0.1/src/hypothesis/searchstrategy/wrappers.py 0000664 0000000 0000000 00000002650 12661275660 0024163 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.searchstrategy.strategies import SearchStrategy
class WrapperStrategy(SearchStrategy):
"""A strategy which is defined purely by conversion to and from another
strategy.
Its parameter and distribution come from that other strategy.
"""
def __init__(self, strategy):
SearchStrategy.__init__(self)
self.wrapped_strategy = strategy
@property
def supports_find(self):
return self.wrapped_strategy.supports_find
def __repr__(self):
return u'%s(%r)' % (type(self).__name__, self.wrapped_strategy)
def validate(self):
self.wrapped_strategy.validate()
def do_draw(self, data):
return self.wrapped_strategy.do_draw(data)
hypothesis-3.0.1/src/hypothesis/stateful.py 0000664 0000000 0000000 00000035477 12661275660 0021134 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This module provides support for a stateful style of testing, where tests
attempt to find a sequence of operations that cause a breakage rather than just
a single value.
Notably, the set of steps available at any point may depend on the
execution to date.
"""
from __future__ import division, print_function, absolute_import
import inspect
import traceback
from unittest import TestCase
from collections import namedtuple
import hypothesis.internal.conjecture.utils as cu
from hypothesis.core import find
from hypothesis.errors import Flaky, NoSuchExample, InvalidDefinition, \
HypothesisException
from hypothesis.control import BuildContext
from hypothesis._settings import settings as Settings
from hypothesis._settings import Verbosity
from hypothesis.reporting import report, verbose_report, current_verbosity
from hypothesis.strategies import just, one_of, sampled_from
from hypothesis.internal.reflection import proxies
from hypothesis.internal.conjecture.data import StopTest
from hypothesis.searchstrategy.strategies import SearchStrategy
from hypothesis.searchstrategy.collections import TupleStrategy, \
FixedKeysDictStrategy
class TestCaseProperty(object): # pragma: no cover
def __get__(self, obj, typ=None):
if obj is not None:
typ = type(obj)
return typ._to_test_case()
def __set__(self, obj, value):
raise AttributeError(u'Cannot set TestCase')
def __delete__(self, obj):
raise AttributeError(u'Cannot delete TestCase')
def find_breaking_runner(state_machine_factory, settings=None):
def is_breaking_run(runner):
try:
runner.run(state_machine_factory())
return False
except HypothesisException:
raise
except Exception:
verbose_report(traceback.format_exc)
return True
if settings is None:
try:
settings = state_machine_factory.TestCase.settings
except AttributeError:
settings = Settings.default
search_strategy = StateMachineSearchStrategy(settings)
return find(
search_strategy,
is_breaking_run,
settings=settings,
database_key=state_machine_factory.__name__.encode('utf-8')
)
def run_state_machine_as_test(state_machine_factory, settings=None):
"""Run a state machine definition as a test, either silently doing nothing
or printing a minimal breaking program and raising an exception.
state_machine_factory is anything which returns an instance of
GenericStateMachine when called with no arguments - it can be a class or a
function. settings will be used to control the execution of the test.
"""
try:
breaker = find_breaking_runner(state_machine_factory, settings)
except NoSuchExample:
return
try:
with BuildContext(is_final=True):
breaker.run(state_machine_factory(), print_steps=True)
except StopTest:
pass
raise Flaky(
u'Run failed initially but succeeded on a second try'
)
class GenericStateMachine(object):
"""A GenericStateMachine is the basic entry point into Hypothesis's
approach to stateful testing.
The intent is for it to be subclassed to provide state machine descriptions.
The way this is used is that Hypothesis will repeatedly execute something
that looks something like:
x = MyStatemachineSubclass()
for _ in range(n_steps):
x.execute_step(x.steps().example())
And if this ever produces an error it will shrink it down to a small
sequence of example choices demonstrating that.
"""
def steps(self):
"""Return a SearchStrategy instance the defines the available next
steps."""
raise NotImplementedError(u'%r.steps()' % (self,))
def execute_step(self, step):
"""Execute a step that has been previously drawn from self.steps()"""
raise NotImplementedError(u'%r.execute_step()' % (self,))
def print_step(self, step):
"""Print a step to the current reporter.
This is called right before a step is executed.
"""
self.step_count = getattr(self, u'step_count', 0) + 1
report(u'Step #%d: %s' % (self.step_count, repr(step)))
def teardown(self):
"""Called after a run has finished executing to clean up any necessary
state.
Does nothing by default
"""
pass
_test_case_cache = {}
TestCase = TestCaseProperty()
@classmethod
def _to_test_case(state_machine_class):
try:
return state_machine_class._test_case_cache[state_machine_class]
except KeyError:
pass
class StateMachineTestCase(TestCase):
settings = Settings(
min_satisfying_examples=1
)
def runTest(self):
run_state_machine_as_test(state_machine_class)
base_name = state_machine_class.__name__
StateMachineTestCase.__name__ = str(
base_name + u'.TestCase'
)
StateMachineTestCase.__qualname__ = str(
getattr(state_machine_class, u'__qualname__', base_name) +
u'.TestCase'
)
state_machine_class._test_case_cache[state_machine_class] = (
StateMachineTestCase
)
return StateMachineTestCase
GenericStateMachine.find_breaking_runner = classmethod(find_breaking_runner)
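# --- Illustrative sketch, not part of the original source ---
# A minimal GenericStateMachine subclass: steps() returns a strategy for the
# next action and execute_step() applies it.  The CounterMachine name and its
# invariant are made up for illustration; Hypothesis will search for a
# sequence of steps that breaks the assertion.
#
#   import hypothesis.strategies as st
#
#   class CounterMachine(GenericStateMachine):
#       def __init__(self):
#           self.value = 0
#
#       def steps(self):
#           return st.integers(min_value=-5, max_value=5)
#
#       def execute_step(self, step):
#           self.value += step
#           assert self.value < 100
#
#   TestCounter = CounterMachine.TestCase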
class StateMachineRunner(object):
"""A StateMachineRunner is a description of how to run a state machine.
It contains values that it will use to shape the examples.
"""
def __init__(self, data, n_steps):
self.data = data
self.n_steps = n_steps
def run(self, state_machine, print_steps=None):
if print_steps is None:
print_steps = current_verbosity() >= Verbosity.debug
stopping_value = 1 - 1.0 / (1 + self.n_steps * 0.5)
try:
steps = 0
while True:
if steps == self.n_steps:
stopping_value = 0
self.data.start_example()
if not cu.biased_coin(self.data, stopping_value):
self.data.stop_example()
break
value = self.data.draw(state_machine.steps())
steps += 1
if steps <= self.n_steps:
if print_steps:
state_machine.print_step(value)
state_machine.execute_step(value)
self.data.stop_example()
finally:
state_machine.teardown()
class StateMachineSearchStrategy(SearchStrategy):
def __init__(self, settings=None):
self.program_size = (settings or Settings.default).stateful_step_count
def do_draw(self, data):
return StateMachineRunner(data, self.program_size)
Rule = namedtuple(
u'Rule',
(u'targets', u'function', u'arguments', u'precondition',
u'parent_rule')
)
Bundle = namedtuple(u'Bundle', (u'name',))
RULE_MARKER = u'hypothesis_stateful_rule'
PRECONDITION_MARKER = u'hypothesis_stateful_precondition'
def rule(targets=(), target=None, **kwargs):
"""Decorator for RuleBasedStateMachine. Any name present in target or
targets will define where the end result of this function should go. If
both are empty then the end result will be discarded.
targets may either be a Bundle or the name of a Bundle.
kwargs then define the arguments that will be passed to the function
invocation. If their value is a Bundle then values that have previously
been produced for that bundle will be provided, if they are anything else
it will be turned into a strategy and values from that will be provided.
"""
if target is not None:
targets += (target,)
converted_targets = []
for t in targets:
while isinstance(t, Bundle):
t = t.name
converted_targets.append(t)
def accept(f):
parent_rule = getattr(f, RULE_MARKER, None)
if parent_rule is not None:
raise InvalidDefinition(
'A function cannot be used for two distinct rules. ',
Settings.default,
)
precondition = getattr(f, PRECONDITION_MARKER, None)
rule = Rule(targets=tuple(converted_targets), arguments=kwargs,
function=f, precondition=precondition,
parent_rule=parent_rule)
@proxies(f)
def rule_wrapper(*args, **kwargs):
return f(*args, **kwargs)
setattr(rule_wrapper, RULE_MARKER, rule)
return rule_wrapper
return accept
VarReference = namedtuple(u'VarReference', (u'name',))
def precondition(precond):
"""Decorator to apply a precondition for rules in a RuleBasedStateMachine.
Specifies a precondition for a rule to be considered as a valid step in the
state machine. The given function will be called with the instance of
RuleBasedStateMachine and should return True or False. Usually it will need
to look at attributes on that instance.
For example::
class MyTestMachine(RuleBasedStateMachine):
state = 1
@precondition(lambda self: self.state != 0)
@rule(numerator=integers())
def divide_with(self, numerator):
self.state = numerator / self.state
This is better than using assume in your rule since more valid rules
should be able to be run.
"""
def decorator(f):
@proxies(f)
def precondition_wrapper(*args, **kwargs):
return f(*args, **kwargs)
rule = getattr(f, RULE_MARKER, None)
if rule is None:
setattr(precondition_wrapper, PRECONDITION_MARKER, precond)
else:
new_rule = Rule(targets=rule.targets, arguments=rule.arguments,
function=rule.function, precondition=precond,
parent_rule=rule.parent_rule)
setattr(precondition_wrapper, RULE_MARKER, new_rule)
return precondition_wrapper
return decorator
class RuleBasedStateMachine(GenericStateMachine):
"""A RuleBasedStateMachine gives you a more structured way to define state
machines.
The idea is that a state machine carries a bunch of types of data
divided into Bundles, and has a set of rules which may read data
from bundles (or just from normal strategies) and push data onto
bundles. At any given point a random applicable rule will be
executed.
"""
_rules_per_class = {}
_base_rules_per_class = {}
def __init__(self):
if not self.rules():
raise InvalidDefinition(u'Type %s defines no rules' % (
type(self).__name__,
))
self.bundles = {}
self.name_counter = 1
self.names_to_values = {}
def __repr__(self):
return u'%s(%s)' % (
type(self).__name__,
repr(self.bundles),
)
def upcoming_name(self):
return u'v%d' % (self.name_counter,)
def new_name(self):
result = self.upcoming_name()
self.name_counter += 1
return result
def bundle(self, name):
return self.bundles.setdefault(name, [])
@classmethod
def rules(cls):
try:
return cls._rules_per_class[cls]
except KeyError:
pass
for k, v in inspect.getmembers(cls):
r = getattr(v, RULE_MARKER, None)
while r is not None:
cls.define_rule(
r.targets, r.function, r.arguments, r.precondition,
r.parent_rule
)
r = r.parent_rule
cls._rules_per_class[cls] = cls._base_rules_per_class.pop(cls, [])
return cls._rules_per_class[cls]
@classmethod
def define_rule(cls, targets, function, arguments, precondition=None,
parent_rule=None):
converted_arguments = {}
for k, v in arguments.items():
converted_arguments[k] = v
if cls in cls._rules_per_class:
target = cls._rules_per_class[cls]
else:
target = cls._base_rules_per_class.setdefault(cls, [])
return target.append(
Rule(
targets, function, converted_arguments, precondition,
parent_rule
)
)
def steps(self):
strategies = []
for rule in self.rules():
converted_arguments = {}
valid = True
if rule.precondition is not None and not rule.precondition(self):
continue
for k, v in sorted(rule.arguments.items()):
if isinstance(v, Bundle):
bundle = self.bundle(v.name)
if not bundle:
valid = False
break
else:
v = sampled_from(bundle)
converted_arguments[k] = v
if valid:
strategies.append(TupleStrategy((
just(rule),
FixedKeysDictStrategy(converted_arguments)
), tuple))
if not strategies:
raise InvalidDefinition(
u'No progress can be made from state %r' % (self,)
)
return one_of(*strategies)
def print_step(self, step):
rule, data = step
data_repr = {}
for k, v in data.items():
if isinstance(v, VarReference):
data_repr[k] = v.name
else:
data_repr[k] = repr(v)
self.step_count = getattr(self, u'step_count', 0) + 1
report(u'Step #%d: %s%s(%s)' % (
self.step_count,
u'%s = ' % (self.upcoming_name(),) if rule.targets else u'',
rule.function.__name__,
u', '.join(u'%s=%s' % kv for kv in data_repr.items())
))
def execute_step(self, step):
rule, data = step
data = dict(data)
for k, v in data.items():
if isinstance(v, VarReference):
data[k] = self.names_to_values[v.name]
result = rule.function(self, **data)
if rule.targets:
name = self.new_name()
self.names_to_values[name] = result
for target in rule.targets:
self.bundle(target).append(VarReference(name))
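# --- Illustrative sketch, not part of the original source ---
# A hypothetical rule-based machine: rules draw arguments from strategies or
# Bundles and may push their results onto Bundles.
#
#   import hypothesis.strategies as st
#
#   class ListMachine(RuleBasedStateMachine):
#       values = Bundle('values')
#
#       @rule(target=values, x=st.integers())
#       def add_value(self, x):
#           return x
#
#       @rule(v=values)
#       def negate_twice(self, v):
#           assert -(-v) == v
#
#   TestListMachine = ListMachine.TestCase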
hypothesis-3.0.1/src/hypothesis/strategies.py 0000664 0000000 0000000 00000077020 12661275660 0021445 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
from decimal import Decimal
from hypothesis.errors import InvalidArgument
from hypothesis.control import assume
from hypothesis.searchstrategy import SearchStrategy
from hypothesis.internal.compat import ArgSpec, text_type, getargspec, \
integer_types, float_to_decimal
from hypothesis.internal.floats import is_negative, float_to_int, \
int_to_float, count_between_floats
from hypothesis.internal.reflection import proxies
from hypothesis.searchstrategy.reprwrapper import ReprWrapperStrategy
__all__ = [
'just', 'one_of',
'none',
'choices', 'streaming',
'booleans', 'integers', 'floats', 'complex_numbers', 'fractions',
'decimals',
'characters', 'text', 'binary',
'tuples', 'lists', 'sets', 'frozensets',
'dictionaries', 'fixed_dictionaries',
'sampled_from', 'permutations',
'builds',
'randoms', 'random_module',
'recursive', 'composite',
'shared',
'recursive', 'composite',
]
_strategies = set()
class FloatKey(object):
def __init__(self, f):
self.value = float_to_int(f)
def __eq__(self, other):
return isinstance(other, FloatKey) and (
other.value == self.value
)
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.value)
def convert_value(v):
if isinstance(v, float):
return FloatKey(v)
return (type(v), v)
def cacheable(fn):
cache = {}
@proxies(fn)
def cached_strategy(*args, **kwargs):
kwargs_cache_key = set()
try:
for k, v in kwargs.items():
kwargs_cache_key.add((k, convert_value(v)))
except TypeError:
return fn(*args, **kwargs)
cache_key = (
tuple(map(convert_value, args)), frozenset(kwargs_cache_key))
try:
return cache[cache_key]
except TypeError:
return fn(*args, **kwargs)
except KeyError:
result = fn(*args, **kwargs)
cache[cache_key] = result
return result
return cached_strategy
def defines_strategy(strategy_definition):
from hypothesis.searchstrategy.deferred import DeferredStrategy
_strategies.add(strategy_definition.__name__)
@proxies(strategy_definition)
def accept(*args, **kwargs):
return DeferredStrategy(strategy_definition, args, kwargs)
return accept
def just(value):
"""Return a strategy which only generates value.
Note: value is not copied. Be wary of using mutable values.
"""
from hypothesis.searchstrategy.misc import JustStrategy
def calc_repr():
return 'just(%s)' % (repr(value),)
return ReprWrapperStrategy(JustStrategy(value), calc_repr)
@defines_strategy
def none():
"""Return a strategy which only generates None."""
return just(None)
def one_of(arg, *args):
"""Return a strategy which generates values from any of the argument
strategies."""
if not args:
check_strategy(arg)
return arg
from hypothesis.searchstrategy.strategies import OneOfStrategy
args = (arg,) + args
for arg in args:
check_strategy(arg)
return OneOfStrategy(args)
@cacheable
@defines_strategy
def integers(min_value=None, max_value=None):
"""Returns a strategy which generates integers (in Python 2 these may be
ints or longs).
If min_value is not None then all values will be >=
min_value. If max_value is not None then all values will be <= max_value
"""
check_valid_integer(min_value)
check_valid_integer(max_value)
check_valid_interval(min_value, max_value, 'min_value', 'max_value')
from hypothesis.searchstrategy.numbers import IntegersFromStrategy, \
BoundedIntStrategy, WideRangeIntStrategy
if min_value is None:
if max_value is None:
return (
WideRangeIntStrategy()
)
else:
return IntegersFromStrategy(0).map(lambda x: max_value - x)
else:
if max_value is None:
return IntegersFromStrategy(min_value)
else:
assert min_value <= max_value
if min_value == max_value:
return just(min_value)
elif min_value >= 0:
return BoundedIntStrategy(min_value, max_value)
elif max_value <= 0:
return BoundedIntStrategy(-max_value, -min_value).map(
lambda t: -t
)
else:
return integers(min_value=0, max_value=max_value) | \
integers(min_value=min_value, max_value=0)
@cacheable
@defines_strategy
def booleans():
"""Returns a strategy which generates instances of bool."""
from hypothesis.searchstrategy.misc import BoolStrategy
return BoolStrategy()
@cacheable
@defines_strategy
def floats(
min_value=None, max_value=None, allow_nan=None, allow_infinity=None
):
"""Returns a strategy which generates floats.
- If min_value is not None, all values will be >= min_value.
- If max_value is not None, all values will be <= max_value.
- If min_value or max_value is not None, it is an error to enable
allow_nan.
- If both min_value and max_value are not None, it is an error to enable
allow_infinity.
Where not explicitly ruled out by the bounds, all of infinity, -infinity
and NaN are possible values generated by this strategy.
"""
if allow_nan is None:
allow_nan = bool(min_value is None and max_value is None)
elif allow_nan:
if min_value is not None or max_value is not None:
raise InvalidArgument(
'Cannot have allow_nan=%r, with min_value or max_value' % (
allow_nan
))
check_valid_bound(min_value, 'min_value')
check_valid_bound(max_value, 'max_value')
check_valid_interval(min_value, max_value, 'min_value', 'max_value')
if min_value is not None:
min_value = float(min_value)
if max_value is not None:
max_value = float(max_value)
if min_value == float(u'-inf'):
min_value = None
if max_value == float(u'inf'):
max_value = None
if allow_infinity is None:
allow_infinity = bool(min_value is None or max_value is None)
elif allow_infinity:
if min_value is not None and max_value is not None:
raise InvalidArgument(
'Cannot have allow_infinity=%r, with both min_value and '
'max_value' % (
allow_infinity
))
from hypothesis.searchstrategy.numbers import FloatStrategy, \
FixedBoundedFloatStrategy
if min_value is None and max_value is None:
return FloatStrategy(
allow_infinity=allow_infinity, allow_nan=allow_nan,
)
elif min_value is not None and max_value is not None:
if min_value == max_value:
return just(min_value)
elif math.isinf(max_value - min_value):
assert min_value < 0 and max_value > 0
return floats(min_value=0, max_value=max_value) | floats(
min_value=min_value, max_value=0
)
elif count_between_floats(min_value, max_value) > 1000:
return FixedBoundedFloatStrategy(
lower_bound=min_value, upper_bound=max_value
)
elif is_negative(max_value):
assert is_negative(min_value)
ub_int = float_to_int(max_value)
lb_int = float_to_int(min_value)
assert ub_int <= lb_int
return integers(min_value=ub_int, max_value=lb_int).map(
int_to_float
)
elif is_negative(min_value):
return floats(min_value=min_value, max_value=-0.0) | floats(
min_value=0, max_value=max_value
)
else:
ub_int = float_to_int(max_value)
lb_int = float_to_int(min_value)
assert lb_int <= ub_int
return integers(min_value=lb_int, max_value=ub_int).map(
int_to_float
)
elif min_value is not None:
if min_value < 0:
result = floats(
min_value=0.0
) | floats(min_value=min_value, max_value=0.0)
else:
result = (
floats(allow_infinity=allow_infinity, allow_nan=False).map(
lambda x: assume(not math.isnan(x)) and min_value + abs(x)
)
)
if min_value == 0 and not is_negative(min_value):
result = result.filter(lambda x: math.copysign(1.0, x) == 1)
return result
else:
assert max_value is not None
if max_value > 0:
result = floats(
min_value=0.0,
max_value=max_value,
) | floats(max_value=0.0)
else:
result = (
floats(allow_infinity=allow_infinity, allow_nan=False).map(
lambda x: assume(not math.isnan(x)) and max_value - abs(x)
)
)
if max_value == 0 and is_negative(max_value):
result = result.filter(is_negative)
return result
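# --- Illustrative sketch, not part of the original source ---
# Bounds restrict the generated range, and allow_nan cannot be combined with
# bounds:
#
#   unit_interval = floats(min_value=0.0, max_value=1.0)
#   finite_floats = floats(allow_nan=False, allow_infinity=False)
#   floats(min_value=0.0, allow_nan=True)   # raises InvalidArgument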
@cacheable
@defines_strategy
def complex_numbers():
"""Returns a strategy that generates complex numbers."""
from hypothesis.searchstrategy.numbers import ComplexStrategy
return ComplexStrategy(
tuples(floats(), floats())
)
@cacheable
@defines_strategy
def tuples(*args):
"""Return a strategy which generates a tuple of the same length as args by
generating the value at index i from args[i].
e.g. tuples(integers(), integers()) would generate a tuple of length
two with both values an integer.
"""
for arg in args:
check_strategy(arg)
from hypothesis.searchstrategy.collections import TupleStrategy
return TupleStrategy(args, tuple)
@defines_strategy
def sampled_from(elements):
"""Returns a strategy which generates any value present in the iterable
elements.
Note that as with just, values will not be copied and thus you
should be careful of using mutable data
"""
from hypothesis.searchstrategy.misc import SampledFromStrategy, \
JustStrategy
elements = tuple(iter(elements))
if not elements:
raise InvalidArgument(
'sampled_from requires at least one value'
)
if len(elements) == 1:
return JustStrategy(elements[0])
else:
return SampledFromStrategy(elements)
@cacheable
@defines_strategy
def lists(
elements=None, min_size=None, average_size=None, max_size=None,
unique_by=None, unique=False,
):
"""Returns a list containing values drawn from elements length in the
interval [min_size, max_size] (no bounds in that direction if these are
None). If max_size is 0 then elements may be None and only the empty list
will be drawn.
average_size may be used as a size hint to roughly control the size
of list but it may not be the actual average of sizes you get, due
to a variety of factors.
If unique is True (or something that evaluates to True), we compare direct
object equality, as if unique_by was `lambda x: x`. This comparison only
works for hashable types.
If unique_by is not None it must be a function returning a hashable type
when given a value drawn from elements. The resulting list will satisfy the
condition that for i != j, unique_by(result[i]) != unique_by(result[j]).
"""
check_valid_sizes(min_size, average_size, max_size)
if elements is None or (max_size is not None and max_size <= 0):
if max_size is None or max_size > 0:
raise InvalidArgument(
u'Cannot create non-empty lists without an element type'
)
else:
return builds(list)
if unique:
if unique_by is not None:
raise InvalidArgument((
'cannot specify both unique and unique_by (you probably only '
'want to set unique_by)'
))
else:
unique_by = lambda x: x
if unique_by is not None:
from hypothesis.searchstrategy.collections import UniqueListStrategy
check_strategy(elements)
min_size = min_size or 0
max_size = max_size or float(u'inf')
if average_size is None:
if max_size < float(u'inf'):
if max_size <= 5:
average_size = min_size + 0.75 * (max_size - min_size)
else:
average_size = (max_size + min_size) / 2
else:
average_size = max(
_AVERAGE_LIST_LENGTH,
min_size * 2
)
check_valid_sizes(min_size, average_size, max_size)
result = UniqueListStrategy(
elements=elements,
average_size=average_size,
max_size=max_size,
min_size=min_size,
key=unique_by
)
return result
check_valid_sizes(min_size, average_size, max_size)
from hypothesis.searchstrategy.collections import ListStrategy
if min_size is None:
min_size = 0
if average_size is None:
if max_size is None:
average_size = _AVERAGE_LIST_LENGTH
else:
average_size = (min_size + max_size) * 0.5
check_strategy(elements)
return ListStrategy(
(elements,), average_length=average_size,
min_size=min_size, max_size=max_size,
)
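# --- Illustrative sketch, not part of the original source ---
# unique and unique_by control deduplication of the generated elements:
#
#   short_unique = lists(integers(), min_size=1, max_size=5, unique=True)
#   keyed_unique = lists(tuples(integers(), booleans()),
#                        unique_by=lambda t: t[0])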
@cacheable
@defines_strategy
def sets(elements=None, min_size=None, average_size=None, max_size=None):
"""This has the same behaviour as lists, but returns sets instead.
Note that Hypothesis cannot tell whether values drawn from elements
are hashable until running the test, so you can define a strategy
for sets of an unhashable type but it will fail at test time.
"""
return lists(
elements=elements, min_size=min_size, average_size=average_size,
max_size=max_size, unique=True
).map(set)
@cacheable
@defines_strategy
def frozensets(elements=None, min_size=None, average_size=None, max_size=None):
"""This is identical to the sets function but instead returns
frozensets."""
return lists(
elements=elements, min_size=min_size, average_size=average_size,
max_size=max_size, unique=True
).map(frozenset)
@defines_strategy
def fixed_dictionaries(mapping):
"""Generate a dictionary of the same type as mapping with a fixed set of
keys mapping to strategies. mapping must be a dict subclass.
Generated values have all keys present in mapping, with the
corresponding values drawn from mapping[key]. If mapping is an
instance of OrderedDict the keys will also be in the same order,
otherwise the order is arbitrary.
"""
from hypothesis.searchstrategy.collections import FixedKeysDictStrategy
check_type(dict, mapping)
for v in mapping.values():
check_type(SearchStrategy, v)
return FixedKeysDictStrategy(mapping)
@cacheable
@defines_strategy
def dictionaries(
keys, values, dict_class=dict,
min_size=None, average_size=None, max_size=None
):
"""Generates dictionaries of type dict_class with keys drawn from the keys
argument and values drawn from the values argument.
The size parameters have the same interpretation as for lists.
"""
check_valid_sizes(min_size, average_size, max_size)
if max_size == 0:
return fixed_dictionaries(dict_class())
check_strategy(keys)
check_strategy(values)
return lists(
tuples(keys, values),
min_size=min_size, average_size=average_size, max_size=max_size,
unique_by=lambda x: x[0]
).map(dict_class)
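# --- Illustrative sketch, not part of the original source ---
# dictionaries() builds dicts from (key, value) pairs deduplicated by key,
# while fixed_dictionaries() uses a fixed mapping of keys to strategies:
#
#   small_maps = dictionaries(text(), integers(), max_size=3)
#   records = fixed_dictionaries({u'id': integers(), u'flag': booleans()})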
@cacheable
@defines_strategy
def streaming(elements):
"""Generates an infinite stream of values where each value is drawn from
elements.
The result is iterable (the iterator will never terminate) and
indexable.
"""
check_strategy(elements)
from hypothesis.searchstrategy.streams import StreamStrategy
return StreamStrategy(elements)
@cacheable
@defines_strategy
def characters(whitelist_categories=None, blacklist_categories=None,
blacklist_characters=None, min_codepoint=None,
max_codepoint=None):
"""Generates unicode text type (unicode on python 2, str on python 3)
characters following specified filtering rules.
This strategy accepts lists of Unicode categories, characters of which
should (`whitelist_categories`) or should not (`blacklist_categories`)
be produced.
You can also restrict the range of produced characters by code point, via
the `min_codepoint` and `max_codepoint` arguments.
If you know exactly which characters you don't want to be produced,
pass them with the `blacklist_characters` argument.
"""
if (
min_codepoint is not None and max_codepoint is not None and
min_codepoint > max_codepoint
):
raise InvalidArgument(
'Cannot have min_codepoint=%d > max_codepoint=%d ' % (
min_codepoint, max_codepoint
)
)
from hypothesis.searchstrategy.strings import OneCharStringStrategy
return OneCharStringStrategy(whitelist_categories=whitelist_categories,
blacklist_categories=blacklist_categories,
blacklist_characters=blacklist_characters,
min_codepoint=min_codepoint,
max_codepoint=max_codepoint)
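# --- Illustrative sketch, not part of the original source ---
# Restricting to upper/lower-case letters in the ASCII range, and excluding
# newlines from the full character range:
#
#   ascii_letters = characters(whitelist_categories=('Lu', 'Ll'),
#                              max_codepoint=127)
#   no_newlines = characters(blacklist_characters=u'\n')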
@cacheable
@defines_strategy
def text(
alphabet=None,
min_size=None, average_size=None, max_size=None
):
"""Generates values of a unicode text type (unicode on python 2, str on
python 3) with values drawn from alphabet, which should be an iterable of
length one strings or a strategy generating such. If it is None it will
default to generating the full unicode range. If it is an empty collection
this will only generate empty strings.
min_size, max_size and average_size have the usual interpretations.
"""
from hypothesis.searchstrategy.strings import StringStrategy
if alphabet is None:
char_strategy = characters(blacklist_categories=('Cs',))
elif not alphabet:
if (min_size or 0) > 0:
raise InvalidArgument(
'Invalid min_size %r > 0 for empty alphabet' % (
min_size,
)
)
return just(u'')
elif isinstance(alphabet, SearchStrategy):
char_strategy = alphabet
else:
char_strategy = sampled_from(list(map(text_type, alphabet)))
return StringStrategy(lists(
char_strategy, average_size=average_size, min_size=min_size,
max_size=max_size
))
@cacheable
@defines_strategy
def binary(
min_size=None, average_size=None, max_size=None
):
"""Generates the appropriate binary type (str in python 2, bytes in python
3).
min_size, average_size and max_size have the usual interpretations.
"""
from hypothesis.searchstrategy.strings import BinaryStringStrategy, \
FixedSizeBytes
check_valid_sizes(min_size, average_size, max_size)
if min_size == max_size is not None:
return FixedSizeBytes(min_size)
return BinaryStringStrategy(
lists(
integers(min_value=0, max_value=255),
average_size=average_size, min_size=min_size, max_size=max_size
)
)
@cacheable
@defines_strategy
def randoms():
"""Generates instances of Random (actually a Hypothesis specific
RandomWithSeed class which displays what it was initially seeded with)"""
from hypothesis.searchstrategy.misc import RandomStrategy
return RandomStrategy(integers())
class RandomSeeder(object):
def __init__(self, seed):
self.seed = seed
def __repr__(self):
return 'random.seed(%r)' % (self.seed,)
@cacheable
@defines_strategy
def random_module():
"""If your code depends on the global random module then you need to use
this.
It will explicitly seed the random module at the start of your test
so that tests are reproducible. The value it passes you is an opaque
object whose only useful feature is that its repr displays the
random seed. It is not itself a random number generator. If you want
a random number generator you should use the randoms() strategy
which will give you one.
"""
from hypothesis.control import cleanup
import random
def seed_random(seed):
state = random.getstate()
random.seed(seed)
cleanup(lambda: random.setstate(state))
return RandomSeeder(seed)
return shared(
integers().map(seed_random),
'hypothesis.strategies.random_module()',
)
@cacheable
@defines_strategy
def fractions():
"""Generates instances of fractions.Fraction."""
from fractions import Fraction
return tuples(integers(), integers(min_value=1)).map(
lambda t: Fraction(*t)
)
@cacheable
@defines_strategy
def decimals():
"""Generates instances of decimals.Decimal."""
return (
floats().map(float_to_decimal) |
fractions().map(
lambda f: Decimal(f.numerator) / f.denominator
)
)
@cacheable
@defines_strategy
def builds(target, *args, **kwargs):
"""Generates values by drawing from args and kwargs and passing them to
target in the appropriate argument position.
e.g. builds(target, integers(), flag=booleans()) would draw an integer i
and a boolean b and call target(i, flag=b).
"""
return tuples(tuples(*args), fixed_dictionaries(kwargs)).map(
lambda value: target(*value[0], **value[1])
)
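# --- Illustrative sketch, not part of the original source ---
# builds() forwards drawn values to a callable; the Point class below is
# hypothetical:
#
#   from collections import namedtuple
#   Point = namedtuple('Point', ('x', 'y'))
#   points = builds(Point, integers(), y=integers())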
@defines_strategy
def recursive(base, extend, max_leaves=100):
"""
base: A strategy to start from.
extend: A function which takes a strategy and returns a new strategy.
max_leaves: The maximum number of elements to be drawn from base on a given
run.
This returns a strategy S such that S = extend(base | S). That is, values
maybe drawn from base, or from any strategy reachable by mixing
applications of | and extend.
An example may clarify: recursive(booleans(), lists) would return a
strategy that may return arbitrarily nested and mixed lists of booleans.
So e.g. False, [True], [False, []], [[[[True]]]], are all valid values to
be drawn from that strategy.
"""
check_strategy(base)
extended = extend(base)
if not isinstance(extended, SearchStrategy):
raise InvalidArgument(
'Expected extend(%r) to be a SearchStrategy but got %r' % (
base, extended
))
from hypothesis.searchstrategy.recursive import RecursiveStrategy
return RecursiveStrategy(base, extend, max_leaves)
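# --- Illustrative sketch, not part of the original source ---
# Arbitrarily nested JSON-like values built with recursive():
#
#   json_like = recursive(
#       none() | booleans() | floats() | text(),
#       lambda children: lists(children) | dictionaries(text(), children))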
@defines_strategy
def permutations(values):
"""Return a strategy which returns permutations of the collection
"values"."""
values = list(values)
if not values:
return just(()).map(lambda _: [])
def build_permutation(swaps):
initial = list(values)
for i, j in swaps:
initial[i], initial[j] = initial[j], initial[i]
return initial
n = len(values)
index = integers(0, n - 1)
return lists(tuples(index, index), max_size=n ** 2).map(build_permutation)
@cacheable
def composite(f):
"""Defines a strategy that is built out of potentially arbitrarily many
other strategies.
This is intended to be used as a decorator. See the full
documentation for more details about how to use this function.
"""
from hypothesis.internal.reflection import copy_argspec
argspec = getargspec(f)
if (
argspec.defaults is not None and
len(argspec.defaults) == len(argspec.args)
):
raise InvalidArgument(
'A default value for initial argument will never be used')
if len(argspec.args) == 0 and not argspec.varargs:
raise InvalidArgument(
'Functions wrapped with composite must take at least one '
'positional argument.'
)
new_argspec = ArgSpec(
args=argspec.args[1:], varargs=argspec.varargs,
keywords=argspec.keywords, defaults=argspec.defaults
)
@defines_strategy
@copy_argspec(f.__name__, new_argspec)
def accept(*args, **kwargs):
class CompositeStrategy(SearchStrategy):
def do_draw(self, data):
return f(data.draw, *args, **kwargs)
return CompositeStrategy()
return accept
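# --- Illustrative sketch, not part of the original source ---
# The wrapped function receives a draw callable as its first argument; later
# draws may depend on earlier ones:
#
#   @composite
#   def ordered_pairs(draw, max_value=100):
#       low = draw(integers(max_value=max_value))
#       high = draw(integers(min_value=low, max_value=max_value))
#       return (low, high)
#
#   pairs = ordered_pairs()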
def shared(base, key=None):
"""Returns a strategy that draws a single shared value per run, drawn from
base. Any two shared instances with the same key will share the same
value, otherwise the identity of this strategy will be used. That is:
>>> x = shared(s)
>>> y = shared(s)
In the above x and y may draw different (or potentially the same) values.
In the following they will always draw the same:
>>> x = shared(s, key="hi")
>>> y = shared(s, key="hi")
"""
from hypothesis.searchstrategy.shared import SharedStrategy
return SharedStrategy(base, key)
@cacheable
def choices():
"""Strategy that generates a function that behaves like random.choice.
Will note choices made for reproducibility.
"""
from hypothesis.control import note, current_build_context
from hypothesis.internal.conjecture.utils import choice
class Chooser(object):
def __init__(self, build_context, data):
self.build_context = build_context
self.data = data
self.choice_count = 0
def __call__(self, values):
if not values:
raise IndexError('Cannot choose from empty sequence')
result = choice(self.data, values)
with self.build_context.local():
self.choice_count += 1
note('Choice #%d: %r' % (self.choice_count, result))
return result
def __repr__(self):
return 'choice'
class ChoiceStrategy(SearchStrategy):
supports_find = False
def do_draw(self, data):
return Chooser(current_build_context(), data)
return ReprWrapperStrategy(
shared(
ChoiceStrategy(),
key='hypothesis.strategies.chooser.choice_function'
), 'choices()')
@cacheable
def uuids():
"""Returns a strategy that generates UUIDs.
All returned values from this will be unique, so e.g. if you do
lists(uuids()) the resulting list will never contain duplicates.
"""
from uuid import UUID
return ReprWrapperStrategy(
shared(randoms(), key='hypothesis.strategies.uuids.generator').map(
lambda r: UUID(int=r.getrandbits(128))
), 'uuids()')
@cacheable
def data():
"""This isn't really a normal strategy, but instead gives you an object
which can be used to draw data interactively from other strategies.
It can only be used within @given, not find. This is because the lifetime
of the object cannot outlast the test body.
See the rest of the documentation for more complete information.
"""
from hypothesis.control import note
class DataObject(object):
def __init__(self, data):
self.count = 0
self.data = data
def __repr__(self):
return 'data(...)'
def draw(self, strategy):
result = self.data.draw(strategy)
self.count += 1
note('Draw %d: %r' % (self.count, result))
return result
class DataStrategy(SearchStrategy):
supports_find = False
def do_draw(self, data):
if not hasattr(data, 'hypothesis_shared_data_strategy'):
data.hypothesis_shared_data_strategy = DataObject(data)
return data.hypothesis_shared_data_strategy
def __repr__(self):
return 'data()'
def map(self, f):
self.__not_a_first_class_strategy('map')
def filter(self, f):
self.__not_a_first_class_strategy('filter')
def flatmap(self, f):
self.__not_a_first_class_strategy('flatmap')
def example(self):
self.__not_a_first_class_strategy('example')
def __not_a_first_class_strategy(self, name):
raise InvalidArgument((
'Cannot call %s on a DataStrategy. You should probably be '
"using @composite for whatever it is you're trying to do."
) % (name,))
return DataStrategy()
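# --- Illustrative sketch, not part of the original source ---
# data() is drawn inside the test body, so later draws can depend on values
# seen earlier in the same test:
#
#   from hypothesis import given
#
#   @given(data())
#   def test_dependent_draws(d):
#       x = d.draw(integers())
#       y = d.draw(integers(min_value=x))
#       assert x <= y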
# Private API below here
def check_type(typ, arg):
if not isinstance(arg, typ):
if isinstance(typ, type):
typ_string = typ.__name__
else:
typ_string = 'one of %s' % (
', '.join(t.__name__ for t in typ))
raise InvalidArgument(
'Expected %s but got %r' % (typ_string, arg,))
def check_strategy(arg):
check_type(SearchStrategy, arg)
def check_valid_integer(value):
"""Checks that value is either unspecified, or a valid integer.
Otherwise raises InvalidArgument.
"""
if value is None:
return
check_type(integer_types, value)
def check_valid_bound(value, name):
"""Checks that value is either unspecified, or a valid interval bound.
Otherwise raises InvalidArgument.
"""
if value is None:
return
if math.isnan(value):
raise InvalidArgument(u'Invalid end point %s %r' % (value, name))
def check_valid_size(value, name):
"""Checks that value is either unspecified, or a valid non-negative size
expressed as an integer/float. Otherwise raises InvalidArgument.
"""
if value is None:
return
check_type(integer_types + (float,), value)
if value < 0:
raise InvalidArgument(u'Invalid size %s %r < 0' % (value, name))
if isinstance(value, float) and math.isnan(value):
raise InvalidArgument(u'Invalid size %s %r' % (value, name))
def check_valid_interval(lower_bound, upper_bound, lower_name, upper_name):
"""Checks that lower_bound and upper_bound are either unspecified, or they
define a valid interval on the number line.
Otherwise raises InvalidArgument.
"""
if lower_bound is None or upper_bound is None:
return
if upper_bound < lower_bound:
raise InvalidArgument(
'Cannot have %s=%r < %s=%r' % (
upper_name, upper_bound, lower_name, lower_bound
))
def check_valid_sizes(min_size, average_size, max_size):
check_valid_size(min_size, 'min_size')
check_valid_size(max_size, 'max_size')
check_valid_size(average_size, 'average_size')
check_valid_interval(min_size, max_size, 'min_size', 'max_size')
check_valid_interval(average_size, max_size, 'average_size', 'max_size')
check_valid_interval(min_size, average_size, 'min_size', 'average_size')
if average_size is not None:
if (
(max_size is None or max_size > 0) and
average_size is not None and average_size <= 0.0
):
raise InvalidArgument(
'Cannot have average_size=%r < min_size=%r' % (
average_size, min_size
))
_AVERAGE_LIST_LENGTH = 5.0
assert _strategies.issubset(set(__all__)), _strategies - set(__all__)
hypothesis-3.0.1/src/hypothesis/strategytests.py 0000664 0000000 0000000 00000007332 12661275660 0022217 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Support for testing your custom implementations of specifiers."""
from __future__ import division, print_function, absolute_import
import hashlib
from random import Random
from unittest import TestCase
from hypothesis import settings as Settings
from hypothesis import seed, given, reject
from hypothesis.errors import Unsatisfiable
from hypothesis.database import ExampleDatabase
from hypothesis.strategies import lists, integers
from hypothesis.internal.compat import hrange
class Rejected(Exception):
pass
def strategy_test_suite(
specifier,
max_examples=10, random=None,
):
settings = Settings(
database=None,
max_examples=max_examples,
max_iterations=max_examples * 2,
min_satisfying_examples=2,
)
random = random or Random()
strat = specifier
class ValidationSuite(TestCase):
def __repr__(self):
return 'strategy_test_suite(%s)' % (
repr(specifier),
)
@given(specifier)
@settings
def test_does_not_error(self, value):
pass
if strat.supports_find:
def test_can_give_example(self):
strat.example()
def test_can_give_list_of_examples(self):
lists(strat).example()
def test_will_give_unsatisfiable_if_all_rejected(self):
@given(specifier)
@settings
def nope(x):
reject()
self.assertRaises(Unsatisfiable, nope)
def test_will_find_a_constant_failure(self):
@given(specifier)
@settings
def nope(x):
raise Rejected()
self.assertRaises(Rejected, nope)
def test_will_find_a_failure_from_the_database(self):
db = ExampleDatabase()
@given(specifier)
@Settings(settings, max_examples=10, database=db)
def nope(x):
raise Rejected()
try:
for i in hrange(3):
self.assertRaises(Rejected, nope) # pragma: no cover
finally:
db.close()
@given(integers())
@settings
def test_will_handle_a_really_weird_failure(self, s):
db = ExampleDatabase()
@given(specifier)
@Settings(
settings,
database=db,
max_examples=max_examples,
min_satisfying_examples=2,
)
@seed(s)
def nope(x):
s = hashlib.sha1(repr(x).encode('utf-8')).digest()
assert Random(s).randint(0, 1) == Random(s).randint(0, 1)
if Random(s).randint(0, 1):
raise Rejected('%r with digest %r' % (
x, s
))
try:
try:
nope()
except Rejected:
pass
try:
nope()
except Rejected:
pass
finally:
db.close()
return ValidationSuite
hypothesis-3.0.1/src/hypothesis/tools/ 0000775 0000000 0000000 00000000000 12661275660 0020053 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/tools/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0022157 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/src/hypothesis/tools/mergedbs.py 0000664 0000000 0000000 00000007522 12661275660 0022223 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""This is a git merge driver for merging two Hypothesis database files. It
allows you to check your Hypothesis database into your git repo and have
merging examples work correctly.
You can either install Hypothesis and invoke this as a module, or just copy
this file somewhere convenient and run it directly (it has no dependencies on
the rest of Hypothesis).
You can then set this up by following the instructions in
http://git-scm.com/docs/gitattributes to use this as the merge driver for
wherever you have put your hypothesis database (it is in
.hypothesis/examples.db by default). For example, the following should work
with a default configuration:
In .gitattributes add:
.hypothesis/examples.db merge=hypothesisdb
And in .git/config add:
[merge "hypothesisdb"]
name = Hypothesis database files
driver = python -m hypothesis.tools.mergedbs %O %A %B
"""
from __future__ import division, print_function, absolute_import
import sys
import sqlite3
from collections import namedtuple
def get_rows(cursor):
cursor.execute("""
select key, value
from hypothesis_data_mapping
""")
for r in cursor:
yield tuple(r)
Report = namedtuple(u'Report', (u'inserts', u'deletes'))
def merge_paths(ancestor, current, other):
ancestor = sqlite3.connect(ancestor)
current = sqlite3.connect(current)
other = sqlite3.connect(other)
result = merge_dbs(ancestor, current, other)
ancestor.close()
current.close()
other.close()
return result
def contains(db, key, value):
cursor = db.cursor()
cursor.execute("""
select 1 from hypothesis_data_mapping
where key = ? and value = ?
""", (key, value))
result = bool(list(cursor))
cursor.close()
return result
def merge_dbs(ancestor, current, other):
other_cursor = other.cursor()
other_cursor.execute("""
select key, value
from hypothesis_data_mapping
""")
current_cursor = current.cursor()
inserts = 0
for r in other_cursor:
if not contains(ancestor, *r):
try:
current_cursor.execute("""
insert into hypothesis_data_mapping(key, value)
values(?, ?)
""", tuple(r))
inserts += 1
except sqlite3.IntegrityError:
pass
current.commit()
deletes = 0
ancestor_cursor = ancestor.cursor()
ancestor_cursor.execute("""
select key, value
from hypothesis_data_mapping
""")
for r in ancestor_cursor:
if not contains(other, *r) and contains(current, *r):
try:
current_cursor.execute("""
delete from hypothesis_data_mapping
where key = ? and value = ?
""", tuple(r))
deletes += 1
current.commit()
except sqlite3.IntegrityError:
pass
return Report(inserts, deletes)
def main():
    _, ancestor, current, other = sys.argv
    result = merge_paths(ancestor, current, other)
    print(u'%d new entries and %d deletions from merge' % (
        result.inserts, result.deletes))
if __name__ == u'__main__':
main()
hypothesis-3.0.1/src/hypothesis/types.py 0000664 0000000 0000000 00000007131 12661275660 0020433 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import inspect
from random import Random
from itertools import islice
from hypothesis.errors import InvalidArgument
class RandomWithSeed(Random):
"""A subclass of Random designed to expose the seed it was initially
provided with.
We consistently use this instead of Random objects because it makes
examples much easier to recreate.
"""
def __init__(self, seed):
super(RandomWithSeed, self).__init__(seed)
self.seed = seed
def __copy__(self):
result = RandomWithSeed(self.seed)
result.setstate(self.getstate())
return result
def __deepcopy__(self, table):
return self.__copy__()
def __repr__(self):
return u'RandomWithSeed(%s)' % (self.seed,)
class Stream(object):
"""A stream is a possibly infinite list. You can index into it, and you can
iterate over it, but you can't ask its length and iterating over it will
not necessarily terminate.
Behind the scenes streams are backed by a generator, but they "remember"
the values as they evaluate them so you can replay them later.
Internally Hypothesis uses the fact that you can tell how much of a stream
has been evaluated, but you shouldn't use that. The only public APIs of
a Stream are that you can index, slice, and iterate it.
"""
def __init__(self, generator=None):
if generator is None:
generator = iter(())
elif not inspect.isgenerator(generator):
generator = iter(generator)
self.generator = generator
self.fetched = []
def map(self, f):
return Stream(f(v) for v in self)
def __iter__(self):
i = 0
while i < len(self.fetched):
yield self.fetched[i]
i += 1
for v in self.generator:
self.fetched.append(v)
yield v
def __getitem__(self, key):
if isinstance(key, slice):
return Stream(islice(
iter(self),
key.start, key.stop, key.step
))
if not isinstance(key, int):
raise InvalidArgument(u'Cannot index stream with %s' % (
type(key).__name__,))
self._thunk_to(key + 1)
return self.fetched[key]
def _thunk_to(self, i):
it = iter(self)
try:
while len(self.fetched) < i:
next(it)
except StopIteration:
raise IndexError(
u'Index %d out of bounds for finite stream of length %d' % (
i, len(self.fetched)
)
)
def _thunked(self):
return len(self.fetched)
def __repr__(self):
if not self.fetched:
return u'Stream(...)'
return u'Stream(%s, ...)' % (
u', '.join(map(repr, self.fetched))
)
def __deepcopy__(self, table):
return self
def __copy__(self):
return self
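# Editor's note: the function below is an illustrative sketch added to this
# copy of the file, not part of upstream Hypothesis. It demonstrates the
# Stream semantics documented above: values are pulled lazily from the
# backing generator, remembered once evaluated, and replayed on later
# indexing or iteration. The name `_stream_demo` is hypothetical.
def _stream_demo():
    evaluations = []
    def squares():
        for i in range(5):
            evaluations.append(i)
            yield i * i
    s = Stream(squares())
    assert s[2] == 4
    # Indexing forced evaluation of exactly the first three items.
    assert evaluations == [0, 1, 2]
    # Slicing and iterating replay the remembered values without
    # re-running the generator.
    assert list(s[:3]) == [0, 1, 4]
    assert evaluations == [0, 1, 2]
    return s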
hypothesis-3.0.1/src/hypothesis/utils/ 0000775 0000000 0000000 00000000000 12661275660 0020053 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/src/hypothesis/utils/__init__.py 0000664 0000000 0000000 00000001440 12661275660 0022163 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""hypothesis.utils is a package for things that you can consider part of the
semi-public Hypothesis API, but which aren't really the core point.
"""
hypothesis-3.0.1/src/hypothesis/utils/conventions.py 0000664 0000000 0000000 00000001637 12661275660 0023001 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
class UniqueIdentifier(object):
def __init__(self, identifier):
self.identifier = identifier
def __repr__(self):
return self.identifier
not_set = UniqueIdentifier(u'not_set')
hypothesis-3.0.1/src/hypothesis/utils/dynamicvariables.py 0000664 0000000 0000000 00000002432 12661275660 0023743 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import threading
from contextlib import contextmanager
class DynamicVariable(object):
def __init__(self, default):
self.default = default
self.data = threading.local()
@property
def value(self):
return getattr(self.data, 'value', self.default)
@value.setter
def value(self, value):
setattr(self.data, 'value', value)
@contextmanager
def with_value(self, value):
old_value = self.value
try:
self.data.value = value
yield
finally:
self.data.value = old_value
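# Editor's note: the function below is an illustrative sketch added to this
# copy of the file, not part of upstream Hypothesis. It shows the intended
# use of DynamicVariable.with_value: the override is visible only inside the
# with block (and only on the current thread), and the previous value is
# restored afterwards. The name `_dynamic_variable_demo` is hypothetical.
def _dynamic_variable_demo():
    verbosity = DynamicVariable(u'normal')
    assert verbosity.value == u'normal'
    with verbosity.with_value(u'debug'):
        assert verbosity.value == u'debug'
    # The old value is restored on exit, even if the block had raised.
    assert verbosity.value == u'normal'
    return verbosity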
hypothesis-3.0.1/src/hypothesis/utils/size.py 0000664 0000000 0000000 00000001575 12661275660 0021407 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
def clamp(lower, value, upper):
if lower is not None:
value = max(lower, value)
if upper is not None:
value = min(value, upper)
return value
hypothesis-3.0.1/src/hypothesis/version.py 0000664 0000000 0000000 00000001443 12661275660 0020754 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
__version_info__ = (3, 0, 1)
__version__ = '.'.join(map(str, __version_info__))
hypothesis-3.0.1/tests/ 0000775 0000000 0000000 00000000000 12661275660 0015067 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0017173 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/common/ 0000775 0000000 0000000 00000000000 12661275660 0016357 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/common/__init__.py 0000664 0000000 0000000 00000006635 12661275660 0020502 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
import sys
from collections import namedtuple
try:
import pytest
except ImportError:
pytest = None
from hypothesis._settings import settings
from hypothesis.internal.debug import timeout
from hypothesis.strategies import integers, floats, just, one_of, \
sampled_from, lists, booleans, dictionaries, tuples, \
frozensets, complex_numbers, sets, text, binary, decimals, fractions, \
none, randoms, builds, fixed_dictionaries, recursive
__all__ = ['small_verifier', 'timeout', 'standard_types', 'OrderedPair']
OrderedPair = namedtuple('OrderedPair', ('left', 'right'))
ordered_pair = integers().flatmap(
lambda right: integers(min_value=0).map(
lambda length: OrderedPair(right - length, right)))
def constant_list(strat):
return strat.flatmap(
lambda v: lists(just(v), average_size=10),
)
ABC = namedtuple('ABC', ('a', 'b', 'c'))
def abc(x, y, z):
return builds(ABC, x, y, z)
with settings(strict=False):
standard_types = [
lists(max_size=0), tuples(), sets(max_size=0), frozensets(max_size=0),
fixed_dictionaries({}),
abc(booleans(), booleans(), booleans()),
abc(booleans(), booleans(), integers()),
fixed_dictionaries({'a': integers(), 'b': booleans()}),
dictionaries(booleans(), integers()),
dictionaries(text(), booleans()),
one_of(integers(), tuples(booleans())),
sampled_from(range(10)),
one_of(just('a'), just('b'), just('c')),
sampled_from(('a', 'b', 'c')),
integers(),
integers(min_value=3),
integers(min_value=(-2 ** 32), max_value=(2 ** 64)),
floats(), floats(min_value=-2.0, max_value=3.0),
floats(), floats(min_value=-2.0),
floats(), floats(max_value=-0.0),
floats(), floats(min_value=0.0),
floats(min_value=3.14, max_value=3.14),
text(), binary(),
booleans(),
tuples(booleans(), booleans()),
frozensets(integers()),
sets(frozensets(booleans())),
complex_numbers(),
fractions(),
decimals(),
lists(lists(booleans(), average_size=10), average_size=10),
lists(lists(booleans(), average_size=100)),
lists(floats(0.0, 0.0), average_size=1.0),
ordered_pair, constant_list(integers()),
integers().filter(lambda x: abs(x) > 100),
floats(min_value=-sys.float_info.max, max_value=sys.float_info.max),
none(), randoms(),
booleans().flatmap(lambda x: booleans() if x else complex_numbers()),
recursive(
base=booleans(), extend=lambda x: lists(x, max_size=3),
max_leaves=10,
)
]
if pytest is not None:
def parametrize(args, values):
return pytest.mark.parametrize(
args, values, ids=list(map(repr, values)))
hypothesis-3.0.1/tests/common/setup.py 0000664 0000000 0000000 00000002733 12661275660 0020076 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import warnings
from tempfile import mkdtemp
from hypothesis import settings
from hypothesis.configuration import set_hypothesis_home_dir
from hypothesis.internal.charmap import charmap, charmap_file
def run():
warnings.filterwarnings(u'error', category=UnicodeWarning)
set_hypothesis_home_dir(mkdtemp())
charmap()
assert os.path.exists(charmap_file())
assert isinstance(settings, type)
settings.register_profile(
'default', settings(timeout=-1, strict=True)
)
settings.register_profile(
'speedy', settings(
timeout=1, max_examples=5,
))
settings.register_profile(
'nonstrict', settings(strict=False)
)
settings.load_profile(os.getenv('HYPOTHESIS_PROFILE', 'default'))
hypothesis-3.0.1/tests/common/utils.py 0000664 0000000 0000000 00000003160 12661275660 0020071 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
import contextlib
from io import StringIO
from hypothesis.reporting import default, with_reporter
from hypothesis.internal.reflection import proxies
@contextlib.contextmanager
def capture_out():
old_out = sys.stdout
try:
new_out = StringIO()
sys.stdout = new_out
with with_reporter(default):
yield new_out
finally:
sys.stdout = old_out
class ExcInfo(object):
pass
@contextlib.contextmanager
def raises(exctype):
e = ExcInfo()
try:
yield e
assert False, "Expected to raise an exception but didn't"
except exctype as err:
e.value = err
return
def fails_with(e):
def accepts(f):
@proxies(f)
def inverted_test(*arguments, **kwargs):
with raises(e):
f(*arguments, **kwargs)
return inverted_test
return accepts
fails = fails_with(AssertionError)
hypothesis-3.0.1/tests/conftest.py 0000664 0000000 0000000 00000001557 12661275660 0017276 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import gc
import pytest
from tests.common.setup import run
run()
@pytest.fixture(scope=u'function', autouse=True)
def some_fixture():
gc.collect()
hypothesis-3.0.1/tests/cover/ 0000775 0000000 0000000 00000000000 12661275660 0016205 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/cover/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0020311 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/cover/test_arbitrary_data.py 0000664 0000000 0000000 00000004665 12661275660 0022621 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import strategies as st
from hypothesis import find, given, reporting
from hypothesis.errors import InvalidArgument
from tests.common.utils import raises, capture_out
@given(st.integers(), st.data())
def test_conditional_draw(x, data):
y = data.draw(st.integers(min_value=x))
assert y >= x
def test_prints_on_failure():
@given(st.data())
def test(data):
x = data.draw(st.lists(st.integers(), min_size=1))
y = data.draw(st.sampled_from(x))
assert y in x
x.remove(y)
assert y not in x
with raises(AssertionError):
with capture_out() as out:
with reporting.with_reporter(reporting.default):
test()
result = out.getvalue()
assert 'Draw 1: [0, 0]' in result
assert 'Draw 2: 0' in result
def test_given_twice_is_same():
@given(st.data(), st.data())
def test(data1, data2):
data1.draw(st.integers())
data2.draw(st.integers())
assert False
with raises(AssertionError):
with capture_out() as out:
with reporting.with_reporter(reporting.default):
test()
result = out.getvalue()
assert 'Draw 1: 0' in result
assert 'Draw 2: 0' in result
def test_errors_when_used_in_find():
with raises(InvalidArgument):
find(st.data(), lambda x: x.draw(st.booleans()))
@pytest.mark.parametrize('f', [
'filter', 'map', 'flatmap',
])
def test_errors_when_normal_strategy_functions_are_used(f):
with raises(InvalidArgument):
getattr(st.data(), f)(lambda x: 1)
def test_errors_when_asked_for_example():
with raises(InvalidArgument):
st.data().example()
def test_nice_repr():
assert repr(st.data()) == 'data()'
hypothesis-3.0.1/tests/cover/test_bad_repr.py 0000664 0000000 0000000 00000004436 12661275660 0021403 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import warnings
import pytest
import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis.errors import HypothesisDeprecationWarning
from hypothesis.internal.compat import PY3
from hypothesis.internal.reflection import arg_string
original_profile = settings.default
settings.register_profile(
'nonstrict', settings(strict=False)
)
def setup_function(fn):
settings.load_profile('nonstrict')
warnings.simplefilter('always', HypothesisDeprecationWarning)
def teardown_function(fn):
settings.load_profile('default')
warnings.simplefilter('once', HypothesisDeprecationWarning)
class BadRepr(object):
def __init__(self, value):
self.value = value
def __repr__(self):
return self.value
Frosty = BadRepr('☃')
def test_just_frosty():
assert repr(st.just(Frosty)) == 'just(☃)'
def test_sampling_snowmen():
assert repr(st.sampled_from((
Frosty, 'hi'))) == 'sampled_from((☃, %s))' % (repr('hi'),)
def varargs(*args, **kwargs):
pass
@pytest.mark.skipif(PY3, reason='Unicode repr is kosher on python 3')
def test_arg_strings_are_bad_repr_safe():
assert arg_string(varargs, (Frosty,), {}) == '☃'
@pytest.mark.skipif(PY3, reason='Unicode repr is kosher on python 3')
def test_arg_string_kwargs_are_bad_repr_safe():
assert arg_string(varargs, (), {'x': Frosty}) == 'x=☃'
@given(st.sampled_from([
'✐', '✑', '✒', '✓', '✔', '✕', '✖', '✗', '✘',
'✙', '✚', '✛', '✜', '✝', '✞', '✟', '✠', '✡', '✢', '✣']))
def test_sampled_from_bad_repr(c):
pass
hypothesis-3.0.1/tests/cover/test_caching.py 0000664 0000000 0000000 00000003257 12661275660 0021221 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis.errors import InvalidArgument
def test_no_args():
assert st.text() is st.text()
def test_tuple_lengths():
assert st.tuples(st.integers()) is st.tuples(st.integers())
assert st.tuples(st.integers()) is not st.tuples(
st.integers(), st.integers())
def test_values():
assert st.integers() is not st.integers(min_value=1)
def test_alphabet_key():
assert st.text(alphabet='abcs') is st.text(alphabet='abcs')
def test_does_not_error_on_unhashable_posarg():
st.text(['a', 'b', 'c'])
def test_does_not_error_on_unhashable_kwarg():
with pytest.raises(InvalidArgument):
st.builds(lambda alphabet: 1, alphabet=['a', 'b', 'c']).validate()
def test_caches_floats_sensitively():
assert st.floats(min_value=0.0) is st.floats(min_value=0.0)
assert st.floats(min_value=0.0) is not st.floats(min_value=0)
assert st.floats(min_value=0.0) is not st.floats(min_value=-0.0)
hypothesis-3.0.1/tests/cover/test_charmap.py 0000664 0000000 0000000 00000006677 12661275660 0021251 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import unicodedata
import hypothesis.strategies as st
import hypothesis.internal.charmap as cm
from hypothesis import given, assume
from hypothesis.internal.compat import hunichr
def test_charmap_contains_all_unicode():
n = 0
for vs in cm.charmap().values():
for u, v in vs:
n += (v - u + 1)
assert n == sys.maxunicode + 1
def test_charmap_has_right_categories():
for cat, intervals in cm.charmap().items():
for u, v in intervals:
for i in range(u, v + 1):
real = unicodedata.category(hunichr(i))
assert real == cat, \
'%d is %s but reported in %s' % (i, real, cat)
def assert_valid_range_list(ls):
for u, v in ls:
assert u <= v
for i in range(len(ls) - 1):
assert ls[i] <= ls[i + 1]
assert ls[i][-1] < ls[i + 1][0]
@given(
st.sets(st.sampled_from(cm.categories())),
st.sets(st.sampled_from(cm.categories())) | st.none(),
)
def test_query_matches_categories(exclude, include):
values = cm.query(exclude, include)
assert_valid_range_list(values)
for u, v in values:
for i in (u, v, (u + v) // 2):
cat = unicodedata.category(hunichr(i))
if include is not None:
assert cat in include
assert cat not in exclude
@given(
st.sets(st.sampled_from(cm.categories())),
st.sets(st.sampled_from(cm.categories())) | st.none(),
st.integers(0, sys.maxunicode), st.integers(0, sys.maxunicode),
)
def test_query_matches_categories_codepoints(exclude, include, m1, m2):
m1, m2 = sorted((m1, m2))
values = cm.query(exclude, include, min_codepoint=m1, max_codepoint=m2)
assert_valid_range_list(values)
for u, v in values:
assert m1 <= u
assert v <= m2
@given(st.sampled_from(cm.categories()), st.integers(0, sys.maxunicode))
def test_exclude_only_excludes_from_that_category(cat, i):
c = hunichr(i)
assume(unicodedata.category(c) != cat)
intervals = cm.query(exclude_categories=(cat,))
assert any(a <= i <= b for a, b in intervals)
def test_reload_charmap():
x = cm.charmap()
assert x is cm.charmap()
cm._charmap = None
y = cm.charmap()
assert x is not y
assert x == y
def test_recreate_charmap():
x = cm.charmap()
assert x is cm.charmap()
cm._charmap = None
os.unlink(cm.charmap_file())
y = cm.charmap()
assert x is not y
assert x == y
def test_union_empty():
assert cm._union_interval_lists([], [[1, 2]]) == [[1, 2]]
assert cm._union_interval_lists([[1, 2]], []) == [[1, 2]]
def test_successive_union():
x = []
for v in cm.charmap().values():
x = cm._union_interval_lists(x, v)
assert x == ((0, sys.maxunicode),)
hypothesis-3.0.1/tests/cover/test_choices.py 0000664 0000000 0000000 00000002707 12661275660 0021241 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
def test_exhaustion():
@given(st.lists(st.text(), min_size=10), st.choices())
def test(ls, choice):
while ls:
l = choice(ls)
assert l in ls
ls.remove(l)
test()
@given(st.choices(), st.choices())
def test_choice_is_shared(choice1, choice2):
assert choice1 is choice2
def test_cannot_use_choices_within_find():
with pytest.raises(InvalidArgument):
find(st.choices(), lambda c: True)
def test_fails_to_draw_from_empty_sequence():
@given(st.choices())
def test(choice):
choice([])
with pytest.raises(IndexError):
test()
hypothesis-3.0.1/tests/cover/test_classmap.py 0000664 0000000 0000000 00000003363 12661275660 0021426 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis.internal.classmap import ClassMap
class A(object):
pass
class B(A):
pass
class C(A):
pass
class D(C):
pass
class BC(B, C):
pass
def test_can_set_and_lookup_class():
x = ClassMap()
x[A] = 1
assert x[A] == 1
def test_parent_values_will_be_used_if_child_is_not_set():
x = ClassMap()
x[A] = 1
assert x[D] == 1
def test_child_values_will_be_used_if_set():
x = ClassMap()
x[A] = 1
x[B] = 2
assert x[B] == 2
def test_grand_parent_values_will_be_used_if_child_is_not_set():
x = ClassMap()
x[A] = 1
assert x[B] == 1
def test_setting_child_does_not_set_parent():
x = ClassMap()
x[B] = 1
with pytest.raises(KeyError):
x[A]
def test_prefers_first_parent_in_mro():
x = ClassMap()
x[C] = 3
x[B] = 2
assert x[BC] == 2
def test_all_mappings_yields_all_mappings():
x = ClassMap()
x[object] = 1
x[BC] = 2
x[B] = 3
x[C] = 4
x[A] = 5
assert list(x.all_mappings(BC)) == [2, 3, 4, 5, 1]
hypothesis-3.0.1/tests/cover/test_composite.py 0000664 0000000 0000000 00000005102 12661275660 0021616 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import find, given, assume
from hypothesis.errors import InvalidArgument
from hypothesis.internal.compat import hrange
@st.composite
def badly_draw_lists(draw, m=0):
length = draw(st.integers(m, m + 10))
return [
draw(st.integers()) for _ in hrange(length)
]
def test_simplify_draws():
assert find(badly_draw_lists(), lambda x: len(x) >= 3) == [0] * 3
def test_can_pass_through_arguments():
assert find(badly_draw_lists(5), lambda x: True) == [0] * 5
assert find(badly_draw_lists(m=6), lambda x: True) == [0] * 6
@st.composite
def draw_ordered_with_assume(draw):
x = draw(st.floats())
y = draw(st.floats())
assume(x < y)
return (x, y)
@given(draw_ordered_with_assume())
def test_can_assume_in_draw(xy):
assert xy[0] < xy[1]
def test_uses_definitions_for_reprs():
assert repr(badly_draw_lists()) == 'badly_draw_lists()'
assert repr(badly_draw_lists(1)) == 'badly_draw_lists(m=1)'
assert repr(badly_draw_lists(m=1)) == 'badly_draw_lists(m=1)'
def test_errors_given_default_for_draw():
with pytest.raises(InvalidArgument):
@st.composite
def foo(x=None):
pass
def test_errors_given_function_of_no_arguments():
with pytest.raises(InvalidArgument):
@st.composite
def foo():
pass
def test_errors_given_kwargs_only():
with pytest.raises(InvalidArgument):
@st.composite
def foo(**kwargs):
pass
def test_can_use_pure_args():
@st.composite
def stuff(*args):
return args[0](st.sampled_from(args[1:]))
assert find(stuff(1, 2, 3, 4, 5), lambda x: True) == 1
def test_composite_of_lists():
@st.composite
def f(draw):
return draw(st.integers()) + draw(st.integers())
assert find(st.lists(f()), lambda x: len(x) >= 10) == [0] * 10
hypothesis-3.0.1/tests/cover/test_conjecture_engine.py 0000664 0000000 0000000 00000027521 12661275660 0023313 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
from random import Random
from hypothesis import strategies as st
from hypothesis import given, settings
from hypothesis.database import ExampleDatabase
from hypothesis.internal.compat import hbytes, int_from_bytes, \
bytes_from_list
from hypothesis.internal.conjecture.data import Status, TestData
from hypothesis.internal.conjecture.engine import TestRunner
MAX_SHRINKS = 2000
def run_to_buffer(f):
runner = TestRunner(f, settings=settings(
max_examples=5000, max_iterations=10000, max_shrinks=MAX_SHRINKS,
buffer_size=1024,
database=None,
))
runner.run()
assert runner.last_data.status == Status.INTERESTING
return hbytes(runner.last_data.buffer)
def test_can_index_results():
@run_to_buffer
def f(data):
data.draw_bytes(5)
data.mark_interesting()
assert f.index(0) == 0
assert f.count(0) == 5
def test_non_cloneable_intervals():
@run_to_buffer
def x(data):
data.draw_bytes(10)
data.draw_bytes(9)
data.mark_interesting()
assert x == hbytes(19)
def test_duplicate_buffers():
@run_to_buffer
def x(data):
t = data.draw_bytes(10)
if not any(t):
data.mark_invalid()
s = data.draw_bytes(10)
if s == t:
data.mark_interesting()
assert x == bytes_from_list([0] * 9 + [1]) * 2
def test_clone_into_variable_draws():
@run_to_buffer
def x(data):
small = 0
large = 0
for _ in range(30):
data.start_example()
b = data.draw_bytes(1)[0] & 1
if b:
data.draw_bytes(3)
large += 1
else:
data.draw_bytes(2)
small += 1
data.stop_example()
if small < 10:
data.mark_invalid()
if large >= 10:
data.mark_interesting()
assert set(x) == {0, 1}
assert x.count(1) == 10
assert len(x) == 30 + (20 * 2) + (10 * 3)
def test_deletable_draws():
@run_to_buffer
def x(data):
while True:
x = data.draw_bytes(2)
if x[0] == 255:
data.mark_interesting()
assert x == hbytes([255, 0])
def zero_dist(random, n):
return hbytes(n)
def test_distribution_may_be_ignored():
@run_to_buffer
def x(data):
t = data.draw_bytes(5, zero_dist)
if all(t) and 255 in t:
data.mark_interesting()
assert x == hbytes([1] * 4 + [255])
def test_can_load_data_from_a_corpus():
key = b'hi there'
db = ExampleDatabase()
value = b'=\xc3\xe4l\x81\xe1\xc2H\xc9\xfb\x1a\xb6bM\xa8\x7f'
db.save(key, value)
def f(data):
if data.draw_bytes(len(value)) == value:
data.mark_interesting()
runner = TestRunner(
f, settings=settings(database=db), database_key=key)
runner.run()
assert runner.last_data.status == Status.INTERESTING
assert runner.last_data.buffer == value
assert len(list(db.fetch(key))) == 1
def test_terminates_shrinks():
shrinks = [-1]
def tf(data):
x = hbytes(data.draw_bytes(100))
if sum(x) >= 500:
shrinks[0] += 1
data.mark_interesting()
runner = TestRunner(tf, settings=settings(
max_examples=5000, max_iterations=10000, max_shrinks=10,
database=None,
))
runner.run()
assert runner.last_data.status == Status.INTERESTING
# There's an extra non-shrinking check step to abort in the presence of
# flakiness
assert shrinks[0] == 11
def test_detects_flakiness():
failed_once = [False]
count = [0]
def tf(data):
data.draw_bytes(1)
count[0] += 1
if not failed_once[0]:
failed_once[0] = True
data.mark_interesting()
runner = TestRunner(tf)
runner.run()
assert count == [2]
def test_variadic_draw():
def draw_list(data):
result = []
while True:
data.start_example()
d = data.draw_bytes(1)[0] & 7
if d:
result.append(data.draw_bytes(d))
data.stop_example()
if not d:
break
return result
@run_to_buffer
def b(data):
if any(all(d) for d in draw_list(data)):
data.mark_interesting()
l = draw_list(TestData.for_buffer(b))
assert len(l) == 1
assert len(l[0]) == 1
def test_draw_to_overrun():
@run_to_buffer
def x(data):
d = (data.draw_bytes(1)[0] - 8) & 0xff
data.draw_bytes(128 * d)
if d >= 2:
data.mark_interesting()
assert x == hbytes([10]) + hbytes(128 * 2)
def test_can_navigate_to_a_valid_example():
def f(data):
i = int_from_bytes(data.draw_bytes(2))
data.draw_bytes(i)
data.mark_interesting()
runner = TestRunner(f, settings=settings(
max_examples=5000, max_iterations=10000,
buffer_size=2,
database=None,
))
runner.run()
assert runner.last_data.status == Status.INTERESTING
return hbytes(runner.last_data.buffer)
def test_stops_after_max_iterations_when_generating():
key = b'key'
value = b'rubber baby buggy bumpers'
max_iterations = 100
db = ExampleDatabase(':memory:')
db.save(key, value)
seen = []
def f(data):
seen.append(data.draw_bytes(len(value)))
data.mark_invalid()
runner = TestRunner(f, settings=settings(
max_examples=1, max_iterations=max_iterations,
database=db,
), database_key=key)
runner.run()
assert len(seen) == max_iterations
assert value in seen
def test_stops_after_max_iterations_when_reading():
key = b'key'
max_iterations = 1
db = ExampleDatabase(':memory:')
for i in range(10):
db.save(key, hbytes([i]))
seen = []
def f(data):
seen.append(data.draw_bytes(1))
data.mark_invalid()
runner = TestRunner(f, settings=settings(
max_examples=1, max_iterations=max_iterations,
database=db,
), database_key=key)
runner.run()
assert len(seen) == max_iterations
def test_stops_after_max_examples_when_reading():
key = b'key'
db = ExampleDatabase(':memory:')
for i in range(10):
db.save(key, hbytes([i]))
seen = []
def f(data):
seen.append(data.draw_bytes(1))
runner = TestRunner(f, settings=settings(
max_examples=1,
database=db,
), database_key=key)
runner.run()
assert len(seen) == 1
def test_stops_after_max_examples_when_generating():
seen = []
def f(data):
seen.append(data.draw_bytes(1))
runner = TestRunner(f, settings=settings(
max_examples=1,
database=None,
))
runner.run()
assert len(seen) == 1
@given(st.random_module())
@settings(max_shrinks=0, timeout=3, min_satisfying_examples=1)
def test_interleaving_engines(rnd):
@run_to_buffer
def x(data):
rnd = Random(hbytes(data.draw_bytes(8)))
def g(d2):
while True:
b = d2.draw_bytes(1)[0]
result = data.draw_bytes(b)
if 255 in result:
d2.mark_interesting()
if 0 in result:
d2.mark_invalid()
runner = TestRunner(g, random=rnd)
runner.run()
if runner.last_data.status == Status.INTERESTING:
data.mark_interesting()
assert x[8:].count(255) == 1
def test_run_with_timeout_while_shrinking():
def f(data):
time.sleep(0.1)
x = data.draw_bytes(32)
if any(x):
data.mark_interesting()
runner = TestRunner(f, settings=settings(database=None, timeout=0.2,))
start = time.time()
runner.run()
assert time.time() <= start + 1
assert runner.last_data.status == Status.INTERESTING
def test_run_with_timeout_while_boring():
def f(data):
time.sleep(0.1)
runner = TestRunner(f, settings=settings(database=None, timeout=0.2,))
start = time.time()
runner.run()
assert time.time() <= start + 1
assert runner.last_data.status == Status.VALID
def test_max_shrinks_can_disable_shrinking():
seen = set()
def f(data):
seen.add(hbytes(data.draw_bytes(32)))
data.mark_interesting()
runner = TestRunner(f, settings=settings(database=None, max_shrinks=0,))
runner.run()
assert len(seen) == 1
def test_saves_data_while_shrinking():
key = b'hi there'
n = 5
db = ExampleDatabase(':memory:')
assert list(db.fetch(key)) == []
seen = set()
def f(data):
x = data.draw_bytes(512)
if sum(x) >= 5000 and len(seen) < n:
seen.add(hbytes(x))
if hbytes(x) in seen:
data.mark_interesting()
runner = TestRunner(
f, settings=settings(database=db), database_key=key)
runner.run()
assert runner.last_data.status == Status.INTERESTING
assert len(seen) == n
in_db = set(db.fetch(key))
assert in_db.issubset(seen)
assert in_db == seen
def test_can_discard():
n = 32
@run_to_buffer
def x(data):
seen = set()
while len(seen) < n:
seen.add(hbytes(data.draw_bytes(1)))
data.mark_interesting()
assert len(x) == n
def test_erratic_draws():
n = [0]
@run_to_buffer
def x(data):
data.draw_bytes(n[0])
data.draw_bytes(255 - n[0])
if n[0] == 255:
data.mark_interesting()
else:
n[0] += 1
assert x == hbytes(255)
def test_no_read_no_shrink():
count = [0]
@run_to_buffer
def x(data):
count[0] += 1
data.mark_interesting()
assert x == b''
assert count == [1]
def test_garbage_collects_the_database():
key = b'hi there'
n = 200
db = ExampleDatabase(':memory:')
assert list(db.fetch(key)) == []
seen = set()
go = True
def f(data):
x = hbytes(data.draw_bytes(512))
if not go:
return
if sum(x) >= 5000 and len(seen) < n:
seen.add(x)
if x in seen:
data.mark_interesting()
runner = TestRunner(
f, settings=settings(database=db, max_shrinks=2 * n), database_key=key)
runner.run()
assert runner.last_data.status == Status.INTERESTING
assert len(seen) == n
assert set(db.fetch(key)) == seen
go = False
runner = TestRunner(
f, settings=settings(database=db, max_shrinks=2 * n), database_key=key)
runner.run()
assert 0 < len(set(db.fetch(key))) < n
def test_variable_replacement():
@run_to_buffer
def x(data):
for _ in range(5):
data.start_example()
c = 0
while True:
d = data.draw_bytes(1)[0]
if not d:
break
c += d
data.stop_example()
if c < 1000:
data.mark_invalid()
data.mark_interesting()
assert x == x[:x.index(0) + 1] * 5
@given(st.randoms(), st.random_module())
def test_maliciously_bad_generator(rnd, seed):
rnd = Random()
@run_to_buffer
def x(data):
for _ in range(rnd.randint(0, 100)):
data.draw_bytes(rnd.randint(0, 10))
if rnd.randint(0, 1):
data.mark_invalid()
else:
data.mark_interesting()
hypothesis-3.0.1/tests/cover/test_conjecture_minimizer.py 0000664 0000000 0000000 00000002047 12661275660 0024045 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.internal.compat import hbytes
from hypothesis.internal.conjecture.minimizer import minimize
def test_shrink_to_zero():
assert minimize(hbytes([255] * 8), lambda x: True) == hbytes(8)
def test_shrink_to_smallest():
assert minimize(
hbytes([255] * 8), lambda x: sum(x) > 10
) == hbytes([0] * 7 + [11])
hypothesis-3.0.1/tests/cover/test_conjecture_test_data.py 0000664 0000000 0000000 00000005636 12661275660 0024021 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import strategies as st
from hypothesis import given
from hypothesis.errors import Frozen
from hypothesis.internal.conjecture.data import Status, StopTest, TestData
from hypothesis.searchstrategy.strategies import SearchStrategy
def bogus_dist(dist, n):
assert False
@given(st.binary())
def test_buffer_draws_as_self(buf):
x = TestData.for_buffer(buf)
assert x.draw_bytes(len(buf), bogus_dist) == buf
def test_cannot_draw_after_freeze():
x = TestData.for_buffer(b'hi')
x.draw_bytes(1)
x.freeze()
with pytest.raises(Frozen):
x.draw_bytes(1)
def test_can_double_freeze():
x = TestData.for_buffer(b'hi')
x.freeze()
assert x.frozen
x.freeze()
assert x.frozen
def test_can_draw_zero_bytes():
x = TestData.for_buffer(b'')
for _ in range(10):
assert x.draw_bytes(0) == b''
def test_draw_past_end_sets_overflow():
x = TestData.for_buffer(bytes(5))
with pytest.raises(StopTest) as e:
x.draw_bytes(6)
assert e.value.testcounter == x.testcounter
assert x.frozen
assert x.status == Status.OVERRUN
def test_notes_repr():
x = TestData.for_buffer(b'')
x.note(b'hi')
assert repr(b'hi') in x.output
def test_can_mark_interesting():
x = TestData.for_buffer(bytes())
with pytest.raises(StopTest):
x.mark_interesting()
assert x.frozen
assert x.status == Status.INTERESTING
def test_can_mark_invalid():
x = TestData.for_buffer(bytes())
with pytest.raises(StopTest):
x.mark_invalid()
assert x.frozen
assert x.status == Status.INVALID
class BoomStrategy(SearchStrategy):
def do_draw(self, data):
data.draw_bytes(1)
raise ValueError()
def test_closes_interval_on_error_in_strategy():
x = TestData.for_buffer(b'hi')
with pytest.raises(ValueError):
x.draw(BoomStrategy())
x.freeze()
assert len(x.intervals) == 1
class BigStrategy(SearchStrategy):
def do_draw(self, data):
data.draw_bytes(10 ** 6)
def test_does_not_double_freeze_in_interval_close():
x = TestData.for_buffer(b'hi')
with pytest.raises(StopTest):
x.draw(BigStrategy())
assert x.frozen
assert len(x.intervals) == 0
hypothesis-3.0.1/tests/cover/test_conjecture_utils.py 0000664 0000000 0000000 00000001672 12661275660 0023205 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.internal.conjecture.data import TestData
from hypothesis.internal.conjecture.utils import integer_range
def test_does_not_draw_data_for_empty_range():
assert integer_range(TestData.for_buffer(b''), 1, 1) == 1
hypothesis-3.0.1/tests/cover/test_control.py 0000664 0000000 0000000 00000006456 12661275660 0021311 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis.errors import CleanupFailed, InvalidArgument
from hypothesis.control import note, cleanup, BuildContext, \
current_build_context, _current_build_context
from tests.common.utils import capture_out
def test_cannot_cleanup_with_no_context():
with pytest.raises(InvalidArgument):
cleanup(lambda: None)
assert _current_build_context.value is None
def test_cleanup_executes_on_leaving_build_context():
data = []
with BuildContext():
cleanup(lambda: data.append(1))
assert not data
assert data == [1]
assert _current_build_context.value is None
def test_can_nest_build_context():
data = []
with BuildContext():
cleanup(lambda: data.append(1))
with BuildContext():
cleanup(lambda: data.append(2))
assert not data
assert data == [2]
assert data == [2, 1]
assert _current_build_context.value is None
def test_does_not_suppress_exceptions():
with pytest.raises(AssertionError):
with BuildContext():
assert False
assert _current_build_context.value is None
def test_suppresses_exceptions_in_teardown():
with capture_out() as o:
with pytest.raises(AssertionError):
with BuildContext():
def foo():
raise ValueError()
cleanup(foo)
assert False
assert u'ValueError' in o.getvalue()
assert _current_build_context.value is None
def test_runs_multiple_cleanup_with_teardown():
with capture_out() as o:
with pytest.raises(AssertionError):
with BuildContext():
def foo():
raise ValueError()
cleanup(foo)
def bar():
raise TypeError()
cleanup(foo)
cleanup(bar)
assert False
assert u'ValueError' in o.getvalue()
assert u'TypeError' in o.getvalue()
assert _current_build_context.value is None
def test_raises_error_if_cleanup_fails_but_block_does_not():
with pytest.raises(CleanupFailed):
with BuildContext():
def foo():
raise ValueError()
cleanup(foo)
assert _current_build_context.value is None
def test_raises_if_note_out_of_context():
with pytest.raises(InvalidArgument):
note('Hi')
def test_raises_if_current_build_context_out_of_context():
with pytest.raises(InvalidArgument):
current_build_context()
def test_current_build_context_is_current():
with BuildContext() as a:
assert current_build_context() is a
hypothesis-3.0.1/tests/cover/test_conventions.py 0000664 0000000 0000000 00000001566 12661275660 0022173 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.utils.conventions import UniqueIdentifier
def test_unique_identifier_repr():
assert repr(UniqueIdentifier(u'hello_world')) == u'hello_world'
hypothesis-3.0.1/tests/cover/test_core.py 0000664 0000000 0000000 00000004677 12661275660 0020564 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
import pytest
import hypothesis.strategies as s
from hypothesis import find, given, reject, settings
from hypothesis.errors import NoSuchExample, Unsatisfiable
def test_stops_after_max_examples_if_satisfying():
tracker = []
def track(x):
tracker.append(x)
return False
max_examples = 100
with pytest.raises(NoSuchExample):
find(
s.integers(0, 10000),
track, settings=settings(max_examples=max_examples))
assert len(tracker) == max_examples
def test_stops_after_max_iterations_if_not_satisfying():
tracker = set()
def track(x):
tracker.add(x)
reject()
max_examples = 100
max_iterations = 200
with pytest.raises(Unsatisfiable):
find(
s.integers(0, 10000),
track, settings=settings(
max_examples=max_examples, max_iterations=max_iterations))
# May be less because of duplication
assert len(tracker) <= max_iterations
def test_can_time_out_in_simplify():
def slow_always_true(x):
time.sleep(0.1)
return True
start = time.time()
find(
s.lists(s.booleans()), slow_always_true,
settings=settings(timeout=0.1, database=None)
)
finish = time.time()
run_time = finish - start
assert run_time <= 0.3
some_normal_settings = settings()
def test_is_not_normally_default():
assert settings.default is not some_normal_settings
@given(s.booleans())
@some_normal_settings
def test_settings_are_default_in_given(x):
assert settings.default is some_normal_settings
def test_settings_are_default_in_find():
find(
s.booleans(), lambda x: settings.default is some_normal_settings,
settings=some_normal_settings)
hypothesis-3.0.1/tests/cover/test_custom_reprs.py 0000664 0000000 0000000 00000003346 12661275660 0022351 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
def test_includes_non_default_args_in_repr():
assert repr(st.integers()) == 'integers()'
assert repr(st.integers(min_value=1)) == 'integers(min_value=1)'
def hi(there, stuff):
return there
def test_supports_positional_and_keyword_args_in_builds():
assert repr(st.builds(hi, st.integers(), there=st.booleans())) == \
'builds(hi, integers(), there=booleans())'
def test_includes_a_trailing_comma_in_single_element_sampling():
assert repr(st.sampled_from([0])) == 'sampled_from((0,))'
class IHaveABadRepr(object):
def __repr__(self):
raise ValueError('Oh no!')
def test_errors_are_deferred_until_repr_is_calculated():
s = st.builds(
lambda x, y: 1,
st.just(IHaveABadRepr()),
y=st.one_of(
st.sampled_from((IHaveABadRepr(),)), st.just(IHaveABadRepr()))
).map(lambda t: t).filter(lambda t: True).flatmap(
lambda t: st.just(IHaveABadRepr()))
with pytest.raises(ValueError):
repr(s)
hypothesis-3.0.1/tests/cover/test_database_agreement.py 0000664 0000000 0000000 00000004267 12661275660 0023422 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import shutil
import tempfile
import hypothesis.strategies as st
from hypothesis.database import SQLiteExampleDatabase, \
InMemoryExampleDatabase, DirectoryBasedExampleDatabase
from hypothesis.stateful import rule, Bundle, RuleBasedStateMachine
class DatabaseComparison(RuleBasedStateMachine):
def __init__(self):
super(DatabaseComparison, self).__init__()
self.tempd = tempfile.mkdtemp()
exampledir = os.path.join(self.tempd, 'examples')
self.dbs = [
DirectoryBasedExampleDatabase(exampledir),
InMemoryExampleDatabase(), SQLiteExampleDatabase(':memory:'),
DirectoryBasedExampleDatabase(exampledir),
]
keys = Bundle('keys')
values = Bundle('values')
@rule(target=keys, k=st.binary())
def k(self, k):
return k
@rule(target=values, v=st.binary())
def v(self, v):
return v
@rule(k=keys, v=values)
def save(self, k, v):
for db in self.dbs:
db.save(k, v)
@rule(k=keys, v=values)
def delete(self, k, v):
for db in self.dbs:
db.delete(k, v)
@rule(k=keys)
def values_agree(self, k):
last = None
for db in self.dbs:
keys = set(db.fetch(k))
if last is not None:
assert last == keys
last = keys
def teardown(self):
for d in self.dbs:
d.close()
shutil.rmtree(self.tempd)
TestDBs = DatabaseComparison.TestCase
hypothesis-3.0.1/tests/cover/test_database_backend.py 0000664 0000000 0000000 00000013221 12661275660 0023030 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import base64
import pytest
from hypothesis import given, settings
from hypothesis.database import ExampleDatabase, SQLiteExampleDatabase, \
InMemoryExampleDatabase, DirectoryBasedExampleDatabase
from hypothesis.strategies import lists, binary, tuples
from hypothesis.internal.compat import PY26, hrange
small_settings = settings(max_examples=100, timeout=4)
if PY26:
    # Workaround for a bug with embedded null characters in a text string
    # under Python 2.6
alphabet = [chr(i) for i in hrange(1, 128)]
else:
alphabet = None
@given(lists(tuples(binary(), binary())))
@small_settings
def test_backend_returns_what_you_put_in(xs):
backend = SQLiteExampleDatabase(':memory:')
mapping = {}
for key, value in xs:
mapping.setdefault(key, set()).add(value)
backend.save(key, value)
for key, values in mapping.items():
backend_contents = list(backend.fetch(key))
distinct_backend_contents = set(backend_contents)
assert len(backend_contents) == len(distinct_backend_contents)
assert distinct_backend_contents == set(values)
def test_does_not_commit_in_error_state():
backend = SQLiteExampleDatabase(':memory:')
backend.create_db_if_needed()
try:
with backend.cursor() as cursor:
cursor.execute("""
insert into hypothesis_data_mapping(key, value)
values("a", "b")
""")
raise ValueError()
except ValueError:
pass
assert list(backend.fetch(b'a')) == []
def test_can_double_close():
backend = SQLiteExampleDatabase(':memory:')
backend.create_db_if_needed()
backend.close()
backend.close()
def test_can_delete_keys():
backend = SQLiteExampleDatabase(':memory:')
backend.save(b'foo', b'bar')
backend.save(b'foo', b'baz')
backend.delete(b'foo', b'bar')
assert list(backend.fetch(b'foo')) == [b'baz']
def test_ignores_badly_stored_values():
backend = SQLiteExampleDatabase(':memory:')
backend.create_db_if_needed()
with backend.cursor() as cursor:
cursor.execute("""
insert into hypothesis_data_mapping(key, value)
values(?, ?)
""", (base64.b64encode(b'foo'), u'kittens'))
assert list(backend.fetch(b'foo')) == []
def test_default_database_is_in_memory():
assert isinstance(ExampleDatabase(), InMemoryExampleDatabase)
def test_default_on_disk_database_is_dir(tmpdir):
assert isinstance(
ExampleDatabase(tmpdir.join('foo')), DirectoryBasedExampleDatabase)
def test_selects_sqlite_database_if_name_matches(tmpdir):
assert isinstance(
ExampleDatabase(tmpdir.join('foo.db')), SQLiteExampleDatabase)
assert isinstance(
ExampleDatabase(tmpdir.join('foo.sqlite')), SQLiteExampleDatabase)
assert isinstance(
ExampleDatabase(tmpdir.join('foo.sqlite3')), SQLiteExampleDatabase)
def test_selects_directory_based_if_already_directory(tmpdir):
path = str(tmpdir.join('hi.sqlite3'))
DirectoryBasedExampleDatabase(path).save(b"foo", b"bar")
assert isinstance(ExampleDatabase(path), DirectoryBasedExampleDatabase)
def test_selects_sqlite_if_already_sqlite(tmpdir):
path = str(tmpdir.join('hi'))
SQLiteExampleDatabase(path).save(b"foo", b"bar")
assert isinstance(ExampleDatabase(path), SQLiteExampleDatabase)
def test_does_not_error_when_fetching_when_not_exist(tmpdir):
db = DirectoryBasedExampleDatabase(tmpdir.join('examples'))
db.fetch(b'foo')
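# Fixture parametrizing the generic database-contract tests below across the
# in-memory, SQLite and directory-based ExampleDatabase implementations.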
@pytest.fixture(scope='function', params=['memory', 'sql', 'directory'])
def exampledatabase(request, tmpdir):
if request.param == 'memory':
return ExampleDatabase()
if request.param == 'sql':
return SQLiteExampleDatabase(str(tmpdir.join('example.db')))
if request.param == 'directory':
return DirectoryBasedExampleDatabase(str(tmpdir.join('examples')))
assert False
def test_can_delete_a_key_that_is_not_present(exampledatabase):
exampledatabase.delete(b'foo', b'bar')
def test_can_fetch_a_key_that_is_not_present(exampledatabase):
assert list(exampledatabase.fetch(b'foo')) == []
def test_saving_a_key_twice_fetches_it_once(exampledatabase):
exampledatabase.save(b'foo', b'bar')
exampledatabase.save(b'foo', b'bar')
assert list(exampledatabase.fetch(b'foo')) == [b'bar']
def test_can_close_a_database_without_touching_it(exampledatabase):
exampledatabase.close()
def test_can_close_a_database_after_saving(exampledatabase):
exampledatabase.save(b'foo', b'bar')
def test_class_name_is_in_repr(exampledatabase):
assert type(exampledatabase).__name__ in repr(exampledatabase)
exampledatabase.close()
def test_two_directory_databases_can_interact(tmpdir):
path = str(tmpdir)
db1 = DirectoryBasedExampleDatabase(path)
db2 = DirectoryBasedExampleDatabase(path)
db1.save(b'foo', b'bar')
assert list(db2.fetch(b'foo')) == [b'bar']
db2.save(b'foo', b'bar')
db2.save(b'foo', b'baz')
assert sorted(db1.fetch(b'foo')) == [b'bar', b'baz']
hypothesis-3.0.1/tests/cover/test_database_usage.py 0000664 0000000 0000000 00000006131 12661275660 0022547 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import find, assume, settings
from hypothesis.errors import NoSuchExample, Unsatisfiable
from hypothesis.database import SQLiteExampleDatabase
def test_saves_incremental_steps_in_database():
key = b"a database key"
database = SQLiteExampleDatabase(':memory:')
find(
st.binary(min_size=10), lambda x: any(x),
settings=settings(database=database), database_key=key
)
assert len(set(database.fetch(key))) > 1
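# Once the predicate stops being interesting, repeated runs should gradually
# prune the stored examples for this key until none are left.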
def test_clears_out_database_as_things_get_boring():
key = b"a database key"
database = SQLiteExampleDatabase(':memory:')
do_we_care = True
def stuff():
try:
find(
st.binary(min_size=50), lambda x: do_we_care and any(x),
settings=settings(database=database, max_examples=10),
database_key=key
)
except NoSuchExample:
pass
stuff()
assert len(set(database.fetch(key))) > 1
do_we_care = False
stuff()
assert len(set(database.fetch(key))) > 0
for _ in range(100):
stuff()
if not set(database.fetch(key)):
break
else:
assert False
def test_trashes_all_invalid_examples():
key = b"a database key"
database = SQLiteExampleDatabase(':memory:')
finicky = False
def stuff():
try:
find(
st.binary(min_size=100),
lambda x: assume(not finicky) and any(x),
settings=settings(database=database, timeout=1),
database_key=key
)
except Unsatisfiable:
pass
stuff()
assert len(set(database.fetch(key))) > 1
finicky = True
stuff()
assert len(set(database.fetch(key))) == 0
def test_respects_max_examples_in_database_usage():
key = b"a database key"
database = SQLiteExampleDatabase(':memory:')
do_we_care = True
counter = [0]
def check(x):
counter[0] += 1
return do_we_care and any(x)
def stuff():
try:
find(
st.binary(min_size=100), check,
settings=settings(database=database, max_examples=10),
database_key=key
)
except NoSuchExample:
pass
stuff()
assert len(set(database.fetch(key))) > 10
do_we_care = False
counter[0] = 0
stuff()
assert counter == [10]
hypothesis-3.0.1/tests/cover/test_deferred_errors.py 0000664 0000000 0000000 00000003665 12661275660 0023004 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
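# Invalid strategy arguments are reported lazily: constructing the strategy
# succeeds, and InvalidArgument is only raised when the strategy is actually
# used (via example(), find(), or running a @given test).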
def test_does_not_error_on_initial_calculation():
st.floats(max_value=float('nan'))
st.sampled_from([])
st.lists(st.integers(), min_size=5, max_size=2)
st.floats(min_value=2.0, max_value=1.0)
def test_errors_each_time():
s = st.sampled_from([])
with pytest.raises(InvalidArgument):
s.example()
with pytest.raises(InvalidArgument):
s.example()
def test_errors_on_test_invocation():
@given(st.sampled_from([]))
def test(x):
pass
with pytest.raises(InvalidArgument):
test()
def test_errors_on_find():
s = st.lists(st.integers(), min_size=5, max_size=2)
with pytest.raises(InvalidArgument):
find(s, lambda x: True)
def test_errors_on_example():
s = st.floats(min_value=2.0, max_value=1.0)
with pytest.raises(InvalidArgument):
s.example()
def test_does_not_recalculate_the_strategy():
calls = [0]
@st.defines_strategy
def foo():
calls[0] += 1
return st.just(1)
f = foo()
assert calls == [0]
f.example()
assert calls == [1]
f.example()
assert calls == [1]
hypothesis-3.0.1/tests/cover/test_direct_strategies.py 0000664 0000000 0000000 00000015234 12661275660 0023327 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import pytest
import hypothesis.strategies as ds
from hypothesis import find, given, settings
from hypothesis.errors import InvalidArgument
from hypothesis.internal.reflection import nicerepr
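# Helpers that build pytest.mark.parametrize decorators from (strategy
# function, args/kwargs) pairs, with human-readable test ids derived from the
# call signature.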
def fn_test(*fnkwargs):
fnkwargs = list(fnkwargs)
return pytest.mark.parametrize(
(u'fn', u'args'), fnkwargs,
ids=[
u'%s(%s)' % (fn.__name__, u', '.join(map(nicerepr, args)))
for fn, args in fnkwargs
]
)
def fn_ktest(*fnkwargs):
fnkwargs = list(fnkwargs)
return pytest.mark.parametrize(
(u'fn', u'kwargs'), fnkwargs,
ids=[
u'%s(%s)' % (fn.__name__, u', '.join(sorted(
u'%s=%r' % (k, v)
for k, v in kwargs.items()
)),)
for fn, kwargs in fnkwargs
]
)
@fn_ktest(
(ds.integers, {u'min_value': float(u'nan')}),
(ds.integers, {u'min_value': 2, u'max_value': 1}),
(ds.sampled_from, {u'elements': ()}),
(ds.lists, {}),
(ds.lists, {u'average_size': float(u'nan')}),
(ds.lists, {u'min_size': 10, u'max_size': 9}),
(ds.lists, {u'min_size': -10, u'max_size': -9}),
(ds.lists, {u'max_size': -9}),
(ds.lists, {u'max_size': 10}),
(ds.lists, {u'min_size': -10}),
(ds.lists, {u'max_size': 10, u'average_size': 20}),
(ds.lists, {u'min_size': 1.0, u'average_size': 0.5}),
(ds.lists, {u'elements': u'hi'}),
(ds.text, {u'min_size': 10, u'max_size': 9}),
(ds.text, {u'max_size': 10, u'average_size': 20}),
(ds.binary, {u'min_size': 10, u'max_size': 9}),
(ds.binary, {u'max_size': 10, u'average_size': 20}),
(ds.floats, {u'min_value': float(u'nan')}),
(ds.floats, {u'max_value': 0.0, u'min_value': 1.0}),
(ds.fixed_dictionaries, {u'mapping': u'fish'}),
(ds.fixed_dictionaries, {u'mapping': {1: u'fish'}}),
(ds.dictionaries, {u'keys': ds.integers(), u'values': 1}),
(ds.dictionaries, {u'keys': 1, u'values': ds.integers()}),
(ds.text, {u'alphabet': u'', u'min_size': 1}),
)
def test_validates_keyword_arguments(fn, kwargs):
with pytest.raises(InvalidArgument):
fn(**kwargs).example()
@fn_ktest(
(ds.integers, {u'min_value': 0}),
(ds.integers, {u'min_value': 11}),
(ds.integers, {u'min_value': 11, u'max_value': 100}),
(ds.integers, {u'max_value': 0}),
(ds.lists, {u'max_size': 0}),
(ds.lists, {u'elements': ds.integers()}),
(ds.lists, {u'elements': ds.integers(), u'max_size': 5}),
(ds.lists, {u'elements': ds.booleans(), u'min_size': 5}),
(ds.lists, {u'elements': ds.booleans(), u'min_size': 5, u'max_size': 10}),
(ds.lists, {
u'average_size': 20, u'elements': ds.booleans(), u'max_size': 25}),
(ds.sets, {
u'min_size': 10, u'max_size': 10, u'elements': ds.integers(),
}),
(ds.booleans, {}),
(ds.just, {u'value': u'hi'}),
(ds.integers, {u'min_value': 12, u'max_value': 12}),
(ds.floats, {}),
(ds.floats, {u'min_value': 1.0}),
(ds.floats, {u'max_value': 1.0}),
(ds.floats, {u'max_value': 1.0, u'min_value': -1.0}),
(ds.sampled_from, {u'elements': [1]}),
(ds.sampled_from, {u'elements': [1, 2, 3]}),
(ds.fixed_dictionaries, {u'mapping': {1: ds.integers()}}),
(ds.dictionaries, {u'keys': ds.booleans(), u'values': ds.integers()}),
(ds.text, {u'alphabet': u'abc'}),
(ds.text, {u'alphabet': u''}),
(ds.text, {u'alphabet': ds.sampled_from(u'abc')}),
)
def test_produces_valid_examples_from_keyword(fn, kwargs):
fn(**kwargs).example()
@fn_test(
(ds.one_of, (1,))
)
def test_validates_args(fn, args):
with pytest.raises(InvalidArgument):
fn(*args).example()
@fn_test(
(ds.one_of, (ds.booleans(), ds.tuples(ds.booleans()))),
(ds.one_of, (ds.booleans(),)),
(ds.text, ()),
(ds.binary, ()),
(ds.builds, (lambda x, y: x + y, ds.integers(), ds.integers())),
)
def test_produces_valid_examples_from_args(fn, args):
fn(*args).example()
def test_tuples_raise_error_on_bad_kwargs():
with pytest.raises(TypeError):
ds.tuples(stuff=u'things')
@given(ds.lists(ds.booleans(), min_size=10, max_size=10))
def test_has_specified_length(xs):
assert len(xs) == 10
@given(ds.integers(max_value=100))
@settings(max_examples=100)
def test_has_upper_bound(x):
assert x <= 100
@given(ds.integers(min_value=100))
def test_has_lower_bound(x):
assert x >= 100
@given(ds.integers(min_value=1, max_value=2))
def test_is_in_bounds(x):
assert 1 <= x <= 2
def test_float_can_find_max_value_inf():
assert find(
ds.floats(max_value=float(u'inf')), lambda x: math.isinf(x)
) == float(u'inf')
assert find(
ds.floats(min_value=0.0), lambda x: math.isinf(x)) == float(u'inf')
def test_float_can_find_min_value_inf():
find(ds.floats(), lambda x: x < 0 and math.isinf(x))
find(
ds.floats(min_value=float(u'-inf'), max_value=0.0),
lambda x: math.isinf(x))
def test_can_find_none_list():
assert find(ds.lists(ds.none()), lambda x: len(x) >= 3) == [None] * 3
def test_fractions():
assert find(ds.fractions(), lambda f: f >= 1) == 1
def test_decimals():
assert find(ds.decimals(), lambda f: f.is_finite() and f >= 1) == 1
def test_non_float_decimal():
find(ds.decimals(), lambda d: ds.float_to_decimal(float(d)) != d)
def test_produces_dictionaries_of_at_least_minimum_size():
t = find(
ds.dictionaries(ds.booleans(), ds.integers(), min_size=2),
lambda x: True)
assert t == {False: 0, True: 0}
@given(ds.dictionaries(ds.integers(), ds.integers(), max_size=5))
@settings(max_examples=50)
def test_dictionaries_respect_size(d):
assert len(d) <= 5
@given(ds.dictionaries(ds.integers(), ds.integers(), max_size=0))
@settings(max_examples=50)
def test_dictionaries_respect_zero_size(d):
assert len(d) <= 5
@given(
ds.lists(ds.none(), max_size=5)
)
def test_none_lists_respect_max_size(ls):
assert len(ls) <= 5
@given(
ds.lists(ds.none(), max_size=5, min_size=1)
)
def test_none_lists_respect_max_and_min_size(ls):
assert 1 <= len(ls) <= 5
hypothesis-3.0.1/tests/cover/test_draw_example.py 0000664 0000000 0000000 00000002147 12661275660 0022272 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from tests.common import standard_types
from hypothesis.strategies import lists
@pytest.mark.parametrize(
u'spec', standard_types, ids=list(map(repr, standard_types)))
def test_single_example(spec):
spec.example()
@pytest.mark.parametrize(
u'spec', standard_types, ids=list(map(repr, standard_types)))
def test_list_example(spec):
lists(spec, average_size=2).example()
hypothesis-3.0.1/tests/cover/test_dynamic_variable.py 0000664 0000000 0000000 00000002205 12661275660 0023106 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.utils.dynamicvariables import DynamicVariable
def test_can_assign():
d = DynamicVariable(1)
assert d.value == 1
with d.with_value(2):
assert d.value == 2
assert d.value == 1
def test_can_nest():
d = DynamicVariable(1)
with d.with_value(2):
assert d.value == 2
with d.with_value(3):
assert d.value == 3
assert d.value == 2
assert d.value == 1
hypothesis-3.0.1/tests/cover/test_eval_as_source.py 0000664 0000000 0000000 00000002406 12661275660 0022612 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.internal.reflection import source_exec_as_module
def test_can_eval_as_source():
assert source_exec_as_module('foo=1').foo == 1
def test_caches():
x = source_exec_as_module('foo=2')
y = source_exec_as_module('foo=2')
assert x is y
RECURSIVE = """
from hypothesis.internal.reflection import source_exec_as_module
def test_recurse():
assert not (
source_exec_as_module("too_much_recursion = False").too_much_recursion)
"""
def test_can_call_self_recursively():
source_exec_as_module(RECURSIVE).test_recurse()
hypothesis-3.0.1/tests/cover/test_example.py 0000664 0000000 0000000 00000002206 12661275660 0021251 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from random import Random
import hypothesis.strategies as st
from hypothesis import given
@given(st.integers())
def test_deterministic_examples_are_deterministic(seed):
assert st.lists(st.integers()).example(Random(seed)) == \
st.lists(st.integers()).example(Random(seed))
def test_does_not_always_give_the_same_example():
s = st.integers()
assert len(set(
s.example() for _ in range(100)
)) >= 10
hypothesis-3.0.1/tests/cover/test_executors.py 0000664 0000000 0000000 00000004271 12661275660 0021643 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import inspect
from unittest import TestCase
import pytest
from hypothesis import given, example
from hypothesis.executors import TestRunner
from hypothesis.strategies import booleans, integers
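# Custom executor tests: an object defining execute_example controls how each
# example is run, and Hypothesis must respect what the executor does with the
# test function's result (including errors it raises or swallows).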
def test_must_use_result_of_test():
class DoubleRun(object):
def execute_example(self, function):
x = function()
if inspect.isfunction(x):
return x()
@given(booleans())
def boom(self, b):
def f():
raise ValueError()
return f
with pytest.raises(ValueError):
DoubleRun().boom()
class TestTryReallyHard(TestCase):
@given(integers())
def test_something(self, i):
pass
def execute_example(self, f):
f()
return f()
class Valueless(object):
def execute_example(self, f):
try:
return f()
except ValueError:
return None
@given(integers())
@example(1)
def test_no_boom_on_example(self, x):
raise ValueError()
@given(integers())
def test_no_boom(self, x):
raise ValueError()
@given(integers())
def test_boom(self, x):
assert False
def test_boom():
with pytest.raises(AssertionError):
Valueless().test_boom()
def test_no_boom():
Valueless().test_no_boom()
def test_no_boom_on_example():
Valueless().test_no_boom_on_example()
class TestNormal(TestRunner, TestCase):
@given(booleans())
def test_stuff(self, b):
pass
hypothesis-3.0.1/tests/cover/test_explicit_examples.py 0000664 0000000 0000000 00000012561 12661275660 0023342 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from unittest import TestCase
import pytest
from hypothesis import note, given, example, settings, reporting
from hypothesis.errors import InvalidArgument
from tests.common.utils import capture_out
from hypothesis.strategies import text, integers
from hypothesis.internal.compat import integer_types
class TestInstanceMethods(TestCase):
@given(integers())
@example(1)
def test_hi_1(self, x):
assert isinstance(x, integer_types)
@given(integers())
@example(x=1)
def test_hi_2(self, x):
assert isinstance(x, integer_types)
@given(x=integers())
@example(x=1)
def test_hi_3(self, x):
assert isinstance(x, integer_types)
def test_kwarg_example_on_testcase():
class Stuff(TestCase):
@given(integers())
@example(x=1)
def test_hi(self, x):
assert isinstance(x, integer_types)
Stuff(u'test_hi').test_hi()
def test_errors_when_run_with_not_enough_args():
@given(integers(), int)
@example(1)
def foo(x, y):
pass
with pytest.raises(TypeError):
foo()
def test_errors_when_run_with_not_enough_kwargs():
@given(integers(), int)
@example(x=1)
def foo(x, y):
pass
with pytest.raises(TypeError):
foo()
def test_can_use_examples_after_given():
long_str = u"This is a very long string that you've no chance of hitting"
@example(long_str)
@given(text())
def test_not_long_str(x):
assert x != long_str
with pytest.raises(AssertionError):
test_not_long_str()
def test_can_use_examples_before_given():
long_str = u"This is a very long string that you've no chance of hitting"
@given(text())
@example(long_str)
def test_not_long_str(x):
assert x != long_str
with pytest.raises(AssertionError):
test_not_long_str()
def test_can_use_examples_around_given():
long_str = u"This is a very long string that you've no chance of hitting"
short_str = u'Still no chance'
seen = []
@example(short_str)
@given(text())
@example(long_str)
def test_not_long_str(x):
seen.append(x)
test_not_long_str()
assert set(seen[:2]) == set((long_str, short_str))
@pytest.mark.parametrize((u'x', u'y'), [(1, False), (2, True)])
@example(z=10)
@given(z=integers())
def test_is_a_thing(x, y, z):
pass
def test_no_args_and_kwargs():
with pytest.raises(InvalidArgument):
example(1, y=2)
def test_no_empty_examples():
with pytest.raises(InvalidArgument):
example()
def test_does_not_print_on_explicit_examples_if_no_failure():
@example(1)
@given(integers())
def test_positive(x):
assert x > 0
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
with capture_out() as out:
test_positive()
out = out.getvalue()
assert u'Falsifying example: test_positive(1)' not in out
def test_prints_output_for_explicit_examples():
@example(-1)
@given(integers())
def test_positive(x):
assert x > 0
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
with capture_out() as out:
test_positive()
out = out.getvalue()
assert u'Falsifying example: test_positive(x=-1)' in out
def test_captures_original_repr_of_example():
@example(x=[])
@given(integers())
def test_mutation(x):
x.append(1)
assert not x
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
with capture_out() as out:
test_mutation()
out = out.getvalue()
assert u'Falsifying example: test_mutation(x=[])' in out
def test_examples_are_tried_in_order():
@example(x=1)
@example(x=2)
@given(integers())
@settings(max_examples=0)
@example(x=3)
def test(x):
print(u"x -> %d" % (x,))
with capture_out() as out:
with reporting.with_reporter(reporting.default):
test()
ls = out.getvalue().splitlines()
assert ls == [u"x -> 1", 'x -> 2', 'x -> 3']
def test_prints_note_in_failing_example():
@example(x=42)
@example(x=43)
@given(integers())
def test(x):
note('x -> %d' % (x,))
assert x == 42
with capture_out() as out:
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
test()
v = out.getvalue()
print(v)
assert 'x -> 43' in v
assert 'x -> 42' not in v
def test_must_agree_with_number_of_arguments():
@example(1, 2)
@given(integers())
def test(a):
pass
with pytest.raises(InvalidArgument):
test()
hypothesis-3.0.1/tests/cover/test_fancy_repr.py 0000664 0000000 0000000 00000003066 12661275660 0021753 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
def test_floats_is_floats():
assert repr(st.floats()) == u'floats()'
def test_includes_non_default_values():
assert repr(st.floats(max_value=1.0)) == u'floats(max_value=1.0)'
def foo(*args, **kwargs):
pass
def test_builds_repr():
assert repr(st.builds(foo, st.just(1), x=st.just(10))) == \
u'builds(foo, just(1), x=just(10))'
def test_map_repr():
assert repr(st.integers().map(abs)) == u'integers().map(abs)'
assert repr(st.integers().map(lambda x: x * 2)) == \
u'integers().map(lambda x: x * 2)'
def test_filter_repr():
assert repr(st.integers().filter(lambda x: x != 3)) == \
u'integers().filter(lambda x: x != 3)'
def test_flatmap_repr():
assert repr(st.integers().flatmap(lambda x: st.booleans())) == \
u'integers().flatmap(lambda x: st.booleans())'
hypothesis-3.0.1/tests/cover/test_filestorage.py 0000664 0000000 0000000 00000003272 12661275660 0022126 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import hypothesis.configuration as fs
previous_home_dir = None
def setup_function(function):
global previous_home_dir
previous_home_dir = fs.hypothesis_home_dir()
fs.set_hypothesis_home_dir(None)
def teardown_function(function):
global previous_home_dir
fs.set_hypothesis_home_dir(previous_home_dir)
previous_home_dir = None
def test_homedir_exists_automatically():
assert os.path.exists(fs.hypothesis_home_dir())
def test_can_set_homedir_and_it_will_exist(tmpdir):
fs.set_hypothesis_home_dir(str(tmpdir.mkdir(u'kittens')))
d = fs.hypothesis_home_dir()
assert u'kittens' in d
assert os.path.exists(d)
def test_will_pick_up_location_from_env(tmpdir):
os.environ[u'HYPOTHESIS_STORAGE_DIRECTORY'] = str(tmpdir)
assert fs.hypothesis_home_dir() == str(tmpdir)
def test_storage_directories_are_created_automatically(tmpdir):
fs.set_hypothesis_home_dir(str(tmpdir))
assert os.path.exists(fs.storage_directory(u'badgers'))
hypothesis-3.0.1/tests/cover/test_filtering.py 0000664 0000000 0000000 00000002133 12661275660 0021600 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import given
from hypothesis.strategies import lists, integers
@pytest.mark.parametrize((u'specifier', u'condition'), [
(integers(), lambda x: x > 1),
(lists(integers()), bool),
])
def test_filter_correctly(specifier, condition):
@given(specifier.filter(condition))
def test_is_filtered(x):
assert condition(x)
test_is_filtered()
hypothesis-3.0.1/tests/cover/test_find.py 0000664 0000000 0000000 00000005156 12661275660 0020545 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import time
import pytest
from hypothesis import settings as Settings
from hypothesis import find
from hypothesis.errors import Timeout, NoSuchExample
from hypothesis.strategies import lists, floats, booleans, integers, \
dictionaries
def test_can_find_an_int():
assert find(integers(), lambda x: True) == 0
assert find(integers(), lambda x: x >= 13) == 13
def test_can_find_list():
x = find(lists(integers()), lambda x: sum(x) >= 10)
assert sum(x) == 10
def test_can_find_nan():
find(floats(), math.isnan)
def test_can_find_nans():
x = find(lists(floats()), lambda x: math.isnan(sum(x)))
if len(x) == 1:
assert math.isnan(x[0])
else:
assert 2 <= len(x) <= 3
def test_raises_when_no_example():
settings = Settings(
max_examples=20,
min_satisfying_examples=0,
)
with pytest.raises(NoSuchExample):
find(integers(), lambda x: False, settings=settings)
def test_condition_is_name():
settings = Settings(
max_examples=20,
min_satisfying_examples=0,
)
with pytest.raises(NoSuchExample) as e:
find(booleans(), lambda x: False, settings=settings)
assert 'lambda x:' in e.value.args[0]
with pytest.raises(NoSuchExample) as e:
find(integers(), lambda x: '☃' in str(x), settings=settings)
assert 'lambda x:' in e.value.args[0]
def bad(x):
return False
with pytest.raises(NoSuchExample) as e:
find(integers(), bad, settings=settings)
assert 'bad' in e.value.args[0]
def test_find_dictionary():
assert len(find(
dictionaries(keys=integers(), values=integers()),
lambda xs: any(kv[0] > kv[1] for kv in xs.items()))) == 1
def test_times_out():
with pytest.raises(Timeout) as e:
find(
integers(),
lambda x: time.sleep(0.05) or False,
settings=Settings(timeout=0.01))
e.value.args[0]
hypothesis-3.0.1/tests/cover/test_fixed_strategies.py 0000664 0000000 0000000 00000003657 12661275660 0023162 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import find, given
from hypothesis.internal.compat import int_to_bytes
from hypothesis.searchstrategy.fixed import FixedStrategy
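# Minimal FixedStrategy subclass: draws block_size random bytes and
# round-trips them unchanged, so shrinking operates directly on the byte
# representation.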
class Blocks(FixedStrategy):
def draw_value(self, random):
return int_to_bytes(
random.getrandbits(self.block_size * 8), self.block_size)
def to_bytes(self, value):
return value
def from_bytes(self, value):
return value
@given(Blocks(3))
def test_blocks_are_of_fixed_size(x):
assert len(x) == 3
def test_blocks_shrink_bytewise():
assert find(Blocks(5), lambda x: True) == b'\0' * 5
class BadBlocks(Blocks):
def is_acceptable(self, value):
return False
def test_bad_blocks_error():
with pytest.raises(AssertionError):
find(BadBlocks(5), lambda x: True)
class BadlySizedBlocks(Blocks):
def to_bytes(self, value):
return value + b'\0'
def test_badly_sized_blocks_error():
with pytest.raises(AssertionError):
find(BadlySizedBlocks(5), lambda x: True)
class FilteredBlocks(Blocks):
def is_acceptable(self, value):
return value[-1] & 1
@given(FilteredBlocks(3))
def test_filtered_blocks_are_acceptable(x):
assert len(x) == 3
assert x[-1] & 1
hypothesis-3.0.1/tests/cover/test_flakiness.py 0000664 0000000 0000000 00000006010 12661275660 0021572 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import given, assume, reject, example, settings, Verbosity
from hypothesis.errors import Flaky, Unsatisfiable, UnsatisfiedAssumption
from hypothesis.strategies import lists, booleans, integers, composite, \
random_module
class Nope(Exception):
pass
def test_fails_only_once_is_flaky():
first_call = [True]
@given(integers())
def rude(x):
if first_call[0]:
first_call[0] = False
raise Nope()
with pytest.raises(Flaky):
rude()
def test_gives_flaky_error_if_assumption_is_flaky():
seen = set()
@given(integers())
@settings(verbosity=Verbosity.quiet)
def oops(s):
assume(s not in seen)
seen.add(s)
assert False
with pytest.raises(Flaky):
oops()
def test_does_not_attempt_to_shrink_flaky_errors():
values = []
@given(integers())
def test(x):
values.append(x)
assert len(values) != 1
with pytest.raises(Flaky):
test()
assert len(set(values)) == 1
class SatisfyMe(Exception):
pass
@composite
def single_bool_lists(draw):
n = draw(integers(0, 20))
result = [False] * (n + 1)
result[n] = True
return result
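# Drives the build and test phases through scripted sequences of pass, reject
# and failure outcomes to exercise Hypothesis's flakiness and
# unsatisfiability handling.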
@example([True, False, False, False], [3], None,)
@example([False, True, False, False], [3], None,)
@example([False, False, True, False], [3], None,)
@example([False, False, False, True], [3], None,)
@settings(max_examples=0)
@given(
lists(booleans(), average_size=20) | single_bool_lists(),
lists(integers(1, 3), average_size=20), random_module())
def test_failure_sequence_inducing(building, testing, rnd):
buildit = iter(building)
testit = iter(testing)
def build(x):
try:
assume(not next(buildit))
except StopIteration:
pass
return x
@given(integers().map(build))
@settings(
verbosity=Verbosity.quiet, database=None,
perform_health_check=False, max_shrinks=0
)
def test(x):
try:
i = next(testit)
except StopIteration:
return
if i == 1:
return
elif i == 2:
reject()
else:
raise Nope()
try:
test()
except (Nope, Unsatisfiable, Flaky):
pass
except UnsatisfiedAssumption:
raise SatisfyMe()
hypothesis-3.0.1/tests/cover/test_flatmap.py 0000664 0000000 0000000 00000005050 12661275660 0021242 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import find, given, assume, settings
from hypothesis.database import ExampleDatabase
from hypothesis.strategies import just, text, lists, floats, tuples, \
booleans, integers
from hypothesis.internal.compat import Counter
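# Two flatmap-built strategies: ConstantLists yields lists whose elements are
# all the same integer; OrderedPairs yields (low, high) pairs with low < high.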
ConstantLists = integers().flatmap(lambda i: lists(just(i)))
OrderedPairs = integers(1, 200).flatmap(
lambda e: tuples(integers(0, e - 1), just(e))
)
with settings(max_examples=200):
@given(ConstantLists)
def test_constant_lists_are_constant(x):
assume(len(x) >= 3)
assert len(set(x)) == 1
@given(OrderedPairs)
def test_in_order(x):
assert x[0] < x[1]
def test_flatmap_retrieve_from_db():
constant_float_lists = floats(0, 1).flatmap(
lambda x: lists(just(x))
)
track = []
db = ExampleDatabase()
@given(constant_float_lists)
@settings(database=db)
def record_and_test_size(xs):
if sum(xs) >= 1:
track.append(xs)
assert False
with pytest.raises(AssertionError):
record_and_test_size()
assert track
example = track[-1]
track = []
with pytest.raises(AssertionError):
record_and_test_size()
assert track[0] == example
def test_flatmap_does_not_reuse_strategies():
s = lists(max_size=0).flatmap(just)
assert s.example() is not s.example()
def test_flatmap_has_original_strategy_repr():
ints = integers()
ints_up = ints.flatmap(lambda n: integers(min_value=n))
assert repr(ints) in repr(ints_up)
def test_mixed_list_flatmap():
s = lists(
booleans().flatmap(lambda b: booleans() if b else text())
)
def criterion(ls):
c = Counter(type(l) for l in ls)
return len(c) >= 2 and min(c.values()) >= 3
result = find(s, criterion)
assert len(result) == 6
assert set(result) == set([False, u''])
hypothesis-3.0.1/tests/cover/test_float_nastiness.py 0000664 0000000 0000000 00000006300 12661275660 0023011 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
import math
import pytest
import hypothesis.strategies as st
from hypothesis import find, given, assume, settings
from hypothesis.internal.compat import WINDOWS
@pytest.mark.parametrize((u'l', u'r'), [
# Exact values don't matter, but they're large enough so that x + y = inf.
(9.9792015476736e+291, 1.7976931348623157e+308),
(-sys.float_info.max, sys.float_info.max)
])
def test_floats_are_in_range(l, r):
@given(st.floats(l, r))
def test_is_in_range(t):
assert l <= t <= r
test_is_in_range()
def test_can_generate_both_zeros():
find(
st.floats(),
lambda x: assume(x >= 0) and math.copysign(1, x) < 0,
settings=settings(max_examples=10000)
)
@pytest.mark.parametrize((u'l', u'r'), [
(-1.0, 1.0),
(-0.0, 1.0),
(-1.0, 0.0),
(-sys.float_info.min, sys.float_info.min),
])
def test_can_generate_both_zeros_when_in_interval(l, r):
interval = st.floats(l, r)
find(interval, lambda x: assume(x == 0) and math.copysign(1, x) == 1)
find(interval, lambda x: assume(x == 0) and math.copysign(1, x) == -1)
@given(st.floats(0.0, 1.0))
def test_does_not_generate_negative_if_right_boundary_is_positive(x):
assert math.copysign(1, x) == 1
@given(st.floats(-1.0, -0.0))
def test_does_not_generate_positive_if_right_boundary_is_negative(x):
assert math.copysign(1, x) == -1
@pytest.mark.parametrize((u'l', u'r'), [
(0.0, 1.0),
(-1.0, 0.0),
(-sys.float_info.min, sys.float_info.min),
])
def test_can_generate_interval_endpoints(l, r):
interval = st.floats(l, r)
find(interval, lambda x: x == l)
find(interval, lambda x: x == r)
def test_half_bounded_generates_endpoint():
find(st.floats(min_value=-1.0), lambda x: x == -1.0)
find(st.floats(max_value=-1.0), lambda x: x == -1.0)
def test_half_bounded_generates_zero():
find(st.floats(min_value=-1.0), lambda x: x == 0.0)
find(st.floats(max_value=1.0), lambda x: x == 0.0)
@pytest.mark.xfail(
WINDOWS,
reason=(
'Seems to be triggering a floating point bug on 2.7 + windows + x64'))
@given(st.floats(max_value=-0.0))
def test_half_bounded_respects_sign_of_upper_bound(x):
assert math.copysign(1, x) == -1
@given(st.floats(min_value=0.0))
def test_half_bounded_respects_sign_of_lower_bound(x):
assert math.copysign(1, x) == 1
@given(st.floats(allow_nan=False))
def test_filter_nan(x):
assert not math.isnan(x)
@given(st.floats(allow_infinity=False))
def test_filter_infinity(x):
assert not math.isinf(x)
hypothesis-3.0.1/tests/cover/test_given_error_conditions.py 0000664 0000000 0000000 00000003230 12661275660 0024366 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
import pytest
from hypothesis import given, assume, reject, settings
from hypothesis.errors import Timeout, Unsatisfiable
from hypothesis.strategies import booleans, integers
def test_raises_timeout_on_slow_test():
@given(integers())
@settings(timeout=0.01)
def test_is_slow(x):
time.sleep(0.02)
with pytest.raises(Timeout):
test_is_slow()
def test_raises_unsatisfiable_if_all_false():
@given(integers())
@settings(max_examples=50)
def test_assume_false(x):
reject()
with pytest.raises(Unsatisfiable):
test_assume_false()
def test_raises_unsatisfiable_if_all_false_in_finite_set():
@given(booleans())
def test_assume_false(x):
reject()
with pytest.raises(Unsatisfiable):
test_assume_false()
def test_does_not_raise_unsatisfiable_if_some_false_in_finite_set():
@given(booleans())
def test_assume_x(x):
assume(x)
test_assume_x()
hypothesis-3.0.1/tests/cover/test_health_checks.py 0000664 0000000 0000000 00000010407 12661275660 0022405 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
from pytest import raises
import hypothesis.reporting as reporting
import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis.errors import FailedHealthCheck
from hypothesis.control import assume
from hypothesis.internal.compat import int_from_bytes
from hypothesis.searchstrategy.strategies import SearchStrategy
def test_slow_generation_fails_a_health_check():
@given(st.integers().map(lambda x: time.sleep(0.2)))
def test(x):
pass
with raises(FailedHealthCheck):
test()
def test_global_random_in_strategy_fails_a_health_check():
import random
@given(st.lists(st.integers(), min_size=1).map(random.choice))
def test(x):
pass
with raises(FailedHealthCheck):
test()
def test_global_random_in_test_fails_a_health_check():
import random
@given(st.lists(st.integers(), min_size=1))
def test(x):
random.choice(x)
with raises(FailedHealthCheck):
test()
def test_default_health_check_can_weaken_specific():
import random
@given(st.lists(st.integers(), min_size=1))
def test(x):
random.choice(x)
with settings(perform_health_check=False):
test()
def test_error_in_strategy_produces_health_check_error():
def boom(x):
raise ValueError()
@given(st.integers().map(boom))
def test(x):
pass
with raises(FailedHealthCheck) as e:
with reporting.with_reporter(reporting.default):
test()
assert 'executor' not in e.value.args[0]
def test_error_in_strategy_with_custom_executor():
def boom(x):
raise ValueError()
class Foo(object):
def execute_example(self, f):
return f()
@given(st.integers().map(boom))
@settings(database=None)
def test(self, x):
pass
with raises(FailedHealthCheck) as e:
Foo().test()
assert 'executor' in e.value.args[0]
def test_filtering_everything_fails_a_health_check():
@given(st.integers().filter(lambda x: False))
@settings(database=None)
def test(x):
pass
with raises(FailedHealthCheck) as e:
test()
assert 'filter' in e.value.args[0]
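# A strategy whose draws almost never satisfy the assume() call, used below
# to trigger the "filtered too much" health check.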
class fails_regularly(SearchStrategy):
def do_draw(self, data):
b = int_from_bytes(data.draw_bytes(2))
assume(b == 3)
print('ohai')
@settings(max_shrinks=0)
def test_filtering_most_things_fails_a_health_check():
@given(fails_regularly())
@settings(database=None)
def test(x):
pass
with raises(FailedHealthCheck) as e:
test()
assert 'filter' in e.value.args[0]
def test_large_data_will_fail_a_health_check():
@given(st.lists(
st.lists(st.text(average_size=100), average_size=100),
average_size=100))
@settings(database=None, buffer_size=1000)
def test(x):
pass
with raises(FailedHealthCheck) as e:
test()
assert 'allowable size' in e.value.args[0]
def test_nesting_without_control_fails_health_check():
@given(st.integers())
def test_blah(x):
@given(st.integers())
def test_nest(y):
assert y < x
with raises(AssertionError):
test_nest()
with raises(FailedHealthCheck):
test_blah()
def test_returning_non_none_is_forbidden():
@given(st.integers())
def a(x):
return 1
with raises(FailedHealthCheck):
a()
def test_returning_non_none_does_not_fail_if_health_check_disabled():
@given(st.integers())
@settings(perform_health_check=False)
def a(x):
return 1
a()
hypothesis-3.0.1/tests/cover/test_imports.py 0000664 0000000 0000000 00000001700 12661275660 0021311 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import *
from hypothesis.strategies import *
def test_can_star_import_from_hypothesis():
find(lists(integers()), lambda x: sum(x) > 1, settings=settings(
max_examples=10000, verbosity=Verbosity.quiet
))
hypothesis-3.0.1/tests/cover/test_integer_ranges.py 0000664 0000000 0000000 00000004667 12661275660 0022627 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import find, given, settings
from hypothesis.internal.conjecture.data import TestData
from hypothesis.internal.conjecture.utils import integer_range
from hypothesis.searchstrategy.strategies import SearchStrategy
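# Thin SearchStrategy wrapper around the integer_range drawing primitive, so
# its shrink-towards-center behaviour can be tested directly.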
class interval(SearchStrategy):
def __init__(self, lower, upper, center=None, distribution=None):
self.lower = lower
self.upper = upper
self.center = center
self.distribution = distribution
def do_draw(self, data):
return integer_range(
data, self.lower, self.upper, center=self.center,
distribution=self.distribution,
)
@given(
st.tuples(st.integers(), st.integers(), st.integers()).map(sorted),
st.random_module(),
)
@settings(timeout=10, max_shrinks=0)
def test_intervals_shrink_to_center(inter, rnd):
lower, center, upper = inter
s = interval(lower, upper, center)
with settings(database=None, max_shrinks=2000):
assert find(s, lambda x: True) == center
if lower < center:
assert find(s, lambda x: x < center) == center - 1
if center < upper:
assert find(s, lambda x: x > center) == center + 1
@given(
st.tuples(
st.integers(), st.integers(), st.integers(), st.integers()
).map(sorted),
st.randoms(),
)
@settings(timeout=10, max_shrinks=0)
def test_distribution_is_correctly_translated(inter, rnd):
assert inter == sorted(inter)
lower, c1, c2, upper = inter
d = TestData(
draw_bytes=lambda data, n, distribution: distribution(rnd, n),
max_length=10 ** 6
)
assert d.draw(interval(lower, upper, c1, lambda r: c2)) == c2
assert d.draw(interval(lower, upper, c2, lambda r: c1)) == c1
hypothesis-3.0.1/tests/cover/test_interleaving.py 0000664 0000000 0000000 00000002467 12661275660 0022316 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import strategies as st
from hypothesis import find, note, given, settings
@given(st.streaming(st.integers(min_value=0)), st.random_module())
@settings(buffer_size=200, max_shrinks=5, max_examples=10)
def test_can_eval_stream_inside_find(stream, rnd):
x = find(
st.lists(st.integers(min_value=0), min_size=10),
lambda t: any(t > s for (t, s) in zip(t, stream)),
settings=settings(database=None, max_shrinks=2000, max_examples=2000)
)
note('x: %r' % (x,))
note('Evalled: %r' % (stream,))
assert len([1 for i, v in enumerate(x) if stream[i] < v]) == 1
hypothesis-3.0.1/tests/cover/test_internal_helpers.py 0000664 0000000 0000000 00000001643 12661275660 0023160 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis.internal.floats import sign
def test_sign_gives_good_type_error():
x = 'foo'
with pytest.raises(TypeError) as e:
sign(x)
assert repr(x) in e.value.args[0]
hypothesis-3.0.1/tests/cover/test_intervalset.py 0000664 0000000 0000000 00000004524 12661275660 0022163 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import given, assume
from hypothesis.internal.intervalsets import IntervalSet
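# build_intervals turns (start, length) pairs into a sorted IntervalSet,
# folding a pair into the previous interval when its start lies at or before
# that interval's end; e.g. [(0, 2), (1, 3)] becomes IntervalSet([(0, 4)]).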
def build_intervals(ls):
ls.sort()
result = []
for u, l in ls:
v = u + l
if result:
a, b = result[-1]
if u <= b:
result[-1] = (a, v)
continue
result.append((u, v))
return IntervalSet(result)
Intervals = st.builds(
build_intervals,
st.lists(st.tuples(st.integers(), st.integers(0, 20)))
)
@given(Intervals)
def test_intervals_are_equivalent_to_their_lists(intervals):
ls = list(intervals)
assert len(ls) == len(intervals)
for i in range(len(ls)):
assert ls[i] == intervals[i]
for i in range(1, len(ls) - 1):
assert ls[-i] == intervals[-i]
@given(Intervals)
def test_intervals_match_indexes(intervals):
ls = list(intervals)
for v in ls:
assert ls.index(v) == intervals.index(v)
@given(Intervals, st.integers())
def test_error_for_index_of_not_present_value(intervals, v):
assume(v not in intervals)
with pytest.raises(ValueError):
intervals.index(v)
def test_validates_index():
with pytest.raises(IndexError):
IntervalSet([])[1]
with pytest.raises(IndexError):
IntervalSet([[1, 10]])[11]
with pytest.raises(IndexError):
IntervalSet([[1, 10]])[-11]
def test_index_above_is_index_if_present():
assert IntervalSet([[1, 10]]).index_above(1) == 0
assert IntervalSet([[1, 10]]).index_above(2) == 1
def test_index_above_is_length_if_higher():
assert IntervalSet([[1, 10]]).index_above(100) == 10
hypothesis-3.0.1/tests/cover/test_limits.py 0000664 0000000 0000000 00000002001 12661275660 0021110 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import strategies as st
from hypothesis import given, settings
def test_max_examples_are_respected():
counter = [0]
@given(st.random_module(), st.integers())
@settings(max_examples=100)
def test(rnd, i):
counter[0] += 1
test()
assert counter == [100]
hypothesis-3.0.1/tests/cover/test_map.py 0000664 0000000 0000000 00000002134 12661275660 0020373 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import strategies as st
from hypothesis import given, assume
from hypothesis.errors import NoExamples
@given(st.integers().map(lambda x: assume(x % 3 != 0) and x))
def test_can_assume_in_map(x):
assert x % 3 != 0
def test_assume_in_just_raises_immediately():
with pytest.raises(NoExamples):
st.just(1).map(lambda x: assume(x == 2)).example()
hypothesis-3.0.1/tests/cover/test_numerics.py 0000664 0000000 0000000 00000002173 12661275660 0021446 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import given, assume
from tests.common.utils import fails
from hypothesis.strategies import decimals, fractions, float_to_decimal
@fails
@given(decimals())
def test_all_decimals_can_be_exact_floats(x):
assume(x.is_finite())
assert float_to_decimal(float(x)) == x
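# Fraction arithmetic is exact, so reordering the additions must produce an
# identical result (which would not generally hold for floats).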
@given(fractions(), fractions(), fractions())
def test_fraction_addition_is_well_behaved(x, y, z):
assert x + y + z == y + x + z
hypothesis-3.0.1/tests/cover/test_permutations.py 0000664 0000000 0000000 00000002266 12661275660 0022356 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import find, given
from hypothesis.strategies import permutations
def test_can_find_non_trivial_permutation():
x = find(
permutations(list(range(5))), lambda x: x[0] != 0
)
assert x == [1, 0, 2, 3, 4]
@given(permutations(list(u'abcd')))
def test_permutation_values_are_permutations(perm):
assert len(perm) == 4
assert set(perm) == set(u'abcd')
@given(permutations([]))
def test_empty_permutations_are_empty(xs):
assert xs == []
hypothesis-3.0.1/tests/cover/test_random_module.py 0000664 0000000 0000000 00000002650 12661275660 0022446 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import given, reporting
from tests.common.utils import capture_out
def test_can_seed_random():
with capture_out() as out:
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
@given(st.random_module())
def test(r):
assert False
test()
assert 'random.seed(0)' in out.getvalue()
@given(st.random_module(), st.random_module())
def test_seed_random_twice(r, r2):
assert repr(r) == repr(r2)
@given(st.random_module())
def test_does_not_fail_health_check_if_randomness_is_used(r):
import random
random.getrandbits(128)
hypothesis-3.0.1/tests/cover/test_randomization.py 0000664 0000000 0000000 00000003040 12661275660 0022471 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import random
from pytest import raises
import hypothesis.strategies as st
from hypothesis import find, given, settings, Verbosity
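# Checks that find() is seeded from the global random module: restoring the
# saved state should make the second search return the same example.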
def test_seeds_off_random():
s = settings(max_shrinks=0, database=None)
r = random.getstate()
x = find(st.integers(), lambda x: True, settings=s)
random.setstate(r)
y = find(st.integers(), lambda x: True, settings=s)
assert x == y
def test_nesting_with_control_passes_health_check():
@given(st.integers(0, 100), st.random_module())
@settings(max_examples=5, database=None)
def test_blah(x, rnd):
@given(st.integers())
@settings(
max_examples=100, max_shrinks=0, database=None,
verbosity=Verbosity.quiet)
def test_nest(y):
assert y < x
with raises(AssertionError):
test_nest()
test_blah()
hypothesis-3.0.1/tests/cover/test_recursive.py 0000664 0000000 0000000 00000003025 12661275660 0021625 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
import hypothesis.strategies as st
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
@given(
st.recursive(
st.booleans(), lambda x: st.lists(x, average_size=20),
max_leaves=10))
def test_respects_leaf_limit(xs):
def flatten(x):
if isinstance(x, list):
return sum(map(flatten, x), [])
else:
return [x]
assert len(flatten(xs)) <= 10
def test_can_find_nested():
x = find(
st.recursive(st.booleans(), lambda x: st.tuples(x, x)),
lambda x: isinstance(x, tuple) and isinstance(x[0], tuple)
)
assert x == ((False, False), False)
def test_recursive_call_validates_expand_returns_strategies():
with pytest.raises(InvalidArgument):
st.recursive(st.booleans(), lambda x: 1).example()
hypothesis-3.0.1/tests/cover/test_reflection.py 0000664 0000000 0000000 00000036171 12661275660 0021760 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
from copy import deepcopy
from functools import partial
from tests.common.utils import raises
from hypothesis.internal.compat import PY3, ArgSpec, getargspec
from hypothesis.internal.reflection import proxies, arg_string, \
copy_argspec, unbind_method, eval_directory, function_digest, \
fully_qualified_name, source_exec_as_module, \
convert_keyword_arguments, convert_positional_arguments, \
get_pretty_function_description
def do_conversion_test(f, args, kwargs):
result = f(*args, **kwargs)
cargs, ckwargs = convert_keyword_arguments(f, args, kwargs)
assert result == f(*cargs, **ckwargs)
cargs2, ckwargs2 = convert_positional_arguments(f, args, kwargs)
assert result == f(*cargs2, **ckwargs2)
def test_simple_conversion():
def foo(a, b, c):
return (a, b, c)
assert convert_keyword_arguments(
foo, (1, 2, 3), {}) == ((1, 2, 3), {})
assert convert_keyword_arguments(
foo, (), {'a': 3, 'b': 2, 'c': 1}) == ((3, 2, 1), {})
do_conversion_test(foo, (1, 0), {'c': 2})
do_conversion_test(foo, (1,), {'c': 2, 'b': 'foo'})
def test_populates_defaults():
def bar(x=[], y=1):
pass
assert convert_keyword_arguments(bar, (), {}) == (([], 1), {})
assert convert_keyword_arguments(bar, (), {'y': 42}) == (([], 42), {})
do_conversion_test(bar, (), {})
do_conversion_test(bar, (1,), {})
def test_leaves_unknown_kwargs_in_dict():
def bar(x, **kwargs):
pass
assert convert_keyword_arguments(bar, (1,), {'foo': 'hi'}) == (
(1,), {'foo': 'hi'}
)
assert convert_keyword_arguments(bar, (), {'x': 1, 'foo': 'hi'}) == (
(1,), {'foo': 'hi'}
)
do_conversion_test(bar, (1,), {})
do_conversion_test(bar, (), {'x': 1, 'y': 1})
def test_errors_on_bad_kwargs():
def bar():
pass # pragma: no cover
with raises(TypeError):
convert_keyword_arguments(bar, (), {'foo': 1})
def test_passes_varargs_correctly():
def foo(*args):
pass
assert convert_keyword_arguments(foo, (1, 2, 3), {}) == ((1, 2, 3), {})
do_conversion_test(foo, (1, 2, 3), {})
def test_errors_if_keyword_precedes_positional():
def foo(x, y):
pass # pragma: no cover
with raises(TypeError):
convert_keyword_arguments(foo, (1,), {'x': 2})
def test_errors_if_not_enough_args():
def foo(a, b, c, d=1):
pass # pragma: no cover
with raises(TypeError):
convert_keyword_arguments(foo, (1, 2), {'d': 4})
def test_errors_on_extra_kwargs():
def foo(a):
pass # pragma: no cover
with raises(TypeError) as e:
convert_keyword_arguments(foo, (1,), {'b': 1})
assert 'keyword' in e.value.args[0]
with raises(TypeError) as e2:
convert_keyword_arguments(foo, (1,), {'b': 1, 'c': 2})
assert 'keyword' in e2.value.args[0]
def test_positional_errors_if_too_many_args():
def foo(a):
pass
with raises(TypeError) as e:
convert_positional_arguments(foo, (1, 2), {})
assert '2 given' in e.value.args[0]
def test_positional_errors_if_too_few_args():
def foo(a, b, c):
pass
with raises(TypeError):
convert_positional_arguments(foo, (1, 2), {})
def test_positional_does_not_error_if_extra_args_are_kwargs():
def foo(a, b, c):
pass
convert_positional_arguments(foo, (1, 2), {'c': 3})
def test_positional_errors_if_given_bad_kwargs():
def foo(a):
pass
with raises(TypeError) as e:
convert_positional_arguments(foo, (), {'b': 1})
assert 'unexpected keyword argument' in e.value.args[0]
def test_positional_errors_if_given_duplicate_kwargs():
def foo(a):
pass
with raises(TypeError) as e:
convert_positional_arguments(foo, (2,), {'a': 1})
assert 'multiple values' in e.value.args[0]
def test_names_of_functions_are_pretty():
assert get_pretty_function_description(
test_names_of_functions_are_pretty
) == 'test_names_of_functions_are_pretty'
def test_can_have_unicode_in_lambda_sources():
t = lambda x: 'é' not in x
assert get_pretty_function_description(t) == (
"lambda x: 'é' not in x"
)
ordered_pair = (
lambda right: [].map(
lambda length: ()))
def test_can_get_descriptions_of_nested_lambdas_with_different_names():
assert get_pretty_function_description(ordered_pair) == \
'lambda right: [].map(lambda length: ())'
class Foo(object):
@classmethod
def bar(cls):
pass # pragma: no cover
def baz(cls):
pass # pragma: no cover
def __repr__(self):
return 'SoNotFoo()'
def test_class_names_are_not_included_in_class_method_prettiness():
assert get_pretty_function_description(Foo.bar) == 'bar'
def test_repr_is_included_in_bound_method_prettiness():
assert get_pretty_function_description(Foo().baz) == 'SoNotFoo().baz'
def test_class_is_not_included_in_unbound_method():
assert (
get_pretty_function_description(Foo.baz)
== 'baz'
)
# Note: All of these "no cover" pragmas are because we don't actually ever
# want to call these lambdas. We're just inspecting their source.
def test_source_of_lambda_is_pretty():
assert get_pretty_function_description(
lambda x: True
) == 'lambda x: True' # pragma: no cover
def test_variable_names_are_not_pretty():
t = lambda x: True # pragma: no cover
assert get_pretty_function_description(t) == 'lambda x: True'
def test_does_not_error_on_dynamically_defined_functions():
x = eval('lambda t: 1')
get_pretty_function_description(x)
def test_collapses_whitespace_nicely():
t = (
lambda x, y: 1 # pragma: no cover
)
assert get_pretty_function_description(t) == 'lambda x, y: 1'
def test_is_not_confused_by_tuples():
p = (lambda x: x > 1, 2)[0] # pragma: no cover
assert get_pretty_function_description(p) == 'lambda x: x > 1'
def test_does_not_error_on_confused_sources():
def ed(f, *args):
return f
x = ed(
lambda x, y: ( # pragma: no cover
x * y
).conjugate() == x.conjugate() * y.conjugate(), complex, complex)
get_pretty_function_description(x)
def test_strips_comments_from_the_end():
t = lambda x: 1 # pragma: no cover
assert get_pretty_function_description(t) == 'lambda x: 1'
def test_does_not_strip_hashes_within_a_string():
t = lambda x: '#' # pragma: no cover
assert get_pretty_function_description(t) == "lambda x: '#'"
def test_can_distinguish_between_two_lambdas_with_different_args():
a, b = (lambda x: 1, lambda y: 2) # pragma: no cover
assert get_pretty_function_description(a) == 'lambda x: 1'
assert get_pretty_function_description(b) == 'lambda y: 2'
def test_does_not_error_if_it_cannot_distinguish_between_two_lambdas():
a, b = (lambda x: 1, lambda x: 2) # pragma: no cover
assert 'lambda x:' in get_pretty_function_description(a)
assert 'lambda x:' in get_pretty_function_description(b)
def test_lambda_source_break_after_def_with_brackets():
f = (lambda n:
'aaa')
source = get_pretty_function_description(f)
assert source == "lambda n: 'aaa'"
def test_lambda_source_break_after_def_with_line_continuation():
f = lambda n:\
'aaa'
source = get_pretty_function_description(f)
assert source == "lambda n: 'aaa'"
def test_digests_are_reasonably_unique():
assert (
function_digest(test_simple_conversion) !=
function_digest(test_does_not_error_on_dynamically_defined_functions)
)
def test_digest_returns_the_same_value_for_two_calls():
assert (
function_digest(test_simple_conversion) ==
function_digest(test_simple_conversion)
)
def test_can_digest_a_built_in_function():
import math
assert function_digest(math.isnan) != function_digest(range)
def test_can_digest_a_unicode_lambda():
function_digest(lambda x: '☃' in str(x))
def test_can_digest_a_function_with_no_name():
def foo(x, y):
pass
function_digest(partial(foo, 1))
def test_arg_string_is_in_order():
def foo(c, a, b, f, a1):
pass
assert arg_string(foo, (1, 2, 3, 4, 5), {}) == 'c=1, a=2, b=3, f=4, a1=5'
assert arg_string(
foo, (1, 2),
{'b': 3, 'f': 4, 'a1': 5}) == 'c=1, a=2, b=3, f=4, a1=5'
def test_varkwargs_are_sorted_and_after_real_kwargs():
def foo(d, e, f, **kwargs):
pass
assert arg_string(
foo, (), {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
) == 'd=4, e=5, f=6, a=1, b=2, c=3'
def test_varargs_come_without_equals():
def foo(a, *args):
pass
assert arg_string(foo, (1, 2, 3, 4), {}) == '2, 3, 4, a=1'
def test_can_mix_varargs_and_varkwargs():
def foo(*args, **kwargs):
pass
assert arg_string(
foo, (1, 2, 3), {'c': 7}
) == '1, 2, 3, c=7'
def test_arg_string_does_not_include_unprovided_defaults():
def foo(a, b, c=9, d=10):
pass
assert arg_string(foo, (1,), {'b': 1, 'd': 11}) == 'a=1, b=1, d=11'
class A(object):
def f(self):
pass
def g(self):
pass
class B(A):
pass
class C(A):
def f(self):
pass
def test_unbind_gives_parent_class_function():
assert unbind_method(B().f) == unbind_method(A.f)
def test_unbind_distinguishes_different_functions():
assert unbind_method(A.f) != unbind_method(A.g)
def test_unbind_distinguishes_overridden_functions():
assert unbind_method(C().f) != unbind_method(A.f)
def universal_acceptor(*args, **kwargs):
return args, kwargs
def has_one_arg(hello):
pass
def has_two_args(hello, world):
pass
def has_a_default(x, y, z=1):
pass
def has_varargs(*args):
pass
def has_kwargs(**kwargs):
pass
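# copy_argspec(name, argspec) is exercised below: it decorates a function so
# that it exposes the given name and argument specification while delegating
# calls to the wrapped implementation.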
def test_copying_preserves_argspec():
for f in [has_one_arg, has_two_args, has_varargs, has_kwargs]:
af = getargspec(f)
t = copy_argspec('foo', getargspec(f))(universal_acceptor)
at = getargspec(t)
assert af.args == at.args
assert af.varargs == at.varargs
assert af.keywords == at.keywords
assert len(af.defaults or ()) == len(at.defaults or ())
def test_name_does_not_clash_with_function_names():
def f():
pass
@copy_argspec('f', getargspec(f))
def g():
pass
g()
def test_copying_sets_name():
f = copy_argspec(
'hello_world', getargspec(has_two_args))(universal_acceptor)
assert f.__name__ == 'hello_world'
def test_uses_defaults():
f = copy_argspec(
'foo', getargspec(has_a_default))(universal_acceptor)
assert f(3, 2) == ((3, 2, 1), {})
def test_uses_varargs():
f = copy_argspec(
'foo', getargspec(has_varargs))(universal_acceptor)
assert f(1, 2) == ((1, 2), {})
DEFINE_FOO_FUNCTION = """
def foo(x):
return x
"""
def test_exec_as_module_execs():
m = source_exec_as_module(DEFINE_FOO_FUNCTION)
assert m.foo(1) == 1
def test_exec_as_module_caches():
assert (
source_exec_as_module(DEFINE_FOO_FUNCTION) is
source_exec_as_module(DEFINE_FOO_FUNCTION)
)
def test_exec_leaves_sys_path_unchanged():
old_path = deepcopy(sys.path)
source_exec_as_module('hello_world = 42')
assert sys.path == old_path
def test_copy_argspec_works_with_conflicts():
def accepts_everything(*args, **kwargs):
pass
copy_argspec('hello', ArgSpec(
args=('f',), varargs=None, keywords=None, defaults=None
))(accepts_everything)(1)
copy_argspec('hello', ArgSpec(
args=(), varargs='f', keywords=None, defaults=None
))(accepts_everything)(1)
copy_argspec('hello', ArgSpec(
args=(), varargs=None, keywords='f', defaults=None
))(accepts_everything)()
copy_argspec('hello', ArgSpec(
args=('f', 'f_3'), varargs='f_1', keywords='f_2', defaults=None
))(accepts_everything)(1, 2)
def test_copy_argspec_validates_arguments():
with raises(ValueError):
copy_argspec('hello_world', ArgSpec(
args=['a b'], varargs=None, keywords=None, defaults=None))
def test_copy_argspec_validates_function_name():
with raises(ValueError):
copy_argspec('hello world', ArgSpec(
args=['a', 'b'], varargs=None, keywords=None, defaults=None))
class Container(object):
def funcy(self):
pass
def test_fully_qualified_name():
assert fully_qualified_name(test_copying_preserves_argspec) == \
'tests.cover.test_reflection.test_copying_preserves_argspec'
assert fully_qualified_name(Container.funcy) == \
'tests.cover.test_reflection.Container.funcy'
assert fully_qualified_name(fully_qualified_name) == \
'hypothesis.internal.reflection.fully_qualified_name'
def test_qualname_of_function_with_none_module_is_name():
def f():
pass
f.__module__ = None
assert fully_qualified_name(f)[-1] == 'f'
def test_can_proxy_functions_with_mixed_args_and_varargs():
def foo(a, *args):
return (a, args)
@proxies(foo)
def bar(*args, **kwargs):
return foo(*args, **kwargs)
assert bar(1, 2) == (1, (2,))
def test_can_delegate_to_a_function_with_no_positional_args():
def foo(a, b):
return (a, b)
@proxies(foo)
def bar(**kwargs):
return foo(**kwargs)
assert bar(2, 1) == (2, 1)
class Snowman(object):
def __repr__(self):
return '☃'
class BittySnowman(object):
def __repr__(self):
return '☃'
def test_can_handle_unicode_repr():
def foo(x):
pass
from hypothesis import settings
with settings(strict=False):
assert arg_string(foo, [Snowman()], {}) == 'x=☃'
assert arg_string(foo, [], {'x': Snowman()}) == 'x=☃'
class NoRepr(object):
pass
def test_can_handle_repr_on_type():
def foo(x):
pass
assert arg_string(foo, [Snowman], {}) == 'x=Snowman'
assert arg_string(foo, [NoRepr], {}) == 'x=NoRepr'
def test_can_handle_repr_of_none():
def foo(x):
pass
assert arg_string(foo, [None], {}) == 'x=None'
assert arg_string(foo, [], {'x': None}) == 'x=None'
if not PY3:
def test_can_handle_non_unicode_repr_containing_non_ascii():
def foo(x):
pass
assert arg_string(foo, [BittySnowman()], {}) == 'x=☃'
assert arg_string(foo, [], {'x': BittySnowman()}) == 'x=☃'
def test_does_not_put_eval_directory_on_path():
source_exec_as_module("hello = 'world'")
assert eval_directory() not in sys.path
def varargs(*args, **kwargs):
pass
def test_kwargs_appear_in_arg_string():
assert 'x=1' in arg_string(varargs, (), {'x': 1})
hypothesis-3.0.1/tests/cover/test_reporting.py 0000664 0000000 0000000 00000005340 12661275660 0021631 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
import pytest
from hypothesis import given, reporting
from tests.common.utils import capture_out
from hypothesis._settings import settings, Verbosity
from hypothesis.reporting import report, debug_report, verbose_report
from hypothesis.strategies import integers
from hypothesis.internal.compat import PY2
def test_can_suppress_output():
@given(integers())
def test_int(x):
assert False
with capture_out() as o:
with reporting.with_reporter(reporting.silent):
with pytest.raises(AssertionError):
test_int()
assert u'Falsifying example' not in o.getvalue()
def test_can_print_bytes():
with capture_out() as o:
with reporting.with_reporter(reporting.default):
report(b'hi')
assert o.getvalue() == u'hi\n'
def test_prints_output_by_default():
@given(integers())
def test_int(x):
assert False
with capture_out() as o:
with reporting.with_reporter(reporting.default):
with pytest.raises(AssertionError):
test_int()
assert u'Falsifying example' in o.getvalue()
def test_does_not_print_debug_in_verbose():
with settings(verbosity=Verbosity.verbose):
with capture_out() as o:
debug_report(u'Hi')
assert not o.getvalue()
def test_does_print_debug_in_debug():
with settings(verbosity=Verbosity.debug):
with capture_out() as o:
debug_report(u'Hi')
assert u'Hi' in o.getvalue()
def test_does_print_verbose_in_debug():
with settings(verbosity=Verbosity.debug):
with capture_out() as o:
verbose_report(u'Hi')
assert u'Hi' in o.getvalue()
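# Replaces stdout with an ASCII-only stream and reports a snowman; the
# reporter is expected to cope without raising an encoding error.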
@pytest.mark.skipif(
PY2, reason="Output streams don't have encodings in python 2")
def test_can_report_when_system_locale_is_ascii(monkeypatch):
import io
read, write = os.pipe()
read = io.open(read, 'r', encoding='ascii')
write = io.open(write, 'w', encoding='ascii')
monkeypatch.setattr(sys, 'stdout', write)
reporting.default(u"☃")
hypothesis-3.0.1/tests/cover/test_sampled_from.py 0000664 0000000 0000000 00000001667 12661275660 0022300 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import given, settings
from hypothesis.strategies import sampled_from
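# sampled_from((1, 2)) can only ever produce two distinct values, fewer than
# min_satisfying_examples=10; the test simply checks that this is tolerated.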
@given(sampled_from((1, 2)))
@settings(min_satisfying_examples=10)
def test_can_handle_sampling_from_fewer_than_min_satisfying(v):
pass
hypothesis-3.0.1/tests/cover/test_searchstrategy.py 0000664 0000000 0000000 00000005077 12661275660 0022657 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import functools
from collections import namedtuple
import pytest
from hypothesis.types import RandomWithSeed
from hypothesis.errors import NoExamples, InvalidArgument
from hypothesis.strategies import just, tuples, randoms, booleans, \
integers, sampled_from
from hypothesis.internal.compat import text_type
from hypothesis.searchstrategy.strategies import one_of_strategies
def test_or_errors_when_given_non_strategy():
bools = tuples(booleans())
with pytest.raises(ValueError):
bools | u'foo'
def test_joining_zero_strategies_fails():
with pytest.raises(ValueError):
one_of_strategies(())
SomeNamedTuple = namedtuple(u'SomeNamedTuple', (u'a', u'b'))
def last(xs):
t = None
for x in xs:
t = x
return t
def test_random_repr_has_seed():
rnd = randoms().example()
seed = rnd.seed
assert text_type(seed) in repr(rnd)
def test_random_only_produces_special_random():
st = randoms()
assert isinstance(st.example(), RandomWithSeed)
def test_just_strategy_uses_repr():
class WeirdRepr(object):
def __repr__(self):
return u'ABCDEFG'
assert repr(
just(WeirdRepr())
) == u'just(%r)' % (WeirdRepr(),)
def test_can_map():
s = integers().map(pack=lambda t: u'foo')
assert s.example() == u'foo'
def test_sample_from_empty_errors():
with pytest.raises(InvalidArgument):
sampled_from([]).example()
def test_example_raises_unsatisfiable_when_too_filtered():
with pytest.raises(NoExamples):
integers().filter(lambda x: False).example()
def nameless_const(x):
def f(u, v):
return u
return functools.partial(f, x)
def test_can_map_nameless():
f = nameless_const(2)
assert repr(f) in repr(integers().map(f))
def test_can_flatmap_nameless():
f = nameless_const(just(3))
assert repr(f) in repr(integers().flatmap(f))
hypothesis-3.0.1/tests/cover/test_sets.py 0000664 0000000 0000000 00000003127 12661275660 0020577 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import find, given, settings
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import sets, lists, floats, randoms, integers
def test_unique_lists_error_on_too_large_average_size():
with pytest.raises(InvalidArgument):
lists(integers(), unique=True, average_size=10, max_size=5).example()
@given(randoms())
@settings(max_examples=5)
def test_can_draw_sets_of_hard_to_find_elements(rnd):
rarebool = floats(0, 1).map(lambda x: x <= 0.01)
find(
sets(rarebool, min_size=2), lambda x: True,
random=rnd, settings=settings(database=None))
def test_sets_of_small_average_size():
assert len(sets(integers(), average_size=1.0).example()) <= 10
@given(sets(max_size=0))
def test_empty_sets(x):
assert x == set()
@given(sets(integers(), max_size=2))
def test_bounded_size_sets(x):
assert len(x) <= 2
hypothesis-3.0.1/tests/cover/test_settings.py 0000664 0000000 0000000 00000011347 12661275660 0021464 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import pytest
import hypothesis
from hypothesis.errors import InvalidArgument
from hypothesis.database import ExampleDatabase
from hypothesis._settings import settings, Verbosity
def test_has_docstrings():
assert settings.verbosity.__doc__
original_default = settings.get_profile('default').max_examples
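# pytest runs setup_function before each test in this module; loading a fresh
# profile stops settings changes from leaking between tests.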
def setup_function(fn):
settings.load_profile('default')
settings.register_profile('test_settings', settings())
settings.load_profile('test_settings')
def test_cannot_set_non_settings():
s = settings()
with pytest.raises(AttributeError):
        # 'databas_file' is intentionally not a real setting; assigning an
        # unknown attribute should raise AttributeError.
        s.databas_file = u'some_file'
def test_settings_uses_defaults():
s = settings()
assert s.max_examples == settings.default.max_examples
def test_raises_attribute_error():
with pytest.raises(AttributeError):
settings().kittens
def test_respects_none_database():
assert settings(database=None).database is None
def test_settings_can_be_used_as_context_manager_to_change_defaults():
with settings(max_examples=12):
assert settings.default.max_examples == 12
assert settings.default.max_examples == original_default
def test_can_repeatedly_push_the_same_thing():
s = settings(max_examples=12)
t = settings(max_examples=17)
assert settings().max_examples == original_default
with s:
assert settings().max_examples == 12
with t:
assert settings().max_examples == 17
with s:
assert settings().max_examples == 12
with t:
assert settings().max_examples == 17
assert settings().max_examples == 12
assert settings().max_examples == 17
assert settings().max_examples == 12
assert settings().max_examples == original_default
def test_cannot_create_settings_with_invalid_options():
with pytest.raises(InvalidArgument):
settings(a_setting_with_limited_options=u'spoon')
def test_can_set_verbosity():
settings(verbosity=Verbosity.quiet)
settings(verbosity=Verbosity.normal)
settings(verbosity=Verbosity.verbose)
def test_can_not_set_verbosity_to_non_verbosity():
with pytest.raises(InvalidArgument):
settings(verbosity='kittens')
@pytest.mark.parametrize('db', [None, ExampleDatabase()])
def test_inherits_an_empty_database(db):
assert settings.default.database is not None
s = settings(database=db)
assert s.database is db
with s:
t = settings()
assert t.database is db
@pytest.mark.parametrize('db', [None, ExampleDatabase()])
def test_can_assign_database(db):
x = settings(database=db)
assert x.database is db
def test_load_profile():
settings.load_profile('default')
assert settings.default.max_examples == 200
assert settings.default.max_shrinks == 500
assert settings.default.min_satisfying_examples == 5
settings.register_profile(
'test',
settings(
max_examples=10,
max_shrinks=5
)
)
settings.load_profile('test')
assert settings.default.max_examples == 10
assert settings.default.max_shrinks == 5
assert settings.default.min_satisfying_examples == 5
settings.load_profile('default')
assert settings.default.max_examples == 200
assert settings.default.max_shrinks == 500
assert settings.default.min_satisfying_examples == 5
def test_loading_profile_keeps_expected_behaviour():
settings.register_profile('ci', settings(max_examples=10000))
settings.load_profile('ci')
assert settings().max_examples == 10000
with settings(max_examples=5):
assert settings().max_examples == 5
assert settings().max_examples == 10000
def test_load_non_existent_profile():
with pytest.raises(hypothesis.errors.InvalidArgument):
settings.get_profile('nonsense')
@pytest.mark.skipif(
os.getenv('HYPOTHESIS_PROFILE') not in (None, 'default'),
reason='Defaults have been overridden')
def test_runs_tests_with_defaults_from_conftest():
assert settings.default.strict
assert settings.default.timeout == -1
hypothesis-3.0.1/tests/cover/test_setup_teardown.py 0000664 0000000 0000000 00000006734 12661275660 0022673 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import given, assume, settings
from hypothesis.errors import FailedHealthCheck
from hypothesis.strategies import text, integers
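# setup_example and teardown_example are hooks that Hypothesis invokes on the
# test instance before and after each generated example.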
class HasSetup(object):
def setup_example(self):
self.setups = getattr(self, u'setups', 0)
self.setups += 1
class HasTeardown(object):
def teardown_example(self, ex):
self.teardowns = getattr(self, u'teardowns', 0)
self.teardowns += 1
class SomeGivens(object):
@given(integers())
def give_me_an_int(self, x):
pass
@given(text())
def give_me_a_string(myself, x):
pass
@given(integers())
def give_me_a_positive_int(self, x):
assert x >= 0
@given(integers().map(lambda x: x.nope))
def fail_in_reify(self, x):
pass
@given(integers())
def assume_some_stuff(self, x):
assume(x > 0)
@given(integers().filter(lambda x: x > 0))
def assume_in_reify(self, x):
pass
class HasSetupAndTeardown(HasSetup, HasTeardown, SomeGivens):
pass
def test_calls_setup_and_teardown_on_self_as_first_argument():
x = HasSetupAndTeardown()
x.give_me_an_int()
x.give_me_a_string()
assert x.setups > 0
assert x.teardowns == x.setups
def test_calls_setup_and_teardown_on_self_unbound():
x = HasSetupAndTeardown()
HasSetupAndTeardown.give_me_an_int(x)
assert x.setups > 0
assert x.teardowns == x.setups
def test_calls_setup_and_teardown_on_failure():
x = HasSetupAndTeardown()
with pytest.raises(AssertionError):
x.give_me_a_positive_int()
assert x.setups > 0
assert x.teardowns == x.setups
def test_still_tears_down_on_failed_reify():
x = HasSetupAndTeardown()
with pytest.raises(AttributeError):
with settings(perform_health_check=False):
x.fail_in_reify()
assert x.setups > 0
assert x.teardowns == x.setups
def test_still_tears_down_on_failed_health_check():
x = HasSetupAndTeardown()
with pytest.raises(FailedHealthCheck):
x.fail_in_reify()
assert x.setups > 0
assert x.teardowns == x.setups
def test_still_tears_down_on_failed_assume():
x = HasSetupAndTeardown()
x.assume_some_stuff()
assert x.setups > 0
assert x.teardowns == x.setups
def test_still_tears_down_on_failed_assume_in_reify():
x = HasSetupAndTeardown()
x.assume_in_reify()
assert x.setups > 0
assert x.teardowns == x.setups
def test_sets_up_without_teardown():
class Foo(HasSetup, SomeGivens):
pass
x = Foo()
x.give_me_an_int()
assert x.setups > 0
assert not hasattr(x, u'teardowns')
def test_tears_down_without_setup():
class Foo(HasTeardown, SomeGivens):
pass
x = Foo()
x.give_me_an_int()
assert x.teardowns > 0
assert not hasattr(x, u'setups')
hypothesis-3.0.1/tests/cover/test_sharing.py 0000664 0000000 0000000 00000003755 12661275660 0021263 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import find, given
x = st.shared(st.integers())
@given(x, x)
def test_sharing_is_by_instance_by_default(a, b):
assert a == b
@given(
st.shared(st.integers(), key='hi'), st.shared(st.integers(), key='hi'))
def test_different_instances_with_the_same_key_are_shared(a, b):
assert a == b
def test_different_instances_are_not_shared():
find(
st.tuples(st.shared(st.integers()), st.shared(st.integers())),
lambda x: x[0] != x[1]
)
def test_different_keys_are_not_shared():
find(
st.tuples(
st.shared(st.integers(), key=1),
st.shared(st.integers(), key=2)),
lambda x: x[0] != x[1]
)
def test_keys_and_default_are_not_shared():
find(
st.tuples(
st.shared(st.integers(), key=1),
st.shared(st.integers())),
lambda x: x[0] != x[1]
)
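# Within one generated example every draw from the same shared() strategy
# yields a single common value, so every element of the list below is the
# same integer and the whole list shrinks as a unit to [1] * 10.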
def test_can_simplify_shared_lists():
xs = find(
st.lists(st.shared(st.integers())),
lambda x: len(x) >= 10 and x[0] != 0
)
assert xs == [1] * 10
def test_simplify_shared_linked_to_size():
xs = find(
st.lists(st.shared(st.integers())),
lambda t: sum(t) >= 1000
)
assert sum(xs[:-1]) < 1000
assert (xs[0] - 1) * len(xs) < 1000
hypothesis-3.0.1/tests/cover/test_shrinking_limits.py 0000664 0000000 0000000 00000002024 12661275660 0023171 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import find, settings
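# With max_shrinks=1 the search should perform its initial successful draw
# plus a single shrink attempt, so tracktrue records exactly two values.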
def test_max_shrinks():
seen = set()
def tracktrue(s):
seen.add(s)
return True
find(
st.binary(min_size=100, max_size=100), tracktrue,
settings=settings(max_shrinks=1)
)
assert len(seen) == 2
hypothesis-3.0.1/tests/cover/test_simple_characters.py 0000664 0000000 0000000 00000005671 12661275660 0023317 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import unicodedata
import pytest
from hypothesis import find
from hypothesis.errors import NoSuchExample, InvalidArgument
from hypothesis.strategies import characters
def test_bad_category_arguments():
with pytest.raises(InvalidArgument):
characters(
whitelist_categories=['foo'], blacklist_categories=['bar']
).example()
def test_bad_codepoint_arguments():
with pytest.raises(InvalidArgument):
characters(min_codepoint=42, max_codepoint=24).example()
def test_exclude_all_available_range():
with pytest.raises(InvalidArgument):
characters(min_codepoint=ord('0'), max_codepoint=ord('0'),
blacklist_characters='0').example()
def test_when_nothing_could_be_produced():
with pytest.raises(InvalidArgument):
characters(whitelist_categories=['Cc'],
min_codepoint=ord('0'), max_codepoint=ord('9')).example()
def test_characters_of_specific_groups():
st = characters(whitelist_categories=('Lu', 'Nd'))
find(st, lambda c: unicodedata.category(c) == 'Lu')
find(st, lambda c: unicodedata.category(c) == 'Nd')
with pytest.raises(NoSuchExample):
find(st, lambda c: unicodedata.category(c) not in ('Lu', 'Nd'))
def test_exclude_characters_of_specific_groups():
st = characters(blacklist_categories=('Lu', 'Nd'))
find(st, lambda c: unicodedata.category(c) != 'Lu')
find(st, lambda c: unicodedata.category(c) != 'Nd')
with pytest.raises(NoSuchExample):
find(st, lambda c: unicodedata.category(c) in ('Lu', 'Nd'))
def test_find_one():
char = find(characters(min_codepoint=48, max_codepoint=48), lambda _: True)
assert char == u'0'
def test_find_something_rare():
st = characters(whitelist_categories=['Zs'], min_codepoint=12288)
find(st, lambda c: unicodedata.category(c) == 'Zs')
with pytest.raises(NoSuchExample):
find(st, lambda c: unicodedata.category(c) != 'Zs')
def test_blacklisted_characters():
bad_chars = u'te02тест49st'
st = characters(min_codepoint=ord('0'), max_codepoint=ord('9'),
blacklist_characters=bad_chars)
assert '1' == find(st, lambda c: True)
with pytest.raises(NoSuchExample):
find(st, lambda c: c in bad_chars)
hypothesis-3.0.1/tests/cover/test_simple_collections.py 0000664 0000000 0000000 00000014734 12661275660 0023516 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from random import Random
from collections import namedtuple
import pytest
from flaky import flaky
from hypothesis import find, given, settings
from hypothesis.strategies import sets, text, lists, builds, tuples, \
booleans, integers, frozensets, dictionaries, fixed_dictionaries
from hypothesis.internal.debug import minimal
from hypothesis.internal.compat import OrderedDict
@pytest.mark.parametrize((u'col', u'strat'), [
((), tuples()),
([], lists(max_size=0)),
(set(), sets(max_size=0)),
(frozenset(), frozensets(max_size=0)),
({}, fixed_dictionaries({})),
])
def test_find_empty_collection_gives_empty(col, strat):
assert find(strat, lambda x: True) == col
@pytest.mark.parametrize((u'coltype', u'strat'), [
(list, lists),
(set, sets),
(frozenset, frozensets),
])
def test_find_non_empty_collection_gives_single_zero(coltype, strat):
assert find(
strat(integers()), bool
) == coltype((0,))
@pytest.mark.parametrize((u'coltype', u'strat'), [
(list, lists),
(set, sets),
(frozenset, frozensets),
])
def test_minimizes_to_empty(coltype, strat):
assert find(
strat(integers()), lambda x: True
) == coltype()
def test_minimizes_list_of_lists():
xs = find(lists(lists(booleans())), lambda x: any(x) and not all(x))
xs.sort()
assert xs == [[], [False]]
def test_minimize_long_list():
assert find(
lists(booleans(), average_size=100), lambda x: len(x) >= 70
) == [False] * 70
def test_minimize_list_of_longish_lists():
xs = find(
lists(lists(booleans())),
lambda x: len([t for t in x if any(t) and len(t) >= 3]) >= 10)
assert len(xs) == 10
for x in xs:
assert len(x) == 3
assert len([t for t in x if t]) == 1
def test_minimize_list_of_fairly_non_unique_ints():
xs = find(lists(integers()), lambda x: len(set(x)) < len(x))
assert len(xs) == 2
def test_list_with_complex_sorting_structure():
xs = find(
lists(lists(booleans())),
lambda x: [list(reversed(t)) for t in x] > x and len(x) > 3)
assert len(xs) == 4
def test_list_with_wide_gap():
xs = find(lists(integers()), lambda x: x and (max(x) > min(x) + 10 > 0))
assert len(xs) == 2
xs.sort()
assert xs[1] == 11 + xs[0]
def test_minimize_namedtuple():
T = namedtuple(u'T', (u'a', u'b'))
tab = find(
builds(T, integers(), integers()),
lambda x: x.a < x.b)
assert tab.b == tab.a + 1
def test_minimize_dict():
tab = find(
fixed_dictionaries({u'a': booleans(), u'b': booleans()}),
lambda x: x[u'a'] or x[u'b']
)
assert not (tab[u'a'] and tab[u'b'])
def test_minimize_list_of_sets():
assert find(
lists(sets(booleans())),
lambda x: len(list(filter(None, x))) >= 3) == (
[set((False,))] * 3
)
def test_minimize_list_of_lists():
assert find(
lists(lists(integers())),
lambda x: len(list(filter(None, x))) >= 3) == (
[[0]] * 3
)
def test_minimize_list_of_tuples():
xs = find(
lists(tuples(integers(), integers())), lambda x: len(x) >= 2)
assert xs == [(0, 0), (0, 0)]
def test_minimize_multi_key_dicts():
assert find(
dictionaries(keys=booleans(), values=booleans()),
bool
) == {False: False}
def test_minimize_dicts_with_incompatible_keys():
assert find(
fixed_dictionaries({1: booleans(), u'hi': lists(booleans())}),
lambda x: True
) == {1: False, u'hi': []}
def test_multiple_empty_lists_are_independent():
x = find(lists(lists(max_size=0)), lambda t: len(t) >= 2)
u, v = x
assert u is not v
@given(sets(integers(0, 100), min_size=2, max_size=10))
@settings(max_examples=100)
def test_sets_are_size_bounded(xs):
assert 2 <= len(xs) <= 10
def test_ordered_dictionaries_preserve_keys():
r = Random()
keys = list(range(100))
r.shuffle(keys)
x = fixed_dictionaries(
OrderedDict([(k, booleans()) for k in keys])).example()
assert list(x.keys()) == keys
@pytest.mark.parametrize(u'n', range(10))
def test_lists_of_fixed_length(n):
assert find(
lists(integers(), min_size=n, max_size=n), lambda x: True) == [0] * n
@pytest.mark.parametrize(u'n', range(10))
def test_sets_of_fixed_length(n):
x = find(
sets(integers(), min_size=n, max_size=n), lambda x: True)
assert len(x) == n
if not n:
assert x == set()
else:
assert x == set(range(min(x), min(x) + n))
@pytest.mark.parametrize(u'n', range(10))
def test_dictionaries_of_fixed_length(n):
x = set(find(
dictionaries(integers(), booleans(), min_size=n, max_size=n),
lambda x: True).keys())
if not n:
assert x == set()
else:
assert x == set(range(min(x), min(x) + n))
@pytest.mark.parametrize(u'n', range(10))
def test_lists_of_lower_bounded_length(n):
x = find(
lists(integers(), min_size=n), lambda x: sum(x) >= 2 * n
)
assert n <= len(x) <= 2 * n
assert all(t >= 0 for t in x)
assert len(x) == n or all(t > 0 for t in x)
assert sum(x) == 2 * n
@pytest.mark.parametrize(u'n', range(10))
def test_lists_forced_near_top(n):
assert find(
lists(integers(), min_size=n, max_size=n + 2),
lambda t: len(t) == n + 2
) == [0] * (n + 2)
@flaky(max_runs=5, min_passes=1)
def test_can_find_unique_lists_of_non_set_order():
ls = minimal(
lists(text(), unique=True),
lambda x: list(set(reversed(x))) != x
)
assert len(set(ls)) == len(ls)
assert len(ls) == 2
def test_can_find_sets_unique_by_incomplete_data():
ls = find(
lists(lists(integers(min_value=0), min_size=2), unique_by=max),
lambda x: len(x) >= 10
)
assert len(ls) == 10
assert sorted(list(map(max, ls))) == list(range(10))
for v in ls:
assert 0 in v
hypothesis-3.0.1/tests/cover/test_simple_numbers.py 0000664 0000000 0000000 00000014345 12661275660 0022651 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
import math
import pytest
from hypothesis import find, given
from hypothesis.strategies import lists, floats, integers, complex_numbers
def test_minimize_negative_int():
assert find(integers(), lambda x: x < 0) == -1
assert find(integers(), lambda x: x < -1) == -2
def test_positive_negative_int():
assert find(integers(), lambda x: x > 0) == 1
assert find(integers(), lambda x: x > 1) == 2
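# Classic shrink targets: powers of two and their neighbours, plus powers of
# ten.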
boundaries = pytest.mark.parametrize(u'boundary', sorted(
[2 ** i for i in range(10)] +
[2 ** i - 1 for i in range(10)] +
[2 ** i + 1 for i in range(10)] +
[10 ** i for i in range(6)]
))
@boundaries
def test_minimizes_int_down_to_boundary(boundary):
assert find(integers(), lambda x: x >= boundary) == boundary
@boundaries
def test_minimizes_int_up_to_boundary(boundary):
assert find(integers(), lambda x: x <= -boundary) == -boundary
@boundaries
def test_minimizes_ints_from_down_to_boundary(boundary):
assert find(
integers(min_value=boundary - 10), lambda x: x >= boundary) == boundary
assert find(integers(min_value=boundary), lambda x: True) == boundary
@boundaries
def test_minimizes_integer_range_to_boundary(boundary):
assert find(
integers(boundary, boundary + 100), lambda x: True
) == boundary
def test_single_integer_range_is_range():
assert find(integers(1, 1), lambda x: True) == 1
def test_find_small_number_in_large_range():
assert find(
integers((-2 ** 32), 2 ** 32), lambda x: x >= 101) == 101
def test_find_small_sum_float_list():
xs = find(
lists(floats(), min_size=10),
lambda x: sum(x) >= 1.0
)
assert sum(xs) <= 2.0
def test_finds_boundary_floats():
def f(x):
print(x)
return True
assert -1 <= find(floats(min_value=-1, max_value=1), f) <= 1
def test_find_non_boundary_float():
x = find(floats(min_value=1, max_value=9), lambda x: x > 2)
assert 2 < x < 3
def test_can_find_standard_complex_numbers():
    # These only check that find() can produce a complex number satisfying
    # each predicate; no particular minimal value is asserted.
    find(complex_numbers(), lambda x: x.imag != 0)
    find(complex_numbers(), lambda x: x.real != 0)
def test_minimal_float_is_zero():
assert find(floats(), lambda x: True) == 0.0
def test_negative_floats_simplify_to_zero():
assert find(floats(), lambda x: x <= -1.0) == -1.0
def test_find_infinite_float_is_positive():
assert find(floats(), math.isinf) == float(u'inf')
def test_can_find_infinite_negative_float():
assert find(floats(), lambda x: x < -sys.float_info.max)
def test_can_find_float_on_boundary_of_representable():
find(floats(), lambda x: x + 1 == x and not math.isinf(x))
def test_minimize_nan():
assert math.isnan(find(floats(), math.isnan))
def test_minimize_very_large_float():
t = sys.float_info.max / 2
assert t <= find(floats(), lambda x: x >= t) < float(u'inf')
def is_integral(value):
try:
return int(value) == value
except (OverflowError, ValueError):
return False
def test_can_find_float_far_from_integral():
find(floats(), lambda x: not (
math.isnan(x) or
math.isinf(x) or
is_integral(x * (2 ** 32))
))
def test_can_find_integrish():
find(floats(), lambda x: (
is_integral(x * (2 ** 32)) and not is_integral(x * 16)
))
def test_list_of_fractional_float():
assert set(find(
lists(floats(), average_size=50),
lambda x: len([t for t in x if t >= 1.5]) >= 10
)) in (
set((1.5,)),
set((1.5, 2.0)),
set((2.0,)),
)
def test_minimal_fractional_float():
assert find(floats(), lambda x: x >= 1.5) in (1.5, 2.0)
def test_minimizes_lists_of_negative_ints_up_to_boundary():
result = find(
lists(integers()), lambda x: len([t for t in x if t <= -1]) >= 10)
assert result == [-1] * 10
@pytest.mark.parametrize((u'left', u'right'), [
(0.0, 5e-324),
(-5e-324, 0.0),
(-5e-324, 5e-324),
(5e-324, 1e-323),
])
def test_floats_in_constrained_range(left, right):
@given(floats(left, right))
def test_in_range(r):
assert left <= r <= right
test_in_range()
def test_bounds_are_respected():
assert find(floats(min_value=1.0), lambda x: True) == 1.0
assert find(floats(max_value=-1.0), lambda x: True) == -1.0
@pytest.mark.parametrize('k', range(10))
def test_floats_from_zero_have_reasonable_range(k):
n = 10 ** k
assert find(floats(min_value=0.0), lambda x: x >= n) == float(n)
assert find(floats(max_value=0.0), lambda x: x <= -n) == float(-n)
def test_explicit_allow_nan():
find(floats(allow_nan=True), math.isnan)
def test_one_sided_contains_infinity():
find(floats(min_value=1.0), math.isinf)
find(floats(max_value=1.0), math.isinf)
@given(floats(min_value=0.0, allow_infinity=False))
def test_no_allow_infinity_upper(x):
assert not math.isinf(x)
@given(floats(max_value=0.0, allow_infinity=False))
def test_no_allow_infinity_lower(x):
assert not math.isinf(x)
class TestFloatsAreFloats(object):
@given(floats())
def test_unbounded(self, arg):
assert isinstance(arg, float)
@given(floats(min_value=0, max_value=2 ** 64 - 1))
def test_int_int(self, arg):
assert isinstance(arg, float)
@given(floats(min_value=0, max_value=float(2 ** 64 - 1)))
def test_int_float(self, arg):
assert isinstance(arg, float)
@given(floats(min_value=float(0), max_value=2 ** 64 - 1))
def test_float_int(self, arg):
assert isinstance(arg, float)
@given(floats(min_value=float(0), max_value=float(2 ** 64 - 1)))
def test_float_float(self, arg):
assert isinstance(arg, float)
hypothesis-3.0.1/tests/cover/test_simple_strings.py 0000664 0000000 0000000 00000006524 12661275660 0022667 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import unicodedata
from random import Random
import pytest
from hypothesis import find, given, settings
from hypothesis.strategies import text, binary, tuples, characters
def test_can_minimize_up_to_zero():
s = find(text(), lambda x: any(t <= u'0' for t in x))
assert s == u'0'
def test_minimizes_towards_ascii_zero():
s = find(text(), lambda x: any(t < u'0' for t in x))
assert s == chr(ord(u'0') - 1)
def test_can_handle_large_codepoints():
s = find(text(), lambda x: x >= u'☃')
assert s == u'☃'
def test_can_find_mixed_ascii_and_non_ascii_strings():
s = find(
text(), lambda x: (
any(t >= u'☃' for t in x) and
any(ord(t) <= 127 for t in x)))
assert len(s) == 2
assert sorted(s) == [u'0', u'☃']
def test_will_find_ascii_examples_given_the_chance():
s = find(
tuples(text(max_size=1), text(max_size=1)),
lambda x: x[0] and (x[0] < x[1]))
assert ord(s[1]) == ord(s[0]) + 1
assert u'0' in s
def test_finds_single_element_strings():
assert find(text(), bool, random=Random(4)) == u'0'
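# The first, unbounded definition of test_foo below should be falsified;
# redefining it with max_size=10 makes the same assertion hold, showing that
# the size bound is respected.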
def test_binary_respects_changes_in_size():
@given(binary())
def test_foo(x):
assert len(x) <= 10
with pytest.raises(AssertionError):
test_foo()
@given(binary(max_size=10))
def test_foo(x):
assert len(x) <= 10
test_foo()
@given(text(min_size=1, max_size=1))
@settings(max_examples=2000)
def test_does_not_generate_surrogates(t):
assert unicodedata.category(t) != u'Cs'
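# U+D800 to U+DFFF are surrogates and are never generated, so the minimal
# character >= u'\udfff' is u'\ue000', the first codepoint past the
# surrogate range.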
def test_does_not_simplify_into_surrogates():
f = find(text(average_size=25.0), lambda x: x >= u'\udfff')
assert f == u'\ue000'
f = find(
text(average_size=25.0),
lambda x: len([t for t in x if t >= u'\udfff']) >= 10)
assert f == u'\ue000' * 10
@given(text(alphabet=[u'a', u'b']))
def test_respects_alphabet_if_list(xs):
assert set(xs).issubset(set(u'ab'))
@given(text(alphabet=u'cdef'))
def test_respects_alphabet_if_string(xs):
assert set(xs).issubset(set(u'cdef'))
@given(text())
def test_can_encode_as_utf8(s):
s.encode('utf-8')
@given(text(characters(blacklist_characters=u'\n')))
def test_can_blacklist_newlines(s):
assert u'\n' not in s
@given(text(characters(blacklist_categories=('Cc', 'Cs'))))
def test_can_exclude_newlines_by_category(s):
assert u'\n' not in s
@given(text(characters(max_codepoint=127)))
def test_can_restrict_to_ascii_only(s):
s.encode('ascii')
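# TestData.for_buffer is internal conjecture machinery: it replays a fixed
# byte buffer, so a binary strategy with min_size == max_size == 3 should
# hand back exactly those three bytes.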
def test_fixed_size_bytes_just_draw_bytes():
from hypothesis.internal.conjecture.data import TestData
x = TestData.for_buffer(b'foo')
assert x.draw(binary(min_size=3, max_size=3)) == b'foo'
hypothesis-3.0.1/tests/cover/test_sizes.py 0000664 0000000 0000000 00000001630 12661275660 0020753 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.utils.size import clamp
def test_clamp():
assert clamp(None, 1, None) == 1
assert clamp(None, 10, 1) == 1
assert clamp(1, 0, 1) == 1
assert clamp(1, 0, None) == 1
hypothesis-3.0.1/tests/cover/test_stateful.py 0000664 0000000 0000000 00000030251 12661275660 0021446 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import inspect
from collections import namedtuple
import pytest
from hypothesis import settings as Settings
from hypothesis import assume
from hypothesis.errors import Flaky, InvalidDefinition
from hypothesis.control import current_build_context
from tests.common.utils import raises, capture_out
from hypothesis.database import ExampleDatabase
from hypothesis.stateful import rule, Bundle, precondition, \
GenericStateMachine, RuleBasedStateMachine, \
run_state_machine_as_test
from hypothesis.strategies import just, none, lists, binary, tuples, \
booleans, integers, sampled_from
class SetStateMachine(GenericStateMachine):
def __init__(self):
self.elements = []
def steps(self):
strat = tuples(just(False), integers(0, 5))
if self.elements:
strat |= tuples(just(True), sampled_from(self.elements))
return strat
def execute_step(self, step):
delete, value = step
if delete:
self.elements.remove(value)
assert value not in self.elements
else:
self.elements.append(value)
class OrderedStateMachine(GenericStateMachine):
def __init__(self):
self.counter = 0
def steps(self):
return (
integers(self.counter - 1, self.counter + 50)
)
def execute_step(self, step):
assert step >= self.counter
self.counter = step
class GoodSet(GenericStateMachine):
def __init__(self):
self.stuff = set()
def steps(self):
return tuples(booleans(), integers())
def execute_step(self, step):
delete, value = step
if delete:
self.stuff.discard(value)
else:
self.stuff.add(value)
assert delete == (value not in self.stuff)
Leaf = namedtuple(u'Leaf', (u'label',))
Split = namedtuple(u'Split', (u'left', u'right'))
class BalancedTrees(RuleBasedStateMachine):
trees = u'BinaryTree'
@rule(target=trees, x=booleans())
def leaf(self, x):
return Leaf(x)
@rule(target=trees, left=Bundle(trees), right=Bundle(trees))
def split(self, left, right):
return Split(left, right)
@rule(tree=Bundle(trees))
def test_is_balanced(self, tree):
if isinstance(tree, Leaf):
return
else:
assert abs(self.size(tree.left) - self.size(tree.right)) <= 1
self.test_is_balanced(tree.left)
self.test_is_balanced(tree.right)
def size(self, tree):
if isinstance(tree, Leaf):
return 1
else:
return 1 + self.size(tree.left) + self.size(tree.right)
class DepthCharge(object):
def __init__(self, value):
if value is None:
self.depth = 0
else:
self.depth = value.depth + 1
class DepthMachine(RuleBasedStateMachine):
charges = Bundle(u'charges')
@rule(targets=(charges,), child=charges)
def charge(self, child):
return DepthCharge(child)
@rule(targets=(charges,))
def none_charge(self):
return DepthCharge(None)
@rule(check=charges)
def is_not_too_deep(self, check):
assert check.depth < 3
class MultipleRulesSameFuncMachine(RuleBasedStateMachine):
def myfunc(self, data):
print(data)
rule1 = rule(data=just(u"rule1data"))(myfunc)
rule2 = rule(data=just(u"rule2data"))(myfunc)
class PreconditionMachine(RuleBasedStateMachine):
num = 0
@rule()
def add_one(self):
self.num += 1
@rule()
def set_to_zero(self):
self.num = 0
@rule(num=integers())
@precondition(lambda self: self.num != 0)
def div_by_precondition_after(self, num):
self.num = num / self.num
@precondition(lambda self: self.num != 0)
@rule(num=integers())
def div_by_precondition_before(self, num):
self.num = num / self.num
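# Each of these machines encodes an invariant that Hypothesis should be able
# to falsify; the tests below raise the example limits and then check that
# failures are both found and shrunk to a small number of steps.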
bad_machines = (
OrderedStateMachine, SetStateMachine, BalancedTrees,
DepthMachine,
)
for m in bad_machines:
m.TestCase.settings = Settings(
m.TestCase.settings, max_examples=1000, max_iterations=2000
)
cheap_bad_machines = list(bad_machines)
cheap_bad_machines.remove(BalancedTrees)
with_cheap_bad_machines = pytest.mark.parametrize(
u'machine',
cheap_bad_machines, ids=[t.__name__ for t in cheap_bad_machines]
)
@pytest.mark.parametrize(
u'machine',
bad_machines, ids=[t.__name__ for t in bad_machines]
)
def test_bad_machines_fail(machine):
test_class = machine.TestCase
try:
with capture_out() as o:
with raises(AssertionError):
test_class().runTest()
except Exception:
print(o.getvalue())
raise
v = o.getvalue()
print(v)
assert u'Step #1' in v
assert u'Step #50' not in v
def test_multiple_rules_same_func():
test_class = MultipleRulesSameFuncMachine.TestCase
with capture_out() as o:
test_class().runTest()
output = o.getvalue()
assert 'rule1data' in output
assert 'rule2data' in output
class GivenLikeStateMachine(GenericStateMachine):
def steps(self):
return lists(booleans(), average_size=25.0)
def execute_step(self, step):
assume(any(step))
def test_can_get_test_case_off_machine_instance():
assert GoodSet().TestCase is GoodSet().TestCase
assert GoodSet().TestCase is not None
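# On the final (reporting) run this machine draws far less data than it did
# while searching, so the failure found earlier cannot be replayed and
# Hypothesis is expected to raise Flaky.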
class FlakyDrawLessMachine(GenericStateMachine):
def steps(self):
cb = current_build_context()
if cb.is_final:
return binary(min_size=1, max_size=1)
else:
return binary(min_size=1024, max_size=1024)
def execute_step(self, step):
cb = current_build_context()
if not cb.is_final:
assert 0 not in bytearray(step)
def test_flaky_draw_less_raises_flaky():
with raises(Flaky):
FlakyDrawLessMachine.TestCase().runTest()
class FlakyStateMachine(GenericStateMachine):
def steps(self):
return just(())
def execute_step(self, step):
assert not any(
t[3] == u'find_breaking_runner'
for t in inspect.getouterframes(inspect.currentframe())
)
def test_flaky_raises_flaky():
with raises(Flaky):
FlakyStateMachine.TestCase().runTest()
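# Every call to steps() ratchets the required list length upwards, so a
# failing example can never be reproduced exactly; this should also be
# reported as Flaky.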
class FlakyRatchettingMachine(GenericStateMachine):
ratchet = 0
def steps(self):
FlakyRatchettingMachine.ratchet += 1
n = FlakyRatchettingMachine.ratchet
return lists(integers(), min_size=n, max_size=n)
def execute_step(self, step):
assert False
def test_ratchetting_raises_flaky():
with raises(Flaky):
FlakyRatchettingMachine.TestCase().runTest()
def test_empty_machine_is_invalid():
class EmptyMachine(RuleBasedStateMachine):
pass
with raises(InvalidDefinition):
EmptyMachine.TestCase().runTest()
def test_machine_with_no_terminals_is_invalid():
class NonTerminalMachine(RuleBasedStateMachine):
@rule(value=Bundle(u'hi'))
def bye(self, hi):
pass
with raises(InvalidDefinition):
NonTerminalMachine.TestCase().runTest()
class DynamicMachine(RuleBasedStateMachine):
@rule(value=Bundle(u'hi'))
def test_stuff(x):
pass
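# Rules can also be attached after class definition via define_rule rather
# than the @rule decorator; DynamicMachine and IntAdder exercise that path.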
DynamicMachine.define_rule(
targets=(), function=lambda self: 1, arguments={}
)
class IntAdder(RuleBasedStateMachine):
pass
IntAdder.define_rule(
targets=(u'ints',), function=lambda self, x: x, arguments={
u'x': integers()
}
)
IntAdder.define_rule(
targets=(u'ints',), function=lambda self, x, y: x, arguments={
u'x': integers(), u'y': Bundle(u'ints'),
}
)
with Settings(max_examples=10):
TestGoodSets = GoodSet.TestCase
TestGivenLike = GivenLikeStateMachine.TestCase
TestDynamicMachine = DynamicMachine.TestCase
TestIntAdder = IntAdder.TestCase
TestPrecondition = PreconditionMachine.TestCase
def test_picks_up_settings_at_first_use_of_testcase():
assert TestDynamicMachine.settings.max_examples == 10
def test_new_rules_are_picked_up_before_and_after_rules_call():
class Foo(RuleBasedStateMachine):
pass
Foo.define_rule(
targets=(), function=lambda self: 1, arguments={}
)
assert len(Foo.rules()) == 1
Foo.define_rule(
targets=(), function=lambda self: 2, arguments={}
)
assert len(Foo.rules()) == 2
def test_settings_are_independent():
s = Settings()
orig = s.max_examples
with s:
class Foo(RuleBasedStateMachine):
pass
Foo.define_rule(
targets=(), function=lambda self: 1, arguments={}
)
Foo.TestCase.settings = Settings(
Foo.TestCase.settings, max_examples=1000000)
assert s.max_examples == orig
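# find_breaking_runner (internal) searches for a run of the machine that
# fails; here the only assertion lives in teardown, so the minimal failing
# run should execute exactly one step.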
def test_minimizes_errors_in_teardown():
class Foo(GenericStateMachine):
def __init__(self):
self.counter = 0
def steps(self):
return tuples()
def execute_step(self, value):
self.counter += 1
def teardown(self):
assert not self.counter
runner = Foo.find_breaking_runner()
f = Foo()
with raises(AssertionError):
runner.run(f, print_steps=True)
assert f.counter == 1
class RequiresInit(GenericStateMachine):
def __init__(self, threshold):
super(RequiresInit, self).__init__()
self.threshold = threshold
def steps(self):
return integers()
def execute_step(self, value):
if value > self.threshold:
raise ValueError(u'%d is too high' % (value,))
def test_can_use_factory_for_tests():
with raises(ValueError):
run_state_machine_as_test(lambda: RequiresInit(42))
class FailsEventually(GenericStateMachine):
def __init__(self):
super(FailsEventually, self).__init__()
self.counter = 0
def steps(self):
return none()
def execute_step(self, _):
self.counter += 1
assert self.counter < 10
FailsEventually.TestCase.settings = Settings(
FailsEventually.TestCase.settings, stateful_step_count=5)
TestDoesNotFail = FailsEventually.TestCase
def test_can_explicitly_pass_settings():
try:
FailsEventually.TestCase.settings = Settings(
FailsEventually.TestCase.settings, stateful_step_count=15)
run_state_machine_as_test(
FailsEventually, settings=Settings(
stateful_step_count=2,
))
finally:
FailsEventually.TestCase.settings = Settings(
FailsEventually.TestCase.settings, stateful_step_count=5)
def test_saves_failing_example_in_database():
db = ExampleDatabase()
with raises(AssertionError):
run_state_machine_as_test(
SetStateMachine, Settings(database=db))
assert len(list(db.data.keys())) == 1
def test_can_run_with_no_db():
with raises(AssertionError):
run_state_machine_as_test(
SetStateMachine, Settings(database=None))
def test_stateful_double_rule_is_forbidden(recwarn):
with pytest.raises(InvalidDefinition):
class DoubleRuleMachine(RuleBasedStateMachine):
@rule(num=just(1))
@rule(num=just(2))
def whatevs(self, num):
pass
def test_can_explicitly_call_functions_when_precondition_not_satisfied():
class BadPrecondition(RuleBasedStateMachine):
def __init__(self):
super(BadPrecondition, self).__init__()
@precondition(lambda self: False)
@rule()
def test_blah(self):
raise ValueError()
@rule()
def test_foo(self):
self.test_blah()
with pytest.raises(ValueError):
run_state_machine_as_test(BadPrecondition)
hypothesis-3.0.1/tests/cover/test_strategytests.py 0000664 0000000 0000000 00000002105 12661275660 0022541 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
# most interesting tests of this nature are executed in nocover, but we have
# a few here to make sure we have coverage of the strategytests module itself.
from __future__ import division, print_function, absolute_import
from hypothesis.strategies import sets, booleans, integers
from hypothesis.strategytests import strategy_test_suite
TestBoolSets = strategy_test_suite(sets(booleans()))
TestInts = strategy_test_suite(integers())
hypothesis-3.0.1/tests/cover/test_streams.py 0000664 0000000 0000000 00000005027 12661275660 0021300 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from itertools import islice
import pytest
from hypothesis import find, given
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import text, lists, booleans, streaming
from hypothesis.searchstrategy.streams import Stream
@given(lists(booleans()))
def test_stream_give_lists(xs):
s = Stream(iter(xs))
assert list(s) == xs
assert list(s) == xs
@given(lists(booleans()))
def test_can_zip_streams_with_self(xs):
s = Stream(iter(xs))
assert list(zip(s, s)) == list(zip(xs, xs))
def loop(x):
while True:
yield x
def test_can_stream_infinite():
s = Stream(loop(False))
assert list(islice(s, 100)) == [False] * 100
@given(streaming(text()))
def test_fetched_repr_is_in_stream_repr(s):
assert repr(s) == u'Stream(...)'
assert repr(next(iter(s))) in repr(s)
def test_cannot_thunk_past_end_of_list():
with pytest.raises(IndexError):
Stream([1])._thunk_to(5)
def test_thunking_evaluates_initial_list():
x = Stream([1, 2, 3])
x._thunk_to(1)
assert len(x.fetched) == 1
def test_thunking_map_evaluates_source():
x = Stream(loop(False))
y = x.map(lambda t: True)
y[100]
assert y._thunked() == 101
assert x._thunked() == 101
def test_wrong_index_raises_type_error():
with pytest.raises(InvalidArgument):
Stream([])[u'kittens']
def test_can_index_into_unindexed():
x = Stream(loop(1))
assert x[100] == 1
def test_can_map():
x = Stream([1, 2, 3]).map(lambda i: i * 2)
assert isinstance(x, Stream)
assert list(x) == [2, 4, 6]
def test_streaming_errors_in_find():
with pytest.raises(InvalidArgument):
find(streaming(booleans()), lambda x: True)
def test_default_stream_is_empty():
assert list(Stream()) == []
def test_can_slice_streams():
assert list(Stream([1, 2, 3])[:2]) == [1, 2]
hypothesis-3.0.1/tests/cover/test_testdecorators.py 0000664 0000000 0000000 00000031404 12661275660 0022665 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
import functools
import threading
from collections import namedtuple
import hypothesis.reporting as reporting
from hypothesis import note, seed, given, assume, reject, settings, \
Verbosity
from hypothesis.errors import Unsatisfiable
from tests.common.utils import fails, raises, fails_with, capture_out
from hypothesis.strategies import just, sets, text, lists, binary, \
builds, floats, one_of, booleans, integers, frozensets, sampled_from
@given(integers(), integers())
def test_int_addition_is_commutative(x, y):
assert x + y == y + x
@fails
@given(text(), text())
def test_str_addition_is_commutative(x, y):
assert x + y == y + x
@fails
@given(binary(), binary())
def test_bytes_addition_is_commutative(x, y):
assert x + y == y + x
@given(integers(), integers(), integers())
def test_int_addition_is_associative(x, y, z):
assert x + (y + z) == (x + y) + z
@fails
@given(floats(), floats(), floats())
@settings(max_examples=2000,)
def test_float_addition_is_associative(x, y, z):
assert x + (y + z) == (x + y) + z
@given(lists(integers()))
def test_reversing_preserves_integer_addition(xs):
assert sum(xs) == sum(reversed(xs))
def test_still_minimizes_on_non_assertion_failures():
@settings(max_examples=50)
@given(integers())
def is_not_too_large(x):
if x >= 10:
raise ValueError('No, %s is just too large. Sorry' % x)
with raises(ValueError) as exinfo:
is_not_too_large()
assert ' 10 ' in exinfo.value.args[0]
@given(integers())
def test_integer_division_shrinks_positive_integers(n):
assume(n > 0)
assert n / 2 < n
class TestCases(object):
@given(integers())
def test_abs_non_negative(self, x):
assert abs(x) >= 0
assert isinstance(self, TestCases)
@given(x=integers())
def test_abs_non_negative_varargs(self, x, *args):
assert abs(x) >= 0
assert isinstance(self, TestCases)
@given(x=integers())
def test_abs_non_negative_varargs_kwargs(self, *args, **kw):
assert abs(kw['x']) >= 0
assert isinstance(self, TestCases)
@given(x=integers())
def test_abs_non_negative_varargs_kwargs_only(*args, **kw):
assert abs(kw['x']) >= 0
assert isinstance(args[0], TestCases)
@fails
@given(integers())
def test_int_is_always_negative(self, x):
assert x < 0
@fails
@given(floats(), floats())
def test_float_addition_cancels(self, x, y):
assert x + (y - x) == y
@fails
@given(x=integers(min_value=0, max_value=3), name=text())
def test_can_be_given_keyword_args(x, name):
assume(x > 0)
assert len(name) < x
@fails_with(Unsatisfiable)
@settings(timeout=0.1)
@given(integers())
def test_slow_test_times_out(x):
time.sleep(0.05)
# Cheap hack to make test functions which fail on their second invocation
calls = [0, 0, 0, 0]
timeout_settings = settings(timeout=0.2)
# The following tests exist to test that verifiers start their timeout
# from when the test first executes, not from when it is defined.
@fails
@given(integers())
@timeout_settings
def test_slow_failing_test_1(x):
time.sleep(0.05)
assert not calls[0]
calls[0] = 1
@fails
@timeout_settings
@given(integers())
def test_slow_failing_test_2(x):
time.sleep(0.05)
assert not calls[1]
calls[1] = 1
@fails
@given(integers())
@timeout_settings
def test_slow_failing_test_3(x):
time.sleep(0.05)
assert not calls[2]
calls[2] = 1
@fails
@timeout_settings
@given(integers())
def test_slow_failing_test_4(x):
time.sleep(0.05)
assert not calls[3]
calls[3] = 1
@fails
@given(one_of(floats(), booleans()), one_of(floats(), booleans()))
def test_one_of_produces_different_values(x, y):
assert type(x) == type(y)
@given(just(42))
def test_is_the_answer(x):
assert x == 42
@fails
@given(text(), text())
def test_text_addition_is_not_commutative(x, y):
assert x + y == y + x
@fails
@given(binary(), binary())
def test_binary_addition_is_not_commutative(x, y):
assert x + y == y + x
@given(integers(1, 10))
def test_integers_are_in_range(x):
assert 1 <= x <= 10
@given(integers(min_value=100))
def test_integers_from_are_from(x):
assert x >= 100
def test_does_not_catch_interrupt_during_falsify():
calls = [0]
@given(integers())
def flaky_base_exception(x):
if not calls[0]:
calls[0] = 1
raise KeyboardInterrupt()
with raises(KeyboardInterrupt):
flaky_base_exception()
def test_contains_the_test_function_name_in_the_exception_string():
calls = [0]
@given(integers())
@settings(max_iterations=10, max_examples=10)
def this_has_a_totally_unique_name(x):
calls[0] += 1
reject()
with raises(Unsatisfiable) as e:
this_has_a_totally_unique_name()
print('Called %d times' % tuple(calls))
assert this_has_a_totally_unique_name.__name__ in e.value.args[0]
calls2 = [0]
class Foo(object):
@given(integers())
@settings(max_iterations=10, max_examples=10)
def this_has_a_unique_name_and_lives_on_a_class(self, x):
calls2[0] += 1
reject()
with raises(Unsatisfiable) as e:
Foo().this_has_a_unique_name_and_lives_on_a_class()
print('Called %d times' % tuple(calls2))
assert (
Foo.this_has_a_unique_name_and_lives_on_a_class.__name__
) in e.value.args[0]
@given(lists(integers()), integers())
def test_removing_an_element_from_a_unique_list(xs, y):
assume(len(set(xs)) == len(xs))
try:
xs.remove(y)
except ValueError:
pass
assert y not in xs
@fails
@given(lists(integers(), average_size=25.0), integers())
def test_removing_an_element_from_a_non_unique_list(xs, y):
assume(y in xs)
xs.remove(y)
assert y not in xs
@given(sets(sampled_from(list(range(10)))))
def test_can_test_sets_sampled_from(xs):
assert all(isinstance(x, int) for x in xs)
assert all(0 <= x < 10 for x in xs)
mix = one_of(sampled_from([1, 2, 3]), text())
@fails
@given(mix, mix)
def test_can_mix_sampling_with_generating(x, y):
assert type(x) == type(y)
@fails
@given(frozensets(integers()))
def test_can_find_large_sum_frozenset(xs):
assert sum(xs) < 100
def test_prints_on_failure_by_default():
@given(integers(), integers())
@settings(max_examples=200, timeout=-1)
def test_ints_are_sorted(balthazar, evans):
assume(evans >= 0)
assert balthazar <= evans
with raises(AssertionError):
with capture_out() as out:
with reporting.with_reporter(reporting.default):
test_ints_are_sorted()
out = out.getvalue()
lines = [l.strip() for l in out.split(u'\n')]
assert (
u'Falsifying example: test_ints_are_sorted(balthazar=1, evans=0)'
in lines)
def test_does_not_print_on_success():
with settings(verbosity=Verbosity.normal):
@given(integers())
def test_is_an_int(x):
return
with capture_out() as out:
test_is_an_int()
out = out.getvalue()
lines = [l.strip() for l in out.split(u'\n')]
assert all(not l for l in lines), lines
@given(sampled_from([1]))
def test_can_sample_from_single_element(x):
assert x == 1
@fails
@given(lists(integers()))
def test_list_is_sorted(xs):
assert sorted(xs) == xs
@fails
@given(floats(1.0, 2.0))
def test_is_an_endpoint(x):
assert x == 1.0 or x == 2.0
def test_breaks_bounds():
@fails
@given(x=integers())
def test_is_bounded(t, x):
assert x < t
for t in [1, 10, 100, 1000]:
test_is_bounded(t)
@given(x=booleans())
def test_can_test_kwargs_only_methods(**kwargs):
assert isinstance(kwargs['x'], bool)
@fails_with(UnicodeEncodeError)
@given(text())
@settings(max_examples=200)
def test_is_ascii(x):
x.encode('ascii')
@fails
@given(text())
def test_is_not_ascii(x):
try:
x.encode('ascii')
assert False
except UnicodeEncodeError:
pass
@fails
@given(text())
def test_can_find_string_with_duplicates(s):
assert len(set(s)) == len(s)
@fails
@given(text())
def test_has_ascii(x):
if not x:
return
ascii_characters = (
u'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ \t\n'
)
assert any(c in ascii_characters for c in x)
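# @seed pins Hypothesis's own source of randomness; this checks that doing
# so leaves the global random module state untouched.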
def test_uses_provided_seed():
import random
initial = random.getstate()
@given(integers())
@seed(42)
def test_foo(x):
pass
test_foo()
assert random.getstate() == initial
def test_can_derandomize():
values = []
@fails
@given(integers())
@settings(derandomize=True, database=None)
def test_blah(x):
values.append(x)
assert x > 0
test_blah()
assert values
v1 = values
values = []
test_blah()
assert v1 == values
def test_can_run_without_database():
@given(integers())
@settings(database=None)
def test_blah(x):
assert False
with raises(AssertionError):
test_blah()
def test_can_run_with_database_in_thread():
results = []
@given(integers())
def test_blah(x):
assert False
def run_test():
try:
test_blah()
except AssertionError:
results.append('success')
# Run once in the main thread and once in another thread. Execution is
# strictly serial, so no need for locking.
run_test()
thread = threading.Thread(target=run_test)
thread.start()
thread.join()
assert results == ['success', 'success']
@given(integers())
def test_can_call_an_argument_f(f):
# See issue https://github.com/DRMacIver/hypothesis/issues/38 for details
pass
Litter = namedtuple('Litter', ('kitten1', 'kitten2'))
@given(builds(Litter, integers(), integers()))
def test_named_tuples_are_of_right_type(litter):
assert isinstance(litter, Litter)
@fails_with(AttributeError)
@given(integers().map(lambda x: x.nope))
@settings(perform_health_check=False)
def test_fails_in_reify(x):
pass
@given(text(u'a'))
def test_a_text(x):
assert set(x).issubset(set(u'a'))
@given(text(u''))
def test_empty_text(x):
assert not x
@given(text(u'abcdefg'))
def test_mixed_text(x):
assert set(x).issubset(set(u'abcdefg'))
def test_when_set_to_no_simplifies_runs_failing_example_twice():
failing = [0]
@given(integers())
@settings(max_shrinks=0, max_examples=200)
def foo(x):
if x > 11:
note('Lo')
failing[0] += 1
assert False
with settings(verbosity=Verbosity.normal):
with raises(AssertionError):
with capture_out() as out:
foo()
assert failing == [2]
assert u'Falsifying example' in out.getvalue()
assert u'Lo' in out.getvalue()
@given(integers())
@settings(max_examples=1)
def test_should_not_fail_if_max_examples_less_than_min_satisfying(x):
pass
def nameless_const(x):
def f(u, v):
return u
return functools.partial(f, x)
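# functools.partial objects have no __name__, so these tests check that map
# and flatmap still cope with callables that cannot provide a nice name.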
@given(sets(booleans()).map(nameless_const(2)))
def test_can_map_nameless(x):
assert x == 2
@given(
integers(0, 10).flatmap(nameless_const(just(3))))
def test_can_flatmap_nameless(x):
assert x == 3
def test_can_be_used_with_none_module():
def test_is_cool(i):
pass
test_is_cool.__module__ = None
test_is_cool = given(integers())(test_is_cool)
test_is_cool()
def test_does_not_print_notes_if_all_succeed():
@given(integers())
@settings(verbosity=Verbosity.normal)
def test(i):
note('Hi there')
with capture_out() as out:
with reporting.with_reporter(reporting.default):
test()
assert not out.getvalue()
def test_prints_notes_once_on_failure():
@given(lists(integers()))
@settings(database=None, verbosity=Verbosity.normal)
def test(xs):
note('Hi there')
assert sum(xs) > 100
with capture_out() as out:
with reporting.with_reporter(reporting.default):
with raises(AssertionError):
test()
lines = out.getvalue().strip().splitlines()
assert len(lines) == 2
assert u'Hi there' in lines
@given(lists(max_size=0))
def test_empty_lists(xs):
assert xs == []
hypothesis-3.0.1/tests/cover/test_timeout.py 0000664 0000000 0000000 00000002473 12661275660 0021312 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import time
from pytest import raises
from hypothesis import given, settings
from hypothesis.internal import debug
from hypothesis.strategies import lists, integers
def test_can_timeout_during_an_unsuccessful_simplify():
record = []
@debug.timeout(3)
@given(lists(integers(), min_size=10))
@settings(timeout=1, database=None)
def first_bad_float_list(xs):
if record:
time.sleep(0.1)
assert record[0] != xs
elif sum(xs) >= 10 ** 6:
record.append(xs)
assert False
with raises(AssertionError):
first_bad_float_list()
hypothesis-3.0.1/tests/cover/test_uuids.py 0000664 0000000 0000000 00000002102 12661275660 0020742 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import find, given
@given(st.lists(st.uuids()))
def test_are_unique(ls):
assert len(set(ls)) == len(ls)
@given(st.lists(st.uuids()), st.randoms())
def test_retains_uniqueness_in_simplify(ls, rnd):
ts = find(st.lists(st.uuids()), lambda x: len(x) >= 5, random=rnd)
assert len(ts) == len(set(ts)) == 5
hypothesis-3.0.1/tests/cover/test_validation.py 0000664 0000000 0000000 00000011431 12661275660 0021750 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import find, given, settings
from hypothesis.errors import InvalidArgument
from tests.common.utils import fails_with
from hypothesis.strategies import sets, lists, floats, booleans, \
integers, frozensets
def test_errors_when_given_varargs():
@given(integers())
def has_varargs(*args):
pass
with pytest.raises(InvalidArgument) as e:
has_varargs()
assert u'varargs' in e.value.args[0]
def test_varargs_without_positional_arguments_allowed():
@given(somearg=integers())
def has_varargs(somearg, *args):
pass
def test_errors_when_given_varargs_and_kwargs_with_positional_arguments():
@given(integers())
def has_varargs(*args, **kw):
pass
with pytest.raises(InvalidArgument) as e:
has_varargs()
assert u'varargs' in e.value.args[0]
def test_varargs_and_kwargs_without_positional_arguments_allowed():
@given(somearg=integers())
def has_varargs(*args, **kw):
pass
def test_bare_given_errors():
@given()
def test():
pass
with pytest.raises(InvalidArgument):
test()
def test_errors_on_unwanted_kwargs():
@given(hello=int, world=int)
def greet(world):
pass
with pytest.raises(InvalidArgument):
greet()
def test_errors_on_too_many_positional_args():
@given(integers(), int, int)
def foo(x, y):
pass
with pytest.raises(InvalidArgument):
foo()
def test_errors_on_any_varargs():
@given(integers())
def oops(*args):
pass
with pytest.raises(InvalidArgument):
oops()
def test_can_put_arguments_in_the_middle():
@given(y=integers())
def foo(x, y, z):
pass
foo(1, 2)
def test_float_ranges():
with pytest.raises(InvalidArgument):
floats(float(u'nan'), 0).example()
with pytest.raises(InvalidArgument):
floats(1, -1).example()
def test_float_range_and_allow_nan_cannot_both_be_enabled():
with pytest.raises(InvalidArgument):
floats(min_value=1, allow_nan=True).example()
with pytest.raises(InvalidArgument):
floats(max_value=1, allow_nan=True).example()
def test_float_finite_range_and_allow_infinity_cannot_both_be_enabled():
with pytest.raises(InvalidArgument):
floats(0, 1, allow_infinity=True).example()
def test_does_not_error_if_min_size_is_bigger_than_default_size():
lists(integers(), min_size=50).example()
sets(integers(), min_size=50).example()
frozensets(integers(), min_size=50).example()
lists(integers(), min_size=50, unique=True).example()
def test_list_unique_and_unique_by_cannot_both_be_enabled():
@given(lists(integers(), unique=True, unique_by=lambda x: x))
def boom(t):
pass
with pytest.raises(InvalidArgument) as e:
boom()
assert 'unique ' in e.value.args[0]
assert 'unique_by' in e.value.args[0]
def test_an_average_size_must_be_positive():
with pytest.raises(InvalidArgument):
lists(integers(), average_size=0.0).example()
with pytest.raises(InvalidArgument):
lists(integers(), average_size=-1.0).example()
def test_an_average_size_may_be_zero_if_max_size_is():
lists(integers(), average_size=0.0, max_size=0)
def test_min_before_max():
with pytest.raises(InvalidArgument):
integers(min_value=1, max_value=0).validate()
@fails_with(InvalidArgument)
@given(x=integers())
def test_stuff_keyword(x=1):
pass
@fails_with(InvalidArgument)
@given(integers())
def test_stuff_positional(x=1):
pass
@fails_with(InvalidArgument)
@given(integers(), integers())
def test_too_many_positional(x):
pass
def test_given_warns_on_use_of_non_strategies():
@given(bool)
@settings(strict=False)
def test(x):
pass
with pytest.raises(InvalidArgument):
test()
def test_given_warns_when_mixing_positional_with_keyword():
@given(booleans(), y=booleans())
@settings(strict=False)
def test(x, y):
pass
with pytest.raises(InvalidArgument):
test()
def test_cannot_find_non_strategies():
with pytest.raises(InvalidArgument):
find(bool, bool)
hypothesis-3.0.1/tests/cover/test_verbosity.py 0000664 0000000 0000000 00000005504 12661275660 0021650 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from contextlib import contextmanager
from hypothesis import find, given
from tests.common.utils import fails, capture_out
from hypothesis._settings import settings, Verbosity
from hypothesis.reporting import default as default_reporter
from hypothesis.reporting import with_reporter
from hypothesis.strategies import lists, booleans, integers
@contextmanager
def capture_verbosity(level):
with capture_out() as o:
with with_reporter(default_reporter):
with settings(verbosity=level):
yield o
def test_prints_intermediate_in_success():
with capture_verbosity(Verbosity.verbose) as o:
@given(booleans())
def test_works(x):
pass
test_works()
assert 'Trying example' in o.getvalue()
def test_does_not_log_in_quiet_mode():
with capture_verbosity(Verbosity.quiet) as o:
@fails
@given(integers())
def test_foo(x):
assert False
test_foo()
assert not o.getvalue()
def test_includes_progress_in_verbose_mode():
with capture_verbosity(Verbosity.verbose) as o:
with settings(verbosity=Verbosity.verbose):
find(lists(integers()), lambda x: sum(x) >= 1000000)
out = o.getvalue()
assert out
assert u'Shrunk example' in out
assert u'Found satisfying example' in out
def test_prints_initial_attempts_on_find():
with capture_verbosity(Verbosity.verbose) as o:
with settings(verbosity=Verbosity.verbose):
seen = []
def not_first(x):
if not seen:
seen.append(x)
return False
return x not in seen
find(integers(), not_first)
assert u'Trying example' in o.getvalue()
def test_includes_intermediate_results_in_verbose_mode():
with capture_verbosity(Verbosity.verbose) as o:
@fails
@given(lists(integers()))
def test_foo(x):
assert sum(x) < 1000000
test_foo()
lines = o.getvalue().splitlines()
assert len([l for l in lines if u'example' in l]) > 2
assert len([l for l in lines if u'AssertionError' in l])
hypothesis-3.0.1/tests/cover/test_weird_settings.py 0000664 0000000 0000000 00000001704 12661275660 0022652 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import strategies as st
from hypothesis import given, settings
def test_setting_database_to_none_disables_the_database():
@given(st.booleans())
@settings(database_file=None)
def test(b):
pass
test()
hypothesis-3.0.1/tests/datetime/ 0000775 0000000 0000000 00000000000 12661275660 0016663 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/datetime/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0020767 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/datetime/test_dates.py 0000664 0000000 0000000 00000002722 12661275660 0021377 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.strategytests import strategy_test_suite
from hypothesis.extra.datetime import dates
from hypothesis.internal.debug import minimal
from hypothesis.internal.compat import hrange
TestStandardDescriptorFeatures1 = strategy_test_suite(dates())
def test_can_find_after_the_year_2000():
assert minimal(dates(), lambda x: x.year > 2000).year == 2001
def test_can_find_before_the_year_2000():
assert minimal(dates(), lambda x: x.year < 2000).year == 1999
def test_can_find_each_month():
for i in hrange(1, 12):
minimal(dates(), lambda x: x.month == i)
def test_min_year_is_respected():
assert minimal(dates(min_year=2003)).year == 2003
def test_max_year_is_respected():
assert minimal(dates(max_year=1998)).year == 1998
hypothesis-3.0.1/tests/datetime/test_datetime.py 0000664 0000000 0000000 00000010522 12661275660 0022070 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from datetime import MINYEAR
import pytz
import pytest
import hypothesis._settings as hs
from hypothesis import given, assume, settings
from hypothesis.errors import InvalidArgument
from hypothesis.strategytests import strategy_test_suite
from hypothesis.extra.datetime import datetimes
from hypothesis.internal.debug import minimal
from hypothesis.internal.compat import hrange
TestStandardDescriptorFeatures1 = strategy_test_suite(datetimes())
TestStandardDescriptorFeatures2 = strategy_test_suite(
datetimes(allow_naive=False))
TestStandardDescriptorFeatures3 = strategy_test_suite(
datetimes(timezones=[]),
)
def test_can_find_after_the_year_2000():
assert minimal(datetimes(), lambda x: x.year > 2000).year == 2001
def test_can_find_before_the_year_2000():
assert minimal(datetimes(), lambda x: x.year < 2000).year == 1999
def test_can_find_each_month():
for i in hrange(1, 12):
minimal(datetimes(), lambda x: x.month == i)
def test_can_find_midnight():
minimal(
datetimes(),
lambda x: (x.hour == 0 and x.minute == 0 and x.second == 0),
)
def test_can_find_non_midnight():
assert minimal(datetimes(), lambda x: x.hour != 0).hour == 1
def test_can_find_off_the_minute():
minimal(datetimes(), lambda x: x.second == 0)
def test_can_find_on_the_minute():
minimal(datetimes(), lambda x: x.second != 0)
def test_simplifies_towards_midnight():
d = minimal(datetimes())
assert d.hour == 0
assert d.minute == 0
assert d.second == 0
assert d.microsecond == 0
def test_can_generate_naive_datetime():
minimal(datetimes(allow_naive=True), lambda d: not d.tzinfo)
def test_can_generate_non_naive_datetime():
assert minimal(
datetimes(allow_naive=True), lambda d: d.tzinfo).tzinfo == pytz.UTC
def test_can_generate_non_utc():
minimal(
datetimes(),
lambda d: assume(d.tzinfo) and d.tzinfo.zone != u'UTC')
with hs.settings(max_examples=1000):
@given(datetimes(timezones=[]))
def test_naive_datetimes_are_naive(dt):
assert not dt.tzinfo
@given(datetimes(allow_naive=False))
def test_timezone_aware_datetimes_are_timezone_aware(dt):
assert dt.tzinfo
def test_restricts_to_allowed_set_of_timezones():
timezones = list(map(pytz.timezone, list(pytz.all_timezones)[:3]))
x = minimal(datetimes(timezones=timezones))
assert any(tz.zone == x.tzinfo.zone for tz in timezones)
def test_min_year_is_respected():
assert minimal(datetimes(min_year=2003)).year == 2003
def test_max_year_is_respected():
assert minimal(datetimes(max_year=1998)).year == 1998
def test_validates_year_arguments_in_range():
with pytest.raises(InvalidArgument):
datetimes(min_year=-10 ** 6).example()
with pytest.raises(InvalidArgument):
datetimes(max_year=-10 ** 6).example()
with pytest.raises(InvalidArgument):
datetimes(min_year=10 ** 6).example()
with pytest.raises(InvalidArgument):
datetimes(max_year=10 ** 6).example()
def test_needs_permission_for_no_timezones():
with pytest.raises(InvalidArgument):
datetimes(allow_naive=False, timezones=[]).example()
def test_bordering_on_a_leap_year():
x = minimal(
datetimes(min_year=2002, max_year=2005),
lambda x: x.month == 2 and x.day == 29,
settings=settings(database=None, max_examples=10 ** 7)
)
assert x.year == 2004
def test_overflow_in_simplify():
"""This is a test that we don't trigger a pytz bug when we're simplifying
around MINYEAR where valid dates can produce an overflow error."""
minimal(
datetimes(max_year=MINYEAR),
lambda x: x.tzinfo != pytz.UTC
)
hypothesis-3.0.1/tests/datetime/test_times.py 0000664 0000000 0000000 00000004610 12661275660 0021416 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytz
import hypothesis._settings as hs
from hypothesis import given, assume
from hypothesis.strategytests import strategy_test_suite
from hypothesis.extra.datetime import times
from hypothesis.internal.debug import minimal
TestStandardDescriptorFeatures1 = strategy_test_suite(times())
def test_can_find_midnight():
minimal(
times(),
lambda x: (x.hour == 0 and x.minute == 0 and x.second == 0),
)
def test_can_find_non_midnight():
assert minimal(times(), lambda x: x.hour != 0).hour == 1
def test_can_find_off_the_minute():
minimal(times(), lambda x: x.second == 0)
def test_can_find_on_the_minute():
minimal(times(), lambda x: x.second != 0)
def test_simplifies_towards_midnight():
d = minimal(times())
assert d.hour == 0
assert d.minute == 0
assert d.second == 0
assert d.microsecond == 0
def test_can_generate_naive_time():
minimal(times(allow_naive=True), lambda d: not d.tzinfo)
def test_can_generate_non_naive_time():
assert minimal(
times(allow_naive=True), lambda d: d.tzinfo).tzinfo == pytz.UTC
def test_can_generate_non_utc():
minimal(
times(),
lambda d: assume(d.tzinfo) and d.tzinfo.zone != u'UTC')
with hs.settings(max_examples=1000):
@given(times(timezones=[]))
def test_naive_times_are_naive(dt):
assert not dt.tzinfo
@given(times(allow_naive=False))
def test_timezone_aware_times_are_timezone_aware(dt):
assert dt.tzinfo
def test_restricts_to_allowed_set_of_timezones():
timezones = list(map(pytz.timezone, list(pytz.all_timezones)[:3]))
x = minimal(times(timezones=timezones))
assert any(tz.zone == x.tzinfo.zone for tz in timezones)
hypothesis-3.0.1/tests/django/ 0000775 0000000 0000000 00000000000 12661275660 0016331 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/django/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0020435 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/django/manage.py 0000775 0000000 0000000 00000001773 12661275660 0020146 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import os
import sys
from tests.common.setup import run
if __name__ == u'__main__':
run()
os.environ.setdefault(
u'DJANGO_SETTINGS_MODULE', u'tests.django.toys.settings')
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
hypothesis-3.0.1/tests/django/toys/ 0000775 0000000 0000000 00000000000 12661275660 0017327 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/django/toys/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0021433 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/django/toys/settings.py 0000664 0000000 0000000 00000005377 12661275660 0021555 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Django settings for toys project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
from __future__ import division, print_function, absolute_import
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = u'o0zlv@74u4e3s+o0^h$+tlalh&$r(7hbx01g4^h5-3gizj%hub'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
u'django.contrib.admin',
u'django.contrib.auth',
u'django.contrib.contenttypes',
u'django.contrib.sessions',
u'django.contrib.messages',
u'django.contrib.staticfiles',
u'tests.django.toystore',
)
MIDDLEWARE_CLASSES = (
u'django.contrib.sessions.middleware.SessionMiddleware',
u'django.middleware.common.CommonMiddleware',
u'django.middleware.csrf.CsrfViewMiddleware',
u'django.contrib.auth.middleware.AuthenticationMiddleware',
u'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
u'django.contrib.messages.middleware.MessageMiddleware',
u'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = u'toys.urls'
WSGI_APPLICATION = u'toys.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
DATABASES = {
u'default': {
u'ENGINE': u'django.db.backends.sqlite3',
u'NAME': os.path.join(BASE_DIR, u'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = u'en-us'
TIME_ZONE = u'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = u'/static/'
hypothesis-3.0.1/tests/django/toys/urls.py 0000664 0000000 0000000 00000002110 12661275660 0020660 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from django.contrib import admin
from django.conf.urls import url, include, patterns
urlpatterns = patterns(u'',
# Examples:
# url(r'^$', 'toys.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^admin/', include(admin.site.urls)),
)
hypothesis-3.0.1/tests/django/toys/wsgi.py 0000664 0000000 0000000 00000002127 12661275660 0020654 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""WSGI config for toys project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/
"""
from __future__ import division, print_function, absolute_import
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault(u'DJANGO_SETTINGS_MODULE', u'toys.settings')
application = get_wsgi_application()
hypothesis-3.0.1/tests/django/toystore/ 0000775 0000000 0000000 00000000000 12661275660 0020221 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/django/toystore/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0022325 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/django/toystore/admin.py 0000664 0000000 0000000 00000001322 12661275660 0021661 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
hypothesis-3.0.1/tests/django/toystore/models.py 0000664 0000000 0000000 00000004056 12661275660 0022063 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from django.db import models
class Company(models.Model):
name = models.CharField(max_length=100, unique=True)
class Store(models.Model):
name = models.CharField(max_length=100, unique=True)
company = models.ForeignKey(Company, null=False)
class CharmField(models.Field):
def db_type(self, connection):
return u'char(1)'
class CustomishField(models.Field):
def db_type(self, connection):
return u'char(1)'
class Customish(models.Model):
customish = CustomishField()
class Customer(models.Model):
name = models.CharField(max_length=100, unique=True)
email = models.EmailField(max_length=100, unique=True)
gender = models.CharField(max_length=50, null=True)
age = models.IntegerField()
birthday = models.DateTimeField()
class Charming(models.Model):
charm = CharmField()
class CouldBeCharming(models.Model):
charm = CharmField(null=True)
class SelfLoop(models.Model):
me = models.ForeignKey(u'self', null=True)
class LoopA(models.Model):
b = models.ForeignKey(u'LoopB', null=False)
class LoopB(models.Model):
a = models.ForeignKey(u'LoopA', null=True)
class ManyInts(models.Model):
i1 = models.IntegerField()
i2 = models.SmallIntegerField()
i3 = models.BigIntegerField()
p1 = models.PositiveIntegerField()
p2 = models.PositiveSmallIntegerField()
hypothesis-3.0.1/tests/django/toystore/test_basic_configuration.py 0000664 0000000 0000000 00000004557 12661275660 0025655 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from unittest import TestCase as VanillaTestCase
from django.db import IntegrityError
from hypothesis import given, settings
from hypothesis.strategies import integers
from hypothesis.extra.django import TestCase, TransactionTestCase
from hypothesis.internal.compat import PYPY
from tests.django.toystore.models import Company
class SomeStuff(object):
@given(integers())
def test_is_blank_slate(self, unused):
Company.objects.create(name=u'MickeyCo')
def test_normal_test_1(self):
Company.objects.create(name=u'MickeyCo')
def test_normal_test_2(self):
Company.objects.create(name=u'MickeyCo')
class TestConstraintsWithTransactions(SomeStuff, TestCase):
pass
if not PYPY:
# This is excessively slow in general, but particularly on PyPy. We just
# disable it altogether there as it's a niche case.
class TestConstraintsWithoutTransactions(SomeStuff, TransactionTestCase):
pass
class TestWorkflow(VanillaTestCase):
def test_does_not_break_later_tests(self):
def break_the_db(i):
Company.objects.create(name=u'MickeyCo')
Company.objects.create(name=u'MickeyCo')
class LocalTest(TestCase):
@given(integers().map(break_the_db))
@settings(perform_health_check=False)
def test_does_not_break_other_things(self, unused):
pass
def test_normal_test_1(self):
Company.objects.create(name=u'MickeyCo')
t = LocalTest(u'test_normal_test_1')
try:
t.test_does_not_break_other_things()
except IntegrityError:
pass
t.test_normal_test_1()
hypothesis-3.0.1/tests/django/toystore/test_given_models.py 0000664 0000000 0000000 00000005442 12661275660 0024312 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis import given, assume
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import just, lists
from hypothesis.extra.django import TestCase, TransactionTestCase
from tests.django.toystore.models import Store, Company, Customer, \
ManyInts, SelfLoop, Customish, CustomishField, CouldBeCharming
from hypothesis.extra.django.models import models, \
add_default_field_mapping
add_default_field_mapping(CustomishField, just(u'a'))
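# Added note: registering just(u'a') for CustomishField is what lets
# models(Customish) below generate instances of a field type Hypothesis has
# no built-in mapping for; test_custom_field relies on exactly this value.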
class TestGetsBasicModels(TestCase):
@given(models(Company))
def test_is_company(self, company):
self.assertIsInstance(company, Company)
self.assertIsNotNone(company.pk)
@given(models(Store, company=models(Company)))
def test_can_get_a_store(self, store):
assert store.company.pk
@given(lists(models(Company)))
def test_can_get_multiple_models_with_unique_field(self, companies):
assume(len(companies) > 1)
for c in companies:
self.assertIsNotNone(c.pk)
self.assertEqual(
len({c.pk for c in companies}), len({c.name for c in companies})
)
@given(models(Customer))
def test_is_customer(self, customer):
self.assertIsInstance(customer, Customer)
self.assertIsNotNone(customer.pk)
self.assertIsNotNone(customer.email)
@given(models(CouldBeCharming))
def test_is_not_charming(self, not_charming):
self.assertIsInstance(not_charming, CouldBeCharming)
self.assertIsNotNone(not_charming.pk)
self.assertIsNone(not_charming.charm)
@given(models(SelfLoop))
def test_sl(self, sl):
self.assertIsNone(sl.me)
@given(lists(models(ManyInts)))
def test_no_overflow_in_integer(self, manyints):
pass
@given(models(Customish))
def test_custom_field(self, x):
assert x.customish == u'a'
def test_mandatory_fields_are_mandatory(self):
self.assertRaises(InvalidArgument, models, Store)
class TestsNeedingRollback(TransactionTestCase):
def test_can_get_examples(self):
for _ in range(200):
models(Company).example()
hypothesis-3.0.1/tests/django/toystore/views.py 0000664 0000000 0000000 00000001322 12661275660 0021726 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
hypothesis-3.0.1/tests/fakefactory/ 0000775 0000000 0000000 00000000000 12661275660 0017365 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/fakefactory/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0021471 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/fakefactory/test_fake_factory.py 0000664 0000000 0000000 00000005404 12661275660 0023436 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from faker.providers import BaseProvider
from hypothesis import given
from hypothesis.strategytests import strategy_test_suite
from hypothesis.internal.debug import minimal
from hypothesis.extra.fakefactory import fake_factory
class KittenProvider(BaseProvider):
def kittens(self):
return u'meow %d' % (self.random_number(digits=10),)
@given(fake_factory(u'kittens', providers=[KittenProvider]))
def test_kittens_meow(kitten):
assert u'meow' in kitten
@given(fake_factory(u'email'))
def test_email(email):
assert u'@' in email
@given(fake_factory(u'name', locale=u'en_US'))
def test_english_names_are_ascii(name):
name.encode(u'ascii')
def test_french_names_may_have_an_accent():
minimal(
fake_factory(u'name', locale=u'fr_FR'),
lambda x: u'é' not in x
)
def test_fake_factory_errors_with_both_locale_and_locales():
with pytest.raises(ValueError):
fake_factory(
u'name', locale=u'fr_FR', locales=[u'fr_FR', u'en_US']
)
def test_fake_factory_errors_with_unsupported_locale():
with pytest.raises(ValueError):
fake_factory(
u'name', locale=u'badger_BADGER'
)
def test_factory_errors_with_source_for_unsupported_locale():
with pytest.raises(ValueError):
fake_factory(u'state', locale=u'ja_JP')
def test_fake_factory_errors_if_any_locale_is_unsupported():
with pytest.raises(ValueError):
fake_factory(
u'name', locales=[u'fr_FR', u'en_US', u'mushroom_MUSHROOM']
)
def test_fake_factory_errors_if_unsupported_method():
with pytest.raises(ValueError):
fake_factory(u'spoon')
def test_fake_factory_errors_if_private_ish_method():
with pytest.raises(ValueError):
fake_factory(u'_Generator__config')
TestFakeEmail = strategy_test_suite(
fake_factory(u'email')
)
TestFakeNames = strategy_test_suite(
fake_factory(u'name')
)
TestFakeEnglishNames = strategy_test_suite(
fake_factory(u'name', locale=u'en_US')
)
TestStates = strategy_test_suite(
fake_factory(u'state')
)
hypothesis-3.0.1/tests/nocover/ 0000775 0000000 0000000 00000000000 12661275660 0016542 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/nocover/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0020646 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/nocover/test_choices.py 0000664 0000000 0000000 00000003233 12661275660 0021571 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given, settings
from tests.common.utils import raises, capture_out
from hypothesis.database import ExampleDatabase
from hypothesis.internal.compat import hrange
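# Reader's note (added): with a persistent ExampleDatabase the failing example
# is replayed on later runs, so the two captured failure reports below are
# expected to be byte-for-byte identical.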
def test_stability():
@given(
st.lists(st.text(min_size=1, max_size=1), unique=True, min_size=5),
st.choices(),
)
@settings(
database=ExampleDatabase(),
)
def test_choose_and_then_fail(ls, choice):
for _ in hrange(100):
choice(ls)
assert False
# Run once first for easier debugging
with raises(AssertionError):
test_choose_and_then_fail()
with capture_out() as o:
with raises(AssertionError):
test_choose_and_then_fail()
out1 = o.getvalue()
with capture_out() as o:
with raises(AssertionError):
test_choose_and_then_fail()
out2 = o.getvalue()
assert out1 == out2
assert 'Choice #100:' in out1
hypothesis-3.0.1/tests/nocover/test_collective_minimization.py 0000664 0000000 0000000 00000003266 12661275660 0025102 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import find, settings
from tests.common import standard_types
from hypothesis.errors import NoSuchExample
from hypothesis.strategies import lists
@pytest.mark.parametrize(
u'spec', standard_types, ids=list(map(repr, standard_types)))
def test_can_collectively_minimize(spec):
"""This should generally exercise strategies' strictly_simpler heuristic by
putting us in a state where example cloning is required to get to the
answer fast enough."""
n = 10
def distinct_reprs(x):
result = set()
for t in x:
result.add(repr(t))
if len(result) >= 2:
return True
return False
try:
xs = find(
lists(spec, min_size=n, max_size=n),
distinct_reprs,
settings=settings(
timeout=10.0, max_examples=2000))
assert len(xs) == n
assert 2 <= len(set((map(repr, xs)))) <= 3
except NoSuchExample:
pass
hypothesis-3.0.1/tests/nocover/test_compat.py 0000664 0000000 0000000 00000005056 12661275660 0021444 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import strategies as st
from hypothesis import given
from hypothesis.internal.compat import hrange, qualname, int_to_bytes, \
HAS_SIGNATURE, int_from_bytes, signature_argspec
def test_small_hrange():
assert list(hrange(5)) == [0, 1, 2, 3, 4]
assert list(hrange(3, 5)) == [3, 4]
assert list(hrange(1, 10, 2)) == [1, 3, 5, 7, 9]
def test_large_hrange():
n = 1 << 1024
assert list(hrange(n, n + 5, 2)) == [n, n + 2, n + 4]
assert list(hrange(n, n)) == []
with pytest.raises(ValueError):
hrange(n, n, 0)
class Foo():
def bar(self):
pass
def test_qualname():
assert qualname(Foo.bar) == u'Foo.bar'
assert qualname(Foo().bar) == u'Foo.bar'
assert qualname(qualname) == u'qualname'
try:
from inspect import getargspec
except ImportError:
getargspec = None
def a(b, c, d):
pass
def b(c, d, *ar):
pass
def c(c, d, *ar, **k):
pass
def d(a1, a2=1, a3=2, a4=None):
pass
if getargspec is not None and HAS_SIGNATURE:
@pytest.mark.parametrize('f', [
a, b, c, d
])
def test_agrees_on_argspec(f):
real = getargspec(f)
fake = signature_argspec(f)
assert tuple(real) == tuple(fake)
for f in real._fields:
assert getattr(real, f) == getattr(fake, f)
@given(st.binary())
def test_convert_back(bs):
bs = bytearray(bs)
assert int_to_bytes(int_from_bytes(bs), len(bs)) == bs
bytes8 = st.builds(bytearray, st.binary(min_size=8, max_size=8))
@given(bytes8, bytes8)
def test_to_int_in_big_endian_order(x, y):
x, y = sorted((x, y))
assert 0 <= int_from_bytes(x) <= int_from_bytes(y)
ints8 = st.integers(min_value=0, max_value=2 ** 63 - 1)
@given(ints8, ints8)
def test_to_bytes_in_big_endian_order(x, y):
x, y = sorted((x, y))
assert int_to_bytes(x, 8) <= int_to_bytes(y, 8)
hypothesis-3.0.1/tests/nocover/test_descriptortests.py 0000664 0000000 0000000 00000014615 12661275660 0023423 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
from collections import namedtuple
from hypothesis.strategies import just, none, sets, text, lists, binary, \
builds, floats, one_of, tuples, randoms, booleans, decimals, \
integers, composite, fractions, recursive, streaming, frozensets, \
dictionaries, sampled_from, complex_numbers, fixed_dictionaries
from hypothesis.strategytests import strategy_test_suite
from hypothesis.internal.compat import OrderedDict
TestIntegerRange = strategy_test_suite(integers(min_value=0, max_value=5))
TestGiantIntegerRange = strategy_test_suite(
integers(min_value=(-(2 ** 129)), max_value=(2 ** 129))
)
TestFloatRange = strategy_test_suite(floats(min_value=0.5, max_value=10))
TestSampled10 = strategy_test_suite(sampled_from(elements=list(range(10))))
TestSampled1 = strategy_test_suite(sampled_from(elements=(1,)))
TestSampled2 = strategy_test_suite(sampled_from(elements=(1, 2)))
TestIntegersFrom = strategy_test_suite(integers(min_value=13))
TestGiantIntegersFrom = strategy_test_suite(integers(min_value=1 << 1024))
TestOneOf = strategy_test_suite(one_of(
integers(), integers(), booleans()))
TestOneOfSameType = strategy_test_suite(
one_of(
integers(min_value=1, max_value=10),
integers(min_value=8, max_value=15),
)
)
TestRandom = strategy_test_suite(randoms())
TestInts = strategy_test_suite(integers())
TestBoolLists = strategy_test_suite(lists(booleans(), average_size=5.0))
TestDictionaries = strategy_test_suite(
dictionaries(keys=tuples(integers(), integers()), values=booleans()))
TestOrderedDictionaries = strategy_test_suite(
dictionaries(
keys=integers(), values=integers(), dict_class=OrderedDict))
TestString = strategy_test_suite(text())
BinaryString = strategy_test_suite(binary())
TestIntBool = strategy_test_suite(tuples(integers(), booleans()))
TestFloats = strategy_test_suite(floats())
TestComplex = strategy_test_suite(complex_numbers())
TestJust = strategy_test_suite(just(u'hi'))
TestEmptyString = strategy_test_suite(text(alphabet=u''))
TestSingleString = strategy_test_suite(
text(alphabet=u'a', average_size=10.0))
TestManyString = strategy_test_suite(text(alphabet=u'abcdef☃'))
Stuff = namedtuple(u'Stuff', (u'a', u'b'))
TestNamedTuple = strategy_test_suite(
builds(Stuff, integers(), integers()))
TestMixedSets = strategy_test_suite(sets(
one_of(integers(), booleans(), floats())))
TestFrozenSets = strategy_test_suite(frozensets(booleans()))
TestNestedSets = strategy_test_suite(
frozensets(frozensets(integers(), max_size=2)))
TestMisc1 = strategy_test_suite(fixed_dictionaries(
{(2, -374): frozensets(none())}))
TestMisc2 = strategy_test_suite(fixed_dictionaries(
{b'': frozensets(integers())}))
TestMisc3 = strategy_test_suite(tuples(sets(none() | text())))
TestEmptyTuple = strategy_test_suite(tuples())
TestEmptyList = strategy_test_suite(lists(max_size=0))
TestEmptySet = strategy_test_suite(sets(max_size=0))
TestEmptyFrozenSet = strategy_test_suite(frozensets(max_size=0))
TestEmptyDict = strategy_test_suite(fixed_dictionaries({}))
TestDecimal = strategy_test_suite(decimals())
TestFraction = strategy_test_suite(fractions())
TestNonEmptyLists = strategy_test_suite(
lists(integers(), average_size=5.0).filter(bool)
)
TestNoneLists = strategy_test_suite(lists(none(), average_size=5.0))
TestConstantLists = strategy_test_suite(
integers().flatmap(lambda i: lists(just(i), average_size=5.0))
)
TestListsWithUniqueness = strategy_test_suite(
lists(
lists(integers(), average_size=5.0),
average_size=5.0,
unique_by=lambda x: tuple(sorted(x)))
)
TestOrderedPairs = strategy_test_suite(
integers(min_value=1, max_value=200).flatmap(
lambda e: tuples(integers(min_value=0, max_value=e - 1), just(e))
)
)
TestMappedSampling = strategy_test_suite(
lists(integers(), min_size=1, average_size=5.0).flatmap(sampled_from)
)
TestDiverseFlatmap = strategy_test_suite(
sampled_from((
lists(integers(), average_size=5.0),
lists(text(), average_size=5.0), tuples(text(), text()),
booleans(), lists(complex_numbers())
)).flatmap(lambda x: x)
)
def integers_from(x):
return integers(min_value=x)
TestManyFlatmaps = strategy_test_suite(
integers()
.flatmap(integers_from)
.flatmap(integers_from)
.flatmap(integers_from)
.flatmap(integers_from)
)
TestIntStreams = strategy_test_suite(streaming(integers()))
TestStreamLists = strategy_test_suite(streaming(integers()))
TestIntStreamStreams = strategy_test_suite(
streaming(streaming(integers())))
TestRecursiveLowLeaves = strategy_test_suite(
recursive(
booleans(),
lambda x: tuples(x, x),
max_leaves=3,
)
)
TestRecursiveHighLeaves = strategy_test_suite(
recursive(
booleans(),
lambda x: lists(x, min_size=2, max_size=10),
max_leaves=200,
)
)
TestJSON = strategy_test_suite(
recursive(
floats().filter(lambda f: not (math.isnan(f) or math.isinf(f))) |
text() | booleans() | none(),
lambda js:
lists(js, average_size=2) |
dictionaries(text(), js, average_size=2),
max_leaves=10))
TestWayTooClever = strategy_test_suite(
recursive(
frozensets(integers(), min_size=1, average_size=2.0),
lambda x: frozensets(x, min_size=2, max_size=4)).flatmap(
sampled_from
)
)
@composite
def tight_integer_list(draw):
x = draw(integers())
y = draw(integers(min_value=x))
return draw(lists(integers(min_value=x, max_value=y)))
TestComposite = strategy_test_suite(tight_integer_list())
def test_repr_has_specifier_in_it():
suite = TestComplex(
u'test_will_find_a_constant_failure')
assert repr(suite) == u'strategy_test_suite(%r)' % (complex_numbers(),)
hypothesis-3.0.1/tests/nocover/test_example_quality.py 0000664 0000000 0000000 00000037415 12661275660 0023370 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import sys
import math
import operator
from random import Random
from decimal import Decimal
from fractions import Fraction
import pytest
from hypothesis import find, given, assume, example, settings
from tests.common import parametrize, ordered_pair, constant_list
from hypothesis.strategies import just, sets, text, lists, binary, \
floats, tuples, randoms, booleans, decimals, integers, fractions, \
recursive, frozensets, dictionaries, sampled_from, random_module
from hypothesis.internal.debug import minimal
from hypothesis.internal.compat import PY3, hrange, reduce, Counter, \
OrderedDict, integer_types
def test_minimize_list_on_large_structure():
def test_list_in_range(xs):
return len([
x for x in xs
if x >= 10
]) >= 60
assert minimal(
lists(integers(), min_size=60, average_size=120), test_list_in_range,
timeout_after=30,
) == [10] * 60
def test_minimize_list_of_sets_on_large_structure():
def test_list_in_range(xs):
return len(list(filter(None, xs))) >= 50
x = minimal(
lists(frozensets(integers()), min_size=50), test_list_in_range,
timeout_after=20,
)
assert x == [frozenset([0])] * 50
def test_integers_from_minimizes_leftwards():
assert minimal(integers(min_value=101)) == 101
def test_minimal_fractions_1():
assert minimal(fractions()) == Fraction(0)
def test_minimal_fractions_2():
assert minimal(fractions(), lambda x: x >= 1) == Fraction(1)
def test_minimal_fractions_3():
assert minimal(
lists(fractions()), lambda s: len(s) >= 20) == [Fraction(0)] * 20
def test_minimal_fractions_4():
x = minimal(
lists(fractions(), min_size=20),
lambda s: len([t for t in s if t >= 1]) >= 20
)
assert x == [Fraction(1)] * 20
def test_minimize_list_of_floats_on_large_structure():
def test_list_in_range(xs):
return len([
x for x in xs
if x >= 3
]) >= 30
result = minimal(
lists(floats(), min_size=50, average_size=100),
test_list_in_range, timeout_after=20)
result.sort()
assert result == [0.0] * 20 + [3.0] * 30
def test_minimize_string_to_empty():
assert minimal(text()) == u''
def test_minimize_one_of():
for _ in hrange(100):
assert minimal(integers() | text() | booleans()) in (
0, u'', False
)
def test_minimize_mixed_list():
mixed = minimal(lists(integers() | text()), lambda x: len(x) >= 10)
assert set(mixed).issubset(set((0, u'')))
def test_minimize_longer_string():
assert minimal(text(), lambda x: len(x) >= 10) == u'0' * 10
def test_minimize_longer_list_of_strings():
assert minimal(lists(text()), lambda x: len(x) >= 10) == [u''] * 10
def test_minimize_3_set():
assert minimal(sets(integers()), lambda x: len(x) >= 3) in (
set((0, 1, 2)),
set((-1, 0, 1)),
)
def test_minimize_3_set_of_tuples():
assert minimal(
sets(tuples(integers())),
lambda x: len(x) >= 2) == set(((0,), (1,)))
def test_minimize_sets_of_sets():
elements = integers(1, 100)
size = 15
set_of_sets = minimal(
sets(frozensets(elements)), lambda s: len(s) >= size
)
assert frozenset() in set_of_sets
assert len(set_of_sets) == size
for s in set_of_sets:
if len(s) > 1:
assert any(
s != t and t.issubset(s)
for t in set_of_sets
)
@pytest.mark.parametrize(
(u'string',), [(text(),), (binary(),)],
ids=[u'text()', u'binary()']
)
def test_minimal_unsorted_strings(string):
def dedupe(xs):
result = []
for x in xs:
if x not in result:
result.append(x)
return result
result = minimal(
lists(string).map(dedupe),
lambda xs: assume(len(xs) >= 5) and sorted(xs) != xs
)
assert len(result) == 5
for ex in result:
if len(ex) > 1:
for i in hrange(len(ex)):
assert ex[:i] in result
def test_finds_list_with_plenty_duplicates():
def is_good(xs):
return max(Counter(xs).values()) >= 3
result = minimal(
lists(text(min_size=1), average_size=50, min_size=1), is_good
)
assert result == [u'0'] * 3
def test_minimal_mixed_list_propagates_leftwards():
# one_of simplification can't actually simplify to the left, but it regards
# instances of the leftmost type as strictly simpler. This means that if we
# have any bools in the list we can clone them to replace the more complex
# examples to the right.
# The check that we have at least one bool is required for this to work,
# otherwise the features that ensure sometimes we can get a list of all of
# one type will occasionally give us an example which doesn't contain any
# bools to clone
def long_list_with_enough_bools(x):
if len(x) < 50:
return False
if len([t for t in x if isinstance(t, bool)]) < 10:
return False
return True
assert minimal(
lists(booleans() | tuples(integers()), min_size=50),
long_list_with_enough_bools
) == [False] * 50
def test_tuples_do_not_block_cloning():
assert minimal(
lists(tuples(booleans() | tuples(integers())), min_size=50),
lambda x: any(isinstance(t[0], bool) for t in x),
timeout_after=60,
) == [(False,)] * 50
def test_can_simplify_flatmap_with_bounded_left_hand_size():
assert minimal(
booleans().flatmap(lambda x: lists(just(x))),
lambda x: len(x) >= 10) == [False] * 10
def test_can_simplify_across_flatmap_of_just():
assert minimal(integers().flatmap(just)) == 0
def test_can_simplify_on_right_hand_strategy_of_flatmap():
assert minimal(integers().flatmap(lambda x: lists(just(x)))) == []
def test_can_ignore_left_hand_side_of_flatmap():
assert minimal(
integers().flatmap(lambda x: lists(integers())),
lambda x: len(x) >= 10
) == [0] * 10
def test_can_simplify_on_both_sides_of_flatmap():
assert minimal(
integers().flatmap(lambda x: lists(just(x))),
lambda x: len(x) >= 10
) == [0] * 10
def test_flatmap_rectangles():
lengths = integers(min_value=0, max_value=10)
def lists_of_length(n):
return lists(sampled_from('ab'), min_size=n, max_size=n)
xs = find(lengths.flatmap(
lambda w: lists(lists_of_length(w))), lambda x: ['a', 'b'] in x,
settings=settings(database=None, max_examples=2000)
)
assert xs == [['a', 'b']]
@parametrize(u'dict_class', [dict, OrderedDict])
def test_dictionary(dict_class):
assert minimal(dictionaries(
keys=integers(), values=text(),
dict_class=dict_class)) == dict_class()
x = minimal(
dictionaries(keys=integers(), values=text(), dict_class=dict_class),
lambda t: len(t) >= 3)
assert isinstance(x, dict_class)
assert set(x.values()) == set((u'',))
for k in x:
if k < 0:
assert k + 1 in x
if k > 0:
assert k - 1 in x
def test_minimize_single_element_in_silly_large_int_range():
ir = integers(-(2 ** 256), 2 ** 256)
assert minimal(ir, lambda x: x >= -(2 ** 255)) == 0
def test_minimize_multiple_elements_in_silly_large_int_range():
desired_result = [0] * 20
ir = integers(-(2 ** 256), 2 ** 256)
x = minimal(
lists(ir),
lambda x: len(x) >= 20,
timeout_after=20,
)
assert x == desired_result
def test_minimize_multiple_elements_in_silly_large_int_range_min_is_not_dupe():
ir = integers(0, 2 ** 256)
target = list(range(20))
x = minimal(
lists(ir),
lambda x: (
assume(len(x) >= 20) and all(x[i] >= target[i] for i in target))
)
assert x == target
def test_minimize_one_of_distinct_types():
y = booleans() | binary()
x = minimal(
tuples(y, y),
lambda x: type(x[0]) != type(x[1])
)
assert x in (
(False, b''),
(b'', False)
)
@pytest.mark.skipif(PY3, reason=u'Python 3 has better integers')
def test_minimize_long():
assert minimal(
integers(), lambda x: type(x).__name__ == u'long') == sys.maxint + 1
def test_non_reversible_ints_as_decimals():
def not_reversible(xs):
ts = list(map(Decimal, xs))
return sum(ts) != sum(reversed(ts))
sigh = minimal(lists(integers()), not_reversible, timeout_after=30)
assert len(sigh) <= 25
def test_non_reversible_fractions_as_decimals():
def not_reversible(xs):
xs = [Decimal(x.numerator) / x.denominator for x in xs]
return sum(xs) != sum(reversed(xs))
sigh = minimal(lists(fractions()), not_reversible, timeout_after=20)
assert len(sigh) <= 25
def test_non_reversible_decimals():
def not_reversible(xs):
assume(all(x.is_finite() for x in xs))
return sum(xs) != sum(reversed(xs))
sigh = minimal(lists(decimals()), not_reversible, timeout_after=30)
assert len(sigh) <= 25
def length_of_longest_ordered_sequence(xs):
if not xs:
return 0
# FIXME: Needlessly O(n^2) algorithm, but it's a test so eh.
lengths = [-1] * len(xs)
lengths[-1] = 1
for i in hrange(len(xs) - 2, -1, -1):
assert lengths[i] == -1
for j in hrange(i + 1, len(xs)):
assert lengths[j] >= 1
if xs[j] > xs[i]:
lengths[i] = max(lengths[i], lengths[j] + 1)
if lengths[i] < 0:
lengths[i] = 1
assert all(t >= 1 for t in lengths)
return max(lengths)
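# Worked example (added note, not part of the original tests):
# length_of_longest_ordered_sequence([3, 1, 2, 4]) == 3, via the strictly
# increasing subsequence [1, 2, 4].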
def test_increasing_integer_sequence():
k = 6
xs = minimal(
lists(integers()), lambda t: (
len(t) <= 30 and length_of_longest_ordered_sequence(t) >= k),
timeout_after=60,
)
start = xs[0]
assert xs == list(range(start, start + k))
def test_increasing_string_sequence():
n = 7
lb = u'✐'
xs = minimal(
lists(text(min_size=1), min_size=n, average_size=50), lambda t: (
t[0] >= lb and
t[-1] >= lb and
length_of_longest_ordered_sequence(t) >= n
),
timeout_after=30,
)
assert n <= len(xs) <= n + 2
for i in hrange(len(xs) - 1):
assert abs(len(xs[i + 1]) - len(xs[i])) <= 1
def test_decreasing_string_sequence():
n = 7
lb = u'✐'
xs = minimal(
lists(text(min_size=1), min_size=n, average_size=50), lambda t: (
n <= len(t) and
all(t) and
t[0] >= lb and
t[-1] >= lb and
length_of_longest_ordered_sequence(list(reversed(t))) >= n
),
timeout_after=30,
)
assert n <= len(xs) <= n + 2
for i in hrange(len(xs) - 1):
assert abs(len(xs[i + 1]) - len(xs[i])) <= 1
def test_small_sum_lists():
xs = minimal(
lists(floats(), min_size=100, average_size=200),
lambda x:
sum(t for t in x if float(u'inf') > t >= 0) >= 1,
timeout_after=60,
)
assert 1.0 <= sum(t for t in xs if t >= 0) <= 1.5
def test_increasing_float_sequence():
xs = minimal(
lists(floats()), lambda x: length_of_longest_ordered_sequence([
t for t in x if t >= 0
]) >= 7 and len([t for t in x if t >= 500.0]) >= 4
)
assert max(xs) < 1000
assert not any(math.isinf(x) for x in xs)
def test_increasing_integers_from_sequence():
n = 6
lb = 50000
xs = minimal(
lists(integers(min_value=0)), lambda t: (
n <= len(t) and
all(t) and
any(s >= lb for s in t) and
length_of_longest_ordered_sequence(t) >= n
),
timeout_after=60,
)
assert n <= len(xs) <= n + 2
def test_find_large_union_list():
def large_mostly_non_overlapping(xs):
assume(xs)
assume(all(xs))
union = reduce(operator.or_, xs)
return len(union) >= 30
result = minimal(
lists(sets(integers())),
large_mostly_non_overlapping, timeout_after=60)
union = reduce(operator.or_, result)
assert len(union) == 30
assert max(union) == min(union) + len(union) - 1
for x in result:
for y in result:
if x is not y:
assert not (x & y)
def test_anti_sorted_ordered_pair():
result = minimal(
lists(ordered_pair),
lambda x: (
len(x) >= 30 and
2 < length_of_longest_ordered_sequence(x) <= 10))
assert len(result) == 30
def test_constant_lists_of_diverse_length():
# This does not currently work very well. We delete, but we don't actually
# get all that far with simplification of the individual elements.
result = minimal(
lists(constant_list(integers())),
lambda x: len(set(map(len, x))) >= 20,
timeout_after=30,
)
assert len(result) == 20
def test_finds_non_reversible_floats():
t = minimal(
lists(floats()), lambda xs:
not math.isnan(sum(xs)) and sum(xs) != sum(reversed(xs)),
timeout_after=40,
settings=settings(database=None)
)
assert len(repr(t)) <= 200
print(t)
@pytest.mark.parametrize('n', [0, 1, 10, 100, 1000])
def test_containment(n):
iv = minimal(
tuples(lists(integers()), integers()),
lambda x: x[1] in x[0] and x[1] >= n,
timeout_after=60
)
assert iv == ([n], n)
def test_duplicate_containment():
ls, i = minimal(
tuples(lists(integers()), integers()),
lambda s: s[0].count(s[1]) > 1, timeout_after=100)
assert ls == [0, 0]
assert i == 0
def test_unique_lists_of_single_characters():
x = minimal(
lists(text(max_size=1), unique=True, min_size=5)
)
assert sorted(x) == ['', '0', '1', '2', '3']
@given(randoms())
@settings(max_examples=10, database=None, max_shrinks=0)
@example(rnd=Random(340282366920938463462798146679426884207))
def test_can_simplify_hard_recursive_data_into_boolean_alternative(rnd):
"""This test forces us to exercise the simplification through redrawing
functionality, thus testing that we can deal with bad templates."""
def leaves(ls):
if isinstance(ls, (bool,) + integer_types):
return [ls]
else:
return sum(map(leaves, ls), [])
def hard(base):
return recursive(
base, lambda x: lists(x, max_size=5), max_leaves=20)
r = find(
hard(booleans()) |
hard(booleans()) |
hard(booleans()) |
hard(integers()) |
hard(booleans()),
lambda x:
len(leaves(x)) >= 3 and
any(isinstance(t, bool) for t in leaves(x)),
random=rnd, settings=settings(
database=None, max_examples=5000, max_shrinks=1000))
lvs = leaves(r)
assert lvs == [False] * 3
assert all(isinstance(v, bool) for v in lvs), repr(lvs)
def test_can_clone_same_length_items():
ls = find(
lists(frozensets(integers(), min_size=10, max_size=10)),
lambda x: len(x) >= 20
)
assert len(set(ls)) == 1
@given(random_module(), integers(min_value=0))
@example(None, 62677)
@settings(max_examples=100, max_shrinks=0)
def test_minimize_down_to(rnd, i):
j = find(
integers(), lambda x: x >= i,
settings=settings(max_examples=1000, database=None, max_shrinks=1000))
assert i == j
hypothesis-3.0.1/tests/nocover/test_floating.py 0000664 0000000 0000000 00000007033 12661275660 0021761 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Tests for being able to generate weird and wonderful floating point
numbers."""
from __future__ import division, print_function, absolute_import
import sys
import math
from hypothesis import seed, given, assume, reject, settings
from hypothesis.errors import Unsatisfiable
from tests.common.utils import fails
from hypothesis.strategies import lists, floats, integers
TRY_HARDER = settings(max_examples=1000, max_iterations=2000)
@given(floats())
@TRY_HARDER
def test_is_float(x):
assert isinstance(x, float)
@fails
@given(floats())
@TRY_HARDER
def test_inversion_is_imperfect(x):
assume(x != 0.0)
y = 1.0 / x
assert x * y == 1.0
@given(floats(-sys.float_info.max, sys.float_info.max))
def test_largest_range(x):
assert not math.isinf(x)
@given(floats())
@TRY_HARDER
def test_negation_is_self_inverse(x):
assume(not math.isnan(x))
y = -x
assert -y == x
@fails
@given(lists(floats()))
def test_is_not_nan(xs):
assert not any(math.isnan(x) for x in xs)
@fails
@given(floats())
@TRY_HARDER
def test_is_not_positive_infinite(x):
assume(x > 0)
assert not math.isinf(x)
@fails
@given(floats())
@TRY_HARDER
def test_is_not_negative_infinite(x):
assume(x < 0)
assert not math.isinf(x)
@fails
@given(floats())
@TRY_HARDER
def test_is_int(x):
assume(not (math.isinf(x) or math.isnan(x)))
assert x == int(x)
@fails
@given(floats())
@TRY_HARDER
def test_is_not_int(x):
assume(not (math.isinf(x) or math.isnan(x)))
assert x != int(x)
@fails
@given(floats())
@TRY_HARDER
def test_is_in_exact_int_range(x):
assume(not (math.isinf(x) or math.isnan(x)))
assert x + 1 != x
# Tests whether we can represent subnormal floating point numbers.
# This is essentially a function of how the python interpreter
# was compiled.
# Everything is terrible
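# (Added note: math.ldexp(0.25, -1022) is 0.25 * 2 ** -1022, which is smaller
# than the smallest normal double, so it is non-zero only when the platform
# supports gradual underflow to subnormal values.)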
if math.ldexp(0.25, -1022) > 0:
REALLY_SMALL_FLOAT = sys.float_info.min
else:
REALLY_SMALL_FLOAT = sys.float_info.min * 2
@fails
@given(floats())
@TRY_HARDER
def test_can_generate_really_small_positive_floats(x):
assume(x > 0)
assert x >= REALLY_SMALL_FLOAT
@fails
@given(floats())
@TRY_HARDER
def test_can_generate_really_small_negative_floats(x):
assume(x < 0)
assert x <= -REALLY_SMALL_FLOAT
@fails
@given(floats())
@TRY_HARDER
def test_can_find_floats_that_do_not_round_trip_through_strings(x):
assert float(str(x)) == x
@fails
@given(floats())
@TRY_HARDER
def test_can_find_floats_that_do_not_round_trip_through_reprs(x):
assert float(repr(x)) == x
@given(floats(), floats(), integers())
def test_floats_are_in_range(x, y, s):
assume(not (math.isnan(x) or math.isnan(y)))
assume(not (math.isinf(x) or math.isinf(y)))
x, y = sorted((x, y))
assume(x < y)
@given(floats(x, y))
@seed(s)
@settings(max_examples=10)
def test_is_in_range(t):
assert x <= t <= y
try:
test_is_in_range()
except Unsatisfiable:
reject()
hypothesis-3.0.1/tests/nocover/test_git_merge.py 0000664 0000000 0000000 00000010701 12661275660 0022114 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import base64
from collections import namedtuple
import hypothesis.strategies as s
from hypothesis import settings
from hypothesis.database import SQLiteExampleDatabase
from hypothesis.stateful import GenericStateMachine
from hypothesis.tools.mergedbs import merge_dbs
FORK_NOW = u'fork'
Insert = namedtuple(u'Insert', (u'key', u'value', u'target'))
Delete = namedtuple(u'Delete', (u'key', u'value', u'target'))
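# Added note: TestingBackend below mirrors every save/delete into an
# in-memory set alongside the real SQLite store, so the state machine can
# cheaply compute the expected database contents after a merge.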
class TestingBackend(SQLiteExampleDatabase):
def __init__(self):
super(TestingBackend, self).__init__()
self.create_db_if_needed()
self.mirror = set()
def save(self, key, value):
super(TestingBackend, self).save(key, value)
self.mirror.add((key, value))
def delete(self, key, value):
super(TestingBackend, self).delete(key, value)
try:
self.mirror.remove((key, value))
except KeyError:
pass
def refresh_mirror(self):
self.mirror = set()
with self.cursor() as cursor:
cursor.execute("""
select key, value
from hypothesis_data_mapping
""")
for r in cursor:
self.mirror.add(tuple(map(base64.b64decode, r)))
class DatabaseMergingState(GenericStateMachine):
def __init__(self):
super(DatabaseMergingState, self).__init__()
self.forked = False
self.original = TestingBackend()
self.left = TestingBackend()
self.right = TestingBackend()
self.seen_strings = set()
def values(self):
base = s.binary()
if self.seen_strings:
return s.sampled_from(sorted(self.seen_strings)) | base
else:
return base
def steps(self):
values = self.values()
if not self.forked:
return (
s.just(FORK_NOW) |
s.builds(Insert, values, values, s.none()) |
s.builds(Delete, values, values, s.none())
)
else:
targets = s.sampled_from((self.left, self.right))
return (
s.builds(Insert, values, values, targets) |
s.builds(Delete, values, values, targets)
)
def execute_step(self, step):
if step == FORK_NOW:
self.forked = True
else:
assert isinstance(step, (Insert, Delete))
self.seen_strings.add(step.key)
self.seen_strings.add(step.value)
if self.forked:
targets = (step.target,)
else:
targets = (self.original, self.left, self.right)
for target in targets:
if isinstance(step, Insert):
target.save(step.key, step.value)
else:
assert isinstance(step, Delete)
target.delete(step.key, step.value)
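# Added note: the teardown below computes the expected merged contents as
# "everything present in either fork, minus anything either fork deleted
# relative to the original snapshot", then checks that merge_dbs() reports
# matching insert/delete counts and that the merged mirror agrees.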
def teardown(self):
target_mirror = (self.left.mirror | self.right.mirror) - (
(self.original.mirror - self.left.mirror) |
(self.original.mirror - self.right.mirror)
)
n_inserts = len(
self.right.mirror - self.left.mirror - self.original.mirror)
n_deletes = len(
(self.original.mirror - self.right.mirror) & self.left.mirror)
result = merge_dbs(
self.original.connection(),
self.left.connection(),
self.right.connection()
)
assert result.inserts == n_inserts
assert result.deletes == n_deletes
self.left.refresh_mirror()
self.original.close()
self.left.close()
self.right.close()
assert self.left.mirror == target_mirror
TestMerging = DatabaseMergingState.TestCase
TestMerging.settings = settings(
TestMerging.settings, timeout=60)
hypothesis-3.0.1/tests/nocover/test_pretty_repr.py 0000664 0000000 0000000 00000006551 12661275660 0022541 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis.errors import InvalidArgument
from hypothesis.control import reject
from hypothesis.internal.compat import OrderedDict
def foo(x):
pass
def bar(x):
pass
def baz(x):
pass
fns = [
foo, bar, baz
]
def return_args(*args, **kwargs):
return args, kwargs
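# Added note: builds_ignoring_invalid draws a tuple of args and a dict of
# kwargs, applies them to the given strategy constructor, validates the
# result, and rejects any combination that raises InvalidArgument.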
def builds_ignoring_invalid(target, *args, **kwargs):
def splat(value):
try:
result = target(*value[0], **value[1])
result.validate()
return result
except InvalidArgument:
reject()
return st.tuples(
st.tuples(*args), st.fixed_dictionaries(kwargs)).map(splat)
size_strategies = dict(
min_size=st.integers(min_value=0, max_value=100) | st.none(),
max_size=st.integers(min_value=0, max_value=100) | st.none(),
average_size=st.floats(min_value=0.0, max_value=100.0) | st.none()
)
values = st.integers() | st.text(average_size=2.0)
Strategies = st.recursive(
st.one_of(
st.sampled_from([
st.none(), st.booleans(), st.randoms(), st.complex_numbers(),
st.randoms(), st.fractions(), st.decimals(),
]),
st.builds(st.just, values),
st.builds(st.sampled_from, st.lists(values, min_size=1)),
builds_ignoring_invalid(st.floats, st.floats(), st.floats()),
),
lambda x: st.one_of(
builds_ignoring_invalid(st.lists, x, **size_strategies),
builds_ignoring_invalid(st.sets, x, **size_strategies),
builds_ignoring_invalid(
lambda v: st.tuples(*v), st.lists(x, average_size=2.0)),
builds_ignoring_invalid(
lambda v: st.one_of(*v),
st.lists(x, average_size=2.0, min_size=1)),
builds_ignoring_invalid(
st.dictionaries, x, x,
dict_class=st.sampled_from([dict, OrderedDict]),
min_size=st.integers(min_value=0, max_value=100) | st.none(),
max_size=st.integers(min_value=0, max_value=100) | st.none(),
average_size=st.floats(min_value=0.0, max_value=100.0) | st.none()
),
st.builds(lambda s, f: s.map(f), x, st.sampled_from(fns)),
)
)
strategy_globals = dict(
(k, getattr(st, k))
for k in dir(st)
)
strategy_globals['OrderedDict'] = OrderedDict
strategy_globals['inf'] = float('inf')
strategy_globals['nan'] = float('nan')
strategy_globals['foo'] = foo
strategy_globals['bar'] = bar
strategy_globals['baz'] = baz
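# Added note: the test below round-trips strategies through repr(): the repr
# of a generated strategy is evaluated in a namespace holding the
# hypothesis.strategies names and must produce an object with the same repr.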
@given(Strategies)
@settings(max_examples=2000)
def test_repr_evals_to_thing_with_same_repr(strategy):
r = repr(strategy)
via_eval = eval(r, strategy_globals)
r2 = repr(via_eval)
assert r == r2
hypothesis-3.0.1/tests/nocover/test_recursive.py 0000664 0000000 0000000 00000011352 12661275660 0022164 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from random import Random
import hypothesis.strategies as st
from hypothesis import find, given, example, settings
from hypothesis.internal.debug import timeout
from hypothesis.internal.compat import integer_types
def test_can_generate_with_large_branching():
def flatten(x):
if isinstance(x, list):
return sum(map(flatten, x), [])
else:
return [x]
xs = find(
st.recursive(
st.integers(), lambda x: st.lists(x, average_size=50),
max_leaves=100),
lambda x: isinstance(x, list) and len(flatten(x)) >= 50
)
assert flatten(xs) == [0] * 50
def test_can_generate_some_depth_with_large_branching():
def depth(x):
if x and isinstance(x, list):
return 1 + max(map(depth, x))
else:
return 1
xs = find(
st.recursive(st.integers(), lambda x: st.lists(x, average_size=100)),
lambda x: depth(x) > 1
)
assert xs == [0]
def test_can_find_quite_deep_lists():
def depth(x):
if x and isinstance(x, list):
return 1 + max(map(depth, x))
else:
return 1
deep = find(
st.recursive(st.booleans(), lambda x: st.lists(x, max_size=3)),
lambda x: depth(x) >= 5)
assert deep == [[[[False]]]]
def test_can_find_quite_broad_lists():
def breadth(x):
if isinstance(x, list):
return sum(map(breadth, x))
else:
return 1
broad = find(
st.recursive(st.booleans(), lambda x: st.lists(x, max_size=10)),
lambda x: breadth(x) >= 20,
settings=settings(max_examples=10000)
)
assert breadth(broad) == 20
def test_drawing_many_near_boundary():
ls = find(
st.lists(st.recursive(
st.booleans(),
lambda x: st.lists(x, min_size=8, max_size=10).map(tuple),
max_leaves=9)),
lambda x: len(set(x)) >= 5,
settings=settings(max_examples=10000, database=None, max_shrinks=2000)
)
assert len(ls) == 5
@given(st.randoms())
@settings(max_examples=50, max_shrinks=0)
@example(Random(-1363972488426139))
@example(Random(-4))
def test_can_use_recursive_data_in_sets(rnd):
nested_sets = st.recursive(
st.booleans(),
lambda js: st.frozensets(js, average_size=2.0),
max_leaves=10
)
nested_sets.example(rnd)
def flatten(x):
if isinstance(x, bool):
return frozenset((x,))
else:
result = frozenset()
for t in x:
result |= flatten(t)
if len(result) == 2:
break
return result
assert rnd is not None
x = find(
nested_sets, lambda x: len(flatten(x)) == 2, random=rnd,
settings=settings(database=None, max_shrinks=1000, max_examples=1000))
assert x in (
frozenset((False, True)),
frozenset((False, frozenset((True,)))),
frozenset({frozenset({False, True})})
)
def test_can_form_sets_of_recursive_data():
trees = st.sets(st.recursive(
st.booleans(),
lambda x: st.lists(x, min_size=5).map(tuple),
max_leaves=20))
xs = find(trees, lambda x: len(x) >= 10, settings=settings(
database=None, timeout=20, max_shrinks=1000, max_examples=1000
))
assert len(xs) == 10
@given(st.randoms())
@settings(max_examples=2, database=None)
@timeout(60)
def test_can_flatmap_to_recursive_data(rnd):
stuff = st.lists(st.integers(), min_size=1).flatmap(
lambda elts: st.recursive(
st.sampled_from(elts), lambda x: st.lists(x, average_size=25),
max_leaves=25
))
def flatten(x):
if isinstance(x, integer_types):
return [x]
else:
return sum(map(flatten, x), [])
tree = find(
stuff, lambda x: sum(flatten(x)) >= 100,
settings=settings(
database=None, max_shrinks=2000, max_examples=1000,
timeout=20,
),
random=rnd
)
flat = flatten(tree)
assert (sum(flat) == 1000) or (len(set(flat)) == 1)
hypothesis-3.0.1/tests/nocover/test_statistical_distribution.py 0000664 0000000 0000000 00000027172 12661275660 0025307 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
"""Statistical tests over the forms of the distributions in the standard set of
definitions.
These tests all take the form of a classic hypothesis test with the null
hypothesis being that the probability of some event occurring when drawing
data from the distribution produced by some specifier is at least some
required value; each such test only fails if its p-value drops below
REQUIRED_P.
"""
from __future__ import division, print_function, absolute_import
import re
import math
import collections
import pytest
import hypothesis.internal.reflection as reflection
from hypothesis import settings as Settings
from hypothesis.errors import UnsatisfiedAssumption
from hypothesis.strategies import just, sets, text, lists, floats, \
tuples, booleans, integers, sampled_from
from hypothesis.internal.compat import PY26, hrange
from hypothesis.internal.conjecture.engine import TestRunner
pytestmark = pytest.mark.skipif(PY26, reason=u'2.6 lacks erf')
# We run each individual test at a very high level of significance to the
# point where it will basically only fail if it's really really wildly wrong.
# We then run the Benjamini–Hochberg procedure at the end to detect
# which of these we should consider statistically significant at the 1% level.
REQUIRED_P = 10e-6
FALSE_POSITIVE_RATE = 0.01
MIN_RUNS = 500
MAX_RUNS = MIN_RUNS * 20
def cumulative_normal(x):
return 0.5 * (1 + math.erf(x / math.sqrt(2)))
def cumulative_binomial_probability(n, p, k):
assert 0 <= k <= n
assert n > 5
# Uses a normal approximation because our n is large enough
mean = float(n) * p
sd = math.sqrt(n * p * (1 - p))
assert mean + 3 * sd <= n
assert mean - 3 * sd >= 0
return cumulative_normal((k - mean) / sd)
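# Illustrative (approximate) numbers: with n=1000, p=0.5, k=450 this returns
# roughly cumulative_normal((450 - 500) / 15.8) ~= cumulative_normal(-3.16),
# i.e. a p-value of about 0.0008 for seeing 450 or fewer successes.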
class Result(object):
def __init__(
self,
success_count,
total_runs,
desired_probability,
predicate,
condition_string,
specifier,
):
self.success_count = success_count
self.total_runs = total_runs
self.desired_probability = desired_probability
self.predicate = predicate
self.condition_string = condition_string
self.p = cumulative_binomial_probability(
total_runs, self.desired_probability, success_count,
)
self.specifier = specifier
self.failed = False
def description(self):
condition_string = (
' | ' + self.condition_string if self.condition_string else u'')
return (
'P(%s%s) >= %g given %r: p = %g.'
' Occurred in %d / %d = %g of runs. '
) % (
strip_lambda(
reflection.get_pretty_function_description(self.predicate)),
condition_string,
self.desired_probability,
self.specifier,
self.p,
self.success_count, self.total_runs,
float(self.success_count) / self.total_runs
)
def teardown_module(module):
test_results = []
for k, v in vars(module).items():
if u'test_' in k and hasattr(v, u'test_result'):
test_results.append(v.test_result)
test_results.sort(key=lambda x: x.p)
n = len(test_results)
k = 0
for i in hrange(n):
if test_results[i].p < (FALSE_POSITIVE_RATE * (i + 1)) / n:
k = i + 1
rejected = [r for r in test_results[:k] if not r.failed]
if rejected:
raise HypothesisFalsified(
((
u'Although these tests were not significant at p < %g, '
u'the Benjamini-Hochberg procedure demonstrates that the '
u'following are rejected with a false discovery rate of %g: '
u'\n\n'
) % (REQUIRED_P, FALSE_POSITIVE_RATE)) + u'\n'.join(
(u' ' + p.description())
for p in rejected
))
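# Worked example of the cutoff logic above, using hypothetical sorted p-values
# [0.001, 0.004, 0.02, 0.5] with FALSE_POSITIVE_RATE = 0.01 and n = 4: the
# step-up thresholds are 0.0025, 0.005, 0.0075 and 0.01, so k ends up as 2 and
# only the first two results would be reported as rejected.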
INITIAL_LAMBDA = re.compile(u'^lambda[^:]*:\s*')
def strip_lambda(s):
return INITIAL_LAMBDA.sub(u'', s)
class HypothesisFalsified(AssertionError):
pass
class ConditionTooHard(Exception):
pass
def define_test(specifier, q, predicate, condition=None):
def run_test():
if condition is None:
_condition = lambda x: True
condition_string = u''
else:
_condition = condition
condition_string = strip_lambda(
reflection.get_pretty_function_description(condition))
count = [0]
successful_runs = [0]
def test_function(data):
try:
value = data.draw(specifier)
except UnsatisfiedAssumption:
data.mark_invalid()
if not _condition(value):
data.mark_invalid()
successful_runs[0] += 1
if predicate(value):
count[0] += 1
TestRunner(
test_function,
settings=Settings(
max_examples=MAX_RUNS,
max_iterations=MAX_RUNS * 10,
)).run()
successful_runs = successful_runs[0]
count = count[0]
if successful_runs < MIN_RUNS:
raise ConditionTooHard((
u'Unable to find enough examples satisfying predicate %s '
u'only found %d but required at least %d for validity'
) % (
condition_string, successful_runs, MIN_RUNS
))
result = Result(
count,
successful_runs,
q,
predicate,
condition_string,
specifier,
)
p = cumulative_binomial_probability(successful_runs, q, count)
run_test.test_result = result
# The test passes if we fail to reject the null hypothesis that
# the probability is at least q
if p < REQUIRED_P:
result.failed = True
raise HypothesisFalsified(result.description() + u' rejected')
return run_test
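# define_test returns a plain zero-argument callable; the module-level
# assignments further down bind many of them to test_* names, and each one
# stashes its statistics on run_test.test_result so that teardown_module
# (defined above) can apply the Benjamini-Hochberg correction across the file.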
def test_assertion_error_message():
# no really. There's enough code in there that it's silly not to test it.
# By which I mostly mean "coverage will be sad if I don't".
# This also guards against my breaking the tests by making it so that they
# always pass even with implausible predicates.
with pytest.raises(AssertionError) as e:
define_test(floats(), 0.5, lambda x: x == 0.0)()
message = e.value.args[0]
assert u'x == 0.0' in message
assert u'lambda' not in message
assert u'rejected' in message
def test_raises_an_error_on_impossible_conditions():
with pytest.raises(ConditionTooHard) as e:
define_test(floats(), 0.5, lambda x: True, condition=lambda x: False)()
assert u'only found 0 ' in e.value.args[0]
def test_puts_the_condition_in_the_error_message():
def positive(x):
return x >= 0
with pytest.raises(AssertionError) as e:
define_test(
floats(), 0.5, lambda x: x == 0.0,
condition=positive)()
message = e.value.args[0]
assert u'x == 0.0' in message
assert u'lambda' not in message
assert u'rejected' in message
assert u'positive' in message
test_can_produce_zero = define_test(integers(), 0.01, lambda x: x == 0)
test_can_produce_large_magnitude_integers = define_test(
integers(), 0.25, lambda x: abs(x) > 1000
)
test_can_produce_large_positive_integers = define_test(
integers(), 0.13, lambda x: x > 1000
)
test_can_produce_large_negative_integers = define_test(
integers(), 0.13, lambda x: x < -1000
)
def long_list(xs):
return len(xs) >= 20
test_can_produce_unstripped_strings = define_test(
text(), 0.05, lambda x: x != x.strip()
)
test_can_produce_stripped_strings = define_test(
text(), 0.05, lambda x: x == x.strip()
)
test_can_produce_multi_line_strings = define_test(
text(average_size=25.0), 0.1, lambda x: u'\n' in x
)
test_can_produce_ascii_strings = define_test(
text(), 0.1, lambda x: all(ord(c) <= 127 for c in x),
)
test_can_produce_long_strings_with_no_ascii = define_test(
text(), 0.02, lambda x: all(ord(c) > 127 for c in x),
condition=lambda x: len(x) >= 10
)
test_can_produce_short_strings_with_some_non_ascii = define_test(
text(), 0.1, lambda x: any(ord(c) > 127 for c in x),
condition=lambda x: len(x) <= 3
)
test_can_produce_positive_infinity = define_test(
floats(), 0.01, lambda x: x == float(u'inf')
)
test_can_produce_negative_infinity = define_test(
floats(), 0.01, lambda x: x == float(u'-inf')
)
test_can_produce_nan = define_test(
floats(), 0.02, math.isnan
)
test_can_produce_long_lists_of_negative_integers = define_test(
lists(integers()), 0.01, lambda x: all(t <= 0 for t in x),
condition=lambda x: len(x) >= 20
)
test_can_produce_floats_near_left = define_test(
floats(0, 1), 0.1,
lambda t: t < 0.2
)
test_can_produce_floats_near_right = define_test(
floats(0, 1), 0.1,
lambda t: t > 0.8
)
test_can_produce_floats_in_middle = define_test(
floats(0, 1), 0.3,
lambda t: 0.2 <= t <= 0.8
)
test_can_produce_long_lists = define_test(
lists(integers(), average_size=25.0), 0.2, long_list
)
test_can_produce_short_lists = define_test(
lists(integers()), 0.2, lambda x: len(x) <= 10
)
test_can_produce_the_same_int_twice = define_test(
tuples(lists(integers(), average_size=25.0), integers()), 0.01,
lambda t: t[0].count(t[1]) > 1
)
def distorted_value(x):
c = collections.Counter(x)
return min(c.values()) * 3 <= max(c.values())
def distorted(x):
return distorted_value(map(type, x))
test_sampled_from_large_number_can_mix = define_test(
lists(sampled_from(range(50)), min_size=50), 0.1,
lambda x: len(set(x)) >= 25,
)
test_sampled_from_often_distorted = define_test(
lists(sampled_from(range(5))), 0.28, distorted_value,
condition=lambda x: len(x) >= 3,
)
test_non_empty_subset_of_two_is_usually_large = define_test(
sets(sampled_from((1, 2))), 0.1,
lambda t: len(t) == 2
)
test_subset_of_ten_is_sometimes_empty = define_test(
sets(integers(1, 10)), 0.05, lambda t: len(t) == 0
)
test_mostly_sensible_floats = define_test(
floats(), 0.5,
lambda t: t + 1 > t
)
test_mostly_largish_floats = define_test(
floats(), 0.5,
lambda t: t + 1 > 1,
condition=lambda x: x > 0,
)
test_ints_can_occasionally_be_really_large = define_test(
integers(), 0.01,
lambda t: t >= 2 ** 63
)
test_mixing_is_sometimes_distorted = define_test(
lists(booleans() | tuples(), average_size=25.0), 0.05, distorted,
condition=lambda x: len(set(map(type, x))) == 2,
)
test_mixes_2_reasonably_often = define_test(
lists(booleans() | tuples(), average_size=25.0), 0.15,
lambda x: len(set(map(type, x))) > 1,
condition=bool,
)
test_partial_mixes_3_reasonably_often = define_test(
lists(booleans() | tuples() | just(u'hi'), average_size=25.0), 0.10,
lambda x: 1 < len(set(map(type, x))) < 3,
condition=bool,
)
test_mixes_not_too_often = define_test(
lists(booleans() | tuples(), average_size=25.0), 0.1,
lambda x: len(set(map(type, x))) == 1,
condition=bool,
)
test_float_lists_have_non_reversible_sum = define_test(
lists(floats(), min_size=2), 0.01, lambda x: sum(x) != sum(reversed(x)),
condition=lambda x: not math.isnan(sum(x))
)
hypothesis-3.0.1/tests/nocover/test_strategy_state.py 0000664 0000000 0000000 00000016045 12661275660 0023223 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import math
import hashlib
from random import Random
from hypothesis import seed, given, assume, settings, Verbosity
from hypothesis.errors import NoExamples, FailedHealthCheck
from hypothesis.database import ExampleDatabase
from hypothesis.stateful import rule, Bundle, RuleBasedStateMachine
from hypothesis.strategies import just, none, text, lists, binary, \
floats, tuples, booleans, decimals, integers, fractions, \
float_to_int, int_to_float, sampled_from, complex_numbers
from hypothesis.utils.size import clamp
from hypothesis.internal.debug import timeout
from hypothesis.internal.compat import PYPY
AVERAGE_LIST_LENGTH = 2
class HypothesisSpec(RuleBasedStateMachine):
def __init__(self):
super(HypothesisSpec, self).__init__()
self.database = None
strategies = Bundle(u'strategy')
strategy_tuples = Bundle(u'tuples')
objects = Bundle(u'objects')
basic_data = Bundle(u'basic')
varied_floats = Bundle(u'varied_floats')
def teardown(self):
self.clear_database()
@timeout(60, catchable=True)
def execute_step(self, step):
return super(HypothesisSpec, self).execute_step(step)
@rule()
def clear_database(self):
if self.database is not None:
self.database.close()
self.database = None
@rule()
def set_database(self):
self.teardown()
self.database = ExampleDatabase()
@rule(strat=strategies, r=integers(), mshr=integers(0, 100))
def find_constant_failure(self, strat, r, mshr):
with settings(
verbosity=Verbosity.quiet, max_examples=1,
min_satisfying_examples=0,
database=self.database,
max_shrinks=mshr,
):
@given(strat)
@seed(r)
def test(x):
assert False
try:
test()
except (AssertionError, FailedHealthCheck):
pass
@rule(
strat=strategies, r=integers(), p=floats(0, 1),
mex=integers(1, 10), mshr=integers(1, 100)
)
def find_weird_failure(self, strat, r, mex, p, mshr):
with settings(
verbosity=Verbosity.quiet, max_examples=mex,
min_satisfying_examples=0,
database=self.database,
max_shrinks=mshr,
):
@given(strat)
@seed(r)
def test(x):
assert Random(
hashlib.md5(repr(x).encode(u'utf-8')).digest()
).random() <= p
try:
test()
except (AssertionError, FailedHealthCheck):
pass
@rule(target=strategies, spec=sampled_from((
integers(), booleans(), floats(), complex_numbers(),
fractions(), decimals(), text(), binary(), none(),
tuples(),
)))
def strategy(self, spec):
return spec
@rule(target=strategies, values=lists(integers() | text(), min_size=1))
def sampled_from_strategy(self, values):
return sampled_from(values)
@rule(target=strategies, spec=strategy_tuples)
def strategy_for_tuples(self, spec):
return tuples(*spec)
@rule(
target=strategies,
source=strategies,
level=integers(1, 10),
mixer=text())
def filtered_strategy(self, source, level, mixer):
def is_good(x):
return bool(Random(
hashlib.md5((mixer + repr(x)).encode(u'utf-8')).digest()
).randint(0, level))
return source.filter(is_good)
@rule(target=strategies, elements=strategies)
def list_strategy(self, elements):
return lists(elements, average_size=AVERAGE_LIST_LENGTH)
@rule(target=strategies, l=strategies, r=strategies)
def or_strategy(self, l, r):
return l | r
@rule(target=varied_floats, source=floats())
def float(self, source):
return source
@rule(
target=varied_floats,
source=varied_floats, offset=integers(-100, 100))
def adjust_float(self, source, offset):
return int_to_float(clamp(
0,
float_to_int(source) + offset,
2 ** 64 - 1
))
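# adjust_float nudges the 64-bit IEEE-754 representation of an existing float
# by a small integer offset (clamped to the valid range), which is a cheap way
# of producing floats that are bitwise-near previously drawn ones.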
@rule(
target=strategies,
left=varied_floats, right=varied_floats
)
def float_range(self, left, right):
for f in (math.isnan, math.isinf):
for x in (left, right):
assume(not f(x))
left, right = sorted((left, right))
assert left <= right
return floats(left, right)
@rule(
target=strategies,
source=strategies, result1=strategies, result2=strategies,
mixer=text(), p=floats(0, 1))
def flatmapped_strategy(self, source, result1, result2, mixer, p):
assume(result1 is not result2)
def do_map(value):
rep = repr(value)
random = Random(
hashlib.md5((mixer + rep).encode(u'utf-8')).digest()
)
if random.random() <= p:
return result1
else:
return result2
return source.flatmap(do_map)
@rule(target=strategies, value=objects)
def just_strategy(self, value):
return just(value)
@rule(target=strategy_tuples, source=strategies)
def single_tuple(self, source):
return (source,)
@rule(target=strategy_tuples, l=strategy_tuples, r=strategy_tuples)
def cat_tuples(self, l, r):
return l + r
@rule(target=objects, strat=strategies)
def get_example(self, strat):
try:
strat.example()
except NoExamples:
# Because of filtering, some of the strategies we look at don't actually
# have any examples.
pass
@rule(target=strategies, left=integers(), right=integers())
def integer_range(self, left, right):
left, right = sorted((left, right))
return integers(left, right)
@rule(strat=strategies)
def repr_is_good(self, strat):
assert u' at 0x' not in repr(strat)
MAIN = __name__ == u'__main__'
TestHypothesis = HypothesisSpec.TestCase
TestHypothesis.settings = settings(
TestHypothesis.settings,
stateful_step_count=10 if PYPY else 50,
max_shrinks=500,
timeout=500 if MAIN else 60,
min_satisfying_examples=0,
verbosity=max(TestHypothesis.settings.verbosity, Verbosity.verbose),
max_examples=10000 if MAIN else 200,
strict=True
)
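# HypothesisSpec.TestCase is a unittest-style test that repeatedly runs random
# sequences of the @rule methods above, threading values between them via the
# declared Bundles; the settings above scale the run up when the module is
# executed directly and keep it shorter otherwise.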
if MAIN:
TestHypothesis().runTest()
hypothesis-3.0.1/tests/nocover/test_streams.py 0000664 0000000 0000000 00000002016 12661275660 0021630 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from itertools import islice
from hypothesis import given
from hypothesis.strategies import integers, streaming
from hypothesis.internal.compat import integer_types
@given(streaming(integers()))
def test_streams_are_arbitrarily_long(ss):
for i in islice(ss, 100):
assert isinstance(i, integer_types)
hypothesis-3.0.1/tests/numpy/ 0000775 0000000 0000000 00000000000 12661275660 0016237 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/numpy/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0020343 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/numpy/test_gen_data.py 0000664 0000000 0000000 00000005415 12661275660 0021417 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import numpy as np
import pytest
import hypothesis.strategies as st
from flaky import flaky
from hypothesis import find, given, settings
from hypothesis.extra.numpy import arrays, from_dtype
from hypothesis.strategytests import strategy_test_suite
from hypothesis.internal.compat import text_type, binary_type
TestFloats = strategy_test_suite(arrays(float, ()))
TestIntMatrix = strategy_test_suite(arrays(int, (3, 2)))
TestBoolTensor = strategy_test_suite(arrays(bool, (2, 2, 2)))
STANDARD_TYPES = list(map(np.dtype, [
u'int8', u'int32', u'int64',
u'float', u'float32', u'float64',
complex,
bool, text_type, binary_type
]))
@pytest.mark.parametrize(u't', STANDARD_TYPES)
def test_produces_instances(t):
@given(from_dtype(t))
def test_is_t(x):
assert isinstance(x, t.type)
assert x.dtype.kind == t.kind
test_is_t()
@given(arrays(float, ()))
def test_empty_dimensions_are_scalars(x):
assert isinstance(x, np.dtype(float).type)
@given(arrays(u'uint32', (5, 5)))
def test_generates_unsigned_ints(x):
assert (x >= 0).all()
@given(arrays(int, (1,)))
def test_assert_fits_in_machine_size(x):
pass
def test_generates_and_minimizes():
x = find(arrays(float, (2, 2)), lambda t: True)
assert (x == np.zeros(shape=(2, 2), dtype=float)).all()
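# find() with a vacuous predicate shrinks to the simplest array the strategy
# can produce, which for a float dtype is the all-zero array - hence the
# assertion above.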
def test_can_minimize_large_arrays():
x = find(arrays(u'uint32', 500), lambda t: t.any())
assert x.sum() == 1
@flaky(max_runs=5, min_passes=1)
def test_can_minimize_float_arrays():
x = find(
arrays(float, 50), lambda t: t.sum() >= 1.0,
settings=settings(database=None))
assert 1.0 <= x.sum() <= 1.1
class Foo(object):
pass
foos = st.tuples().map(lambda _: Foo())
def test_can_create_arrays_of_composite_types():
arr = find(arrays(object, 100, foos), lambda x: True)
for x in arr:
assert isinstance(x, Foo)
def test_can_create_arrays_of_tuples():
arr = find(
arrays(object, 10, st.tuples(st.integers(), st.integers())),
lambda x: all(t[0] != t[1] for t in x))
for a in arr:
assert a in ((1, 0), (0, 1))
hypothesis-3.0.1/tests/py2/ 0000775 0000000 0000000 00000000000 12661275660 0015601 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/py2/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0017705 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/py2/test_destructuring.py 0000664 0000000 0000000 00000002310 12661275660 0022110 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import given
from hypothesis.errors import InvalidArgument
from hypothesis.strategies import integers
from hypothesis.internal.reflection import get_pretty_function_description
def test_destructuring_lambdas():
assert get_pretty_function_description(lambda (x, y): 1) == \
u'lambda (x, y): <unknown>'
def test_destructuring_not_allowed():
@given(integers())
def foo(a, (b, c)):
pass
with pytest.raises(InvalidArgument):
foo()
hypothesis-3.0.1/tests/py3/ 0000775 0000000 0000000 00000000000 12661275660 0015602 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/py3/__init__.py 0000664 0000000 0000000 00000001220 12661275660 0017706 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
hypothesis-3.0.1/tests/py3/test_unicode_identifiers.py 0000664 0000000 0000000 00000002102 12661275660 0023221 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.internal.reflection import proxies
def test_can_copy_argspec_of_unicode_args():
def foo(μ):
return μ
@proxies(foo)
def bar(μ):
return foo(μ)
assert bar(1) == 1
def test_can_copy_argspec_of_unicode_name():
def ā():
return 1
@proxies(ā)
def bar():
return 2
assert bar() == 2
hypothesis-3.0.1/tests/pytest/ 0000775 0000000 0000000 00000000000 12661275660 0016417 5 ustar 00root root 0000000 0000000 hypothesis-3.0.1/tests/pytest/test_capture.py 0000664 0000000 0000000 00000005132 12661275660 0021474 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import, \
unicode_literals
import pytest
from hypothesis.internal.compat import PY2, hunichr, WINDOWS, \
escape_unicode_characters
pytest_plugins = str('pytester')
TESTSUITE = """
from hypothesis import given, settings, Verbosity
from hypothesis.strategies import integers
@settings(verbosity=Verbosity.verbose)
@given(integers())
def test_should_be_verbose(x):
pass
"""
@pytest.mark.parametrize('capture,expected', [
('no', True),
('fd', False),
])
def test_output_without_capture(testdir, capture, expected):
script = testdir.makepyfile(TESTSUITE)
result = testdir.runpytest(script, '--verbose', '--capture', capture)
out = '\n'.join(result.stdout.lines)
assert 'test_should_be_verbose' in out
assert ('Trying example' in out) == expected
assert result.ret == 0
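# With --capture=no, Hypothesis's verbose "Trying example" lines reach the
# recorded stdout, while fd-level capture swallows them; the parametrized
# `expected` flag above encodes exactly that difference.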
UNICODE_EMITTING = """
import pytest
from hypothesis import given, settings, Verbosity
from hypothesis.strategies import text
from hypothesis.internal.compat import PY3
import sys
@settings(verbosity=Verbosity.verbose)
def test_emits_unicode():
@given(text())
def test_should_emit_unicode(t):
assert all(ord(c) <= 1000 for c in t)
with pytest.raises(AssertionError):
test_should_emit_unicode()
"""
@pytest.mark.xfail(
WINDOWS,
reason=(
"Encoding issues in running the subprocess, possibly py.test's fault"))
@pytest.mark.skipif(
PY2, reason="Output streams don't have encodings in python 2")
def test_output_emitting_unicode(testdir, monkeypatch):
monkeypatch.setenv('LC_ALL', 'C')
monkeypatch.setenv('LANG', 'C')
script = testdir.makepyfile(UNICODE_EMITTING)
result = getattr(
testdir, 'runpytest_subprocess', testdir.runpytest)(
script, '--verbose', '--capture=no')
out = '\n'.join(result.stdout.lines)
assert 'test_emits_unicode' in out
assert escape_unicode_characters(hunichr(1001)) in out
assert result.ret == 0
hypothesis-3.0.1/tests/pytest/test_compat.py 0000664 0000000 0000000 00000001633 12661275660 0021316 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
import pytest
from hypothesis import given
from hypothesis.strategies import booleans
@given(booleans())
@pytest.mark.parametrize('hi', (1, 2, 3))
def test_parametrize_after_given(hi, i):
pass
hypothesis-3.0.1/tests/pytest/test_mark.py 0000664 0000000 0000000 00000002261 12661275660 0020763 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
pytest_plugins = str('pytester')
TESTSUITE = """
from hypothesis import given
from hypothesis.strategies import integers
@given(integers())
def test_foo(x):
pass
def test_bar():
pass
"""
def test_can_select_mark(testdir):
script = testdir.makepyfile(TESTSUITE)
result = testdir.runpytest(script, '--verbose', '--strict', '-m',
'hypothesis')
out = '\n'.join(result.stdout.lines)
assert '1 passed, 1 deselected' in out
hypothesis-3.0.1/tests/pytest/test_profiles.py 0000664 0000000 0000000 00000002571 12661275660 0021660 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
from hypothesis.extra.pytestplugin import LOAD_PROFILE_OPTION
pytest_plugins = str('pytester')
CONFTEST = """
from hypothesis._settings import settings
settings.register_profile("test", settings(max_examples=1))
"""
TESTSUITE = """
from hypothesis import given
from hypothesis.strategies import integers
from hypothesis._settings import settings
def test_this_one_is_ok():
assert settings().max_examples == 1
"""
def test_runs_reporting_hook(testdir):
script = testdir.makepyfile(TESTSUITE)
testdir.makeconftest(CONFTEST)
result = testdir.runpytest(script, LOAD_PROFILE_OPTION, 'test')
out = '\n'.join(result.stdout.lines)
assert '1 passed' in out
hypothesis-3.0.1/tests/pytest/test_reporting.py 0000664 0000000 0000000 00000002427 12661275660 0022046 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import
pytest_plugins = str('pytester')
TESTSUITE = """
from hypothesis import given
from hypothesis.strategies import lists, integers
@given(integers())
def test_this_one_is_ok(x):
pass
@given(lists(integers()))
def test_hi(xs):
assert False
"""
def test_runs_reporting_hook(testdir):
script = testdir.makepyfile(TESTSUITE)
result = testdir.runpytest(script, '--verbose')
out = '\n'.join(result.stdout.lines)
assert 'test_this_one_is_ok' in out
assert 'Captured stdout call' not in out
assert 'Falsifying example' in out
assert result.ret != 0
hypothesis-3.0.1/tests/pytest/test_runs.py 0000664 0000000 0000000 00000001752 12661275660 0021024 0 ustar 00root root 0000000 0000000 # coding=utf-8
#
# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
#
# Most of this work is copyright (C) 2013-2015 David R. MacIver
# (david@drmaciver.com), but it contains contributions by others. See
# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
# full list of people who may hold copyright, and consult the git log if you
# need to determine who owns an individual contribution.
#
# This Source Code Form is subject to the terms of the Mozilla Public License,
# v. 2.0. If a copy of the MPL was not distributed with this file, You can
# obtain one at http://mozilla.org/MPL/2.0/.
#
# END HEADER
from __future__ import division, print_function, absolute_import, \
unicode_literals
from hypothesis import given
from tests.common.utils import fails
from hypothesis.strategies import integers
@given(integers())
def test_ints_are_ints(x):
pass
@fails
@given(integers())
def test_ints_are_floats(x):
assert isinstance(x, float)
hypothesis-3.0.1/tox.ini 0000664 0000000 0000000 00000005717 12661275660 0015252 0 ustar 00root root 0000000 0000000 [tox]
envlist = py{26,27,33,34,35,py}-{brief,prettyquick,full,custom,benchmark}
setenv=
LC_ALL=en_GB.UTF-8
passenv=HOME
[testenv]
deps =
pytest==2.8.2
flaky
benchmark: pytest-benchmark==3.0.0
whitelist_externals=
bash
setenv=
LC_ALL=en_GB.UTF-8
LANG=en_GB.UTF-8
brief: HYPOTHESIS_PROFILE=speedy
passenv=
HOME
TOXENV
commands =
full: bash scripts/basic-test.sh
brief: python -m pytest tests/cover/test_testdecorators.py
prettyquick: python -m pytest tests/cover/
custom: python -m pytest {posargs}
benchmark: python -m pytest benchmarks
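# For example, "tox -e py35-full" runs the full scripts/basic-test.sh suite
# under Python 3.5, while "tox -e py27-prettyquick" only runs tests/cover.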
[testenv:py{26,27,33,34,35,py}-brief]
deps=
pytest==2.8.2
flaky
commands=
python -m pytest tests/cover/
[testenv:unicode]
basepython=python2.7
deps =
unicode-nazi
setenv=
UNICODENAZI=true
PYTHONPATH=.
commands=
python scripts/unicodechecker.py
[testenv:fakefactory052]
basepython=python3.5
deps =
pytest==2.8.2
commands =
pip install --no-use-wheel fake-factory==0.5.2
python -m pytest tests/fakefactory
[testenv:fakefactory053]
basepython=python3.5
deps =
pytest==2.8.2
commands =
pip install --no-use-wheel fake-factory==0.5.3
python -m pytest tests/fakefactory
[testenv:django17]
basepython=python3.4
commands =
pip install .[datetime]
pip install --no-use-wheel .[fakefactory]
pip install django>=1.7,<1.7.99
python -m tests.django.manage test tests.django
[testenv:django18]
basepython=python3.4
commands =
pip install .[datetime]
pip install --no-use-wheel .[fakefactory]
pip install django>=1.8,<1.8.99
python -m tests.django.manage test tests.django
[testenv:django19]
basepython=python3.4
commands =
pip install .[datetime]
pip install --no-use-wheel .[fakefactory]
pip install django>=1.9,<1.9.99
python -m tests.django.manage test tests.django
[testenv:nose]
basepython=python3.5
deps =
nose
commands=
nosetests tests/cover/test_testdecorators.py
[testenv:pytest27]
basepython=python3.5
deps =
pytest==2.7.3
commands=
python -m pytest tests/pytest tests/cover/test_testdecorators.py
[testenv:pytest26]
basepython=python2.7
deps =
pytest==2.6.3
commands=
python -m pytest tests/cover/test_testdecorators.py
[testenv:docs]
basepython=python3.4
deps = sphinx
commands=sphinx-build -W -b html -d docs/_build/doctrees docs docs/_build/html
[testenv:coverage]
basepython=python3.4
deps =
coverage
pytest==2.8.2
pytz
flaky
commands =
coverage --version
coverage debug sys
coverage run --rcfile=.coveragerc -m pytest --strict tests/cover tests/datetime tests/py3 --maxfail=1 {posargs}
coverage report -m --fail-under=100 --show-missing
[testenv:examples3]
setenv=
HYPOTHESIS_STRICT_MODE=true
basepython=python3.4
deps=pytest==2.8.2
commands=
python -m pytest examples
[testenv:examples2]
setenv=
HYPOTHESIS_STRICT_MODE=true
basepython=python2.7
deps=pytest==2.8.2
commands=
python -m pytest examples
[pytest]
addopts=--strict --tb=short -vv -p pytester