pytest-benchmark-3.2.2/CONTRIBUTING.rst

============
Contributing
============

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

Bug reports
===========

When `reporting a bug <https://github.com/ionelmc/pytest-benchmark/issues>`_ please include:

* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.

Documentation improvements
==========================

pytest-benchmark could always use more documentation, whether as part of the official pytest-benchmark docs, in docstrings, or even on the web in blog posts, articles, and such.

Feature requests and feedback
=============================

The best way to send feedback is to file an issue at https://github.com/ionelmc/pytest-benchmark/issues.

If you are proposing a feature:

* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that code contributions are welcome :)

Development
===========

To set up ``pytest-benchmark`` for local development:

1. Fork `pytest-benchmark <https://github.com/ionelmc/pytest-benchmark>`_ (look for the "Fork" button).
2. Clone your fork locally::

    git clone git@github.com:your_name_here/pytest-benchmark.git

3. Create a branch for local development::

    git checkout -b name-of-your-bugfix-or-feature

   Now you can make your changes locally.

4. When you're done making changes, run all the checks, the doc builder and the spell checker with ``tox`` in one command::

    tox

5. Commit your changes and push your branch to GitHub::

    git add .
    git commit -m "Your detailed description of your changes."
    git push origin name-of-your-bugfix-or-feature

6. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------

If you need some code review or feedback while you're developing the code, just make the pull request.

For merging, you should:

1. Include passing tests (run ``tox``) [1]_.
2. Update documentation when there's new API, functionality etc.
3. Add a note to ``CHANGELOG.rst`` about the changes.
4. Add yourself to ``AUTHORS.rst``.

.. [1] If you don't have all the necessary python versions available locally you can rely on Travis - it will
       run the tests for each change you add in the pull request.

       It will be slower though ...

Tips
----

To run a subset of tests::

    tox -e envname -- pytest -k test_myfeature

To run all the test environments in *parallel* (you need to ``pip install detox``)::

    detox

pytest-benchmark-3.2.2/tests/test_normal.py

"""
Just to make sure the plugin doesn't choke on doctests::

    >>> print('Yay, doctests!')
    Yay, doctests!
""" import sys # noqa import time from functools import partial import pytest @pytest.mark.skipif('sys.platform == "win32"') def test_fast(benchmark): @benchmark def result(): return time.sleep(0.000001) assert result is None if not benchmark.disabled: assert benchmark.stats.stats.min >= 0.000001 def test_slow(benchmark): assert benchmark(partial(time.sleep, 0.001)) is None def test_slower(benchmark): benchmark(lambda: time.sleep(0.01)) @pytest.mark.benchmark(min_rounds=2, timer=time.time, max_time=0.01) def test_xfast(benchmark): benchmark(str) @pytest.fixture(params=range(5)) def foo(request): return request.param @pytest.mark.skipif('sys.platform == "win32"') def test_parametrized(benchmark, foo): benchmark(time.sleep, 0.00001) if benchmark.enabled: assert benchmark.stats.stats.min >= 0.00001 pytest-benchmark-3.2.2/tests/test_calibration.py0000644000175000017500000000323513416261170020146 0ustar hlehleimport time from functools import partial import pytest def slow_warmup(): x = 0 for _ in range(1000): x *= 1 @pytest.mark.benchmark(warmup=True, warmup_iterations=10 ** 8, max_time=10) def test_calibrate(benchmark): benchmark(slow_warmup) @pytest.mark.benchmark(warmup=True, warmup_iterations=10 ** 8, max_time=10) def test_calibrate_fast(benchmark): benchmark(lambda: [int] * 100) @pytest.mark.benchmark(warmup=True, warmup_iterations=10 ** 8, max_time=10) def test_calibrate_xfast(benchmark): benchmark(lambda: None) @pytest.mark.benchmark(warmup=True, warmup_iterations=10 ** 8, max_time=10) def test_calibrate_slow(benchmark): benchmark(partial(time.sleep, 0.00001)) def timer(ratio, step, additive): t = 0 slowmode = False while 1: if additive: slowmode |= bool((yield t)) else: slowmode = bool((yield t)) if slowmode: t += step * ratio else: t += step @pytest.mark.parametrize("minimum", [1, 0.01, 0.000000001, 0.0000000001, 1.000000000000001]) @pytest.mark.parametrize("skew_ratio", [0, 1, -1]) @pytest.mark.parametrize("additive", [True, False]) 
@pytest.mark.benchmark(max_time=0, min_rounds=1, calibration_precision=100) def test_calibrate_stuck(benchmark, minimum, additive, skew_ratio): # if skew_ratio: # ratio += skew_ratio * SKEW if skew_ratio > 0: ratio = 50 * 1.000000000000001 elif skew_ratio < 0: ratio = 50 / 1.000000000000001 else: ratio = 50 t = timer(ratio, minimum, additive) benchmark._timer = partial(next, t) benchmark._min_time = minimum benchmark(t.send, True) pytest-benchmark-3.2.2/tests/test_doctest.rst0000644000175000017500000000012113416261170017473 0ustar hlehleJust test that pytest-benchmark doesn't choke on DoctestItems:: >>> 1 1 pytest-benchmark-3.2.2/tests/test_with_testcase.py0000644000175000017500000000107713416261170020527 0ustar hlehleimport time import unittest import pytest class TerribleTerribleWayToWriteTests(unittest.TestCase): @pytest.fixture(autouse=True) def setupBenchmark(self, benchmark): self.benchmark = benchmark def test_foo(self): self.benchmark(time.sleep, 0.000001) class TerribleTerribleWayToWritePatchTests(unittest.TestCase): @pytest.fixture(autouse=True) def setupBenchmark(self, benchmark_weave): self.benchmark_weave = benchmark_weave def test_foo2(self): self.benchmark_weave('time.sleep') time.sleep(0.0000001) pytest-benchmark-3.2.2/tests/test_pedantic.py0000644000175000017500000000566613416261170017460 0ustar hlehleimport pytest from pytest import mark from pytest import raises def test_single(benchmark): runs = [] benchmark.pedantic(runs.append, args=[123]) assert runs == [123] def test_setup(benchmark): runs = [] def stuff(foo, bar=123): runs.append((foo, bar)) def setup(): return [1], {"bar": 2} benchmark.pedantic(stuff, setup=setup) assert runs == [(1, 2)] @pytest.mark.benchmark(cprofile=True) def test_setup_cprofile(benchmark): runs = [] def stuff(foo, bar=123): runs.append((foo, bar)) def setup(): return [1], {"bar": 2} benchmark.pedantic(stuff, setup=setup) assert runs == [(1, 2), (1, 2)] def test_args_kwargs(benchmark): runs = [] def stuff(foo, bar=123): 
        runs.append((foo, bar))

    benchmark.pedantic(stuff, args=[1], kwargs={"bar": 2})
    assert runs == [(1, 2)]


def test_iterations(benchmark):
    runs = []
    benchmark.pedantic(runs.append, args=[1], iterations=10)
    assert runs == [1] * 11


def test_rounds_iterations(benchmark):
    runs = []
    benchmark.pedantic(runs.append, args=[1], iterations=10, rounds=15)
    assert runs == [1] * 151


def test_rounds(benchmark):
    runs = []
    benchmark.pedantic(runs.append, args=[1], rounds=15)
    assert runs == [1] * 15


def test_warmup_rounds(benchmark):
    runs = []
    benchmark.pedantic(runs.append, args=[1], warmup_rounds=15, rounds=5)
    assert runs == [1] * 20


@mark.parametrize("value", [0, "x"])
def test_rounds_must_be_int(benchmark, value):
    runs = []
    raises(ValueError, benchmark.pedantic, runs.append, args=[1], rounds=value)
    assert runs == []


@mark.parametrize("value", [-15, "x"])
def test_warmup_rounds_must_be_int(benchmark, value):
    runs = []
    raises(ValueError, benchmark.pedantic, runs.append, args=[1], warmup_rounds=value)
    assert runs == []


def test_setup_many_rounds(benchmark):
    runs = []

    def stuff(foo, bar=123):
        runs.append((foo, bar))

    def setup():
        return [1], {"bar": 2}

    benchmark.pedantic(stuff, setup=setup, rounds=10)
    assert runs == [(1, 2)] * 10


def test_cant_use_both_args_and_setup_with_return(benchmark):
    runs = []

    def stuff(foo, bar=123):
        runs.append((foo, bar))

    def setup():
        return [1], {"bar": 2}

    raises(TypeError, benchmark.pedantic, stuff, setup=setup, args=[123])
    assert runs == []


def test_can_use_both_args_and_setup_without_return(benchmark):
    runs = []

    def stuff(foo, bar=123):
        runs.append((foo, bar))

    benchmark.pedantic(stuff, setup=lambda: None, args=[123])
    assert runs == [(123, 123)]


def test_cant_use_setup_with_many_iterations(benchmark):
    raises(ValueError, benchmark.pedantic, None, setup=lambda: None, iterations=2)


@mark.parametrize("value", [0, -1, "asdf"])
def test_iterations_must_be_positive_int(benchmark, value):
    raises(ValueError, benchmark.pedantic, None, setup=lambda: None, iterations=value)

pytest-benchmark-3.2.2/tests/test_stats.py

from pytest import mark

from pytest_benchmark.stats import Stats


def test_1():
    stats = Stats()
    for i in 4., 36., 45., 50., 75.:
        stats.update(i)

    assert stats.mean == 42.
    assert stats.min == 4.
    assert stats.max == 75.
    assert stats.stddev == 25.700194551792794
    assert stats.rounds == 5
    assert stats.total == 210.
    assert stats.ops == 0.023809523809523808


def test_2():
    stats = Stats()
    stats.update(17.)
    stats.update(19.)
    stats.update(24.)

    assert stats.mean == 20.
    assert stats.min == 17.
    assert stats.max == 24.
    assert stats.stddev == 3.605551275463989
    assert stats.rounds == 3
    assert stats.total == 60.
    assert stats.ops == 0.05


def test_single_item():
    stats = Stats()
    stats.update(1)
    assert stats.mean == 1
    assert stats.median == 1
    assert stats.iqr_outliers == 0
    assert stats.stddev_outliers == 0
    assert stats.min == 1
    assert stats.max == 1
    assert stats.stddev == 0
    assert stats.iqr == 0
    assert stats.rounds == 1
    assert stats.total == 1
    assert stats.ld15iqr == 1
    assert stats.hd15iqr == 1
    assert stats.ops == 1


@mark.parametrize('length', range(1, 10))
def test_length(length):
    stats = Stats()
    for i in range(length):
        stats.update(1)

    assert stats.as_dict()


def test_iqr():
    stats = Stats()
    for i in 6, 7, 15, 36, 39, 40, 41, 42, 43, 47, 49:
        stats.update(i)
    assert stats.iqr == 22.5  # https://en.wikipedia.org/wiki/Quartile#Example_1

    stats = Stats()
    for i in 7, 15, 36, 39, 40, 41:
        stats.update(i)
    assert stats.iqr == 25.0  # https://en.wikipedia.org/wiki/Quartile#Example_2

    stats = Stats()
    for i in 1, 2, 3, 4, 5, 6, 7, 8, 9:
        stats.update(i)
    assert stats.iqr == 4.5  # http://www.phusewiki.org/docs/2012/PRESENTATIONS/SP/SP06%20.pdf - method 1

    stats = Stats()
    for i in 1, 2, 3, 4, 5, 6, 7, 8:
        stats.update(i)
    assert stats.iqr == 4.0  # http://www.lexjansen.com/nesug/nesug07/po/po08.pdf - method 1

    stats = Stats()
    for i in 1, 2, 1, 123, 4, 1234, 1, 234, 12, 34, 12, 3, 2, 34, 23:
        stats.update(i)
    assert stats.iqr == 32.0

    stats = Stats()
    for i in [
        1, 2, 3, 10, 10.1234, 11, 12, 13., 10.1115, 11.1115, 12.1115, 13.5, 10.75, 11.75, 13.12175, 13.1175,
        20, 50, 52
    ]:
        stats.update(i)
    assert stats.stddev == 13.518730097622106
    assert stats.iqr == 3.006212500000002  # close enough: http://www.wessa.net/rwasp_variability.wasp

    stats = Stats()
    for i in [
        11.2, 11.8, 13.2, 12.9, 12.1, 13.5, 14.8, 14.8, 13.6, 11.9, 10.4, 11.8, 11.5, 12.6, 14.1, 13.5,
        12.5, 14.9, 17.0, 17.0, 15.8, 13.3, 11.4, 14.0, 14.5, 15.0, 17.8, 16.3, 17.2, 17.8, 19.9, 19.9,
        18.4, 16.2, 14.6, 16.6, 17.1, 18.0, 19.3, 18.1, 18.3, 21.8, 23.0, 24.2, 20.9, 19.1, 17.2, 19.4,
        19.6, 19.6, 23.6, 23.5, 22.9, 24.3, 26.4, 27.2, 23.7, 21.1, 18.0, 20.1, 20.4, 18.8, 23.5, 22.7,
        23.4, 26.4, 30.2, 29.3, 25.9, 22.9, 20.3, 22.9, 24.2, 23.3, 26.7, 26.9, 27.0, 31.5, 36.4, 34.7,
        31.2, 27.4, 23.7, 27.8, 28.4, 27.7, 31.7, 31.3, 31.8, 37.4, 41.3, 40.5, 35.5, 30.6, 27.1, 30.6,
        31.5, 30.1, 35.6, 34.8, 35.5, 42.2, 46.5, 46.7, 40.4, 34.7, 30.5, 33.6, 34.0, 31.8, 36.2, 34.8,
        36.3, 43.5, 49.1, 50.5, 40.4, 35.9, 31.0, 33.7, 36.0, 34.2, 40.6, 39.6, 42.0, 47.2, 54.8, 55.9,
        46.3, 40.7, 36.2, 40.5, 41.7, 39.1, 41.9, 46.1, 47.2, 53.5, 62.2, 60.6, 50.8, 46.1, 39.0, 43.2,
    ]:
        stats.update(i)
    assert stats.iqr == 18.1  # close enough: http://www.wessa.net/rwasp_variability.wasp


def test_ops():
    stats = Stats()
    stats.update(0)
    assert stats.mean == 0
    assert stats.ops == 0

pytest-benchmark-3.2.2/tests/test_utils.py

import argparse
import distutils.spawn
import os
import subprocess

import pytest
from pytest import mark

from pytest_benchmark.utils import clonefunc
from pytest_benchmark.utils import get_commit_info
from pytest_benchmark.utils import get_project_name
from pytest_benchmark.utils import parse_columns
from pytest_benchmark.utils import parse_elasticsearch_storage
from pytest_benchmark.utils import parse_warmup

pytest_plugins = 'pytester',

f1 = lambda a: a  # noqa
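``clonefunc`` is exercised below with both a lambda and a plain function, and the tests only assert two things: cloning a function yields an independent callable with the same behavior, and non-functions pass through unchanged. A rough sketch of that contract, assuming nothing beyond what the tests assert (this is not pytest-benchmark's actual implementation):

```python
import types


def clonefunc_sketch(f):
    # Non-functions pass through unchanged (cf. test_clonefunc_not_function).
    if not isinstance(f, types.FunctionType):
        return f
    # Rebuild an independent function object around the same code object.
    return types.FunctionType(f.__code__, f.__globals__, f.__name__,
                              f.__defaults__, f.__closure__)
```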
def f2(a):
    return a


@mark.parametrize('f', [f1, f2])
def test_clonefunc(f):
    assert clonefunc(f)(1) == f(1)
    assert clonefunc(f)(1) == f(1)


def test_clonefunc_not_function():
    assert clonefunc(1) == 1


@pytest.yield_fixture(params=(True, False))
def crazytestdir(request, testdir):
    if request.param:
        testdir.tmpdir.join('foo', 'bar').ensure(dir=1).chdir()

    yield testdir


@pytest.fixture(params=('git', 'hg'))
def scm(request, testdir):
    scm = request.param
    if not distutils.spawn.find_executable(scm):
        pytest.skip("%r not available on $PATH" % scm)
    subprocess.check_call([scm, 'init', '.'])
    if scm == 'git':
        subprocess.check_call('git config user.email you@example.com'.split())
        subprocess.check_call('git config user.name you'.split())
    else:
        testdir.tmpdir.join('.hg', 'hgrc').write("""
[ui]
username = you <you@example.com>
""")
    return scm


def test_get_commit_info(scm, crazytestdir):
    with open('test_get_commit_info.py', 'w') as fh:
        fh.write('asdf')
    subprocess.check_call([scm, 'add', 'test_get_commit_info.py'])
    subprocess.check_call([scm, 'commit', '-m', 'asdf'])
    out = get_commit_info()
    branch = 'master' if scm == 'git' else 'default'
    assert out['branch'] == branch

    assert out.get('dirty') is False
    assert 'id' in out

    with open('test_get_commit_info.py', 'w') as fh:
        fh.write('sadf')
    out = get_commit_info()

    assert out.get('dirty') is True
    assert 'id' in out


def test_missing_scm_bins(scm, crazytestdir, monkeypatch):
    with open('test_get_commit_info.py', 'w') as fh:
        fh.write('asdf')
    subprocess.check_call([scm, 'add', 'test_get_commit_info.py'])
    subprocess.check_call([scm, 'commit', '-m', 'asdf'])
    monkeypatch.setenv('PATH', os.getcwd())
    out = get_commit_info()
    assert (
        'No such file or directory' in out['error'] or
        'The system cannot find the file specified' in out['error'] or
        'FileNotFoundError' in out['error']
    )


def test_get_branch_info(scm, testdir):
    # make an initial commit
    testdir.tmpdir.join('foo.txt').ensure(file=True)
    subprocess.check_call([scm, 'add', 'foo.txt'])
    subprocess.check_call([scm, 'commit', '-m', 'added foo.txt'])
    branch = get_commit_info()['branch']
    expected = 'master' if scm == 'git' else 'default'
    assert branch == expected
    #
    # switch to a branch
    if scm == 'git':
        subprocess.check_call(['git', 'checkout', '-b', 'mybranch'])
    else:
        subprocess.check_call(['hg', 'branch', 'mybranch'])
    branch = get_commit_info()['branch']
    assert branch == 'mybranch'
    #
    # git only: test detached head
    if scm == 'git':
        subprocess.check_call(['git', 'commit', '--allow-empty', '-m', '...'])
        subprocess.check_call(['git', 'commit', '--allow-empty', '-m', '...'])
        subprocess.check_call(['git', 'checkout', 'HEAD~1'])
        assert get_commit_info()['branch'] == '(detached head)'


def test_no_branch_info(testdir):
    assert get_commit_info()['branch'] == '(unknown)'


def test_commit_info_error(testdir):
    testdir.mkdir('.git')
    info = get_commit_info()
    assert info['branch'].lower() == '(unknown)'.lower()
    assert info['error'].lower().startswith("calledprocesserror(128, 'fatal: not a git repository")


def test_parse_warmup():
    assert parse_warmup('yes') is True
    assert parse_warmup('on') is True
    assert parse_warmup('true') is True
    assert parse_warmup('off') is False
    assert parse_warmup('off') is False
    assert parse_warmup('no') is False
    assert parse_warmup('') is True
    assert parse_warmup('auto') in [True, False]


def test_parse_columns():
    assert parse_columns('min,max') == ['min', 'max']
    assert parse_columns('MIN, max ') == ['min', 'max']
    with pytest.raises(argparse.ArgumentTypeError):
        parse_columns('min,max,x')


@mark.parametrize('scm', [None, 'git', 'hg'])
@mark.parametrize('set_remote', [
    False,
    'https://example.com/pytest_benchmark_repo',
    'https://example.com/pytest_benchmark_repo.git',
    'c:\\foo\\bar\\pytest_benchmark_repo.git',
    'foo@example.com:pytest_benchmark_repo.git'])
def test_get_project_name(scm, set_remote, testdir):
    if scm is None:
        assert get_project_name().startswith("test_get_project_name")
        return
    if not distutils.spawn.find_executable(scm):
        pytest.skip("%r not available on $PATH" % scm)
    subprocess.check_call([scm, 'init', '.'])
    if scm == 'git' and set_remote:
        subprocess.check_call(['git', 'config', 'remote.origin.url', set_remote])
    elif scm == 'hg' and set_remote:
        set_remote = set_remote.replace('.git', '')
        set_remote = set_remote.replace('.com:', '/')
        testdir.tmpdir.join('.hg', 'hgrc').write(
            "[ui]\n"
            "username = you <you@example.com>\n"
            "[paths]\n"
            "default = %s\n" % set_remote)
    if set_remote:
        assert get_project_name() == "pytest_benchmark_repo"
    else:
        # use directory name if remote branch is not set
        assert get_project_name().startswith("test_get_project_name")


@mark.parametrize('scm', ['git', 'hg'])
def test_get_project_name_broken(scm, testdir):
    testdir.tmpdir.join('.' + scm).ensure(dir=1)
    assert get_project_name() in ['test_get_project_name_broken0', 'test_get_project_name_broken1']


def test_get_project_name_fallback(testdir, capfd):
    testdir.tmpdir.ensure('.hg', dir=1)
    project_name = get_project_name()
    assert project_name.startswith("test_get_project_name_fallback")
    assert capfd.readouterr() == ('', '')


def test_get_project_name_fallback_broken_hgrc(testdir, capfd):
    testdir.tmpdir.ensure('.hg', 'hgrc').write('[paths]\ndefault = /')
    project_name = get_project_name()
    assert project_name.startswith("test_get_project_name_fallback")
    assert capfd.readouterr() == ('', '')


def test_parse_elasticsearch_storage():
    benchdir = os.path.basename(os.getcwd())
    assert parse_elasticsearch_storage("http://localhost:9200") == (
        ["http://localhost:9200"], "benchmark", "benchmark", benchdir)
    assert parse_elasticsearch_storage("http://localhost:9200/benchmark2") == (
        ["http://localhost:9200"], "benchmark2", "benchmark", benchdir)
    assert parse_elasticsearch_storage("http://localhost:9200/benchmark2/benchmark2") == (
        ["http://localhost:9200"], "benchmark2", "benchmark2", benchdir)
    assert parse_elasticsearch_storage("http://host1:9200,host2:9200") == (
        ["http://host1:9200", "http://host2:9200"], "benchmark", "benchmark", benchdir)
    assert parse_elasticsearch_storage("http://host1:9200,host2:9200/benchmark2") == (
        ["http://host1:9200", "http://host2:9200"], "benchmark2", "benchmark", benchdir)
    assert parse_elasticsearch_storage("http://localhost:9200/benchmark2/benchmark2?project_name=project_name") == (
        ["http://localhost:9200"], "benchmark2", "benchmark2", "project_name")

pytest-benchmark-3.2.2/tests/test_benchmark.py

import json
import platform

import pytest

pytest_plugins = 'pytester',
platform


def test_help(testdir):
    result = testdir.runpytest_subprocess('--help')
    result.stdout.fnmatch_lines([
        "*",
        "*",
        "benchmark:",
        " --benchmark-min-time=SECONDS",
        " Minimum time per round in seconds. Default: '0.000005'",
        " --benchmark-max-time=SECONDS",
        " Maximum run time per test - it will be repeated until",
        " this total time is reached. It may be exceeded if test",
        " function is very slow or --benchmark-min-rounds is",
        " large (it takes precedence). Default: '1.0'",
        " --benchmark-min-rounds=NUM",
        " Minimum rounds, even if total time would exceed",
        " `--max-time`. Default: 5",
        " --benchmark-timer=FUNC",
        " Timer to use when measuring time. Default:*",
        " --benchmark-calibration-precision=NUM",
        " Precision to use when calibrating number of",
        " iterations. Precision of 10 will make the timer look",
        " 10 times more accurate, at a cost of less precise",
        " measure of deviations. Default: 10",
        " --benchmark-warmup=[KIND]",
        " Activates warmup. Will run the test function up to",
        " number of times in the calibration phase. See",
        " `--benchmark-warmup-iterations`. Note: Even the warmup",
        " phase obeys --benchmark-max-time. Available KIND:",
        " 'auto', 'off', 'on'. Default: 'auto' (automatically",
        " activate on PyPy).",
        " --benchmark-warmup-iterations=NUM",
        " Max number of iterations to run in the warmup phase.",
        " Default: 100000",
        " --benchmark-disable-gc",
        " Disable GC during benchmarks.",
        " --benchmark-skip Skip running any tests that contain benchmarks.",
        " --benchmark-only Only run benchmarks. This overrides --benchmark-skip.",
        " --benchmark-save=NAME",
        " Save the current run into 'STORAGE-",
        " PATH/counter_NAME.json'.",
        " --benchmark-autosave Autosave the current run into 'STORAGE-",
        " PATH/counter*.json",
        " --benchmark-save-data",
        " Use this to make --benchmark-save and --benchmark-",
        " autosave include all the timing data, not just the",
        " stats.",
        " --benchmark-json=PATH",
        " Dump a JSON report into PATH. Note that this will",
        " include the complete data (all the timings, not just",
        " the stats).",
        " --benchmark-compare=[NUM|_ID]",
        " Compare the current run against run NUM (or prefix of",
        " _id in elasticsearch) or the latest saved run if",
        " unspecified.",
        " --benchmark-compare-fail=EXPR?[[]EXPR?...[]]",
        " Fail test if performance regresses according to given",
        " EXPR (eg: min:5% or mean:0.001 for number of seconds).",
        " Can be used multiple times.",
        " --benchmark-cprofile=COLUMN",
        " If specified measure one run with cProfile and stores",
        " 25 top functions. Argument is a column to sort by.",
        " Available columns: 'ncallls_recursion', 'ncalls',",
        " 'tottime', 'tottime_per', 'cumtime', 'cumtime_per',",
        " 'function_name'.",
        " --benchmark-storage=URI",
        " Specify a path to store the runs as uri in form",
        " file://path or elasticsearch+http[s]://host1,host2/[in",
        " dex/doctype?project_name=Project] (when --benchmark-",
        " save or --benchmark-autosave are used). For backwards",
        " compatibility unexpected values are converted to",
        " file://. Default: 'file://./.benchmarks'.",
        " --benchmark-verbose Dump diagnostic and progress information.",
        " --benchmark-sort=COL Column to sort on. Can be one of: 'min', 'max',",
        " 'mean', 'stddev', 'name', 'fullname'. Default: 'min'",
        " --benchmark-group-by=LABEL",
        " How to group tests. Can be one of: 'group', 'name',",
        " 'fullname', 'func', 'fullfunc', 'param' or",
        " 'param:NAME', where NAME is the name passed to",
        " @pytest.parametrize. Default: 'group'",
        " --benchmark-columns=LABELS",
        " Comma-separated list of columns to show in the result",
        " table. Default: 'min, max, mean, stddev, median, iqr,",
        " outliers, ops, rounds, iterations'",
        " --benchmark-histogram=[FILENAME-PREFIX]",
        " Plot graphs of min/max/avg/stddev over time in",
        " FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX",
        " contains slashes ('/') then directories will be",
        " created. Default: '*'",
        "*",
    ])


def test_groups(testdir):
    test = testdir.makepyfile('''
"""
    >>> print('Yay, doctests!')
    Yay, doctests!
"""
import time
import pytest

def test_fast(benchmark):
    benchmark(lambda: time.sleep(0.000001))
    assert 1 == 1

def test_slow(benchmark):
    benchmark(lambda: time.sleep(0.001))
    assert 1 == 1

@pytest.mark.benchmark(group="A")
def test_slower(benchmark):
    benchmark(lambda: time.sleep(0.01))
    assert 1 == 1

@pytest.mark.benchmark(group="A", warmup=True)
def test_xfast(benchmark):
    benchmark(lambda: None)
    assert 1 == 1
''')
    result = testdir.runpytest_subprocess('-vv', '--doctest-modules', test)
    result.stdout.fnmatch_lines([
        "*collected 5 items",
        "*",
        "test_groups.py::*test_groups PASSED*",
        "test_groups.py::test_fast PASSED*",
        "test_groups.py::test_slow PASSED*",
        "test_groups.py::test_slower PASSED*",
        "test_groups.py::test_xfast PASSED*",
        "*",
        "* benchmark: 2 tests *",
        "*",
        "* benchmark 'A': 2 tests *",
        "*",
        "*====== 5 passed* seconds ======*",
    ])


SIMPLE_TEST = '''
"""
    >>> print('Yay, doctests!')
    Yay, doctests!
"""
import time
import pytest

def test_fast(benchmark):
    @benchmark
    def result():
        return time.sleep(0.000001)
    assert result == None

def test_slow(benchmark):
    benchmark(lambda: time.sleep(0.1))
    assert 1 == 1
'''

GROUPING_TEST = '''
import pytest

@pytest.mark.parametrize("foo", range(2))
@pytest.mark.benchmark(group="A")
def test_a(benchmark, foo):
    benchmark(str)

@pytest.mark.parametrize("foo", range(2))
@pytest.mark.benchmark(group="B")
def test_b(benchmark, foo):
    benchmark(int)
'''

GROUPING_PARAMS_TEST = '''
import pytest

@pytest.mark.parametrize("bar", ["bar1", "bar2"])
@pytest.mark.parametrize("foo", ["foo1", "foo2"])
@pytest.mark.benchmark(group="A")
def test_a(benchmark, foo, bar):
    benchmark(str)

@pytest.mark.parametrize("bar", ["bar1", "bar2"])
@pytest.mark.parametrize("foo", ["foo1", "foo2"])
@pytest.mark.benchmark(group="B")
def test_b(benchmark, foo, bar):
    benchmark(int)
'''


def test_group_by_name(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_TEST)
    test_y = testdir.makepyfile(test_y=GROUPING_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'name',
                                          test_x, test_y)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*', '*',
        "* benchmark 'test_a[[]0[]]': 2 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_a[[]0[]] *',
        'test_a[[]0[]] *',
        '----------------------*',
        '*',
        "* benchmark 'test_a[[]1[]]': 2 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_a[[]1[]] *',
        'test_a[[]1[]] *',
        '----------------------*',
        '*',
        "* benchmark 'test_b[[]0[]]': 2 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_b[[]0[]] *',
        'test_b[[]0[]] *',
        '----------------------*',
        '*',
        "* benchmark 'test_b[[]1[]]': 2 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_b[[]1[]] *',
        'test_b[[]1[]] *',
        '----------------------*',
    ])


def test_group_by_func(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_TEST)
    test_y = testdir.makepyfile(test_y=GROUPING_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'func',
                                          test_x, test_y)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*',
        "* benchmark 'test_a': 4 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_a[[]*[]] *',
        'test_a[[]*[]] *',
        'test_a[[]*[]] *',
        'test_a[[]*[]] *',
        '----------------------*',
        '*',
        "* benchmark 'test_b': 4 tests *",
        'Name (time in ?s) *',
        '----------------------*',
        'test_b[[]*[]] *',
        'test_b[[]*[]] *',
        'test_b[[]*[]] *',
        'test_b[[]*[]] *',
        '----------------------*',
        '*',
        '*',
        '============* 8 passed* seconds ============*',
    ])


def test_group_by_fullfunc(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_TEST)
    test_y = testdir.makepyfile(test_y=GROUPING_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'fullfunc',
                                          test_x, test_y)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*', '*',
        "* benchmark 'test_x.py::test_a': 2 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_a[[]*[]] *',
        'test_a[[]*[]] *',
        '------------------*',
        '',
        "* benchmark 'test_x.py::test_b': 2 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_b[[]*[]] *',
        'test_b[[]*[]] *',
        '------------------*',
        '',
        "* benchmark 'test_y.py::test_a': 2 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_a[[]*[]] *',
        'test_a[[]*[]] *',
        '------------------*',
        '',
        "* benchmark 'test_y.py::test_b': 2 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_b[[]*[]] *',
        'test_b[[]*[]] *',
        '------------------*',
        '',
        'Legend:',
        ' Outliers: 1 Standard Deviation from M*',
        '============* 8 passed* seconds ============*',
    ])


def test_group_by_param_all(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_TEST)
    test_y = testdir.makepyfile(test_y=GROUPING_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'param',
                                          test_x, test_y)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*', '*',
        "* benchmark '0': 4 tests *",
        'Name (time in ?s) *',
        '-------------------*',
        'test_*[[]0[]] *',
        'test_*[[]0[]] *',
        'test_*[[]0[]] *',
        'test_*[[]0[]] *',
        '-------------------*',
        '',
        "* benchmark '1': 4 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_*[[]1[]] *',
        'test_*[[]1[]] *',
        'test_*[[]1[]] *',
        'test_*[[]1[]] *',
        '------------------*',
        '',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd '
        'Quartile.',
        '============* 8 passed* seconds ============*',
    ])


def test_group_by_param_select(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_PARAMS_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'param:foo',
                                          '--benchmark-sort', 'fullname', test_x)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*', '*',
        "* benchmark 'foo=foo1': 4 tests *",
        'Name (time in ?s) *',
        '-------------------*',
        'test_a[[]foo1-bar1[]] *',
        'test_a[[]foo1-bar2[]] *',
        'test_b[[]foo1-bar1[]] *',
        'test_b[[]foo1-bar2[]] *',
        '-------------------*',
        '',
        "* benchmark 'foo=foo2': 4 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_a[[]foo2-bar1[]] *',
        'test_a[[]foo2-bar2[]] *',
        'test_b[[]foo2-bar1[]] *',
        'test_b[[]foo2-bar2[]] *',
        '------------------*',
        '',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd '
        'Quartile.',
        '============* 8 passed* seconds ============*',
    ])


def test_group_by_param_select_multiple(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_PARAMS_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by',
                                          'param:foo,param:bar', '--benchmark-sort', 'fullname', test_x)
    result.stdout.fnmatch_lines([
        '*', '*', '*', '*', '*',
        "* benchmark 'foo=foo1 bar=bar1': 2 tests *",
        'Name (time in ?s) *',
        '-------------------*',
        'test_a[[]foo1-bar1[]] *',
        'test_b[[]foo1-bar1[]] *',
        '-------------------*',
        '',
        "* benchmark 'foo=foo1 bar=bar2': 2 tests *",
        'Name (time in ?s) *',
        '-------------------*',
        'test_a[[]foo1-bar2[]] *',
        'test_b[[]foo1-bar2[]] *',
        '-------------------*',
        '',
        "* benchmark 'foo=foo2 bar=bar1': 2 tests *",
        'Name (time in ?s) *',
        '------------------*',
        'test_a[[]foo2-bar1[]] *',
        'test_b[[]foo2-bar1[]] *',
        '-------------------*',
        '',
        "* benchmark 'foo=foo2 bar=bar2': 2 tests *",
        'Name (time in ?s) *',
        '-------------------*',
        'test_a[[]foo2-bar2[]] *',
        'test_b[[]foo2-bar2[]] *',
        '------------------*',
        '',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd '
        'Quartile.',
        '============* 8 passed* seconds ============*',
    ])


def test_group_by_fullname(testdir):
    test_x = testdir.makepyfile(test_x=GROUPING_TEST)
    test_y = testdir.makepyfile(test_y=GROUPING_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--benchmark-group-by', 'fullname',
                                          test_x, test_y)
    result.stdout.fnmatch_lines_random([
        "* benchmark 'test_x.py::test_a[[]0[]]': 1 tests *",
        "* benchmark 'test_x.py::test_a[[]1[]]': 1 tests *",
        "* benchmark 'test_x.py::test_b[[]0[]]': 1 tests *",
        "* benchmark 'test_x.py::test_b[[]1[]]': 1 tests *",
        "* benchmark 'test_y.py::test_a[[]0[]]': 1 tests *",
        "* benchmark 'test_y.py::test_a[[]1[]]': 1 tests *",
        "* benchmark 'test_y.py::test_b[[]0[]]': 1 tests *",
        "* benchmark 'test_y.py::test_b[[]1[]]': 1 tests *",
        '============* 8 passed* seconds ============*',
    ])


def test_double_use(testdir):
    test = testdir.makepyfile('''
def test_a(benchmark):
    benchmark(lambda: None)
    benchmark.pedantic(lambda: None)

def test_b(benchmark):
    benchmark.pedantic(lambda: None)
    benchmark(lambda: None)
''')
    result = testdir.runpytest_subprocess(test, '--tb=line')
    result.stdout.fnmatch_lines([
        '*FixtureAlreadyUsed: Fixture can only be used once. Previously it was used in benchmark(...) mode.',
        '*FixtureAlreadyUsed: Fixture can only be used once. Previously it was used in benchmark.pedantic(...) mode.',
    ])


def test_only_override_skip(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--benchmark-only', '--benchmark-skip', test)
    result.stdout.fnmatch_lines([
        "*collected 2 items",
        "test_only_override_skip.py ..*",
        "* benchmark: 2 tests *",
        "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations",
        "------*",
        "test_fast *",
        "test_slow *",
        "------*",
        "*====== 2 passed* seconds ======*",
    ])


def test_conflict_between_only_and_disable(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--benchmark-only', '--benchmark-disable', test)
    result.stderr.fnmatch_lines([
        "ERROR: Can't have both --benchmark-only and --benchmark-disable options. Note that --benchmark-disable is "
        "automatically activated if xdist is on or you're missing the statistics dependency."
    ])


def test_max_time_min_rounds(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-max-time=0.000001',
                                          '--benchmark-min-rounds=1', test)
    result.stdout.fnmatch_lines([
        "*collected 3 items",
        "test_max_time_min_rounds.py ...*",
        "* benchmark: 2 tests *",
        "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations",
        "------*",
        "test_fast * 1 *",
        "test_slow * 1 *",
        "------*",
        "*====== 3 passed* seconds ======*",
    ])


def test_max_time(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-max-time=0.000001', test)
    result.stdout.fnmatch_lines([
        "*collected 3 items",
        "test_max_time.py ...*",
        "* benchmark: 2 tests *",
        "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations",
        "------*",
        "test_fast * 5 *",
        "test_slow * 5 *",
        "------*",
        "*====== 3 passed* seconds ======*",
    ])


def test_bogus_max_time(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-max-time=bogus', test)
    result.stderr.fnmatch_lines([
        "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]",
        "*py*: error: argument --benchmark-max-time: Invalid decimal value 'bogus': InvalidOperation*",
    ])


@pytest.mark.skipif("platform.python_implementation() == 'PyPy'")
def test_pep418_timer(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules',
                                          '--benchmark-timer=pep418.perf_counter', test)
    result.stdout.fnmatch_lines([
        "* (defaults: timer=*.perf_counter*",
    ])


def test_bad_save(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-save=asd:f?', test)
    result.stderr.fnmatch_lines([
        "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]",
        "*py*: error: argument --benchmark-save: Must not contain any of these characters: /:*?<>|\\ (it has ':?')",
    ])


def test_bad_save_2(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-save=', test)
    result.stderr.fnmatch_lines([
        "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]",
        "*py*: error: argument --benchmark-save: Can't be empty.",
    ])


def test_bad_compare_fail(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-compare-fail=?', test)
    result.stderr.fnmatch_lines([
        "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]",
        "*py*: error: argument --benchmark-compare-fail: Could not parse value: '?'.",
    ])


def test_bad_rounds(testdir):
    test = testdir.makepyfile(SIMPLE_TEST)
    result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-min-rounds=asd', test)
    result.stderr.fnmatch_lines([
        "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]",
        "*py*: error: argument --benchmark-min-rounds: invalid literal for int() with base 10: 'asd'",
    ])


def test_bad_rounds_2(testdir):
    test =
testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-min-rounds=0', test) result.stderr.fnmatch_lines([ "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]", "*py*: error: argument --benchmark-min-rounds: Value for --benchmark-rounds must be at least 1.", ]) def test_compare(testdir): test = testdir.makepyfile(SIMPLE_TEST) testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-autosave', test) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-compare=0001', '--benchmark-compare-fail=min:0.1', test) result.stderr.fnmatch_lines([ "Comparing against benchmarks from: *0001_unversioned_*.json", ]) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-compare=0001', '--benchmark-compare-fail=min:1%', test) result.stderr.fnmatch_lines([ "Comparing against benchmarks from: *0001_unversioned_*.json", ]) def test_compare_last(testdir): test = testdir.makepyfile(SIMPLE_TEST) testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-autosave', test) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-compare', '--benchmark-compare-fail=min:0.1', test) result.stderr.fnmatch_lines([ "Comparing against benchmarks from: *0001_unversioned_*.json", ]) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-compare', '--benchmark-compare-fail=min:1%', test) result.stderr.fnmatch_lines([ "Comparing against benchmarks from: *0001_unversioned_*.json", ]) def test_compare_non_existing(testdir): test = testdir.makepyfile(SIMPLE_TEST) testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-autosave', test) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', 
'--doctest-modules', '--benchmark-compare=0002', '-rw', test) result.stderr.fnmatch_lines([ "* PytestBenchmarkWarning: Can't compare. No benchmark files * '0002'.", ]) def test_compare_non_existing_verbose(testdir): test = testdir.makepyfile(SIMPLE_TEST) testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-autosave', test) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-compare=0002', test, '--benchmark-verbose') result.stderr.fnmatch_lines([ " WARNING: Can't compare. No benchmark files * '0002'.", ]) def test_compare_no_files(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '-rw', test, '--benchmark-compare') result.stderr.fnmatch_lines([ "* PytestBenchmarkWarning: Can't compare. No benchmark files in '*'. Can't load the previous benchmark." ]) def test_compare_no_files_verbose(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', test, '--benchmark-compare', '--benchmark-verbose') result.stderr.fnmatch_lines([ " WARNING: Can't compare. No benchmark files in '*'." " Can't load the previous benchmark." ]) def test_compare_no_files_match(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '-rw', test, '--benchmark-compare=1') result.stderr.fnmatch_lines([ "* PytestBenchmarkWarning: Can't compare. No benchmark files in '*' match '1'." ]) def test_compare_no_files_match_verbose(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', test, '--benchmark-compare=1', '--benchmark-verbose') result.stderr.fnmatch_lines([ " WARNING: Can't compare. No benchmark files in '*' match '1'." 
]) def test_verbose(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-max-time=0.0000001', '--doctest-modules', '--benchmark-verbose', '-vv', test) result.stderr.fnmatch_lines([ " Calibrating to target round *s; will estimate when reaching *s (using: *, precision: *).", " Measured * iterations: *s.", " Running * rounds x * iterations ...", " Ran for *s.", ]) def test_save(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-save=foobar', '--benchmark-max-time=0.0000001', test) result.stderr.fnmatch_lines([ "Saved benchmark data in: *", ]) json.loads(testdir.tmpdir.join('.benchmarks').listdir()[0].join('0001_foobar.json').read()) def test_save_extra_info(testdir): test = testdir.makepyfile(""" def test_extra(benchmark): benchmark.extra_info['foo'] = 'bar' benchmark(lambda: None) """) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-save=foobar', '--benchmark-max-time=0.0000001', test) result.stderr.fnmatch_lines([ "Saved benchmark data in: *", ]) info = json.loads(testdir.tmpdir.join('.benchmarks').listdir()[0].join('0001_foobar.json').read()) bench_info = info['benchmarks'][0] assert bench_info['name'] == 'test_extra' assert bench_info['extra_info'] == {'foo': 'bar'} def test_update_machine_info_hook_detection(testdir): """Tests detection and execution of update_machine_info_hooks. Verifies that machine info hooks are detected and executed in nested `conftest.py`s.
""" record_path_conftest = ''' import os def pytest_benchmark_update_machine_info(config, machine_info): machine_info["conftest_path"] = ( machine_info.get("conftest_path", []) + [os.path.relpath(__file__)] ) ''' simple_test = ''' def test_simple(benchmark): @benchmark def resuilt(): 1+1 ''' testdir.makepyfile(**{ "conftest": record_path_conftest, "test_module/conftest": record_path_conftest, "test_module/tests/conftest": record_path_conftest, "test_module/tests/simple_test.py": simple_test, }) def run_verify_pytest(*args): testdir.runpytest_subprocess( '--benchmark-json=benchmark.json', '--benchmark-max-time=0.0000001', *args ) benchmark_json = json.loads(testdir.tmpdir.join('benchmark.json').read()) machine_info = benchmark_json["machine_info"] assert sorted( i.replace('\\', '/') for i in machine_info["conftest_path"] ) == sorted([ "conftest.py", "test_module/conftest.py", "test_module/tests/conftest.py", ]) run_verify_pytest("test_module/tests") run_verify_pytest("test_module") run_verify_pytest(".") def test_histogram(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-histogram=foobar', '--benchmark-max-time=0.0000001', test) result.stderr.fnmatch_lines([ "Generated histogram: *foobar.svg", ]) assert [f.basename for f in testdir.tmpdir.listdir("*.svg", sort=True)] == [ 'foobar.svg', ] def test_autosave(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-autosave', '--benchmark-max-time=0.0000001', test) result.stderr.fnmatch_lines([ "Saved benchmark data in: *", ]) json.loads(testdir.tmpdir.join('.benchmarks').listdir()[0].listdir('0001_*.json')[0].read()) def test_bogus_min_time(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '--benchmark-min-time=bogus', test) result.stderr.fnmatch_lines([ "usage: py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]", 
"py*: error: argument --benchmark-min-time: Invalid decimal value 'bogus': InvalidOperation*", ]) def test_disable_gc(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-disable-gc', test) result.stdout.fnmatch_lines([ "*collected 2 items", "test_disable_gc.py ..*", "* benchmark: 2 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_fast *", "test_slow *", "------*", "*====== 2 passed* seconds ======*", ]) def test_custom_timer(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-timer=time.time', test) result.stdout.fnmatch_lines([ "*collected 2 items", "test_custom_timer.py ..*", "* benchmark: 2 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_fast *", "test_slow *", "------*", "*====== 2 passed* seconds ======*", ]) def test_bogus_timer(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-timer=bogus', test) result.stderr.fnmatch_lines([ "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]", "*py*: error: argument --benchmark-timer: Value for --benchmark-timer must be in dotted form. 
Eg: " "'module.attr'.", ]) def test_sort_by_mean(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-sort=mean', test) result.stdout.fnmatch_lines([ "*collected 2 items", "test_sort_by_mean.py ..*", "* benchmark: 2 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_fast *", "test_slow *", "------*", "*====== 2 passed* seconds ======*", ]) def test_bogus_sort(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-sort=bogus', test) result.stderr.fnmatch_lines([ "usage: *py* [[]options[]] [[]file_or_dir[]] [[]file_or_dir[]] [[]...[]]", "*py*: error: argument --benchmark-sort: Unacceptable value: 'bogus'. Value for --benchmark-sort must be one " "of: 'min', 'max', 'mean', 'stddev', 'name', 'fullname'." ]) def test_xdist(testdir): pytest.importorskip('xdist') test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '-n', '1', '-rw', test) result.stderr.fnmatch_lines([ "* Benchmarks are automatically disabled because xdist plugin is active.Benchmarks cannot be " "performed reliably in a parallelized environment.", ]) def test_xdist_verbose(testdir): pytest.importorskip('xdist') test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--doctest-modules', '-n', '1', '--benchmark-verbose', test) result.stderr.fnmatch_lines([ "------*", " WARNING: Benchmarks are automatically disabled because xdist plugin is active.Benchmarks cannot be performed " "reliably in a parallelized environment.", "------*", ]) def test_cprofile(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = testdir.runpytest_subprocess('--benchmark-cprofile=cumtime', test) result.stdout.fnmatch_lines([ "------------*----------- cProfile (time in s) ------------*-----------", "test_cprofile.py::test_fast", "ncalls tottime percall cumtime percall filename:lineno(function)", # "1 0.0000 0.0000 0.0001 0.0001 
test_cprofile0/test_cprofile.py:9(result)", # "1 0.0001 0.0001 0.0001 0.0001 ~:0()", # "1 0.0000 0.0000 0.0000 0.0000 ~:0()", "", "test_cprofile.py::test_slow", "ncalls tottime percall cumtime percall filename:lineno(function)", # "1 0.0000 0.0000 0.1002 0.1002 test_cprofile0/test_cprofile.py:15()", # "1 0.1002 0.1002 0.1002 0.1002 ~:0()", # "1 0.0000 0.0000 0.0000 0.0000 ~:0()", ]) def test_abort_broken(testdir): """ Test that we don't benchmark code that raises exceptions. """ test = testdir.makepyfile(''' """ >>> print('Yay, doctests!') Yay, doctests! """ import time import pytest def test_bad(benchmark): @benchmark def result(): raise Exception() assert 1 == 1 def test_bad2(benchmark): @benchmark def result(): time.sleep(0.1) assert 1 == 0 @pytest.fixture(params=['a', 'b', 'c']) def bad_fixture(request): raise ImportError() def test_ok(benchmark, bad_fixture): @benchmark def result(): time.sleep(0.1) assert 1 == 0 ''') result = testdir.runpytest_subprocess('-vv', test) result.stdout.fnmatch_lines([ "*collected 5 items", "*", "test_abort_broken.py::test_bad FAILED*", "test_abort_broken.py::test_bad2 FAILED*", "test_abort_broken.py::test_ok*a* ERROR*", "test_abort_broken.py::test_ok*b* ERROR*", "test_abort_broken.py::test_ok*c* ERROR*", "*====== ERRORS ======*", "*______ ERROR at setup of test_ok[[]a[]] ______*", "request = >", " @pytest.fixture(params=['a', 'b', 'c'])", " def bad_fixture(request):", "> raise ImportError()", "E ImportError", "test_abort_broken.py:22: ImportError", "*______ ERROR at setup of test_ok[[]b[]] ______*", "request = >", " @pytest.fixture(params=['a', 'b', 'c'])", " def bad_fixture(request):", "> raise ImportError()", "E ImportError", "test_abort_broken.py:22: ImportError", "*______ ERROR at setup of test_ok[[]c[]] ______*", "request = >", " @pytest.fixture(params=['a', 'b', 'c'])", " def bad_fixture(request):", "> raise ImportError()", "E ImportError", "test_abort_broken.py:22: ImportError", "*====== FAILURES ======*", "*______ test_bad 
______*", "benchmark = ", " def test_bad(benchmark):", "> @benchmark", " def result():", "test_abort_broken.py:*", "_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _*", "*", "_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _*", " @benchmark", " def result():", "> raise Exception()", "E Exception", "test_abort_broken.py:11: Exception", "*______ test_bad2 ______*", "benchmark = ", " def test_bad2(benchmark):", " @benchmark", " def result():", " time.sleep(0.1)", "> assert 1 == 0", "E assert 1 == 0", "test_abort_broken.py:18: AssertionError", ]) result.stdout.fnmatch_lines([ "* benchmark: 1 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_bad2 *", "------*", "*====== 2 failed*, 3 error* seconds ======*", ]) BASIC_TEST = ''' """ Just to make sure the plugin doesn't choke on doctests:: >>> print('Yay, doctests!') Yay, doctests! """ import time from functools import partial import pytest def test_fast(benchmark): @benchmark def result(): return time.sleep(0.000001) assert result is None def test_slow(benchmark): assert benchmark(partial(time.sleep, 0.001)) is None def test_slower(benchmark): benchmark(lambda: time.sleep(0.01)) @pytest.mark.benchmark(min_rounds=2) def test_xfast(benchmark): benchmark(str) def test_fast(benchmark): benchmark(int) ''' def test_basic(testdir): test = testdir.makepyfile(BASIC_TEST) result = testdir.runpytest_subprocess('-vv', '--doctest-modules', test) result.stdout.fnmatch_lines([ "*collected 5 items", "test_basic.py::*test_basic PASSED*", "test_basic.py::test_slow PASSED*", "test_basic.py::test_slower PASSED*", "test_basic.py::test_xfast PASSED*", "test_basic.py::test_fast PASSED*", "", "* benchmark: 4 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_* *", "test_* *", "test_* *", "test_* *", "------*", "", "*====== 5 passed* seconds ======*", ]) def test_skip(testdir): test = testdir.makepyfile(BASIC_TEST) result = testdir.runpytest_subprocess('-vv', '--doctest-modules', 
'--benchmark-skip', test) result.stdout.fnmatch_lines([ "*collected 5 items", "test_skip.py::*test_skip PASSED*", "test_skip.py::test_slow SKIPPED*", "test_skip.py::test_slower SKIPPED*", "test_skip.py::test_xfast SKIPPED*", "test_skip.py::test_fast SKIPPED*", "*====== 1 passed, 4 skipped* seconds ======*", ]) def test_disable(testdir): test = testdir.makepyfile(BASIC_TEST) result = testdir.runpytest_subprocess('-vv', '--doctest-modules', '--benchmark-disable', test) result.stdout.fnmatch_lines([ "*collected 5 items", "test_disable.py::*test_disable PASSED*", "test_disable.py::test_slow PASSED*", "test_disable.py::test_slower PASSED*", "test_disable.py::test_xfast PASSED*", "test_disable.py::test_fast PASSED*", "*====== 5 passed* seconds ======*", ]) def test_mark_selection(testdir): test = testdir.makepyfile(BASIC_TEST) result = testdir.runpytest_subprocess('-vv', '--doctest-modules', '-m', 'benchmark', test) result.stdout.fnmatch_lines([ "*collected 5 items*", "test_mark_selection.py::test_xfast PASSED*", "* benchmark: 1 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_xfast *", "------*", "*====== 1 passed, 4 deselected* seconds ======*", ]) def test_only_benchmarks(testdir): test = testdir.makepyfile(BASIC_TEST) result = testdir.runpytest_subprocess('-vv', '--doctest-modules', '--benchmark-only', test) result.stdout.fnmatch_lines([ "*collected 5 items", "test_only_benchmarks.py::*test_only_benchmarks SKIPPED*", "test_only_benchmarks.py::test_slow PASSED*", "test_only_benchmarks.py::test_slower PASSED*", "test_only_benchmarks.py::test_xfast PASSED*", "test_only_benchmarks.py::test_fast PASSED*", "* benchmark: 4 tests *", "Name (time in ?s) * Min * Max * Mean * StdDev * Rounds * Iterations", "------*", "test_* *", "test_* *", "test_* *", "test_* *", "------*", "*====== 4 passed, 1 skipped* seconds ======*", ]) def test_columns(testdir): test = testdir.makepyfile(SIMPLE_TEST) result = 
testdir.runpytest_subprocess('--doctest-modules', '--benchmark-columns=max,iterations,min', test) result.stdout.fnmatch_lines([ "*collected 3 items", "test_columns.py ...*", "* benchmark: 2 tests *", "Name (time in ?s) * Max * Iterations * Min *", "------*", ]) pytest-benchmark-3.2.2/tests/test_storage/0000755000175000017500000000000013416261170016746 5ustar hlehle././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0005_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030207_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0005_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317113416261170026531 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "machine_info": { "python_compiler": "GCC 4.6.3", "python_version": "2.7.3", "python_implementation": "CPython", "processor": "x86_64", "system": "Linux", "node": "minibox", "machine": "x86_64", "release": "3.13.0-55-generic" }, "version": "2.5.0", "benchmarks": [ { "options": { "timer": "time", "disable_gc": false, "warmup": false, "min_time": 2.5e-05, "max_time": 1.0, "min_rounds": 5 }, "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "stats": { "iterations": 110, "iqr_outliers": 2138, "mean": 2.9613341201221054e-07, "stddev_outliers": 33, "median": 2.275813709605824e-07, "q3": 2.275813709605824e-07, "rounds": 9099, "q1": 2.189116044477983e-07, "max": 6.659897890957919e-05, "hd15iqr": 2.449209039861506e-07, "iqr": 8.669766512784094e-09, "ld15iqr": 2.1674416281960227e-07, "outliers": "33;2138", "min": 2.1674416281960227e-07, "stddev": 1.0773177082184698e-06 } } ], "datetime": "2015-08-15T00:02:07.400444" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0014_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030253_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0014_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026534 0ustar hlehle{ "version": "2.5.0", "machine_info": { "python_implementation": "CPython", "system": "Linux", "release": "3.13.0-55-generic", "python_version": "2.7.3", "python_compiler": "GCC 4.6.3", "node": "minibox", "machine": "x86_64", "processor": "x86_64" }, "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "benchmarks": [ { "group": null, "options": { "disable_gc": false, "timer": "time", "min_rounds": 5, "max_time": 1.0, "min_time": 2.5e-05, "warmup": false }, "stats": { "max": 1.7053567179756074e-05, "median": 2.182136147709216e-07, "q3": 2.2052275355156628e-07, "ld15iqr": 2.1532719129511577e-07, "stddev": 3.89806564163915e-07, "outliers": "50;2495", "iqr_outliers": 2495, "min": 2.1532719129511577e-07, "rounds": 11009, "stddev_outliers": 50, "q1": 2.1763633007576044e-07, "hd15iqr": 2.2514103111285562e-07, "mean": 2.5026479245141756e-07, "iterations": 413, "iqr": 2.886423475805845e-09 }, "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ], "datetime": "2015-08-15T00:02:53.505468" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0018_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030315_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0018_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317513416261170026541 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "version": "2.5.0", "benchmarks": [ { "stats": { "ld15iqr": 2.199454688564051e-07, "iterations": 413, "mean": 2.4454952190050414e-07, "max": 1.9833192986957098e-05, 
"stddev_outliers": 76, "iqr_outliers": 4314, "rounds": 11009, "min": 2.1763633007576044e-07, "q1": 2.199454688564051e-07, "q3": 2.2052275355156628e-07, "outliers": "76;4314", "hd15iqr": 2.2225460763704978e-07, "iqr": 5.772846951611743e-10, "median": 2.2052275355156628e-07, "stddev": 2.7747228347912803e-07 }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "group": null, "name": "test_xfast_parametrized[0]", "options": { "timer": "time", "min_rounds": 5, "max_time": 1.0, "min_time": 2.5e-05, "disable_gc": false, "warmup": false } } ], "machine_info": { "machine": "x86_64", "processor": "x86_64", "node": "minibox", "python_compiler": "GCC 4.6.3", "python_implementation": "CPython", "release": "3.13.0-55-generic", "system": "Linux", "python_version": "2.7.3" }, "datetime": "2015-08-15T00:03:14.891748" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0024_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030346_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0024_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000320013416261170026523 0ustar hlehle{ "benchmarks": [ { "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "mean": 2.4120267459896217e-07, "outliers": "112;2106", "iterations": 401, "q1": 2.1701441738670902e-07, "q3": 2.1939265757724828e-07, "iqr": 2.378240190539261e-09, "min": 2.1641985733907418e-07, "iqr_outliers": 2106, "rounds": 11245, "hd15iqr": 2.2414913795832683e-07, "stddev": 2.0881103703757767e-07, "stddev_outliers": 112, "median": 2.1939265757724828e-07, "ld15iqr": 2.1641985733907418e-07, "max": 1.351672812292998e-05 }, "options": { "disable_gc": false, "max_time": 1.0, "warmup": false, "timer": "time", "min_time": 2.5e-05, "min_rounds": 5 }, "group": null } ], "version": "2.5.0", "datetime": "2015-08-15T00:03:45.705165", "commit_info": { "dirty": true, 
"id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "node": "minibox", "processor": "x86_64", "machine": "x86_64", "python_version": "2.7.3", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "system": "Linux", "python_implementation": "CPython" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0007_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030218_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0007_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317613416261170026540 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "datetime": "2015-08-15T00:02:17.892028", "machine_info": { "node": "minibox", "python_implementation": "CPython", "processor": "x86_64", "system": "Linux", "python_version": "2.7.3", "machine": "x86_64", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic" }, "version": "2.5.0", "benchmarks": [ { "name": "test_xfast_parametrized[0]", "stats": { "ld15iqr": 2.1722581651475694e-07, "outliers": "47;2017", "iqr": 2.9434392481674513e-09, "rounds": 8775, "hd15iqr": 2.2723350995852623e-07, "stddev_outliers": 47, "stddev": 6.424232291757309e-07, "q1": 2.1958056791329089e-07, "min": 2.1722581651475694e-07, "q3": 2.2252400716145834e-07, "max": 3.0694184479890045e-05, "iqr_outliers": 2017, "iterations": 405, "median": 2.1958056791329089e-07, "mean": 2.7053752235143444e-07 }, "options": { "warmup": false, "min_time": 2.5e-05, "timer": "time", "max_time": 1.0, "disable_gc": false, "min_rounds": 5 }, "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0030_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030419_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0030_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000323013416261170026523 0ustar hlehle{ "benchmarks": [ { "name": "test_xfast_parametrized[0]", "group": null, "params": null, "stats": { "rounds": 9710, "iqr_outliers": 1726, "mean": 2.61205074138546e-07, "iterations": 431, "hd15iqr": 2.666305223363182e-07, "outliers": "160;1726", "ld15iqr": 2.1795109087242604e-07, "stddev": 2.639842188231226e-07, "stddev_outliers": 160, "median": 2.2016379230260297e-07, "iqr": 1.8807962156503776e-08, "max": 1.329003796500009e-05, "min": 2.1795109087242604e-07, "q1": 2.1795109087242604e-07, "q3": 2.3675905302892982e-07 }, "options": { "min_time": 2.5e-05, "max_time": 1.0, "min_rounds": 5, "warmup": false, "timer": "time", "disable_gc": false }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ], "datetime": "2015-08-15T00:04:18.687119", "machine_info": { "processor": "x86_64", "system": "Linux", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "python_version": "2.7.3", "node": "minibox", "machine": "x86_64" }, "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "version": "2.5.0" } ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0029_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030413_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0029_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317513416261170026543 0ustar hlehle{ "version": "2.5.0", "machine_info": { "node": "minibox", "system": "Linux", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "python_version": "2.7.3", "machine": "x86_64", 
"processor": "x86_64" }, "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "datetime": "2015-08-15T00:04:13.282283", "benchmarks": [ { "stats": { "median": 2.182136147709216e-07, "stddev_outliers": 72, "max": 1.559072776221767e-05, "q1": 2.1763633007576044e-07, "q3": 2.3495487093059548e-07, "rounds": 10895, "stddev": 3.0343534747680615e-07, "hd15iqr": 2.615099669080092e-07, "ld15iqr": 2.1532719129511577e-07, "iterations": 413, "iqr_outliers": 1749, "mean": 2.5172336199924715e-07, "min": 2.1532719129511577e-07, "iqr": 1.7318540854835044e-08, "outliers": "72;1749" }, "group": null, "options": { "max_time": 1.0, "timer": "time", "warmup": false, "min_time": 2.5e-05, "min_rounds": 5, "disable_gc": false }, "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0006_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030213_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0006_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026535 0ustar hlehle{ "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "version": "2.5.0", "benchmarks": [ { "options": { "max_time": 1.0, "min_time": 2.5e-05, "warmup": false, "disable_gc": false, "min_rounds": 5, "timer": "time" }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "group": null, "stats": { "stddev_outliers": 90, "stddev": 2.7587292537654084e-07, "hd15iqr": 2.449805583428899e-07, "iqr": 9.842968861991106e-09, "median": 2.2037313618791212e-07, "q1": 2.1982630458446818e-07, "q3": 2.296692734464593e-07, "ld15iqr": 2.1763897817069237e-07, "min": 2.1763897817069237e-07, "iterations": 436, "max": 1.2089353088938862e-05, "outliers": "90;1708", "rounds": 9893, "iqr_outliers": 1708, "mean": 
2.5037088191877696e-07 } } ], "datetime": "2015-08-15T00:02:12.855407", "machine_info": { "node": "minibox", "python_version": "2.7.3", "system": "Linux", "machine": "x86_64", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "processor": "x86_64" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0003_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030157_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0003_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317713416261170026535 0ustar hlehle{ "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "max_time": 1.0, "disable_gc": false, "min_time": 2.5e-05, "min_rounds": 5, "warmup": false, "timer": "time" }, "stats": { "q3": 2.2790011237649358e-07, "outliers": "114;2290", "q1": 2.1796600491392846e-07, "max": 1.0318615857292624e-05, "stddev": 1.744253318545627e-07, "median": 2.1855036417643228e-07, "iterations": 408, "rounds": 11126, "mean": 2.4571645211663035e-07, "ld15iqr": 2.1562856786391314e-07, "stddev_outliers": 114, "hd15iqr": 2.430934532015931e-07, "iqr": 9.934107462565122e-09, "iqr_outliers": 2290, "min": 2.1562856786391314e-07 }, "group": null, "name": "test_xfast_parametrized[0]" } ], "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "python_version": "2.7.3", "system": "Linux", "processor": "x86_64", "python_implementation": "CPython", "node": "minibox", "python_compiler": "GCC 4.6.3", "machine": "x86_64", "release": "3.13.0-55-generic" }, "datetime": "2015-08-15T00:01:57.053896", "version": "2.5.0" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0009_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030228_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0009_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026540 0ustar hlehle{ "version": "2.5.0", "datetime": "2015-08-15T00:02:28.271504", "benchmarks": [ { "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "rounds": 10755, "iqr": 2.2706531343006136e-09, "mean": 2.433197485284543e-07, "hd15iqr": 2.2592998686290922e-07, "q1": 2.1911802746000743e-07, "q3": 2.2138868059430804e-07, "median": 2.1911802746000743e-07, "ld15iqr": 2.162797110421317e-07, "max": 1.0435921805245536e-05, "iqr_outliers": 2503, "min": 2.162797110421317e-07, "stddev_outliers": 49, "iterations": 420, "stddev": 2.397469802275838e-07, "outliers": "49;2503" }, "group": null, "options": { "timer": "time", "warmup": false, "min_time": 2.5e-05, "min_rounds": 5, "disable_gc": false, "max_time": 1.0 } } ], "machine_info": { "release": "3.13.0-55-generic", "python_implementation": "CPython", "system": "Linux", "node": "minibox", "machine": "x86_64", "processor": "x86_64", "python_version": "2.7.3", "python_compiler": "GCC 4.6.3" }, "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_1900000644000175000017500000000310513416261170026442 0ustar hlehle{ "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "outliers": "235;1688", "min": 2.1690275610947028e-07, "rounds": 11009, "mean": 2.540585175827918e-07, "hd15iqr": 2.87846821110423e-07, "q3": 
2.465597013147866e-07, "q1": 2.192287910275343e-07, "max": 7.739299681128525e-06, "stddev_outliers": 235, "stddev": 0, "iterations": 410, "iqr": 2.7330910287252284e-08, "iqr_outliers": 1688, "ld15iqr": 2.1690275610947028e-07, "median": 2.1981029975705031e-07 }, "name": "test_xfast_parametrized[0]", "group": null, "options": { "disable_gc": false, "timer": "time", "min_rounds": 5, "max_time": 1.0, "min_time": 2.5e-05, "warmup": false } } ], "version": "2.5.0", "machine_info": { "machine": "x86_64", "processor": "x86_64", "python_implementation": "CPython", "python_version": "2.7.3", "release": "3.13.0-55-generic", "python_compiler": "GCC 4.6.3", "node": "minibox", "system": "Linux" }, "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "datetime": "2015-08-15T00:01:51.557705" } ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0027_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030402_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0027_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317513416261170026541 0ustar hlehle{ "benchmarks": [ { "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "rounds": 10867, "stddev": 3.311535289828111e-07, "iterations": 410, "min": 2.1690275610947028e-07, "median": 2.1981029975705031e-07, "outliers": "200;1702", "max": 2.3576108420767436e-05, "iqr": 2.6749401557736278e-08, "hd15iqr": 2.87265312380907e-07, "mean": 2.6770153290993514e-07, "iqr_outliers": 1702, "q3": 2.459781925852706e-07, "ld15iqr": 2.1690275610947028e-07, "stddev_outliers": 200, "q1": 2.192287910275343e-07 }, "group": null, "options": { "warmup": false, "timer": "time", "min_rounds": 5, "disable_gc": false, "min_time": 2.5e-05, "max_time": 1.0 } } ], "datetime": "2015-08-15T00:04:01.847239", "version": "2.5.0", "machine_info": { "machine": "x86_64", 
"processor": "x86_64", "python_implementation": "CPython", "python_version": "2.7.3", "release": "3.13.0-55-generic", "node": "minibox", "system": "Linux", "python_compiler": "GCC 4.6.3" }, "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_1900000644000175000017500000000313113416261170026440 0ustar hlehle{ "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "options": { "max_time": 1.0, "timer": "time", "min_rounds": 5, "min_time": 2.5e-05, "disable_gc": false, "warmup": false }, "group": null, "stats": { "q3": 2.5838185725599954e-07, "q1": 2.2016643907464864e-07, "max": 1.1447389118979422e-05, "stddev_outliers": 90, "median": 2.2016643907464864e-07, "hd15iqr": 3.1599017421594645e-07, "stddev": 2.140441942118885e-07, "rounds": 9987, "iterations": 418, "iqr_outliers": 1878, "outliers": "90;1878", "ld15iqr": 2.1731454219544334e-07, "min": 2.1731454219544334e-07, "mean": 2.622408132654948e-07, "iqr": 3.82154181813509e-08 } } ], "datetime": "2015-08-15T00:01:46.250433", "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "release": "3.13.0-55-generic", "python_compiler": "GCC 4.6.3", "system": "Linux", "node": "minibox", "processor": "x86_64", "machine": "x86_64", "python_version": "2.7.3", "python_implementation": "CPython" }, "version": "2.5.0" } ././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0016_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030304_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0016_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317113416261170026533 0ustar hlehle{ "benchmarks": [ { "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "min_rounds": 5, "disable_gc": false, "timer": "time", "max_time": 1.0, "min_time": 2.5e-05, "warmup": false }, "stats": { "rounds": 10867, "iqr": 2.94415194532196e-08, "q3": 2.493869883096247e-07, "stddev_outliers": 130, "mean": 2.556898407276598e-07, "ld15iqr": 2.1763633007576044e-07, "q1": 2.199454688564051e-07, "stddev": 2.461246798649572e-07, "min": 2.1763633007576044e-07, "outliers": "130;1454", "median": 2.2052275355156628e-07, "iqr_outliers": 1454, "hd15iqr": 2.94992479227357e-07, "iterations": 413, "max": 1.849908805643964e-05 }, "group": null } ], "version": "2.5.0", "datetime": "2015-08-15T00:03:03.931894", "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "machine": "x86_64", "node": "minibox", "system": "Linux", "processor": "x86_64", "python_version": "2.7.3", "release": "3.13.0-55-generic", "python_compiler": "GCC 4.6.3", "python_implementation": "CPython" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0008_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030223_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0008_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317313416261170026536 0ustar hlehle{ "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "warmup": false, "timer": "time", "max_time": 1.0, "disable_gc": false, "min_time": 2.5e-05, "min_rounds": 5 }, "stats": { "max": 2.0802021026611328e-05, 
"iterations": 424, "min": 2.164885682879754e-07, "ld15iqr": 2.164885682879754e-07, "stddev_outliers": 63, "hd15iqr": 2.237985718925044e-07, "iqr": 2.249231878316627e-09, "iqr_outliers": 2201, "rounds": 10755, "q3": 2.1930010813587116e-07, "outliers": "63;2201", "q1": 2.1705087625755453e-07, "stddev": 3.5312655024766515e-07, "median": 2.1930010813587116e-07, "mean": 2.424960569194648e-07 }, "group": null, "name": "test_xfast_parametrized[0]" } ], "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "python_version": "2.7.3", "system": "Linux", "processor": "x86_64", "python_implementation": "CPython", "node": "minibox", "python_compiler": "GCC 4.6.3", "machine": "x86_64", "release": "3.13.0-55-generic" }, "datetime": "2015-08-15T00:02:23.031973", "version": "2.5.0" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0021_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030330_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0021_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317313416261170026531 0ustar hlehle{ "version": "2.5.0", "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "min_rounds": 5, "min_time": 2.5e-05, "disable_gc": false, "warmup": false, "max_time": 1.0, "timer": "time" }, "group": null, "name": "test_xfast_parametrized[0]", "stats": { "min": 2.1855036417643228e-07, "q3": 2.219563438778832e-07, "q1": 2.2138868059430804e-07, "outliers": "50;4737", "max": 1.237903322492327e-05, "iterations": 420, "stddev": 2.709628754222987e-07, "stddev_outliers": 50, "mean": 2.452763506605051e-07, "median": 2.2138868059430804e-07, "ld15iqr": 2.2138868059430804e-07, "iqr": 5.676632835751468e-10, "hd15iqr": 2.2365933372860863e-07, "rounds": 10867, "iqr_outliers": 4737 } } ], "datetime": "2015-08-15T00:03:30.215804", "machine_info": { "processor": "x86_64", 
"system": "Linux", "python_compiler": "GCC 4.6.3", "node": "minibox", "machine": "x86_64", "python_implementation": "CPython", "python_version": "2.7.3", "release": "3.13.0-55-generic" }, "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0013_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030248_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0013_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317713416261170026536 0ustar hlehle{ "benchmarks": [ { "group": null, "options": { "disable_gc": false, "min_rounds": 5, "min_time": 2.5e-05, "max_time": 1.0, "timer": "time", "warmup": false }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "stddev": 3.8242931011452764e-07, "iterations": 272, "q1": 2.1738164565142463e-07, "q3": 2.2088780122644762e-07, "max": 1.6187043750987333e-05, "outliers": "41;1664", "hd15iqr": 2.2790011237649358e-07, "min": 2.1650510675766889e-07, "iqr_outliers": 1664, "rounds": 10306, "median": 2.2001126233269188e-07, "stddev_outliers": 41, "mean": 2.4734709646898833e-07, "ld15iqr": 2.1650510675766889e-07, "iqr": 3.506155575022992e-09 }, "name": "test_xfast_parametrized[0]" } ], "version": "2.5.0", "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "machine": "x86_64", "release": "3.13.0-55-generic", "python_version": "2.7.3", "python_implementation": "CPython", "processor": "x86_64", "node": "minibox", "python_compiler": "GCC 4.6.3", "system": "Linux" }, "datetime": "2015-08-15T00:02:48.248888" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0010_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030233_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0010_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317313416261170026527 0ustar hlehle{ "benchmarks": [ { "name": "test_xfast_parametrized[0]", "group": null, "options": { "min_rounds": 5, "timer": "time", "disable_gc": false, "max_time": 1.0, "warmup": false, "min_time": 2.5e-05 }, "stats": { "min": 2.182705301634023e-07, "max": 2.170700422475036e-05, "stddev": 3.362069021656088e-07, "hd15iqr": 2.2834455463248238e-07, "mean": 2.4423029232851136e-07, "ld15iqr": 2.182705301634023e-07, "median": 2.2162853831976232e-07, "q1": 2.191100322024923e-07, "outliers": "47;2111", "q3": 2.2246804035885233e-07, "rounds": 11245, "iqr_outliers": 2111, "iqr": 3.3580081563600307e-09, "iterations": 284, "stddev_outliers": 47 }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ], "datetime": "2015-08-15T00:02:33.035429", "machine_info": { "processor": "x86_64", "python_version": "2.7.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "python_compiler": "GCC 4.6.3", "node": "minibox", "system": "Linux", "machine": "x86_64" }, "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "version": "2.5.0" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0020_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030325_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0020_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000315313416261170026526 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "datetime": "2015-08-15T00:03:24.976065", "machine_info": { "node": "minibox", "python_implementation": "CPython", "processor": "x86_64", "release": "3.13.0-55-generic", 
"machine": "x86_64", "python_compiler": "GCC 4.6.3", "system": "Linux", "python_version": "2.7.3" }, "version": "2.5.0", "benchmarks": [ { "name": "test_xfast_parametrized[0]", "stats": { "ld15iqr": 2.1971908270143994e-07, "q1": 2.1971908270143994e-07, "min": 2.1504420860140932e-07, "q3": 2.1971908270143994e-07, "max": 1.150206023571538e-05, "median": 2.1971908270143994e-07, "stddev_outliers": 73, "stddev": 2.0233752820023266e-07, "rounds": 10513, "outliers": "73;4267", "iqr": 0.0, "iqr_outliers": 4267, "hd15iqr": 2.2345898198146446e-07, "mean": 2.435695228497899e-07, "iterations": 255 }, "options": { "disable_gc": false, "warmup": false, "min_rounds": 5, "max_time": 1.0, "min_time": 2.5e-05, "timer": "time" }, "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0017_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030310_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0017_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317713416261170026542 0ustar hlehle{ "benchmarks": [ { "group": null, "name": "test_xfast_parametrized[0]", "stats": { "iqr_outliers": 1848, "rounds": 13663, "mean": 2.5204152198346993e-07, "max": 2.2644692278922872e-05, "stddev_outliers": 95, "ld15iqr": 2.1522893007040748e-07, "iterations": 329, "min": 2.1522893007040748e-07, "median": 2.2247569539264344e-07, "stddev": 3.271520836129701e-07, "iqr": 1.8116913305589878e-08, "outliers": "95;1848", "q1": 2.1885231273152545e-07, "q3": 2.3696922603711532e-07, "hd15iqr": 2.6450693426161187e-07 }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "timer": "time", "min_rounds": 5, "warmup": false, "min_time": 2.5e-05, "disable_gc": false, "max_time": 1.0 } } ], "machine_info": { "system": "Linux", "node": "minibox", "python_compiler": "GCC 4.6.3", "python_implementation": 
"CPython", "python_version": "2.7.3", "release": "3.13.0-55-generic", "processor": "x86_64", "machine": "x86_64" }, "datetime": "2015-08-15T00:03:09.474706", "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "version": "2.5.0" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0028_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030408_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0028_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026541 0ustar hlehle{ "version": "2.5.0", "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "machine_info": { "system": "Linux", "node": "minibox", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_version": "2.7.3", "python_implementation": "CPython", "processor": "x86_64", "machine": "x86_64" }, "datetime": "2015-08-15T00:04:07.660276", "benchmarks": [ { "stats": { "stddev_outliers": 156, "median": 2.2016643907464864e-07, "max": 1.226828999496533e-05, "q1": 2.178849215712844e-07, "q3": 2.3670744097403933e-07, "min": 2.150330246920791e-07, "rounds": 10867, "ld15iqr": 2.150330246920791e-07, "hd15iqr": 2.652264097660923e-07, "stddev": 1.9291903880946324e-07, "iterations": 418, "mean": 2.5225501594359257e-07, "iqr": 1.882251940275494e-08, "outliers": "156;1653", "iqr_outliers": 1653 }, "group": null, "name": "test_xfast_parametrized[0]", "options": { "max_time": 1.0, "warmup": false, "timer": "time", "min_time": 2.5e-05, "min_rounds": 5, "disable_gc": false }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0012_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030243_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0012_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026532 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "datetime": "2015-08-15T00:02:43.499037", "machine_info": { "python_compiler": "GCC 4.6.3", "python_version": "2.7.3", "system": "Linux", "machine": "x86_64", "node": "minibox", "python_implementation": "CPython", "processor": "x86_64", "release": "3.13.0-55-generic" }, "version": "2.5.0", "benchmarks": [ { "name": "test_xfast_parametrized[0]", "stats": { "max": 1.3520982530381944e-05, "iqr_outliers": 2424, "median": 2.1958056791329089e-07, "mean": 2.43208133260719e-07, "iterations": 405, "q1": 2.1722581651475694e-07, "min": 2.1722581651475694e-07, "q3": 2.201692557629244e-07, "rounds": 11097, "hd15iqr": 2.2487875855999228e-07, "stddev_outliers": 68, "stddev": 2.412171921682738e-07, "ld15iqr": 2.1722581651475694e-07, "outliers": "68;2424", "iqr": 2.9434392481674513e-09 }, "options": { "min_rounds": 5, "disable_gc": false, "min_time": 2.5e-05, "timer": "time", "warmup": false, "max_time": 1.0 }, "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0011_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030238_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0011_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317513416261170026532 0ustar hlehle{ "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "min_rounds": 5, "disable_gc": false, "warmup": false, "timer": "time", "min_time": 2.5e-05, "max_time": 1.0 }, "group": null, "name": 
"test_xfast_parametrized[0]", "stats": { "stddev_outliers": 97, "stddev": 2.0541374142833106e-07, "median": 2.182136147709216e-07, "rounds": 11097, "max": 1.2184170776071618e-05, "min": 2.1532719129511577e-07, "q3": 2.2052275355156628e-07, "q1": 2.1763633007576044e-07, "ld15iqr": 2.1532719129511577e-07, "mean": 2.406636791816806e-07, "iterations": 413, "hd15iqr": 2.2514103111285562e-07, "iqr_outliers": 2276, "iqr": 2.886423475805845e-09, "outliers": "97;2276" } } ], "datetime": "2015-08-15T00:02:38.264260", "machine_info": { "system": "Linux", "release": "3.13.0-55-generic", "python_version": "2.7.3", "python_compiler": "GCC 4.6.3", "processor": "x86_64", "machine": "x86_64", "python_implementation": "CPython", "node": "minibox" }, "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "version": "2.5.0" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0023_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030340_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0023_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026534 0ustar hlehle{ "version": "2.5.0", "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "machine_info": { "python_version": "2.7.3", "machine": "x86_64", "release": "3.13.0-55-generic", "system": "Linux", "python_compiler": "GCC 4.6.3", "node": "minibox", "python_implementation": "CPython", "processor": "x86_64" }, "datetime": "2015-08-15T00:03:40.580148", "benchmarks": [ { "stats": { "rounds": 10755, "hd15iqr": 2.2138868059430804e-07, "stddev": 2.554912353337223e-07, "stddev_outliers": 88, "ld15iqr": 2.1855036417643228e-07, "iterations": 420, "outliers": "88;4350", "iqr": 5.676632835751468e-10, "iqr_outliers": 4350, "max": 1.0045369466145833e-05, "median": 2.1911802746000743e-07, "mean": 2.453054352222135e-07, "q1": 2.1855036417643228e-07, "q3": 
2.1911802746000743e-07, "min": 2.162797110421317e-07 }, "name": "test_xfast_parametrized[0]", "group": null, "options": { "min_time": 2.5e-05, "timer": "time", "max_time": 1.0, "warmup": false, "min_rounds": 5, "disable_gc": false }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]" } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0025_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030351_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0025_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317613416261170026540 0ustar hlehle{ "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "version": "2.5.0", "benchmarks": [ { "options": { "disable_gc": false, "warmup": false, "min_rounds": 5, "max_time": 1.0, "min_time": 2.5e-05, "timer": "time" }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "group": null, "stats": { "q1": 2.1705087625755453e-07, "q3": 2.2604780377082104e-07, "min": 2.164885682879754e-07, "max": 1.2556899268672151e-05, "iterations": 424, "median": 2.1930010813587116e-07, "iqr": 8.996927513266507e-09, "stddev_outliers": 115, "ld15iqr": 2.164885682879754e-07, "stddev": 2.1513861863365412e-07, "hd15iqr": 2.4010550301029996e-07, "outliers": "115;2184", "rounds": 10755, "iqr_outliers": 2184, "mean": 2.454347499967438e-07 } } ], "datetime": "2015-08-15T00:03:50.942439", "machine_info": { "node": "minibox", "python_version": "2.7.3", "system": "Linux", "machine": "x86_64", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "processor": "x86_64" } }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0022_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030335_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0022_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317713416261170026536 0ustar hlehle{ "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "machine_info": { "python_compiler": "GCC 4.6.3", "machine": "x86_64", "node": "minibox", "python_version": "2.7.3", "python_implementation": "CPython", "processor": "x86_64", "system": "Linux", "release": "3.13.0-55-generic" }, "version": "2.5.0", "datetime": "2015-08-15T00:03:35.431534", "benchmarks": [ { "options": { "disable_gc": false, "warmup": false, "timer": "time", "max_time": 1.0, "min_time": 2.5e-05, "min_rounds": 5 }, "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "stats": { "hd15iqr": 2.2514103111285562e-07, "stddev": 3.4580860907455714e-07, "q3": 2.2052275355156628e-07, "min": 2.1763633007576044e-07, "q1": 2.182136147709216e-07, "outliers": "76;2423", "mean": 2.5109973376177697e-07, "iqr_outliers": 2423, "ld15iqr": 2.1763633007576044e-07, "stddev_outliers": 76, "median": 2.2052275355156628e-07, "rounds": 10755, "max": 1.9387529202292677e-05, "iterations": 413, "iqr": 2.3091387806446708e-09 } } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0015_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030259_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0015_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317513416261170026536 0ustar hlehle{ "version": "2.5.0", "machine_info": { "node": "minibox", "system": "Linux", "python_compiler": "GCC 4.6.3", "release": "3.13.0-55-generic", "python_implementation": "CPython", "python_version": "2.7.3", "machine": "x86_64", "processor": "x86_64" }, 
"commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "benchmarks": [ { "group": null, "stats": { "stddev_outliers": 70, "q1": 2.1855036417643228e-07, "iterations": 420, "q3": 2.1911802746000743e-07, "rounds": 10867, "mean": 2.3773541903690844e-07, "min": 2.162797110421317e-07, "iqr": 5.676632835751468e-10, "hd15iqr": 2.2138868059430804e-07, "median": 2.1911802746000743e-07, "ld15iqr": 2.1855036417643228e-07, "iqr_outliers": 5048, "outliers": "70;5048", "max": 1.4833041599818639e-05, "stddev": 1.855664701184603e-07 }, "name": "test_xfast_parametrized[0]", "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "min_time": 2.5e-05, "min_rounds": 5, "timer": "time", "disable_gc": false, "max_time": 1.0, "warmup": false } } ], "datetime": "2015-08-15T00:02:58.655142" }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0026_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030356_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0026_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317713416261170026542 0ustar hlehle{ "datetime": "2015-08-15T00:03:56.397076", "commit_info": { "dirty": true, "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd" }, "machine_info": { "release": "3.13.0-55-generic", "system": "Linux", "python_implementation": "CPython", "processor": "x86_64", "machine": "x86_64", "node": "minibox", "python_compiler": "GCC 4.6.3", "python_version": "2.7.3" }, "version": "2.5.0", "benchmarks": [ { "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "name": "test_xfast_parametrized[0]", "options": { "min_rounds": 5, "timer": "time", "max_time": 1.0, "min_time": 2.5e-05, "disable_gc": false, "warmup": false }, "group": null, "stats": { "ld15iqr": 2.3030485782323707e-07, "rounds": 11245, "median": 2.328013874473372e-07, "max": 1.1630707386276485e-05, "outliers": "133;2119", "min": 
2.3030485782323707e-07, "iqr_outliers": 2119, "hd15iqr": 2.6400800774858884e-07, "stddev": 2.378591485128739e-07, "q3": 2.4341163834976277e-07, "q1": 2.3030485782323707e-07, "iqr": 1.3106780526525694e-08, "stddev_outliers": 133, "mean": 2.624057996414546e-07, "iterations": 382 } } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootrootpytest-benchmark-3.2.2/tests/test_storage/0004_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030202_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0004_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000320213416261170026523 0ustar hlehle{ "datetime": "2015-08-15T00:02:02.410071", "machine_info": { "system": "Linux", "python_implementation": "CPython", "python_compiler": "GCC 4.6.3", "node": "minibox", "processor": "x86_64", "machine": "x86_64", "release": "3.13.0-55-generic", "python_version": "2.7.3" }, "version": "2.5.0", "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "benchmarks": [ { "options": { "min_rounds": 5, "disable_gc": false, "warmup": false, "max_time": 1.0, "min_time": 2.5e-05, "timer": "time" }, "name": "test_xfast_parametrized[0]", "group": null, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "stats": { "stddev": 1.8812017786364639e-07, "median": 2.2153938766074392e-07, "max": 1.3510386149088541e-05, "outliers": "149;2432", "q3": 2.3349548159799042e-07, "q1": 2.1872618908727414e-07, "iqr": 1.4769292510716284e-08, "min": 2.1802288944390671e-07, "rounds": 13358, "iqr_outliers": 2432, "stddev_outliers": 149, "hd15iqr": 2.5600107018574853e-07, "mean": 2.4605368174286904e-07, "iterations": 339, "ld15iqr": 2.1802288944390671e-07 } } ] }././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootrootpytest-benchmark-3.2.2/tests/test_storage/0019_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030320_uncommitted-changes.jsonpytest-benchmark-3.2.2/tests/test_storage/0019_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_0300000644000175000017500000000317413416261170026541 0ustar hlehle{ "datetime": "2015-08-15T00:03:20.289311", "machine_info": { "python_compiler": "GCC 4.6.3", "machine": "x86_64", "release": "3.13.0-55-generic", "python_implementation": "CPython", "node": "minibox", "python_version": "2.7.3", "processor": "x86_64", "system": "Linux" }, "commit_info": { "id": "5b78858eb718649a31fb93d8dc96ca2cee41a4cd", "dirty": true }, "benchmarks": [ { "name": "test_xfast_parametrized[0]", "stats": { "max": 2.2278570955659805e-05, "stddev_outliers": 39, "stddev": 5.744074575899251e-07, "median": 2.182136147709216e-07, "ld15iqr": 2.1532719129511577e-07, "min": 2.1532719129511577e-07, "rounds": 11009, "q1": 2.1763633007576044e-07, "iqr_outliers": 2541, "mean": 2.597868174514109e-07, "iterations": 413, "hd15iqr": 2.2514103111285562e-07, "outliers": "39;2541", "q3": 2.2052275355156628e-07, "iqr": 2.886423475805845e-09 }, "fullname": "tests/test_normal.py::test_xfast_parametrized[0]", "options": { "min_rounds": 5, "disable_gc": false, "warmup": false, "timer": "time", "min_time": 2.5e-05, "max_time": 1.0 }, "group": null } ], "version": "2.5.0" }pytest-benchmark-3.2.2/tests/test_with_weaver.py0000644000175000017500000000101013416261170020170 0ustar hlehleimport time import pytest class Foo(object): def __init__(self, arg=0.01): self.arg = arg def run(self): self.internal(self.arg) def internal(self, duration): time.sleep(duration) @pytest.mark.benchmark(max_time=0.001) def test_weave_fixture(benchmark_weave): benchmark_weave(Foo.internal, lazy=True) f = Foo() f.run() @pytest.mark.benchmark(max_time=0.001) def test_weave_method(benchmark): benchmark.weave(Foo.internal, lazy=True) f = Foo() f.run() 
pytest-benchmark-3.2.2/tests/test_elasticsearch_storage.py0000644000175000017500000001710213416261170022213 0ustar hlehle
from __future__ import absolute_import

import json
import logging
import os
from io import BytesIO
from io import StringIO

import elasticsearch
import py
import pytest
from freezegun import freeze_time

from pytest_benchmark import plugin
from pytest_benchmark.plugin import BenchmarkSession
from pytest_benchmark.plugin import pytest_benchmark_compare_machine_info
from pytest_benchmark.plugin import pytest_benchmark_generate_json
from pytest_benchmark.plugin import pytest_benchmark_group_stats
from pytest_benchmark.storage.elasticsearch import ElasticsearchStorage
from pytest_benchmark.storage.elasticsearch import _mask_hosts
from pytest_benchmark.utils import parse_elasticsearch_storage

try:
    import unittest.mock as mock
except ImportError:
    import mock

logger = logging.getLogger(__name__)

THIS = py.path.local(__file__)
BENCHFILE = THIS.dirpath('test_storage/0030_5b78858eb718649a31fb93d8dc96ca2cee41a4cd_20150815_030419_uncommitted-changes.json')
SAVE_DATA = json.loads(BENCHFILE.read_text(encoding='utf8'))
SAVE_DATA["machine_info"] = {'foo': 'bar'}
SAVE_DATA["commit_info"] = {'foo': 'bar'}

tmp = SAVE_DATA.copy()
ES_DATA = tmp.pop("benchmarks")[0]
ES_DATA.update(tmp)
ES_DATA["benchmark_id"] = "FoobarOS_commitId"


class Namespace(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getitem__(self, item):
        return self.__dict__[item]


class LooseFileLike(BytesIO):
    def close(self):
        value = self.getvalue()
        super(LooseFileLike, self).close()
        self.getvalue = lambda: value


class MockStorage(ElasticsearchStorage):
    def __init__(self):
        self._es = mock.Mock(spec=elasticsearch.Elasticsearch)
        self._es_hosts = self._es_index = self._es_doctype = 'mocked'
        self.logger = logger
        self.default_machine_id = "FoobarOS"


class MockSession(BenchmarkSession):
    def __init__(self):
        self.verbose = False
        self.histogram = True
        self.benchmarks = []
        self.performance_regressions = []
        self.sort = u"min"
        self.compare = '0001'
        self.logger = logging.getLogger(__name__)
        self.machine_id = "FoobarOS"
        self.machine_info = {'foo': 'bar'}
        self.save = self.autosave = self.json = False
        self.options = {
            'min_rounds': 123,
            'min_time': 234,
            'max_time': 345,
        }
        self.compare_fail = []
        self.config = Namespace(hook=Namespace(
            pytest_benchmark_group_stats=pytest_benchmark_group_stats,
            pytest_benchmark_generate_machine_info=lambda **kwargs: {'foo': 'bar'},
            pytest_benchmark_update_machine_info=lambda **kwargs: None,
            pytest_benchmark_compare_machine_info=pytest_benchmark_compare_machine_info,
            pytest_benchmark_generate_json=pytest_benchmark_generate_json,
            pytest_benchmark_update_json=lambda **kwargs: None,
            pytest_benchmark_generate_commit_info=lambda **kwargs: {'foo': 'bar'},
            pytest_benchmark_update_commit_info=lambda **kwargs: None,
        ))
        self.elasticsearch_host = "localhost:9200"
        self.elasticsearch_index = "benchmark"
        self.elasticsearch_doctype = "benchmark"
        self.storage = MockStorage()
        self.group_by = 'group'
        self.columns = ['min', 'max', 'mean', 'stddev', 'median', 'iqr',
                        'outliers', 'rounds', 'iterations']
        self.benchmarks = []
        data = json.loads(BENCHFILE.read_text(encoding='utf8'))
        self.benchmarks.extend(
            Namespace(
                as_dict=lambda include_data=False, stats=True, flat=False, _bench=bench:
                    dict(_bench, **_bench["stats"]) if flat else dict(_bench),
                name=bench['name'],
                fullname=bench['fullname'],
                group=bench['group'],
                options=bench['options'],
                has_error=False,
                params=None,
                **bench['stats']
            )
            for bench in data['benchmarks']
        )


try:
    text_type = unicode
except NameError:
    text_type = str


def force_text(text):
    if isinstance(text, text_type):
        return text
    else:
        return text.decode('utf-8')


def force_bytes(text):
    if isinstance(text, text_type):
        return text.encode('utf-8')
    else:
        return text


def make_logger(sess):
    output = StringIO()
    sess.logger = Namespace(
        info=lambda text, **opts: output.write(force_text(text) + u'\n'),
        error=lambda text: output.write(force_text(text) + u'\n'),
    )
    sess.storage.logger = Namespace(
        info=lambda text, **opts: output.write(force_text(text) + u'\n'),
        error=lambda text: output.write(force_text(text) + u'\n'),
    )
    return output


@pytest.fixture
def sess():
    return MockSession()


@pytest.fixture
def logger_output(sess):
    return make_logger(sess)


@freeze_time("2015-08-15T00:04:18.687119")
def test_handle_saving(sess, logger_output, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = "commitId"
    sess.autosave = True
    sess.json = None
    sess.save_data = False
    sess.handle_saving()
    sess.storage._es.index.assert_called_with(
        index='mocked',
        doc_type='mocked',
        body=ES_DATA,
        id='FoobarOS_commitId_tests/test_normal.py::test_xfast_parametrized[0]',
    )


def test_parse_with_no_creds():
    string = 'https://example.org,another.org'
    hosts, _, _, _ = parse_elasticsearch_storage(string)
    assert len(hosts) == 2
    assert 'https://example.org' in hosts
    assert 'https://another.org' in hosts


def test_parse_with_creds_in_first_host_of_url():
    string = 'https://user:pass@example.org,another.org'
    hosts, _, _, _ = parse_elasticsearch_storage(string)
    assert len(hosts) == 2
    assert 'https://user:pass@example.org' in hosts
    assert 'https://another.org' in hosts


def test_parse_with_creds_in_second_host_of_url():
    string = 'https://example.org,user:pass@another.org'
    hosts, _, _, _ = parse_elasticsearch_storage(string)
    assert len(hosts) == 2
    assert 'https://example.org' in hosts
    assert 'https://user:pass@another.org' in hosts


def test_parse_with_creds_in_netrc(tmpdir):
    netrc_file = os.path.join(tmpdir.strpath, 'netrc')
    with open(netrc_file, 'w') as f:
        f.write('machine example.org login user1 password pass1\n')
        f.write('machine another.org login user2 password pass2\n')
    string = 'https://example.org,another.org'
    hosts, _, _, _ = parse_elasticsearch_storage(string, netrc_file=netrc_file)
    assert len(hosts) == 2
    assert 'https://user1:pass1@example.org' in hosts
    assert 'https://user2:pass2@another.org' in hosts


def test_parse_url_creds_supersedes_netrc_creds(tmpdir):
    netrc_file = os.path.join(tmpdir.strpath, 'netrc')
    with open(netrc_file, 'w') as f:
        f.write('machine example.org login user1 password pass1\n')
        f.write('machine another.org login user2 password pass2\n')
    string = 'https://user3:pass3@example.org,another.org'
    hosts, _, _, _ = parse_elasticsearch_storage(string, netrc_file=netrc_file)
    assert len(hosts) == 2
    assert 'https://user3:pass3@example.org' in hosts  # superseded by creds in url
    assert 'https://user2:pass2@another.org' in hosts  # got creds from netrc file


def test__mask_hosts():
    hosts = ['https://user1:pass1@example.org', 'https://user2:pass2@another.org']
    masked_hosts = _mask_hosts(hosts)
    assert len(masked_hosts) == len(hosts)
    assert 'https://***:***@example.org' in masked_hosts
    assert 'https://***:***@another.org' in masked_hosts
pytest-benchmark-3.2.2/tests/test_cli.py0000644000175000017500000002025113416261170016423 0ustar hlehle
import sys
from collections import namedtuple

import py
import pytest
from _pytest.pytester import LineMatcher

pytest_plugins = 'pytester',

THIS = py.path.local(__file__)
STORAGE = THIS.dirpath('test_storage')


@pytest.fixture
def testdir(testdir, monkeypatch):
    return namedtuple('testdir', 'tmpdir,run')(
        testdir.tmpdir,
        lambda bin, *args: testdir.run(bin + ".exe" if sys.platform == "win32" else bin, *args))


def test_help(testdir):
    result = testdir.run('py.test-benchmark', '--help')
    result.stdout.fnmatch_lines([
        "usage: py.test-benchmark *",
        " {help,list,compare} ...",
        "",
        "pytest_benchmark's management commands.",
        "",
        "optional arguments:",
        " -h [COMMAND], --help [COMMAND]",
        " Display help and exit.",
        " --storage URI, -s URI",
        " Specify a path to store the runs as uri in form",
        " file://path or elasticsearch+http[s]://host1,host2/[in",
        " dex/doctype?project_name=Project] (when --benchmark-",
        " save or --benchmark-autosave are used). For backwards",
        " compatibility unexpected values are converted to",
        " file://. Default: 'file://./.benchmarks'.",
        " --verbose, -v Dump diagnostic and progress information.",
        "",
        "commands:",
        " {help,list,compare}",
        " help Display help and exit.",
        " list List saved runs.",
        " compare Compare saved runs.",
    ])
    assert result.ret == 0


def test_help_command(testdir):
    result = testdir.run('py.test-benchmark', 'help')
    result.stdout.fnmatch_lines([
        'usage: py.test-benchmark help [-h] [command]',
        '',
        'Display help and exit.',
        '',
        'positional arguments:',
        ' command',
        '',
        'optional arguments:',
        ' -h, --help show this help message and exit',
    ])


@pytest.mark.parametrize('args', ['list --help', 'help list'])
def test_help_list(testdir, args):
    result = testdir.run('py.test-benchmark', *args.split())
    result.stdout.fnmatch_lines([
        "usage: py.test-benchmark list [-h]",
        "",
        "List saved runs.",
        "",
        "optional arguments:",
        " -h, --help show this help message and exit",
    ])
    assert result.ret == 0


@pytest.mark.parametrize('args', ['compare --help', 'help compare'])
def test_help_compare(testdir, args):
    result = testdir.run('py.test-benchmark', *args.split())
    result.stdout.fnmatch_lines([
        "usage: py.test-benchmark compare [-h] [--sort COL] [--group-by LABEL]",
        " [--columns LABELS] [--name FORMAT]",
        " [--histogram [FILENAME-PREFIX]]",
        " [--csv [FILENAME]]",
        " [glob_or_file [glob_or_file ...]]",
        "",
        "Compare saved runs.",
        "",
        "positional arguments:",
        " glob_or_file Glob or exact path for json files. If not specified",
        " all runs are loaded.",
        "",
        "optional arguments:",
        " -h, --help show this help message and exit",
        " --sort COL Column to sort on. Can be one of: 'min', 'max',",
        " 'mean', 'stddev', 'name', 'fullname'. Default: 'min'",
        " --group-by LABEL How to group tests. Can be one of: 'group', 'name',",
        " 'fullname', 'func', 'fullfunc', 'param' or",
        " 'param:NAME', where NAME is the name passed to",
        " @pytest.parametrize. Default: 'group'",
        " --columns LABELS Comma-separated list of columns to show in the result",
        " table. Default: 'min, max, mean, stddev, median, iqr,",
        " outliers, ops, rounds, iterations'",
        " --name FORMAT How to format names in results. Can be one of 'short',",
        " 'normal', 'long', or 'trial'. Default: 'normal'",
        " --histogram [FILENAME-PREFIX]",
        " Plot graphs of min/max/avg/stddev over time in",
        " FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX",
        " contains slashes ('/') then directories will be",
        " created. Default: 'benchmark_*'",
        " --csv [FILENAME] Save a csv report. If FILENAME contains slashes ('/')",
        " then directories will be created. Default:",
        " 'benchmark_*'",
        "",
        "examples:",
        "",
        " pytest-benchmark compare 'Linux-CPython-3.5-64bit/*'",
        "",
        " Loads all benchmarks ran with that interpreter. Note the special quoting that disables your shell's " "glob",
        " expansion.",
        "",
        " pytest-benchmark compare 0001",
        "",
        " Loads first run from all the interpreters.",
        "",
        " pytest-benchmark compare /foo/bar/0001_abc.json /lorem/ipsum/0001_sir_dolor.json",
        "",
        " Loads runs from exactly those files.",
    ])
    assert result.ret == 0


def test_list(testdir):
    result = testdir.run('py.test-benchmark', '--storage', STORAGE, 'list')
    assert result.stderr.lines == []
    result.stdout.fnmatch_lines([
        '*0001_*.json',
        '*0002_*.json',
        '*0003_*.json',
        '*0004_*.json',
        '*0005_*.json',
        '*0006_*.json',
        '*0007_*.json',
        '*0008_*.json',
        '*0009_*.json',
        '*0010_*.json',
        '*0011_*.json',
        '*0012_*.json',
        '*0013_*.json',
        '*0014_*.json',
        '*0015_*.json',
        '*0016_*.json',
        '*0017_*.json',
        '*0018_*.json',
        '*0019_*.json',
        '*0020_*.json',
        '*0021_*.json',
        '*0022_*.json',
        '*0023_*.json',
        '*0024_*.json',
        '*0025_*.json',
        '*0026_*.json',
        '*0027_*.json',
        '*0028_*.json',
        '*0029_*.json',
        '*0030_*.json',
    ])
    assert result.ret == 0


@pytest.mark.parametrize('name,name_pattern_generator', [
    ('short', lambda n: '*xfast_parametrized[[]0[]] ' '(%.4d*)' % n),
    ('long', lambda n: '*xfast_parametrized[[]0[]] ' '(%.4d*)' % n),
    ('normal', lambda n: '*xfast_parametrized[[]0[]] ' '(%.4d*)' % n),
    ('trial', lambda n: '%.4d*' % n)
])
def test_compare(testdir, name, name_pattern_generator):
    result = testdir.run('py.test-benchmark', '--storage', STORAGE, 'compare', '0001', '0002', '0003',
                         '--sort', 'min',
                         '--columns', 'min,max',
                         '--name', name,
                         '--histogram', 'foobar',
                         '--csv', 'foobar')
    result.stderr.fnmatch_lines([
        'Generated csv: *foobar.csv'
    ])
    LineMatcher(testdir.tmpdir.join('foobar.csv').readlines(cr=0)).fnmatch_lines([
        "name,min,max",
        "tests/test_normal.py::test_xfast_parametrized[[]0[]],2.15628567*e-07,1.03186158*e-05",
        "tests/test_normal.py::test_xfast_parametrized[[]0[]],2.16902756*e-07,7.73929968*e-06",
        "tests/test_normal.py::test_xfast_parametrized[[]0[]],2.17314542*e-07,1.14473891*e-05",
        ""
    ])
    result.stdout.fnmatch_lines([
        'Computing stats ...',
        '---*--- benchmark: 3 tests ---*---',
        'Name (time in ns) * Min * Max ',
        '---*---',
        '%s * 215.6286 (1.0) 10*318.6159 (1.33) ' % name_pattern_generator(3),
        '%s * 216.9028 (1.01) 7*739.2997 (1.0) ' % name_pattern_generator(2),
        '%s * 217.3145 (1.01) 11*447.3891 (1.48) ' % name_pattern_generator(1),
        '---*---',
        '',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.',
    ])
    assert result.ret == 0
pytest-benchmark-3.2.2/tests/test_sample.py0000644000175000017500000000311513416261170017135 0ustar hlehle
from functools import partial

import pytest

empty = object()


class cached_property(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, cls):
        value = obj.__dict__[self.func.__name__] = self.func(obj)
        return value


class SimpleProxy(object):
    def __init__(self, factory):
        self.factory = factory
        self.object = empty

    def __str__(self):
        if self.object is empty:
            self.object = self.factory()
        return str(self.object)


class CachedPropertyProxy(object):
    def __init__(self, factory):
        self.factory = factory

    @cached_property
    def object(self):
        return self.factory()

    def __str__(self):
        return str(self.object)


class LocalsSimpleProxy(object):
    def __init__(self, factory):
        self.factory = factory
        self.object = empty

    def __str__(self, func=str):
        if self.object is empty:
            self.object = self.factory()
        return func(self.object)


class LocalsCachedPropertyProxy(object):
    def __init__(self, factory):
        self.factory = factory

    @cached_property
    def object(self):
        return self.factory()

    def __str__(self, func=str):
        return func(self.object)


@pytest.fixture(scope="module", params=["SimpleProxy", "CachedPropertyProxy",
                                        "LocalsSimpleProxy", "LocalsCachedPropertyProxy"])
def impl(request):
    return globals()[request.param]


def test_proto(benchmark, impl):
    obj = "foobar"
    proxied = impl(lambda: obj)
    result = benchmark(partial(str, proxied))
    assert result == obj
pytest-benchmark-3.2.2/tests/test_skip.py0000644000175000017500000000013513416261170016621 0ustar hlehle
import pytest


def test_skip(benchmark):
    pytest.skip('bla')
    benchmark(lambda: None)
pytest-benchmark-3.2.2/tests/test_storage.py0000644000175000017500000004765113416261170017325 0ustar hlehle
# flake8: noqa
import json
import logging
import os
import sys
from io import BytesIO
from io import StringIO

import py
import pytest
from freezegun import freeze_time

from pytest_benchmark import plugin
from pytest_benchmark.plugin import BenchmarkSession
from pytest_benchmark.plugin import pytest_benchmark_compare_machine_info
from pytest_benchmark.plugin import pytest_benchmark_generate_json
from pytest_benchmark.plugin import pytest_benchmark_group_stats
from pytest_benchmark.plugin import pytest_benchmark_scale_unit
from pytest_benchmark.session import PerformanceRegression
from pytest_benchmark.stats import normalize_stats
from pytest_benchmark.storage.file import FileStorage
from pytest_benchmark.utils import NAME_FORMATTERS
from pytest_benchmark.utils import DifferenceRegressionCheck
from pytest_benchmark.utils import Path
from pytest_benchmark.utils import PercentageRegressionCheck
from pytest_benchmark.utils import get_machine_id

pytest_plugins = "pytester"

THIS = py.path.local(__file__)
STORAGE = \
THIS.dirpath(THIS.purebasename)

JSON_DATA = json.loads(STORAGE.listdir('0030_*.json')[0].read_text(encoding='utf8'))
JSON_DATA["machine_info"] = {'foo': 'bar'}
JSON_DATA["commit_info"] = {'foo': 'bar'}
list(normalize_stats(bench['stats']) for bench in JSON_DATA["benchmarks"])


class Namespace(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getitem__(self, item):
        return self.__dict__[item]

    def getoption(self, item, default=None):
        try:
            return self[item]
        except KeyError:
            return default


class LooseFileLike(BytesIO):
    def close(self):
        value = self.getvalue()
        super(LooseFileLike, self).close()
        self.getvalue = lambda: value


class MockSession(BenchmarkSession):
    def __init__(self, name_format):
        self.histogram = True
        self.verbose = False
        self.benchmarks = []
        self.performance_regressions = []
        self.sort = u"min"
        self.compare = '0001'
        logger = logging.getLogger(__name__)
        self.logger = Namespace(
            debug=lambda *args, **_kwargs: logger.debug(*args),
            info=lambda *args, **_kwargs: logger.info(*args),
            warn=lambda *args, **_kwargs: logger.warn(*args),
            error=lambda *args, **_kwargs: logger.error(*args),
        )
        self.machine_id = "FoobarOS"
        self.machine_info = {'foo': 'bar'}
        self.save = self.autosave = self.json = False
        self.name_format = NAME_FORMATTERS[name_format]
        self.options = {
            'min_rounds': 123,
            'min_time': 234,
            'max_time': 345,
            'cprofile': False,
        }
        self.cprofile_sort_by = 'cumtime'
        self.compare_fail = []
        self.config = Namespace(hook=Namespace(
            pytest_benchmark_scale_unit=pytest_benchmark_scale_unit,
            pytest_benchmark_group_stats=pytest_benchmark_group_stats,
            pytest_benchmark_generate_machine_info=lambda **kwargs: {'foo': 'bar'},
            pytest_benchmark_update_machine_info=lambda **kwargs: None,
            pytest_benchmark_compare_machine_info=pytest_benchmark_compare_machine_info,
            pytest_benchmark_generate_json=pytest_benchmark_generate_json,
            pytest_benchmark_update_json=lambda **kwargs: None,
            pytest_benchmark_generate_commit_info=lambda **kwargs: {'foo': 'bar'},
            pytest_benchmark_update_commit_info=lambda **kwargs: None,
        ))
        self.storage = FileStorage(str(STORAGE), default_machine_id=get_machine_id(), logger=self.logger)
        self.group_by = 'group'
        self.columns = ['min', 'max', 'mean', 'stddev', 'median', 'iqr',
                        'outliers', 'rounds', 'iterations', 'ops']
        for bench_file, data in reversed(list(self.storage.load("[0-9][0-9][0-9][0-9]_*"))):
            self.benchmarks.extend(
                Namespace(
                    as_dict=lambda include_data=False, stats=True, flat=False, _bench=bench, cprofile='cumtime':
                        dict(_bench, **_bench["stats"]) if flat else dict(_bench),
                    name=bench['name'],
                    fullname=bench['fullname'],
                    group=bench['group'],
                    options=bench['options'],
                    has_error=False,
                    params=None,
                    **bench['stats']
                )
                for bench in data['benchmarks']
            )
            break


try:
    text_type = unicode
except NameError:
    text_type = str


def force_text(text):
    if isinstance(text, text_type):
        return text
    else:
        return text.decode('utf-8')


def force_bytes(text):
    if isinstance(text, text_type):
        return text.encode('utf-8')
    else:
        return text


@pytest.fixture(params=['short', 'normal', 'long', 'trial'])
def name_format(request):
    return request.param


@pytest.fixture
def sess(request, name_format):
    return MockSession(name_format)


def make_logger(sess):
    output = StringIO()
    sess.logger = sess.storage.logger = Namespace(
        warn=lambda text, **opts: output.write(force_text(text) + u'\n'),
        info=lambda text, **opts: output.write(force_text(text) + u'\n'),
        error=lambda text: output.write(force_text(text) + u'\n'),
    )
    return output


def test_rendering(sess):
    output = make_logger(sess)
    sess.histogram = os.path.join('docs', 'sample')
    sess.compare = '*/*'
    sess.sort = 'name'
    sess.handle_loading()
    sess.finish()
    sess.display(Namespace(
        ensure_newline=lambda: None,
        write_line=lambda line, **opts: output.write(force_text(line) + u'\n'),
        write=lambda text, **opts: output.write(force_text(text)),
        rewrite=lambda text, **opts: output.write(force_text(text)),
    ))


def test_regression_checks(sess, name_format):
    output = make_logger(sess)
    sess.handle_loading()
    sess.performance_regressions = []
    sess.compare_fail = [
        PercentageRegressionCheck("stddev", 5),
        DifferenceRegressionCheck("max", 0.000001)
    ]
    sess.finish()
    pytest.raises(PerformanceRegression, sess.display, Namespace(
        ensure_newline=lambda: None,
        write_line=lambda line, **opts: output.write(force_text(line) + u'\n'),
        write=lambda text, **opts: output.write(force_text(text)),
        rewrite=lambda text, **opts: output.write(force_text(text)),
    ))
    print(output.getvalue())
    assert sess.performance_regressions == {
        'normal': [
            ('test_xfast_parametrized[0] (0001_b87b9aa)',
             "Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000"),
            ('test_xfast_parametrized[0] (0001_b87b9aa)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000")
        ],
        'short': [
            ('xfast_parametrized[0] (0001)',
             "Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000"),
            ('xfast_parametrized[0] (0001)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000")
        ],
        'long': [
            ('tests/test_normal.py::test_xfast_parametrized[0] (0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes)',
             "Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000"),
            ('tests/test_normal.py::test_xfast_parametrized[0] (0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000")
        ],
        'trial': [
            ('0001', "Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000"),
            ('0001', "Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000")
        ],
    }[name_format]
    output = make_logger(sess)
    pytest.raises(PerformanceRegression, sess.check_regressions)
    print(output.getvalue())
    assert output.getvalue() == {
        'short': """Performance has regressed:
\txfast_parametrized[0] (0001) - Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000
\txfast_parametrized[0] (0001) - Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000
""",
        'normal': """Performance has regressed:
\ttest_xfast_parametrized[0] (0001_b87b9aa) - Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000
\ttest_xfast_parametrized[0] (0001_b87b9aa) - Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000
""",
        'long': """Performance has regressed:
\ttests/test_normal.py::test_xfast_parametrized[0] (0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes) - Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000
\ttests/test_normal.py::test_xfast_parametrized[0] (0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes) - Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000
""",
        'trial': """Performance has regressed:
\t0001 - Field 'stddev' has failed PercentageRegressionCheck: 23.331641765 > 5.000000000
\t0001 - Field 'max' has failed DifferenceRegressionCheck: 0.000001843 > 0.000001000
"""
    }[name_format]


@pytest.mark.skipif(sys.version_info[:2] < (2, 7),
                    reason="Something weird going on, see: https://bugs.python.org/issue4482")
def test_regression_checks_inf(sess, name_format):
    output = make_logger(sess)
    sess.compare = '0002'
    sess.handle_loading()
    sess.performance_regressions = []
    sess.compare_fail = [
        PercentageRegressionCheck("stddev", 5),
        DifferenceRegressionCheck("max", 0.000001)
    ]
    sess.finish()
    pytest.raises(PerformanceRegression, sess.display, Namespace(
        ensure_newline=lambda: None,
        write_line=lambda line, **opts: output.write(force_text(line) + u'\n'),
        write=lambda text, **opts: output.write(force_text(text)),
        rewrite=lambda text, **opts: output.write(force_text(text)),
    ))
    print(output.getvalue())
    assert sess.performance_regressions == {
        'normal': [
            ('test_xfast_parametrized[0] (0002_b87b9aa)',
             "Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000"),
            ('test_xfast_parametrized[0] (0002_b87b9aa)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000")
        ],
        'short': [
            ('xfast_parametrized[0] (0002)',
             "Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000"),
            ('xfast_parametrized[0] (0002)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000")
        ],
        'long': [
            ('tests/test_normal.py::test_xfast_parametrized[0] '
             '(0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes)',
             "Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000"),
            ('tests/test_normal.py::test_xfast_parametrized[0] '
             '(0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes)',
             "Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > " '0.000001000')
        ],
        'trial': [
            ('0002', "Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000"),
            ('0002', "Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000")
        ]
    }[name_format]
    output = make_logger(sess)
    pytest.raises(PerformanceRegression, sess.check_regressions)
    print(output.getvalue())
    assert output.getvalue() == {
        'short': """Performance has regressed:
\txfast_parametrized[0] (0002) - Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000
\txfast_parametrized[0] (0002) - Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000
""",
        'normal': """Performance has regressed:
\ttest_xfast_parametrized[0] (0002_b87b9aa) - Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000
\ttest_xfast_parametrized[0] (0002_b87b9aa) - Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000
""",
        'long': """Performance has regressed:
\ttests/test_normal.py::test_xfast_parametrized[0] (0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes) - Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000
\ttests/test_normal.py::test_xfast_parametrized[0] (0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes) - Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000
""",
        'trial': """Performance has regressed:
\t0002 - Field 'stddev' has failed PercentageRegressionCheck: inf > 5.000000000
\t0002 - Field 'max' has failed DifferenceRegressionCheck: 0.000005551 > 0.000001000
"""
    }[name_format]


def test_compare_1(sess, LineMatcher):
    output = make_logger(sess)
    sess.handle_loading()
    sess.finish()
    sess.display(Namespace(
        ensure_newline=lambda: None,
        write_line=lambda line, **opts: output.write(force_text(line) + u'\n'),
        write=lambda text, **opts: output.write(force_text(text)),
        rewrite=lambda text, **opts: output.write(force_text(text)),
    ))
    print(output.getvalue())
    LineMatcher(output.getvalue().splitlines()).fnmatch_lines([
        'Benchmark machine_info is different. Current: {foo: "bar"} VS saved: {machine: "x86_64", node: "minibox", processor: "x86_64", python_compiler: "GCC 4.6.3", python_implementation: "CPython", python_version: "2.7.3", release: "3.13.0-55-generic", system: "Linux"} (location: tests*test_storage).',
        'Comparing against benchmarks from: 0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted'
        '-changes.json',
        '',
        '*------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------*',
        'Name (time in ns) * Min * Max Mean StdDev Median IQR Outliers Rounds Iterations OPS (Mops/s) *',
        '-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------*',
        '*0001* 217.3145 (1.0) 11*447.3891 (1.0) 262.2408 (1.00) 214.0442 (1.0) 220.1664 (1.00) 38.2154 (2.03) 90;1878 9987 418 3.8133 (1.00)*',
        '*NOW* 217.9511 (1.00) 13*290.0380 (1.16) 261.2051 (1.0) 263.9842 (1.23) 220.1638 (1.0) 18.8080 (1.0) 160;1726 9710 431 3.8284 (1.0)*',
        '--------------------------------------------------------------------------------------------------------------------------------------------------------------------*',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.',
        ' OPS: Operations Per Second, computed as 1 / Mean',
    ])


def test_compare_2(sess, LineMatcher):
    output = make_logger(sess)
    sess.compare = '0002'
    sess.handle_loading()
    sess.finish()
    sess.display(Namespace(
        ensure_newline=lambda: None,
        write_line=lambda line, **opts: output.write(force_text(line) + u'\n'),
        section=lambda line, **opts: output.write(force_text(line) + u'\n'),
        write=lambda text, **opts: output.write(force_text(text)),
        rewrite=lambda text, **opts: output.write(force_text(text)),
    ))
    print(output.getvalue())
    LineMatcher(output.getvalue().splitlines()).fnmatch_lines([
        'Benchmark machine_info is different. Current: {foo: "bar"} VS saved: {machine: "x86_64", node: "minibox", processor: "x86_64", python_compiler: "GCC 4.6.3", python_implementation: "CPython", python_version: "2.7.3", release: "3.13.0-55-generic", system: "Linux"} (location: tests*test_storage).',
        'Comparing against benchmarks from: 0002_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190348_uncommitted-changes.json',
        '',
        '*------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------*',
        'Name (time in ns) * Min *Max Mean StdDev Median IQR Outliers Rounds Iterations OPS (Mops/s)*',
        '--------------------------------------------------------------------------------------------------------------------------------------------------------------------*',
        '*0002* 216.9028 (1.0) 7*739.2997 (1.0) 254.0585 (1.0) 0.0000 (1.0) 219.8103 (1.0) 27.3309 (1.45) 235;1688 11009 410 3.9361 (1.0)*',
        '*NOW* 217.9511 (1.00) 13*290.0380 (1.72) 261.2051 (1.03) 263.9842 (inf) 220.1638 (1.00) 18.8080 (1.0) 160;1726 9710 431 3.8284 (0.97)*',
        '--------------------------------------------------------------------------------------------------------------------------------------------------------------------*',
        'Legend:',
        ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.',
        ' OPS: Operations Per Second, computed as 1 / Mean',
    ])


@freeze_time("2015-08-15T00:04:18.687119")
def test_save_json(sess, tmpdir, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = False
    sess.autosave = False
    sess.json = LooseFileLike()
    sess.save_data = False
    sess.handle_saving()
    assert tmpdir.listdir() == []
    assert json.loads(sess.json.getvalue().decode()) == JSON_DATA


@freeze_time("2015-08-15T00:04:18.687119")
def test_save_with_name(sess, tmpdir, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = 'foobar'
    sess.autosave = True
    sess.json = None
    sess.save_data = False
    sess.storage.path = Path(str(tmpdir))
    sess.handle_saving()
    files = list(Path(str(tmpdir)).rglob('*.json'))
    print(files)
    assert len(files) == 1
    assert json.loads(files[0].read_text(encoding='utf8')) == JSON_DATA


@freeze_time("2015-08-15T00:04:18.687119")
def test_save_no_name(sess, tmpdir, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = True
    sess.autosave = True
    sess.json = None
    sess.save_data = False
    sess.storage.path = Path(str(tmpdir))
    sess.handle_saving()
    files = list(Path(str(tmpdir)).rglob('*.json'))
    assert len(files) == 1
    assert json.loads(files[0].read_text(encoding='utf8')) == JSON_DATA


@freeze_time("2015-08-15T00:04:18.687119")
def test_save_with_error(sess, tmpdir, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = True
    sess.autosave = True
    sess.json = None
    sess.save_data = False
    sess.storage.path = Path(str(tmpdir))
    for bench in sess.benchmarks:
        bench.has_error = True
    sess.handle_saving()
    files = list(Path(str(tmpdir)).rglob('*.json'))
    assert len(files) == 1
    assert json.loads(files[0].read_text(encoding='utf8')) == {
        'benchmarks': [],
        'commit_info': {'foo': 'bar'},
        'datetime': '2015-08-15T00:04:18.687119',
        'machine_info': {'foo': 'bar'},
        'version': '2.5.0'
    }


@freeze_time("2015-08-15T00:04:18.687119")
def test_autosave(sess, tmpdir, monkeypatch):
    monkeypatch.setattr(plugin, '__version__', '2.5.0')
    sess.save = False
    sess.autosave = True
    sess.json = None
    sess.save_data = False
    sess.storage.path = Path(str(tmpdir))
    sess.handle_saving()
    files = list(Path(str(tmpdir)).rglob('*.json'))
    assert len(files) == 1
    assert json.loads(files[0].read_text(encoding='utf8')) == JSON_DATA
pytest-benchmark-3.2.2/LICENSE0000644000175000017500000000246113416261170014111 0ustar hlehle
BSD 2-Clause License

Copyright (c) 2014-2019, Ionel Cristian Mărieș
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
pytest-benchmark-3.2.2/.cookiecutterrc0000644000175000017500000000241413416261170016130 0ustar hlehle
# Generated by cookiepatcher, a small shim around cookiecutter (pip install cookiepatcher)

cookiecutter:
    _template: cookiecutter-pylibrary
    appveyor: yes
    c_extension_function: longest
    c_extension_module: _pytest_benchmark
    c_extension_optional: no
    c_extension_support: no
    codacy: no
    codeclimate: no
    codecov: yes
    command_line_interface: no
    command_line_interface_bin_name: py.test-benchmark
    coveralls: yes
    distribution_name: pytest-benchmark
    email: contact@ionelmc.ro
    full_name: Ionel Cristian Mărieș
    github_username: ionelmc
    landscape: no
    license: BSD 2-Clause License
    linter: flake8
    package_name: pytest_benchmark
    project_name: pytest-benchmark
    project_short_description: A ``py.test`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. See calibration_ and FAQ_.
    release_date: '2017-07-26'
    repo_name: pytest-benchmark
    requiresio: yes
    scrutinizer: no
    sphinx_docs: yes
    sphinx_doctest: no
    sphinx_theme: sphinx-py3doc-enhanced-theme
    test_matrix_configurator: no
    test_matrix_separate_coverage: yes
    test_runner: pytest
    travis: yes
    version: 3.1.1
    website: http://blog.ionelmc.ro
    year: 2014-2019
pytest-benchmark-3.2.2/.gitignore0000644000175000017500000000114713416261170015074 0ustar hlehle
*.py[cod]

# C extensions
*.so

# Packages
*.egg
*.egg-info
dist
build
eggs
.eggs
parts
bin
var
sdist
wheelhouse
develop-eggs
.installed.cfg
lib
lib64
venv*/
pyvenv*/

# Installer logs
pip-log.txt

# Unit test / coverage reports
.coverage
.tox
.coverage.*
nosetests.xml
coverage.xml
htmlcov

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject
.idea
*.iml
*.komodoproject

# Complexity
output/*.html
output/*/index.html

# Sphinx
docs/_build

.DS_Store
*~
.*.sw[po]
.build
.ve
.env
.cache
.pytest
.pytest_cache/
.bootstrap
.appveyor.token
*.bak
*.t.err
logfile
logfile.*
.benchmarks
/*.svg
pytest-benchmark-3.2.2/appveyor.yml0000644000175000017500000001270013416261170015471 0ustar hlehle
version: '{branch}-{build}'
build: off
cache:
  - '%LOCALAPPDATA%\pip\Cache'
environment:
  global:
    WITH_COMPILER: 'cmd /E:ON /V:ON /C .\ci\appveyor-with-compiler.cmd'
  matrix:
    - TOXENV: check
      TOXPYTHON: C:\Python27\python.exe
      PYTHON_HOME: C:\Python27
      PYTHON_VERSION: '2.7'
      PYTHON_ARCH: '32'
    - TOXENV: 'py34-pytest41-pygal24-nodist-cover,py34-pytest41-pygal24-xdist-cover,report,codecov'
      TOXPYTHON: C:\Python34\python.exe
      PYTHON_HOME: C:\Python34
      PYTHON_VERSION: '3.4'
      PYTHON_ARCH: '32'
    - TOXENV: 'py34-pytest41-pygal24-nodist-cover,py34-pytest41-pygal24-xdist-cover,report,codecov'
      TOXPYTHON: C:\Python34-x64\python.exe
      WINDOWS_SDK_VERSION: v7.1
      PYTHON_HOME: C:\Python34-x64
      PYTHON_VERSION: '3.4'
      PYTHON_ARCH: '64'
    - TOXENV: 'py27-pytest41-pygal24-nodist-cover,py27-pytest41-pygal24-xdist-cover,report,codecov'
      TOXPYTHON: C:\Python27\python.exe
      PYTHON_HOME: C:\Python27
PYTHON_VERSION: '2.7' PYTHON_ARCH: '32' - TOXENV: 'py27-pytest41-pygal24-nodist-cover,py27-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python27-x64\python.exe WINDOWS_SDK_VERSION: v7.0 PYTHON_HOME: C:\Python27-x64 PYTHON_VERSION: '2.7' PYTHON_ARCH: '64' - TOXENV: 'py36-pytest41-pygal24-nodist-cover,py36-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python36\python.exe PYTHON_HOME: C:\Python36 PYTHON_VERSION: '3.6' PYTHON_ARCH: '32' - TOXENV: 'py36-pytest41-pygal24-nodist-cover,py36-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python36-x64\python.exe PYTHON_HOME: C:\Python36-x64 PYTHON_VERSION: '3.6' PYTHON_ARCH: '64' - TOXENV: 'py36-pytest41-pygal24-nodist-nocov,py36-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python36\python.exe PYTHON_HOME: C:\Python36 PYTHON_VERSION: '3.6' PYTHON_ARCH: '32' - TOXENV: 'py36-pytest41-pygal24-nodist-nocov,py36-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python36-x64\python.exe PYTHON_HOME: C:\Python36-x64 PYTHON_VERSION: '3.6' PYTHON_ARCH: '64' - TOXENV: 'py34-pytest41-pygal24-nodist-nocov,py34-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python34\python.exe PYTHON_HOME: C:\Python34 PYTHON_VERSION: '3.4' PYTHON_ARCH: '32' - TOXENV: 'py34-pytest41-pygal24-nodist-nocov,py34-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python34-x64\python.exe WINDOWS_SDK_VERSION: v7.1 PYTHON_HOME: C:\Python34-x64 PYTHON_VERSION: '3.4' PYTHON_ARCH: '64' - TOXENV: 'py35-pytest41-pygal24-nodist-nocov,py35-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python35\python.exe PYTHON_HOME: C:\Python35 PYTHON_VERSION: '3.5' PYTHON_ARCH: '32' - TOXENV: 'py35-pytest41-pygal24-nodist-nocov,py35-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python35-x64\python.exe PYTHON_HOME: C:\Python35-x64 PYTHON_VERSION: '3.5' PYTHON_ARCH: '64' - TOXENV: 'py37-pytest41-pygal24-nodist-nocov,py37-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python37\python.exe PYTHON_HOME: C:\Python37 PYTHON_VERSION: '3.7' PYTHON_ARCH: '32' - TOXENV: 
'py37-pytest41-pygal24-nodist-nocov,py37-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python37-x64\python.exe PYTHON_HOME: C:\Python37-x64 PYTHON_VERSION: '3.7' PYTHON_ARCH: '64' - TOXENV: 'py37-pytest41-pygal24-nodist-cover,py37-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python37\python.exe PYTHON_HOME: C:\Python37 PYTHON_VERSION: '3.7' PYTHON_ARCH: '32' - TOXENV: 'py37-pytest41-pygal24-nodist-cover,py37-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python37-x64\python.exe PYTHON_HOME: C:\Python37-x64 PYTHON_VERSION: '3.7' PYTHON_ARCH: '64' - TOXENV: 'py35-pytest41-pygal24-nodist-cover,py35-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python35\python.exe PYTHON_HOME: C:\Python35 PYTHON_VERSION: '3.5' PYTHON_ARCH: '32' - TOXENV: 'py35-pytest41-pygal24-nodist-cover,py35-pytest41-pygal24-xdist-cover,report,codecov' TOXPYTHON: C:\Python35-x64\python.exe PYTHON_HOME: C:\Python35-x64 PYTHON_VERSION: '3.5' PYTHON_ARCH: '64' - TOXENV: 'py27-pytest41-pygal24-nodist-nocov,py27-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python27\python.exe PYTHON_HOME: C:\Python27 PYTHON_VERSION: '2.7' PYTHON_ARCH: '32' - TOXENV: 'py27-pytest41-pygal24-nodist-nocov,py27-pytest41-pygal24-xdist-nocov' TOXPYTHON: C:\Python27-x64\python.exe WINDOWS_SDK_VERSION: v7.0 PYTHON_HOME: C:\Python27-x64 PYTHON_VERSION: '2.7' PYTHON_ARCH: '64' init: - ps: echo $env:TOXENV - ps: ls C:\Python* install: - python -u ci\appveyor-bootstrap.py - '%PYTHON_HOME%\Scripts\virtualenv --version' - '%PYTHON_HOME%\Scripts\easy_install --version' - '%PYTHON_HOME%\Scripts\pip --version' - '%PYTHON_HOME%\Scripts\tox --version' test_script: - '%WITH_COMPILER% %PYTHON_HOME%\Scripts\tox' on_failure: - ps: dir "env:" - ps: get-content .tox\*\log\* artifacts: - path: dist\* ### To enable remote debugging uncomment this (also, see: http://www.appveyor.com/docs/how-to/rdp-to-build-worker): # on_finish: # - ps: $blockRdp = $true; iex ((new-object 
net.webclient).DownloadString('https://raw.githubusercontent.com/appveyor/ci/master/scripts/enable-rdp.ps1')) pytest-benchmark-3.2.2/ci/0000755000175000017500000000000013416261170013474 5ustar hlehlepytest-benchmark-3.2.2/ci/appveyor-with-compiler.cmd0000644000175000017500000000137313416261170020613 0ustar hlehle:: Very simple setup: :: - if WINDOWS_SDK_VERSION is set then activate the SDK. :: - disable the WDK if it's around. SET COMMAND_TO_RUN=%* SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows SET WIN_WDK="c:\Program Files (x86)\Windows Kits\10\Include\wdf" ECHO SDK: %WINDOWS_SDK_VERSION% ARCH: %PYTHON_ARCH% IF EXIST %WIN_WDK% ( REM See: https://connect.microsoft.com/VisualStudio/feedback/details/1610302/ REN %WIN_WDK% 0wdf ) IF "%WINDOWS_SDK_VERSION%"=="" GOTO main SET DISTUTILS_USE_SDK=1 SET MSSdk=1 "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION% CALL "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release :main ECHO Executing: %COMMAND_TO_RUN% CALL %COMMAND_TO_RUN% || EXIT 1 pytest-benchmark-3.2.2/ci/bootstrap.py0000755000175000017500000000477413416261170016102 0ustar hlehle#!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import absolute_import, print_function, unicode_literals import os import sys from collections import defaultdict from os.path import abspath from os.path import dirname from os.path import exists from os.path import join if __name__ == "__main__": base_path = dirname(dirname(abspath(__file__))) print("Project path: {0}".format(base_path)) env_path = join(base_path, ".tox", "bootstrap") if sys.platform == "win32": bin_path = join(env_path, "Scripts") else: bin_path = join(env_path, "bin") if not exists(env_path): import subprocess print("Making bootstrap env in: {0} ...".format(env_path)) try: subprocess.check_call(["virtualenv", env_path]) except subprocess.CalledProcessError: subprocess.check_call([sys.executable, "-m", "virtualenv", env_path]) 
print("Installing `jinja2` into bootstrap environment...") subprocess.check_call([join(bin_path, "pip"), "install", "jinja2"]) python_executable = join(bin_path, "python") if not os.path.samefile(python_executable, sys.executable): print("Re-executing with: {0}".format(python_executable)) os.execv(python_executable, [python_executable, __file__]) import jinja2 import subprocess jinja = jinja2.Environment( loader=jinja2.FileSystemLoader(join(base_path, "ci", "templates")), trim_blocks=True, lstrip_blocks=True, keep_trailing_newline=True ) tox_environments = [ line.strip() # WARNING: 'tox' must be installed globally or in the project's virtualenv for line in subprocess.check_output(['tox', '--listenvs'], universal_newlines=True).splitlines() ] tox_environments = [line for line in tox_environments if line not in ['clean', 'report', 'docs', 'check']] tox_environments_by_python = defaultdict(list) for env in tox_environments: parts = env.split('-') if "pytest40" in parts: continue if "pygal23" in parts: continue tox_environments_by_python[(parts[0], parts[-1])].append(env) for name in os.listdir(join("ci", "templates")): with open(join(base_path, name), "w") as fh: fh.write(jinja.get_template(name).render(tox_environments=tox_environments, tox_environments_by_python=tox_environments_by_python)) print("Wrote {}".format(name)) print("DONE.") pytest-benchmark-3.2.2/ci/templates/0000755000175000017500000000000013416261170015472 5ustar hlehlepytest-benchmark-3.2.2/ci/templates/appveyor.yml0000644000175000017500000000352613416261170020070 0ustar hlehleversion: '{branch}-{build}' build: off cache: - '%LOCALAPPDATA%\pip\Cache' environment: global: WITH_COMPILER: 'cmd /E:ON /V:ON /C .\ci\appveyor-with-compiler.cmd' matrix: - TOXENV: check TOXPYTHON: C:\Python27\python.exe PYTHON_HOME: C:\Python27 PYTHON_VERSION: '2.7' PYTHON_ARCH: '32' {% for (py, cover), tox_environments in tox_environments_by_python.items() %}{{ '' }}{% if py.startswith(('py2', 'py3')) %} - TOXENV: '{{ 
tox_environments|join(',') }}{% if 'cover' in cover %},report,codecov{% endif %}' TOXPYTHON: C:\Python{{ py[2:4] }}\python.exe PYTHON_HOME: C:\Python{{ py[2:4] }} PYTHON_VERSION: '{{ py[2] }}.{{ py[3] }}' PYTHON_ARCH: '32' - TOXENV: '{{ tox_environments|join(',') }}{% if 'cover' in cover %},report,codecov{%- endif %}' TOXPYTHON: C:\Python{{ py[2:4] }}-x64\python.exe {%- if py.startswith(('py2', 'py34')) %} WINDOWS_SDK_VERSION: v7.{{ '1' if py.startswith('py3') else '0' }} {%- endif %} PYTHON_HOME: C:\Python{{ py[2:4] }}-x64 PYTHON_VERSION: '{{ py[2] }}.{{ py[3] }}' PYTHON_ARCH: '64' {% endif %}{% endfor %} init: - ps: echo $env:TOXENV - ps: ls C:\Python* install: - python -u ci\appveyor-bootstrap.py - '%PYTHON_HOME%\Scripts\virtualenv --version' - '%PYTHON_HOME%\Scripts\easy_install --version' - '%PYTHON_HOME%\Scripts\pip --version' - '%PYTHON_HOME%\Scripts\tox --version' test_script: - '%WITH_COMPILER% %PYTHON_HOME%\Scripts\tox' on_failure: - ps: dir "env:" - ps: get-content .tox\*\log\* artifacts: - path: dist\* ### To enable remote debugging uncomment this (also, see: http://www.appveyor.com/docs/how-to/rdp-to-build-worker): # on_finish: # - ps: $blockRdp = $true; iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/appveyor/ci/master/scripts/enable-rdp.ps1')) pytest-benchmark-3.2.2/ci/templates/.travis.yml0000644000175000017500000000326613416261170017612 0ustar hlehlelanguage: python sudo: false cache: pip env: global: - LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so - SEGFAULT_SIGNALS=all matrix: - TOXENV=check - TOXENV=docs matrix: include: {%- for env in tox_environments %}{{ '' }} - python: '{{ env.split("-")[0] if env.startswith("pypy") else "{0[2]}.{0[3]}".format(env) }}' {% if env.startswith('py37') %} dist: xenial sudo: required {% endif %} env: - TOXENV={{ env }}{% if 'cover' in env %},report,coveralls,codecov{% endif -%} {%- endfor %}{{ '' }} before_install: - python --version - uname -a - lsb_release -a install: - pip 
install tox - virtualenv --version - easy_install --version - pip --version - tox --version - | set -ex if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then (cd $HOME wget https://bitbucket.org/pypy/pypy/downloads/pypy2-v6.0.0-linux64.tar.bz2 tar xf pypy2-*.tar.bz2 pypy2-*/bin/pypy -m ensurepip pypy2-*/bin/pypy -m pip install -U virtualenv) export PATH=$(echo $HOME/pypy2-*/bin):$PATH export TOXPYTHON=$(echo $HOME/pypy2-*/bin/pypy) fi if [[ $TRAVIS_PYTHON_VERSION == 'pypy3' ]]; then (cd $HOME wget https://bitbucket.org/pypy/pypy/downloads/pypy3-v6.0.0-linux64.tar.bz2 tar xf pypy3-*.tar.bz2 pypy3-*/bin/pypy3 -m ensurepip pypy3-*/bin/pypy3 -m pip install -U virtualenv) export PATH=$(echo $HOME/pypy3-*/bin):$PATH export TOXPYTHON=$(echo $HOME/pypy3-*/bin/pypy3) fi set +x script: - tox -v after_failure: - more .tox/log/* | cat - more .tox/*/log/* | cat notifications: email: on_success: never on_failure: always pytest-benchmark-3.2.2/ci/appveyor-bootstrap.py0000644000175000017500000000763313416261170017737 0ustar hlehle""" AppVeyor will at least have few Pythons around so there's no point of implementing a bootstrapper in PowerShell. This is a port of https://github.com/pypa/python-packaging-user-guide/blob/master/source/code/install.ps1 with various fixes and improvements that just weren't feasible to implement in PowerShell. 
""" from __future__ import print_function from os import environ from os.path import exists from subprocess import check_call try: from urllib.request import urlretrieve except ImportError: from urllib import urlretrieve BASE_URL = "https://www.python.org/ftp/python/" GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py" GET_PIP_PATH = "C:\get-pip.py" URLS = { ("2.7", "64"): BASE_URL + "2.7.13/python-2.7.13.amd64.msi", ("2.7", "32"): BASE_URL + "2.7.13/python-2.7.13.msi", ("3.4", "64"): BASE_URL + "3.4.4/python-3.4.4.amd64.msi", ("3.4", "32"): BASE_URL + "3.4.4/python-3.4.4.msi", ("3.5", "64"): BASE_URL + "3.5.4/python-3.5.4-amd64.exe", ("3.5", "32"): BASE_URL + "3.5.4/python-3.5.4.exe", ("3.6", "64"): BASE_URL + "3.6.2/python-3.6.2-amd64.exe", ("3.6", "32"): BASE_URL + "3.6.2/python-3.6.2.exe", } INSTALL_CMD = { # Commands are allowed to fail only if they are not the last command. Eg: uninstall (/x) allowed to fail. "2.7": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"], ["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]], "3.4": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"], ["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]], "3.5": [["{path}", "/quiet", "TargetDir={home}"]], "3.6": [["{path}", "/quiet", "TargetDir={home}"]], } def download_file(url, path): print("Downloading: {} (into {})".format(url, path)) progress = [0, 0] def report(count, size, total): progress[0] = count * size if progress[0] - progress[1] > 1000000: progress[1] = progress[0] print("Downloaded {:,}/{:,} ...".format(progress[1], total)) dest, _ = urlretrieve(url, path, reporthook=report) return dest def install_python(version, arch, home): print("Installing Python", version, "for", arch, "bit architecture to", home) if exists(home): return path = download_python(version, arch) print("Installing", path, "to", home) success = False for cmd in INSTALL_CMD[version]: cmd = [part.format(home=home, 
path=path) for part in cmd] print("Running:", " ".join(cmd)) try: check_call(cmd) except Exception as exc: print("Failed command", cmd, "with:", exc) if exists("install.log"): with open("install.log") as fh: print(fh.read()) else: success = True if success: print("Installation complete!") else: print("Installation failed") def download_python(version, arch): for _ in range(3): try: return download_file(URLS[version, arch], "installer.exe") except Exception as exc: print("Failed to download:", exc) print("Retrying ...") def install_pip(home): pip_path = home + "/Scripts/pip.exe" python_path = home + "/python.exe" if exists(pip_path): print("pip already installed.") else: print("Installing pip...") download_file(GET_PIP_URL, GET_PIP_PATH) print("Executing:", python_path, GET_PIP_PATH) check_call([python_path, GET_PIP_PATH]) def install_packages(home, *packages): cmd = [home + "/Scripts/pip.exe", "install"] cmd.extend(packages) check_call(cmd) if __name__ == "__main__": install_python(environ['PYTHON_VERSION'], environ['PYTHON_ARCH'], environ['PYTHON_HOME']) install_pip(environ['PYTHON_HOME']) install_packages(environ['PYTHON_HOME'], "setuptools>=18.0.1", "wheel", "tox", "virtualenv>=13.1.0") pytest-benchmark-3.2.2/README.rst0000644000175000017500000001571413416261170014600 0ustar hlehle======== Overview ======== .. start-badges .. list-table:: :stub-columns: 1 * - docs - |docs| |gitter| * - tests - | |travis| |appveyor| |requires| | |coveralls| |codecov| * - package - | |version| |wheel| |supported-versions| |supported-implementations| | |commits-since| .. |docs| image:: https://readthedocs.org/projects/pytest-benchmark/badge/?style=flat :target: https://readthedocs.org/projects/pytest-benchmark :alt: Documentation Status .. |gitter| image:: https://badges.gitter.im/ionelmc/pytest-benchmark.svg :alt: Join the chat at https://gitter.im/ionelmc/pytest-benchmark :target: https://gitter.im/ionelmc/pytest-benchmark .. 
|travis| image:: https://travis-ci.org/ionelmc/pytest-benchmark.svg?branch=master :alt: Travis-CI Build Status :target: https://travis-ci.org/ionelmc/pytest-benchmark .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/ionelmc/pytest-benchmark?branch=master&svg=true :alt: AppVeyor Build Status :target: https://ci.appveyor.com/project/ionelmc/pytest-benchmark .. |requires| image:: https://requires.io/github/ionelmc/pytest-benchmark/requirements.svg?branch=master :alt: Requirements Status :target: https://requires.io/github/ionelmc/pytest-benchmark/requirements/?branch=master .. |coveralls| image:: https://coveralls.io/repos/ionelmc/pytest-benchmark/badge.svg?branch=master&service=github :alt: Coverage Status :target: https://coveralls.io/r/ionelmc/pytest-benchmark .. |codecov| image:: https://codecov.io/github/ionelmc/pytest-benchmark/coverage.svg?branch=master :alt: Coverage Status :target: https://codecov.io/github/ionelmc/pytest-benchmark .. |version| image:: https://img.shields.io/pypi/v/pytest-benchmark.svg :alt: PyPI Package latest release :target: https://pypi.org/project/pytest-benchmark .. |commits-since| image:: https://img.shields.io/github/commits-since/ionelmc/pytest-benchmark/v3.2.2.svg :alt: Commits since latest release :target: https://github.com/ionelmc/pytest-benchmark/compare/v3.2.2...master .. |wheel| image:: https://img.shields.io/pypi/wheel/pytest-benchmark.svg :alt: PyPI Wheel :target: https://pypi.org/project/pytest-benchmark .. |supported-versions| image:: https://img.shields.io/pypi/pyversions/pytest-benchmark.svg :alt: Supported versions :target: https://pypi.org/project/pytest-benchmark .. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pytest-benchmark.svg :alt: Supported implementations :target: https://pypi.org/project/pytest-benchmark .. end-badges A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. 
See calibration_ and FAQ_. * Free software: BSD 2-Clause License Installation ============ :: pip install pytest-benchmark Documentation ============= For latest release: `pytest-benchmark.readthedocs.org/en/stable `_. For master branch (may include documentation fixes): `pytest-benchmark.readthedocs.io/en/latest `_. Examples ======== But first, a prologue: This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first. Take a look at the `introductory material `_ or watch `talks `_. A few notes: * This plugin benchmarks functions and only that. If you want to measure blocks of code or whole programs you will need to write a wrapper function. * In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or use `parametrization `. * To run the benchmarks you simply use `pytest` to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run ``pytest --help`` for more details. This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed to it. Example: .. code-block:: python def something(duration=0.000001): """ Function that needs some serious benchmarking. """ time.sleep(duration) # You may return anything you want, like the result of a computation return 123 def test_my_stuff(benchmark): # benchmark something result = benchmark(something) # Extra code, to verify that the run completed correctly. # Sometimes you may want to check the result, fast functions # are no good if they return incorrect results :-) assert result == 123 You can also pass extra arguments: .. code-block:: python def test_my_stuff(benchmark): benchmark(time.sleep, 0.02) Or even keyword arguments: .. code-block:: python def test_my_stuff(benchmark): benchmark(time.sleep, duration=0.02) Another pattern seen in the wild, that is not recommended for micro-benchmarks (very fast code) but may be convenient: ..
code-block:: python def test_my_stuff(benchmark): @benchmark def something(): # unnecessary function call time.sleep(0.000001) A better way is to just benchmark the final function: .. code-block:: python def test_my_stuff(benchmark): benchmark(time.sleep, 0.000001) # way more accurate results! If you need to do fine control over how the benchmark is run (like a `setup` function, exact control of `iterations` and `rounds`) there's a special mode - pedantic_: .. code-block:: python def my_special_setup(): ... def test_with_setup(benchmark): benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100) Screenshots ----------- Normal run: .. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot.png :alt: Screenshot of pytest summary Compare mode (``--benchmark-compare``): .. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot-compare.png :alt: Screenshot of pytest summary in compare mode Histogram (``--benchmark-histogram``): .. image:: https://cdn.rawgit.com/ionelmc/pytest-benchmark/94860cc8f47aed7ba4f9c7e1380c2195342613f6/docs/sample-tests_test_normal.py_test_xfast_parametrized%5B0%5D.svg :alt: Histogram sample .. Also, it has `nice tooltips `_. Development =========== To run the all tests run:: tox Credits ======= * Timing code and ideas taken from: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py .. _FAQ: http://pytest-benchmark.readthedocs.org/en/latest/faq.html .. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/calibration.html .. 
_pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html pytest-benchmark-3.2.2/.travis.yml0000644000175000017500000003065413416261170015222 0ustar hlehlelanguage: python sudo: false cache: pip env: global: - LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so - SEGFAULT_SIGNALS=all matrix: - TOXENV=check - TOXENV=docs matrix: include: - python: '2.7' env: - TOXENV=py27-pytest40-pygal23-nodist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest40-pygal23-nodist-nocov - python: '2.7' env: - TOXENV=py27-pytest40-pygal23-xdist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest40-pygal23-xdist-nocov - python: '2.7' env: - TOXENV=py27-pytest40-pygal24-nodist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest40-pygal24-nodist-nocov - python: '2.7' env: - TOXENV=py27-pytest40-pygal24-xdist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest40-pygal24-xdist-nocov - python: '2.7' env: - TOXENV=py27-pytest41-pygal23-nodist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest41-pygal23-nodist-nocov - python: '2.7' env: - TOXENV=py27-pytest41-pygal23-xdist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest41-pygal23-xdist-nocov - python: '2.7' env: - TOXENV=py27-pytest41-pygal24-nodist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest41-pygal24-nodist-nocov - python: '2.7' env: - TOXENV=py27-pytest41-pygal24-xdist-cover,report,coveralls,codecov - python: '2.7' env: - TOXENV=py27-pytest41-pygal24-xdist-nocov - python: '3.4' env: - TOXENV=py34-pytest40-pygal23-nodist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest40-pygal23-nodist-nocov - python: '3.4' env: - TOXENV=py34-pytest40-pygal23-xdist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest40-pygal23-xdist-nocov - python: '3.4' env: - TOXENV=py34-pytest40-pygal24-nodist-cover,report,coveralls,codecov - python: '3.4' 
env: - TOXENV=py34-pytest40-pygal24-nodist-nocov - python: '3.4' env: - TOXENV=py34-pytest40-pygal24-xdist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest40-pygal24-xdist-nocov - python: '3.4' env: - TOXENV=py34-pytest41-pygal23-nodist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest41-pygal23-nodist-nocov - python: '3.4' env: - TOXENV=py34-pytest41-pygal23-xdist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest41-pygal23-xdist-nocov - python: '3.4' env: - TOXENV=py34-pytest41-pygal24-nodist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest41-pygal24-nodist-nocov - python: '3.4' env: - TOXENV=py34-pytest41-pygal24-xdist-cover,report,coveralls,codecov - python: '3.4' env: - TOXENV=py34-pytest41-pygal24-xdist-nocov - python: '3.5' env: - TOXENV=py35-pytest40-pygal23-nodist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest40-pygal23-nodist-nocov - python: '3.5' env: - TOXENV=py35-pytest40-pygal23-xdist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest40-pygal23-xdist-nocov - python: '3.5' env: - TOXENV=py35-pytest40-pygal24-nodist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest40-pygal24-nodist-nocov - python: '3.5' env: - TOXENV=py35-pytest40-pygal24-xdist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest40-pygal24-xdist-nocov - python: '3.5' env: - TOXENV=py35-pytest41-pygal23-nodist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest41-pygal23-nodist-nocov - python: '3.5' env: - TOXENV=py35-pytest41-pygal23-xdist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest41-pygal23-xdist-nocov - python: '3.5' env: - TOXENV=py35-pytest41-pygal24-nodist-cover,report,coveralls,codecov - python: '3.5' env: - TOXENV=py35-pytest41-pygal24-nodist-nocov - python: '3.5' env: - TOXENV=py35-pytest41-pygal24-xdist-cover,report,coveralls,codecov - python: 
'3.5'
      env:
        - TOXENV=py35-pytest41-pygal24-xdist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal23-nodist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal23-nodist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal23-xdist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal23-xdist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal24-nodist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal24-nodist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal24-xdist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest40-pygal24-xdist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal23-nodist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal23-nodist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal23-xdist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal23-xdist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal24-nodist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal24-nodist-nocov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal24-xdist-cover,report,coveralls,codecov
    - python: '3.6'
      env:
        - TOXENV=py36-pytest41-pygal24-xdist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal23-nodist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal23-nodist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal23-xdist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal23-xdist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal24-nodist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal24-nodist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal24-xdist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest40-pygal24-xdist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal23-nodist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal23-nodist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal23-xdist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal23-xdist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal24-nodist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal24-nodist-nocov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal24-xdist-cover,report,coveralls,codecov
    - python: '3.7'
      dist: xenial
      sudo: required
      env:
        - TOXENV=py37-pytest41-pygal24-xdist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal23-nodist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal23-nodist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal23-xdist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal23-xdist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal24-nodist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal24-nodist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal24-xdist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest40-pygal24-xdist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal23-nodist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal23-nodist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal23-xdist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal23-xdist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal24-nodist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal24-nodist-nocov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal24-xdist-cover,report,coveralls,codecov
    - python: 'pypy'
      env:
        - TOXENV=pypy-pytest41-pygal24-xdist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal23-nodist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal23-nodist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal23-xdist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal23-xdist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal24-nodist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal24-nodist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal24-xdist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest40-pygal24-xdist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal23-nodist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal23-nodist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal23-xdist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal23-xdist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal24-nodist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal24-nodist-nocov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal24-xdist-cover,report,coveralls,codecov
    - python: 'pypy3'
      env:
        - TOXENV=pypy3-pytest41-pygal24-xdist-nocov
before_install:
  - python --version
  - uname -a
  - lsb_release -a
install:
  - pip install tox
  - virtualenv --version
  - easy_install --version
  - pip --version
  - tox --version
  - |
    set -ex
    if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then
        (cd $HOME
         wget https://bitbucket.org/pypy/pypy/downloads/pypy2-v6.0.0-linux64.tar.bz2
         tar xf pypy2-*.tar.bz2
         pypy2-*/bin/pypy -m ensurepip
         pypy2-*/bin/pypy -m pip install -U virtualenv)
        export PATH=$(echo $HOME/pypy2-*/bin):$PATH
        export TOXPYTHON=$(echo $HOME/pypy2-*/bin/pypy)
    fi
    if [[ $TRAVIS_PYTHON_VERSION == 'pypy3' ]]; then
        (cd $HOME
         wget https://bitbucket.org/pypy/pypy/downloads/pypy3-v6.0.0-linux64.tar.bz2
         tar xf pypy3-*.tar.bz2
         pypy3-*/bin/pypy3 -m ensurepip
         pypy3-*/bin/pypy3 -m pip install -U virtualenv)
        export PATH=$(echo $HOME/pypy3-*/bin):$PATH
        export TOXPYTHON=$(echo $HOME/pypy3-*/bin/pypy3)
    fi
    set +x
script:
  - tox -v
after_failure:
  - more .tox/log/* | cat
  - more .tox/*/log/* | cat
notifications:
  email:
    on_success: never
    on_failure: always
pytest-benchmark-3.2.2/.editorconfig
# see http://editorconfig.org
root = true

[*]
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true
indent_style = space
indent_size = 4
charset = utf-8

[*.{bat,cmd,ps1}]
end_of_line = crlf
pytest-benchmark-3.2.2/.coveragerc
[paths]
source =
    src

[run]
branch = true
source =
    src
    tests
parallel = true

[report]
show_missing = true
precision = 2
omit = *migrations*
    *pep418*
    *hookspec*
pytest-benchmark-3.2.2/docs/
pytest-benchmark-3.2.2/docs/measurement-issues.png
[binary PNG image data omitted: docs/measurement-issues.png]
l´¢vѲbÕ€nààwÀ%î>ßÌŠ@]Õ^Qà•%†Ý,0˜ÙÇo£G;€‡€™ ½²‚ÔÉÀT`g` ð"p’»ÿ"¿üŠˆˆŒ$jûa7þ´ï®¾ ܤCÆ2Ì–×&à-ÀiÀEf¶»2Úöº‚¯ˆˆŒ¨í¢¶{ƒEw¯™Ù%Àû€¯¸û׳ç´ó ÃL%šÝ”3€ãÿu÷ϨM¯ˆˆŒ4 iKv°{Š»ÝÌšI' ÕH‡“E†‹bTykî~BßÍì&wÿ£B¯ˆˆŒ¨,§ ïî¹»›Ùꤶºÿu÷]¢iƒ£dx+Ä2jÀ}@Ø èõ×+""#ƒ*¼KVª¤Ên;ð¥\¨¢ MÈðVÊî¾ÀÌN.öw÷Ëͬ˰ˆˆÈ*MÞ%M ³B´…¼˜DªŽÙRÝìâ‹â¤&²äé¸"ÖâóÇãÙ÷,ä_ËS6Ï——fwŸmf;Äcjw."" ¼²Ð8RÛÇÇBØâ€^`Š™íO_;ʬMepÞ œüžÔt¢·BC8ɪˆõ†ÀRh¶Þðx-÷|m€Ïš S¤¯BÙXµ¬7ÕBn:4~žž7`©ØsŸgcrï™M§bn|ŒŠåu~ 30Ø—t¡…Âb>O~|ùÐ\kø>Ù÷ôưmf‡¦Üg|tq‡:0ÁÌ õƒ{ð·Üûö˜Ù>Àâóÿ9–¥Q¹é_kX>j‹˜·óaÐûoñ÷Q`}3+¹{U}󊈈¯d¨9¦Õ˹²¤PŒjðÀÙ‹îR»à±ÀÓ¬çEey40?Nšï›Ýo‰a³àÜåîXX1nŽÏØíî3ks÷yÑŸðè†0Ø“¦#7Œ»û¼…#}†b.ðÍÏ]ˆ£LºÒ\-7}òÏ7mñ½Ú-€)q¿+¦UkLçj´9-’ÚŸvG“’¿»ûI1]~CºŒn¶SPŒ÷«7|ž:©§‚ñX“»wÄ0m¹ïSÍ…áÂËÁ·ulzÛ_‘.ððg`ëÜç¹ v`º€/§æÆ÷àíÀƒ1+1m³ùŸGRõ¸{€ùPgðG²ïô2éò×M¨ý®ˆˆ(ðJŽçBN€–4êLîv!U6·Îî>uNŒÿŸ¤ŠaØ'ÂÙµÀ^ÿAªîjf›WÏGx™ljf»Gº6ÂsØ"B÷ßÍl¿`×E †Ô4`K3Û¸ÊÌŒPô/ ÉÌö¶þ Ü@_¸;>ÛNñþ׸û´ø¬•÷nf¶=ðéR·ÏÆ…²êk6ÜÎf¶&©jû©Zz`üÿ™lÏ-¤êy+0Þ̦šÙ{€¶©óãwUñçÓ×\"kJÑóè RÓ–ÞoKì||0Âî—€Ÿ?ŽŽ¿OFà½ØTݾøŒ»af»˜ÙºÀ_€Mâû=Ó¿5æ×Zq”`µ˜×Äwofð•^Ë-Ãê]DDDFX’s×m7RÛK€µ" |.î'5;XÒm,©RWŽ×½)ÆscÜoŽ¿çÆãŽû’ºúm.œüœTeÌîßG:|^ŒðÜA­˜¼#Æõ¿1ü/âï¹ï0>þ?£a˜Ë"ü]÷gÇß¿Eè*?ŒÇžèÀã¹É1¬Guà%àðx¿â±ïÅý«ãþ[ãþÎqÿ:`ú*°õ»kÇc3Õ#þ4ëνïI1¾½â±«"gÓðê˜OàO1íÞáp|Ì¿6RwðoÒ‰‹{Ä´/æèfñ~ˆÏúA`Ïx¯oÇs«ÏOÇý_Åó?ÍMGÀ\ŠefF|¯gã¹ûâ³䲘Íë³c~¶fÅwýÖuÓM7Ýt[ÕoEþ¥b Þ%ݲ*úØh‚µÙ,E%tlT<»# e˹¤CÎe`Ÿ:Œð·Tù¶Þaæ#üM&µ3]|'Æ•ûíÀÿ_ˆ÷vÒyyA¯Aî0àS¤+sí¡t|„ê·FXG:\ÿd¯ÉQÍ\+šU|,†ýi„Ó#⻟efcs•ÝL/ýÛÜfíTËÀÿ‘š£zºO<>/Bþ<àÝÀ‘<§Ä{ÏNjx6 ö0¿70=ƵC4ƒ¨Ät«çæ_VÎÂï¶À3ñ>wÇÅçÍÚ\Wb~Ör¡yB|Ÿì¹r„ñQQÅÏšeìóèë1ÎcݽÓn-à‹î¾ð^àøžUúÚ-vyqx—2ðF@,,Åm ¡È=–?™­ñ°óiî~m®*y¶»_Ejþà<_•¾NàdR[Ñ6àuf6‰¾º.r÷K"°yå iàÇî~™»?ìAlc3û~T*‰À5—Ô<`}R3‡c³Üýøh»W„¾sIíj/&5혟·»a"k7;+¦o¶l–ܽ7Â<À\w©aÚZØ:p>0ËÝÿü5ÆûƲ…ØQ8ÛÝÿŸÛIDOˆˆÛ"ÀznüMQY¾8 øfà’š8tf¿¥ÏÙwhÉΆç²ð[ÏMû룒ßEj†ÑÁàfö³Ø98¸‰¾“ýµ,Æ´Uð‘Gmx‡InØÉigœ¸Õ÷§Çý—suL„¦Q¤“ªšè;<Ÿïÿwn»ÖM½ÿ3qâX!÷&ÒWq=Ô6R%ø3À»HÕäïD ûD¼&kçÛnfÝôUO[YtûÑR¼wSn§€ÜýbTEó¡­Hj&`G›Y/}Uë–ܰåÈÔÏÓ׋BÖ{D%Þ«Ûir÷mc'ëÝ`—Ø)— Ôå˜?ÙNÄlúNr³˜þRÛÛj|Þr<ÿBtWV‹Š6RsKb˜£€ÅméB(wÄpê^LDDd1Tá]ú`Zx·|wbùÇv5U¸l˜¦¸0ÀÂÇî×€§bϹûÁîþÒIRg¸û³ô¯"zn¾ô¹² ¬°¸ûÁp¿Kjº°Z ûÙø;à1Råq 
©’ê¤6­³bÜëÇ{½¯Í‡´Zܶ‰ï÷†x}%7Ýë¤*iöX=7ÝŒa6q÷yîÞá¿@jR¤oY7gùç&ëårú{€­Íì·föYReâ{Hín³÷ß4>ÿ›ãþ Ò‰n[G…w-R;ࢋjî»ä«ÊY`_¸Ø‘Ty>%;0v\ŠË° Šˆˆ(ðÊCï²Þ²pÒJÿv•Íôâ&BWV Ì?ŸUl³JèXR¿ª·;˜ÙÌìë´~ÅÆqô¹šrãϪš¿Žálf_#„ö(°S„ÓÿwR¼ï\R;Y.Ž×^DjÊðׇí ßïÞxïÓÍìrR{a‹á UJ Àa:‹ñ[ãö‡®ß7³3ÌìlR{ØûI½ŒÏ½_V}Ý k‰z&©ùÀN¤Šm)»"ð¸œÔ¤dëøÜ‘z{¸ÀÌ~Cjß|7© É ÀÛÍì÷¤“ÔʤÊm~磔›?£ã±—H=?ÜOj²05†{ ^³´Ë ˆˆˆ¯¼¦a·3ßùi_&U!"õÐÿO#]° Dªâ>tÅýcY<šÔ/íN¤¶¨&U_‹¤ŠêÃñš¦¾G™tx?{ÿ©úy éD¹i¤“ئ_‰ðö,éÐúlRß°øþß߀DPû$©:y>éD¸fRUô!`FœÌ÷³­¤“±>E:\ÿH\ÕîFÒ pUà Råóîœm¤öµï‰áþ_ si|ž¬çŠIMŠñ‰é27š!Ì"uŸÖKÿ¶Í­ô÷&¬öFR3†‹€ÃcØ“N`{ŽtÂÜ?I'›Íiþ1àôµ‹þ ð»è›xzL‹úªñ÷F !û"Rï“ªÇ§ÆøÆÒ¿*¬Ð+""2P€ÓE–3qÌ qa„µHÕ˯ëç•W+[RP΂Iv8?µ¬¬Â—ÞÎÚfW˪¯ÕÜý}‡ã+¤*d֗쬖åÜ{fãès¦@jZ$õ40?nYÕu^Œ?kãûRŒ/ë³6{~<©b:?ž+å‚~ö}ªñV‹ñ”"tgMj¤öÀ«‘k/KßÉ ñ8ñ~½ óà(åÆE.ìW1w ³ñg'ŠÍ‰0Ü”û¾Íñü¬wÖ<¢7>ÿ„˜O è;1.?/O`¬GX+¼¬]ð˜ÜüŒ¬í÷YÀ¡¤5ºt¥5 tÒÚà+ôuY5˜+­-jÕÜýì1P”ï +?L ó÷{s÷›#|Ub¸¬òçŒ{Qa| aÆæÂ\vrXÇÅsssAúºäÊžŸŸ{m>äç?S|‡,ZC0-Åãù“¼j¹0šÿ.úº«çBdµaøÆïœµ^ÔtÓ0ÿVk¶ÔÓÜ|ÌæOw|—ÕrÓi iŸß1i‰… ©·ŠRÃ|Xš+­Yî}Ué^é§7‚Î8^ý¡áÂr>È6†²žo¼Ÿ…¤Ò"^·¤Jࢆiì%¡ÞÒóÏ-ê³75„¸|ÿÄÞð^å\H/.æùÞ'û[^Äû5N÷ÁLÓ¦EiãϦM¹aœùù“=—ŸŽ½oãwËæmi€ù°4;\Y»ï^Ù²ˆˆˆï•޹6h)ËÊùøÒÞ_Üãö*>×`ž_–×.é1[†iôjÞ¥˜¯6ÄÏÙ«v0Ÿ·HºPÈ‚¬· 5gÞ‘žvÝ=Ú8öšÙ4àõô]-M'üÉʤFjC<•tâ!fVŒ*DDDVi mK–uö;Rul{R[Ó¢&¬D;¶ €ÝIÍr~«¯ˆˆŒê¥aI(Îb7³)¤n¬n#]],;IÉd˜ÊzpR»Ý’zÕØ˜t2¡š4ˆˆÈˆ & KJ )ìÝý3;ø©_Õï“é; Id8þ¾³.ãN6>äîó²K$k‰ˆÈH  ï`&RºÜoÖ­Ó•¤‹ |¸€Ô AËÖU™Èk²È’Ž>d}8 ø¹»HmwEDDWz!5eø é2¯W’*½Ðw….µ‹–)»¨I é’ÏŸ%]áí7¤+ç©wQà•ņÞìÊkEàëÀâ©;ãö4}—ûYaw"°éÈ[FÀý’»7ÛqSØ^YRè]Ìlcàà Ò‰@"ÃÅ}ÀåÀyîþ¬™H…]ýàEDDWzB¾¤™M$]òµ µå•´h’z_xÙÝ_Î-›j³+"" ¼²ÌÁ·ÁWg»Ëp\>K@ÝÝÕ‹ˆˆˆ(ðʫ٠mêW†˜&""¢À+""""#‚z^^^^^^QàÕ$‘UYI“@DDDVfV óÜÝkš"²pÙЕÖDDDd»–¿¤º™Y䜺¦Ž¨Iƒˆˆˆ¬ìa·àînfǘٹfö6Oêf¦£Ù¢ ¯ˆˆˆ¬Ôa×"ìŽ&ÅS>áîGSTí¹Tá‘•:óÆß #ìþ ¸xp™åîuU{xEDDDVvUÀàJàL`p™]ifº{ÕÌ YÅWFíé,Ë®dj/"""+^16ËcHÕÞ@;ðð}`oRµw73;ÑÝÛò’»W5ùxeà [PW'"""ÃF%¶ÑÇÄýé¤#ØÍ@Ô–÷àpRµ÷@à“jÛ;Â2œNZ|Ømèî¤=~Pƒ€ª ‹ÈJ¹úÓ$an]àhàcÀ Àï–Üö¹teà-À~À|àÓî~AlÓUíUà•Ü c/ïÖZÊ&´¦…¦…¦‡¦‡¦¦ÇÐïÖøÿÿH'«Aÿ‚”ÅýZß Ãb;~%ªö*ðJ¿öºã€ë­€k{Ií„Uåµ%Üg)Ÿg%yey[†q Åwãªéi¯Á|ŠïbCüCñ]–Ç{¼Vód8ü¶V¥uÑHZO¬ªŸÓ€™ÀÓq+“*º¾ˆá z›€½HÕÞy¤j¯Úö*ðŽèÀ[ˆ®LÎŽr÷+4eµ“0Ø Õ@+¸Â«Ç`6ôÃuœ…Uä»-Ëk «Ð|Ôï`Å~wýVÎe£° ã(DÈmdˆÎ 
U¤jï{€5IÝ™}JÕ^ÞÜ¢)C˜ Üäîo3³¦Åì=ŽÄ2#ä»×q®JWãÔô[ÖuϪx5Î¥Ó¿ Ã`Æ™U{³¶½û¢¶½«,õҰ䨫“º;ùWìõ9©¿?­Œ´PàÕ|Ôô[u§Ÿ¦ùÊ9ý«N:¹­ü˜ œof®Òö„ª½«u¼¼tÓ©;x•ÅEDDV~YˆEjüCàjà྆«´5¹xG uÏ#""²jß–øÿ¯|;Hýö^Wi«é*m ¼""""+{è5úW{¯þcfG«Ú«À+"""²ªß–¿ùjïùfö'3Û@Õ^^‘U!ôB_µ÷,úÚöÞ«¶½ ¼""""«Rðͪ½WEð‡Úö*ðŠˆˆˆ¬b¡7ß¶÷,úÚöÞ£¶½ ¼""""«Rðͪ½‹à;ŸÔ¶÷JµíUàYUB/Àh`:ð#Rµ÷¨m¯¯,“bÃMd¸®CŠ‹Y^­áq[Ìëµ>’•ay/ >Ù—–GPÌþ.îæ ¯̰4<>Ð{/nøÁŒ à›oÛû#ú·íUµw˜Ñ¥…‡éJÕÝ;ó˜Y+ºÂ› /t•X6-÷øè5Rw>Ùóm@Sl, ¤ÃÕÜúhÌ,‘átë±¼ÐË«ÖɃÏ¥Üú¡q=’W‹uBSn'xQ¯©ÄðŘ'äÏæMSn'ÜÞx}Ë>G!†­/!ôB_ÛÞ³ÝIm{÷4³Ï¸ûù± /º{M‹‚¯ä~tîÞmf»‘‘´WºûÕf–íMrÁ~x䆱†=\‘¡ »=ÀnÀ¦ôUh³ÇÕŽ±ÀAÀ뇀¿/ÅrÝ ì ìë¢ë[â¹lƒã¹ÿ—´œë²ßòZ-ë]@Øü›t8»)ž·Xöµ¼Új¤,«EàŘ^ÅÜtÊO«lúÔ€ €I±¾˜“[xÃïÞõ5b¸‡rÓøuB x4Ö9;Ô[Äü¼'‚± 0²ù¹10!¸-!ø¶Äkþ<  œgf'¸û“Y¥×ݵ=VàñŠ@·™¡{˜Ç›ÙGó’»W¢mg?3kÎÍϪ»÷,üåš•"D( ÈP)D°}7°o„Û¬Š;øc s°}ß;;÷æêXG. µíý™5E/êõc9P…wåÛA©Å|»*k³dfw’¯<+ÛCÌìfRú»ûƒÑ¤AU/ÊõÇ}¤ªÕ_YÀ—IG®¡ï¤7lÔ‹ll¼a9÷îfRe¸T9ÞŠtHó¾Øð½täâfàR³Š±ZÎeˆ5EP(œæOžÌ§B,‹ÏÅoa+à¤&j7’ÚgÕÝ‘x:9lqÏ-jøE Kì¬E:Iî1àÙ¨úNl®1”¾8ÀÎG)ŠHCö¡¯Ú{°é(À¹î~dÖ¼A?³×>@Éðà±—· ·§šU}ËñC}9V¼srýûeÁb¢»?¼øyìÕž L3³“bܪ€ÉPm¼zH•×÷’ùþ¸"vÊÆäB¾ÝœÇò:‡¾>z󫬫³l9}>6%úÚü‹»EÐ^8¸8‚t8T˹¼ÛK ä6î¼}=”I•ÆSHM{ÆïN#µ îÑvxÈ­Ži¿Ö"‚n0ç»ûOÝý|w¿ »‘ŽV å¼Éª½Ò‘©;€˜Ù”hZ¡*¯ïÈ ¼±r|‰Ô½ÊÀØ´o"<“tÂC(fWs¡¯­Ñ\3[+‚ÑT7ðª|ÉP„Ý©MùyÀwI'‘´“Ú-IÕ׬› IÕßíãþtRÕëER۾Ѷ‰º™ô&Î6YŬAz£xìó1þ#ã¹}9‡‰eù銵žXκã~on;ºV¬Ã× tY‹[v’ÕÏc9ý\,ûǺZçT °fœìö\ô{»^Ì|1³RîVŽ^ʯÑv>[eËNiqŸM†Žš4 ³ù ëÏ!µ»½-V°ßs÷ª™e?’|§ÙmñØ R{ÆK€;ÍìB`³öŸ¤6Q­`eVÚERÅuƒXÆvˆåkjÙÛ#¼> E:‹ý ±¡¿:–ÁßÚ4þ66€ âuY¿¾å\xͪ#3I‡O!õ¥ye|†餡%õ™)²´;wï!l¶A,gïŠÀz}GØ6Τïâ*IJ¸ð…ØÑûG,ç£rë÷Mæ!S#5#™äîÓã±ug1o‰‘ÃéßïwV€ÊQ!yØ“t¢ÞåYÿ¼ê›Ww¤©FºçÄÿÈøÑ |?ª½Y—6Oä.WøP¬ŒÇNj&qJìE~‡t"Q+ªðÊÐ*©YÁQ¤6äå¯?'5E0RWdG“N2».ž{’tæE±<Ãþø©É¿?'Z IDATÂlÒ!Éèköðx,ã£H]—IýúËùy1ÎQZÎeˆM޹núºÛ0Öųc˜'ãï]±¼nï&]vv?ÒÅêÀµ¤&@Ú/~Ç:IâüãÙ¥ ¼¦¬gfÏÆúbRnÞ,*ˆN ÿ‘!ËèW»óœ½~A,G’Nb¼øÑ”AG¤–ÇFK]À-fâÄ^W4˜|ÒÝÏŒŽ±k,Ð,æ±¥ºïî½ Ÿ%»ÌbÖMJþK¶×دíŒ×´•¨ ·2ð¥0mˆï—qŒ¤q®¨Ï‘õCš]…ªHªX5Ñ×n·7–Ïæø¿D߉œÔ^·)7ìúªjÙ%C³ö¸Yo%Ùðóéëì?;£»m€å|e]6ô;>ã¨äV¶κ%Û–Ôwú¿€ÿe8kß›©è¢ÿ‹^ú_žx¤ÌǬ[²[Üý†ØNí¼‘¾nɲá³í[ð7w¿-†°9}=•€éî~‰™MŒðj‡Jxÿéî7™ÙzÀIM¬.Œ#ªãb§y nɲžd–uúéë}æ ¤#NºÀηâ"R¦¾x—í]ß°ïG´Ððã!°¦†×ŽÉí·D%X—]•×¢S¢¯WË…Ùlym¡¯ö¬k²üåVó=*´Ò¿½nã‰gmêÆæ†onx½ÈPj 
ÔdÍò<:–Ã|_çÙŽÝ(ú÷F2šWöN2Â6qf‹‰ÙNí¤#Ee`FnøÛH=¶TIíþ7¤ï¤ï,¤fm©³sFpR˜Å¶±›t²í@—ža¼²˜Ëb‚ï‚Ç;IUÝ[€ãÝýßÙPØUà•þäZÃÈ—p¿–[!û¯ÊÐ[[Är ¯ìoÓ–°œ[øï´œ›–sYË9,Û%R›õß“›7Årè‹ù t¤©fAÏ̪‹Ø‘~ÈÝg ðÚ's‰u ©©IÍÝÝÌzrãȪòÍ@o<ß›{¾;ÚÍv“º9(•¯Ejk[YŠïV ¯ª»3©ª ©ËÆïDU·”}fý´xåµ[I‹ˆÈЬ_ˤÃã¿¢sXÖ\i¢™MÇ&6ììfçL£•YïBùÞì¨å„££ÍlSÒQ†«íñ~“ã~Ø*ò@ÝËeG¢Ö\Šé¬Âœo«;•T‘>ÖÝïŽ]t÷ª^‘•)ôIív³& :°øP˜¿Ô0¼òò½Ù0;Ðÿ2Îã¾sV#]È!ßæ6?®×‘N"ÌÞ«…t²ì@ãl ½ƒéõ%«êBªgmuO¾UÝ"P®ÒDWDDd¥ ½ªê.ì¤Ô,X(+K£ù×:é„Y1®jÜ`øÁ¼Çâ<ôUu¢¡­®%E]^Yl1As0à ¦‡„¥Þ–b|п­î.¤.ç []UuxeˆVÅÜÞ²ÈÊ*kC§¾sE˲¬,a=«ê¾“¾¶ºÇåz`PUWW†èWw÷®øaµ¡Cj²ò„.R¥¤•tFµÂ‚¬Œëd'õ ] õ«uòª¹¾¨ª{2©_ݪªº ¼2Äa(™Ùçîþ3¥¬¬dŠÞ@º$ö¥¤K€¶¡£²ò­“‹eúšØyÓ:yÕ™Çп­îTR[ÝÜý.Xx±*­»xeˆ~tÙÕ}~•í]šÙêî~º™µÓ×0_d¸‡ÝyÀNÀ¹¤³¦÷Žf¢J¯¬\z€Oª~.ý+Rï ZŽWnUÝCâñ/“z`ÈWu5¯‡ùŒ”•($Äeƒ¿a÷ àeàf¶·»wh'FV’õN©ÿÍÓc™}X øwÜ/2œwÜŒ04=vØ&¹XÀ+¯(+‡|¿º€£"ìÞ lïîߌ°[pw]DBW†PÉÝç›Ù["Ì!]áçÉ?«Âtk¾ÊJ ËñšÀcž&]™èc±|+(Èpß~.¶:€çb9î>Jº¼mÖÉ+å¼­Ä|Üø°)©­î›£»±b\XU]^âùÔkf“Ÿ‘‘=NªŒÕ€Gµ P“æ;nÀlàh`OR›Ý¹¤& ÏÇsÇo‰Çze8Ê.hÐ÷åµ;qíÀ±1¼BÑÊ3_óUÝ£IÕú[€Üýî^SUWw$ðô$V¢?Ö¡ïRŒÅ¨,LÞ|ÑÝ磦 2<Ãî\RÅäã¤*îs¹eµ@j¦S%UuÄB†{à=˜··aü4°p(éD6í¼ ÿ<ÔXÕ | ØÕÝïRUWw$ݲ™ tÝí×R1ºû|Úé±2-Òw…ŸR‡ÙÀ©föÖhÏ«¬ §uM°:ðxìIúºsrú®{ÿhn8GÕ1^²@{ ©î³±î-å–å2éˆÅˤ3úßDjë«uòðÜyi¬êfmuwp÷ÓTÕ]Ef´æÝb&NÚ“s3+“*Ss÷C̬ñ ò¡¸ŠË@ã(¹{‡™í\Kªˆ=’[±6†ò©=ÙLÒÙï/Ò×Å“±ôW§YÖÏmÃ`#iœÃõ»5>6Ô#ÃÀƒ—å¬r¶&éhÆO€ï‘Np«°ec¤ÿ†ãº(ëJosà«ñÿC‹(eëä-HUÓcÝÜ2Àú[ËÆÐŒ#Ûáì8ó=0ìLÿNw÷º™•]Þz ±Ðÿx?éÐÆMËñý'·“úþûo.¼ôã­£ce|µ»ï·¿, e© Å8l|÷×jú ÅwÉš)Œ%µwì$uA¶¸å¸±:6ŸÔ~}¨–áåõÝm˜Žs8þæW†ßEÖúNÀ6¤ÊîÜA®“'ëÅúx¯ìgzE,£ÃuÙx5ßýIR³¿òb†Ï¶­šš¼;þÿ2éji53+¹»NWà‘¡7kÚ0%öìßM:,µ¼Ô"$,Ä ¹aw²æœ C³H'¦-i£æ 6`ã%²¢Õc<ëäj¬“×Ôd[.n~Kÿ#Hù&}1/&u5v+ð±èjÌ"©ª«À«Ðÿ&5n_'ýíœG:ùáI˜X¹nK:Cø`R;Iœ(+ì§ËëRÏ ûw³è¦9ù0xðàç€uÙÎá1OGªìj[çGâ±%¬“³ ï6¤s*ÞIÿÜdè´Æºâ0à R¥·)¦s‘þmuóUÝ…muUÕ]µ©ëªÁ®åR…·ÿÏ=ûå´gg.ÅF&ûqÏp÷û4çd˜xÂÌþ·¾Ë2Àmî>]“P†Ñ²üƒÜÎÜ`–ãðœ»ß£É÷šn/O%u·©_ä–XßÌ'UußMj«{+p|t5fÑVWaWWr¡·?*[UŽle:~ß+ß…š* ²¢×3U`Ô2¾~T,ËY§þ"+$Oźtì2®W³u2Z/¹BÛæ˜¶õX_Tb±{„]'õ«ûí¬­.©5aPà•E_­WV¹võWñ1ëù¦"+$%˜ÕcY\ÖJ=·,kÃ$+j9’ur~\šªC6oˆuDG¶“L:¡pSúÚêÞFj«›ïAU]^‘•‚ǑקHÝ¿íKê6î ñ|¿PUwDÒÉL"²ìÌŒûbY>i7UË-*¶G“Úï¾ø °­»3wµ´ªªë#“*¼"òª¶4|³K¦ÖÑÆDD–ÿª¨o2³-6wŸûåE¢i”¦”¯ˆÈÒI'àLºpŸ×ïqmXDd…¬–¬àî`n®g%ì*jÒ 
"ËtÖîÅìRÌŽÀ¬Œ{=WñYn²\³“\UÕ^YæÈ×VÖ õ}yp+f{â^#""²¼C¯«®(ðŠÈPÞsë^RŸ—ÛÇìÜ«ªôŠˆˆ¯ˆ¬Œ²ÊÉ3ÀI]5‘.±Z!pfûã^;!u/""¢À+"+KÜ@ÜoÀý]ÀÖÀ‘ÀKz³D~ŠÙسN9E'Œˆˆˆ¯ˆ¬„R¼Ü;p¿tùÎgHÞ^RßøêW5­DDDWDVBî=2fM¸O#uúî±nqཇF¯…¾¶¿""" ¼"²Òß^ÌŠ¸_ÜJªòÖM›)ðŠˆˆ¯ˆ¬ ²@ûø[ŠÍ0IWDDxEdUò|.ðR„ñš$""²"©cxy͘YVÑSeoÅ[ž±÷jr‹ˆˆ¯Œ„°[Ì]¿\W¼&ó¨ë D""¢À+2²°kíí©VÛèêªÓÚªJïòÔÕåÐj´Yη¿°pž¤kÌ+ôŠˆˆ¯È²0³‚»×ÍŠ-Xý³EX·îéJ[ÞÕ¥ ´<çEJ½XXá²9fv!pŠ»w(ôŠˆˆ¯È« »öCà„zyÌc´Oº–5f¡®€µ¥‰]t¯u´0÷¹éœõ)àÍf¶Ð¥Ð+"" ¼"Kv‹î^3³]³îï¸ð©“M•N ”ÕŽw…(D¡w-z9xß÷ñÄ5ß,ø|­æ_‰6½ºì¯ˆˆ(ðŠ,U¾*ðκ+pñYï•Á®ÂÃÞÃ`Zpf?SæÈäg®Ï8z©t=ÎzÅ`€«&W–ðù*¦ÐÍWß½/Ÿ{ûlL=K¯ÅŠS¯Ý5š—žl¦¹\£¨‹Hˆˆˆ FI“@V˜ÎŽÛ¶/ໟܕÿþú«`u¨ŒRéeÖyÃ%|âÚ?ó|¥™NOW‘æöÍÔ˜ÛQ¢XršÛ|a„Tµmn«ÑÙY¢XqOG‘BÉ)–žŽ"c&ô0çéfž¹í3ŒYã³LúŸGèh/Sé,,|]½Ó¨aTª0¦½Fwg‘:ÕmíÕô\gr[}á{Œn¯Q¤Jgg‰¶ð¹ìóÕ»ŒÛpâ ‡Rëmg.ãþy£©Uë´µ×9ÞÔ>ã¶ü³=ˆbyûùIÞrä ^êl¢Ð¦K8‹ˆˆ,†*¼²b¡FçÌ1Ôº'³Ã1_`§NäM:‘¦ÑOñè?Nãò¯mÆ”r7•J]ÚçÒBma¯ö¹¬ÕÖK…eê¬×ÖÅÖmóX¯­‹—žlfjÛ‚Ô4"ª »·Ïaó¶ùL)w³Ý„¹}¿€òL&í0]'t1ë±ôº±å*uŒ m=lØÖÉ^ís©UŒ©m ˜ÜÖîí³éœUb 5Öl뢣ˆ³{ûê8OÜ>†Mںذ­³ßç›Ú¶€rk—)R*Ï£PœËÌø îÔÞA 5*•†ßdƶW¹öž Ìzøj=é?•»Î} ›ÓE¥K¿a‘%P…WV<3Ç ]|èûw±>½ìGGý4?Ýöm¼üÔxFQc,UŽÙîc¼üè~ÔªMü¬u;æ{ó…i\ð½Í¹ýGÿÃ›ÝÆô;¦²`FM¸‡£®ù:ãÖë¡©­Î‡^ÿqæ=³=Ô›µöm|ç¾³¨õ(ª<ö÷׳åÙÇÒÓ±)¹ŸCy:‡ì=ƒoóIj•¨Cdz»Qjz‰½¾úM¾ù£÷ÐñÜ® óÙé£_ç¯=ÀŒZøà~'Ñ1}WjÞÂÅ…¹lùÞ38ñ¼Û¸ì×ësíIÇ2~Ã{xî®#¸îÄ/`Ån`5zçÛLšÇÑœJ©m'ß}&/UšúU¦·™°€ßœ¾+µÊÖØælf?òfN{'qåET…EDDd!U‡d¸,†EþúËu9çÇ›²ßÉÛò»?Có˜ûxã±÷³Î~Ëá<÷ ¬÷ÆŸò¶ãŽ£Xîá_§žÍ³/6Sí)0÷©ƒyú¶÷°ÝÛ~ÀFo=ƒŽoãÒ£ÞÅÉmÏóí>ˬe‹}Îa£½Îgæ?Êg·ü$[î2›ze4/üç]lºÇÅl{È—é|i/þr¼ƒ9t¾´1³¦I¹¹—í÷>…Þëð×Oý†zÝØi¿ÏRí™À?ù4ošÍåÇíÉÌiG±á>çsèW>L©u&w_tÏ/(Ó;§Èüg๻g½íÏçMo}Œzµv;›7<…yÏîË›ÞûûÔ,8×6¸Vvê8OßøvvøèUŒ^ë&zçoÌe_ë•»¨tèw,""²ªðÊŠÕK5¼ÞÌUÇ} ¯Ž¡Ú3¯7±þî_c›æò-¼xïaŒÝà2¾wõÜ9s4Íoü6¿;ô*~þ¥7²Ñ>Îúo¹ˆkq3OQd»qG0÷‰­¹†ÑÌyòݬÿ–S¹ã×ÿàrFóÝ“ÞOïóm<÷d x™5wø 7ýú€3ùº¿ÓñÜv@ §J¹õi~øŸp(s˜°éµÌyü`¾õÈwø/²æ—ðÒýïáÛLd·“nc§÷äu{t0ãŠ6~ë¥üççpݧ°Þž³qœ5^Üð sëMšç²á^£ã™=9ì‚C9èƒÏrGçhÚ¢Ýo½bLnïáWMaÁ »Ó<îßüå£7³çÌ®ÿÊûxà’}Xÿä»™VEY‹’ˆˆˆ¯ OÍ@­ZÄ =ìýÃÏ3vR/ 
iâñë6à+¿Ë©ÛNàkwÿ˜Z¥ùÏìÁ¡cÿ‚W‹Ô½†çÓ9k•ÎVèfí¦sfçD6êì„â¼^äÊ0«Ð>eÍÏ]sÇpÄwïeºùñkCa“¶~Š/Xõê=4yŽî¹ë§Wk¦‡Q§ÒÙDSO’×y&þøp…bj'[(ÕhU§•ô:<ºû2ÇëÐ6&k'›ÝT£Ðd`^bGð•ã¿NçË[³ß§eÿ£žâçglÇ]g_H±=ÀšcÆ×)á8%(-`íÏdúíŸã­ÇßÂgt7w´3º½–Þ®µÎ34óÂ}©9C¥g]^ìú0̬bÅùÔ{¦pÕ—¶åÓ?½‘ÿtŒV{^‘©íŸ fu6mëd;:YŸޏx :_Ú‘bÛ3ÊlZW»Ÿ¹ÏìÀE_|„ë>{«­ÓÅ•GBó¦5¬î`ý»æ*˜cVগhý$ÓoÚ‡÷Nèà„)/ð½µ¾Í÷8•vœ‡QbÁ £)Ñ÷úByP¥Pê†b¾¯ÛþïS¯§aP óÅh_û&þrÚ lºa7\ù.À»n'må*î¼R¢„ÓJzµ™æ1óôm§3nÃ+¸ó¼³¸þæq¬×ÞMOŨw´usÅ6¤{öV”[ŸàÃWÀGnÛ›oÞõf6Úç4Ü'þ¹㩪O^^ÖKaêµ6N Œ»ŒÆ]Â%GýB¡›7u¿›7š7}ô ªÝkÒ:ᬶƙ\òößÒñÌVlVï¤Z-ãµ6<.øPÆñZ ^mf%¶yÿ÷Xð›i[㶘p)/ìÊ&o¿”¹%êµ6zç·ÒŠSÆ©õ4a€jjFPi¦)žóZ^kYøµjêM¬K•‰›þžÙFûêqàøK(—æS(ÍåŠ~i׎oÆë¥…¯¥ÞD½2š›^œÄÉ~bósœ÷Žsé~Øh«@OW‘)ôpç9ïÀ½‰ñ›üï7ƒÕ_×Mûëkìû­±B'O¿“_ÿe «µ÷èêk"""S“YqZZi´°Å¡ÿ¥gþ—©ö4C½H©µ‹5¶~’=¿úk¶wqe‡œúë¼õp®ûâžÌ{q=¦¾ùtŽûõžÛ¾ÿfî}n5¶˜:›0Q¬¹ÓŒYûiv<þV®ª´SØQiaímæ³Í‡>Áœ'× FA^‘E1w]¤iXÎ3sw7³‰À@gü-‹›i`GàVws6žåðy‹î^+˜Ýà¥Ö-ùNçî̦i‰/¬cŒ¡ÂTºh£N30ã%ÊFSÇh¦Æætñ­T(PÇØˆNªOÑJ­3]Dbæp#£x±³…–¶SéäZx™"EŒÉô0–*0š 餄ómL¦‡QÔÞ_^Ö —iŒ¢Ž±-óÙ‚nWÚY¯ÜÅFôp+£Ùœ.¥™yqjÙztõ»cʶ£“[M'Ê1m6¢“qT™F”(ļ¯WŒÊL¢Ê= Ï ¤§£ÈÞí³9|ê©Å9¼­VgMwïÒåì„{³£óI}p4χ÷Œv¿´Ùìk½p2pW,Ë éTÑÀæÀAî~E¶Œií +xÜëbš³N6 Ø¸ÏÝwÈKSUdùQ…WV¬μJ™[;šS–¯¤LŸ]¸ÜVO—è-×鬔¸¶k<õŠáåt)ÞNg¥Àí]í4·ÖÒkqî%(·Õ)´9s:›øU×DÊ­ušc¼÷t´Sn­S.;àLïlfFµ…r{Ç;S°Í.<½³™BÕžöR¥‰é]Í ?Ç-c¸µk,¥Ö:åö:Ou¶òDµæö·Çûd”x¼³ ª¤q•a^¥Ìµãin¯Q.÷M›‡;GQïJ PNßñÁÊ+Ÿ^n¡·ì4O¨ n¸ò+‡+—†Ç{,(´9£Û^ùÚæöþ5·¥à›º=—}ž¶\Ðlk¯Cî}ó¯o|ŸÆq/j”ÛêDAù•Ï©W‘AE MQàQàQàQàQàQà^^¶Ì ˜i}."²ºÒšˆÈÊÎ}ÉWÝ33 HßÕ ¨á®ËR‹ˆ¯ˆˆ sf=¸OÇÌ^bû«„zEDWdé¶¼éo/ФÉ1ìf¾ ,b}­1¬¥*¦>`ˆ[ÒóK7L²èp9¸ÏmVxE…vé¿Ç•ÀCÀA¤¦jµW¼¯Yp,°PîÄýÜxÎþ;Óà?‹V"" ¼²ª)˜Wk`ô6¨WŒBYÁá TzŒZ­)ÂN}9¾ûÐ.)Œ†¬|ðJÁqà iVĽÖïùÆ öÊaÒ¸³ñdÍÜ«ý^“ÆS0Œû‡Ëzøê1ì`¿G1ö.Ë‹ æfc€Ë·äžý0fo>ÙáÞðž†W~v^Y%YÚvsõÞ]¹ÿºqì¼ç,˜1šæèHª,GÍ­uêÍNçK۳ܽÇÌÌ_›pb w†¾Ôß@'â>3ÌJÀšÀó¸W^S‹aÖª¸¿°0fÓÁ½ãX èÅýy ¯š5 0+£€NÜ{_2—4]ûWx'/4|ÆÕܧã^ïWMŸ±L^Ľ 3_ÄNL çfŸŽ°ûhÜÑÀ/€÷Wà~Y|ïZCÀŸÌÃ}nÃgÏ>çš1¿_zÅôQà•UÔ)ýzõDþøÙåÁÏsðÚ3iŠ0Ò„*?Ë[7P¥@Î{w{?½ó6ªÁW†¡×fO¤³âb=Ì %(ô¾º fŸv.¾ LÅìQà`màÇÀ†À“˜}÷.¬<Â'ëM˜ý8÷çV4Í>|$†©av'ð ÜÿáVàÓÀ‡cúU0» ø<îOa¶pfÇá~;feÜ+˜LÁý]˜µÅì—ÀþÀ!Àæ¸OÃìÀ©À&ÀÌþŸñ¦…UU³÷߈Ï8³¯]ñyú‡êô Ø;¦Á9¸ßÏ_|ظ x;iÙ¸³Óâ±m€g0;÷o5|†“€mãÝÃìÀidÍ)Té^YåÒ®{ÍÌ 
îþfö]æÍ8‰¯µc&_O±ÜfÔµ\! fÌ{aºf¾¸ øžeahheÆãÿRÞ}q?­j6Í(ÖÆ§Ïþ<!lz\ñ¾ª/?‹v¼÷av7©ª Ða¿1ì=A8›NR¯ æÞë ¤¦[å^·ST°OÀýXÌŽŒ×¾÷ùñºã#˜Ú"¦Y© C à}Ͳà»Nü­Äg_8 ÷3£¹Å­1ÌÀ?ão øîÏ̓#P[¿ö¾"" ¼²J¥Ý8Ñ&ÂÔ…À…fí¡ÚL+é «,? §¹Õ«µÏõåª×¨ínª&–¢Zzéðxt¨ý’]~øß7›A±X¢Z]Ööž]Ìî[x²ÌŠ÷¹3÷سñ·³5HWp˜¸—}†ý+£ÙÀÍѬ`w  ˜ãY#†½¸x³ÓkqŸ<ÏO UM«Ùú.ôSqÿ!fHs|–My˜ 4ÇðÝñy÷×n<,È}×'sÃ-ËJ1w¿þ—ÑÌãyÌ®'5É*Ô7Äk¶Ãì¾ØY¸øé+Ú3‹ˆ(ðʪ|-ÚMz:¡H†ÜŒ¿³MãÞÃÀ†˜ñ×HÖR„{ ÕpW|®£0Ù†R À¹„:i¥ùÿ³wßqRÔ÷Ç_Ÿ™mw·wÜM@Š€Š(Øbï%¶ØãOƘX°co±÷c%ÑKì¢"v¥ˆ H»ƒkÛfg¾¿?fönCÁ(¢|žÇ=övv§ì÷æfßóï|¿%'Uáôyáö€ÈÀ¶3c®Ž( í ºJ) ¼J©µÉ{aÐk½®_h{ü>¡È23É’i‚žJk~£áñ­X+ûg`DnCd0"»´;}¨Â˜F‚›Ò~‡ÈÞˆl‹È%ÇÉ߇ý÷ž ¼‹È1ˆ ÛûžLЮ¶> ×ó€kÙ.\σ´5[(JP:PDP{Iø¹ÞçÝ‘€ÿŒŽAûáð8"Û r p{X¬ÉvÊË/ib2øÁˆlô¾0‰ çŠbàµz§¨(YF¼µ,›§¯!òDn%èÂÌ^˜…á †^¥”^¥Ô/T[Ÿ¶}îðïMní0­åÁxß/ð.>béÚÌz‚šUo™i_¶ækcî#¸ñj‚®Ùž$h»»9ÆÌCájBŸ&h«û%Aß´w„¿!¸„ÿ p3AÍè¿ÃyöƘ|®KÐñ«À]á²®§8¸Dð¹?'h# mÍ>#¸Y®†Ü1=;ücž ›Š|IЯd»­’›ßba/ý¬NX–¹¥¶7x­:¬8iZêoòË>&§Âc±!èÊ.ú-ÇäâM‰C€OŒ1ÃJ—¥ ¥V½iM)õC²Â6¼¿ î[€H/„O¯ù••2C>¿ê½4«°Ì4åoØò(í¯íf:cêJBii›Þ ø{i¡MJ‚ª„Ÿ±…¶ž Ú_NÐv™p½Kow{á°í6Â&”Ìïµó9–¬/ÿ忇Ë6KÍWìé¡­ÌÒíÌëÒÖKÅÒÛÔV;ÜV^ä”Rkä—“RJýpŠÁêÚºÕ`ø5oüºe ‘%K%—s¿w0Z¾›¯•›ÖÖe™„5¥þR½5¡PJ^÷0Æ´N/Î ÐKØKÂÊ-gÙíio{‹ó®hþÒϱôëmïÿ®Ð[¬%nÛn³\™-»œö¦-» ¥å¥”Rk ­áUJý0Ú.¯O0Ì®¢>؇,‘‹K»&øÞÚ U+;íÛ¦¯Ê<Å®½¾ï6~W0\Õ×W%h®êçÿ>娔Rk­áUJý`‘7|žL»÷yD$^Ú+RJ)µi ¯Zqz .c–;ªÖÞë­ÄM6Å÷Ö–N,À¤˜1¦ ¢#o)¥”ÒÀ«Ö¨°k™ ]bAKC…û„½’£µ™ež¤µô”RJiàUkZ°cŒ/Á]×û[0X°:ðD‡ ^+0–Áoöm{ ñø3Ƙù%'B«´Ki‰*¥”ÒÀ«Ö¤°kc|‰K?‡ læĬ0¢ÙeíH¼,ñ)H×s¬–ôU"r´1æ¹ïz•RJ) ¼j»>Æ‘&b cËžòÇþË®3ZBkÇ2¼¿$ÊÝûû/|t™¤ÝÇEd00EC¯RJ) ¼êçÊî-’ÝalÙýþ;}ÜØ1hÍàékk×î ìZ‘a×ÓÞä¦SN4g½ú>ÇcN“¶A”RJ) ¼êçÇÂê[¦À%;ŒaÉuµXõq »k«ú{;qú©Søë˜/"‹ó;‡S=-¥”RxÕÏQjý8‘<ëÕf¨ò7np, ¼k«DÌ‹e¥žFPJ)õ3¢O¨öhíZZÌ)¬ÝÔ)¥”ÒÀ«~´+¥û†RJ) ¼J)¥”RJiàUJ)¥”RJ¯RJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥J¹þOÓë€ ¸®–¿RJ)µèÀjÍîÒ'_ŽgH¬†/œ˜¾C'¯FŽm de•Ëfu–RJ)¥W©ÿ•/8.5§¡Ú‡¨¼À7|œÄu­5ÔÍœ§cE¤ã¯¾Ïlæ/v(XB·ÊüŠÃ¶/8É5CÒaÊ Ë!4G`RâG/¥”RJ¯Rÿלª|å¯ùIŸdß4ÌwèµI l—#[¨"W(‡l£EÍÀ t,À§É¶²écÔˆu™0çÌåÊîݯáÙ)Ÿsß!'³ÿà:Ü&GC¯RJ)Õ>½iMýô >¶Õ‚ûŽgÑsDZpÚQœ¶Õq4äq÷èMpzäHuÉñægUìtÜ¡ þõiœ~çPœ®9ܘǻ³Ê9çþAÌKGØø¾ =x8çŒØ§kŽŒ!¹^š7?«âà³÷bŸ“öã©w;’ÒL:g‘ ÝS6lw†u2gÞ½)N§|ãzÎ>a 
nýeÑI”žæ‘ÿÀØYOòÕSçòÌ+ërçø›yèéd½¾˜ò!–åQb[5qÁEƒ¸zÔÃlÒåœ>o‘žRÖZ2¾`ö:ókø8ÊþâþÒƒ.åõ³qt'RJ)¥VDk…ÔO,¶øø&Ê€³N§çþWйâÜóÞlÕýjÎ?ñsXbqû˜Ó(Τ1·‹›öá‚mdJÝï¹÷Ò P»˜‚_Iu|ó›¢ñöC‰Z yúóýˆížæñWÓ¥b< š§î…#é˜ǨI»Ñ·_׫&é,d^Ó1Ô¥¦Ü™ÄëÓ‚¤G$’!ïUsãn—SŸþ ëT¼Ì´º#9oÛ˨oÞŸž•O0uá^Po³ ¾šõ:Ü• :5Q0•$"óiyzoοá}rn«Ï\‡k^€:>Àû/ßAzRr©¶Çž þG›óý©ø4Uÿ7’²~£™ß² }:Œä‘³F‘žXN*æé¾¤”RJiàUkúnhŒM2’F¬ U±O˜R·—Ý9–DX’ÝÁe@· Y¿ö.û|Wžüx §€%öð KnëL®s²è,Òné«;“õjÙvÝW©ûWóŸ¯aî fÊÅ71á£J,+ÍŽ½_dñݨ{¢Œ®ï“u;6ùB‚Êħ}ò4½TMe|±O9瓨{¢ÝR“1– ot`Ä£8a³ÿ0hӿлúŸ<7å 6ãçT±ýz›u Bî«(Žå’+TóÛ;FÒ­bM¹šºG:’Œ-Ý×¶Ã_¤@Ü© "X’aazc¾y{’ݳ¤µIƒRJ)µ"Ú¤AýÄ¢ày–ä™tËu°Ašš#Tø·¼}q4Q=ÎþKr̬ëBùÿ5ÐmŸ:Ž»gsvúÍ!t­ÌB³ç@´tk|À`[9!Ö:Ý`LÉsc+èNìYGÓ½òYæ4ÍÔÙ³y÷±€Ïµ-8¶Á7yß!÷!å‘÷ʨŒ¿Ç»×íDcn C.½€ÊCêÉ,3…ˆ|z` ÈRMcI8-uqœ¤¯;‘RJ)Õ>­áU?=ßXx&É¡÷î€)s£|xÎ0æ6îÎ&o…i6ª}Œ÷æžÅ¯LfÈ&“xüµßòuÓ¶L¿ê`<“Ä5aXt bäý$$Òô«~šw¾>™?n7™|.Ê?Þ»—>¥oÏñM%óª æƒcp½Q»ȱ)˜x0†kð|ŸD Xõ¢Ø’ƒòDì4‹39ëÐÍùèƒMxoÖ‘€Å­ïlÎæ½Fá›® ‚u¹Á7q ÂÀÓqÒ¿Oä–±±ÃŸñæ-#i|·’ –r~°hÌo@§~%ï'hÈl=*žcÓ‘žR¶zÏPJ)¥4ð*µ{ŸÔÙt¯¨£<:…禜‚ÁÆ–,‰È"vès ¯œü4 ëÊ„ þÉпFxëË?òú´ŽÕÌ_MÇA-¤^v©ŠDçd.µ¬.B‡øÄì&x½šQÇÞÅö÷D9î*y/Ig6Çîx OŒ?Œq³aÌ´‰T'&Ò1јU±éäœ2òOÕró…Ü5|øÍ¼ðÌ(v]¿‘LC²ÊÒ”G'МÝc b§éZþ*=«>à±ß=Ž;; U¬”RJ©öˆ1ú=¹FþaDÄcD¤˜¤ÃG‡bí_û\`3`œ1f›ârVr¶1ƳEnñc‘?0ù”éÕ1‡›³WOÛÐ2ò@ƒp&V´½¾Y3|˜dìWüj³zˆû0- – ŠÅ]fyA¶0\…œÐmÇ%ði"ø\Ž¿Üо­íhÛ[f{rëæ #¼ûi%›o¹8ض¬söYáÂéœì%ýuœL³MMwù÷Ú*¼`zăuóA[åñåŽ÷ciÌÙ¤v[LížÿŠÔe»¹Æë¼‚(‚1DŽî&9Ö ‡–óXLä²<\L ÷aYÑ®€r`°¿1æéâþ©Gõ“Sá±Ø_\¶1ß²/ç!À'Ƙa¥ËÒRUjõÑ^µi Û”»¯ÇÀ–¥ºÛj"•ôùÕ¸_Çq½¶¡xÛ ¦.€Áõ-2ï¤è\™–óv‰¸ Bm;!Ô‰™p¼â•Üî˜!ýuÇ6l¾QSë¶á’É0P{²ÂÙ“1Czf‚dÜ#ÙÁoÿ½žÀ’âÿ«Mãœ8®-¤,ÇÑ/N¥”RJ¯Zã¹.GOp€T;7a¥’~A"8–YªÖ¹döå¬ep’7g·.çÛ7fÕ·¿¼Ýe¶Íõ¤µ¢ØùŽùݬà8ß^6¸à8+ñ”RJ)¥W­Yœ•*Ìv/߯Ìì«£YƲëpVeûœ•˜®Cª)¥”R«J»%SJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥”RJ¯RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥~ÞŒRJ) ¼ê—@ÑRPmr€ˆl- õó?‰…èAN©µID‹@-t‹É¢LZ˜¤ªK˵ÀñµxÖRnNH5Ú¸^¹™äcŒÖöªŸ'cü0ø m?>ºO+¥W­_¾m¿‡çE¸âÍxëÁ»áºÚàeGKh­“j«ãÚSÑœïç#·„/Ø@AËGýüNëÅÖš1f à•¼fcŒ§…¤”^õËæ‹ˆE_^aïðöì Ùb]‹?ïü½Ê]"šy×nøØèÚÜzøf¼ðñex~ ±ØÅ}å[æ^¶Ùƒ^:VkFÐ jv{ã€"ó1ÀËó2Æx­M´¶W) ¼ê—ÉcDDÌ&'"‡ ò°ywöù<â|-µ<+À<™lvšˆX¦xIxùP!@ª4èúЬ%¨ÖŒÝ€@Çð÷îÀPàtD&×`ÌËd¥”^õ ½~Ø>s¶\zé\rÉNÀVgð XZ[·ðñü­Ó>þM*õºih¨ÿÖ° c 
"Û„Ï-À¸°HKT­AâË<χûêfÀãˆ<‹1‹4ô*¥Wý²C¯i 6_üðš–ŠúŽšÝÆé ìBÐäÁf~“w ßÞ B©ýÐ>Ö£®Dô*Ãé.A-ð¾ÀDƘÏ4ô*¥Wý²C¯/mw0k­®¹°ÜücÔì‰÷ ‚Z³(ðʮƴ˜ k›HõSÔüðqrxR"=€½€S€ቚ l¼€È–óM¸¯ëþ«”^õ ½†Ò;˜•Z>@»wڸ؞ ÷È×ñæ›°ÃZ^jÍœÈKx¢6ø;"ç熯¹ÀºÀ¿Ù(\Òvò§”ÒÀ«”Z Cp‰Wd AíØP`‚šÝB "À…3íd‘Æhfj9“o ®Å«Æ4ç!òð Á• Ø8 cnÜ[ÄA›å(õ³¥#­)¥¾ïqcp pmÍŠ×Ý17 bÕ ÖÜðë‡ÍqcÎ&¸B!aÀ=‘.ôßi¥4ð*¥Ö®˜>N Cn±×Þ(Á ÄçaÌð°&Ø›¨—Õš| ƸáÍ—7¯‡'o.AfÇ Ýò*¥W)µVÞµ¹0¸Œ1Wéíêg»om|/!¨Ý„ûûAˆÞ»«ÔÏ™¶áUJ}ßÀ; ؃ ›§iÓh‡ýêg¼g·Ž´öðAût¬Ÿ nb›Ù ¬¯´¤”úÙÑ^¥Ôª†>f1挙ˆ1 ˆØvÕ/€îãcÂç.PæÀF]õ{S©Ÿ%­áUJ}"6A ˜ÁíÂNý"NéÂÇO–y^¡E£”^¥ÚÉB­Þ´ñÛ/?²ÂFŽ–Ý?Éÿ°+YárôΡvþÆèU?‚Æ¥v¶Ù”Rx•jM(¶i«ñÓ/ãµ[>Ü' %ûÃwí¥ïÉ›ÒA.T»ÿo£áéÿÚX¬Ë,ÈúY-ÌPëÿ“–Ð "#ª¨ºÄ³DC¯RJiàU«!ìÚbß øÓ*étœ.uP°AÛ_®µ|_0¾ÁZé¶¼‚o ÆÂ íù©ÿ60¾ë§ãÌ–aÑ©KX²­ˆl´hèUJ) ¼êÇ »¶1Æ‘má¬ûø}̼ ^,Œ¥7¯­½yŒ bƒx«pi؉•úÿ´#ô†üìqèt^¾8×s~ئW›(¥”^õ#ÙÏÂvà¡ÛæBló“z³‘R?—qÀ}¼ôÄžT³¹½ÁO)¥4ðªØlf{±–l·d!n´ŒÎ®–ŠR?Ž£ÑNÓd%¨ý0' {‰¬“0fNZ›5(¥T1Fý ŒŒ±É G¿l•Z *ŒEÄ®,û³:Öë’¶rÔÙ«ÿ»d¨‹¬êç Ê&½ÚÊG)¥”^¥Ô$†ã%q åᣃãçh´Ì`çƒô!™Þ’š%þjì¯Õÿ7Ô,JâVfÝ9Ò’£Ñ®Ä)t$™¯Ä)¸¤-—´¯”RJiàUêç`>Óâ_òQùL>KÎä£ò,ö–¤–Tà¸ËÂï Á+SC쓖ã7ÿ3»ZîwÍçâ²2ÛñmËñq¥×ÿ’±0tøïŸ6‡Ñ:„y IDAT²÷ÜohŒGHøq’žWð]\«'ÉÌóü}Ý1\ù§&¾ÞÕà'+û8ÇnÌ!÷ŸÆ­ïO ]ž ?«RJ)µ2´¶D©Õ¨Ü<ë—ÑqüôÇ-8á¬õøÕ­ ÌÙù†ÿeCÈæh´»‘Ì!Õ8/ePØžÔ’ ì"Iæ{’ÌlOjI–FÛéE²Å)†ÆH5tÇÉ&ÙÜn™Å`Gˆål’³Ž§ÃÂu¶2ˆT³Ó‡dzRMÝq²øýIµô'ÙÒ'»„9±þ¤Zl09î$ÓH6—ÖPÇqŒR…“ïC2ÝT6–M¬‚E¢°˜¼Ý•šì’ÍYÒ­í‰s¤¥ &÷7÷|ShdÖ¾qj>¬¡×ýqª&¶0§qÜþàMœ<´/‘Œ6oPJ)µ*´†W©ÕÏOP=ûYwt=ž-gr4õÍ€Õ™šÌ“\Ñÿn<Ý#Ó]ˆdkèýÒ©|ô@'pÏeð 6‰–<õ5 |³=x‘¾ìvë)<õZGåâ ßãï' IþPŽ»í ®m(D,Dº±Á™‹˜³ø±îluÏM¼öä=ŒXïMÎ9«š~c2iLª3CþÕ™S>æ_gx¤;VÓç¥3{G ÉÜHÎÞô^ ×L$EÑGóâm[ѧé4¶<6Oº"C}w/¹×]k0´s0ÑÆC8g›‰ÜsÚœrí\ñÝ8èÌ2ºN˜Áë—¿ÎçU>Y«™ù;~ÇmÀ¦wt ß ³yëüyø/ýÙôï]ÙôŸ ™tìÓ\> Ñìî¹,9˜³þÐݯ¯gÚ¡qÔþBC3óz/à“?E([<„=îÎÒ耡’u›ÏáÎõÞâº{’tþäpÎ4›t  )÷5>ï˜fÑÁr7gø“ï‘OŒbfí3¤; gÂCÐtø9¼ñŸ)PVlZ¡”RJ­ ­áUj5j[°[˜¹ç]\2À¥¹‡O¡6Få'›qäÛ)b…W¹äWùê]¹÷w³ó¢“é:í`zm;Ÿž wx‰Y÷òö½ýÁ=‚è=c¹ùÁ)|R>–;vµ°sðùÕÍ CàÒGÙë­8ñ)Äl¢uW2ᮡ¹Œ‘÷?É‘¼ÍÝë÷gÏـߋî}Ÿgž=‰É“ÿÆ€:1ðÙOõÄ¥Ðã2¢ÇÌcb¿ ¹÷öãþ“Öç€ú:&ÆveÈ»3óQ=Ó¶nœgÉ5L¹zSÈœÇ-AòuLªyƒ‘×Ö²þÈWùüŠÇIׯHWü&fF} e‚ÕÔ™­—4uÊXLJ&ÒÀT2N95žîIJ)¥4ð*µ†*ÏÇ+«¤ç‹Ã¹à¦ñ:å™i½ÃýÇ?Á ÿêÇÀ]Ò,¨°^äO·½‘k±<7Á©â"Qª&çh²Á*Êò€iáËxކ޳¾¢>ú 
¦,EÊ¿‚žÞ²w‘MÆ(ÿ2EÞŒBM’®Y°]_«ƒg=@}Ç4ó¢Ñú® žzõÒ˜ˆM$çQˆlC,s#†>ͱÇ‘\ЧC®wÞ°qò6N]œ¼ÿ"…JA ¹çVð#xæè—YRé @¬jd,œ&Ÿ|‡,s£ÐË‚ •¤ò]!!i¦@Yq>¥”RJ¯Rk$#±–cøÝ¢»¨³÷£¦îö»ö^†Žyž{7KPْم«Ï(Ë^—a‹=ŒñÁ߀åE¨0eàŒRIÏl”òņ¹åUT`¾•ÄÉ¿Åäš©:ß ~‚¨ïõ#D°|<¢a€ôHQí ¶Dpü8Õ¾EÚŒmãäÏcTÏi¼te†Þx 7<¾Û×F‡‘†Bük ­=/D)#ê;Ä|ð“5 ¸»…ùƒîfËÿÍ¢ßý—º($±pL‘­é¹xS[X¸ÃnÛòöþÇuP{,>ˆãwšÂŽìÍÎÿþ¿½7jáhèUJ)µR´ ¯R?Eâ%’ï -=¨É|F>þ2§ì ¢GÝúüæcƒ_¶ˆÉ5¯pö;£8kÂh®Úú}îÙh#H›öoðó4EÖc yZú>ȽëÝJç/_æÒþ·³ñ ð|‡xÚ§P1"ƒ'‚ä’T¥“tɃx ¥Ënm'ëR@¯ŒÎuðØ€ƒyüß·³ý´›¹z`†%C,"éî¤òµ¯Ñâ¼ñ·çü'åî3ÓÔoö{†œü;jæÈØk0ƒQçôbë?½Âõ=÷ç7{}ȈË[˜¿c=Ó{®¹,}M)¥ÔJÓ^¥V£°Á*,ä³Ã“”máãÄ| ñ™žÕ¬ÿ¯¹ø‹.PÇ #&r÷ÍU<±o·S–%Cqäi‰ ‡Æ× /ë< °0©â"®˜p ÿxþNQÁï¤Él› ÃÇWrÜçÛpãÁ>ùò «0LžœL4J¹[ g±}<+(`ð’\+ ظâãGó4'àϯÊ?æßÅFÿ|Ø'"Rôx¦™¹» æ€C"8y7ÑšîñŒ5“qâÖÏ_aß?Oåéâ°)gòèëSI—9¤ü©¸eâö÷®äÃËf1þ”™Œ½`&c,· ƒî8wúˆtERoZSJ)¥W©5…c‚3€CnXÌ—]ËññMöeÿ©‡pĬ)ø7rS®¿‡“FÏ`Ì–qì9»rÃåûsô¬‡¡f}¾_ó$2`÷å×sæóñ9ÝØ¦nnÙ…|}áÚn>“7íJŸ+Ï3£^‡òüÒfL¨‡ˆ VG§û²Ïå]Ù|Q%}³}Øóœ¾ì5÷3ˆuf@¶{_ÜŸ}g}Ñ)¯/{_¹.[ÎÞ!-ûrïÑosýÞ'®¸u#v_|{ŒM²av¶~u6ã§Îƒ˜ V¶^܇=ÏÙƒg^…[{2ÿyíovŠàxu±Ã¦ Ž?tòZÞù8Oÿw<·l•eqM’ÚE²ÿ‡GpÒ´I¸eImÊ ”Rj•ˆ1úݱFþaDÄcD¤˜¤ÃG‡o¿aÇ6Æc¶).g5l¯mŒñÄ’1“x-éí—\ÖVmzC¦#¸åÁ 4‚=bóHÇJGKëO²e(dò oCù<Ò1›¤é €™°À8à‚ôGÌ .›â4oÙ©{Ê]°ºA¶ _@ÀÓÒ3 ž{4— <¼–›@l- :7Ö'¿4¼©&\{Nó\pr`u‚ügPÞÞö¹¸ ÆiöÁšÉeo@ËÑhw&•Û2Uà-{ ÄŽ;:ÂÚrr4Ú»Zü[6¸¨Þž²Ÿñ:w5f^Ëêúßÿ|mŒñ9ø7b>œnsó¯Dªß/Âãðǹ•³y`ð‰1fXéñ] [©ÕGkx•ZͦÒX6W‚s°ÀØDLi˜sHúSI—}B¦ IÂ/¾>t²øž ð¸öÛd*c¤¼ <:¼CcÅ8ÜJƒãó:þlÒ±9âÅ>l=iL9$| ǼMce8<°ñ@&ÒX'aŠ7‡Ÿ;8~=®3’ÆÚ`ÛR­ó;D|p˜M&Ö6d1VðZ°ži¬pˆÐ^€‘òêqÑ4Æ|{rHø1 »J)¥4ð*õ󰲃&8$}'¨Œ]nzéó0D.Õ7m6ý¥ƒdÒ°LMTŒ”×Þï%Ëh÷¹…c–í·tþä2=(|ײ—|&íoW)¥ÔC{iPJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥”RJ¯RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥Wýœ‰y- ¥V/£E ”RxÕꈻ–)’'jù¸¢%¢ÔË\šÄÇ‹>8¾–ŠRJ--¢E ~¨¬ `<ÞóÉoûoTýŠë>gNyŒ¸‚–R?øá»@Œ„oSá·0S uÆÌΈˆc´ÆW)¥4ðªXøåyʧpÖÓyÖÖLþËt[ドk­ÕòáIÑ÷ÕËõíj‰G°Óáyšúãsiø’že*¥”^õ§]c<±Œ1cm‘ë™óçSXgã :"Y±´ÍZÍÇ Áo=?’o=²Ú^צ1Ë•¥ø@š…ƒÒ,Ú \+"xZJJ)¥WýH¹7 ½gG$2ɵš†×ùMGi±(õã±l¦ãq-p¹1&­Í”RJ¯ú1Ónð%kÂ/Ü€R’ª-Pˆe“ÀhMÝZ(v¼ ™SAN>cÚzôXŽx’@?°ŽK}5Vô†, 
øÿ$ˆßRhù¦­à4ì*¥”^µÚ‚¯ˆØ€oŒY¤%¢Â0Ö&Bp¹Ý³¢6.b‚Îüàå-H›ô-Áo-Ûâÿ›†]¥”ÒÀ«VcèõÂ/bAÛ`êq&¸Ê)æ3‚.WXûÌëQ±nÄÒ¶©íŸcj¹(¥”^õS~£wدÕDÄ7Æø"bþ‡ÝÈ/×k“¥”R«Lo›WJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥”Rj­§ý𪠰çڗ®ú¾ì°Þâ ¶,ó¸Ü®Wò`…#‰Ù":†ÉÚ~X E:ªœRJ¯úA®vò_ÐÒPÿ£â¨{-ÃçÃÐòm·@ÛðÂáHb:š˜Zê„\C¯RJ¯ú_¿H|±mø‡5‹Ž >¾~¿¨UdaäkŒo!ÖŒ1|{u­ŒoüEìÈo_GZ[;÷!Œ…ïcÃûcž1Æ ½J) ¼êû†]+ »ý€‡=Ø | ,#Ú´A}Ÿý Á‹YÆ÷í¶LÛÞ›ƒ—$hÔ°Èøfâmi ¦µZëö?ãSéÇ©QÀ‘ƘoJ®F)¥”^µRaWzHÈ#¶˜![÷âÂ?mǘ:“p´˜Ô÷”-`e|dUö!¨ˆàG,=ÑZ›¹áãssIÞ=šÝ&Ìæ\`„ˆì…Þ_ ”ÒÀ«V‘mŒ)ˆÈnÀЭ{rÉèéŒàj:b#xúÅ¢”ú©ŽNÈjYò‡‡¸cpÊ?šÏÉØÂ5f¬ˆØa;o¥”ÒÀ«V–5Ô¿pÑvŒYrµV‘D\îRê§Õ˜&bÝNõï†ðê™/1¼`ŒeÅ7@*¥”^µœb¨íàØä׫%Så!nãè%e¥ÔO,•Äw Vß*Z,«ZKE)µ2tà Õ½4¨”Òc•RJ¯úEÓ˃J)=V©Ÿð/+"‘x™òƒ¼Gý,i“¥”Rê»i³®ÕZÚÆ'xæû…ÓÒþ™ƒ‘ýÖi"6ËÞäØÞ{JŸ«Ÿ=­áUJ)¥–—_æy\‹d5(†X‘õ9‘Ä Ã­1f…?m˳0Æ[j°c¼¥–·¢÷hØýEÑ^¥”Rjyó j‹Ã üCžE[ r1ˆ™’V|Ý,Â$¢¥8dK{ƒp´-ƒïxÏÒë^vû–_·Õº¬âüÆøË½Õ¶ÑFÄö®Öf·Î+"a¹T•a–±Â¿WøcŽlÝ>‘#c>ˆÌĘ¿…ÛIøží¿iþ…1—.´¥äïb}Çç±–)OE¯RJ)µÆ(“©@Ð1L:[!y§ºº@}ý»Æ¥Ãi'PzKÃå/Í›vCè·-#Xï·-§í²ÿŠÂZéô`YÅ@ÚÞú—êmë/ÝF/œ– ,߬!‹‘j`ßpZ#Áø$&X÷ˆì–û:ÚDB¯RêÀõš»åªõªµæ»ñ1Új.c‹-(ˆä­ÿµ‡ˆâåtèLÎAdðà`" °ñ*"I‚GƒÈÿ_…¯f"ru¢íp=S€çIˆ<†HYØ‚…ÈÙÀt‚ZÎÇiˆÜOÂmúGø{1„ —; |~ð<"׆Û|N¸þ!ÀÀÇÀÃÀLj¼€Hmê "[Ô¨¿†ÎQÀ 0Ю(§TÑðdäKŒÉ`L3Æ4`LKÌÔ×÷†'1³Âù7 ›F ÿÆ÷aÌŒyNÛ!\ßáç= ‘Ãòzø‘³Ã0+ˆT"ò,0 x–çÍa™‹ö¡W)õ?H»Xu9ìÒŸFwõü;±Ÿ¦?T'†‡ýÝwÏ»>R׌]—!R׌Ý–fµÅ›š¦ÁåõA §M»ššr+òÃu‹æà .½÷NŽ^.úGÛû…5Šƒ€ï}ß³Ú¡ÖBÖÿf†ËÝ888<¬ Þ ¸¸ ØØø+AMèVÅxÎòM –ý¿k$¨ßØ8/¼áì  A-l_`o`[àŽðsØmq[€õ «£Ã0»Ü©BøX–Û"àD&!ò2"—#Ò)  ë„۹ȄS M°ªÂå8@3° ¤0#\WçeÖûûpÛÎÞ ç½‘õÂò¼0üŒï»‡‡ûTà¸p»lý[ý´ ¯R¿>’ìI&ÙeøD1ä¦e*I|„i´<×GfÎ'Þ³ wuÈ7m>ñQ¼šJ x+>ât"_3˜tk¹,Æ¢…_olÂN%ÑJTI”3&l§ÙŒÈÅÀÃaâ35ð‡³-âÚ ÍÍ‘¨ëŠí4R]¥°kO`ÌSA´’ûÃú6Æ<N{ ø;0x$ ÈÆìY²¬Ù0 ¾÷†!n[à]Œ)ö:ñ0"w»÷‰pz cšÂßoDd ŸVòsÄÓ0æýp›÷&h"0´uZP üÀˆTÔp÷ŽÀ˜YˆD0æDžcŃ‹T‡åÖ; “„}7`?D¶ç-½Y®ØÎ؄Ӫjz!¨MΗ¼'N/ð+í0f&"ÿ"¨Í.~|žX€9À›S@ä…pÛ¾ ÷¯‚þ“iàUJ­¢tIö%7üo }k6CŒGÔ˶( ëÊû÷ÂûnÛ±0i)„5¿ “t‚°çúˆë!ŽÉd×F~²$Ħ}¤à!¸€¤Ö%sÓSô»øUn¾j7N9zOfä¾&QáPHÆ‚ùÒ.–cc\)øHqi«ñ”Ó8‹Ó] ±‚íkÌa׬Oó¦—óЀN<;þ!î©{ê„…ÁÇ×í†a÷ª'é;âBöõ Ñx„æŽIæêÊô ·gRÍæ4§? 
<éhèUK…^/ ½#Ù8Ä×€} Xƒ/vOƒÈ‚ÚÚ$™LŽLÆß÷ÿ‡3¾OJš9XaX_2 ”…¿w%¨‰~,ü.—0´môD¤< ìï' ² PÎ[F[-ã[M ÎAdgààIŒù¸d½Î·äyÓ˜!Gpù>¸A+¨™6íeÓávºá¶GÂ×ËÂ÷Ì?«ˆOÐ â0–¯E/®ïk‚ðà6‚&ÔŽö ÿ·ms!ü1ËL/êöæù< »‚æ$_…ë+çG z˜8Ø‘7 j{Á˜¹ú¥W)õ=5ùØÉ­h~þ4vŸÙÈ!ÊeAÔó‰<_þ’?½˜s£.–ëb%»’c0éðÐOÏ$‘´1N®S‰G޳cøEñ wQ'†I7`%»‘eCr¸(Ïiœ‚OE§ 2UÓÀ'¤™AÜý†˜ca’ëÓÂB¢NGòt#ÏLb_P–Úfúg:QwIǸ9$Ù ƒÈÅ0™˜;¤8I Î:ä©Ç©@#¬œGÇL$5øM¬ša4‘¾"…it±jú’ya C¦Öñ»Òr{ó+¸g"ÓvíÏmÿ9‡×ÒS(ÓЫ–ᇗÁ†³a®Öo!µ;lôE‹>ù)@&sð.©ßáÍã÷fff‰äæ4Ý7‚>ÏÞÁ†–àýugÞÝh\ˆaqÊ¡°×:¿‘ÎGlÂèÓ÷c‹‰ ¿Í~ÝٯϠË[_±ÉN½ùðŠË™xúyl>f:› îÊg÷Ã8wq§é[e½‡ÿʶ¹É×cüÍGò! Ø£>¥ÃÓÓc›^Ì¿õ:¶¿ûžŽYKð›éµM¾“A®GäæCøØ­k°\$!+‚×¥œ—o?œkÞšD—?cÉuó̧Ü|Ü-{Ï©ŒoœBYJC¯j WÅZÂZÞ‘ÀöVЄAª!r:t=ºÌwf.—=¾_VæGZZò?Âuk‹ ­l1`å1fßGi9†à’ÿŒ™T2ý>‚Ëövxy½<<ƒHAÞÓQs;"Maø·€"ÚÚ¶VµÞi ñAø;cæ¯`û„¿%šacòˆô^Áû‹7ì !h>P‡1¯¶ ÛNr´µÕítÀ˜9ˆÔ´.¶í]ï Æ|®§¸œ¯– ¼U%=L,?\Pž ²AûIÀÎáÉÄ“ Ç ¼J©ÿõ˰øàe $,!³I/Óüa—³ÓcrS‡c‘>÷UN}ùK®xãKî¾»/{žÁY÷Nà]ßÂkJ3ð”ç9*çÿŽÎW;Ëc¦sf‡rÆgsôyi§ÞÚÂï;$Èú>‰?<Íùò™<=Î~•ã<‹ÃÏ:…©÷]Ä•¾Ï’¨Mº`wçpÊòøÜF†Flßÿ†Së2œñŸÇybï9øùÉ\_[Æü[Þᄱ³¸ñÝO¸õžíØæ±Ï¸qä§¼›óHÍšÅ(cðŒ!Z±3‹÷ÎQÏNæú6æd:àºõ­5Oˆ…ƒíX´psKÃ/3n`O Ÿ-à¸ç'ñ[⼃[R_¥T`ü0ôÎEdw7èà,⃱AºA´D{@t|,æÑÒ’_É5øaàìÒè‚0Vl[ª@[Íç¿.™ïŒ1£Já¥ÀTŒy˜ ë®âMZÅ×ϧػAÐlã@‚n¸NÁ? ÷—´îε(Þ”• —sK÷“[ü¥^ èâTà¼e‚xw‚²œλO\[ÂÞ!ö‡ðJÔÒìp=ôрȱ½<œA[S„©7žM'¸™îRD®·¥‚ §†0&‡È[À~Àùˆ,"h1(Ü®×KN0 Ëü8A›Þ¶ry€ ‰Ã¹óð"]€­Õý‰¿ •R?wMX‹\KŽžtå‚uŽãÊš$¿7—ãݘ³6ÛŽú–éÄŸþ”sûtà‰º4;Ö5³Ç.ëqÉgpúüÛ©©IÒàúTmÚ74rä+Gq¤ëÓñß1ô‹ñ”½ù%—mÞ»4ðûº‹8"bÑð¯1l5°3Kr]zWñîü&Žj¼ŠÃ1¸Ndo¶§Áw=ª>¾ˆ?.y›ÃbæM«ç€·NãØú4WÅyoì,~Co¼æ,©aݸia3.hb÷!]¹õƒoø=_IÅi1†HŸjÆ6½Çþ{ìÊ<ÉTŒEÞ-ŸÌÕ{lÀY?À‹™/Úož` ’ŸFtÚt’‹&µwÞLCž>L'žˆ£ýdªvcox3U®&¸%í>%H;­×§ íHöÝßÃÚÚ×› e¬T-m7™=üx‘k9‘ѽ:ÄÂ÷¼Ä—9‘—M7€CNpÉþ$`""æÿÙ»ïè8ª³ãßgfgWZ­V¶$70Û`SL±0%”Ð;„N˜ÞR!€)! 
½…j(ÐC¦›nšmllÜ{•lµm³3÷ýcf¥•l¹ÐÜžÏ9:ÒîN½»ÚùÍ;÷Š\HÐÍ™^—ó$Б§Ãõ¼V´Ý‰pšD ýôÚ3¸¸‘'¹‘'€Ô<û3x¸‘‡¹2ܯZ‚fÎ2'ÁIÁÒ‡\ IDATí]Ut¥6† ÷€0æƒ0 _îËí|Ï §ùƤÂe %Yo_‚.án ËíÒp·%BK[ÝBàíÙ¦ ºl{‘1aê{Ârù»f¯5Gkx•ZŸø eQš<¡®Ügò;S8âõ·›©#–ó¨N{TöíÂÍ–!šËSæ’ÿF¯à…{û¼¿øOl6¸/õ¶P·4KùoÓÃ78çïć‹nf£R3§’$í£l±¨;{#ÿ…îÕ[Ó”,aÒÂ=)%ïb=;ðêF»ÒØø Ñò(“›qÛÈâÅÃ)+1si†ÍùŠÒ7ðÔEw2h«ÎÜØ§´!öEŽ1””ÙÁHJç ä÷]N _,RÓjÙî³·9w—M¸ùÕwxnÁtë’XnwF&Á”DðñH$¸‹ÛÄu± £•¼ªõÿT8ämp·ý ƒn¿Œ& ¤p‡Y ü<VíÞµÂ4M}þ,~u› ä7ªÀM:øn48܃¤ý F"ïaÅK0 éÞ‹4I§$âõÝ+ÈT{H¶>à[Í(ìd”šl +Q‚/@Âaa¶»:˜Ö üõ½æ–iK9v“Jž·Ë™séÔ”¥{£…0"˜Ž¥äÁC"Bnn#?YÔÏo¢ÿ£$²â6¸R[åXHÇC©ÿøú~i„yN2zãšj'ìza»×_žv¡‹ƒO¡á?°äChšî|ÈS_Ôô®¨ŸÞB-°1µ}ã¿–)ª­,L›În3]š ½íŸÚl·U #èç¶ØD‚ mb¿"è/vÙýo¹í¶ð§ØÉEÛòAMmñ6†þð¯vË h;4üioù¦ÕßÁr—·†?Ëî°~ c^!xcyÓþKp“^Ûiòá:ƒkÛ÷éâå”Ù0`X;å©W‘Ö­VWjýá7ŸÈ.Âá+JvøóJæÖeé¾ûA,°³Ïæ<7k)¿œ¹€‹¦Oã7œÁ}ƒe¡oˆˆàcpü2‡¦Ã¶gŽ€ûíR’eûSßýPj¼ŽÃO8•Ÿu*'åƒÓÅÁÁGyƒÁ×±©õÁŽ9-]ÿX±Hðذm\Þ¤búRŽÞ¹;·O_ÌyÓfpmeœi"‰]›Ûò™¦<66†8^Þ#±U'º÷dŽšVË1{^Ä)UDZ¨¾±uÇî„߆låéÔw©&sæìõñLά]»óøyˆBµ=Fa÷aØu £L…ìÑ0i—ÒÒé7UWg?N$üѨɊ°Ú¡FÄnÓýØÊŸkb8Òü\ðºÝä ý £©¦•ðqaZÓü¸ízŠ÷#xN»ÕïB8,^Fëpj·™Ç^&ö£íòWt°ì|‘VûLç·ZgÛ2j™Æ*Z¯µÌ4Åe¶ì{ÒRÅûÛv »k”Öð*µ¾¤]+ëÒéÀÙ7‘¡¼É'6z6û¦rô<°/w²'•3ü?ã¹æÔ]XÚ±+5O¿Å%¾¡tá%œäzD|Ÿx«eJS.å}odfõßùø®O¸zÑï¹tæ7ô>‰¿ü¼7WtÚ”¹9Α¤Œ…DâKc[dóQì 6ׇXÞ'‚ƒÁÁ¸ŽcÑ@)ž%d¦Ô2è‚ôÅGì3½†ó>®8A‘Ælß´„Ò¼!fÚ#Œÿ×\ýöT®={“Ïê¿ ÷ÅÁdóDkN#ûuŠóDcžM3.›l”ä•§.äõÔ(Ê’1½©D…” ìî \x>ØX/§@mSuµg-^ìÚé´ëŸÁ'–w‰{Užk©ù]Õy Óš6ÓøÍÁsÕ–ãµùM«p¸jó¶¿Ë[~{¡·õ|ù•®¿½æ­·ß¬äõU{ŸVTžj œ½*¥Öí³V+è¯vã$Sl¡áñ\ñÊ$~óáT~é"¿Ø–óž»wR÷Òé£óøsÏŽ ûï—\qÿ0îËû$~½'·2‘hu‚¥¥|åƒ79èPÂèÎq1ŒòÇŽåšd”©ÿz…{GLåÂsÇð‡x¶¦hÇR¾è'ƒ‡PC¤²„)‰(5Ì!Ö¡”ÉUqæSC„ Ve S«Ê˜O-jˆT–2£Ô¡†JòÇôãªF—Íÿù<ϬcÀ•‡rVu)ýësNî#WãëŠ(.B«C)_W'X˜{NoÝÄ3½:òÔ‹ã9uÆ ¢¥…ÑÓj‰tŠS›ˆ2¡$ÂâF—^!Û­œ×öéÍsnäÑlÇÖÚ]Õ:¶„åz@LÐLÇj‚OŠD–49N"ºxq¸áMkë|õ1Fk!µ<×ÛsX}/ÖÖÊcŒ‘`,ôiݳLcÅ#Þ|÷2iŒÙ½°œU\§mŒñDì;cÿÌ ±ÏfÈFèÒweQ‚‡ Ï6¤™‰ãN Ìˆá±©ú÷HN®£d‡=YLæ#ŒRÖjˆ^;|ß],âäéGf›Tn^IÎÙžw4 §0]›¡}SY$žXÎ2ðSÚö±°iæ5›²öc1µØxˆ»€èr×Sô8å#ñîdL ,ÔƗ3ܰƒO4 Â=Èâ`Ü/)w‚ÓýŒ¯å\q*q_þ„ª#ŸâmÏX4Æû«ˆDÌ9\kKíî>À[@Þ­oþö:#ÿ8’JI>h ]Áwraô³€±Æ˜ÅßïúŽ*õVi(µq€LÐ~Õm –žI)&ÖvºYìô{$“åx;TÒGÂñ1N †ÐÖ±-Œ›"’Eù–=H“AêGQÑÜÀ[¶v4žhg™îrNÑÂuÔO!žtðwêICêË ›ÇÆ8NЫÂ2ë)z·0©Ù”t©"‡¿œi 
Á=V¹þ3¢Øˆ6cP+ðKZúÄ7aÌÇY­é–¹¤º*³;æÇlýßÞ¶­òö9߯l”FÞðó×—`ø\ c†O0fÉ6"Ç¢Í_”Z×èMkJ)¥TëÀ Á(gam®ÿ!" ôæF¥4ð*¥”Rë‡Ò6—è÷JiàUJ)¥Ö'F—JiàUJ)¥”RJ¯RJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥”RJ)¥4ð*¥”RJ) ¼J)¥”RJiàUJ)¥Ö„¼RJ¯úˆèà™J©µ‘‡”DZ¾«´@”R«"¢E Šƒnø{I.Otì"⺒±\,|-¥ÔšTŸÂ²ºá[H¹gð-X¢¥¢”ÒÀ«V—°mÿ3Ï#rÓûì}ècÜÇÍTãhá(¥Ö ª*€³XòèUìX6þ—Åß]J)¥W­ _D¬Þ½ycÊd>ùpWÚùíž¼Õ;ÑOŒúr¬¬‡e[«N<‰;xvTÍ-´Û[Gì®øå× 9æÂ(±Œ1ž’RJ¯Z%Æ#"2i’ÉŠÈ/€Ç?›Å'<ÎZ:êû²!½Êß;ƒàCÂ׸«Z{8 fWÛñ*¥4ðªÕ½¾ˆˆ1f–ˆìü ËÚŸÎzpQ߉……ï †íA– ´ZÀ|SŽÈ[ ÓÀ°4ún _M€`1;bù#\×¼~WéýJ) ¼ê;…^^&ô·Ã¥¾ù°˜‰˜œ<„à v9°MĘ¿¸ÆWKPµù<‰1FO€”RxÕ÷ ½¾“Yh­®úþß3…+@”_-(|î@òÐQD"€ h[MeŒ1ž†]¥”^õƒU4`¨ïKD0ÆäE¤péÙ°ò»ê‹§ñÂùÞœ¤”Rê»Ð'”RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥”RJiàUJ)¥”RJ¯RJ)¥”Úài?¼J­ãÂBÖæÁA,ù>˜H8¿%"kó`FCPJ) ¼J©6èÚ€†¬µ9håÂíÍ‚!«7ðD.æÚ_Þ+L¾¾~B•RJ¯Rêû+)Œ:&"I ”H¯…[[!neƒmÄCÄaEC ˆ D0ÆÃ¶»I<¾1iã@f-i­T j‘1&]¾z•RJ¯R껇]Ëã‹È>¶ðW[ØÎ38kgØÈ`¥‹ªg°òyLÑÀÂÞƒ’N³v·Hc[³³"òÑèeƘqáI‰6qPJ) ¼J©Õ »¶1Æ‘〧=+¶”dçç)Ûdø²¶Þ‡êãâ#–Y½– c™µvßü`ý¼xs¶¤aÞ¡âæö‘ÝŒ1cµ¦W)¥4ð*¥V/ì à‹HGàDSòßSøå> ˜E„ØZ¼ñ>ÁFðV£²Ö6¬õoNèGŽó¯{ØŒøÓbò÷‰È`@kx•RJ¯Rj5XaíîN@%}¾ŠC÷YÄ#3ºPÖÑÓâYÃF,ŠðøÐ/ØæÉ§L̈́Ӊǻ˜¦¦¹ZË«”Rx•R«xz žÉ¤\)åò8q Tkš‡áýÆ8UýÆR3ÁJuæ²vw§”RÌT)µniéw7ÕKæk?ap"n«÷H)¥”^¥”Z¯£AW)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥”RJiàUJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥”^¥ÔZËw¥Õëþ4ëÍÖÛ¸©Ÿþ»ÆO 5öw.'¥”Rx•R똘ãwòÍ?ŽÙ~Äàë# LÖÑ7Þ„ÿÄà#TųVU»Úó&„£C5+¥”^¥Ô:Åw…¹ãâÌ`Ö¸2fŒNÇg—ª¥ØŽYîô+[Þª¬³9þðó#¹ý´Át'ƒ[¿‚ïwÙÇí­Çw¥Ý î¦,ªÈòâu[rÒÎg­V9Ù¦.cÚè2lt;¥”ZOD´”Zσn¹ã2öíjž9æøùˆËJÓ±×› yõAœ®>7mQRjˆ;yêR¼¼OúdS‚…@âÉ>xÕ×l¾è斕⻂ç NÜoþÛÍC<¹ìcŸL½nCª>ؾ 'O6©zk™íóÓ€xŠÛßÜ–¹£.Åæ)256vÒsÌ2Y9Uo1¨¤‘kNÚ—IÃ.½»‘kŸyƒÿÕ'š×¡”Rj¤5¼J­ïÀÍ nS6Ùåö9ý|v»àª·|‰…_ŸÍcÇÉ §‘tm„ÉFz8ifŒO°e<Å€d#YWp€MâYúÆ›Ø!YÏ¢é1:89ºÆ³øm^%Ù!YOW'ËÉz*’y2XØÑâf²_§:–L‹ÒÁÉÑ#žÆÅ¢ÜqéO±[²Ž8>Â×vNÖa7eÓ7ÙÔ¼Ž“ T9Y¾ù¤‚r:€òîcY<ñ00†­Žº¥S7bΘ“ð²%ôÜë>~ûÚËd‹+ö>š9ŸñÊÀøtøo~ÿþ³,!•=/'¹Ñ4Œ=‚êþ¯’Øh.ÆwHÕD8©j‡î}5sÊ¿‡R½{Š:,'¸¹­k2Ë /mBªv7ʺ¼…IÑ8÷`ž}fS7E©(V\›8(¥ÔºzÔ"Pj²pBWNºk îùó–lò¿F"¶8ø5 O_º ^º•d÷O9ö¼3¨Üì&¾|÷ÿakú”§h˜»Óßý=vΠ.ÃMuã£;˹N 7~“^û3ݾÈ.Gþ•tm;ø>âx`¥Y2õª·ËC.áó‡®bG²¤j:Ò0ï f`ǽ®ÆIÌæ«ýE“öfÐC‰WcÊC™»8ÆG/taúûCIvÉé¿?•ê-_aÖWñüõ[З ©™=òª¶ÎÇ Çm,,Ž«ªáÅÌ÷®fÓ/±ÓÞK¨s#8aÕ­›¶èO†Q· Æø:oý>½ö{ clFݱ'}ÈJëw¥RJiàUJ­ÕÜœ â2ùµ xîÊ»>ô 
f¼ŒD—OÙãœ)ÄñûÜ8ñ)\>þot:·‰k¿º§tcŸ;‚¢MˆX$»¿ËØçÿÃ[~DuÿgÉ.ÙšùØÌøðâÞcÒëpîã8òÉ!l¶÷CŒ¡üJ«F1úå'yìŽ/Ùt÷ÇpS=¸N8%Äòøù]·óñ‹#èsàsßaÐEwóáK#ØþW‚À'tg«j8úѸhü=xƒ-v¹t–]Ç´×põˆíQRùsÿw—ÿf*žEìÎ;sw¾}ó*¶:ê{ùe&¤8ñ¢6¹¥>3‰2ÿ‹Ã»‘Ó߯¿‹©eáׇ1“(N©¶áUJ©u˜6iPjCàD Æ8ô>ðn~vÒHæ,îÀÒQe|üä_¸}—{ùEíÑ䛪ñ²Ý¸¼üYŒç€äñÜrð-f<Ê7žÀÓ5IˆA$šFDx‡r¼\œ².£yuq’³• 8|1§þYÀøå}˰Åqæ–Æ(íZƒX. ®ƒï:ˆ¤ÙâðZmì ž`ÇÒëÀÜßÔ+ê9<צW4w2„;+R1Æ/'o ¥¸ˆ#š˜Ç°EqæUD‰Ær¯‚QGÖ0²ÞãÆ!_ÑÉßùæù;øç‚®Éâ$&rÜ_ÍìT%Q—w[Še _RŽXˆ!Rep0caŒÇn4bEšpÓelRǬíåqØu«Z| ‚O‡jŸ|0Báê’m Ë1Í}ߊJ>NYø‡ÊM\Ä,þú¶9å,»âKÒ"ܾñÛXDhjŒîc'Cb0¾ÃfƒþÄ̯Îçàí~Ã+_ÝÆÓ5(­ Bo6-lÏ2þßûm vÊñ ‚XŒoŸÝ‡nC¿â›tN\?EJ)µÒ& Jm8y׫Èp$)6N¦¸á“®Ì¹?–a›Ž TöýÙ†þt; Í;¿ý˜.œÌ+gɬO;R…‹1)ê›ÖƒØ†-qé°Ùg4ÍÌÇ”pw©Ü³û±<0ðišr6¶å“K•ãžð±›¨r²X± "-µ¦– Æ´<6¾€dè²C Æî„]2“1½Âõ[Ìç…“÷Åxq"¥)ÊÊ\@À‹%¼^6‚ibÚ¨ûÙòÈëX8ú|ÎþÕžìYUK¶>-‘ô™J°dê¾€ÏΗ˥“öà’I{²ïM§#â²dÊ~|™JHz?ê J)¥4ð*¥¾ϵáÓ»þLIâE.íð8C÷y‘¦…»Ðkÿk(Áç˜{†QÚá nÝê?Tv½ƒ3ËÿË¢qÇÒ±C#9×Âx¥x^„Á`ò~>Æ¿HrÄmO`Eê¹°â1÷“ßRÝçN‹Öá¹eäs b;ø®ØtÁŸŒT›Æ/ka¼8&/D“³0&ÊÒ1ú0 /»ñNO“èò‹ÇïHI‡Ï™ñÁ{×ö`ùøù ðÏÆx¥\² ;O>ý2{?ΗÝÍC÷ô¤Ki–tM„­œÏŸ¿3nª7%_rîõ_ïåSÑ3Ïù—|CiåÿpÓ½xþüÙÊIéÍkJ)µnÒ& J­×§´Ž!ƒMçmMßCGfi9†‚K‡Íç0èÜoÙmÐbÆ» ºõÏqyÍùƒW~¹? 
7¥ÿžç2äžQÜE%ýº‹D·FSJa«ã&àeo`qúŸ0‰]À8blqìd2™k‰uÊ3ƒ(}ŸÇ¼1×ÎûûHü4&½ò3âçpÁˆøêõŽ|xã`ÌÆ¶:þoر‘ ƒÐÿ„ψvøÉ.y^Huä¼ñwòä!ãYR_íÌ'âøÌÇ¡²÷"zî3”aAÎ!“±!“vùÝ-Lys+*{/b>%¥Ú5™RJ­ƒÄýþ^+ß1Æ©¦©ð·+òÔ#1»–£%º^|&cŒ+"—7pÐ݇rèùÓ™‘*]éÍT6†­I‘ÄÃÁà"Ì'Â,b4Ô;8á(g–c؆F¶!Ë "Œ¢œ¬kc9†­H1 ‡¥DèDŽÎäø†2|Wˆ9»ÐÈ&¸|Bœ)nØœY,fS‚…¡-È0š2ÊÈÓ›,£)ÃGˆ“§'¾%Ž‹E -I1…8un„^NŠ=H1›¨ mibq*Éã!̦€J\z-Û° L$F 1, >BY¶ ËTbÌŸð †FîC¦Õ<íIÕ[ì˜làúc÷cüsw»c>Ûãé'xùG³1ÆCäàY Ä|øµeÌ»ŠT~“ÂïáI\‹hoi@ØkŒXüý®…­ÔOGkx•Úx£ê“­ž³"Û1ÍÃöZŽ>M'åZX¯YNp`þ_}‚’RÓüxQ*ʬ|Œx8MÖµy+Ýß"¥~sÿ¶¾ "4?N¹F¥“Ä’ ®Ã¨tŒXÒÃÂr#|ž.–‰!åZ|–® Vêá8>SSq¾mH`•bI”áƒtGœRŸ¥nÄ ë©u-ÞKwÀ)mÙ' Cã½úR¬RÓêÄÁÂ07enº¤U9(¥”ÒÀ«”ZKÅ’«PËè@ÜñekŒãÉÖÏYqC¼ø&6Çs–]‡Óv¾¢éÚÎc9†xQ°t èu'î·¥m–ÕÞz û¶¼í³C¬jùe‹ˆkí¬RJ­ãô ¥”RJ)¥W)¥”RJ) ¼J)¥”RJiàUJ)¥”RJ¯RJ)¥”Rx•RJ)¥”ÒÀ«”RJ)¥4ðj(¥”RJ) ¼J)¥”RJiàUJ)¥”RJ¯RJ)¥”Rx•R+ü·-ƒµ˜ˆÑBX?ßY-¥4ð*¥~2~0_Èçô ¼Öp!ÛnÎ àk™¬oo°Rj]Ñ"PjÝIºáï1€0㽞ìpáx&ÌKPšð!¯%´&¿J3‹Ã«Rüî›í jg‡/jïzp|4P£E¢”^¥ÔOxED€Q‹Íä×.æïÏ|ů›MÁÇP²o}î{\ޮš1î[5>‡üfW–L9×,Ë£5½ë!ø?ë·œwZ)¥W)õc2Ʊ1KE"CÈ¥^à©S^çÕK‡/ÁAÖâVJâ|1X²êÁ×7Œ kq»eãƒe é¥Ýi˜ AMà¹Æ\í‹\£ÍÆÖÁKŒ1ˆì>v€œ“m`ž6UQJ¯RêG½^Xkø_,&7ÔªŸyˆç¯Õu»[Xd,žð}®4ÆLÑÚÝuŒ„gŒ"½€Ý š£XÀœÉ0±?P§MT”ÒÀ«”úIB¯†ÞýE¤âåÄÔZ¸Á¥&BZò¹̯±ah¯ÖV< q ¶5/ö¥Æ&-ÞZ·a¹{f£ãOÊyI›2¬£,ŒÉ#rEøÎf0l+c\,ÑÀ«”^¥ÔOzÿ€†µ}›E¤.üÎñ. 
[í^ƒNÁóÓ4w]x_DÄÞ »ëc\DöN ?§‚ÞÔRJ¯Rj …Þ0d kw¡‚.$œæ@„Ýökx[¿ ý†µø-1ž~2×¹ kÔ캈ôž ?³9 <Œ1cÞ‰`Œv…¢”^¥ÔšJY¬Å—YEÄk¤Í÷ØE_ÛĪêž<ÿ7ÁgÊGä`à :"yVv#MjNĘـ·.÷Î`Œî¯!hv£ÔGkx•RJ­ÿD$ |‡½€ÿv…†–¦7Yà ŒA¡)ƒˆ¬Ãû¼=Ø‘ã!b¡WK”^¥”Rj½c´ß¸§èyCKo Áy'cÌ{¬ûív%Ü¿›}ùpi¸ozÕ÷ ”RJm(šz^ÈÐÒ=^Œ S¿;Ã@h¯Óa7¨Ý \G½ IDATõ‰›…û:x‘Ã×´yƒÚ`h ¯RJ© I† &×ÒÀ$`8ð(ÆŒâͺßãF¡vw+‚Zm ï"²#° ¹ÙƒRx•RJ©u^áòýtà€0ìÎŘÉ-Ql‚ž Ö‡îå !v!p-pÐ) ½aÌ?Ã}ÖîôÔzO›4(¥”Zÿj1i˜×1æýæ°+ Û´zëMmgËþÎØk€s‹B°b¾~8Ô†@kx•R«EZú1…Uë›ÔûàµÚ`W4ðDñ`V8‚™½.Þ(¯Ö.{ì~ç‚ù̈B[Þ·i•ðsl/ç³ÊJ>Ë…ùí6ËZé¿à¯þ§ƒu 0’ ýr"|¥Ójü+¥W)µA…]+<ø®ÎÍ<^8oÁeå-7 µwÏCóðÂõáfzÙU­ –ˆˆ~†s+ Nã¹ï:ŸˆØßc?vEfÓº/aýRx•Rª½°+"eÀÀ6–EGd¥5D–oÈ"ìŽÁ‹þ+9<.Ó1>–ØüÁŽÈQ¾!b‰ÖF©5÷/€ÁGˆ!t’à™(«ÝS·à³ìŸ±èoGäÞ¢çW¸.ß³`’ïóš1fRÑÉæwe–Æ•ÒÀ«”RË »ExÒz[DŒå;u`lVrYÔIÌ ñeåß;&¨–‰'Ûü"z|Vkœ06Ö,cÂf:fųÔüV–3 Á2^4O®T,n‘ 1ÿøB¯Rx•Rª°+€‘Jà?–q*û²ß%;sÖUìžñA¬òmLž´•'cYDV©–ÖFÄÅ¥”rßñ ËÑwD­I6HKlp°XµÜ|òb1QʽUY¾&ÃÌÈ×<Ð}¤ÿÔåY–Ü#"1ï|Ïæ JiàUJ©öŽÁƘ¼ˆ lÒcÏ{Ÿ'^¹ª4„ª Q‚*÷ÇÕ.UÞï¹gôÛœsÞ |Í÷àôÿM) ¼J©…°lö/š:Ž'FÞJ]gGKG©ÉpVŶ‹¤ë§ Ö¬A"5ÆäDDŒ¡”^¥Ô’z»YHÎ&çYTX1-¥~T¥”ø)²–CÙ\J:RYBKÏJ©Õ O(¥VU(Q- ¥~"1cð-c0–Öê*¥W)õ#Ón”Z#ÿxz¨VJ¯RJ)¥”Rx•RJ)¥”^¥”RJ)¥4ð*¥”RJ)¥W)¥”RJ) ¼J)¥~8>®lHëUê' ³®ÖQ:ð„RJý@RÔ[v«è`1qÿÇ\¯ ÆÁñÝ5P‰ÃñVg½.)+KZZæ/ýÑËG©ĪŽnc«ùœPGÅÓÀ«”Rë“$77>@ XÓ d©’ñå gƒi¤ÆÉÓh•Óã'…ËÇ•ùL)M°inUö/E½Õ›dº'ñL9˜¹‚°p4¨µ“ˆÄcò+˜ÎÆð´Ð4ð*¥ÔzÅÇ•8NþOvpÓ7Ÿ¼D)_º‡}zWLœ %j9}Òâá˜JM!èeI‰ƒC0MÚp(õ‹ƒ KÊ ½ Ö{ Uµ'rÀyK™ºÛ_™7d %N«ù\ê-›R㑤ðZ¡F:x/ZG½4Uppˆ4¿æ“€´Uû”a]_áŒ'ó‡óNäÚ¯§’ŠC»hŸZ¸ "Ùx7ôÿ†§öNSßÉ¡¬f3öy2w}Uv×ÖЫֺ kŒôÞ~<ŠHd™à[»"UÀàðÙ÷0f "¢5½x•Rjæâ²1Ž;wÏñqËJ)mˆ8õÌî:—ÿýzoÿå/¼óÔÔ—;Dü¾Ä›¶ ž­kÄ—âF¿'ñLØeào“^Ö×P–Ûã’²¶&ÞTnD>cA2 Æ%ðñJw#ÚØ‡ª Àçá|1ðz’L/€Ø8M1`Äëp#»‘¬‹ƒ?â‹p£Žñq¥?ÉÆÍ!×Öx(¾¶1ñ¬ ”7»@ãǤ6öÈUºd">Ÿ¼ìD²~D—B´ð}RÒ•xö"úÿz!ß ).»Å|}îDþûò¥|x­¡Ú„0ô«5òм†¹¶¯µ¾d/b7Ÿ¯KXãI›iLórÛNÓºI@aš§ ¶¡½åÛÓ² Ó¼Ap-ÞF³’ýæ þ޲vʬP>‡6_Ùø4ܯyÛ[oÏò›@´.Kš÷§0ŸÒÀ«”R?µðÁ§’>ÿYÈØ«†®»Bz{:=6‹OOJÁÓ`ü®Ä³÷rÉÀé¼·s‚ª»rÉû;sHM#Džâº~è_—bNô¯<±g”DÃaÜóV¶Lepí^ÄSã×;.bÒFIº,9…‡>­|±(YüwtygŽŒIÄmolÎÀ†L,[·ß™ó&þ“köq©K æò÷³ÿ‚Ë8ì°Fæm´§¿ûsΛ±7Ú'sgî6—Q£”ÖmÇ÷~Æ™³²yzG)õj™Ôñf¾í:€ÓGƒø>¾IÐHÉÜ5œ¼Wö›5˜Óf7à:.ik ɺ߳÷ñ ùfˆMÉŒ8ñšmùÅÔ‰<Ù}Ï]ÑÀ¬CïåãÿÁ'DMEŒ*½üÄÚv_i/PµÔv²ÂÐW´¤¢çMÑò—¦õú 
ËkPXN{ÛP¼Ü/Ã[Ià_^˜Î…Ûêµ[."wõ@.ÌXþJËvyÛºüíô–[žJ¯RJýÄ,ÔÂHL|È—E(›Óò=©È\IßßÕ0éÔ•o.eÊ^Or̹ó¸òÜ[¸ò«ë¹÷t—†žes?ßÄâ½b¯=oaþÅYÿRz M1—2J¾˜Aãà«yí‹L=Û"šÍS¿Ù£ÜtEÆm¤q—GÙïXrÊ[¼^ý%Ü?žgG[H&Ofãg8æØ—è0!Ký¦¹Ê7¸äÔ *q ÇÏ:‘Mo¨cöáI:¼¼oW8ç¼Z&yŽ›?¾‰û/odþ.`7–Òá[ƒ?Œø¸ÖI$–v`³¿5²`·~ìu¢‡+^Ø\b:M¥óøüXAÌ–~Ë‹<ôÎ? ÓÉì?êfv¾l¯nÙ‡½&Œ‡¸CRk¯~š€[\ÛØ:PŠü ˜| \l<Ž1w"²/pž® Ä¡ï$à4 '0¸c†…ë2áòOÎ*:à^Œy ¨ øp °)0xc†‡ÛyÆä±‰vL÷þ\ Œ¾î@ä~àzà9`ðGàüæZW‘›€Çp_¸¾—9;\—Aä†0~ ܆Ùû¹ª(À=¾6¸(^CäüðõÁÀñáó6p °sø¸ð à_áߟ."‡ïÛŸ·!òx¸~Nó"àw…eøXû%lL˜§öǘßÔðFÚ9áØ 88°èµÓÃ}Ú( ÁG¯‡…¡Ü.Æ!²O8­ïU§5¼J)õ& AD×&žrðg¹¤6ÎNžÁ;[ Vnþ_5ýö´ñ|ܲ,u›~ qðb1*¾8™gßÀ¢M7áÿ&Oe8‹S¶€/·‰’˜x<¿œ~szÎÍããæ£w¡¾á †vŒQ1öΞt%‹{õ刟q_Ý\>éÖ…KÀÈôʳ4TTÒ+eáÔö`ÏaÏ’Mtc`Ú!6/KMåFàþ‘I·ü¨dË¡ÁÊ‘®¶ˆÕ¾`çlJæË•_?GSG1VÓ{ =«ž9ÊýÇÄs¿¦¾Æc¹¤*ó¤Ë3äºÇ©ú¦3Û>:'¬€ˆPZ YJéäy­­H®–Z;B‰ß@Êéé$s>®%8õd}‡j?BÒòBΈQ‘õ)·<"F°ret©Ï³mブ_ÖH<9ƒ÷.p°–”‘˜na¥_hÀÂò#DÒr–O™‰`0ÑF¬ü7¼Ø¯|·yƒS‡Ù–íêJ¼ÄÞì2'µ”Nn¾Ž)±rþÀ,óµ¶êG9 3&ì=À^£å&4‡ Ör<"Ï#r "!²iŠß+ »²13‘Ò0<ÎC a@ŽÈˆl‡H_àC  ^†ÿ-Æü‘þˆìÈÏ)4‰†¡õSà8D.G¤O¸Řç‹Î-—÷y±‹r„ÿYv7z‡Û¸9"éL Ãå¾á¼Û/…ó•…ó>^´Üå znðYñ‰›n—[ôœž„ïÓ”°RòD®BdwàŒ9 cžËD›ÿ¬­áUJ©ŽМ¹­{+Žž<š±ˆ «Øzö,>àz¦_s,¬— 9‚úŠNàù.ÆÂ#£%€§K¶#½§Ô2éØj*ýÎP7â³Ó™pþK6± øÑ4±âJ ã#&N×àåqÅ o ÈK$<0[ˆÄ(k¼žë¹4õ;™7÷þûŽ"Qâǃ•/§<$ / QÀñÁDvç²ófòiŸI¼|ó}Ü6á~3uéDŒRÏ¥ÞÚ’ªtz¿µ€/ûÁCgŒâÈ+aûZü9èâ¾ýå >xàVÆÜ:Žú¤¶ãý>—AhýÁ%òÍiéEa³ðç¨pÚ‚ʹˆ\ | ¼~®Ýp9^Q ….5ú„¿ ¨Á´Š‚u=Ð) ߇7ôxÐ,%hr0°1&‡È`(AûÛ¿ òð A;Þ4…v"~› ½Ya8ôÙ,|ýàTZ÷‘¢ˆ”„ûßî«ΛYiàm{ƒÚŠå‹ÚáJÑr£áᅢzy¸.|n!"o¿£PÛ­íx5ð*¥ÔO)ñ!’fQŸA쵪îeVrcN³‰-Þ‰ÓÆõ`û†1üsñè|×p~uË"U}ο¯ïÆöOßÆÇ·B¾,qƒ¶4 °1&y$W¸…Wÿx.}®Øƒ=Áˆ3ê˜z| ]ž÷ñlLõƒÙîÍ ²ˆ¼Cp³TDœ0œ¾tÁ˜mÂö·ŸÕ=Œ ÃóÓá2¶Fdk`,ð0AµÕÀ•½$€­Ãà7Ž —‡k0f~ØÅ؇Àdz"ûo¿î$hG|'°ðhXûyð2°Æ|€H4l.q'AñQáëÕaØ?è‡1SŠúÓ=NЬà1‚®ÍÎØ4´ÖR¾pÐ5Ü—ƒÃÒ|X|ƒ1×"rA—h„'ÜøWhû?ðNxÒ±M¸þÂ÷ëb` 7¶ c¬V6iPJ©H!záåË)¤âH' Ô/Ô\Î'[%=Ø*åïQß!Fҳڄжˌá˜Ù¤â6ŽéÁ€Æù¸Î|Ò±Ißk§]a¡‹¯¶¯/ïqaÚÑÔ':°qà}ê“⾃ã¯hû,0Y\; v%=2n›+ˆŽiĵ?"]'™OÒ%çƒ|F}…C©¯a÷Ç;g n'‚v³…ËÿcÌSaÀµiyo —÷½0Ж“¦uÏ…çšÂ0yd ß §­¦¿ áx‚n¾."hKœ#è6ì@‚ê>$¸|Ao '!RXÎ8àüp9ït]vAÏy‚Úë®´œŒçÃíóš÷+hþð <\ï’ðo8; »AwlÛܸ¶( ›þJûmx8‚ ß^hé-bÿð÷‡ár '*3næ;‘ «·Ãj©ý°ÍðIaþüøc¸/ú?£W)¥Ö·(äÅMŒx«˜0øšF\;xœôV¾¼àî÷ƒ#©qp «]VµR>NÒφÛ/ºyÌk÷@ß²ìBhvÛi.gᘎ—Á;haV¾ÿêâ´ 
pytest-benchmark-3.2.2/docs/pedantic.rst0000644000175000017500000000444313416261170016357 0ustar hlehlePedantic mode
=============

``pytest-benchmark`` allows a special mode that doesn't do any automatic calibration. To make it clear that it's only for people who know exactly what they need, it's called "pedantic".

.. sourcecode:: python

    def test_with_setup(benchmark):
        benchmark.pedantic(stuff, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)

Reference
---------

.. py:function:: benchmark.pedantic(target, args=(), kwargs=None, setup=None, rounds=1, warmup_rounds=0, iterations=1)

    :type target: callable
    :param target: Function to benchmark.

    :type args: list or tuple
    :param args: Positional arguments to the ``target`` function.

    :type kwargs: dict
    :param kwargs: Named arguments to the ``target`` function.

    :type setup: callable
    :param setup: A function to call right before calling the ``target`` function.

        The setup function can also return the arguments for the function (in case you need to create new arguments every time).

        .. sourcecode:: python

            def stuff(a, b, c, foo):
                pass

            def test_with_setup(benchmark):
                def setup():
                    # can optionally return a (args, kwargs) tuple
                    return (1, 2, 3), {'foo': 'bar'}

                benchmark.pedantic(stuff, setup=setup, rounds=100)
                # stuff(1, 2, 3, foo='bar') will be benchmarked

        .. note:: if you use a ``setup`` function then you cannot use the ``args``, ``kwargs`` and ``iterations`` options.

    :type rounds: int
    :param rounds: Number of rounds to run.

    :type iterations: int
    :param iterations: Number of iterations.

        In the non-pedantic mode (eg: ``benchmark(stuff, 1, 2, 3, foo='bar')``) the ``iterations`` is automatically chosen depending on what timer you have. In other words, be careful what you choose for this option.

        The default value (``1``) is **unsafe** for benchmarking very fast functions that take under 100μs (100 microseconds).

    :type warmup_rounds: int
    :param warmup_rounds: Number of warmup rounds.

        Set to non-zero to enable warmup. Warmup will run with the same number of iterations. Example: if you have ``iterations=5, warmup_rounds=10`` then your function will be called 50 times.
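The round/iteration/warmup accounting described above can be sketched outside pytest. The ``run_pedantic`` helper below is a hypothetical stand-in written for illustration only, not the plugin's actual implementation:

```python
import time

def run_pedantic(target, args=(), kwargs=None, rounds=1, warmup_rounds=0, iterations=1):
    # Hypothetical sketch of the accounting benchmark.pedantic performs:
    # warmup rounds run first (same iteration count per round), then timed rounds.
    kwargs = kwargs or {}
    timings = []
    for _ in range(warmup_rounds * iterations):
        target(*args, **kwargs)  # warmup calls are not timed
    for _ in range(rounds):
        start = time.perf_counter()
        for _ in range(iterations):
            target(*args, **kwargs)
        # each round's duration is the average over its iterations
        timings.append((time.perf_counter() - start) / iterations)
    return timings

calls = []
timings = run_pedantic(lambda: calls.append(1), rounds=10, warmup_rounds=10, iterations=5)
# 10 warmup rounds * 5 iterations + 10 timed rounds * 5 iterations = 100 calls,
# and one averaged duration per timed round
```

This mirrors the docstring's example: with ``iterations=5, warmup_rounds=10`` the warmup phase alone calls the function 50 times.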
pytest-benchmark-3.2.2/docs/comparing.rst0000644000175000017500000000437413416261170016552 0ustar hlehleComparing past runs
===================

Before comparing different runs it's ideal to make your tests as consistent as possible, see :doc:`faq` for more details.

`pytest-benchmark` has support for storing stats and data for the previous runs.

To store a run just add ``--benchmark-autosave`` or ``--benchmark-save=some-name`` to your pytest arguments. All the files are saved in a path like ``.benchmarks/Linux-CPython-3.4-64bit``.

* ``--benchmark-autosave`` saves a file like ``0001_c9cca5de6a4c7eb2_20150815_215724.json`` where:

  * ``0001`` is an automatically incremented id, much like how django migrations have a number.
  * ``c9cca5de6a4c7eb2`` is the commit id (if you use Git or Mercurial)
  * ``20150815_215724`` is the current time

  You should add ``--benchmark-autosave`` to ``addopts`` in your pytest configuration so you don't have to specify it all the time.

* ``--benchmark-save=foobar`` works similarly, but saves a file like ``0001_foobar.json``. It's there in case you want to give a specific name to the run.

After you have saved your first run you can compare against it with ``--benchmark-compare=0001``. You will get an additional row for each test in the result table, showing the differences.

You can also make the suite fail with ``--benchmark-compare-fail=<stat>:<num>%`` or ``--benchmark-compare-fail=<stat>:<num>``. Examples:

* ``--benchmark-compare-fail=min:5%`` will make the suite fail if ``Min`` is 5% slower for any test.
* ``--benchmark-compare-fail=mean:0.001`` will make the suite fail if ``Mean`` is 0.001 seconds slower for any test.

Comparing outside of pytest
---------------------------

There is a convenience CLI for listing/comparing past runs: ``pytest-benchmark`` (:ref:`comparison-cli`).

Example::

    pytest-benchmark compare 0001 0002

Plotting
--------

.. note:: To use plotting you need to ``pip install pygal pygaljs`` or ``pip install pytest-benchmark[histogram]``.
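The autosave naming scheme can be illustrated with a short parser. This is an illustrative sketch only, not part of pytest-benchmark's API:

```python
def parse_save_name(filename):
    """Split an autosaved benchmark filename into its parts.

    Autosaved files look like '0001_c9cca5de6a4c7eb2_20150815_215724.json':
    an incrementing counter, a commit id, then date and time components.
    """
    stem = filename.rsplit('.json', 1)[0]
    counter, commit, date, clock = stem.split('_')
    return {'counter': int(counter), 'commit': commit, 'timestamp': date + '_' + clock}

info = parse_save_name('0001_c9cca5de6a4c7eb2_20150815_215724.json')
# → {'counter': 1, 'commit': 'c9cca5de6a4c7eb2', 'timestamp': '20150815_215724'}
```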
You can also get a nice plot with ``--benchmark-histogram``. The result is a modified Tukey box and whisker plot where the outliers (the small bullets) are ``Min`` and ``Max``. Note that if you do not supply a name for the plot it is recommended that ``--benchmark-histogram`` is the last option passed.

Example output:

.. image:: screenshot-histogram.png
pytest-benchmark-3.2.2/docs/requirements.txt0000644000175000017500000000005613416261170017316 0ustar hlehlesphinx>=1.3
sphinx-py3doc-enhanced-theme
-e .
pytest-benchmark-3.2.2/docs/installation.rst0000644000175000017500000000014013416261170017259 0ustar hlehle============
Installation
============

At the command line::

    pip install pytest-benchmark
pytest-benchmark-3.2.2/docs/glossary.rst0000644000175000017500000000127113416261170016427 0ustar hlehleGlossary
========

Iteration
    A single run of your benchmarked function.

Round
    A set of iterations. The size of a `round` is computed in the calibration phase.

    Stats are computed with rounds, not with iterations. The duration for a round is an average of all the iterations in that round.

    See: :doc:`calibration` for an explanation of why it's like this.

Mean
    TODO

Median
    TODO

IQR
    Interquartile Range. This is a different way to measure variance. Good explanation `here `__

StdDev
    TODO: Standard Deviation

Outliers
    TODO
pytest-benchmark-3.2.2/docs/faq.rst0000644000175000017500000000543013416261170015334 0ustar hlehleFrequently Asked Questions
==========================

Why is my ``StdDev`` so high?
    There can be a few causes for this:

    * Bad isolation. You run other services on your machine that eat up your CPU, or you run in a VM and that makes machine performance inconsistent. Ideally you'd avoid such setups, stop all services and applications and use bare metal machines.
    * Bad tests or too much complexity. The function you're testing is doing I/O, using external resources, has side-effects or does other non-deterministic things. Ideally you'd avoid testing huge chunks of code.
    One special situation is PyPy: its GC and JIT can add unpredictable overhead - you'll see it as huge spikes all over the place. You should make sure that you have a good amount of warmup (using ``--benchmark-warmup`` and ``--benchmark-warmup-iterations``) to prime the JIT as much as possible. Unfortunately not much can be done about GC overhead.

    If you cannot make your tests more predictable and remove overhead you should look at different stats like IQR and Median. IQR is often `better than StdDev `_.

Why is my ``Min`` way lower than ``Q1-1.5IQR``?
    You may see this issue in the histogram plot. This is another instance of *bad isolation*. For example, Intel CPUs have a feature called `Turbo Boost `_ which overclocks your CPU depending on how many cores are in use at that time and how hot your CPU is. If your CPU is too hot you get no Turbo Boost. If you get Turbo Boost active then the CPU quickly gets hot. You can see how this won't work for sustained workloads.

    When Turbo Boost kicks in you may see "speed spikes" - and you'd get this strange outlier ``Min``.

    When you have other programs running on your machine you may also see "speed spikes" - the other programs idle for a brief moment and that allows your function to run way faster in that brief moment.

I can't avoid using VMs or running other programs. What can I do?
    As a last-ditch effort pytest-benchmark allows you to plug in custom timers (``--benchmark-timer``). You could use something like ``time.process_time`` (Python 3.3+ only) as the timer. Process time `doesn't include sleeping or waiting for I/O `_.

The histogram doesn't show ``Max`` time. What gives?!
    The height of the plot is limited to ``Q3+1.5IQR`` because ``Max`` has the nasty tendency to be way higher, making everything else look small and hard to discern. For this reason ``Max`` is *plotted outside*. Most people don't care about ``Max`` at all so this is fine.
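The quartile arithmetic behind these answers (``Q1-1.5IQR`` and ``Q3+1.5IQR``, the Tukey fences) can be sketched in plain Python. This is a hand-rolled illustration using the standard library; it is not how pytest-benchmark computes its stats internally:

```python
import statistics

def tukey_fences(samples):
    """Compute the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR) used to spot outliers."""
    q1, _, q3 = statistics.quantiles(samples, n=4)  # quartiles (Python 3.8+)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# A mostly-steady timing series with one "speed spike" Min and one slow Max:
timings = [10.0, 10.1, 10.2, 10.1, 10.3, 10.2, 10.1, 10.2, 4.0, 30.0]
low, high = tukey_fences(timings)
outliers = [t for t in timings if t < low or t > high]
# the 4.0 "Min" and the 30.0 "Max" fall outside the fences
```

This is exactly why a "speed spike" ``Min`` shows up as an outlier bullet in the histogram: it sits below ``Q1-1.5IQR`` even though the bulk of the rounds are tightly clustered.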
pytest-benchmark-3.2.2/docs/changelog.rst0000644000175000017500000000003613416261170016511 0ustar hlehle.. include:: ../CHANGELOG.rst
pytest-benchmark-3.2.2/docs/spelling_wordlist.txt0000644000175000017500000000015513416261170020337 0ustar hlehlebuiltin
builtins
classmethod
staticmethod
classmethods
staticmethods
args
kwargs
callstack
Changelog
Indices
pytest-benchmark-3.2.2/docs/index.rst0000644000175000017500000000421413416261170015673 0ustar hlehleWelcome to pytest-benchmark's documentation!
============================================

This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed to it.

Notable features and goals:

* Sensible defaults and automatic calibration for microbenchmarks
* Good integration with pytest
* Comparison and regression tracking
* Exhaustive statistics
* JSON export

Examples:

.. code-block:: python

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

    def test_my_stuff_different_arg(benchmark):
        # benchmark something, but add some arguments
        result = benchmark(something, 0.001)
        assert result == 123

Screenshots
-----------

Normal run:

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot.png
    :alt: Screenshot of py.test summary

Compare mode (``--benchmark-compare``):

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot-compare.png
    :alt: Screenshot of py.test summary in compare mode

Histogram (``--benchmark-histogram``):

.. image:: https://cdn.rawgit.com/ionelmc/pytest-benchmark/94860cc8f47aed7ba4f9c7e1380c2195342613f6/docs/sample-tests_test_normal.py_test_xfast_parametrized%5B0%5D.svg
    :alt: Histogram sample

..  Also, it has `nice tooltips `_.

User guide
==========

.. toctree::
    :maxdepth: 2

    installation
    usage
    calibration
    pedantic
    comparing
    hooks
    faq
    glossary
    contributing
    authors
    changelog

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
pytest-benchmark-3.2.2/docs/usage.rst0000644000175000017500000003036713416261170015674 0ustar hlehle=====
Usage
=====

This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

.. code-block:: python

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, duration=0.02)

Another pattern seen in the wild, that is not recommended for micro-benchmarks (very fast code) but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!
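To see why the wrapper pattern is discouraged for very fast code, the extra call's cost can be observed outside pytest with the standard ``timeit`` module. This is an illustrative sketch only; the exact numbers depend entirely on your machine:

```python
import timeit

def target():
    pass

def wrapped():
    target()  # the extra call frame is measured along with the real work

# best-of-5 to reduce scheduling noise, much as a benchmark runner would do
direct = min(timeit.repeat(target, number=100_000, repeat=5))
indirect = min(timeit.repeat(wrapped, number=100_000, repeat=5))
# `indirect` includes the wrapper's call overhead on every single iteration,
# which is significant relative to a micro-benchmarked function's own runtime
```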
If you need fine control over how the benchmark is run (like a `setup` function, exact control of `iterations` and `rounds`) there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)

Commandline options
===================

``py.test`` command-line options:

--benchmark-min-time=SECONDS
    Minimum time per round in seconds. Default: '0.000005'
--benchmark-max-time=SECONDS
    Maximum run time per test - it will be repeated until this total time is reached. It may be exceeded if the test function is very slow or --benchmark-min-rounds is large (it takes precedence). Default: '1.0'
--benchmark-min-rounds=NUM
    Minimum rounds, even if total time would exceed `--max-time`. Default: 5
--benchmark-timer=FUNC
    Timer to use when measuring time. Default: 'time.perf_counter'
--benchmark-calibration-precision=NUM
    Precision to use when calibrating number of iterations. Precision of 10 will make the timer look 10 times more accurate, at a cost of less precise measure of deviations. Default: 10
--benchmark-warmup=KIND
    Activates warmup. Will run the test function up to number of times in the calibration phase. See `--benchmark-warmup-iterations`. Note: Even the warmup phase obeys --benchmark-max-time. Available KIND: 'auto', 'off', 'on'. Default: 'auto' (automatically activate on PyPy).
--benchmark-warmup-iterations=NUM
    Max number of iterations to run in the warmup phase. Default: 100000
--benchmark-disable-gc
    Disable GC during benchmarks.
--benchmark-skip
    Skip running any tests that contain benchmarks.
--benchmark-disable
    Disable benchmarks. Benchmarked functions are only run once and no stats are reported. Use this if you want to run the test but don't do any benchmarking.
--benchmark-enable
    Forcibly enable benchmarks. Use this option to override --benchmark-disable (in case you have it in pytest configuration).
--benchmark-only
    Only run benchmarks. This overrides --benchmark-skip.
--benchmark-save=NAME
    Save the current run into 'STORAGE-PATH/counter-NAME.json'. Default: '__