DTest-0.4.0/0000755000175000017500000000000011607315474013044 5ustar sorensoren00000000000000DTest-0.4.0/README.rst0000644000175000017500000001240211607315463014530 0ustar sorensoren00000000000000======================================== Dependency-based Threaded Test Framework ======================================== The DTest framework is a testing framework, similar to the standard ``unittest`` package provided by Python. The value-add for DTest, however, is that test execution is threaded, through use of the ``eventlet`` package. The DTest package also provides the concept of "dependencies" between tests and test fixtures--thus the "D" in "DTest"--which ensure that tests don't run until the matching set up test fixtures have completed, and that the tear down test fixtures don't run until all the associated tests have completed. Dependencies may also be used to ensure that tests requiring the availability of certain functionality don't run if the tests of that specific functionality fail. Writing Tests ============= The simplest test programs are simple functions with names beginning with "test," located in Python source files whose names also begin with "test." It is not even necessary to import any portion of the DTest framework. If tests are collected in classes, however, or if use of the more advanced features of DTest is desired, a simple ``from dtest import *`` is necessary. This makes available the ``DTestCase`` class--which should be extended by all classes containing tests--as well as such decorators as ``@skip`` and ``@nottest``. Tests may be performed using the standard Python ``assert`` statement; however, a number of utility routines are available in the ``dtest.util`` module (also safe for ``import *``). Many of these utility routines have names similar to methods of ``unittest.TestCase``--e.g., ``dtest.util.assert_dict_equal()`` is analogous to ``unittest.TestCase.assertDictEqual()``. Test Fixtures ============= The DTest framework supports test fixtures--set up and tear down functions--at the class, module, and package level. Package-level fixtures consist of functions named ``setUp()`` and ``tearDown()`` contained within "__init__.py" files; similarly, module-level fixtures consist of functions named ``setUp()`` and ``tearDown()`` within modules containing test functions and classes of test methods. At the class level, classes may contain ``setUpClass()`` and ``tearDownClass()`` class methods (or static methods), which may perform set up and tear down for each class. In all cases, the ``setUp()`` functions and the ``setUpClass()`` method are executed before any of the tests within the same scope; similarly, after all the tests at a given scope have executed, the corresponding ``tearDownClass()`` method and ``tearDown()`` functions are executed. The DTest framework also supports per-test ``setUp()`` and ``tearDown()`` functions or methods, which are run before and after each associated test. For classes containing tests, each test automatically has the setUp() and tearDown() methods of the class associated with them; however, for all tests, these fixtures can be explicitly set (or overridden from the class default). Consider the following example:: @istest def test_something(): # Test something here pass @test_something.setUp def something_setup(): # Get everything set up ready to go... 
pass @test_something.tearDown def something_teardown(): # Clean up after ourselves pass In this example, a DTest decorator (other than ``@nottest``) is necessary preceding ``test_something()``; here we used ``@istest``, but any other available DTest decorator could be used here. This makes the ``@test_something.setUp`` and ``@test_something.tearDown`` decorators available. (For something analogous in the standard Python, check out the built-in ``@property`` decorator.) Running Tests ============= Running tests using the DTest framework is fairly straight-forward. A script called ``run-dtests`` is available. By default, the current directory is scanned for all modules or packages whose names begin with "test"; the search also recurses down through all packages. (A "package" is defined as a directory containing "__init__.py".) Once all tests are discovered, they are then executed, and the results of the tests emitted to standard output. Several command-line options are available for controlling the behavior of ``run-dtests``. For instance, the "--no-skip" option will cause ``run-dtests`` to run all tests, even those decorated with the ``@skip`` decorator, and the "-d" option causes ``run-dtests`` to search a specific directory, rather than the current directory. For a full list of options, use the "-h" or "--help" option. Running ``run-dtests`` from the command line is not the only way to run tests, however. The ``run-dtests`` script is a very simple script that parses command-line options (using the ``OptionParser`` constructed by the ``dtest.optparser()`` function), converts those options into a set of keyword arguments (using ``dtest.opts_to_args()``), then passes those keyword arguments to the ``dtest.main()`` function. Users can use these functions to build the same functionality with user-specific extensions, such as providing an alternate DTestOutput instance to control how test results are displayed, or providing an alternate method for controlling which tests are skipped. See the documentation strings for these functions and classes for more information. DTest-0.4.0/MANIFEST.in0000644000175000017500000000007111607315463014576 0ustar sorensoren00000000000000include MANIFEST.in run_tests.py *.txt *.rst graft tests DTest-0.4.0/bin/0000755000175000017500000000000011607315474013614 5ustar sorensoren00000000000000DTest-0.4.0/bin/run-dtests0000755000175000017500000000124111607315463015646 0ustar sorensoren00000000000000#!/bin/sh # # Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. exec python -m dtest.core "$@" DTest-0.4.0/setup.py0000755000175000017500000000326611607315463014566 0ustar sorensoren00000000000000#!/usr/bin/python # # Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from distutils.core import setup setup( name='DTest', version='0.4.0', description="Dependency-based Threaded Test Framework", author="Kevin L. Mitchell", author_email="kevin.mitchell@rackspace.com", url="http://github.com/klmitch/dtest", scripts=['bin/run-dtests'], packages=['dtest'], license="LICENSE.txt", long_description=open('README.rst').read(), requires=['eventlet'], classifiers=[ 'Development Status :: 3 - Alpha', 'Environment :: Console', 'Intended Audience :: Developers', 'Intended Audience :: End Users/Desktop', 'Intended Audience :: Information Technology', 'License :: OSI Approved :: Apache Software License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Topic :: Software Development :: Testing', ], ) DTest-0.4.0/tests/0000755000175000017500000000000011607315474014206 5ustar sorensoren00000000000000DTest-0.4.0/tests/test_tests.py0000644000175000017500000000170211607315463016757 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * from dtest.util import * def test_nothing(): # Do-nothing test for the attribute access test pass def test_attribute_missing(): # Verify that missing attributes on tests raise the correct # exception with assert_raises(AttributeError): dummy = test_nothing._dt_dtest.missing_attr DTest-0.4.0/tests/test_alternate.py0000644000175000017500000000262411607315463017600 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from dtest import * from dtest.util import * # Ensure that the alternate setUp/tearDown decorators work class TestAlternate(DTestCase): alternate = None def setUp(self): assert_is_none(self.alternate) self.alternate = False def tearDown(self): assert_false(self.alternate) # Should use the default setUp/tearDown def test1(self): assert_false(self.alternate) # Have to use @istest here to make the decorators available @istest def test2(self): assert_true(self.alternate) # Alternate setUp/tearDown for test2 @test2.setUp def alternateSetUp(self): assert_is_none(self.alternate) self.alternate = True @test2.tearDown def alternateTearDown(self): assert_true(self.alternate) DTest-0.4.0/tests/test_policy.py0000644000175000017500000000162511607315463017120 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * from dtest.util import * @threshold(50.0) def test_threshold(): # Test function to return def tfcn(i): # Succeed if i is even assert_equal(i % 2, 0) # Yield several iterations of tfcn for i in range(100): yield (tfcn, (i,)) DTest-0.4.0/tests/test_partner.py0000644000175000017500000000160711607315463017274 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * from dtest.util import * # Ensure that fixtures don't run if there are no tests between them setUpRun = False tearDownRun = False def setUp(): global setUpRun setUpRun = True def tearDown(): global tearDownRun tearDownRun = True DTest-0.4.0/tests/ordering/0000755000175000017500000000000011607315474016017 5ustar sorensoren00000000000000DTest-0.4.0/tests/ordering/test_order.py0000644000175000017500000001177711607315463020556 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import dtest # Need t_order import tests def setUp(): # Make sure we're the third thing to have run assert len(tests.t_order) == 2, "Ordering error running test suite" assert tests.t_order[-1] == 'tests.ordering.setUp', \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.setUp') def tearDown(): # Make sure we're the twelfth thing to have run assert len(tests.t_order) == 11, "Ordering error running test suite" assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.tearDownClass'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.tearDown') class OrderingTestCase(dtest.DTestCase): @classmethod def setUpClass(cls): # Make sure we're the fourth thing to have run assert len(tests.t_order) == 3, "Ordering error running test suite" assert tests.t_order[-1] == 'tests.ordering.test_order.setUp', \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.setUpClass') @classmethod def tearDownClass(cls): # Make sure we're the eleventh thing to have run assert len(tests.t_order) == 10, "Ordering error running test suite" assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.tearDown'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.tearDownClass') def setUp(self): # Make sure we're the fifth or eighth thing to have run assert len(tests.t_order) == 4 or len(tests.t_order) == 7, \ "Ordering error running test suite" if len(tests.t_order) == 4: assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.setUpClass'), \ "Incorrect previous step" else: assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.tearDown'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.setUp') def tearDown(self): # Make sure we're the seventh or tenth thing to have run assert len(tests.t_order) == 6 or len(tests.t_order) == 9, \ "Ordering error running test suite" if len(tests.t_order) == 6: assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.test1'), \ "Incorrect previous step" else: assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.test2'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.tearDown') def test1(self): # Make sure we're the sixth thing to have run assert len(tests.t_order) == 5, "Ordering error running test suite" assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.setUp'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.test1') @dtest.depends(test1) def test2(self): # Make sure we're the ninth thing to have run assert len(tests.t_order) == 8, "Ordering error running test suite" assert tests.t_order[-1] == ('tests.ordering.test_order.' 'OrderingTestCase.setUp'), \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.test_order.' 'OrderingTestCase.test2') DTest-0.4.0/tests/ordering/__init__.py0000644000175000017500000000241211607315463020125 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Need t_order import tests def setUp(): # Make sure we're the second thing to have run assert len(tests.t_order) == 1, "Ordering error running test suite" assert tests.t_order[-1] == 'tests.setUp', "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.setUp') def tearDown(): # Make sure we're the thirteenth thing to have run assert len(tests.t_order) == 12, "Ordering error running test suite" assert tests.t_order[-1] == 'tests.ordering.test_order.tearDown', \ "Incorrect previous step" # Keep track of what has run tests.t_order.append('tests.ordering.tearDown') DTest-0.4.0/tests/test_inheritance.py0000644000175000017500000000327211607315463020112 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * from dtest.util import * # Define setUpClass/tearDownClass/setUp/tearDown for inheritance class TestInheritanceBase(DTestCase): class_setup = None instance_setup = None @classmethod def setUpClass(cls): assert_is_none(cls.class_setup) cls.class_setup = True @classmethod def tearDownClass(cls): assert_false(cls.class_setup) def setUp(self): assert_is_none(self.instance_setup) self.instance_setup = True def tearDown(self): assert_false(self.instance_setup) # See if we inherited them class TestInheritance(TestInheritanceBase): @attr(must_skip=True) def test_inheritance(self): assert_true(self.class_setup) assert_true(self.instance_setup) TestInheritanceBase.class_setup = False self.instance_setup = False # Let's really stress things out, here... class TestInheritanceTwo(TestInheritance): def test_inheritance(self): # Make sure we can call our superclass method super(TestInheritanceTwo, self).test_inheritance() DTest-0.4.0/tests/__init__.py0000644000175000017500000000215411607315463016317 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
t_order = [] def setUp(): # Make sure we're the first thing to have run assert len(t_order) == 0, "Ordering error running test suite" # Keep track of what has run t_order.append('tests.setUp') def tearDown(): # Make sure we're the fourteenth thing to have run assert len(t_order) == 13, "Ordering error running test suite" assert t_order[-1] == 'tests.ordering.tearDown', "Incorrect previous step" # Keep track of what has run t_order.append('tests.tearDown') DTest-0.4.0/tests/test_multi.py0000644000175000017500000000301611607315463016747 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * from dtest.util import * @repeat(2) def test_multi(): # Set up a list to record executions recorded = [] # Now, define an inner function def inner(*args, **kwargs): # Place the arguments into the recorded list recorded.append((args, kwargs)) # Now, yield the inner function once... yield ('inner1', inner, (1,), dict(kw=1)) # Yield it again yield ('inner2', inner, (2,), dict(kw=2)) # Now, check if recorded has what we expect assert_equal(len(recorded), 4) assert_tuple_equal(recorded[0][0], (1,)) assert_dict_equal(recorded[0][1], dict(kw=1)) assert_tuple_equal(recorded[1][0], (1,)) assert_dict_equal(recorded[1][1], dict(kw=1)) assert_tuple_equal(recorded[2][0], (2,)) assert_dict_equal(recorded[2][1], dict(kw=2)) assert_tuple_equal(recorded[3][0], (2,)) assert_dict_equal(recorded[3][1], dict(kw=2)) DTest-0.4.0/tests/explore/0000755000175000017500000000000011607315474015664 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/test_notpkg/0000755000175000017500000000000011607315474020225 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/test_notpkg/test.py0000644000175000017500000000136111607315463021555 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/pkg/0000755000175000017500000000000011607315474016445 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/pkg/nottest.py0000644000175000017500000000136111607315463020516 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/pkg/__init__.py0000644000175000017500000000136111607315463020555 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/pkg_impl/0000755000175000017500000000000011607315474017466 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/pkg_impl/__init__.py0000644000175000017500000000136111607315463021576 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/pkg_impl/test_impl.py0000644000175000017500000000136111607315463022037 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/test_pkg/0000755000175000017500000000000011607315474017504 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/test_pkg/__init__.py0000644000175000017500000000136111607315463021614 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/notpkg/0000755000175000017500000000000011607315474017166 5ustar sorensoren00000000000000DTest-0.4.0/tests/explore/notpkg/test.py0000644000175000017500000000136111607315463020516 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/explore/__init__.py0000644000175000017500000000136111607315463017774 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from dtest import * @istest def a_test(): pass @nottest def test_not(): pass def test_discovered(): pass DTest-0.4.0/tests/test_decorators.py0000644000175000017500000001224011607315463017761 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from dtest import * from dtest.policy import ThresholdPolicy from dtest.strategy import SerialStrategy, UnlimitedParallelStrategy, \ LimitedParallelStrategy from dtest.test import DTestFixture from dtest.util import * class TestThrowaway(DTestCase): def test_fordep(self): pass @skip def test_skip(): pass @failing def test_failing(): assert False @attr(attr1=1, attr2=2) def test_attr(): pass @depends(test_skip, test_attr, TestThrowaway.test_fordep) def test_depends(): pass class DecoratorTestException(Exception): pass @raises(DecoratorTestException) def test_raises(): raise DecoratorTestException() @timed(1) def test_timed(): pass @repeat(2) def test_repeat(): pass @parallel def test_parallel(): pass @parallel(2) def test_parallel_limited(): pass @threshold(50) def test_threshold(): pass class TestDecorators(DTestCase): @depends(test_timed) @classmethod def setUpClass(cls): pass @istest def skip(self): # Verify that skip is true... assert_true(test_skip._dt_dtest.skip) # Verify that it's false on something else assert_false(test_failing._dt_dtest.skip) @istest def failing(self): # Verify that failing is true... assert_true(test_failing._dt_dtest.failing) # Verify that it's false on something else assert_false(test_skip._dt_dtest.failing) @istest def attr(self): # Verify that the attributes are set as expected assert_equal(test_attr._dt_dtest.attr1, 1) assert_equal(test_attr._dt_dtest.attr2, 2) @istest def depends(self): # Part 1: Verify that test_depends() is dependent on # test_skip() and test_attr() assert_in(test_skip._dt_dtest, test_depends._dt_dtest.dependencies) assert_in(test_attr._dt_dtest, test_depends._dt_dtest.dependencies) assert_in(TestThrowaway.test_fordep._dt_dtest, test_depends._dt_dtest.dependencies) # Part 2: Verify that test_depends() is in the depedents set # of test_skip() and test_attr() assert_in(test_depends._dt_dtest, test_skip._dt_dtest.dependents) assert_in(test_depends._dt_dtest, test_attr._dt_dtest.dependents) assert_in(test_depends._dt_dtest, TestThrowaway.test_fordep._dt_dtest.dependents) @istest def raises(self): # Verify that the set of expected exceptions is as expected assert_set_equal(test_raises._dt_dtest.raises, set([DecoratorTestException])) # Verify that it's the empty set on something else assert_set_equal(test_timed._dt_dtest.raises, set()) @istest def timed(self): # Verify that the timeout is set properly assert_equal(test_timed._dt_dtest.timeout, 1) # Verify that it's None on something else assert_is_none(test_raises._dt_dtest.timeout) @istest def repeat(self): # Verify that the repeat count is set properly assert_equal(test_repeat._dt_dtest.repeat, 2) # Verify that it's 1 on something else assert_equal(test_timed._dt_dtest.repeat, 1) @istest def parallel(self): # Verify that the strategy is set properly assert_is_instance(test_parallel._dt_dtest._strategy, UnlimitedParallelStrategy) # Verify that it's SerialStrategy on something else assert_is_instance(test_timed._dt_dtest._strategy, SerialStrategy) @istest def parallel_limited(self): # Verify that the strategy is set properly assert_is_instance(test_parallel_limited._dt_dtest._strategy, LimitedParallelStrategy) # Verify that the limit is set properly assert_equal(test_parallel_limited._dt_dtest._strategy.limit, 2) @istest def threshold(self): # Verify that the policy is set properly assert_is_instance(test_threshold._dt_dtest._policy, ThresholdPolicy) # Verify that the threshold is set properly assert_almost_equal(test_threshold._dt_dtest._policy.threshold, 50.0) @istest def isfixture(self): # 
Verify that setUpClass has a fixture associated with it assert_is_instance(self.setUpClass._dt_dtest, DTestFixture) # Verify that we have the appropriate dependencies assert_in(test_timed._dt_dtest, self.setUpClass._dt_dtest.dependencies) # Verify that we have the appropriate dependents assert_in(self.setUpClass._dt_dtest, test_timed._dt_dtest.dependents) DTest-0.4.0/run_tests.py0000755000175000017500000001156211607315463015452 0ustar sorensoren00000000000000#!/usr/bin/python # # Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import dtest from dtest import util exp_order = [ 'tests.setUp', 'tests.ordering.setUp', 'tests.ordering.test_order.setUp', 'tests.ordering.test_order.OrderingTestCase.setUpClass', 'tests.ordering.test_order.OrderingTestCase.setUp', 'tests.ordering.test_order.OrderingTestCase.test1', 'tests.ordering.test_order.OrderingTestCase.tearDown', 'tests.ordering.test_order.OrderingTestCase.setUp', 'tests.ordering.test_order.OrderingTestCase.test2', 'tests.ordering.test_order.OrderingTestCase.tearDown', 'tests.ordering.test_order.OrderingTestCase.tearDownClass', 'tests.ordering.test_order.tearDown', 'tests.ordering.tearDown', 'tests.tearDown', ] required = [ 'tests.explore.a_test', 'tests.explore.test_discovered', 'tests.explore.test_pkg.a_test', 'tests.explore.test_pkg.test_discovered', 'tests.explore.pkg_impl.a_test', 'tests.explore.pkg_impl.test_discovered', 'tests.explore.pkg_impl.test_impl.a_test', 'tests.explore.pkg_impl.test_impl.test_discovered', ] prohibited = [ 'tests.explore.test_not', 'tests.explore.pkg.a_test', 'tests.explore.pkg.test_not', 'tests.explore.pkg.test_discovered', 'tests.explore.pkg.nottest.a_test', 'tests.explore.pkg.nottest.test_not', 'tests.explore.pkg.nottest.test_discovered', 'tests.explore.notpkg.a_test', 'tests.explore.notpkg.test_not', 'tests.explore.notpkg.test_discovered', 'tests.explore.test_pkg.test_not', 'tests.explore.test_notpkg.a_test', 'tests.explore.test_notpkg.test_not', 'tests.explore.test_notpkg.test_discovered', 'tests.explore.pkg_impl.test_not', 'tests.explore.pkg_impl.test_impl.test_not', ] @dtest.istest def test_ordering(): # Look up t_order and make sure it's right t_order = sys.modules['tests'].t_order util.assert_list_equal(t_order, exp_order) @dtest.istest def test_discovery(): # Get the list of test names tnames = set([str(t) for t in queue.tests]) # Now go through the required list and make sure all those tests # are present for t in required: assert t in tnames, "Required test %r not discovered" % t # And similarly for the prohibited list for t in prohibited: assert t not in tnames, "Prohibited test %r discovered" % t @dtest.istest def test_partner_setUp(): # Look up setUpRun and make sure it's right setUpRun = sys.modules['tests.test_partner'].setUpRun util.assert_false(setUpRun) @dtest.istest def test_partner_tearDown(): # Look up tearDownRun and make sure it's right tearDownRun = sys.modules['tests.test_partner'].tearDownRun util.assert_false(tearDownRun) # Start by 
processing the command-line arguments (options, args) = dtest.optparser(usage="%prog [options]").parse_args() # Get the options opts = dtest.opts_to_args(options) # If directory isn't set, use "tests" if 'directory' not in opts: opts['directory'] = 'tests' # Need to allocate a queue; select some suboptions for the task subopts = {'skip': lambda dt: hasattr(dt, 'must_skip') and dt.must_skip} if 'maxth' in opts: subopts['maxth'] = opts['maxth'] if 'output' in opts: subopts['output'] = opts['output'] queue = dtest.DTestQueue(**subopts) # OK, we need to do the explore dtest.explore(opts['directory'], queue) # Now, set up the dependency between tests.tearDown and our # test_ordering() test and the test_partner_*() tests dtest.depends(sys.modules['tests'].tearDown)(test_ordering) dtest.depends(sys.modules['tests'].tearDown)(test_partner_setUp) dtest.depends(sys.modules['tests'].tearDown)(test_partner_tearDown) # Have to add local tests to tests set queue.add_test(test_ordering) queue.add_test(test_discovery) queue.add_test(test_partner_setUp) queue.add_test(test_partner_tearDown) # Implement the rest of dtest.main() if not opts.get('dryrun', False): # Execute the tests result = queue.run(opts.get('debug', False)) else: result = True # Print out the names of the tests print "Discovered tests:\n" for dt in queue.tests: if dt.istest(): print str(dt) # Are we to dump the dependency graph? if 'dotpath' in opts: with open(opts['dotpath'], 'w') as f: print >>f, queue.dot() # All done! sys.exit(not result) DTest-0.4.0/PKG-INFO0000644000175000017500000001601511607315474014144 0ustar sorensoren00000000000000Metadata-Version: 1.1 Name: DTest Version: 0.4.0 Summary: Dependency-based Threaded Test Framework Home-page: http://github.com/klmitch/dtest Author: Kevin L. Mitchell Author-email: kevin.mitchell@rackspace.com License: LICENSE.txt Description: ======================================== Dependency-based Threaded Test Framework ======================================== The DTest framework is a testing framework, similar to the standard ``unittest`` package provided by Python. The value-add for DTest, however, is that test execution is threaded, through use of the ``eventlet`` package. The DTest package also provides the concept of "dependencies" between tests and test fixtures--thus the "D" in "DTest"--which ensure that tests don't run until the matching set up test fixtures have completed, and that the tear down test fixtures don't run until all the associated tests have completed. Dependencies may also be used to ensure that tests requiring the availability of certain functionality don't run if the tests of that specific functionality fail. Writing Tests ============= The simplest test programs are simple functions with names beginning with "test," located in Python source files whose names also begin with "test." It is not even necessary to import any portion of the DTest framework. If tests are collected in classes, however, or if use of the more advanced features of DTest is desired, a simple ``from dtest import *`` is necessary. This makes available the ``DTestCase`` class--which should be extended by all classes containing tests--as well as such decorators as ``@skip`` and ``@nottest``. Tests may be performed using the standard Python ``assert`` statement; however, a number of utility routines are available in the ``dtest.util`` module (also safe for ``import *``). 
Many of these utility routines have names similar to methods of ``unittest.TestCase``--e.g., ``dtest.util.assert_dict_equal()`` is analogous to ``unittest.TestCase.assertDictEqual()``. Test Fixtures ============= The DTest framework supports test fixtures--set up and tear down functions--at the class, module, and package level. Package-level fixtures consist of functions named ``setUp()`` and ``tearDown()`` contained within "__init__.py" files; similarly, module-level fixtures consist of functions named ``setUp()`` and ``tearDown()`` within modules containing test functions and classes of test methods. At the class level, classes may contain ``setUpClass()`` and ``tearDownClass()`` class methods (or static methods), which may perform set up and tear down for each class. In all cases, the ``setUp()`` functions and the ``setUpClass()`` method are executed before any of the tests within the same scope; similarly, after all the tests at a given scope have executed, the corresponding ``tearDownClass()`` method and ``tearDown()`` functions are executed. The DTest framework also supports per-test ``setUp()`` and ``tearDown()`` functions or methods, which are run before and after each associated test. For classes containing tests, each test automatically has the setUp() and tearDown() methods of the class associated with them; however, for all tests, these fixtures can be explicitly set (or overridden from the class default). Consider the following example:: @istest def test_something(): # Test something here pass @test_something.setUp def something_setup(): # Get everything set up ready to go... pass @test_something.tearDown def something_teardown(): # Clean up after ourselves pass In this example, a DTest decorator (other than ``@nottest``) is necessary preceding ``test_something()``; here we used ``@istest``, but any other available DTest decorator could be used here. This makes the ``@test_something.setUp`` and ``@test_something.tearDown`` decorators available. (For something analogous in the standard Python, check out the built-in ``@property`` decorator.) Running Tests ============= Running tests using the DTest framework is fairly straight-forward. A script called ``run-dtests`` is available. By default, the current directory is scanned for all modules or packages whose names begin with "test"; the search also recurses down through all packages. (A "package" is defined as a directory containing "__init__.py".) Once all tests are discovered, they are then executed, and the results of the tests emitted to standard output. Several command-line options are available for controlling the behavior of ``run-dtests``. For instance, the "--no-skip" option will cause ``run-dtests`` to run all tests, even those decorated with the ``@skip`` decorator, and the "-d" option causes ``run-dtests`` to search a specific directory, rather than the current directory. For a full list of options, use the "-h" or "--help" option. Running ``run-dtests`` from the command line is not the only way to run tests, however. The ``run-dtests`` script is a very simple script that parses command-line options (using the ``OptionParser`` constructed by the ``dtest.optparser()`` function), converts those options into a set of keyword arguments (using ``dtest.opts_to_args()``), then passes those keyword arguments to the ``dtest.main()`` function. 
Users can use these functions to build the same functionality with user-specific extensions, such as providing an alternate DTestOutput instance to control how test results are displayed, or providing an alternate method for controlling which tests are skipped. See the documentation strings for these functions and classes for more information. Platform: UNKNOWN Classifier: Development Status :: 3 - Alpha Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Software Development :: Testing Requires: eventlet DTest-0.4.0/LICENSE.txt0000644000175000017500000002363711607315463014700 0ustar sorensoren00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. DTest-0.4.0/dtest/0000755000175000017500000000000011607315474014167 5ustar sorensoren00000000000000DTest-0.4.0/dtest/result.py0000644000175000017500000005473611607315463016074 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ============ Test Results ============ This module contains the DTestResult and DTestMessage classes, which are used to represent the results of tests. Instances of DTestResult contain the current state of a test, whether the test was successful or if an error was encountered, and any exceptions and output messages that were generated while running the test. The output messages are contained in an instance of DTestMessage. """ from dtest import capture from dtest.constants import * from eventlet.timeout import Timeout class ResultContext(object): """ ResultContext ============= The ResultContext class is a Python context manager used in the automatic collection of captured output and exception handling. It is instantiated by DTestResult.accumulate() """ def __init__(self, result, ctx, excs): """ Initialize a ResultContext associated with the given ``result``. The context will handle messages for the part of the test run given by ``ctx`` (may be PRE, TEST, or POST). The exceptions listed in the ``excs`` tuple are expected; if no exceptions are expected, pass ``excs`` as None. """ # Save the basic information self.result = result self.ctx = ctx self.excs = excs # There's no timeout... self.timeout = None def __enter__(self): """ Begin the context handling. Clears out any captured data and initializes any timeouts defined for the test. """ # Clear the captured values for this thread capture.retrieve() # If test should be timed, set up the timeout if self.result._test._timeout: self.timeout = Timeout(self.result._test._timeout, AssertionError("Timed out after %s " "seconds" % self.result._test._timeout)) def __exit__(self, exc_type, exc_value, tb): """ Ends context handling. Cancels any pending timeouts, retrieves output data and exceptions, and determines the final result of the test. A DTestMessage object is initialized if necessary. """ # Cancel the timeout if one is pending if self.timeout is not None: self.timeout.cancel() self.timeout = None # Get the output and clean up captured = capture.retrieve() # If this was the test, determine a result if self.ctx in (PRE, TEST): self.result._set_result(self, exc_type, exc_value, tb) # Generate a message, if necessary if captured or exc_type or exc_value or tb: self.result._storemsg(self, captured, exc_type, exc_value, tb) # We handled the exception return True class DTestResult(object): """ DTestResult =========== The DTestResult class stores the current state of a test, as well as the results and output messages of a test and its immediate fixtures. Various special methods are implemented, allowing the result to appear True if the test passed and False if the test did not pass, as well as allowing the messages to be accessed easily. Three public properties are available: the ``test`` property returns the associated test; the ``state`` property returns the state of the test, which can also indicate the final result; and the ``msgs`` property returns a list of the messages generated while executing the test. 
Note that the string representation of a DTestResult object is identical to its state. Test messages ------------- Messages can be emitted during three separate phases of test execution. The first step of executing a test is to execute the setUp() method defined for the class; the second step is executing the test itself; and the third step is to execute the tearDown() method defined for the class. (The setUp() and tearDown() methods used can be set or overridden using the setUp() and tearDown() decorators of DTest.) Messages produced by each phase are saved, and are identified by the constants PRE, TEST, and POST, respectively. This could be used to warn a developer that, although a test passed, the following tearDown() function failed for some reason. The list of test message objects (instances of class DTestMessage) can be retrieved, in the order (PRE, TEST, POST), using the ``msgs`` property, as indicated above. Additionally, the presence of each type of message can be discerned with the ``in`` operator (e.g., ``PRE in result``), and the message itself retrieved using array accessor syntax (e.g., ``result[TEST]``). The total number of messages available can be determined using the len() operator. """ def __init__(self, test): """ Initialize a DTestResult object corresponding to the given ``test``. """ self._test = test self._state = None self._result = None self._error = False self._msgs = {} def __nonzero__(self): """ Allows a DTestResult object to be used in a boolean context; the object will test as True if the test passed, otherwise it will test as False. """ # The boolean value is True for pass, False for fail or not # run return self._result is True def __len__(self): """ Allows the len() built-in to be called on a DTestResult object. Returns the number of messages. """ # Return the number of messages return len(self._msgs) def __getitem__(self, key): """ Allows a message, as specified by ``key``, to be retrieved using array access notation (square brackets, "[" and "]"). Valid values for ``key`` are the constants PRE, TEST, and POST. """ # Return the message for the desired key return self._msgs[key] def __contains__(self, key): """ Allows the ``in`` operator to be used on a DTestResult object. Determines if the message specified by ``key`` is set on this result. Valid values for ``key`` are the constants PRE, TEST, and POST. """ # Does the key exist in the list of messages? return key in self._msgs def __str__(self): """ Allows the str() built-in to be called on a DTestResult object. Returns the string version of the test state. In the event the test has not been run, returns the empty string. """ # Return our state, which is an excellent summary of the # result return '' if self._state is None else self._state def __repr__(self): """ Allows the repr() built-in to be called on a DTestResult object. Augments the default representation to include the state and the messages present. """ # Generate a representation of the result return ('<%s.%s object at %#x state %s with messages %r>' % (self.__class__.__module__, self.__class__.__name__, id(self), self._state, self._msgs.keys())) def _transition(self, state=None, output=None): """ Performs a transition to the given ``state``. If ``state`` is None, the state will be determined from the status of the ``_result`` and ``_error`` attributes, set by __exit__(). Note that the test's ``_exp_fail`` attribute is also consulted to determine if the result was expected or not. 
""" # If state is None, determine the state to transition to based # on the result if state is None: if self._result: state = UOK if self._test._exp_fail else OK elif self._error: state = ERROR else: state = XFAIL if self._test._exp_fail else FAIL # Issue an appropriate notification if output is not None: output.notify(self._test, state) # Transition to the new state self._state = state def _set_result(self, ctx, exc_type, exc_value, tb): """ Determines the result or error status of the test. Only called if the context is PRE or TEST. """ # Are we expecting any exceptions? if ctx.excs: self._result = exc_type in ctx.excs self._error = (exc_type not in ctx.excs and exc_type != AssertionError) else: # Guess we're not... self._result = exc_type is None self._error = (exc_type is not None and exc_type != AssertionError) def _storemsg(self, ctx, captured, exc_type, exc_value, tb): """ Allocates and stores a DTestMessage instance which brings together captured output and exception values. """ self._msgs[ctx.ctx] = DTestMessage(ctx.ctx, captured, exc_type, exc_value, tb) def accumulate(self, nextctx, excs=None, id=None): """ Prepares a context manager for accumulating output for a portion of a test. The ``nextctx`` argument must be one of the constants PRE, TEST, or POST, indicating which phase of test execution is about to occur. If ``excs`` is not None, it should be a tuple of the exceptions to expect the execution to raise; the test passes if one of these exceptions is raised, or fails otherwise. """ # Return a context for handling the result return ResultContext(self, nextctx, excs) @property def test(self): """ Retrieve the test associated with this DTestResult object. """ # We want the test to be read-only, but to be accessed like an # attribute return self._test @property def state(self): """ Retrieve the current state of this DTestResult object. If the test has not been executed, returns None. """ # We want the state to be read-only, but to be accessed like # an attribute return self._state @property def msgs(self): """ Retrieve the list of messages associated with this DTestResult object. The tests will be in the order (PRE, TEST, POST); if a given message does not exist, it will be omitted from the list. """ # Retrieve the messages in order msglist = [] for mt in (PRE, TEST, POST): if mt in self._msgs: msglist.append(self._msgs[mt]) # Return the list of messages return msglist @property def multi(self): """ Returns True only if the result is a multi-result. """ return False class DTestMessage(object): """ DTestMessage ============ The DTestMessage class is a simple container class for messages generated by test execution. The following attributes are defined: :ctx: The context in which the message was generated. May be one of the constants PRE, TEST, or POST. :captured: A list of tuples containing captured output. For each tuple, the first element is a short name; the second element is a description, suitable for display to the user; and the third element is the captured output. All three elements are simple strings. :exc_type: If an unexpected exception (including AssertionError) is thrown while executing the test, this attribute will contain the type of the exception. If no exception is thrown, this attribute will be None. :exc_value: If an unexpected exception (including AssertionError) is thrown while executing the test, this attribute will contain the actual exception object. If no exception is thrown, this attribute will be None. 
:exc_tb: If an unexpected exception (including AssertionError) is thrown while executing the test, this attribute will contain the traceback object. If no exception is thrown, this attribute will be None. """ def __init__(self, ctx, captured, exc_type, exc_value, exc_tb): """ Initialize a DTestMessage object. See the class docstring for the meanings of the parameters. """ # Save all the message information self.ctx = ctx self.captured = captured self.exc_type = exc_type self.exc_value = exc_value self.exc_tb = exc_tb class MultiResultContext(ResultContext): """ MultiResultContext ================== The MultiResultContext class is an extension of the ResultContext context manager that adds the requisite support for message IDs. These are needed to differentiate the results of multiple test runs with DTestResultMulti. """ def __init__(self, result, ctx, excs, msgid): """ Initialize a MultiResultContext associated with the given ``result``. The additional argument ``msgid`` is an ID to be associated with the messages that are generated as a result of the run; it may be None only if ``ctx`` is TEST. """ # Initialize the superclass super(MultiResultContext, self).__init__(result, ctx, excs) # Save the message ID as well self.msgid = msgid class DTestResultMulti(DTestResult): """ DTestResultMulti ================ The DTestResultMulti class is an extension of the DTestResult class which additionally provides the ability to store the results from multiple tests. This is used, for example, when a defined test is a generator, to store the results from all generated functions. """ def __init__(self, test): """ Initialize a DTestResultMulti object corresponding to the given ``test``. """ super(DTestResultMulti, self).__init__(test) # Pre-allocate the KeyedSequence, so we don't run into race # conditions self._msgseq = KeyedSequence() # Also keep track of IDs we've seen so the generated message # IDs are stable self._idseen = set() # Also need to count successes, failures, and errors self._success_cnt = 0 self._failure_cnt = 0 self._error_cnt = 0 self._total_cnt = 0 def _set_result(self, ctx, exc_type, exc_value, tb): """ Extends the superclass method to support threshold-style final result computation. """ # If we're in PRE, defer to the superclass method if ctx.ctx == PRE: super(DTestResultMulti, self)._set_result(ctx, exc_type, exc_value, tb) return # Figure out if this is a success, failure, or an error result = None if ctx.excs: if exc_type in ctx.excs: result = '_success_cnt' else: if exc_type is None: result = '_success_cnt' if result is None: if exc_type != AssertionError: result = '_error_cnt' else: result = '_failure_cnt' # Keep track of the number of successes, failures, and errors setattr(self, result, getattr(self, result) + 1) self._total_cnt += 1 # Finally, compute the values of _result and _error based on # the threshold strategy of the test self._result, self._error = self._test._policy(self._total_cnt, self._success_cnt, self._failure_cnt, self._error_cnt) def _storemsg(self, ctx, captured, exc_type, exc_value, tb): """ Allocates and stores a DTestMessageMulti instance which brings together captured output and exception values. 
""" # We only specially TEST-context messages if ctx.ctx != TEST: super(DTestResultMulti, self)._storemsg(ctx, captured, exc_type, exc_value, tb) return # Make sure the message sequence goes in the collection of # messages if TEST not in self._msgs: self._msgs[TEST] = self._msgseq # Store a message self._msgseq[ctx.msgid] = DTestMessageMulti(ctx.ctx, ctx.msgid, captured, exc_type, exc_value, tb) def accumulate(self, nextctx, excs=None, id=None): """ Prepares the DTestResultMulti object for use as a context manager. The ``nextctx`` argument must be one of the constants PRE, TEST, or POST, indicating which phase of test execution is about to occur. If ``excs`` is not None, it should be a tuple of the exceptions to expect the execution to raise; the test passes if one of these exceptions is raised, or fails otherwise. The ``id`` parameter must be specified if ``nextctx`` is TEST; it identifies the test being executed. """ # Force id to an empty string, if it's not specified if not id: id = '' # Starting with 'id', compute a message ID msgid = id i = 0 while msgid in self._idseen: i += 1 msgid = "%s#%d" % (id, i) # Mark this message ID as in use self._idseen.add(msgid) # Return a context for handling the result return MultiResultContext(self, nextctx, excs, msgid) @property def multi(self): """ Returns True only if the result is a multi-result. """ return True class DTestMessageMulti(DTestMessage): """ DTestMessageMulti ================= The DTestMessageMulti class is an extension of DTestMessage that adds an :id: attribute to identify the origin of the message in a DTestResultMulti result. """ def __init__(self, ctx, id, captured, exc_type, exc_value, exc_tb): """ Initialize a DTestMessageMulti object. See the class docstring for this class and its superclass for the meanings of the parameters. """ # Call the superclass constructor super(DTestMessageMulti, self).__init__(ctx, captured, exc_type, exc_value, exc_tb) # Also save the id self.id = id class KeyedSequence(object): """ KeyedSequence ============= The KeyedSequence class is a helper class for DTestResultMulti. Messages (DTestMessageMulti objects) from tests are stored in an ordered sequence, but the sequence should also be accessible by a test name. This class implements an abbreviated sequence interface that enables the sequence to be indexed by an integer index (or array slice), or by a string key. New items may not be added to the sequence using the standard operations such as append(); they may only be added to the sequence by using assignment by string key. This class does support iteration. """ def __init__(self): """ Initialize a KeyedSequence object. """ # Keep an index of keys to list positions self._index = {} self._values = [] def __contains__(self, key): """ Determines if ``key`` exists within the sequence. The ``key`` may only be a string; there is no support for searching for items within the sequence. """ # Check if the key exists return key in self._index def __getitem__(self, key): """ Retrieve the item or items associated with ``key``. The ``key`` may be an integer or a string. Array slices are also supported. """ # If key is an integer or a slice, use the values list if isinstance(key, (int, long, slice)): return self._values[key] # Get the index from the key and return that return self._values[self._index[key]] def __setitem__(self, key, value): """ Set the item associated with ``key``. The ``key`` may be an integer or a string. Replacing multiple elements with an array slice is not permitted. 
""" # If key is a slice, fault if isinstance(key, slice): raise TypeError("cannot replace slice") # If key is an integer, use the values list if isinstance(key, (int, long)): self._values[key] = value return # Does the key already exist? if key not in self._index: # Adding a new value, so let's add the key to the index self._index[key] = len(self) self._values.append(value) return # OK, just need to replace existing value self._values[self._index[key]] = value def __len__(self): """ Return the number of items in the KeyedSequence. """ # Return the values return len(self._values) def __iter__(self): """ Return an iterator which will iterate over the items in the sequence. """ # Iterate over the values return iter(self._values) def count(self, *args, **kwargs): """ Count the number of instances of a particular item in the sequence. """ # Use the values list return self._values.count(*args, **kwargs) def index(self, *args, **kwargs): """ Determine the index of a particular item in the sequence. """ # Use the values list return self._values.index(*args, **kwargs) DTest-0.4.0/dtest/strategy.py0000644000175000017500000001375511607315463016414 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ========================== Parallelization Strategies ========================== This module contains all the classes necessary for identifying parallelization strategies. A parallelization strategy provides support for alternate modes of parallelizing multiple-result tests, i.e., tests on which @repeat() has been used or which are generators providing lists of other test functions to execute. This module contains SerialStrategy, UnlimitedParallelStrategy, and LimitedParallelStrategy. """ import dtest from eventlet import spawn_n from eventlet.event import Event from eventlet.semaphore import Semaphore class SerialStrategy(object): """ SerialStrategy ============== The SerialStrategy class is a parallelization strategy that causes spawned tests to be executed serially, one after another. """ def prepare(self): """ Prepares the SerialStrategy object to spawn a set of tests. Since SerialStrategy "spawns" tests by running them synchronously, this function is a no-op. """ pass def spawn(self, call, *args, **kwargs): """ Spawn a function. The callable ``call`` will be executed with the provided positional and keyword arguments. Since SerialStrategy "spawns" tests by running them synchronously, this function simply calls ``call`` directly. """ call(*args, **kwargs) def wait(self): """ Waits for spawned tests to complete. Since SerialStrategy "spawns" tests by running them synchronously, this function is a no-op. """ pass class UnlimitedParallelStrategy(object): """ UnlimitedParallelStrategy ========================= The UnlimitedParallelStrategy class is a parallelization strategy that causes spawned tests to be executed in parallel, with no limit on the maximum number of tests that can be executing at one time. 
""" def prepare(self): """ Prepares the UnlimitedParallelStrategy object to spawn a set of tests. Simply initializes a counter to zero and sets up an event to be signaled when all tests are done. """ # Initialize the counter and the event self.count = 0 self.lock = Semaphore() self.event = None # Save the output and test for the status stream self.output = dtest.status.output self.test = dtest.status.test def spawn(self, call, *args, **kwargs): """ Spawn a function. The callable ``call`` will be executed with the provided positional and keyword arguments. The ``call`` will be executed in a separate thread. """ # Spawn our internal function in a separate thread self.count += 1 spawn_n(self._spawn, call, args, kwargs) def _spawn(self, call, args, kwargs): """ Executes ``call`` in a separate thread of control. This helper method maintains the count and arranges for the event to be signaled when appropriate. """ # Initialize the status stream dtest.status.setup(self.output, self.test) # Call the call call(*args, **kwargs) # Decrement the count self.count -= 1 # Signal the event, if necessary with self.lock: if self.count == 0 and self.event is not None: self.event.send() def wait(self): """ Waits for spawned tests to complete. """ # Check for completion... with self.lock: if self.count == 0: # No tests still going, so just return return # OK, let's initialize the event... self.event = Event() # Now we wait on the event self.event.wait() # End by clearing the event self.event = None class LimitedParallelStrategy(UnlimitedParallelStrategy): """ LimitedParallelStrategy ======================= The LimitedParallelStrategy class is an extension of the UnlimitedParallelStrategy that additionally limits the maximum number of threads that may be executing at any given time. """ def __init__(self, limit): """ Initializes a LimitedParallelStrategy object. The ``limit`` parameter specifies the maximum number of threads that may execute at any given time. """ # Save the limit self.limit = limit def prepare(self): """ Prepares the LimitedParallelStrategy to spawn a set of tests. In addition to the tasks performed by UnlimitedParallelStrategy.prepare(), sets up a semaphore to limit the maximum number of threads that may execute at once. """ # Call our superclass prepare method super(LimitedParallelStrategy, self).prepare() # Also initialize a limiting semaphore self.limit_sem = Semaphore(self.limit) def _spawn(self, call, args, kwargs): """ Executes ``call`` in a separate thread of control. This helper method extends UnlimitedParallelStrategy._spawn() to acquire the limiting semaphore prior to executing the call. """ # Call our superclass _spawn method with the limit semaphore with self.limit_sem: super(LimitedParallelStrategy, self)._spawn(call, args, kwargs) DTest-0.4.0/dtest/util.py0000644000175000017500000006246611607315463015532 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" ============== Test Utilities ============== This module contains a number of utilities for use by tests. The use of these utilities is not mandatory--a simple ``assert`` statement will work fine--but the utilities provided may assist in evaluating complicated assertions. Most of these utilities are similar to the unittest.TestCase methods of similar names--for instance, the assert_false() utility is identical in action to unittest.TestCase.assertFalse(). """ import re from dtest.exceptions import DTestException __all__ = ['assert_false', 'assert_true', 'assert_raises', 'assert_equal', 'assert_not_equal', 'assert_almost_equal', 'assert_not_almost_equal', 'assert_sequence_equal', 'assert_list_equal', 'assert_tuple_equal', 'assert_set_equal', 'assert_in', 'assert_not_in', 'assert_is', 'assert_is_not', 'assert_dict_equal', 'assert_dict_contains', 'assert_items_equal', 'assert_less', 'assert_less_equal', 'assert_greater', 'assert_greater_equal', 'assert_is_none', 'assert_is_not_none', 'assert_is_instance', 'assert_is_not_instance', 'assert_regexp_matches', 'assert_not_regexp_matches'] def safe_repr(obj, maxlen=None): """ Helper function to safely determine the representation of ``obj``. This function can be used in the case that the user-provided __repr__() method raises an exception. The ``maxlen`` argument, if given, provides a maximum length for the representation. """ # Safely get the representation of an object try: result = repr(obj) except: # The repr() could call user code, so if it fails, we want to # be intelligent about what we return result = object.__repr__(obj) # Truncate representation if necessary if maxlen is not None and len(result) > maxlen: result = result[:maxlen - 3] + '...' return result def select_msg(usermsg, defmsg): """ Helper function to select a message. If ``usermsg`` is None, ``defmsg`` will be returned; otherwise, ``usermsg`` will be returned. This allows users to specify alternate messages to emit if an assertion fails. """ # Select the correct message to use if usermsg is None: return defmsg return usermsg def make_re(regexp): """ Helper function to build a regular expression object. If ``regexp`` is a string, it will be compiled into a regular expression object; otherwise, ``regexp`` will be returned unmodified. """ # If it's None or not an instance of string, return it if regexp is None or not isinstance(regexp, basestring): return regexp # Convert to a regular expression return re.compile(regexp) def assert_false(expr, msg=None): """ Assert that ``expr`` evaluate to False. """ # Ensure expr is False if expr: msg = select_msg(msg, "%s is not False" % safe_repr(expr)) raise AssertionError(msg) def assert_true(expr, msg=None): """ Assert that ``expr`` evaluate to True. """ # Ensure expr is True if not expr: msg = select_msg(msg, "%s is not True" % safe_repr(expr)) raise AssertionError(msg) class AssertRaisesContext(object): """ AssertRaisesContext =================== The AssertRaisesContext class is used by the assert_raises() function as a context manager. It ensures that the statement executed within the context raises the expected exception(s). """ def __init__(self, excs, msg, regexp=None): """ Initializes an AssertRaisesContext object. The ``excs`` argument should be a tuple of legal exceptions (None indicates that not raising an exception is also legal), and ``msg`` is the user-specified message. If ``regexp`` is not None, then the exception raised must match the regular expression. 
""" self.excs = excs self.msg = msg self.regexp = make_re(regexp) def __enter__(self): """ Enters the context manager. Returns the context itself. """ return self def __exit__(self, exc_type, exc_value, tb): """ Exits the context manager. Compares any exception raised to the expectations set when this context manager was initialized. If the expectations are not met, an AssertionError will be raised. """ # Ensure the appropriate exception was raised if exc_type is None: if None in self.excs: # Exception wasn't raised, but that's OK return True # OK, have to raise an assertion error msg = select_msg(self.msg, "No exception raised; expected one " "of: (%s)" % ', '.join([self._exc_name(exc) for exc in self.excs])) raise AssertionError(msg) # OK, an exception was raised; make sure it's one we were # expecting elif exc_type in self.excs or issubclass(exc_type, self.excs): # Do we need to check against a regexp? if (self.regexp is not None and not self.regexp.search(str(exc_value))): msg = select_msg(self.msg, 'Exception "%s" does not match ' 'expression "%s"' % (exc_value, self.regexp.pattern)) raise AssertionError(msg) # Assertion we were looking for, so say we handled it return True # Not an exception we were expecting, so let it bubble up return False @staticmethod def _exc_name(exc): """ Helper method to safely determine an exception's name. """ try: # If it has a name, return it return exc.__name__ except AttributeError: # OK, let's try to stringify it instead return str(exc) def assert_raises(excepts, *args, **kwargs): """ Assert that an exception specified by ``excepts`` is raised. If ``excepts`` is a tuple, any listed exception will be acceptable; if None is specified as an exception, not raising an exception will also be acceptable. With no other arguments, returns a context manager that can be used in a ``with`` statement. If one additional argument is specified, or the ``callableObj`` keyword argument is specified, the callable so specified will be called directly. A ``callableObj`` keyword argument overrides any callables specified as additional arguments. All remaining arguments and keyword arguments will be passed to the callable. There are two other special-purpose keyword arguments. The ``noRaiseMsg`` keyword argument may be used to specify an alternate message to use in the event that no exception is raised by the callable. The ``matchRegExp`` keyword argument may be used to specify that the exception must match the given regular expression, which must be either a string or a regular expression object supporting the search() method. If present, these keyword arguments will be removed from the set of keyword arguments before calling the callable. 
""" # Extract callableObj from arguments callableObj = None if 'callableObj' in kwargs: callableObj = kwargs['callableObj'] del kwargs['callableObj'] elif args: callableObj = args[0] args = args[1:] # Extract noRaiseMsg from keyword arguments noRaiseMsg = None if 'noRaiseMsg' in kwargs: noRaiseMsg = kwargs['noRaiseMsg'] del kwargs['noRaiseMsg'] # Extract matchRegExp from keyword arguments matchRegExp = None if 'matchRegExp' in kwargs: matchRegExp = kwargs['matchRegExp'] del kwargs['matchRegExp'] # First, check if excepts is a sequence try: length = len(excepts) excepts = tuple(excepts) except (TypeError, NotImplementedError): excepts = (excepts,) # Now, grab a context ctx = AssertRaisesContext(excepts, noRaiseMsg, matchRegExp) if callableObj is None: # No callable, so just return the context return ctx # Execute the callable with ctx: callableObj(*args, **kwargs) def assert_equal(first, second, msg=None): """ Assert that ``first`` is equal to ``second``. """ # Ensure first == second if not first == second: msg = select_msg(msg, "%s != %s" % (safe_repr(first), safe_repr(second))) raise AssertionError(msg) def assert_not_equal(first, second, msg=None): """ Assert that ``first`` is not equal to ``second``. """ # Ensure first != second if not first != second: msg = select_msg(msg, "%s == %s" % (safe_repr(first), safe_repr(second))) raise AssertionError(msg) def assert_almost_equal(first, second, msg=None, places=None, delta=None): """ Assert that ``first`` and ``second`` are almost equal. Comparison can be done either to within a given number of decimal places (7, by default), or to within a given delta. The delta is given using the ``delta`` keyword argument, or the number of places can be specified using ``places``. Note that ``places`` and ``delta`` cannot both be specified. """ # Sanity-check arguments if places is not None and delta is not None: raise DTestException("Specify delta or places, not both") # Ensure first and second are similar if first == second: # Short-circuit for the simple case return # Is this comparison a delta style? if delta is not None: if abs(first - second) <= delta: return stdmsg = "%s != %s within %s delta" % (safe_repr(first), safe_repr(second), safe_repr(delta)) else: # OK, do a places-based comparison; default places to 7 if places is None: places = 7 if round(abs(first - second), places) == 0: return stdmsg = "%s != %s within %s places" % (safe_repr(first), safe_repr(second), safe_repr(places)) # OK, they're not equal, so tell the caller raise AssertionError(select_msg(msg, stdmsg)) def assert_not_almost_equal(first, second, msg=None, places=None, delta=None): """ Assert that ``first`` and ``second`` are not almost equal. Comparison can be done either to within a given number of decimal places (7, by default), or to within a given delta. The delta is given using the ``delta`` keyword argument, or the number of places can be specified using ``places``. Note that ``places`` and ``delta`` cannot both be specified. """ # Sanity-check arguments if places is not None and delta is not None: raise DTestException("Specify delta or places, not both") # Is this comparison a delta style? 
if delta is not None: if not (first == second) and abs(first - second) > delta: return stdmsg = "%s == %s within %s delta" % (safe_repr(first), safe_repr(second), safe_repr(delta)) else: # OK, do a places-based comparison; default places to 7 if places is None: places = 7 if not (first == second) and round(abs(first - second), places) != 0: return stdmsg = "%s == %s within %s places" % (safe_repr(first), safe_repr(second), safe_repr(places)) # OK, they're not equal, so tell the caller raise AssertionError(select_msg(msg, stdmsg)) def assert_sequence_equal(seq1, seq2, msg=None, seq_type=None): """ Assert that ``seq1`` and ``seq2`` have the same contents. If ``seq_type`` is specified, both sequences must be of that type. """ # Enforce sequence typing if seq_type is not None: st_name = seq_type.__name__ if not isinstance(seq1, seq_type): raise AssertionError("First sequence is not a %s: %s" % (st_name, safe_repr(seq1))) if not isinstance(seq2, seq_type): raise AssertionError("Second sequence is not a %s: %s" % (st_name, safe_repr(seq2))) else: st_name = "sequence" # Grab the lengths of the sequences differing = None try: len1 = len(seq1) except (TypeError, NotImplementedError): differing = "First %s has no length. Non-sequence?" % st_name if differing is None: try: len2 = len(seq2) except (TypeError, NotImplementedError): differing = "Second %s has no length. Non-sequence?" % st_name # Now let's compare the sequences if differing is None: if seq1 == seq2: return # They differ somehow... seq1_repr = safe_repr(seq1) seq2_repr = safe_repr(seq2) if len(seq1_repr) > 30: seq1_repr = seq1_repr[:27] + "..." if len(seq2_repr) > 30: seq2_repr = seq2_repr[:27] + "..." differing = "%ss differ: %s != %s" % (st_name.capitalize(), seq1_repr, seq2_repr) # Compare sequences element by element for i in xrange(min(len1, len2)): try: item1 = seq1[i] except (TypeError, IndexError, NotImplementedError): differing += ("\nUnable to index element %d of first %s" % (i, st_name)) break try: item2 = seq2[i] except (TypeError, IndexError, NotImplementedError): differing += ("\nUnable to index element %d of second %s" % (i, st_name)) break if item1 != item2: differing += ("\nFirst differing element %d: %s != %s" % (i, safe_repr(item1), safe_repr(item2))) break else: # The items tally up, but... if len1 == len2 and seq_type is None and type(seq1) != type(seq2): # Just differ in type; who cares? return # Emit the extra elements if len1 > len2: differing += ("\nFirst %s contains %d additional elements" % (st_name, len1 - len2)) try: differing += ("\nFirst extra element %d: %s" % (len2, safe_repr(seq1[len2]))) except (TypeError, IndexError, NotImplementedError): differing += ("\nUnable to index element %d of first %s" % (len2, st_name)) elif len1 < len2: differing += ("\nSecond %s contains %d additional elements" % (st_name, len2 - len1)) try: differing += ("\nFirst extra element %d: %s" % (len1, safe_repr(seq2[len1]))) except (TypeError, IndexError, NotImplementedError): differing += ("\nUnable to index element %d of second %s" % (len1, st_name)) # Not going to bother with the whole diff stuff unittest does raise AssertionError(select_msg(msg, differing)) def assert_list_equal(list1, list2, msg=None): """ Assert that lists ``list1`` and ``list2`` have the same contents. """ assert_sequence_equal(list1, list2, msg=msg, seq_type=list) def assert_tuple_equal(tuple1, tuple2, msg=None): """ Assert that tuples ``tuple1`` and ``tuple2`` have the same contents. 
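For example (an informal sketch)::

    assert_tuple_equal((1, 2, 3), (1, 2, 3))    # passes
    assert_tuple_equal((1, 2), [1, 2])          # fails: second is not a tuple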
""" assert_sequence_equal(tuple1, tuple2, msg=msg, seq_type=tuple) def assert_set_equal(set1, set2, msg=None): """ Assert that sets ``set1`` and ``set2`` have the same contents. """ # Obtain the two set differences try: diff1 = set1.difference(set2) except TypeError as e: raise AssertionError("Invalid type when attempting set " "difference: %s" % e) except AttributeError as e: raise AssertionError("First set does not support set " "difference: %s" % e) try: diff2 = set2.difference(set1) except TypeError as e: raise AssertionError("Invalid type when attempting set " "difference: %s" % e) except AttributeError as e: raise AssertionError("Second set does not support set " "difference: %s" % e) # If both differences are empty, then we're fine if not (diff1 or diff2): return # Accumulate items in one but not the other stdmsg = '' if diff1: stdmsg += ("Items in the first set but not the second: %s" % ', '.join([safe_repr(item) for item in diff1])) if diff2: if stdmsg: stdmsg += "\n" stdmsg += ("Items in the second set but not the first: %s" % ', '.join([safe_repr(item) for item in diff2])) # Tell the caller raise AssertionError(select_msg(msg, stdmsg)) def assert_in(member, container, msg=None): """ Assert that ``member`` is in ``container``. """ # Ensure member is in container if member not in container: msg = select_msg(msg, "%s not found in %s" % (safe_repr(member), safe_repr(container))) raise AssertionError(msg) def assert_not_in(member, container, msg=None): """ Assert that ``member`` is not in ``container``. """ # Ensure member is not in container if member in container: msg = select_msg(msg, "%s unexpectedly found in %s" % (safe_repr(member), safe_repr(container))) raise AssertionError(msg) def assert_is(expr1, expr2, msg=None): """ Assert that ``expr1`` is ``expr2``. """ # Ensure expr1 is expr2 if expr1 is not expr2: msg = select_msg(msg, "%s is not %s" % (safe_repr(expr1), safe_repr(expr2))) raise AssertionError(msg) def assert_is_not(expr1, expr2, msg=None): """ Assert that ``expr1`` is not ``expr2``. """ # Ensure expr1 is not expr2 if expr1 is expr2: msg = select_msg(msg, "%s is unexpectedly %s" % (safe_repr(expr1), safe_repr(expr2))) raise AssertionError(msg) def assert_dict_equal(d1, d2, msg=None): """ Assert that dictionaries ``d1`` and ``d2`` have the same contents. """ # Make sure both are dict instances if not isinstance(d1, dict): raise AssertionError("First argument is not a dictionary") if not isinstance(d2, dict): raise AssertionError("Second argument is not a dictionary") # Ensure they're equal if d1 != d2: stdmsg = "%s != %s" % (safe_repr(d1, 30), safe_repr(d2, 30)) raise AssertionError(select_msg(msg, stdmsg)) def assert_dict_contains(actual, expected, msg=None): """ Assert that the dictionary ``actual`` contains the elements in ``expected``; extra elements are ignored. """ # Determine missing or mismatched keys missing = [] mismatched = [] for k, v in expected.items(): if k not in actual: missing.append(k) elif v != actual[k]: mismatched.append("Key %s: expected %s, actual %s" % (safe_repr(k), safe_repr(v), safe_repr(actual[k]))) # Are there any problems? 
if not (missing or mismatched): return # Build up the standard message stdmsg = '' if missing: stdmsg += "Missing keys: %s" % ', '.join([safe_repr(k) for k in missing]) if mismatched: if stdmsg: stdmsg += '; ' stdmsg += "Mismatched values: %s" % '; '.join(mismatched) raise AssertionError(select_msg(msg, stdmsg)) def assert_items_equal(actual, expected, msg=None): """ Assert that ``actual`` and ``expected`` contain the same items. Note that this function is implemented using an n^2 algorithm and so should likely not be used if ``actual`` or ``expected`` contain large numbers of items. """ # Order n^2 algorithm for comparing items in the lists missing = [] while expected: item = expected.pop() try: # Take it out of what we actually got actual.remove(item) except ValueError: # It wasn't there! missing.append(item) # Now, missing contains those items in expected which were not in # actual, and actual contains those items which were not in # expected; if missing and actual are empty, we're fine if not missing and not actual: return # Build the error message stdmsg = '' if missing: stdmsg += ("Missing items: %s" % ', '.join([safe_repr(i) for i in missing])) if actual: if stdmsg: stdmsg += '; ' stdmsg += ("Unexpected items: %s" % ', '.join([safe_repr(i) for i in actual])) raise AssertionError(select_msg(msg, stdmsg)) def assert_less(a, b, msg=None): """ Assert that ``a`` is less than ``b``. """ # Ensure a < b if not a < b: msg = select_msg(msg, "%s not less than %s" % (safe_repr(a), safe_repr(b))) raise AssertionError(msg) def assert_less_equal(a, b, msg=None): """ Assert that ``a`` is less than or equal to ``b``. """ # Ensure a <= b if not a <= b: msg = select_msg(msg, "%s not less than or equal to %s" % (safe_repr(a), safe_repr(b))) raise AssertionError(msg) def assert_greater(a, b, msg=None): """ Assert that ``a`` is greater than ``b``. """ # Ensure a > b if not a > b: msg = select_msg(msg, "%s not greater than %s" % (safe_repr(a), safe_repr(b))) raise AssertionError(msg) def assert_greater_equal(a, b, msg=None): """ Assert that ``a`` is greater than or equal to ``b``. """ # Ensure a >= b if not a >= b: msg = select_msg(msg, "%s not greater than or equal to %s" % (safe_repr(a), safe_repr(b))) raise AssertionError(msg) def assert_is_none(obj, msg=None): """ Assert that ``obj`` is None. """ # Ensure obj is None if obj is not None: msg = select_msg(msg, "%s is not None" % safe_repr(obj)) raise AssertionError(msg) def assert_is_not_none(obj, msg=None): """ Assert that ``obj`` is not None. """ # Ensure obj is not None if obj is None: msg = select_msg(msg, "%s is None" % safe_repr(obj)) raise AssertionError(msg) def assert_is_instance(obj, cls, msg=None): """ Assert that ``obj`` is an instance of class ``cls``. """ # Ensure obj is an instance of cls if not isinstance(obj, cls): msg = select_msg(msg, "%s is not an instance of %r" % (safe_repr(obj), cls)) raise AssertionError(msg) def assert_is_not_instance(obj, cls, msg=None): """ Assert that ``obj`` is not an instance of class ``cls``. """ # Ensure obj is not an instance of cls if isinstance(obj, cls): msg = select_msg(msg, "%s is an instance of %r" % (safe_repr(obj), cls)) raise AssertionError(msg) def assert_regexp_matches(text, regexp, msg=None): """ Assert that ``text`` matches the regular expression ``regexp``. The regular expression may be either a string or a regular expression object supporting the search() method. """ # Get the regular expression regexp = make_re(regexp) # Does it match? 
if not regexp.search(text): msg = select_msg(msg, "'%s' does not match text %s" % (regexp.pattern, safe_repr(text))) raise AssertionError(msg) def assert_not_regexp_matches(text, regexp, msg=None): """ Assert that ``text`` does not match the regular expression ``regexp``. The regular expression may be either a string or a regular expression object supporting the search() method. """ # Get the regular expression regexp = make_re(regexp) # Does it match? match = regexp.search(text) if match: msg = select_msg(msg, "'%s' matches text %r from %s" % (regexp.pattern, text[match.start():match.end()], safe_repr(text))) raise AssertionError(msg) DTest-0.4.0/dtest/core.py0000755000175000017500000011445211607315463015501 0ustar sorensoren00000000000000#!/usr/bin/python # # Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ============ Test Running ============ This module contains the run_test() function and the associated Queue class, which together provide the functionality for executing tests in a threaded manner while properly handling ordering implied by dependencies. Output is specified by passing an instance of DTestOutput to run(). If this file is executed directly, the main() function--which first calls explore(), then returns the result of run()--is called, and its return value will be passed to sys.exit(). Command line arguments are also available, and the module can be executed by passing "-m dtest.core" to the Python interpreter. """ import imp from optparse import OptionParser import os import os.path import sys import traceback from eventlet import spawn_n, monkey_patch from eventlet.corolocal import local from eventlet.event import Event from eventlet.semaphore import Semaphore from dtest import capture from dtest.constants import * from dtest.exceptions import DTestException from dtest import test # Default line width DEF_LINEWIDTH = 78 # Current output for issuing status messages _output = local() class _DTestStatus(object): """ _DTestStatus ============ The _DTestStatus class is a stream look-alike class, an instance of which implements the special ``dtest.status`` stream. Data written to the stream will be passed to the status() method of the current DTestOutput object. Thread-local data is used to store the current DTestOutput object, so multiple output objects may be safely used simultaneously. """ def write(self, msg): """ Emits ``msg`` as a status message to the current DTestOutput object. This can be used to notify the user of the status of a test which takes a long time to complete. """ # Write to the registered output _output.out.status(_output.test, msg) def flush(self): """ Provided for compatibility with normal output streams. Does nothing; the DTestOutput object's status() method is assumed to perform a flush after every call. """ pass @property def output(self): """ Retrieve the current DTestOutput object, which is stored in a per-thread manner. 
This property is provided to allow the status stream to be set up in threads started within the individual tests. See the setup() method. """ # This is simple... return _output.out @property def test(self): """ Retrieve the current test object, which is stored in a per-thread manner. This property is provided to allow the status stream to be set up in threads started within the individual tests. See the setup() method. """ # Also simple... return _output.test def setup(self, output, test): """ Initializes the status stream within a new thread of control. This routine should be called as the first action of a new thread. """ # Set up thread-local data _output.out = output _output.test = test # A stream for export status = _DTestStatus() class DTestOutput(object): """ DTestOutput =========== The DTestOutput class is a utility class for grouping together all output generation for the test framework. The ``output`` attribute contains a stream-like object to which output may be sent, and defaults to sys.__stdout__ (note that sys.stdout may be captured as the output of a test). The notify() method is called whenever a test or test fixture transitions to an alternate state; the result() method is called to output the results of a test; and the summary() method is called to output a summary of the results of a test. The default implementations of these methods send their output to the stream in the ``output`` attribute, but each may be overridden to perform alternate output. This could, for instance, be used to display test framework output in a GUI or to generate a web page. """ def __init__(self, output=sys.__stdout__, linewidth=DEF_LINEWIDTH): """ Initialize a DTestOutput object with the given ``output`` stream (defaults to sys.__stdout__) and linewidth. """ # Save the output and linewidth self.output = output self.linewidth = linewidth def notify(self, test, state): """ Called when a test or test fixture, identified by ``test``, transitions to ``state``. The default implementation ignores state transitions by test fixtures or transitions to the RUNNING state. """ # Are we interested in this test? if not test.istest() or state == RUNNING: return # Determine the name of the test name = str(test) # Determine the width of the test name field width = self.linewidth - len(state) - 1 # Truncate the name, if necessary if len(name) > width: name = name[:width - 3] + '...' # Emit the status message print >>self.output, "%-*s %s" % (width, name, state) # Flush the output self.output.flush() def result(self, result, debug=False): """ Called at the end of a test run to emit ``result`` information for a given test. Called once for each result. Should emit all exception and captured output information, if any. Will also be called for results from test fixtures, in order to emit errors encountered while executing them. The default implementation ignores results containing no messages, and only emits results from successful tests if debug is True. 
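As one example, an output class might route failing results to the
``logging`` module before emitting the normal report (a sketch only;
the logging configuration is assumed)::

    import logging

    class LoggingOutput(DTestOutput):
        def result(self, result, debug=False):
            # Log any test that did not pass...
            if not result:
                logging.error("%s finished in state %s",
                              result.test, result.state)
            # ...then fall back on the default report
            super(LoggingOutput, self).result(result, debug)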
""" # Helper for reporting output def out_msg(msg, hdr=None): # Output header information if hdr: print >>self.output, (hdr.center(self.linewidth) + "\n" + ('-' * self.linewidth)) # Output the test ID if hasattr(msg, 'id'): id_hdr = " (%s) " % msg.id print >>self.output, id_hdr.center(self.linewidth, ':') # Output exception information if msg.exc_type is not None: exc_hdr = ' Exception %s ' % msg.exc_type.__name__ tb = ''.join(traceback.format_exception(msg.exc_type, msg.exc_value, msg.exc_tb)) print >>self.output, exc_hdr.center(self.linewidth, '-') print >>self.output, tb.rstrip() # Format output data for name, desc, value in msg.captured: print >>self.output, (' %s ' % desc).center(self.linewidth, '-') print >>self.output, value.rstrip() # Emit a closing line print >>self.output, '-' * self.linewidth # Skip results with no messages if len(result) == 0: return # If it's successful or an expected failure, only emit # messages if debug is True if not debug and (result.state == OK or result.state == XFAIL): return # Emit a banner for the result print >>self.output, ("\n" + ("=" * self.linewidth) + "\n" + str(result.test).center(self.linewidth) + "\n" + ("=" * self.linewidth)) # Emit the data for each step if PRE in result: out_msg(result[PRE], 'Pre-test Fixture') if TEST in result: if result.multi: for m in result[TEST]: out_msg(m) else: out_msg(result[TEST]) if POST in result: out_msg(result[POST], 'Post-test Fixture') # Flush the output self.output.flush() def summary(self, counts): """ Called at the end of a test run to emit summary information about the run. The ``counts`` argument is a dictionary containing the following keys: OK The number of tests which passed. This includes the count of unexpected passes (tests marked with the @failing decorator which passed). UOK The number of tests which unexpectedly passed. SKIPPED The number of tests which were skipped in this test run. FAIL The number of tests which failed. This includes the count of expected failures (tests marked with the @failing decorator which failed). XFAIL The number of tests which failed, where failure was expected. ERROR The number of tests which experienced an error--an unexpected exception thrown while executing the test. DEPFAIL The number of tests which could not be executed because tests they were dependent on failed. 'total' The total number of tests considered for execution. 'threads' The maximum number of simultaneously executing threads which were utilized while running tests. Note that test fixtures are not included in these counts. If a test fixture fails (raises an AssertionError) or raises any other exception, all tests dependent on that test fixture will fail due to dependencies. 
""" # Emit summary data print >>self.output, ("%d tests run in %d max simultaneous threads" % (counts['total'], counts['threads'])) if counts[OK] > 0: unexp = '' if counts[UOK] > 0: unexp = ' (%d unexpected)' % counts[UOK] print >>self.output, (" %d tests successful%s" % (counts[OK], unexp)) if counts[SKIPPED] > 0: print >>self.output, " %d tests skipped" % counts[SKIPPED] if counts[FAIL] + counts[ERROR] + counts[DEPFAIL] > 0: # Set up the breakdown bd = [] total = 0 if counts[FAIL] > 0: exp = '' if counts[XFAIL] > 0: exp = ' [%d expected]' % counts[XFAIL] bd.append('%d failed%s' % (counts[FAIL], exp)) total += counts[FAIL] if counts[ERROR] > 0: bd.append('%d errors' % counts[ERROR]) total += counts[ERROR] if counts[DEPFAIL] > 0: bd.append('%d failed due to dependencies' % counts[DEPFAIL]) total += counts[DEPFAIL] print >>self.output, (" %d tests failed (%s)" % (total, ', '.join(bd))) # Flush the output self.output.flush() def caught(self, exc_list): """ Called after emitting summary data to report any exceptions encountered within the dtest framework itself while running the test. The ``exc_list`` argument is a list of three-element tuples. For each tuple, the first element is an exception type; the second element is the exception value; and the third element is a traceback object. Under most circumstances, this function will not be called; if it is, the exception data reported should be sent back to the dtest framework developers. """ # Emit exception data print >>self.output, "\nThe following exceptions were encountered:" for exc_type, exc_value, tb in exc_list: exc_hdr = ' Exception %s ' % exc_type.__name__ tb = ''.join(traceback.format_exception(exc_type, exc_value, tb)) print >>self.output, exc_hdr.center(self.linewidth, '-') print >>self.output, tb.rstrip() print >>self.output, '-' * self.linewidth print >>self.output, ("Please report the above errors to the " "developers of the dtest framework.") # Flush the output self.output.flush() def imports(self, exc_list): """ Called by main() if import errors were encountered while discovering tests. The ``exc_list`` argument is a list of tuples containing three elements: the first element is the full path to the file for which import was attempted; the second element is the module path for which import was attempted; and the third is a three-element tuple returned by sys.exc_info(). """ # Emit import error data print >>self.output, "The following import errors were encountered:" for path, pkgname, (exc_type, exc_value, tb) in exc_list: exc_hdr = ' %s (%s) ' % (os.path.relpath(path), pkgname) tb = ''.join(traceback.format_exception(exc_type, exc_value, tb)) print >>self.output, exc_hdr.center(self.linewidth, '-') print >>self.output, tb.rstrip() print >>self.output, ('-' * self.linewidth) + "\n" # Flush the output self.output.flush() def info(self, message): """ Called to emit other specialized messages not specifically categorized. Currently only used in the case of dependency cycle detection. The ``message`` argument will be an explanatory message. """ # Emit the message print >>self.output, '\n' + message # Flush the output self.output.flush() def status(self, dt, message): """ Called to emit status messages printed to the dtest.result stream. The ``dt`` argument will be the test descriptor, and the ``message`` argument will be a message string or, if using the ``print`` statement, bare whitespace. 
The default implementation ignores messages consisting of whitespace and writes non-whitespace messages, prefixed with the short name of ``dt``, to the ``self.output`` stream; the output stream will be flushed to ensure the message is emitted. """ # Ignore messages composed only of whitespace... if message.isspace(): return # Get the short name of the test... shname = str(dt) if '.' in shname: dummy, shname = shname.rsplit('.', 1) # Emit the message print >>self.output, "%s: %s" % (shname, message) # Flush the output self.output.flush() class DTestQueue(object): """ DTestQueue ========== The DTestQueue class maintains a queue of tests waiting to be run. The constructor initializes the queue to an empty state and stores a maximum simultaneous thread count ``maxth`` (None means unlimited); a ``skip`` evaluation routine (defaults to testing the ``skip`` attribute of the test); and an instance of DTestOutput. The list of all tests in the queue is maintained in the ``tests`` attribute; tests may be added to a queue with add_test() (for a single test) or add_tests() (for a sequence of tests). The tests in the queue may be run by invoking the run() method. """ def __init__(self, maxth=None, skip=lambda dt: dt.skip, output=DTestOutput()): """ Initialize a DTestQueue. The ``maxth`` argument must be either None or an integer specifying the maximum number of simultaneous threads permitted. The ``skip`` arguments is function references; it should take a test and return True if the test should be skipped. The ``output`` argument should be an instance of DTestOutput containing a notify() method, which takes a test and the state to which it is transitioning, and may use that information to emit a test result. Note that the notify() method will receive state transitions to the RUNNING state, as well as state transitions for test fixtures; callers may find the DTestBase.istest() method useful for differentiating between regular tests and test fixtures for reporting purposes. """ # Save our maximum thread count if maxth is None: self.sem = None else: self.sem = Semaphore(maxth) # Need to remember the skip routine self.skip = skip # Also remember the output self.output = output # Initialize the lists of tests self.tests = set() self.waiting = None self.runlist = set() # Need locks for the waiting and runlist lists self.waitlock = Semaphore() self.runlock = Semaphore() # Set up some statistics... self.th_count = 0 self.th_event = Event() self.th_simul = 0 self.th_max = 0 # Place to keep any exceptions we encounter within dtest # itself self.caught = [] # We're not yet running self.running = False def add_test(self, tst): """ Add a test ``tst`` to the queue. Tests can be added multiple times, but the test will only be run once. """ # Can't add a test if the queue is running if self.running: raise DTestException("Cannot add tests to a running queue.") # First we need to get the test object dt = test._gettest(tst) # Add it to the set of tests self.tests.add(dt) def add_tests(self, tests): """ Add a sequence of tests ``tests`` to the queue. Tests can be added multiple times, but the test will only be run once. """ # Can't add a test if the queue is running if self.running: raise DTestException("Cannot add tests to a running queue.") # Run add_test() in a loop for tst in tests: self.add_test(tst) def dot(self, grname='testdeps'): """ Constructs a GraphViz-compatible dependency graph with the given name (``testdeps``, by default). Returns the graph as a string. 
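For instance, the graph might be written out for rendering as follows
(a sketch only; ``queue`` is assumed to be a DTestQueue whose tests have
already been run)::

    with open('testdeps.dot', 'w') as f:
        f.write(queue.dot())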
The graph can be fed to the ``dot`` tool to generate a visualization of the dependency graph. Note that red nodes in the graph indicate test fixtures, and red dashed edges indicate dependencies associated with test fixtures. If the node outline is dotted, that indicates that the test was skipped in the most recent test run. """ # Helper to generate node and edge options def mkopts(opts): # If there are no options, return an empty string if not opts: return '' # OK, let's do this... return ' [' + ','.join(['%s="%s"' % (k, opts[k]) for k in opts]) + ']' # Now, create the graph nodes = [] edges = [] for dt in sorted(self.tests, key=lambda dt: str(dt)): # Get the real test function tfunc = dt.test # Make the node opts = dict(label=r'%s\n%s:%d' % (dt, tfunc.func_code.co_filename, tfunc.func_code.co_firstlineno)) if dt.state: opts['label'] += r'\n(Result: %s)' % dt.state if (dt.state == FAIL or dt.state == XFAIL or dt.state == ERROR or dt.state == DEPFAIL): opts['color'] = 'red' elif isinstance(dt, test.DTestFixture): opts['color'] = 'blue' if dt.state == SKIPPED: opts['style'] = 'dotted' elif dt.state == DEPFAIL: opts['style'] = 'dashed' nodes.append('"%s"%s;' % (dt, mkopts(opts))) # Make all the edges for dep in sorted(dt.dependencies, key=lambda dt: str(dt)): opts = {} if (isinstance(dt, test.DTestFixture) or isinstance(dep, test.DTestFixture)): opts.update(dict(color='blue', style='dashed')) if dt._partner is not None and dep == dt._partner: opts['style'] = 'dotted' edges.append('"%s" -> "%s"%s;' % (dt, dep, mkopts(opts))) # Return a graph return (('strict digraph "%s" {\n\t' % grname) + '\n\t'.join(nodes) + '\n\n\t' + '\n\t'.join(edges) + '\n}') def run(self, debug=False): """ Runs all tests that have been queued up. Does not return until all tests have been run. Causes test results and summary data to be emitted using the ``output`` object registered when the queue was initialized. """ # Can't run an already running queue if self.running: raise DTestException("Queue is already running.") # OK, put ourselves into the running state self.running = True # Must begin by ensuring we're monkey-patched monkey_patch() # OK, let's prepare all the tests... for dt in self.tests: dt._prepare() # Second pass--determine which tests are being skipped waiting = [] for dt in self.tests: # Do we skip this one? willskip = self.skip(dt) # If not, check if it's a fixture with no dependencies... if not willskip and not dt.istest(): if dt._partner is None: if len(dt._revdeps) == 0: willskip = True else: if len(dt._revdeps) == 1: willskip = True # OK, mark it skipped if we're skipping if willskip: dt._skipped(self.output) else: waiting.append(dt) # OK, last pass: generate list of waiting tests; have to # filter out SKIPPED tests self.waiting = set([dt for dt in self.tests if dt.state != SKIPPED]) # Install the capture proxies... 
if not debug: capture.install() # Spawn waiting tests self._spawn(self.waiting) # Wait for all tests to finish if self.th_count > 0: self.th_event.wait() # OK, uninstall the capture proxies if not debug: capture.uninstall() # Walk through the tests and output the results cnt = { OK: 0, UOK: 0, SKIPPED: 0, FAIL: 0, XFAIL: 0, ERROR: 0, DEPFAIL: 0, 'total': 0, 'threads': self.th_max, } for t in self.tests: # Get the result object r = t.result # Update the counts cnt[r.state] += int(r.test) cnt['total'] += int(r.test) # Special case update for unexpected OKs and expected failures if r.state == UOK: cnt[OK] += int(r.test) elif r.state == XFAIL: cnt[FAIL] += int(r.test) try: # Emit the result messages self.output.result(r, debug) except TypeError: # Maybe the output object is written to the older # standard? self.output.result(r) # Emit summary data self.output.summary(cnt) # If we saw exceptions, emit data about them if self.caught: self.output.caught(self.caught) # We're done running; re-running should be legal self.running = False # Return False if there were any unexpected OKs, unexpected # failures, errors, or dependency failures if (cnt[UOK] > 0 or (cnt[FAIL] - cnt[XFAIL]) > 0 or cnt[ERROR] > 0 or cnt[DEPFAIL] > 0): return False # All tests passed! return True def _spawn(self, tests): """ Selects all ready tests from the set or list specified in ``tests`` and spawns threads to execute them. Note that the maximum thread count restriction is implemented by having the thread wait on the ``sem`` Semaphore after being spawned. """ # Work with a copy of the tests tests = list(tests) # Loop through the list while tests: # Pop off a test to consider dt = tests.pop(0) with self.waitlock: # Is test waiting? if dt not in self.waiting: continue # OK, check dependencies elif dt._depcheck(self.output): # No longer waiting self.waiting.remove(dt) # Place test on the run list with self.runlock: self.runlist.add(dt) # Spawn the test self.th_count += 1 spawn_n(self._run_test, dt) # Dependencies failed; check if state changed and add # its dependents if so elif dt.state is not None: # No longer waiting self.waiting.remove(dt) # Check all its dependents. Note--not trying to # remove duplicates, because some formerly # unrunnable tests may now be runnable because of # the state change tests.extend(list(dt.dependents)) def _run_test(self, dt): """ Execute ``dt``. This method is meant to be run in a new thread. Once a test is complete, the thread's dependents will be passed back to the spawn() method, in order to pick up and execute any tests that are now ready for execution. 
""" # Acquire the thread semaphore if self.sem is not None: self.sem.acquire() # Increment the simultaneous thread count self.th_simul += 1 if self.th_simul > self.th_max: self.th_max = self.th_simul # Save the output and test relative to this thread, for the # status stream status.setup(self.output, dt) # Execute the test try: dt._run(self.output) except: # Add the exception to the caught list self.caught.append(sys.exc_info()) # Manually transition the test to the ERROR state dt._result._transition(ERROR, output=self.output) # OK, done running the test; take it off the run list with self.runlock: self.runlist.remove(dt) # Now, walk through its dependents and check readiness self._spawn(dt.dependents) # All right, we're done; release the semaphore if self.sem is not None: self.sem.release() # Decrement the thread count self.th_simul -= 1 self.th_count -= 1 # If thread count is now 0, signal the event with self.waitlock: if len(self.waiting) == 0 and self.th_count == 0: self.th_event.send() return # If the run list is empty, that means we have a cycle with self.runlock: if len(self.runlist) == 0: for dt2 in list(self.waiting): # Manually transition to DEPFAIL dt2._result._transition(DEPFAIL, output=self.output) # Emit an error message to let the user know what # happened self.output.info("A dependency cycle was discovered. " "Please examine the dependency graph " "and correct the cycle. The --dot " "option may be useful here.") # Now, let's signal our event self.th_event.send() def explore(directory=None, queue=None): """ Explore ``directory`` (by default, the current working directory) for all modules matching the test regular expression and import them. Each module imported will be further explored for tests. This function may be used to discover all registered tests prior to running them. Returns a tuple; the first element is a set of all discovered tests, and the second element is a list of tuples containing information about all ImportError exceptions caught. The elements of this exception information tuple are, in order, a path, the module name, and a tuple of exception information as returned by sys.exc_info(). """ # If no queue is provided, allocate one with the default settings if queue is None: queue = DTestQueue() # Set of all discovered tests tests = set() # List of all import exceptions caught = [] # Need the allowable suffixes suffixes = [sfx[0] for sfx in imp.get_suffixes()] # Obtain the canonical directory name if directory is None: directory = os.getcwd() else: directory = os.path.abspath(directory) # This is the directory we'll be searching searchdir = directory # But does it have an __init__.py? pkgpath = None for sfx in suffixes: if os.path.exists(os.path.join(directory, '__init__' + sfx)): # Refigure the directory directory, pkgpath = os.path.split(directory) # Now, let's jigger the import path tmppath = sys.path sys.path = [directory] + sys.path # Import the package, if necessary if pkgpath is not None: try: __import__(pkgpath) test.visit_mod(sys.modules[pkgpath], tests) except ImportError: # Remember the exception we got caught.append((searchdir, pkgpath, sys.exc_info())) # Having done that, we now begin walking the directory tree for root, dirs, files in os.walk(searchdir): # Let's determine the module's package path if root == directory: pkgpath = '' else: sep = root[len(directory)] subdir = root[len(directory) + 1:] pkgpath = '.'.join(subdir.split(sep)) + '.' # Start with files... for f in files: # Does it match the testRE? 
if not test.testRE.match(f): continue # Only interested in files we can load for sfx in suffixes: if f.endswith(sfx): modname = f[:-len(sfx)] break else: # Can't load it, so skip it continue # Determine the module's full path fullmodname = pkgpath + modname # Let's try to import it try: __import__(fullmodname) mod = sys.modules[fullmodname] except ImportError: # Remember the exception we got caught.append((os.path.join(root, f), fullmodname, sys.exc_info())) # Can't import it, so move on continue test.visit_mod(mod, tests) # Now we want to determine which subdirectories are packages; # they'll contain __init__.py subdirs = [] for d in dirs: # Only interested in directories which contain __init__.py for sfx in suffixes: if os.path.exists(os.path.join(root, d, '__init__' + sfx)): break else: # Not a package, so skip it continue # Does it match the testRE? if not test.testRE.match(d): # No, but let's continue exploring under it subdirs.append(d) continue # Determine the package's full path fullpkgname = pkgpath + d # Let's try to import it try: __import__(fullpkgname) pkg = sys.modules[fullpkgname] except ImportError: # Remember the exception we got caught.append((os.path.join(root, d), fullpkgname, sys.exc_info())) # Can't import it, no point exploring under it continue test.visit_mod(pkg, tests) # We also want to explore under it subdirs.append(d) # Make sure to set up our pruned subdirectory list dirs[:] = subdirs # We have finished loading all tests; restore the original import # path sys.path = tmppath # Add the discovered tests to the queue queue.add_tests(tests) # Output the import errors, if any if caught: queue.output.imports(caught) # Return the queue return queue def main(directory=None, maxth=None, skip=lambda dt: dt.skip, output=DTestOutput(), dryrun=False, debug=False, dotpath=None): """ Discover tests under ``directory`` (by default, the current directory), then run the tests under control of ``maxth``, ``skip``, and ``output`` (see the documentation for the run() function for more information on these three parameters). Returns True if all tests (with the exclusion of expected failures) passed, or False if an unexpect OK, a failure, or an error was encountered. """ # First, allocate a queue queue = DTestQueue(maxth, skip, output) # Next, discover the tests of interest explore(directory, queue) # Is this a dry run? if not dryrun: # Nope, execute the tests result = queue.run(debug=debug) else: result = True # Print out the names of the tests print "Discovered tests:\n" for dt in queue.tests: if dt.istest(): print str(dt) # Are we to dump the dependency graph? if dotpath is not None: with open(dotpath, 'w') as f: print >>f, queue.dot() # Now, let's return the result of the test run return result def optparser(*args, **kwargs): """ Builds and returns an option parser with the default options recognized by the dtest framework. All arguments are passed to the OptionParser constructor. """ # Set up an OptionParser op = OptionParser(*args, **kwargs) # Set up our default options op.add_option("-d", "--directory", action="store", type="string", dest="directory", help="The directory to search for tests to run.") op.add_option("-m", "--max-threads", action="store", type="int", dest="maxth", help="The maximum number of tests to run simultaneously; if " "not specified, an unlimited number of tests may run " "simultaneously.") op.add_option("-s", "--skip", action="store", type="string", dest="skip", help="Specifies a rule to control which tests are skipped. 
" "If value contains '=', tests having an attribute with the " "given value will be skipped. If value does not contain " "'=', tests that have the attribute will be skipped.") op.add_option("--no-skip", action="store_true", dest="noskip", help="Specifies that no test should be skipped. Overrides " "--skip, if specified.") op.add_option("-n", "--dry-run", action="store_true", dest="dryrun", help="Performs a dry run. After discovering all tests, " "the list of tests is printed to standard output.") op.add_option("-D", "--debug", action="store_true", dest="debug", help="Enables debugging mode. Disables output capturing " "for running tests, causing all output to be emitted " "immediately.") op.add_option("--dot", action="store", type="string", dest="dotpath", help="After running tests, a text representation of the " "dependency graph is placed in the indicated file. This " "file may then be passed to the \"dot\" tool of the " "GraphViz package to visualize the dependency graph. " "This option may be used in combination with \"-n\".") # Return the OptionParser return op def opts_to_args(options): """ Converts an options object--as returned by calling the parse_args() method of the return value from the optparser() function--into a dictionary that can be fed to the main() function to execute the desired test operation. """ # Build the arguments dictionary args = {} # Start with the skip-related arguments if options.noskip is True: args['skip'] = lambda dt: False elif options.skip is not None: if '=' in options.skip: k, v = options.skip.split('=', 1) args['skip'] = lambda dt: getattr(dt, k, None) == v else: args['skip'] = lambda dt: hasattr(dt, options.skip) # Now look at max threads if options.maxth is not None: args['maxth'] = options.maxth # Are we doing a dry run? if options.dryrun is True: args['dryrun'] = True # Are we in debug mode? if options.debug is True: args['debug'] = True # How about dumping the dependency graph? if options.dotpath is not None: args['dotpath'] = options.dotpath # And, finally, directory if options.directory is not None: args['directory'] = options.directory # Return the built arguments object return args if __name__ == '__main__': # Obtain the options opts = optparser(usage="%prog [options]") # Process command-line arguments (options, args) = opts.parse_args() # Execute the test suite sys.exit(not main(**opts_to_args(options))) DTest-0.4.0/dtest/test.py0000644000175000017500000013755011607315463015531 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ===== Tests ===== This module contains all the classes and decorators necessary for manipulating tests. The DTestBase class is the root of the inheritance tree for DTest and DTestFixture, which respectively represent tests and test fixtures. The DTestCaseMeta class is a metaclass for DTestCase, which is equivalent to unittest.TestCase for the dependency-based test framework. 
This module also contains a number of decorators, such as @istest, @nottest, @skip, @failing, @attr(), @depends(), @raises(), and @timed(), along with the debugging utility function dot(). """ import inspect import re import sys import types from dtest.constants import * from dtest import exceptions from dtest import policy as pol from dtest import result from dtest import strategy as strat SETUP = 'setUp' TEARDOWN = 'tearDown' CLASS = 'Class' class DTestBase(object): """ DTestBase ========= The DTestBase class is a base class for the DTest and DTestFixture classes, and contains a number of common elements. Most users will only be interested in the attribute manipulation methods, which allows attributes to be attached to tests (see also the @attr() decorator); the stringification method, which generates a string name for the test based on the function or method wrapped by the DTestBase instance; and the following properties: :result: The result of the most recent test run; may be None if the test has not yet been run. :state: The state of the most recent test run; may be None if the test has not yet been run. :test: The actual function or method implementing the test. :class_: If the test is a method of a class, this property will be the appropriate class; otherwise, None. :skip: True if the @skip decorator has been used on the test. :failing: True if the @failing decorator has been used on the test. :dependencies: The tests this test is dependent on. :dependents: The tests that are dependent on this test. :raises: The set of exceptions this test can raise; declared using the @raises() decorator. :timeout: The timeout set for this test; declared using the @timed() decorator. In addition, the setUp() and tearDown() methods are available to identify special set up and tear down functions or methods to override the class-level setUp() and tearDown() methods; these methods may be used as decorators, assuming the test has been decorated with the @istest decorator. There is also an istest() method, which by default returns False; this is overridden by the DTest class to return True. """ # Keep a list of the recognized attributes for the promote() # classmethod _class_attributes = [ '_name', '_test', '_class', '_exp_fail', '_skip', '_pre', '_post', '_deps', '_revdeps', '_partner', '_attrs', '_raises', '_timeout', '_result', '_repeat', '_strategy', '_policy' ] def __init__(self, test): """ Initialize a DTestBase instance wrapping ``test``. """ # The test cannot be None if test is None: raise exceptions.DTestException("None is an invalid test") # We have to unwrap MethodType and class and static methods if (isinstance(test, types.MethodType) or isinstance(test, classmethod) or isinstance(test, staticmethod)): try: test = test.__func__ except AttributeError: # Python 2.6 doesn't have __func__ attribute on # classmethod or staticmethod, so let's kludge around # it... tmp = test.__get__(None, object) # If it's an instance of staticmethod, tmp is func if isinstance(test, staticmethod): test = tmp # If it's an instance of classmethod, tmp has __func__ else: test = tmp.__func__ # Require it to be a callable... 
if not callable(test): raise exceptions.DTestException("%r must be a callable" % test) # Initialize ourself self._name = None self._test = test self._class = None self._exp_fail = False self._skip = False self._pre = None self._post = None self._deps = set() self._revdeps = set() self._partner = None self._attrs = {} self._raises = set() self._timeout = None self._result = None self._repeat = 1 self._strategy = strat.SerialStrategy() self._policy = pol.basicPolicy # Attach ourself to the test test._dt_dtest = self test.setUp = self.setUp test.tearDown = self.tearDown def __getattr__(self, key): """ Retrieve the attribute with the given ``key``. Attributes may be set using the @attr() decorator. """ # Get the attribute out of the _attrs map try: return self._attrs[key] except KeyError: raise AttributeError(key) def __setattr__(self, key, value): """ Update the attribute with the given ``key`` to ``value``. Attributes may be initially set using the @attr() decorator. """ # Is it an internal attribute? if key[0] == '_': return super(DTestBase, self).__setattr__(key, value) # Store that in the _attrs map self._attrs[key] = value def __delattr__(self, key): """ Delete the attribute with the given ``key``. Attributes may be initially set using the @attr() decorator. """ # Is it an internal attribute? if key[0] == '_': return super(DTestBase, self).__delattr__(key) # Delete from the _attrs map del self._attrs[key] def __int__(self): """ Returns the value of the instance in an integer context. Normally returns 0, but DTest overrides this to return 1. This makes counting tests and not test fixtures easy. """ # In an integer context, we're 0; this is how we can count the # number of tests return 0 def __hash__(self): """ Returns a hash code for this instance. This allows instances of DTestBase to be stored in a set or used as hash keys. """ # Return the hash of the key return hash(id(self)) def __eq__(self, other): """ Compares two instances of DTestBase for equality. This allows instances of DTestBase to be stored in a set or used as hash keys. """ # Compare test objects return self is other def __ne__(self, other): """ Compares two instances of DTestBase for inequality. This is for completeness, complementing __eq__(). """ # Compare test objects return self is not other def __str__(self): """ Generates a string representation of the test. The string representation is the fully qualified name of the wrapped function or method. """ # If our name has not been generated, do so... if self._name is None: if self._class is None: # No class is involved self._name = '.'.join([self._test.__module__, self._test.__name__]) else: # Have to include the class name self._name = '.'.join([self._test.__module__, self._class.__name__, self._test.__name__]) # Return the name return self._name def __repr__(self): """ Generates a representation of the test. This augments the standard __repr__() output to include the actual test function or method wrapped by the DTestBase instance. """ # Generate a representation of the test return ('<%s.%s object at %#x wrapping %r>' % (self.__class__.__module__, self.__class__.__name__, id(self), self._test)) @property def result(self): """ Retrieve the most recent result of running the test. This may be None if the test has not yet been run. """ # We want the result to be read-only, but to be accessed like # an attribute return self._result @property def state(self): """ Retrieve the most recent state of the test. This may be None if the test has not yet been run. 
""" # We want the state to be read-only, but to be accessed like # an attribute; this is a short-cut for reading the state from # the result return self._result.state if self._result is not None else None @property def test(self): """ Retrieve the test function or method wrapped by this DTestBase instance. """ # We want the test to be read-only, but to be accessed like an # attribute return self._test @property def class_(self): """ Retrieve the class in which the test method was defined. If the wrapped test is a bare function, rather than a method, this will be None. """ # We want the test's class to be read-only, but to be accessed # like an attribute return self._class @property def skip(self): """ Retrieve the ``skip`` setting for the test. This will be True only if the @skip decorator has been used on the test; otherwise, it will be False. """ # We want the test's skip setting to be read-only, but to be # accessed like an attribute return self._skip @property def failing(self): """ Retrieve the ``failing`` setting for the test. This will be True only if the @failing decorator has been used on the test; otherwise, it will be False. """ # We want the test's expected failure setting to be read-only, # but to be accessed like an attribute return self._exp_fail @property def dependencies(self): """ Retrieve the set of tests this test is dependent on. This returns a frozenset. """ # We want the dependencies to be read-only, but to be accessed # like an attribute return frozenset(self._deps) @property def dependents(self): """ Retrieve the set of tests that are dependent on this test. This returns a frozenset. """ # We want the depedents to be read-only, but to be accessed # like an attribute return frozenset(self._revdeps) @property def raises(self): """ Retrieve the set of exceptions this test is expected to raise. Will be empty unless the @raises() decorator has been used on this test. This returns a frozenset. """ # We want the exceptions to be read-only, but to be accessed # like an attribute return frozenset(self._raises) @property def timeout(self): """ Retrieve the timeout for this test. Will be None unless the @timed() decorator has been used on this test. """ # We want the timeout to be read-only, but to be accessed like # an attribute return self._timeout @property def repeat(self): """ Retrieve the repeat count for this test. Will be 1 unless the @repeat() decorator has been used on this test. """ # We want the repeat count to be read-only, but to be accessed # like an attribute return self._repeat @classmethod def promote(cls, test): """ Promotes a DTestBase instance from one class to another. """ # If test is None, return None if test is None: return None # If it's already the same class, return it if test.__class__ == cls: return test # First, allocate a new instance newtest = object.__new__(cls) # Now, initialize it from test for attr in cls._class_attributes: setattr(newtest, attr, getattr(test, attr)) # Walk through all dependencies/dependents and replace the # test for dep in newtest._deps: dep._revdeps.remove(test) dep._revdeps.add(newtest) for dep in newtest._revdeps: dep._deps.remove(test) dep._deps.add(newtest) # Replace the bindings in the test newtest._test._dt_dtest = newtest newtest._test.setUp = newtest.setUp newtest._test.tearDown = newtest.tearDown # Return the new test return newtest def setUp(self, pre): """ Explicitly set the setUp() function or method to be called immediately prior to executing the test. 
This can be used as a decorator; however, the test in question must have been decorated for this method to be available. If no other decorator is appropriate for the test, use the @istest decorator. """ # Save the pre-test fixture. This method can be used as a # decorator. self._pre = pre return pre def tearDown(self, post): """ Explicitly set the tearDown() function or method to be called immediately after executing the test. This can be used as a decorator; however, the test in question must have been decorated for this method to be available. If no other decorator is appropriate for the test, use the @istest decorator. """ # Save the post-test fixture. This method can be used as a # decorator. self._post = post return post def istest(self): """ Returns True if the instance is a test or False if the instance is a test fixture. For all instances of DTestBase, returns False; the DTest class overrides this method to return True. """ # Return True if this is a test return False def _attach(self, cls): """ Attach a class to the test. This may re-key the test. """ # If a class is already associated, do nothing if self._class is not None: return # Set the class self._class = cls # Re-set our name self._name = None def _run(self, output): """ Perform the test. Causes any fixtures discovered as part of the class or explicitly set (or overridden) by the setUp() and tearDown() methods to be executed before and after the actual test, respectively. Returns the result of the test. """ # Need a helper to unwrap and call class methods and static # methods def get_call(method, obj): # If obj is not None, extract the method with getattr(), # so we use the right calling convention if obj is not None: method = getattr(obj, method.__name__) # Now call it return method # Transition to the running state self._result._transition(RUNNING, output=output) # Set up an object for the call, if necessary obj = None if self._class is not None: obj = self._class() # Perform preliminary call pre_status = True if self._pre is not None: with self._result.accumulate(PRE): get_call(self._pre, obj)() if not self._result: pre_status = False # Execute the test if pre_status: # Prepare the strategy... self._strategy.prepare() # Trigger the test self._trigger(self._test.__name__, get_call(self._test, obj), (), {}) # Wait for spawned threads self._strategy.wait() # Invoke any clean-up that's necessary (regardless of # exceptions) if pre_status and self._post is not None: with self._result.accumulate(POST): get_call(self._post, obj)() # Transition to the appropriate ending state self._result._transition(output=output) # Return the result return self._result def _parse_item(self, name, item): """ Parses the tuple ``item`` returned by a generator "test". Returns a 4-element tuple consisting of the name, the callable, the positional arguments, and the keyword arguments. Note that ``item`` may also be a bare callable. """ if callable(item): # It's a bare callable; make up name, arg, and kwargs return ("%s:%s" % (name, item.__name__), item, (), {}) else: # Convert item into a list so we can mutate it try: item = list(item) except TypeError: # Hmmm... raise exceptions.DTestException("Generator result is not " "a sequence") # Make sure we have elements in the list if len(item) < 1: raise exceptions.DTestException("Generator result is an " "empty list") # Do we have a name? 
n = None if isinstance(item[0], basestring): n = item.pop(0) # Make sure we still have elements in the list if len(item) < 1: raise exceptions.DTestException("Generator result has no " "callable") # Get the callable c = item.pop(0) # Bail out if it's not actually callable if not callable(c): raise exceptions.DTestException("Generator result callable " "element is not callable") # Ensure we have a name if n is None: n = "%s:%s" % (name, c.__name__) # Now we need to look for arguments if len(item) < 1: a = () k = {} elif len(item) >= 2: a, k = item[:2] else: tmp = item[0] # Is it a dictionary? if isinstance(tmp, dict): a = () k = tmp else: a = tmp k = {} # Return the computed tuple return (n, c, a, k) def _trigger(self, name, call, args, kwargs): """ Handles making a single call. If the callable ``call`` is a generator function, it will be iterated over and each result sent recursively to _trigger. Otherwise, the call will be repeated the number of times requested by @repeat(). Generator functions may return a tuple consisting of an optional name (which must be a string), a callable (which may be another generator), a sequence of function arguments, and a dictionary of function keyword arguments. Any element except the callable may be omitted. Generators may also return a bare callable. """ # First, check if this is a generator function if inspect.isgeneratorfunction(call): # Allocate and use a context for the generator itself with self._result.accumulate(TEST, id=name): # OK, we need to iterate over the result for item in call(*args, **kwargs): # Make the recursive call self._trigger(*self._parse_item(name, item)) # Fully handled the generator function return # OK, it's a regular test function; let's allocate a result # context for it for i in range(self._repeat): # Allocate a context ctx = self._result.accumulate(TEST, self._raises, name) # Now, let's fire off the test self._strategy.spawn(self._fire, ctx, call, args, kwargs) def _fire(self, ctx, call, args, kwargs): """ Performs the actual test function. This is in a separate method so that it can be spawned as appropriate. """ with ctx: call(*args, **kwargs) def _depcheck(self, output): """ Performs a check of all this test's dependencies, to determine if the test can be executed. Tests can only be executed if all their dependencies have passed. """ # All dependencies must be OK for dep in self._deps: if (dep.state == FAIL or dep.state == ERROR or dep.state == XFAIL or dep.state == DEPFAIL): # Set our own state to DEPFAIL self._result._transition(DEPFAIL, output=output) return False elif dep.state == SKIPPED: # Set our own state to SKIPPED self._result._transition(SKIPPED, output=output) return False elif dep.state != OK and dep.state != UOK: # Dependencies haven't finished up, yet return False # All dependencies satisfied! return True def _skipped(self, output): """ Marks this DTestBase instance as having been skipped. This status propagates up and down the dependence graph, in order to mark all dependents as skipped and to cause unneeded fixtures to also be skipped. """ # Mark that this test has been skipped by transitioning the # state if self.state is None: self._result._transition(SKIPPED, output=output) # Propagate up to tests dependent on us for dt in self._revdeps: dt._skipped(output) # Also notify tests we're dependent on for dt in self._deps: dt._notify_skipped(output) def _notify_skipped(self, output): """ Notifies this DTestBase instance that a dependent has been skipped. 
This is used by DTestFixture to identify when a given fixture should be skipped. """ # Regular tests don't care that some test dependent on them # has been skipped pass def _prepare(self): """ Prepares this test for execution by allocating a DTestResult instance. """ # Select the correct result container if self._repeat > 1 or inspect.isgeneratorfunction(self._test): # Will have multiple results self._result = result.DTestResultMulti(self) else: # Just one result; use the simpler machinery self._result = result.DTestResult(self) class DTest(DTestBase): """ DTest ===== The DTest class represents individual tests to be executed. It inherits most of its elements from the DTestBase class, but overrides the __int__(), _depcheck(), and istest() methods to implement test-specific behavior. """ def __int__(self): """ Returns the value of the instance in an integer context. Returns 1 to make counting tests and not test fixtures easy. """ # This is a single test, so return 1 so we contribute to the # count return 1 def istest(self): """ Returns True if the instance is a test or False if the instance is a test fixture. Overrides DTestBase.istest() to return True. """ # Return True, since this is a test return True class DTestFixture(DTestBase): """ DTestFixture ============ The DTestFixture class represents test fixtures to be executed. It inherits most of its elements from the DTestBase class, but overrides the _depcheck(), _skipped(), and _notify_skipped() methods to implement test fixture-specific behavior. In addition, provides the _set_partner() method, used for setting test fixture partners. """ def _set_partner(self, setUp): """ Sets the partner of a test fixture. This method is called on tear down-type fixtures to pair them with the corresponding set up-type fixtures. This ensures that a tear down fixture will not run unless the corresponding set up fixture ran successfully. """ # Sanity-check setUp if setUp is None: return # First, set a dependency depends(setUp)(self) # Now, save our pair partner self._partner = setUp def _skipped(self, output): """ Marks this DTestFixture instance as having been skipped. Test fixtures may only be skipped if *all* their dependencies have been skipped. """ # Only bother if all our dependencies are also skipped--tear # down fixtures need to run any time the corresponding set up # fixtures have run for dep in self._deps: if dep is not self._partner and dep.state != SKIPPED: return # Call the superclass method super(DTestFixture, self)._skipped(output) def _notify_skipped(self, output): """ Notifies this DTestFixture instance that a dependent has been skipped. If all the fixture's dependents have been skipped, then the test fixture will also be skipped. """ # If all tests dependent on us have been skipped, we don't # need to run for dep in self._revdeps: if dep.state != SKIPPED: return # Call the superclass's _skipped() method super(DTestFixture, self)._skipped(output) class DTestFixtureSetUp(DTestFixture): """ DTestFixtureSetUp ================= The DTestFixtureSetUp class represents setUp() and setUpClass() test fixtures to be executed before enclosed tests. It is derived from DTestFixture. """ pass class DTestFixtureTearDown(DTestFixture): """ DTestFixtureTearDown ==================== The DTestFixtureTearDown class represents tearDown() and tearDownClass() test fixtures to be executed after enclosed tests. 
It is derived from DTestFixture, but overrides the _depcheck() method to ensure that tearDown() and tearDownClass() are always called even if some of the dependencies have failed (unless the corresponding setUp() or setUpClass() fixtures have failed). """ def _depcheck(self, output): """ Performs a check of all this test fixture's dependencies, to determine if the test can be executed. Test fixtures can only be executed if their partner (if one is specified) has passed and if all the tests the fixture is dependent on have finished running or been skipped. """ # Make sure our partner succeeded if self._partner is not None: if (self._partner.state == FAIL or self._partner.state == XFAIL or self._partner.state == ERROR or self._partner.state == DEPFAIL): # Set our own state to DEPFAIL self._result._transition(DEPFAIL, output=output) return False elif self._partner.state == SKIPPED: # Set our own state to SKIPPED self._result._transition(SKIPPED, output=output) return False # Other dependencies must not be un-run or in the RUNNING # state for dep in self._deps: if dep.state is None or dep.state == RUNNING: return False # Dependencies can have failed, failed due to dependencies, # been skipped, or have completed--they just have to be in # that state before running the fixture return True def _gettest(func, testcls=DTest, promote=False): """ Retrieves a DTest from--or, if ``testcls`` is not None, attaches a new test of that class to--``func``. This is a helper function used by the decorators below. """ # We could be passed a DTest, so return it if so if isinstance(func, DTestBase): return testcls.promote(func) if promote else func # If it's a class method or static method, unwrap it if isinstance(func, (classmethod, staticmethod)): try: func = func.__func__ except AttributeError: # Python 2.6 doesn't have __func__ attribute on # classmethod or staticmethod, so let's kludge around # it... tmp = func.__get__(None, object) # If it's an instance of staticmethod, tmp is func if isinstance(func, staticmethod): func = tmp # If it's an instance of classmethod, tmp has __func__ else: func = tmp.__func__ # Always return None if _dt_nottest is set if func is None or (hasattr(func, '_dt_nottest') and func._dt_nottest): return None # Look up the test as a function attribute try: return testcls.promote(func._dt_dtest) if promote else func._dt_dtest except AttributeError: # Don't want to create one, I guess if testcls is None: return None # Not yet declared, so let's go ahead and attach one dt = testcls(func) # Return the test return dt def istest(func): """ Decorates a function to indicate that the function is a test. Can be used if the @func.setUp or @func.tearDown decorators need to be used, or if the test would not be picked up by the test discovery regular expression. """ # Make sure func has a DTest associated with it _gettest(func) # Return the function return func def nottest(func): """ Decorates a function to indicate that the function is not a test. Can be used if the test would be picked up by the test discovery regular expression but should not be. Works by setting the ``_dt_nottest`` attribute on the function to True. """ # Mark that a function should not be considered a test func._dt_nottest = True return func def isfixture(func): """ Decorates a function to indicate that the function is a test fixture, i.e., package- or module-level setUp()/tearDown() or class-level setUpClass()/tearDownClass(). This decorator is now deprecated. 
""" # Return the function return func def skip(func): """ Decorates a test to indicate that the test should be skipped. """ # Get the DTest object for the test dt = _gettest(func) # Set up to skip it dt._skip = True # Return the function return func def failing(func): """ Decorates a test to indicate that the test is expected to fail. """ # Get the DTest object for the test dt = _gettest(func) # Set up to expect it to fail dt._exp_fail = True # Return the function return func def attr(**kwargs): """ Decorates a test to set attributes on the test. Keyword arguments are converted to attributes on the test. Note that all attributes beginning with underscore ("_") and the following list of attributes are reserved: ``result``, ``state``, ``test``, ``class_``, ``skip``, ``failing``, ``dependencies``, ``dependents``, ``raises``, ``timeout``, ``setUp``, ``tearDown``, and ``istest``. """ # Need a wrapper to perform the actual decoration def wrapper(func): # Get the DTest object for the test dt = _gettest(func) # Update the attributes dt._attrs.update(kwargs) # Return the function return func # Return the actual decorator return wrapper def depends(*deps): """ Decorates a test to indicate other tests the test depends on. There is no need to explicitly specify test fixtures. Take care to not introduce dependency cycles. Note that this decorator takes references to the dependencies, and cannot handle dependency names. """ # Get the DTest objects for the dependencies deps = [_gettest(dep) for dep in deps] # Need a wrapper to perform the actual decoration def wrapper(func): # Get the DTest object for the test dt = _gettest(func) # Add the dependencies dt._deps |= set(deps) # Add the reverse dependencies for dep in deps: dep._revdeps.add(dt) # Return the function return func # Return the actual decorator return wrapper def raises(*exc_types): """ Decorates a test to indicate that the test may raise an exception. The valid exceptions are specified to the decorator as references. The list may include None, in which case the test not raising an exception is permissible. """ # Need a wrapper to perform the actual decoration def wrapper(func): # Get the DTest object for the test dt = _gettest(func) # Store the recognized exception types dt._raises |= set(exc_types) # Return the function return func # Return the actual decorator return wrapper def timed(timeout): """ Decorates a test to indicate that the test must take less than ``timeout`` seconds (floats permissible). If the test takes more than that amount of time, the test will fail. Note that this uses the Eventlet timeout mechanism, which depends on the test cooperatively yielding; if the test exclusively performs computation without sleeping or performing I/O, this timeout may not trigger. """ # Need a wrapper to perform the actual decoration def wrapper(func): # Get the DTest object for the test dt = _gettest(func) # Store the timeout value (in seconds) dt._timeout = timeout # Return the function return func # Return the actual decorator return wrapper def repeat(count): """ Decorates a test to indicate that the test must be repeated ``count`` number of times. """ # Need a wrapper to perform the actual decoration def wrapper(func): # Get the DTest object for the test dt = _gettest(func) # Store the repeat count dt._repeat = count # Return the function return func # Return the actual decorator return wrapper def strategy(st, func=None): """ Used to set the parallelization strategy for tests to ``st``. 
If ``func`` is provided, the parallelization strategy for ``func`` is set; otherwise, returns a function which can be used as a decorator. This behavior on the presence of ``func`` allows strategy() to be used to create user-defined parallelization strategy decorators. Parallelization strategies allow tests that are defined as generators or which are decorated with the @repeat() decorator to execute in parallel threads. A parallelization strategy is an object defining prepare(), spawn(), and wait() methods, which will be called in that order. The prepare() method is passed no arguments and simply prepares the strategy object for a sequence of spawn() calls. The spawn() method is called with a callable and the arguments and keyword arguments, and should cause the callable to be executed (presumably in a separate thread of control) with the given arguments. Once all calls have been spawned, DTest will call the wait() method, which must wait for all the spawned callables to complete execution. Note that the callable passed to the spawn() method is not a test, and no assumptions may be made about the callable or its arguments. """ # Need a wrapper to perform the actual decoration def wrapper(f): # Get the DTest object for the test dt = _gettest(f) # Change the parallelization strategy dt._strategy = st # Return the function return f # If the function is specified, apply the wrapper directly if func is not None: return wrapper(func) # Return the actual decorator return wrapper def parallel(arg): """ Decorates a test to indicate that the test can be executed with a multithread parallelization strategy. This is only meaningful on tests that are repeated or on generator function tests. If used in the ``@parallel`` form, the maximum number of threads is unlimited; if used as ``@parallel(n)``, the maximum number of threads is limited to ``n``. """ # Default strategy is the UnlimitedParallelStrategy st = strat.UnlimitedParallelStrategy() # Wrapper to actually attach the strategy to the test def wrapper(func): return strategy(st, func) # If arg is a callable, call wrapper directly if callable(arg): return wrapper(arg) # OK, arg is an integer and specifies a limit on the number of # threads; set up a LimitedParallelStrategy. st = strat.LimitedParallelStrategy(arg) # And return the wrapper, which will be the actual decorator return wrapper def policy(p, func=None): """ Used to set the result policy for tests to ``p``. If ``func`` is provided, the result policy for ``func`` is set; otherwise, returns a function which can be used as a decorator. This behavior on the presence of ``func`` allows policy() to be used to create user-defined result policy decorators. Result policies allow tests that are defined as generators or which are decorated with the @repeat() decorator to specify more complex computations than simply requiring all functions executed to succeed. A result policy is simply a callable, and can be either a function or an object with a __call__() method. It will be passed four counts--the total number of functions executed so far, the total number of successes seen so far, the total number of failures seen so far, and the total number of errors seen so far. It must return a tuple of two boolean values; the first should be True if and only if the overall result is a success, and the second should be True if and only if the overall result is an error. The second boolean may not be True if the first boolean is True. 
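For example (a sketch only; ``anyPolicy`` is not part of the framework), a policy that reports success whenever at least one call succeeds, while still treating any error as an overall error, might look like::

    def anyPolicy(tot, suc, fail, err):
        # Any error forces an overall error result
        if err > 0:
            return False, True
        # Otherwise, succeed if at least one call succeeded
        return (suc > 0), False

Such a policy could then be attached to a repeated or generator test with ``@policy(anyPolicy)``.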
""" # Need a wrapper to perform the actual decoration def wrapper(f): # Get the DTest object for the test dt = _gettest(f) # Change the result policy dt._policy = p # Return the function return f # If the function is specified, apply the wrapper directly if func is not None: return wrapper(func) # Return the actual decorator return wrapper def threshold(th): """ Decorates a test to indicate that the test's result policy is a threshold policy. The ``th`` argument is a float between 0.0 and 100.0, and indicates the minimum percentage of tests which must succeed for the overall result to be a success. Note that any errors cause the overall result to be an error. """ # Wrapper to actually attach the threshold to the test def wrapper(func): return policy(pol.ThresholdPolicy(th), func) # Now return the wrapper, which will be the actual decorator return wrapper testRE = re.compile(r'(?:^|[\b_\.-])[Tt]est') def visit_mod(mod, tests): """ Helper function which searches a module object, specified by ``mod``, for all tests, test classes, and test fixtures, then sets up proper dependency information. All discovered tests are added to the set specified by ``tests``. Returns a tuple containing the closest discovered test fixtures (needed because visit_mod() is recursive). """ # Have we visited this module before? if hasattr(mod, '_dt_visited'): # We cache the tests in this module (and parent modules) in # _dt_visited tests |= mod._dt_visited return mod._dt_setUp, mod._dt_tearDown # OK, set up the visited cache mod._dt_visited = set() # If we have a parent package... setUp = None tearDown = None if '.' in mod.__name__: pkgname, modname = mod.__name__.rsplit('.', 1) # Visit up one level setUp, tearDown = visit_mod(sys.modules[pkgname], mod._dt_visited) # See if we have fixtures in this module setUpLocal = None if hasattr(mod, SETUP): setUpLocal = _gettest(getattr(mod, SETUP), DTestFixtureSetUp, True) # Set up the dependency if setUp is not None: depends(setUp)(setUpLocal) setUp = setUpLocal if hasattr(mod, TEARDOWN): tearDownLocal = _gettest(getattr(mod, TEARDOWN), DTestFixtureTearDown, True) # Set up the dependency if tearDown is not None: depends(tearDownLocal)(tearDown) # Also set up the partner dependency if setUpLocal is not None: tearDownLocal._set_partner(setUpLocal) tearDown = tearDownLocal # OK, we now have the test fixtures; let's cache them mod._dt_setUp = setUp mod._dt_tearDown = tearDown # Also add them to the set of discovered tests if setUp is not None: mod._dt_visited.add(setUp) if tearDown is not None: mod._dt_visited.add(tearDown) # Now, let's scan all the module attributes and set them up as # tests with appropriate dependencies... for k in dir(mod): # Skip internal attributes and the fixtures if k[0] == '_' or k == SETUP or k == TEARDOWN: continue # Get the value v = getattr(mod, k) # Skip non-callables if not callable(v): continue # Is it explicitly not a test? if hasattr(v, '_dt_nottest') and v._dt_nottest: continue # If it's a DTestCase, handle it specially try: if issubclass(v, DTestCase): # Set up dependencies if setUp is not None: if hasattr(v, SETUP + CLASS): # That's easy... depends(setUp)(getattr(v, SETUP + CLASS)) else: # Set up a dependency for each test for t in v._dt_tests: depends(setUp)(t) if tearDown is not None: if hasattr(v, TEARDOWN + CLASS): # That's easy... 
depends(getattr(v, TEARDOWN + CLASS))(tearDown) else: # Set up a dependency for each test for t in v._dt_tests: depends(t)(tearDown) # Add all the tests mod._dt_visited |= v._dt_tests # Well, it's probably a class, so ignore it continue except TypeError: # Guess it's not a class... pass # OK, let's try to get the test dt = _gettest(v, DTest if testRE.match(k) else None) if dt is None: # Not a test continue # Keep track of tests in this module mod._dt_visited.add(dt) # Set up the dependencies on setUp and tearDown if setUp is not None: depends(setUp)(dt) if tearDown is not None: depends(dt)(tearDown) # Set up the list of tests tests |= mod._dt_visited # OK, let's return the fixtures for recursive calls return setUp, tearDown class DTestCaseMeta(type): """ DTestCaseMeta ============= The DTestCaseMeta is a metaclass for DTestCase. Before constructing the class, discovers all tests and related test fixtures (including module- and package-level fixtures) and sets up dependencies as appropriate. Also ensures that the ``class_`` attribute of tests and test fixtures is set appropriately. """ def __new__(mcs, name, bases, dict_): """ Constructs a new class with the given ``name``, ``bases``, and ``dict_``. The ``dict_`` is searched for all tests and class-level test fixtures. """ # We want to discover all tests, both here and in bases. The # easiest way of doing this is to begin by constructing the # class... cls = super(DTestCaseMeta, mcs).__new__(mcs, name, bases, dict_) # Look for the fixtures setUp = getattr(cls, SETUP, None) tearDown = getattr(cls, TEARDOWN, None) setUpClass = _gettest(getattr(cls, SETUP + CLASS, None), DTestFixtureSetUp, True) tearDownClass = _gettest(getattr(cls, TEARDOWN + CLASS, None), DTestFixtureTearDown, True) # Attach the class to the fixtures if setUpClass is not None: setUpClass._attach(cls) if tearDownClass is not None: tearDownClass._attach(cls) # Also set up the dependency between setUpClass and # tearDownClass if setUpClass is not None and tearDownClass is not None: tearDownClass._set_partner(setUpClass) # Now, let's scan all the class attributes and set them up as # tests with appropriate dependencies... tests = [] for k in dir(cls): # Skip internal attributes and the fixtures if (k[0] == '_' or k == SETUP or k == TEARDOWN or k == SETUP + CLASS or k == TEARDOWN + CLASS): continue # Get the value v = getattr(cls, k) # Skip non-callables if not callable(v): continue # Is it explicitly not a test? if hasattr(v, '_dt_nottest') and v._dt_nottest: continue # OK, let's try to get the test dt = _gettest(v, DTest if testRE.match(k) else None) if dt is None: # Not a test continue # Attach the class to the test dt._attach(cls) # Keep a list of the tests in this class tests.append(dt) # We now have a test; let's attach fixtures as # appropriate... if dt._pre is None and setUp is not None: dt.setUp(setUp) if dt._post is None and tearDown is not None: dt.tearDown(tearDown) # Also set up the dependencies on setUpClass and # tearDownClass if setUpClass is not None: depends(setUpClass)(dt) if tearDownClass is not None: depends(dt)(tearDownClass) # Save the list of tests cls._dt_tests = set(tests) # Also need to list the fixtures if setUpClass is not None: cls._dt_tests.add(setUpClass) if tearDownClass is not None: cls._dt_tests.add(tearDownClass) # OK, let's return the constructed class return cls class DTestCase(object): """ DTestCase ========= The DTestCase class is a base class for classes of test methods. It is constructed using the DTestCaseMeta metaclass. 
Any classes which contain tests must inherit from DTestCase or must use DTestCaseMeta as a metaclass. """ __metaclass__ = DTestCaseMeta DTest-0.4.0/dtest/__init__.py0000644000175000017500000001444511607315463016306 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ =============================== Dependency-based Test Framework =============================== The dtest package defines a dependency-based test framework similar to the standard unittest in the Python standard library. The primary advantage of a dependency-based test framework is that it is easy to run tests in multiple threads, making test runs faster because tests are performed simultaneously. It is also possible to ensure that some tests are skipped if other tests fail, perhaps because the tests to be skipped are dependent on the very functionality that has been shown to be improperly implemented. These dependencies are also used, under the hood, to safely permit the running of test fixtures at class-, module-, and package-levels, without worrying about multi-threading issues. The dtest framework provides a DTestCase class, similar to unittest.TestCase. There are also a number of decorators available, to do such things as: marking a function or method as being (or not being) a test (@istest and @nottest); marking a test to be skipped by default (@skip); marking a test as having an expected failure (@failing); setting arbitrary attributes on a test (@attr()); indicating that a test is dependent on other tests (@depends()); indicating that a test is expected to raise a given exception or one of a given set of exceptions (@raises()); marking that a test should conclude within a given time limit (@timed()); requesting that a test be executed multiple times (@repeat()); setting an alternate parallelization strategy (@strategy()); using the multithreaded parallelization strategies (@parallel()); setting the result policy (@policy()); and using the threshold result policy (@threshold()). Tests may be discovered using the explore() function, which returns an instance of DTestQueue. (This instance may be passed to other invocation of explore(), to discover tests in multiple directories.) Once tests have been discovered, a dependency graph may then be generated using the DTestQueue.dot() method, or the test suite may be executed by calling the DTestQueue.run(). It is possible to capture arbitrary forms of output by extending and instantiating the Capturer class. Note that standard output and standard error are captured by default. Capturing may be disabled for a test run by passing a True ``debug`` argument to the DTestQueue.run() method. Tests themselves may be written using the ``assert`` statement, if desired, but a number of utilities are available in the dtest.util package for performing various common tests. 
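For instance (a minimal sketch; the test functions are purely illustrative), a simple test module might contain nothing more than::

    from dtest import raises

    def test_arithmetic():
        assert 2 + 2 == 4

    @raises(KeyError)
    def test_missing_key():
        # Passes because the declared KeyError is actually raised
        {}['no-such-key']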
Additionally, a special output stream, ``dtest.status``, is provided; this stream may be used to emit status messages to inform the user of the status of a long-running test. (Additional properties and methods are available on ``dtest.status`` for supporting this special stream within newly-created threads. The built-in parallelization strategies already use this support. For more information, see the documentation for ``dtest.status.output``, ``dtest.status.test``, and ``dtest.status.setup()``.) For complex testing behavior, generator test functions are possible. These test functions should yield either a callable or a tuple. If a tuple is yielded, the first or second element must be a callable, and the elements after the callable identify positional arguments, keyword arguments, or both (in the order positional arguments as a sequence, followed by keyword arguments as a dictionary). If the callable is the second element of the tuple, the first must be a string giving a name for the test. Note that yielded tests cannot have dependencies, fixtures or any of the other DTest decorators; all such enhancements must be attached to the generator function; on the other hand, it is legal for the yielded callable to be a generator itself, which will be treated identically to the top-level generator function. Also note that when the @repeat() decorator is applied to a generator test function, each yielded function will be called the designated number of times, but the generator itself will be called only once. When using the above complex testing behavior, it is also possible to affect the overall result based on the number of individual successes, failures, and errors encountered through the use of result policies. By default, the overall result is only a success if all individual runs return result, and an error overall result is reported if any errors were encountered; however, a threshold policy can be selected by decorating the test with the @threshold() decorator. In the threshold policy, any errors result in an overall error result, but only a given percentage of tests must succeed in order for the overall result to be a success. It is also possible to build special-purpose result policies; they can be attached to a test using the @policy() decorator. Note that both dtest and dtest.util are safe for use with "import *". """ from dtest.capture import Capturer from dtest.constants import * from dtest.exceptions import DTestException from dtest.core import DTestQueue, DTestOutput, status, explore, main, \ optparser, opts_to_args from dtest.test import istest, nottest, isfixture, skip, failing, attr, \ depends, raises, timed, repeat, strategy, parallel, policy, threshold, \ DTestCase __all__ = ['Capturer', 'PRE', 'POST', 'TEST', 'RUNNING', 'FAIL', 'XFAIL', 'ERROR', 'DEPFAIL', 'OK', 'UOK', 'SKIPPED', 'DTestException', 'DTestQueue', 'DTestOutput', 'status', 'explore', 'main', 'optparser', 'opts_to_args', 'istest', 'nottest', 'isfixture', 'skip', 'failing', 'attr', 'depends', 'raises', 'timed', 'repeat', 'strategy', 'parallel', 'policy', 'threshold', 'DTestCase'] DTest-0.4.0/dtest/constants.py0000644000175000017500000000272011607315463016554 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ============== Test Constants ============== This module contains the various constants used by the test framework. The constants are the various states that a test may be in (RUNNING, FAIL, XFAIL, ERROR, DEPFAIL, OK, UOK, and SKIPPED) and the origins of messages in the result (PRE, POST, and TEST). """ # Test states RUNNING = 'RUNNING' # test running FAIL = 'FAIL' # test failed XFAIL = 'XFAIL' # test expected to fail ERROR = 'ERROR' # error running test DEPFAIL = 'DEPFAIL' # dependency failed or errored out OK = 'OK' # test completed successfully UOK = 'UOK' # test unexpectedly completed successfully SKIPPED = 'SKIPPED' # test was skipped # Result message origins PRE = 'PRE' # Error in pre-execute fixture POST = 'POST' # Error in post-execute fixture TEST = 'TEST' # Error from the test itself DTest-0.4.0/dtest/exceptions.py0000644000175000017500000000212311607315463016716 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ========================= Test Framework Exceptions ========================= This module contains the DTestException class, which is an exception raised by the framework when an error is encountered while executing functions or methods of the framework itself. """ class DTestException(Exception): """ DTestException ============== The DTestException is an exception class used for all exceptions generated by the test framework. """ pass DTest-0.4.0/dtest/policy.py0000644000175000017500000000573611607315463016051 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ =============== Result Policies =============== This module contains all the functions and classes necessary for identifying result policies. A result policy comes into play only for tests that result in multiple test calls.
Result policies are simply callables (functions or objects with __call__() methods) that receive four counts--the total number of results accumulated so far, the total number that succeeded, the total number that failed, and the total number of errors encountered. They must return a tuple containing two boolean values--one indicating whether the test is an overall success (True) or not, and the second indicating whether the test is an error. (The second may only be True if the first is False.) """ def basicPolicy(tot, suc, fail, err): """ Implements the basic policy--all tests must succeed for the overall result to be a success, and if there are any errors, the overall result is an error. """ return (fail == 0 and err == 0), (err > 0) class ThresholdPolicy(object): """ ThresholdPolicy =============== Implements the threshold policy--there must be no errors, and the number of successes must meet or exceed a given threshold (expressed as a percentage) for the overall result to be a success. """ def __init__(self, threshold): """ Initialize the ThresholdPolicy object. The ``threshold`` must be expressed as a percentage (0 to 100); float values are legal here. """ # Save the threshold as a float self.threshold = float(threshold) def __call__(self, tot, suc, fail, err): """ Implements the threshold policy. If ``err`` is greater than zero, the overall result is an error; otherwise, if ``suc`` represents at least the configured threshold percentage of ``tot``, the overall result is a success. Note that if ``fail`` is zero, the threshold logic is skipped. """ # If there are any errors, we have an error result if err > 0: return False, True # If fail is 0, no point going on if fail == 0: return True, False # Compute the percentage of successes... percent = (suc * 100.0) / tot # We're successful only if percent meets or exceeds the threshold return (percent >= self.threshold), False DTest-0.4.0/dtest/capture.py0000644000175000017500000003526411607315463016214 0ustar sorensoren00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ======================= Generic Output Capturer ======================= This module contains the Capturer class, which can be used to capture arbitrary output from tests in a thread-safe fashion. A new Capturer is defined by subclassing the Capturer class and providing implementations for the init(), retrieve(), install(), and uninstall() methods; then, the subclass is instantiated. Once instantiated, capturing by the test framework is automatic--registration of the instance is done by the Capturer constructor. The module also defines StdStreamCapturer and produces two instantiations of it, for capturing stdout and stderr. Usage ----- Each Capturer instance must have a unique name, and additionally a description, which will be output in the test report.
The init() method of a Capturer instance is called to create a new object to intercept and store the output; for stream-like Capturer instances, this could simply return, say, an instantiation of StringIO. The retrieve() method will be passed this object and must return a string consisting of all the output. The install() and uninstall() methods cooperate to install a special CaptureProxy object. For an example of a full Capturer subclass, check out StdStreamCapturer, contained within this module:: class StdStreamCapturer(Capturer): def init(self): # Create a new StringIO stream return StringIO() def retrieve(self, st): # Retrieve the value of the StringIO stream return st.getvalue() def install(self, new): # Retrieve and return the old stream and install the new one old = getattr(sys, self.name) setattr(sys, self.name, new) return old def uninstall(self, old): # Re-install the old stream setattr(sys, self.name, old) # Add capturers for stdout and stderr StdStreamCapturer('stdout', 'Standard Output') StdStreamCapturer('stderr', 'Standard Error') Implementation Details ---------------------- The eventlet.corolocal.local class is used to maintain a set of capturing objects (as initialized by the Capturer.init() method) for each thread. The Capturer.install() method is used by the framework to install a special CaptureProxy object, which uses this thread-local data to proxy all attribute accesses to the correct capturing object. Once a test is complete, the captured data is retrieved by calling the Capturer.retrieve() method, and once all tests have finished, the Capturer.uninstall() method is used to restore the original values that the Capturer.install() method discovered when it installed the CaptureProxy object. The capture module exports three functions used only by the framework; these probably should not be called directly by a test author. The retrieve() function retrieves the captured data by calling the Capturer.retrieve() methods in turn; the data is returned in the same order in which it was registered. The next two internal functions are install() and uninstall(), which simply call the Capturer.install() and Capturer.uninstall() methods in turn. Note that these calls are not made in any defined order, so test authors should not rely on any given ordering. The capture module also pre-defines two Capturer instances, one for capturing output to sys.stdout, and the other for capturing output to sys.stderr; the code for this is included in the example above, and can be referred to while building your own Capturer subclasses. """ from StringIO import StringIO import sys from eventlet.corolocal import local from dtest.exceptions import DTestException # Globals needed for capturing _installed = False _saves = {} class Capturer(object): """ Capturer ======== The Capturer class is an abstract class which keeps track of all instances. It is used to set up new output capturers, in a thread-safe way. Subclasses must implement the init(), retrieve(), install(), and uninstall() methods. The Capturer class should not be instantiated directly, as these necessary methods are unimplemented. Two class variables are defined; the _capturers dictionary stores a mapping from a Capturer ``name`` to a Capturer instance, while the _caporder list contains a list of Capturer ``name``s in the order in which they were instantiated. """ _capturers = {} _caporder = [] def __new__(cls, name, desc): """ Allocate and initialize a Capturer.
Each instance of Capturer must have a unique ``name``; this name permits distinct Capturer instances to be looked up by CaptureProxy instances. The ``desc`` argument should contain a short description which will be used to indicate the source of the capture in the test output. If an attempt to reuse a ``name`` is made, the previous instance of that name will be returned, rather than a new one being instantiated. """ # First, make sure name isn't already in use if name in cls._capturers: return cls._capturers[name] # Don't allow new capturer registrations after we're installed if _installed: raise DTestException("Capturers have already been installed") # OK, construct a new one cap = super(Capturer, cls).__new__(cls) cap.name = name cap.desc = desc # Save it in the cache cls._capturers[name] = cap cls._caporder.append(name) # And return it return cap def init(self): """ Initialize a Capturer object. Should return an object which exports the appropriate interface for the output being intercepted. All subclasses must implement this method. """ # Initialize a capturer; returns an object that looks like # whatever's being captured, but from which a value can later # be retrieved. raise DTestException("%s.%s.init() unimplemented" % (self.__class__.__module__, self.__class__.__name__)) def retrieve(self, captured): """ Retrieve data from a Capturer object. The ``captured`` argument will be an object returned by the init() method. Should return a string consisting of the output data. All subclasses must implement this method. """ # Retrieve the value of a capturer; takes the object returned # by init() and returns its string value. raise DTestException("%s.%s.retrieve() unimplemented" % (self.__class__.__module__, self.__class__.__name__)) def install(self, new): """ Install a CaptureProxy object. The ``new`` argument will be a CaptureProxy object, which will delegate all accesses to an appropriate object returned by the init() method. Should return the old value of whatever interface is being captured. All subclasses must implement this method. """ # Install the capture proxy specified by new; should place # that object into the appropriate place so that it can # capture output. Should return the old value, which will # later be passed to uninstall. raise DTestException("%s.%s.install() unimplemented" % (self.__class__.__module__, self.__class__.__name__)) def uninstall(self, old): """ Uninstall a CaptureProxy object. The ``old`` argument will be a value returned by the install() method. The CaptureProxy object installed by install() should be uninstalled and replaced by the original object specified by ``old``. All subclasses must implement this method. """ # Uninstall the capture proxy by replacing it with the old # value specified. The old value will be the value returned # by install(). raise DTestException("%s.%s.uninstall() unimplemented" % (self.__class__.__module__, self.__class__.__name__)) class _CaptureLocal(local): """ _CaptureLocal ============= The _CaptureLocal class extends eventlet.corolocal.local to provide thread-local data. Its attributes map to objects returned by the init() methods of the corresponding Capturer instances, and are unique to each thread. """ def __init__(self): """ Initialize a _CaptureLocal object in each thread. For each defined Capturer instance, calls the init() method of that object and stores it in an attribute with the same name as the Capturer. This is the magic that allows CaptureProxy to send output to the correct place. 
""" # Walk through all the capturers and initialize them for cap in Capturer._capturers.values(): setattr(self, cap.name, cap.init()) _caplocal = _CaptureLocal() def retrieve(): """ Retrieve captured output for the current thread. Returns a list of tuples, in the same order in which the corresponding Capturer instances were allocated. Each tuple contains the Capturer name, its description, and the captured output. The capture objects are reinitialized by this function. """ # Walk through all the capturers and retrieve their description # and value vals = [] for name in Capturer._caporder: # Get the capturer cap = Capturer._capturers[name] # Get the current value of the capturer and re-initialize it val = cap.retrieve(getattr(_caplocal, name)) setattr(_caplocal, name, cap.init()) # Push down the value and other important data if val: vals.append((cap.name, cap.desc, val)) # Return the captured values return vals class CaptureProxy(object): """ CaptureProxy ============ The CaptureProxy class delegates all attribute accesses to a thread-specific object initialized by the Capturer.init() method. The only local attribute of a CaptureProxy object is the _capname attribute, which stores the name of the Capturer the CaptureProxy is acting on behalf of. CaptureProxy objects are to be installed by the Capturer.install() method and uninstalled by the Capturer.uninstall() method. """ def __init__(self, capname): """ Initialize a CaptureProxy by storing the Capturer name. """ # Save the capturer name of interest super(CaptureProxy, self).__setattr__('_capname', capname) def __getattr__(self, attr): """ Delegate attribute accesses to the proxied, thread-specific capturing object. """ # Proxy out to the appropriate object return getattr(getattr(_caplocal, self._capname), attr) def __setattr__(self, attr, value): """ Delegate attribute updates to the proxied, thread-specific capturing object. """ # Proxy out to the appropriate object return setattr(getattr(_caplocal, self._capname), attr, value) def __delattr__(self, attr): """ Delegate attribute deletions to the proxied, thread-specific capturing object. """ # Proxy out to the appropriate stream return delattr(getattr(_caplocal, self._capname), attr) def install(): """ Install CaptureProxy objects for all defined Capturer instances. For each Capturer instance, the install() method will be called. """ global _installed # Do nothing if we're already installed if _installed: return # Remember that we've been installed _installed = True # Perform the install for cap in Capturer._capturers.values(): _saves[cap.name] = cap.install(CaptureProxy(cap.name)) def uninstall(): """ Uninstall CaptureProxy objects for all defined Capturer instances. For each Capturer instance, the uninstall() method will be called. """ global _installed global _saves # Do nothing if we haven't been installed if not _installed: return # Restore our saved objects for cap in Capturer._capturers.values(): cap.uninstall(_saves[cap.name]) # Reset our state _saves = {} _installed = False class StdStreamCapturer(Capturer): """ StdStreamCapturer ================= The StdStreamCapturer is a subclass of Capturer defined to capture the standard output and error streams, sys.stdout and sys.stderr, respectively. Output is captured using a StringIO(). """ def init(self): """ Initialize a Capturer object. Returns an instance of StringIO. """ # Create a new StringIO stream return StringIO() def retrieve(self, st): """ Retrieve data from a Capturer object. 
The ``st`` argument is a StringIO object allocated by the init() method. Returns a string with the contents of the StringIO object. """ # Retrieve the value of the StringIO stream return st.getvalue() def install(self, new): """ Install a CaptureProxy object. The ``new`` argument is a CaptureProxy object, which is installed in place of sys.stdout or sys.stderr (depending on the name used to instantiate the Capturer instance). Returns the original value of the stream being replaced. """ # Retrieve and return the old stream and install the new one old = getattr(sys, self.name) setattr(sys, self.name, new) return old def uninstall(self, old): """ Uninstall a CaptureProxy object. The ``old`` argument is the original value of the stream, as returned by the install() method. Re-installs that in place of the CaptureProxy object installed by install(). """ # Re-install the old stream setattr(sys, self.name, old) # Add capturers for stdout and stderr StdStreamCapturer('stdout', 'Standard Output') StdStreamCapturer('stderr', 'Standard Error')
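
# The following is an illustrative sketch, not part of the DTest API: it
# shows how a hypothetical Capturer subclass might capture output emitted
# through the standard "logging" module, following the init()/retrieve()/
# install()/uninstall() contract documented above.  The LogCapturer name
# and the handler wiring are assumptions made for this example; StringIO
# and Capturer are already available in this module's namespace.

import logging


class LogCapturer(Capturer):
    def init(self):
        # Per-thread buffer that the CaptureProxy delegates writes to
        return StringIO()

    def retrieve(self, st):
        # Return everything logged during the test
        return st.getvalue()

    def install(self, new):
        # Attach a handler that writes to the CaptureProxy; the handler
        # is the "old" value handed back to uninstall() later, since no
        # pre-existing object is being replaced here
        handler = logging.StreamHandler(new)
        logging.getLogger().addHandler(handler)
        return handler

    def uninstall(self, old):
        # Remove the handler added by install()
        logging.getLogger().removeHandler(old)

# Registering the sketch would look like the StdStreamCapturer calls
# above; it is left commented out so module behavior is unchanged:
# LogCapturer('log', 'Log Output')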