0.72 | 2022-03-22 09:21:34 +0100

  * Release 0.72.

0.71-4 | 2022-03-22 09:19:53 +0100

  * Make test `duplication-selection` independent of actual scheduling. (Benjamin Bannier, Corelight)

0.71-2 | 2022-02-02 12:47:42 +0100

  * Avoid assertion failure if the same test is required multiple times. (Benjamin Bannier, Corelight)

0.71 | 2021-11-01 12:04:45 -0700

  * Release 0.71.

0.70-6 | 2021-11-01 12:03:30 -0700

  * Add PartInitializer option. (Christian Kreibich, Corelight)
  * Support for global and test-part teardowns. (Christian Kreibich, Corelight)
  * Remove Python 2.7 from manifest. (Christian Kreibich, Corelight)

0.70 | 2021-10-28 12:14:34 -0700

  * Release 0.70.

0.69-36 | 2021-10-28 12:12:47 -0700

  * Prevent multiprocessing state leaks into the toplevel .tmp directory. (Benjamin Bannier, Corelight)
  * Check whether post-testing .tmp is empty. (Benjamin Bannier and Christian Kreibich, Corelight)
  * Clean up registered temp files in btest-rst-cmd. (Christian Kreibich, Corelight)
  * Add temp directory to the statefile-sorted test. (Christian Kreibich, Corelight)

0.69-31 | 2021-10-22 13:29:57 +0200

  * Remove use of Python 3.7 feature `time.time_ns`. (Benjamin Bannier, Corelight)
  * Use appropriate Sphinx versions for different Python versions. (Benjamin Bannier, Corelight)
  * GH-66: Fix typo preventing use of the configured Python version in CI. (Benjamin Bannier, Corelight)

0.69-27 | 2021-10-19 13:33:43 -0700

  * Remove uses of the deprecated `distutils` package. (Benjamin Bannier, Corelight)

0.69-25 | 2021-10-08 11:55:13 +0200

  * Improve backtick expansion in `btest.cfg`.
    Instead of naively splitting the command string into arguments, pass the command as-is to a subshell for parsing. Note that this changes the semantics of backticks slightly; however, they were ill-defined to begin with. (DJ Gregor, Corelight)

  * Minor README tweaks. (DJ Gregor, Corelight)

0.69-22 | 2021-09-24 10:57:31 +0200

  * Add pre-commit configuration. (Benjamin Bannier)
  * Reformat Python scripts with yapf. (Benjamin Bannier)
  * Reformat shell scripts with shellfmt. (Benjamin Bannier)
  * Fix a couple of issues tagged by shellcheck. (Benjamin Bannier)
  * More precise environment filtering in local-alternative-show-env.test. (Christian Kreibich, Corelight)
  * Fix parallel/interleaved execution between the test wrapup functions (such as testSucceeded) and the progress monitor. (Christian Kreibich, Corelight)
  * Switch the Sphinx build test to use "text" output format. (Christian Kreibich, Corelight)
  * Add a GitHub action exercising the test suite. (Benjamin Bannier, Corelight)
  * Make `btest-progress` more robust under concurrency. (Benjamin Bannier, Corelight)
  * Fix test `tests.sphinx.run-sphinx` for sphinx-3.5.4. (Benjamin Bannier, Corelight)
  * Show verbose failures when running the test target. (Benjamin Bannier, Corelight)
  * Fix a misspelled variable name in `btest-sphinx.py`. (Benjamin Bannier, Corelight)

0.69-4 | 2021-08-10 08:25:39 +0200

  * Add flag `-l` to print available tests. (Benjamin Bannier, Corelight)

0.69-2 | 2021-08-04 18:12:33 +0200

  * Always print a sorted list of failed tests to the state file. (Benjamin Bannier, Corelight)

0.69 | 2021-06-07 09:33:51 +0200

  * Release 0.69.

0.68-9 | 2021-06-07 09:31:03 +0200

  * Documentation updates. (Christian Kreibich, Corelight)
    - Add a section on running btest, to give some basic context.
    - Add more detail on btest invocation and test selection.
    - Update docs for download and installation, shifting focus to PyPI.
    - Move the TEST-START-NEXT description to its own section.
0.68-4 | 2021-06-01 18:34:04 +0200

  * Alternatives: Update `baselinedir` default after potentially changing testbase. (Arne Welzel, Corelight)

0.68-2 | 2021-06-01 08:53:42 +0200

  * Fix issue where the global configuration object wasn't updated for environment variables taken from alternatives. (Arne Welzel, Corelight)
  * Ignore environment variable when overridden by alternative. (Arne Welzel, Corelight)

0.68 | 2021-04-16 16:21:36 -0700

  * Release 0.68.

0.67-2 | 2021-04-16 16:21:14 -0700

  * Preserve CRLF line-terminators in test files, since the test itself may depend on their existence. Related to https://github.com/zeek/zeek/issues/1497 (Jon Siwek, Corelight)

0.67 | 2021-01-21 13:27:20 -0800

  * Release 0.67.

0.66-2 | 2021-01-21 13:26:58 -0800

  * Support BTest installation via CMake. (Christian Kreibich, Corelight)
    This copies the relevant files manually. Relying on setup.py is feasible but requires careful separation of build and source directories, so currently explicit file lists are easier to maintain.

0.66 | 2021-01-08 12:14:24 +0000

  * Bug fix to apply TEST-REQUIRES to tests duplicated with TEST-START-NEXT. (Robin Sommer, Corelight)

0.65 | 2020-12-07 09:32:03 +0000

  * Release 0.65.

0.64-21 | 2020-12-07 09:01:13 +0000

  * Default to `-j 1` to make CTRL-C work consistently. We now always use multiprocessing with a single process, with one exception: interactive baseline updates still run directly. (Christian Kreibich, Corelight)
  * Avoid leaking processes due to lacking sync manager shutdown. (Christian Kreibich, Corelight)
  * Speed up the threading test, since its built-in delays weren't actually required for the test. (Christian Kreibich, Corelight)

0.64-18 | 2020-12-07 08:44:55 +0000

  * Add "binary mode" to btest-diff. In binary mode, invoked with -b/--binary, btest-diff compares test output and baselines for equality only, never applies canonifiers, and doesn't apply our btest header when updating baselines.
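As a sketch of how a test might use the binary mode described in the 0.64-18 entry above: the generator command and the file name `output.bin` below are hypothetical, while `@TEST-EXEC` and `btest-diff -b` come from btest itself.

```
# @TEST-EXEC: my-generator > output.bin
# @TEST-EXEC: btest-diff -b output.bin
```

With `-b`, a byte-for-byte mismatch against the stored baseline fails the test; no canonifiers run in either direction.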
    (Christian Kreibich, Corelight)

0.64-16 | 2020-12-07 08:35:25 +0000

  * GH-36: Fix --abort-on-failure for expected failures. (Robin Sommer, Corelight)

0.64-14 | 2020-12-01 16:05:25 +0000

  * Add support for multiple baseline directories. (Robin Sommer, Corelight)
    This works by setting the environment variable BTEST_BASELINE_DIR to a colon-separated list of directories. They will be searched in order when looking for a baseline file to compare against. Updating a baseline will always put the new content into the first directory. One can now also generally set a different baseline directory through BTEST_BASELINE_DIR. See the README for more.
  * Require Python >= 3.5. (Jon Siwek, Corelight)

0.64-1 | 2020-11-11 12:57:12 +0000

  * Fix for Python 3. (Robin Sommer, Corelight)

0.64 | 2020-10-20 14:02:01 +0000

  * Fix canonification problem. (Robin Sommer, Corelight)

0.63 | 2020-10-20 08:32:28 +0000

  * Add support for a minimum BTest version requirement in the config file. If an entry called "MinVersion" is present in the "btest" section, it spells out a minimum version that BTest must have in order to run. If that condition isn't fulfilled, we exit with error code 1 and a corresponding error message on stderr. (Christian Kreibich, Corelight)
  * GH-29: Add new option -F/--abort-on-failure that will abort once at least one test has failed. (Robin Sommer, Corelight)
  * Documentation and code cleanup. (Christian Kreibich, Corelight)

0.62-13 | 2020-10-09 07:33:49 +0000

  * Canonify outputs when updating baselines via btest-diff. (Christian Kreibich, Corelight)
    This applies the same canonicalization btest-diff already performs when comparing test output against baselines also when updating those baselines (usually via btest -U/-u). This removes a bunch of noise from the baselines, including personal home directories, timestamps, etc. Since btest-diff doesn't know whether the baseline has undergone this canonicalization, it continues to canonicalize the baseline prior to diffing.
    To track whether it has canonicalized a baseline when updating, btest-diff now also prepends a header to the generated baseline that warns users about the fact that it is auto-generated. The presence of this header doubles as a marker for canonicalization.
  * Clean up btest-diff's shell code. (Christian Kreibich, Corelight)

0.62-7 | 2020-09-24 07:47:26 -0700

  * GH-26: Explicitly use the multiprocessing "fork" start method. (Jon Siwek, Corelight)
    The default start method on macOS in Python 3.8+ is "spawn", but that emits a RuntimeError with the current btest structuring.

0.62-5 | 2020-08-25 07:57:23 +0000

  * GH-11: Fix %DIR not being set correctly in cloned tests. (Jon Siwek, Corelight)
  * Sort XML attributes for more output stability across Python versions. (Jon Siwek, Corelight)
  * Use the Sphinx logging API directly for compatibility with Sphinx 2.0+. (Jon Siwek, Corelight)

0.62 | 2020-05-08 15:07:11 +0000

  * Release 0.62.

0.61-10 | 2020-05-08 15:05:46 +0000

  * Catch keyboard interrupt to abort orderly and immediately. The existing workers will still run to completion in the background. (Benjamin Bannier, Corelight)

0.61-8 | 2020-04-28 10:00:41 +0000

  * Add `--trace-file` option for recording a Chrome trace file. (Benjamin Bannier, Corelight)

0.61-2 | 2020-02-13 18:53:41 +0000

  * Provide a more apt description for btest. (Christopher M. Hobbs, Corelight)

0.61 | 2020-02-07 10:19:25 +0000

  * Release 0.61.

0.6-4 | 2020-02-07 10:18:14 +0000

  * Change the --retries option to not increment the test name on retries. (Jon Siwek, Corelight)
  * Fix retrying tests that use additional files. (Jon Siwek, Corelight)

0.60 | 2020-01-17 09:13:56 +0000

  * Show diagnostics for expected failures when --diagnostics-all is used. (Robin Sommer, Corelight)
  * Fix btest exit code to indicate success if the only failing tests are ones expected to fail. (Robin Sommer, Corelight)
  * Extend XML test case to cover the -j flag. (Jon Siwek, Corelight)
  * Fix XML output option -x to work with -j.
    (Jon Siwek, Corelight)

0.59-12 | 2019-09-09 11:30:26 +0000

  * Add timestamps to "btest-progress" output. They are added to stderr only, so that they will get recorded but not displayed during execution. The new option -T suppresses the timestamp. (DJ Gregor, Corelight)

0.59-8 | 2019-08-16 16:15:41 +0000

  * New option -z to retry any failed tests a few times to see if they might just be unstable. (Dev Bali, Corelight)

0.59-1 | 2019-08-09 09:11:32 -0700

  * Update username in `make upload` target. (Jon Siwek, Corelight)

0.59 | 2019-08-01 12:04:06 -0700

  * Release 0.59.

0.58-12 | 2019-08-01 12:02:01 -0700

  * Drop use of Python 2.6 for Travis CI tests. (Jon Siwek, Corelight)

0.58-11 | 2019-06-27 16:54:17 +0000

  * Add clarity to a difficult-to-understand error message. (Sam Zaydel, Corelight)

0.58-9 | 2019-06-17 20:17:32 -0700

  * Update Travis config for the bro-to-zeek renaming. (Daniel Thayer)

0.58-7 | 2019-05-24 16:16:07 +0000

  * Use the more portable platform.system() for determining the platform name. (woot4moo)

0.58-5 | 2018-12-07 16:32:52 -0600

  * Update github/download link. (Jon Siwek, Corelight)

0.58-4 | 2018-11-29 16:55:33 -0600

  * Add TEST-PORT command and PortRange option. (Jon Siwek, Corelight)
    These control assignment of TCP ports to environment variables for use during test execution. This helps in writing unit tests that can run concurrently, or simply reduces the risk of a unit test failing due to a port already being used by some external process.

0.58 | 2018-05-21 22:32:21 +0000

  * Release 0.58.

0.57-32 | 2018-05-21 22:31:42 +0000

  * Show the number of skipped tests even when none fail. (Daniel Thayer)
  * Delete a test's temporary directory if the test is skipped, unless the user wants to keep it.
    (Corelight)

0.57-28 | 2018-05-08 10:03:20 -0500

  * BIT-1735: Open btest files as UTF-8 if the locale has no default encoding. (Corelight)
  * Normalize output of a Sphinx-related unit test. (Corelight)
  * Improve a Unicode decode error message. (Daniel Thayer)

0.57-24 | 2018-04-18 14:56:56 -0700

  * Improving console output. (Robin Sommer)
    - When showing a progress message, always clear to the end of the line to delete any content that a previous, longer message may have left.
    - Ensure the cursor is turned back on at exit.

0.57-23 | 2018-03-15 14:58:20 -0700

  * Configure Travis CI email recipients and build branches. (Daniel Thayer)

0.57-21 | 2018-02-05 15:05:43 -0800

  * Add a .travis.yml file. (Daniel Thayer)

0.57-19 | 2018-01-19 15:14:02 -0800

  * Fix a bug when setting the base dir using a relative path. Addresses BIT-1892. (Daniel Thayer)
  * Improve testing of setting a non-default base directory. (Daniel Thayer)

0.57-16 | 2017-11-17 15:03:16 -0800

  * Fix "btest -R" to preserve sorted output ordering. (Daniel Thayer)
  * Add more tests to "doc.test". (Daniel Thayer)

0.57-13 | 2017-10-23 15:35:21 -0700

  * Tweak -A|--show-all to use only coloring, not cursor navigation. (Christian Kreibich)
  * Fix the doc.test. (Daniel Thayer)
  * Improve the console.test and document the "--show-all" option. (Daniel Thayer)
  * Added documentation of the "--show-all" option. (Daniel Thayer)
  * Allow multiple TEST-DOC keywords in a test file. (Daniel Thayer)

0.57-7 | 2017-10-16 12:18:28 -0700

  * Fix the console.test to work on FreeBSD and macOS. (Christian Kreibich/Daniel Thayer)

0.57-5 | 2017-10-06 15:01:23 -0700

  * Additional control over TTY-based output handling. This adds -A|--show-all, which makes console output preserve output lines for passing/skipped tests. (Christian Kreibich)
  * Fix btest-rst-cmd script to remove tmp files. (Daniel Thayer)
  * Added TMPDIR to btest.cfg so that temporary files are stored in a local directory instead of a system-wide tmp directory.
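The TMPDIR change just described can be sketched as a `btest.cfg` fragment. The `[environment]` section and the `%(testbase)s` interpolation variable are standard btest.cfg mechanisms; the exact value shown is an assumption about what this entry added:

```
[environment]
TMPDIR=%(testbase)s/.tmp
```

Every variable defined in `[environment]` is exported into each test's environment, so tools honoring TMPDIR then write their temporary files under the test base rather than /tmp.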
    (Daniel Thayer)

0.57 | 2017-05-15 16:13:33 -0700

  * Release 0.57.

0.56-22 | 2017-05-15 16:13:23 -0700

  * Fixing broken version numbers. (Robin Sommer)

0.56-21 | 2017-05-15 16:05:18 -0700

  * Catching CTRL-C and cleaning up. (Robin Sommer)

0.56-20 | 2017-03-21 17:56:10 -0700

  * Catching exception that wasn't caught. (Robin Sommer)

0.56-19 | 2017-03-03 12:50:42 -0800

  * Fix btest-progress output to stderr when run from btest. (Daniel Thayer)

0.56-17 | 2017-03-02 16:24:31 -0800

  * Cosmetics for progress output: delete it before asking for baseline updates. (Robin Sommer)
  * Fixing missing output for back-to-back btest-progress calls. Addresses BIT-1800. (Robin Sommer)
  * Fix for augmented output to console. (Robin Sommer)
  * Send btest-progress output to stderr as well. (Robin Sommer)

0.56-13 | 2017-02-23 10:14:56 -0800

  * Prevent socket path length from exceeding system limits. Addresses BIT-862. (Daniel Thayer)

0.56-11 | 2017-02-03 12:38:01 -0800

  * Adding btest-progress to setup.py. (Robin Sommer)

0.56-10 | 2017-01-25 13:04:07 -0800

  * Fix a failing test on FreeBSD. (Daniel Thayer)
  * Fix a bug in btest-progress when using the "-q" option. (Daniel Thayer)
  * Fix some trivial errors in documentation and Makefile. (Daniel Thayer)
  * Add 'upload' Makefile target to upload to PyPI. (Jon Siwek)

0.56-5 | 2017-01-24 08:45:29 -0800

  * Bugfix for recent btest-progress changes. (Robin Sommer)

0.56-4 | 2017-01-23 19:59:59 -0800

  * New utility btest-progress to display progress messages while a test is executing. These messages appear in real time while the test is still running. When stdout is a TTY, the progress messages are incorporated into the colored one-line status message. By default, btest-progress also prints the message to a test's standard output. That can be suppressed by giving it the option -q. (Robin Sommer)
  * Experimental automatic generation of test reference documentation.
    The new command-line option "-R <format>" prints out a list of all tests in either Markdown (format 'md') or reStructuredText (format 'rst'). The list includes a documentation string with each test that gets defined through a new "@TEST-DOC: <docstring>" directive. This is experimental. (Robin Sommer)
  * Fix pylint warnings. (Robin Sommer)

0.56 | 2016-10-31 14:23:57 -0700

  * Release 0.56.

0.55-6 | 2016-10-31 14:23:24 -0700

  * Python 3 compatibility fixes for btest-sphinx.py. (Daniel Thayer)

0.55-4 | 2016-10-25 09:31:25 -0700

  * Fix diff-max-lines.test to work on OpenBSD. (Daniel Thayer)

0.55-2 | 2016-10-10 08:18:54 -0700

  * Fix the btest-rst-cmd script to work with Python 3. (Daniel Thayer)

0.55 | 2016-02-23 14:02:35 -0800

  * Release 0.55.

0.54-65 | 2016-02-23 14:00:10 -0800

  * Fine-tuning diagnostic output, which needlessly stripped leading whitespace. (Robin Sommer)

0.54-63 | 2016-02-07 19:39:54 -0800

  * Extending --groups to allow running everything *except* a set of groups. (Robin Sommer)
  * Fix portability issue with use of mktemp. (Daniel Thayer)

0.54-60 | 2015-11-16 07:30:38 -0800

  * Updates for Python 3. (Fabian Affolter)

0.54-58 | 2015-10-01 16:04:51 -0700

  * Improved test of TEST_DIFF_FILE_MAX_LINES. (Daniel Thayer)
  * Added the ability for a user to override the default number of lines to show for diffs by setting the environment variable TEST_DIFF_FILE_MAX_LINES. Reduced the default to 100. (Daniel Thayer)
  * When no baseline exists, changed btest-diff to always show the entire file. (Daniel Thayer)

0.54-55 | 2015-08-25 07:47:22 -0700

  * Port to Python 3. (Daniel Thayer)
  * Various cleanups, bug fixes, simplifications, and smaller improvements. (Daniel Thayer)
  * Improve and extend the test suite substantially. (Daniel Thayer)

0.54-9 | 2015-07-03 18:21:52 -0700

  * Make sure IgnoreDirs works with toplevel globbing. (Robin Sommer)

0.54-8 | 2015-07-03 16:31:24 -0700

  * Expanding globs in TestDirs, relative to TestBase.
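The TestDirs globbing just mentioned can be illustrated with a hypothetical `btest.cfg` fragment; `TestDirs` is a real `[btest]` option naming the directories btest scans for tests, and the directory names here are made up:

```
[btest]
TestBase  = .
TestDirs  = doc examples/*
```

With the change above, the `examples/*` glob is expanded relative to TestBase, so every subdirectory of `examples/` is searched for tests alongside `doc`.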
    (Robin Sommer)

0.54-7 | 2015-06-22 13:07:42 -0700

  * Allow BTEST_TEST_BASE overriding in alternative configuration. (Vlad Grigorescu)
  * Create README symlink for GitHub rendering. (Vlad Grigorescu)

0.54-1 | 2015-06-18 09:08:34 -0700

  * Add support for the BTEST_TEST_BASE environment variable for overriding the test base directory. (Robin Sommer)

0.54 | 2015-03-02 17:22:22 -0800

  * Release 0.54.

0.53-6 | 2015-03-02 17:21:26 -0800

  * Improve documentation of the timing functionality. (Daniel Thayer)
  * Add a new section to the documentation that lists the BTest prerequisites. (Daniel Thayer)
  * Add a warning when btest cannot create a timing baseline. (Daniel Thayer)

0.53-3 | 2015-01-22 07:25:01 -0800

  * Fix some typos in the README. (Daniel Thayer)

0.53-1 | 2014-11-11 13:21:10 -0800

  * In diagnostics, do not show verbose output for tests known to fail. (Robin Sommer)

0.53 | 2014-07-22 17:36:24 -0700

  * Release 0.53.

0.52-2 | 2014-07-22 17:36:15 -0700

  * Update MANIFEST.in and setup.py to fix packaging. (Jon Siwek)

0.52 | 2014-03-13 14:05:44 -0700

  * Release 0.52.

0.51-14 | 2014-03-13 14:05:36 -0700

  * Fix a link in the README. (Jon Siwek)

0.51-12 | 2014-02-11 16:12:44 -0800

  * Work-around for systems reporting that a socket path is too long. Addresses BIT-862. (Robin Sommer)

0.51-11 | 2014-02-11 15:37:40 -0800

  * Fix for Linux systems that have the perf tool but don't support measuring instructions. (Robin Sommer)
  * No longer tracking tests that are expected to fail in the state file. (Robin Sommer)
  * Refactoring the timing code to no longer execute at all when not needed. (Robin Sommer)

0.51-7 | 2014-02-06 21:06:40 -0800

  * Fix for platforms that don't support timing measurements yet. (Robin Sommer)

0.51-6 | 2014-02-06 18:19:08 -0800

  * Adding a timing mode that records test execution times per host. This is for catching regressions (or improvements :) when execution times diverge significantly. Linux only for now. See the README for more information.
    (Robin Sommer)
  * Adding color to test status when writing to console. (Robin Sommer)
  * A bit of refactoring to define the status messages ("ok", "failed") only at a single location. Also added a note when a test declared as expecting failure in fact succeeds. (Robin Sommer)

0.51-2 | 2013-11-17 20:21:08 -0800

  * New keyword ``TEST-KNOWN-FAILURE`` to mark tests that are currently known to fail. (Robin Sommer)

0.51-1 | 2013-11-11 13:36:36 -0800

  * Fixing bug with tests potentially being ignored when using alternatives. (Robin Sommer)

0.51 | 2013-10-07 17:29:50 -0700

  * Updating copyright notice. (Robin Sommer)

0.5-1 | 2013-10-07 17:26:30 -0700

  * Polishing how included commands and files are shown. (Robin Sommer)
    - Enabling CSS styling of command lines and shown file names via the new "btest-include" and "btest-cmd" classes.
    - Fix to enable showing line numbers in btest-sphinx generated output.
    - Fix to enable Pygments coloring in output.

0.5 | 2013-09-20 14:48:01 -0700

  * Fix the btest-rst-pipe script. (Daniel Thayer)
  * A set of documentation fixes, clarifications, and extensions. (Daniel Thayer)
  * A set of changes to Sphinx commands and directives. (Robin Sommer)
    btest-rst-*:
    - Always show line numbers.
    - Highlight the command executed.
    - rst-cmd-include gets an option -n to include only up to i lines.
    - rst-cmd-include prefixes output with "" to show what we're including.
    btest-include:
    - Set the Pygments language automatically if we show a file with an extension we know (in particular ".bro").
    - Prefix output with "" to show what we're including.

0.4-63 | 2013-08-28 21:10:39 -0700

  * btest-sphinx now provides a new directive btest-include. This works like literalinclude (with all its options) but it also saves a version of the included text as a test to detect changes. (Robin Sommer)

0.4-60 | 2013-08-28 18:54:51 -0700

  * Fix typos and reST formatting in README. (Daniel Thayer)
  * Fix a couple of error messages.
    (Daniel Thayer)
  * Fixed a reference to a non-existent variable which was causing the "-w" option to have no effect. (Daniel Thayer)
  * Test portability fix. (Robin Sommer)

0.4-55 | 2013-08-22 16:09:21 -0700

  * New "Sphinx mode" for BTest, activated with -S. This allows capturing a test's diagnostic output when running from inside Sphinx; the output will now be inserted into the generated document. (Robin Sommer)
  * Adding an option -n to btest-rst-cmd that truncates output longer than N lines. (Robin Sommer)
  * Adding a PartFinalizer that runs a command at the completion of each test part. (Robin Sommer)

0.4-51 | 2013-08-22 10:36:34 -0700

  * Improve cleanup of processes that don't terminate with btest-bg-wait. (Jon Siwek)

0.4-49 | 2013-08-13 18:43:03 -0700

  * Fixing test portability problems. (Daniel Thayer)
  * Adding TEST_BASE environment variable. The existing TESTBASE wasn't always behaving as expected and wasn't documented to begin with. (Robin Sommer)

0.4-43 | 2013-08-12 16:04:53 -0700

  * Bugfix for ignored tests. (Robin Sommer)

0.4-42 | 2013-07-31 20:46:30 -0700

  * Adding support for "parts": One can split a single test across multiple files by adding a numerical ``#<n>`` postfix to their names, where each ``<n>`` represents a separate part of the test. ``btest`` will combine all of a test's parts in numerical order and execute them subsequently within the same sandbox. Example in the README. (Robin Sommer)
  * When running a command, TEST_PART contains the current part number. (Robin Sommer)
  * Extending Sphinx support. (Robin Sommer)
    - Adding tests for Sphinx functionality.
    - Support for parts in Sphinx directives. If multiple btest directives reference the same test name, each will turn into a part of a single test.
    - Internal change restructuring the btest Sphinx directive. We now process it in two passes: one to save the test at parse time, and one later to execute it once everything has been parsed.
    - Adding Sphinx sandbox for testing.
  * Fix for tests returning no output to render at all. (Robin Sommer)

0.4-28 | 2013-07-17 21:56:18 -0700

  * btest-diff now passes the name of the file under consideration on to canonifiers. (Robin Sommer)

0.4-27 | 2013-07-14 21:19:59 -0700

  * When searching for tests, BTest now ignores a directory if it finds a file ".btest-ignore" in there. (Robin Sommer)

0.4-26 | 2013-07-08 20:46:22 -0700

  * Fixing bug with @TEST-START-NEXT naming. (Robin Sommer)

0.4-25 | 2013-07-08 13:25:50 -0700

  * A test suite for btest. Using, of course, btest. "make test" will test most of btest's features. The main missing piece is testing the Sphinx support; we will add that next. (Robin Sommer)
  * When creating directories, we now also create intermediaries. That in particular means that "@TEST-START-FILE a/b/c" now creates a directory "a/b" automatically and puts the file in there. (Robin Sommer)
  * IgnoreDirs now also works for subdirectories. (Robin Sommer)
  * Documentation updates. (Robin Sommer)
  * Adding "Initializer" option, which runs a command before each test. (Robin Sommer)
  * Adding "CommandPrefix" option that changes the naming of all btest commands by replacing the "@TEST-" prefix with a custom string. (Robin Sommer)
  * The default configuration file can be overridden via the BTEST_CFG environment variable. (Robin Sommer)
  * s/bro-ids.org/bro.org/g (Robin Sommer)
  * Bugfix for -j without a number. (Robin Sommer)
  * New @TEST-ALTERNATIVE that activates tests only for the given alternative. Renamed @TEST-NO-ALTERNATIVE to @TEST-NOT-ALTERNATIVE, and allowing "default" for both @TEST-ALTERNATIVE and @TEST-NOT-ALTERNATIVE to specify the case that BTest runs without any alternative given. (Robin Sommer)
  * Fix for alternative names containing white space. (Robin Sommer)

0.4-14 | 2013-01-23 18:11:22 -0800

  * Fixing links in README and removing TODOs. (Robin Sommer)

0.4-13 | 2013-01-23 14:33:23 -0800

  * Allowing use of -j without a value. BTest then uses the number of CPU cores as reported by the OS.
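The CPU-core fallback for a bare `-j` can be demonstrated without btest itself. This sketch assumes btest queries the OS via Python's standard multiprocessing module; the changelog only says "as reported by the OS":

```shell
# Print the core count that a bare "btest -j" would fall back to.
python3 -c 'import multiprocessing; print(multiprocessing.cpu_count())'
```

Passing an explicit value, as in `btest -j 4`, caps the number of tests run in parallel instead.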
    (Robin Sommer)

0.4-11 | 2013-01-21 17:50:40 -0800

  * Adding a new "alternative" concept that combines filters and substitutions, and adds per-alternative environment variables. (Robin Sommer)
    Instead of defining filters and substitutions separately, one now specifies an alternative configuration to run with "-A <alternative>", which then checks for both "[substitutions-<alternative>]" and "[filter-<alternative>]" sections. In addition, "[environment-<alternative>]" allows defining alternative-specific environment variables. The old filter/substitution options -F and -s are gone. The sections for substitutions are renamed to "[substitutions-<alternative>]" from "[subst-<alternative>]".

0.4-10 | 2013-01-07 09:45:35 -0800

  * btest now sets a new environment variable TEST_VERBOSE, giving the path of a file where a test can record further information about its execution that will be included with btest's ``--verbose`` output. (Robin Sommer)

0.4-9 | 2012-12-20 12:20:44 -0800

  * Documentation fixes/clarifications. (Daniel Thayer)
  * Fix the btest "-c" option, which didn't work when the specified config file was not in the current working directory. (Daniel Thayer)

0.4-6 | 2012-11-08 16:33:51 -0800

  * Putting a limit on how many input lines btest-diff shows. (Robin Sommer)

0.4-5 | 2012-11-01 16:14:29 -0700

  * Making the Sphinx module tolerant of docutils version changes. (Robin Sommer)

0.4-4 | 2012-09-25 06:24:59 -0700

  * Fix a couple of reST formatting problems. (Daniel Thayer)

0.4-2 | 2012-09-24 11:41:06 -0700

  * Add option -x to output test results in an XML (JUnit-like) format. (Jon Siwek)

0.4 | 2012-06-15 15:15:13 -0700

  * Remove code to expand environment variables on the command line. (Not needed because the command line is just passed to the shell.) (Daniel Thayer)
  * Clarify explanation about expansion of environment variables. (Daniel Thayer)
  * Fix errors in README and btest help output; added documentation for the -q option. (Daniel Thayer)
  * Fixed a bug in btest where it was looking for "filters-" (instead of "filter-") in the btest config file.
    (Daniel Thayer)

0.31-45 | 2012-05-24 16:43:14 -0700

  * Correct typos in documentation. (Daniel Thayer)
  * Failed tests are now only recorded into the state file when we're not updating. That allows running "btest -r" repeatedly while updating baselines in between. (Robin Sommer)
  * Experimental Sphinx directive to write a btest within a Sphinx document. See README for more information.
  * Fixing typos, plus a console output tweak. (Robin Sommer)
  * Option -q now implies -b as well. (Robin Sommer)

0.31-33 | 2012-05-13 17:08:15 -0700

  * New command to copy a file into a test's directory. ``@TEST-COPY-FILE: <file>`` copies the given file into the test's directory before the test is run. If ``<file>`` is a relative path, it's interpreted relative to BTest's base directory. Environment variables in ``<file>`` will be replaced if enclosed in ``${..}``. This command can be given multiple times. (Robin Sommer)
  * Suppressing error messages when btest-diff can't remove the diag file. (Robin Sommer)
  * Adding option -q/--quiet to suppress informational non-error output. (Robin Sommer)
  * Option -F also takes a comma-separated list to specify multiple filters, rather than having to give -F multiple times. (Robin Sommer)

0.31-28 | 2012-05-06 21:27:15 -0700

  * Separating the semantics of groups and thread serialization into separate options. -g still specifies @TEST-GROUPs that are to be executed, but these groups no longer control which tests get serialized in a parallel execution. For that, there's a new "@TEST-SERIALIZE: <tag>" command that takes a tag and then makes sure all tests with the same tag are run within the same thread. (Robin Sommer)
  * @TEST-GROUP can now be given multiple times to assign a test to a set of groups. (Robin Sommer)
  * Extended -g to accept a comma-separated list of group names to run more than one test group. (Robin Sommer)
  * New output handler for console output. This output is now the default when stdout is a terminal.
It prints out a compressed output that updates as btest goes through; it also indicates the progress so far. If btest's output is redirected to a non-terminal, it switches back to the old style. (Robin Sommer) * New test command @TEST-NO-FILTER: This allows to ignore a test when running a specific filter. (Robin Sommer) * Changing the way filters are activated. -F now activates only the given filter, but doesn't run the standard tests in addition. But one can now give -F a comma-separated list of filters to activate them all, and refer to the standard tests without filter as ``-``. (Robin Sommer) * Fix to allow numbered tests to be given individually on the command line. (E.g., integer.geq-3 for a file that contains three tests). (Robin Sommer) 0.31-23 | 2012-04-16 18:10:02 -0700 * A number of smaller fixes for bugs, plus polishing, caused by the recent restructuring. (Robin Sommer) * Removing the error given when using -r with tests on the command line. It's unnecessary and confusing compared to when listing tests in btest.cfg. (Robin Sommer) * Adding a new "finalizer" option. ``Finalizer`` An executable that will be executed each time any test has successfully run. It runs in the same directory as the test itself and receives the name of the test as its parameter. The return value indicates whether the test should indeed be considered succeeded. By default, there's no finalizer set. (Robin Sommer) * btest is now again overwriting old diag files instead of appending (i.e., back to as it used to be). (Robin Sommer) * Diag output is now line-buffered. (Daniel Thayer) 0.31-13 | 2012-03-13 15:59:51 -0700 * Adding new option -r that reruns all tests that failed last time. btest now always records all failed tests in a file called. (Robin Sommer) * Internal restructuring to factor output out into subclasses. (Robin Sommer) * Adding parallel test execution to btest. (Robin Sommer) - A new option "-j " allows to run up to tests in parallel.
- A new @TEST-GROUP directive allows to group tests that can't be parallelized. All tests of the same group will be executed sequentially. - A new option "-g " allows to run only tests of a certain group, or with "-g -" all tests that don't have a group. 0.31-2 | 2012-01-25 16:58:29 -0800 * Don't add btest's path to PATH anymore. (Jon Siwek) 0.31 | 2011-11-29 12:11:49 -0600 * Submodule README conformity changes. (Jon Siwek) 0.3 | 2011-10-25 19:58:26 -0700 * More graceful error handling at startup if btest.cfg not found. (Robin Sommer) * Python 2.4 compat changes. (Jon Siwek) * When in brief mode, btest-diff now shows full output if we don't have a baseline yet. (Robin Sommer) * Adding executable permission back to script. (Robin Sommer) * Cleaning up distribution. (Robin Sommer) 0.22-28 | 2011-09-15 15:18:11 -0700 * New environment variable TEST_DIFF_BRIEF. If set, btest-diff no longer includes a mismatching file's full content in the diagnostic output. This can be useful if the file being compared is very large. (Robin Sommer) 0.22-27 | 2011-08-12 22:56:12 -0700 * Fix btest-bg-wait's kill trap and -k option. (Jon Siwek) 0.22-18 | 2011-07-23 11:54:07 -0700 * A new option -u for interactively updating baselines. * Teach btest's TEST-START-FILE to make subdirectories (Jon Siwek) * Output polishing. (Robin Sommer) * Have distutils install 'btest-setsid' script. (Jon Siwek) * A portable setsid. (Robin Sommer) * Fixes for background execution of processes. * Fixing exit codes. (Robin Sommer) 0.22-6 | 2011-07-19 17:38:03 -0700 * Teach btest's TEST-START-FILE to make subdirectories (Jon Siwek) 0.22-5 | 2011-05-02 08:41:34 -0700 * A number of bug fixes, and output polishing. (Robin Sommer) * More robust background execution by btest-bg-*. (Robin Sommer) 0.22-4 | 2011-03-29 21:38:13 -0700 * A test command can now signal to btest that even if it fails subsequent test commands should still run by returning exit code 100.
btest-diff uses this to continue in the case that no baseline has yet been established. * New test option @TEST-REQUIRES for running a test conditionally. See the README for more information. 0.22-2 | 2011-03-03 21:44:18 -0800 * Two new helper scripts for spawning processes in the background. See README for more information. * btest-diff can now deal with files specified with paths. 0.22 | 2011-02-08 14:06:13 -0800 * BTest is now hosted along with the other Bro repositories on git.bro-ids.org. 0.21 | 2011-01-09 21:29:18 -0800 * In btest.cfg, option values can now include commands to execute in backticks. Example: [environment] CC=clang -emit-llvm -g `hilti-config --cflags` * Limiting substitutions to replacing whole words. * Adding "substitutions". Substitutions are similar to filters, yet they do not adapt the input but the command line being executed. See README for more information. * Instead of giving a test's file name on the command line, one can now also use its "dotted" name as it's printed out when btest is running (e.g., "foo.bar"). That allows for easier copy/paste. * Starting CHANGES. btest-0.72/COPYING: Copyright (c) 1995-2013, The Regents of the University of California through the Lawrence Berkeley National Laboratory and the International Computer Science Institute. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. (2) Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
(3) Neither the name of the University of California, Lawrence Berkeley National Laboratory, U.S. Dept. of Energy, International Computer Science Institute, nor the names of contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Note that some files in the distribution may carry their own copyright notices. 
btest-0.72/MANIFEST: # file GENERATED by distutils, do NOT edit CHANGES COPYING MANIFEST MANIFEST.in Makefile README VERSION btest btest-ask-update btest-bg-run btest-bg-run-helper btest-bg-wait btest-diff btest-progress btest-setsid btest.cfg.example setup.py Baseline/examples.t4/dots Baseline/examples.t5/output Baseline/examples.t5-2/output Baseline/examples.t6/output Baseline/examples.t7/output Baseline/examples.unstable/output examples/alternative examples/my-filter examples/t1 examples/t2 examples/t3.sh examples/t4.awk examples/t5.sh examples/t6.sh examples/t7 examples/t7.sh#1 examples/t7.sh#2 examples/t7.sh#3 examples/unstable.sh examples/sphinx/.gitignore examples/sphinx/Makefile examples/sphinx/btest.cfg examples/sphinx/conf.py examples/sphinx/index.rst examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#1 examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#2 examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#3 examples/sphinx/tests/sphinx/hello-world.btest examples/sphinx/tests/sphinx/hello-world.btest#2 examples/sphinx/tests/sphinx/hello-world.btest#3 sphinx/btest-diff-rst sphinx/btest-rst-cmd sphinx/btest-rst-include sphinx/btest-rst-pipe sphinx/btest-sphinx.py testing/.gitignore testing/Makefile testing/btest.cfg testing/btest.tests.cfg testing/Baseline/tests.abort-on-failure/output testing/Baseline/tests.abort-on-failure-with-only-known-fails/output testing/Baseline/tests.alternatives-environment/child-output testing/Baseline/tests.alternatives-environment/output testing/Baseline/tests.alternatives-filter/child-output testing/Baseline/tests.alternatives-filter/output testing/Baseline/tests.alternatives-keywords/output testing/Baseline/tests.alternatives-substitution/child-output
testing/Baseline/tests.alternatives-substitution/output testing/Baseline/tests.alternatives-testbase/output testing/Baseline/tests.brief/out1 testing/Baseline/tests.brief/out2 testing/Baseline/tests.btest-cfg/abspath testing/Baseline/tests.btest-cfg/nopath testing/Baseline/tests.btest-cfg/relpath testing/Baseline/tests.console/output testing/Baseline/tests.crlf-line-terminators/crlfs.dat testing/Baseline/tests.crlf-line-terminators/input testing/Baseline/tests.diag/output testing/Baseline/tests.diag-all/output testing/Baseline/tests.diag-file/diag testing/Baseline/tests.diag-file/output testing/Baseline/tests.diff-brief/output testing/Baseline/tests.diff-max-lines/output1 testing/Baseline/tests.diff-max-lines/output2 testing/Baseline/tests.doc/md testing/Baseline/tests.doc/rst testing/Baseline/tests.environment/output testing/Baseline/tests.exit-codes/out1 testing/Baseline/tests.exit-codes/out2 testing/Baseline/tests.groups/output testing/Baseline/tests.ignore/output testing/Baseline/tests.known-failure/output testing/Baseline/tests.known-failure-and-success/output testing/Baseline/tests.known-failure-succeeds/output testing/Baseline/tests.list/out testing/Baseline/tests.macros/output testing/Baseline/tests.measure-time/output testing/Baseline/tests.measure-time-options/output testing/Baseline/tests.multiple-baseline-dirs/fail.log testing/Baseline/tests.parts/output testing/Baseline/tests.parts-error-part/output testing/Baseline/tests.parts-error-start-next/output testing/Baseline/tests.parts-glob/output testing/Baseline/tests.parts-initializer-finalizer/output testing/Baseline/tests.parts-skipping/output testing/Baseline/tests.parts-teardown/output testing/Baseline/tests.progress/output testing/Baseline/tests.progress-back-to-back/output testing/Baseline/tests.quiet/out1 testing/Baseline/tests.quiet/out2 testing/Baseline/tests.requires/output testing/Baseline/tests.requires-with-start-next/output testing/Baseline/tests.rerun/output 
testing/Baseline/tests.sphinx.rst-cmd/output testing/Baseline/tests.sphinx.run-sphinx/_build.text.index.txt testing/Baseline/tests.start-file/output testing/Baseline/tests.start-next/output testing/Baseline/tests.start-next-dir/output testing/Baseline/tests.statefile/mystate1 testing/Baseline/tests.statefile/mystate2 testing/Baseline/tests.statefile-sorted/mystate testing/Baseline/tests.teardown/output testing/Baseline/tests.testdirs/out1 testing/Baseline/tests.testdirs/out2 testing/Baseline/tests.threads/output.j0 testing/Baseline/tests.threads/output.j1 testing/Baseline/tests.threads/output.j5 testing/Baseline/tests.tracing/output testing/Baseline/tests.unstable/output testing/Baseline/tests.unstable-dir/output testing/Baseline/tests.verbose/output testing/Baseline/tests.versioning/output testing/Baseline/tests.xml/output-j2.xml testing/Baseline/tests.xml/output.xml testing/Files/local_alternative/btest.tests.cfg testing/Files/local_alternative/Baseline/tests.local-alternative-show-env/output testing/Files/local_alternative/Baseline/tests.local-alternative-show-test-baseline/output testing/Files/local_alternative/Baseline/tests.local-alternative-show-testbase/output testing/Files/local_alternative/tests/local-alternative-found.test testing/Files/local_alternative/tests/local-alternative-show-env.test testing/Files/local_alternative/tests/local-alternative-show-test-baseline.test testing/Files/local_alternative/tests/local-alternative-show-testbase.test testing/Scripts/diff-remove-abspath testing/Scripts/dummy-script testing/Scripts/script-command testing/Scripts/strip-iso8601-date testing/Scripts/strip-test-base testing/Scripts/test-filter testing/Scripts/test-perf testing/tests/abort-on-failure-with-only-known-fails.btest testing/tests/abort-on-failure.btest testing/tests/alternatives-baseline-dir.test testing/tests/alternatives-environment.test testing/tests/alternatives-filter.test testing/tests/alternatives-keywords.test 
testing/tests/alternatives-overwrite-env.test testing/tests/alternatives-reread-config-baselinedir.test testing/tests/alternatives-reread-config.test testing/tests/alternatives-substitution.test testing/tests/alternatives-testbase.test testing/tests/baseline-dir-env.test testing/tests/basic-fail.test testing/tests/basic-succeed.test testing/tests/binary-mode.test testing/tests/brief.test testing/tests/btest-cfg.test testing/tests/canonifier-cmdline.test testing/tests/canonifier-conversion.test testing/tests/canonifier-fail.test testing/tests/canonifier.test testing/tests/console.test testing/tests/copy-file.test testing/tests/crlf-line-terminators.test testing/tests/diag-all.test testing/tests/diag-file.test testing/tests/diag.test testing/tests/diff-brief.test testing/tests/diff-max-lines.test testing/tests/diff.test testing/tests/doc.test testing/tests/environment.test testing/tests/exit-codes.test testing/tests/finalizer.test testing/tests/groups.test testing/tests/ignore.test testing/tests/initializer.test testing/tests/known-failure-and-success.btest testing/tests/known-failure-succeeds.btest testing/tests/known-failure.btest testing/tests/list.test testing/tests/macros.test testing/tests/measure-time-options.test testing/tests/measure-time.tests testing/tests/multiple-baseline-dirs.test testing/tests/parts-error-part.test testing/tests/parts-error-start-next.test testing/tests/parts-glob.test testing/tests/parts-initializer-finalizer.test testing/tests/parts-skipping.tests testing/tests/parts-teardown.test testing/tests/parts.tests testing/tests/ports.test testing/tests/progress-back-to-back.test testing/tests/progress.test testing/tests/quiet.test testing/tests/requires-with-start-next.test testing/tests/requires.test testing/tests/rerun.test testing/tests/start-file.test testing/tests/start-next-dir.test testing/tests/start-next-naming.test testing/tests/start-next.test testing/tests/statefile-sorted.test testing/tests/statefile.test 
testing/tests/teardown.test testing/tests/test-base.test testing/tests/testdirs.test testing/tests/threads.test testing/tests/tmps.test testing/tests/tracing.test testing/tests/unstable-dir.test testing/tests/unstable.test testing/tests/verbose.test testing/tests/versioning.test testing/tests/xml.test testing/tests/sphinx/rst-cmd.sh testing/tests/sphinx/run-sphinx btest-0.72/MANIFEST.in: include CHANGES include COPYING include MANIFEST include MANIFEST.in include Makefile include README include VERSION include btest.cfg.example include setup.py graft Baseline graft examples graft testing btest-0.72/Makefile: VERSION=`cat VERSION` .PHONY: all all: .PHONY: dist dist: rm -rf build/*.tar.gz python3 setup.py sdist -d build @printf "Package: "; echo build/*.tar.gz .PHONY: upload upload: twine-check dist twine upload -u zeek build/btest-$(VERSION).tar.gz .PHONY: test test: @(cd testing && make) .PHONY: twine-check twine-check: @type twine > /dev/null 2>&1 || \ { \ echo "Uploading to PyPi requires 'twine' and it's not found in PATH."; \ echo "Install it and/or make sure it is in PATH."; \ echo "E.g.
you could use the following command to install it:"; \ echo "\tpip3 install twine"; \ echo ; \ exit 1; \ } btest-0.72/PKG-INFO: Metadata-Version: 2.1 Name: btest Version: 0.72 Summary: A powerful system testing framework Home-page: https://github.com/zeek/btest Author: Robin Sommer Author-email: robin@icir.org License: 3-clause BSD License Keywords: system tests testing framework baselines Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: POSIX :: Linux Classifier: Operating System :: MacOS :: MacOS X Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Utilities License-File: COPYING See https://github.com/zeek/btest btest-0.72/README: .. -*- mode: rst-mode -*- .. .. Version number is filled in automatically. .. |version| replace:: 0.72 ================================================== BTest - A Generic Driver for Powerful System Tests ================================================== BTest is a powerful framework for writing system tests. Freely borrowing some ideas from other packages, its main objective is to provide an easy-to-use, straightforward driver for a suite of shell-based tests. Each test consists of a set of command lines that will be executed, and success is determined based on their exit codes. ``btest`` comes with some additional tools that can be used within such tests to robustly compare output against a previously established baseline. This document describes BTest |version|. See the ``CHANGES`` file in the source tree for version history. ..
contents:: Prerequisites ============= BTest has the following prerequisites: - Python version >= 3.5 (older versions may work, but are not well-tested). - Bash (note that on FreeBSD and Alpine Linux, bash is not installed by default). BTest has the following optional prerequisites to enable additional functionality: - Sphinx. - perf (Linux only). Note that on Debian/Ubuntu, you also need to install the "linux-tools" package. Download and Installation ========================= Installation is simple and standard via ``pip``:: > pip install btest Alternatively, you can download a tarball `from PyPI `_ and install locally:: > tar xzvf btest-*.tar.gz > cd btest-* > python3 setup.py install The same approach also works on a local git clone of the source tree, located at https://github.com/zeek/btest. Each will install a few scripts: ``btest`` is the main driver program, and there are a number of further helper scripts that we discuss below (including ``btest-diff``, which is a tool for comparing output to a previously established baseline). .. _running btest: Running BTest ============= A BTest testsuite consists of one or more "btests", executed by the ``btest`` driver. Btests are plain text files in which ``btest`` identifies keywords with corresponding arguments that tell it what to do. BTest is *not* a language; it recognizes keywords in any text file, including when embedded in other scripting languages. A common idiom in BTest is to use keywords to process the btest file via a particular command, often a script interpreter. This approach feels unusual at first, but lends BTest much of its flexibility: btest files can contain pretty much anything, as long as ``btest`` identifies keywords in them. ``btest`` requires a `configuration file`_.
With it, you can run ``btest`` on an existing testsuite in several ways: - Point it at directories containing btests:: > btest ./testsuite/ - Use the config file to enumerate directories to scan for tests, via the ``TestDirs`` `option`_:: > btest - Run btests selectively, by pointing ``btest`` at a specific test file:: > btest ./testsuite/my.test More detail on this when we cover `test selection`_. Writing a Test ============== First Steps ----------- In the simplest case, ``btest`` simply executes a set of command lines, each of which must be prefixed with the ``@TEST-EXEC:`` keyword:: > cat examples/t1 @TEST-EXEC: echo "Foo" | grep -q Foo @TEST-EXEC: test -d . > btest examples/t1 examples.t1 ... ok The test passes as both command lines return success. If one of them didn't, that would be reported:: > cat examples/t2 @TEST-EXEC: echo "Foo" | grep -q Foo @TEST-EXEC: test -d DOESNOTEXIST > btest examples/t2 examples.t2 ... failed Usually you will just run all tests found in a directory:: > btest examples examples.t1 ... ok examples.t2 ... failed 1 test failed The file containing the test can simultaneously act as *its input*. Let's say we want to verify a shell script:: > cat examples/t3.sh # @TEST-EXEC: sh %INPUT ls /etc | grep -q passwd > btest examples/t3.sh examples.t3 ... ok Here, ``btest`` executes (something similar to) ``sh examples/t3.sh``, and then checks the return value as usual. The example also shows that the ``@TEST-EXEC`` keyword can appear anywhere, in particular inside the comment section of another language. Using Baselines --------------- Now, let's say we want to verify the output of a program, making sure that it matches our expectations---a common use case for BTest. To do this, we rely on BTest's built-in support for test baselines. These baselines record prior output of a test, adding support for abstracting away brittle details such as ever-changing timestamps or home directories.
BTest comes with tooling to establish, update, and verify baselines, and to plug in "`canonifiers`_": scripts that abstract, or "normalize", troublesome detail from a baseline. In our test, we first add a command line that produces the output we want to check, and then run ``btest-diff`` to make sure it matches the previously recorded baseline. ``btest-diff`` is itself just a script that returns success if the output matches a pre-recorded baseline after applying any required normalizations. In the following example, we use an awk script as a fancy way to print all file names starting with a dot in the user's home directory. We write that list into a file called ``dots`` and then check whether its content matches what we know from last time:: > cat examples/t4.awk # @TEST-EXEC: ls -a $HOME | awk -f %INPUT >dots # @TEST-EXEC: btest-diff dots /^\.+/ { print $1 } Note that each test gets its own little sandbox directory when run, so by creating a file like ``dots``, you aren't cluttering up anything. The first time we run this test, we need to record a baseline. The ``btest`` command includes a baseline-update mode, set via ``-U``, that achieves this:: > btest -U examples/t4.awk ``btest-diff`` recognizes this update mode via an environment variable set by ``btest``, and records the ``dots`` file in a separate baseline folder. With this baseline in place, modifications to the output now trigger a test failure:: > btest examples/t4.awk examples.t4 ... ok > touch ~/.NEWDOTFILE > btest examples/t4.awk examples.t4 ... failed 1 test failed If we want to see what exactly changed in ``dots`` to trigger the failure, ``btest`` allows us to record the discrepancies via a *diagnostics* mode that records them in a file called ``.diag``:: > btest -d examples/t4.awk examples.t4 ... failed % 'btest-diff dots' failed unexpectedly (exit code 1) % cat .diag == File =============================== [... current dots file ...] 
== Diff =============================== --- /Users/robin/work/binpacpp/btest/Baseline/examples.t4/dots 2010-10-28 20:11:11.000000000 -0700 +++ dots 2010-10-28 20:12:30.000000000 -0700 @@ -4,6 +4,7 @@ .CFUserTextEncoding .DS_Store .MacOSX +.NEWDOTFILE .Rhistory .Trash .Xauthority ======================================= % cat .stderr [... if any of the commands had printed something to stderr, that would follow here ...] Once we delete the new file, the test passes again:: > rm ~/.NEWDOTFILE > btest -d examples/t4.awk examples.t4 ... ok That's the essence of the functionality the ``btest`` package provides. This example did not use canonifiers. We cover these, and a number of additional options that extend or modify this basic approach, in the following sections. Reference ========= Command Line Usage ------------------ ``btest`` must be started with a list of tests and/or directories given on the command line. In the latter case, the default is to recursively scan the directories and assume all files found to be tests to perform. It is however possible to exclude specific files and directories by specifying a suitable `configuration file`_. ``btest`` returns exit code 0 if all tests have successfully passed, and 1 otherwise. Exit code 1 can also result in case of other errors. ``btest`` accepts the following options: -a ALTERNATIVE, --alternative=ALTERNATIVE Activates an alternative_ configuration defined in the configuration file. Multiple alternatives can be given as a comma-separated list (in this case, all specified tests are run once for each specified alternative). If ``ALTERNATIVE`` is ``-`` that refers to running with the standard setup, which can be used to run tests both with and without alternatives by giving both. -A, --show-all Shows an output line for all tests that were run (this includes tests that passed, failed, or were skipped), rather than only failed tests. 
Note that this option has no effect when stdout is not a TTY (because all tests are shown in that case). -b, --brief Does not output *anything* for tests which pass. If all tests pass, there will not be any output at all except final summary information. -c CONFIG, --config=CONFIG Specifies an alternative `configuration file`_ to use. If not specified, the default is to use a file called ``btest.cfg`` if found in the current directory. An alternative way to specify a different config file is with the ``BTEST_CFG`` environment variable (however, the command-line option overrides ``BTEST_CFG``). -d, --diagnostics Reports diagnostics for all failed tests. The diagnostics include the command line that failed, its output to standard error, and potential additional information recorded by the command line for diagnostic purposes (see `@TEST-EXEC`_ below). In the case of ``btest-diff``, the latter is the ``diff`` between baseline and actual output. -D, --diagnostics-all Reports diagnostics for all tests, including those which pass. -f DIAGFILE, --file-diagnostics=DIAGFILE Writes diagnostics for all failed tests into the given file. If the file already exists, it will be overwritten. -g GROUPS, --groups=GROUPS Runs only tests assigned to the given test groups, see `@TEST-GROUP`_. Multiple groups can be given as a comma-separated list. Specifying groups with a leading ``-`` leads to all tests being run that are *not* part of them. Specifying a sole ``-`` as a group name selects all tests that do not belong to any group. (Note that if you combine these variants to create ambiguous situations, it's left undefined which tests will end up running). -j THREADS, --jobs=THREADS Runs up to the given number of tests in parallel. If no number is given, BTest substitutes the number of available CPU cores as reported by the OS. By default, BTest assumes that all tests can be executed concurrently without further constraints.
One can however ensure serialization of subsets by assigning them to the same serialization set, see `@TEST-SERIALIZE`_. -q, --quiet Suppress information output other than about failed tests. If all tests pass, there will not be any output at all. -r, --rerun Runs only tests that failed last time. After each execution (except when updating baselines), BTest generates a state file that records the tests that have failed. Using this option on the next run then reads that file back in and limits execution to those tests found in there. -R FORMAT, --documentation=FORMAT Generates a reference of all tests and prints that to standard output. The output can be of two types, specified by ``FORMAT``: ``rst`` prints reStructuredText, and ``md`` prints Markdown. In the output each test includes the documentation string that's defined for it through ``@TEST-DOC``. -t, --tmp-keep Does not delete any temporary files created for running the tests (including their outputs). By default, the temporary files for a test will be located in ``.tmp//``, where ```` is the relative path of the test file with all slashes replaced with dots and the file extension removed (e.g., the files for ``example/t3.sh`` will be in ``.tmp/example.t3``). -T, --update-times Record new `timing`_ baselines for the current host for tests that have `@TEST-MEASURE-TIME`_. Tests are run as normal except that the timing measurements are recorded as the new baseline instead of being compared to a previous baseline. --trace-file=TRACEFILE Record test execution timings in Chrome tracing format to the given file. If the file exists already, it is overwritten. The file can be loaded in Chrome-based browsers at ``_, or converted to standalone HTML with `trace2html `_. -U, --update-baseline Records a new baseline for all ``btest-diff`` commands found in any of the specified tests. 
To do this, all tests are run as normal except that when ``btest-diff`` is executed, it does not compute a diff but instead considers the given file to be authoritative and records it as the version to compare with in future runs. -u, --update-interactive Each time a ``btest-diff`` command fails in any tests that are run, ``btest`` will stop and ask whether or not the user wants to record a new baseline. -v, --verbose Shows all test command lines as they are executed. -w, --wait Interactively waits for ```` after showing diagnostics for a test. -x FILE, --xml=FILE Records test results in JUnit XML format to the given file. If the file exists already, it is overwritten. -z RETRIES, --retries=RETRIES Retry any failed tests up to this many times to determine if they are unstable. .. _configuration file: configuration_ .. _configuration: Configuration ------------- Specifics of ``btest``'s execution can be tuned with a configuration file, which by default is ``btest.cfg`` if that's found in the current directory. It can alternatively be specified with the ``--config`` command line option, or a ``BTEST_CFG`` environment variable. The configuration file is "INI-style", and an example comes with the distribution, see ``btest.cfg.example``. A configuration file has one main section, ``btest``, that defines most options; as well as an optional section for defining `environment variables`_ and further optional sections for defining alternatives_. Note that all paths specified in the configuration file are relative to ``btest``'s *base directory*. The base directory is either the one where the configuration file is located if such is given/found, or the current working directory if not. One can also override it explicitly by setting the environment variable ``BTEST_TEST_BASE``. When setting values for configuration options, the absolute path to the base directory is available by using the macro ``%(testbase)s`` (the weird syntax is due to Python's ``ConfigParser`` class). 
Furthermore, all values can use standard "backtick-syntax" to include the output of external commands (e.g., xyz=`\echo test\`). Note that the backtick expansion is performed after any ``%(..)`` have already been replaced (including within the backticks). .. _option: `options`_ .. _options: Options ~~~~~~~ The following options can be set in the ``btest`` section of the configuration file: ``BaselineDir`` One or more directories where to store the baseline files for ``btest-diff`` (note that the actual baseline files will be placed into test-specific subdirectories of this directory). By default, this is set to ``%(testbase)s/Baseline``. If multiple directories are to be used, they must be separated by colons. ``btest-diff`` will then search them for baseline files in order when looking for a baseline to compare against. When updating a baseline, it will always store the new version inside the first directory. Using multiple directories is most useful in combination with alternatives_ to support alternate executions where some tests produce expected differences in their output. This option can also be set through an environment variable ``BTEST_BASELINE_DIR``. ``CommandPrefix`` Changes the naming of all ``btest`` commands by replacing the ``@TEST-`` prefix with a custom string. For example, with ``CommandPrefix=$TEST-``, the ``@TEST-EXEC`` command becomes ``$TEST-EXEC``. ``Finalizer`` A command that will be executed each time any test has successfully run. It runs in the same directory as the test itself and receives the name of the test as its only argument. The return value indicates whether the test should indeed be considered successful. By default, there's no finalizer set. ``IgnoreDirs`` A space-separated list of relative directory names to ignore when scanning test directories recursively. Default is empty. An alternative way to ignore a directory is placing a file ``.btest-ignore`` in it. 
``IgnoreFiles``
    A space-separated list of filename globs matching files to ignore
    when scanning given test directories recursively. Default is
    empty. An alternative way to ignore a file is by placing
    ``@TEST-IGNORE`` in it.

``Initializer``
    A command that will be executed before each test. It runs in the
    same directory as the test itself will and receives the name of
    the test as its only argument. The return value indicates whether
    the test should continue; if false, the test will be considered
    failed. By default, there's no initializer set.

``MinVersion``
    On occasion, you'll want to ensure that the version of ``btest``
    running your testsuite includes a particular feature. By setting
    this value to a given version number (as reported by ``btest
    --version``), ``btest`` installations older than this version will
    fail test execution with exit code 1 and a corresponding error
    message on stderr.

``PartFinalizer``
    A command that will be executed each time a test *part* has
    successfully run. This operates similarly to ``Finalizer`` except
    that it runs after each test part rather than only at completion
    of the full test. See `parts`_ for more about test parts.

``PartInitializer``
    A command that will be executed before each test *part*. This
    operates similarly to ``Initializer`` except that it runs at the
    beginning of any test part that BTest runs. See `parts`_ for more
    about test parts. Since a failing test part aborts execution of
    the test, part initializers do not run for any subsequent skipped
    parts.

``PartTeardown``
    A command that will run after any test *part* that has run,
    regardless of failure or success of the part. This operates
    similarly to ``Teardown`` except it applies to test `parts`_
    instead of the full test. Since a failing test part aborts
    execution of the test, part teardowns do not run for any
    subsequent skipped parts.

``PerfPath``
    Specifies a path to the ``perf`` tool, which is used on Linux to
    measure the execution times of tests.
    By default, BTest searches for ``perf`` in ``PATH``.

``PortRange``
    Specifies a port range like "10000-11000" to use in conjunction
    with ``@TEST-PORT`` commands. Port assignments will be restricted
    to this range. The default range is "1024-65535".

``StateFile``
    The name of the state file to record the names of failing tests.
    Default is ``.btest.failed.dat``.

``Teardown``
    A command that will be executed each time any test has run,
    regardless of whether that test succeeded. Conceptually, it pairs
    with an ``Initializer`` that sets up test infrastructure that
    requires tear-down at the end of the test. It runs in the same
    directory as the test itself and receives the name of the test as
    its only argument. There's no default teardown command. Teardown
    commands may return a non-zero exit code, which fails the
    corresponding test. Succeeding teardown commands do not override
    an otherwise failing test; such tests will still fail.

    To allow teardown routines to reason about the preceding tests,
    they receive two additional environment variables:

    ``TEST_FAILED``
        This variable is defined (to 1) when the test has failed, and
        absent otherwise.

    ``TEST_LAST_RETCODE``
        This variable contains the numeric exit code of the last
        command run prior to teardown.

``TestDirs``
    A space-separated list of directories to search for tests. If
    defined, one doesn't need to specify any tests on the command
    line.

``TimingBaselineDir``
    A directory where to store the host-specific `timing`_ baseline
    files. By default, this is set to
    ``%(testbase)s/Baseline/_Timing``.

``TimingDeltaPerc``
    A value defining the `timing`_ deviation percentage that's
    tolerated for a test before it's considered failed. Default is 1.0
    (which means a 1.0% deviation is tolerated by default).

``TmpDir``
    A directory where to create temporary files when running tests. By
    default, this is set to ``%(testbase)s/.tmp``.
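To illustrate the ``Teardown`` hook, here is a sketch of a teardown
script that inspects the two environment variables described above.
The script name, messages, and cleanup action are hypothetical, not
part of BTest; BTest merely invokes the configured command with the
test name as its only argument:

```shell
#!/bin/sh
# Hypothetical Teardown hook. BTest would run it with the test name as
# its only argument, with TEST_FAILED and TEST_LAST_RETCODE set in the
# environment as described above.
teardown() {
    name="$1"
    if [ -n "${TEST_FAILED:-}" ]; then
        echo "teardown: $name failed (last exit code: ${TEST_LAST_RETCODE:-?})"
    else
        echo "teardown: $name succeeded"
    fi
    # Cleanup that must happen regardless of the test's outcome goes here.
}

# Simulate how BTest would call it after a failed test:
TEST_FAILED=1
TEST_LAST_RETCODE=100
export TEST_FAILED TEST_LAST_RETCODE
teardown examples.t1
# prints: teardown: examples.t1 failed (last exit code: 100)
```

Note that the hook only reports; whether the test counts as failed is
still decided by the teardown command's own exit code, as described
above.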
.. _environment variables:

Environment Variables
~~~~~~~~~~~~~~~~~~~~~

A special section ``environment`` defines environment variables that
will be propagated to all tests::

    [environment]
    CFLAGS=-O3
    PATH=%(testbase)s/bin:%(default_path)s

Note how ``PATH`` can be adjusted to include local scripts: the
example above prefixes it with a local ``bin/`` directory inside the
base directory, using the predefined ``default_path`` macro to refer
to the ``PATH`` as it is set by default. Furthermore, by setting
``PATH`` to include the ``btest`` distribution directory, one could
skip the installation of the ``btest`` package.

.. _alternative: alternatives_
.. _alternatives:

Alternatives
~~~~~~~~~~~~

BTest can run a set of tests with different settings than it would
normally use by specifying an *alternative* configuration. Currently,
three things can be adjusted:

- Further environment variables can be set that will then be
  available to all the commands that a test executes.

- *Filters* can modify an input file before a test uses it.

- *Substitutions* can modify command lines executed as part of a
  test.

We discuss the three separately in the following. All of them are
defined by adding sections ``[<type>-<name>]`` where ``<type>``
corresponds to the type of adjustment being made and ``<name>`` is the
name of the alternative. Once at least one section is defined for a
name, that alternative can be enabled by BTest's ``--alternative``
flag.

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

An alternative can add further environment variables by defining an
``[environment-<name>]`` section::

    [environment-myalternative]
    CFLAGS=-O3

Running ``btest`` with ``--alternative=myalternative`` will now make
the ``CFLAGS`` environment variable available to all commands
executed.
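For example, an alternative can be given its own set of baselines by
pointing ``BTEST_BASELINE_DIR`` at a separate directory in its
environment section. This is a sketch; the directory name is
illustrative::

    [environment-myalternative]
    CFLAGS=-O3
    BTEST_BASELINE_DIR=%(testbase)s/Baseline.myalternative
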
As a special case, one can override two specific environment
variables---``BTEST_TEST_BASE`` and ``BTEST_BASELINE_DIR``---inside an
alternative's environment section to have them not only be passed on
to child processes, but also apply to the ``btest`` process itself.
That way, one can switch to different base and baseline directories
for an alternative.

.. _filters:

Filters
^^^^^^^

Filters are a transparent way to adapt the input to a specific test
command before it is executed. A filter is defined by adding a section
``[filter-<name>]`` to the configuration file. This section must have
exactly one entry, and the name of that entry is interpreted as the
name of a command whose input is to be filtered. The value of that
entry is the name of a filter script that will be run with two
arguments representing input and output files, respectively. Example::

    [filter-myalternative]
    cat=%(testbase)s/bin/filter-cat

Once the filter is activated by running ``btest`` with
``--alternative=myalternative``, every time a ``@TEST-EXEC: cat
%INPUT`` is found, ``btest`` will first execute (something similar to)
``%(testbase)s/bin/filter-cat %INPUT out.tmp``, and then subsequently
``cat out.tmp`` (i.e., the original command but with the filtered
output). In the simplest case, the filter could be a no-op in the form
``cp $1 $2``.

**NOTE:** There are a few limitations to the filter concept
currently:

* Filters are *always* fed with ``%INPUT`` as their first argument. We
  should add a way to filter other files as well.

* Filtered commands are only recognized if they are directly starting
  the command line. For example, ``@TEST-EXEC: ls | cat >output``
  would not trigger the example filter above.

* Filters are only executed for ``@TEST-EXEC``, not for
  ``@TEST-EXEC-FAIL``.

.. _substitution:

Substitutions
^^^^^^^^^^^^^

Substitutions are similar to filters, yet they do not adapt the input
but the command line being executed.
A substitution is defined by adding a section
``[substitution-<name>]`` to the configuration file. For each entry in
this section, the entry's name specifies the command that is to be
replaced with something else given as its value. Example::

    [substitution-myalternative]
    gcc=gcc -O2

Once the substitution is activated by running ``btest`` with
``--alternative=myalternative``, every time a ``@TEST-EXEC`` executes
``gcc``, that is replaced with ``gcc -O2``. The replacement is simple
string substitution so it works not only with commands but anything
found on the command line; it however only replaces full words, not
subparts of words.

Supported Keywords
------------------

``btest`` scans a test file for lines containing keywords that
trigger certain functionality. It knows the following keywords:

``@TEST-ALTERNATIVE: <alternative>``
    Runs this test only for the given alternative (see alternative_).
    If ``<alternative>`` is ``default``, the test executes when BTest
    runs with no alternative given (which however is the default
    anyway).

``@TEST-COPY-FILE: <file>``
    Copy the given file into the test's directory before the test is
    run. If ``<file>`` is a relative path, it's interpreted relative
    to the BTest's base directory. Environment variables in ``<file>``
    will be replaced if enclosed in ``${..}``. This command can be
    given multiple times.

``@TEST-DOC: <docstring>``
    Associates a documentation string with the test. These strings get
    included into the output of the ``--documentation`` option.

.. _@TEST-EXEC:

``@TEST-EXEC: <cmdline>``
    Executes the given command line and aborts the test if it returns
    an error code other than zero. The ``<cmdline>`` is passed to the
    shell and thus can be a pipeline, use redirection, and any
    environment variables specified in ``<cmdline>`` will be expanded,
    etc.

    When running a test, the current working directory for all command
    lines will be set to a temporary sandbox (and will be deleted
    later).
    There are two macros that can be used in ``<cmdline>``: ``%INPUT``
    will be replaced with the full pathname of the file defining the
    test (this file is in a temporary sandbox directory and is a copy
    of the original test file); and ``%DIR`` will be replaced with the
    full pathname of the directory where the test file is located
    (note that this is the directory where the original test file is
    located, not the directory where the ``%INPUT`` file is located).
    The latter can be used to reference further files also located
    there.

    In addition to environment variables defined in the configuration
    file, there are further ones that are passed into the commands:

    ``TEST_BASE``
        The BTest base directory, i.e., the directory where
        ``btest.cfg`` is located.

    ``TEST_BASELINE``
        A list of directories where the command can save permanent
        information across ``btest`` runs. (This is where
        ``btest-diff`` stores its baseline in ``UPDATE`` mode.)
        Multiple entries are separated by colons. If more than one
        entry is given, semantics should be to search them in order.

    ``TEST_DIAGNOSTICS``
        A file where further diagnostic information can be saved in
        case a command fails (this is also where ``btest-diff`` stores
        its diff). If this file exists, then the ``--diagnostics-all``
        or ``--diagnostics`` options will show this file (for the
        latter option, only if a command fails).

    ``TEST_MODE``
        This is normally set to ``TEST``, but will be ``UPDATE`` if
        ``btest`` is run with ``--update-baseline``, or
        ``UPDATE_INTERACTIVE`` if run with ``--update-interactive``.

    ``TEST_NAME``
        The name of the currently executing test.

    ``TEST_PART``
        The test part number (see `parts`_ for more about test parts).

    **NOTE:** If a command returns the special exit code 100, the test
    is considered failed, however subsequent test commands within the
    current test are still run. ``btest-diff`` uses this special exit
    code to indicate that no baseline has yet been established.
    If a command returns the special exit code 200, the test is
    considered failed and all further tests are aborted. ``btest-diff``
    uses this special exit code when ``btest`` is run with the
    ``--update-interactive`` option and the user chooses to abort the
    tests when prompted to record a new baseline.

    ``TEST_VERBOSE``
        The path of a file where the test can record further
        information about its execution that will be included with
        BTest's ``--verbose`` output. This is for further tracking the
        execution of commands and should generally generate output
        that follows a line-based structure.

``@TEST-EXEC-FAIL: <cmdline>``
    Like ``@TEST-EXEC``, except that this expects the command to
    *fail*, i.e., the test is aborted when the return code is zero.

.. _@TEST-GROUP:

``@TEST-GROUP: <group>``
    Assigns the test to a group of name ``<group>``. By using option
    ``-g`` one can limit execution to all tests that belong to a given
    group (or a set of groups).

``@TEST-IGNORE``
    This is used to indicate that this file should be skipped (i.e.,
    no test commands in this file will be executed). An alternative
    way to ignore files is by using the ``IgnoreFiles`` option in the
    btest configuration file.

``@TEST-KNOWN-FAILURE``
    Marks a test as known to currently fail. This only changes BTest's
    output, which upon failure will indicate that that is expected; it
    won't change the test's processing otherwise. The keyword doesn't
    take any arguments but one could add a descriptive text, as in::

        @TEST-KNOWN-FAILURE: We know this fails because ....

.. _@TEST-MEASURE-TIME:

``@TEST-MEASURE-TIME``
    Measures execution time for this test and compares it to a
    previously established `timing`_ baseline. If it deviates
    significantly, the test will be considered failed.

``@TEST-NOT-ALTERNATIVE: <alternative>``
    Ignores this test for the given alternative (see alternative_). If
    ``<alternative>`` is ``default``, the test is ignored if BTest
    runs with no alternative given.
.. _@TEST-PORT:

``@TEST-PORT: <var>``
    Assign an available TCP port number to an environment variable
    that is accessible from the running test process. ``<var>`` is an
    arbitrary user-chosen string that will be set to the next
    available TCP port number. Availability is based on checking
    successful binding of the port on IPv4 INADDR_ANY and also
    restricted to the range specified by the ``PortRange`` option.
    IPv6 is not supported. Note that using the ``-j`` option to
    parallelize execution will work such that unique/available port
    numbers are assigned between concurrent tests, however there is
    still a potential race condition for external processes to claim a
    port before the test actually runs and claims it for itself.

``@TEST-REQUIRES: <cmdline>``
    Defines a condition that must be met for the test to be executed.
    The given command line will be run before any of the actual test
    commands, and it must return success for the test to continue. If
    it does not return success, the rest of the test will be skipped,
    but doing so will not be considered a failure of the test. This
    allows one to write conditional tests that may not always make
    sense to run, depending on whether external constraints are
    satisfied or not (say, whether a particular library is available).
    Multiple requirements may be specified and then all must be met
    for the test to continue.

.. _@TEST-SERIALIZE:

``@TEST-SERIALIZE: <set>``
    When using option ``-j`` to parallelize execution, all tests that
    specify the same serialization set are guaranteed to run
    sequentially. ``<set>`` is an arbitrary user-chosen string.

``@TEST-START-FILE <file>``
    This is used to include an additional input file for a test right
    inside the test file. All lines following the keyword line will be
    written into the given file until a line containing
    ``@TEST-END-FILE`` is found. The lines containing
    ``@TEST-START-FILE`` and ``@TEST-END-FILE``, and all lines in
    between, will be removed from the test's ``%INPUT``.
    Example::

        > cat examples/t6.sh
        # @TEST-EXEC: awk -f %INPUT <foo.dat >output
        # @TEST-EXEC: btest-diff output

        { lines += 1; }
        END   { print lines; }

        @TEST-START-FILE foo.dat
        1
        2
        3
        @TEST-END-FILE

        > btest -D examples/t6.sh
        examples.t6 ... ok
          % cat .diag
          == File ===============================
          3

    Multiple such files can be defined within a single test.

    Note that this is only one way to use further input files. Another
    is to store a file in the same directory as the test itself,
    making sure it's ignored via ``IgnoreFiles``, and then refer to it
    via ``%DIR/<name>``.

``@TEST-START-NEXT``
    This keyword lets you define multiple test inputs in the same
    file, all executing with the same command lines. See `defining
    multiple tests in one file`_ for details.

.. _test selection: `selecting tests`_
.. _selecting tests:

Selecting Tests
===============

Internally, ``btest`` uses logical names for tests, abstracting input
files. Those names result from substituting path separators with
dots, ignoring btest file suffixes, and potentially adding additional
labeling. ``btest`` does this only for tests within the ``TestDirs``
directories given in the `configuration file`_. In addition to the
invocations covered in `Running BTest`_, you can use logical names
when telling ``btest`` which tests to run. For example, instead of
saying::

    > btest testsuite/foo.sh

you can use::

    > btest testsuite.foo

This distinction rarely matters, but it's something to be aware of
when `defining multiple tests in one file`_, which we cover next.

.. _more than one test: `defining multiple tests in one file`_
.. _defining multiple tests in one file:

Defining Multiple Tests in one File
===================================

On occasion you want to use the same constellation of keywords on a
set of input files. BTest supports this via the ``@TEST-START-NEXT``
keyword. When ``btest`` encounters this keyword, it initially
considers the input file to end at that point, and runs all
``@TEST-EXEC-*`` with an ``%INPUT`` truncated accordingly.
Afterwards, it creates a *new* ``%INPUT`` with everything *following*
the ``@TEST-START-NEXT`` marker, running the *same* commands again.
(It ignores any ``@TEST-EXEC-*`` lines later in the file.) The effect
is that a single file can define multiple tests that the ``btest``
output will enumerate::

    > cat examples/t5.sh
    # @TEST-EXEC: cat %INPUT | wc -c >output
    # @TEST-EXEC: btest-diff output

    This is the first test input in this file.

    # @TEST-START-NEXT

    ... and the second.

    > ./btest -D examples/t5.sh
    examples.t5 ... ok
      % cat .diag
      == File ===============================
      119
      [...]
    examples.t5-2 ... ok
      % cat .diag
      == File ===============================
      22
      [...]

``btest`` automatically generates the ``-<n>`` suffix for each of the
tests.

**NOTE:** It matters how you name tests when running them
individually. When you specify the btest file ("``examples/t5.sh``"),
``btest`` will run all of the contained tests. When you use the
logical name, ``btest`` will run only that specific test: in the above
scenario, ``examples.t5`` runs only the first test defined in the
file, while ``examples.t5-2`` only runs the second. This also applies
to baseline updates.

.. _parts: `splitting tests into parts`_
.. _splitting tests into parts:

Splitting Tests into Parts
==========================

One can also split a single test across multiple files by adding a
numerical ``#<n>`` postfix to their names, where each ``<n>``
represents a separate part of the test. ``btest`` will combine all of
a test's parts in numerical order and execute them subsequently within
the same sandbox. Example::

    > cat examples/t7.sh#1
    # @TEST-EXEC: echo Part 1 - %INPUT >>output

    > cat examples/t7.sh#2
    # @TEST-EXEC: echo Part 2 - %INPUT >>output

    > cat examples/t7.sh#3
    # @TEST-EXEC: btest-diff output

    > btest -D examples/t7.sh
    examples.t7 ... ok
      % cat .diag
      == File ===============================
      Part 1 - /Users/robin/bro/docs/aux/btest/.tmp/examples.t7/t7.sh#1
      Part 2 - /Users/robin/bro/docs/aux/btest/.tmp/examples.t7/t7.sh#2

Note how ``output`` contains the output of both ``t7.sh#1`` and
``t7.sh#2``, however in each case ``%INPUT`` refers to the
corresponding part.

For the first part of a test, one can also omit the ``#1`` postfix in
the filename.

.. _canonifiers: `canonifying diffs`_
.. _canonifying diffs:

Canonifying Diffs
=================

``btest-diff`` has the capability to filter its input through an
additional script before it compares the current version with the
baseline. This can be useful if certain elements in an output are
*expected* to change (e.g., timestamps). The filter can then
remove/replace these with something consistent. To enable such
canonification, set the environment variable ``TEST_DIFF_CANONIFIER``
to a script reading the original version from stdin and writing the
canonified version to stdout. For examples of canonifier scripts, take
a look at those `used in the Zeek distribution `_.

**NOTE:** ``btest-diff`` passes both the pre-recorded baseline and the
fresh test output through any canonifiers before comparing their
contents. BTest version 0.63 introduced two changes in
``btest-diff``'s baseline handling:

* ``btest-diff`` now records baselines in canonicalized form. The
  benefit here is that by canonicalizing upon recording, you can use
  ``btest -U`` more freely, keeping expected noise out of revision
  control. The downside is that updates to canonifiers require a
  refresh of the baselines.

* ``btest-diff`` now prefixes the baselines with a header that warns
  against manual modification, and knows to exclude that header from
  comparison. We recommend only ever updating baselines via ``btest
  -U`` (or its interactive sibling, ``-u``).
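As a concrete illustration, a canonifier that neutralizes timestamps
can be little more than a ``sed`` invocation. This is a sketch, not a
script shipped with BTest; the timestamp format and the
``<timestamp>`` placeholder are just examples:

```shell
#!/bin/sh
# Sketch of a canonifier: reads output on stdin, writes the canonified
# version to stdout, replacing ISO-style timestamps
# (e.g. "2024-01-02 03:04:05") with a fixed placeholder.
canonify() {
    sed -e 's/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\} [0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}/<timestamp>/g'
}

echo "connection established at 2024-01-02 03:04:05" | canonify
# prints: connection established at <timestamp>
```

Pointing ``TEST_DIFF_CANONIFIER`` at such a script (e.g., in the
``[environment]`` section of ``btest.cfg``) keeps the volatile
timestamps out of both baselines and diffs.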
Once you use canonicalized baselines in your project, it's a good idea
to use ``MinVersion = 0.63`` in your btest.cfg to avoid the use of
older ``btest`` installations. Since these are unaware of the new
baseline header and repeated application of canonifiers may cause
unexpected alterations to already-canonified baselines, using such
versions will likely cause test failures.

Binary Data in Baselines
========================

``btest`` baselines usually consist of text files, i.e., content that
mostly makes sense to process line by line. It's possible to use
binary data as well, though. For such data, ``btest-diff`` supports a
binary mode in which it will treat the baselines as binary "blobs". In
this mode, it will compare test output to baselines for byte-by-byte
equality only, it will never apply any canonifiers, and it will leave
the test output untouched during baseline updates. To use binary mode,
invoke ``btest-diff`` with the ``--binary`` flag.

Running Processes in the Background
===================================

Sometimes processes need to be spawned in the background for a test,
in particular if multiple processes need to cooperate in some fashion.
``btest`` comes with two helper scripts to make life easier in such a
situation:

``btest-bg-run <tag> <cmdline>``
    This is a script that runs ``<cmdline>`` in the background, i.e.,
    it's like using ``cmdline &`` in a shell script. Test execution
    continues immediately with the next command. Note that the spawned
    command is *not* run in the current directory, but instead in a
    newly created sub-directory called ``<tag>``. This allows spawning
    multiple instances of the same process without needing to worry
    about conflicting outputs. If you want to access a command's
    output later, like with ``btest-diff``, use ``<tag>/foo.log`` to
    access it.

``btest-bg-wait [-k] <timeout>``
    This script waits for all processes previously spawned via
    ``btest-bg-run`` to finish.
    If any of them exits with a non-zero return code, ``btest-bg-wait``
    does so as well, indicating a failed test. ``<timeout>`` is
    mandatory and gives the maximum number of seconds to wait for any
    of the processes to terminate. If any process hasn't done so when
    the timeout expires, it will be killed and the test is considered
    to be failed as long as ``-k`` is not given. If ``-k`` is given,
    pending processes are still killed but the test continues
    normally, i.e., non-termination is not considered a failure in
    this case. This script also collects the processes' stdout and
    stderr outputs for diagnostics output.

.. _progress:

Displaying Progress
===================

For long-running tests it can be helpful to display progress messages
during their execution so that one sees where the test is currently
at. There's a helper script, ``btest-progress``, to facilitate that.
The script receives a custom message as its sole argument. When
executed while a test is running, ``btest`` will display that message
in real-time in its standard and verbose outputs. Example usage::

    # @TEST-EXEC: bash %INPUT
    btest-progress Stage 1
    sleep 1
    btest-progress Stage 2
    sleep 1
    btest-progress Stage 3
    sleep 1

When the tests execute, ``btest`` will then show these three messages
successively. By default, ``btest-progress`` also prints the messages
to the test's standard output and standard error. That can be
suppressed by adding an option ``-q`` to the invocation.

.. _timing: `timing execution`_
.. _timing execution:

Timing Execution
================

``btest`` can time execution of tests and report significant
deviations from past runs. As execution time is inherently
system-specific, it keeps separate per-host timing baselines for that.
Furthermore, as time measurements tend to make sense only for
individual, usually longer running tests, they are activated on a
per-test basis by adding a `@TEST-MEASURE-TIME`_ directive. The test
will then execute as usual yet also record the duration for which it
executes.
After the timing baselines are created (with the ``--update-times``
option), further runs on the same host will compare their times
against that baseline and declare a test failed if it deviates by more
than, by default, 1%. (To tune the behaviour, look at the ``Timing*``
`options`_.) If a test requests measurement but BTest can't find a
timing baseline or the necessary tools to perform timing measurements,
then it will ignore the request.

As timing for a test can deviate quite a bit even on the same host,
BTest does not actually measure *time* but the number of CPU
instructions that a test executes, which tends to be more stable. That
however requires the right tools to be in place. On Linux, BTest
leverages `perf `_. By default, BTest will search for ``perf`` in the
``PATH``; you can specify a different path to the binary by setting
``PerfPath`` in ``btest.cfg``.

Integration with Sphinx
=======================

``btest`` comes with an extension module for the documentation
framework `Sphinx `_. The extension module provides two new directives
called ``btest`` and ``btest-include``. The ``btest`` directive allows
writing a test directly inside a Sphinx document, and then the output
from the test's command is included in the generated documentation.
The ``btest-include`` directive allows for literal text from another
file to be included in the generated documentation. The tests from
both directives can also be run externally and will catch if any
changes to the included content occur. The following walks through
setting this up.

Configuration
-------------

First, you need to tell Sphinx a base directory for the ``btest``
configuration as well as a directory in there where to store tests it
extracts from the Sphinx documentation.
Typically, you'd just create a new subdirectory ``tests`` in the
Sphinx project for the ``btest`` setup and then store the tests in
there in, e.g., ``doc/``::

    > cd <sphinx-root>
    > mkdir tests
    > mkdir tests/doc

Then add the following to your Sphinx ``conf.py``::

    extensions += ["btest-sphinx"]
    btest_base="tests"     # Relative to Sphinx-root.
    btest_tests="doc"      # Relative to btest_base.

Next, create a ``btest.cfg`` in ``tests/`` as usual and add ``doc/``
to the ``TestDirs`` option. Also, add a finalizer to ``btest.cfg``::

    [btest]
    ...
    PartFinalizer=btest-diff-rst

Including a Test into a Sphinx Document
---------------------------------------

The ``btest`` extension provides a new directive to include a test
inside a Sphinx document::

    .. btest:: <test-name>

        <test content>

Here, ``<test-name>`` is a custom name for the test; it will be
stored in ``btest_tests`` under that name (with a file extension of
``.btest``). ``<test content>`` is just a standard test as you would
normally put into one of the ``TestDirs``. Example::

    .. btest:: just-a-test

        @TEST-EXEC: expr 2 + 2

When you now run Sphinx, it will (1) store the test content into
``tests/doc/just-a-test.btest`` (assuming the above path layout), and
(2) execute the test by running ``btest`` on it. You can then run
``btest`` manually in ``tests/`` as well and it will execute the test
just as it would in a standard setup. If a test fails when Sphinx runs
it, there will be a corresponding error, and the diagnostic output
will be included into the document.

By default, nothing else will be included into the generated
documentation, i.e., the above test will just turn into an empty text
block. However, ``btest`` comes with a set of scripts that you can use
to specify content to be included. As a simple example,
``btest-rst-cmd <cmdline>`` will execute a command and (if it
succeeds) include both the command line and the standard output into
the documentation. Example::

    .. btest:: another-test

        @TEST-EXEC: btest-rst-cmd echo Hello, world!

When running Sphinx, this will render as::

    # echo Hello, world!
    Hello, world!
The same ``<test-name>`` can be used multiple times, in which case
each entry will become one part of a joint test. ``btest`` will
execute all parts subsequently within a single sandbox, and earlier
results will thus be available to later parts.

When running ``btest`` manually in ``tests/``, the ``PartFinalizer``
we added to ``btest.cfg`` (see above) compares the generated reST code
with a previously established baseline, just like ``btest-diff`` does
with files. To establish the initial baseline, run ``btest -u``, like
you would with ``btest-diff``.

Scripts
-------

The following Sphinx support scripts come with ``btest``:

``btest-rst-cmd [options] <cmdline>``
    By default, this executes ``<cmdline>`` and includes both the
    command line itself and its standard output into the generated
    documentation (but only if the command line succeeds). See above
    for an example. This script provides the following options:

    -c ALTERNATIVE_CMDLINE
        Show ``ALTERNATIVE_CMDLINE`` in the generated documentation
        instead of the one actually executed. (It still runs the
        ``<cmdline>`` given outside the option.)

    -d
        Do not actually execute ``<cmdline>``; just format it for the
        generated documentation and include no further output.

    -f FILTER_CMD
        Pipe the command line's output through ``FILTER_CMD`` before
        including. If ``-r`` is given, it filters the file's content
        instead of stdout.

    -n N
        Include only ``N`` lines of output, adding a ``[...]`` marker
        if there's more.

    -o
        Do not include the executed command into the generated
        documentation, just its output.

    -r FILE
        Insert ``FILE`` into the output instead of stdout. The
        ``FILE`` must be created by a previous ``@TEST-EXEC`` or
        ``@TEST-COPY-FILE``.

``btest-rst-include [options] <file>``
    Includes ``<file>`` inside a code block. The ``<file>`` must be
    created by a previous ``@TEST-EXEC`` or ``@TEST-COPY-FILE``. This
    script provides the following options:

    -n N
        Include only ``N`` lines of output, adding a ``[...]`` marker
        if there's more.
``btest-rst-pipe <cmdline>``
    Executes ``<cmdline>`` and includes its standard output inside a
    code block (but only if the command line succeeds). Note that this
    script does not include the command line itself into the code
    block, just the output.

**NOTE:** All these scripts can be run directly from the command line
to show the reST code they generate.

**NOTE:** ``btest-rst-cmd`` can do everything the other scripts
provide if you give it the right options. In fact, the other scripts
are provided just for convenience and leverage ``btest-rst-cmd``
internally.

Including Literal Text
----------------------

The ``btest`` Sphinx extension module also provides a directive
``btest-include`` that functions like ``literalinclude`` (including
all its options) but also creates a test checking the included content
for changes. As one further extension, the directive expands
environment variables of the form ``${var}`` in its argument.
Example::

    .. btest-include:: ${var}/path/to/file

When you now run Sphinx, it will automatically generate a test file in
the directory specified by the ``btest_tests`` variable in the Sphinx
``conf.py`` configuration file. In this example, the filename would be
``include-path_to_file.btest`` (it automatically adds a prefix of
"include-" and a file extension of ".btest").

When you run the tests externally, the tests generated by the
``btest-include`` directive will check if any of the included content
has changed (you'll first need to run ``btest -u`` to establish the
initial baseline).

License
=======

BTest is open-source under a BSD license.

#! /usr/bin/env python3
#
# Main test driver.
#
# pylint: disable=line-too-long,too-many-lines,invalid-name,missing-function-docstring,missing-class-docstring

from __future__ import print_function

import atexit
import copy
import fnmatch
import glob
import io
import json
import locale
import multiprocessing
import multiprocessing.managers
import multiprocessing.sharedctypes
import optparse
import os
import os.path
import platform as pform
import re
import shutil
import signal
import socket
import subprocess
import sys
import tempfile
import threading
import time
import uuid
import xml.dom.minidom
from datetime import datetime

try:
    import ConfigParser as configparser
except ImportError:
    import configparser

VERSION = "0.72"  # Automatically filled in.

using_py3 = (sys.version_info[0] == 3)

Name = "btest"

Config = None

try:
    ConfigDefault = os.environ["BTEST_CFG"]
except KeyError:
    ConfigDefault = "btest.cfg"


def output(msg, nl=True, file=None):
    if not file:
        file = sys.stderr

    if nl:
        print(msg, file=file)
    else:
        print(msg, end=" ", file=file)


def warning(msg):
    print("warning: %s" % msg, file=sys.stderr)


def error(msg):
    print(msg, file=sys.stderr)
    sys.exit(1)


def mkdir(folder):
    if not os.path.exists(folder):
        try:
            os.makedirs(folder)
        except OSError as exc:
            error("cannot create directory %s: %s" % (folder, exc))
    else:
        if not os.path.isdir(folder):
            error("path %s exists but is not a directory" % folder)


def which(cmd):
    # Adapted from http://stackoverflow.com/a/377028
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    (fpath, _) = os.path.split(cmd)
    if fpath:
        if is_exe(cmd):
            return cmd
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, cmd)
            if is_exe(exe_file):
                return exe_file

    return None


def platform():
    return pform.system()


def getDefaultBtestEncoding():
    if locale.getdefaultlocale()[1] is None:
        return 'utf-8'

    return locale.getpreferredencoding()


def validate_version_requirement(required: str, present: str):
    '''Helper function to validate that a `present` version is semantically
    newer than or equal to a `required` version.'''
    def extract_version(v: str):
        '''Helper function to extract version components from a string.'''
        try:
            xyz = [int(x) for x in re.split(r'\.|-', v)]
        except ValueError:
            error("invalid version %s: versions must contain only numeric identifiers" % v)

        return xyz

    v_present = extract_version(present)
    v_required = extract_version(required)

    if v_present < v_required:
        error("%s requires at least BTest %s, this is %s. Please upgrade." %
              (Options.config, required, VERSION))


# Get the value of the specified option in the specified section (or
# section "btest" if not specified), or return the specified default value
# if the option or section is not found. The returned value has macros and
# backticks from the config file expanded, but if the default value is
# returned it will not be modified in any way.
def getOption(key, default, section="btest"):
    try:
        value = Config.get(section, key)
    except (configparser.NoSectionError, configparser.NoOptionError):
        return default

    return ExpandBackticks(value)


reBackticks = re.compile(r"`(([^`]|\`)*)`")


def readStateFile():
    try:
        # Read state file.
        tests = []

        for line in open(StateFile):
            line = line.strip()
            if not line or line.startswith("#"):
                continue

            tests += [line]

        tests = findTests(tests)
    except IOError:
        return (False, [])

    return (True, tests)


# Expand backticks in a config option value and return the result.
def ExpandBackticks(origvalue):
    def _exec(m):
        cmd = m.group(1)
        if not cmd:
            return ""

        try:
            pp = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        except OSError as e:
            error("cannot execute '%s': %s" % (cmd, e))

        out = pp.communicate()[0]
        out = out.decode()
        return out.strip()

    value = reBackticks.sub(_exec, origvalue)
    return value


# We monkey-patch the config parser to provide an alternative method that
# expands backticks in option values and does not include defaults in
# returned section items.
def cpItemsNoDefaults(self, section):
    # Get the keys from the specified section without anything from the
    # default section (the values are raw, so we need to fetch the actual
    # value below).
    try:
        items = self._sections[section].items()
    except KeyError:
        raise configparser.NoSectionError(section)

    result = {}

    for (key, rawvalue) in items:
        # Python 2 includes a key of "__name__" that we don't want (Python 3
        # doesn't include this).
        if not key.startswith("__"):
            # Expand macros such as %(testbase)s.
            value = self.get(section, key)

            # Expand backticks (if any) in the value.
            result[key] = ExpandBackticks(value)

    return result.items()


# Replace environment variables in string.
def replaceEnvs(s):
    def replace_with_env(m):
        try:
            return os.environ[m.group(1)]
        except KeyError:
            return ""

    return RE_ENV.sub(replace_with_env, s)


# Execute one of a test's command lines *cmdline*. *measure_time* indicates
# whether timing measurement is desired. *kwargs* are further keyword
# arguments interpreted the same way as with subprocess.check_call().
# Returns a 3-tuple (success, rc, time) where the former two likewise
# have the same meaning as with runSubprocess(), and 'time' is an integer
# value corresponding to the command's execution time measured in some
# appropriate integer measure. If 'time' is negative, that's an indicator
# that time measurement wasn't possible and the value is to be ignored.
def runTestCommandLine(cmdline, measure_time, **kwargs):
    if measure_time and Timer:
        return Timer.timeSubprocess(cmdline, **kwargs)

    (success, rc) = runSubprocess(cmdline, **kwargs)
    return (success, rc, -1)


# Runs a subprocess. Takes the same arguments as subprocess.check_call()
# and returns a 2-tuple (success, rc) where *success* is a boolean
# indicating if the command executed, and *rc* is its exit code if it did.
def runSubprocess(*args, **kwargs):
    def child(q):
        try:
            subprocess.check_call(*args, **kwargs)
            success = True
            rc = 0
        except subprocess.CalledProcessError as e:
            success = False
            rc = e.returncode
        except KeyboardInterrupt:
            success = False
            rc = 0

        q.put([success, rc])

    try:
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=child, args=(q, ))
        p.start()
        result = q.get()
        p.join()
    except KeyboardInterrupt:
        # Bail out here directly as otherwise we'd get a bunch of errors
        # from all the children.
        os._exit(1)

    return result


def getcfgparser(defaults):
    configparser.ConfigParser.itemsNoDefaults = cpItemsNoDefaults
    cfg = configparser.ConfigParser(defaults)
    return cfg


# Description of an alternative configuration.
class Alternative:
    def __init__(self, name):
        self.name = name
        self.filters = {}
        self.substitutions = {}
        self.envs = {}


# Exception class thrown to signal the manager to abort processing.
# The message passed to the constructor will be printed to the console.
class Abort(Exception):
    pass


# Main class distributing the work across threads.
class TestManager(multiprocessing.managers.SyncManager):
    def __init__(self, *args, **kwargs):
        super(TestManager, self).__init__(*args, **kwargs)

        self._output_handler = None
        self._lock = None
        self._succeeded = None
        self._failed = None
        self._failed_expected = None
        self._unstable = None
        self._skipped = None
        self._tests = None
        self._failed_tests = None
        self._num_tests = None
        self._timing = None
        self._ports = None

    def run(self, tests, output_handler):
        self.start()

        output_handler.prepare(self)

        self._output_handler = output_handler
        self._lock = self.RLock()
        self._succeeded = multiprocessing.sharedctypes.RawValue('i', 0)
        self._failed = multiprocessing.sharedctypes.RawValue('i', 0)
        self._failed_expected = multiprocessing.sharedctypes.RawValue('i', 0)
        self._unstable = multiprocessing.sharedctypes.RawValue('i', 0)
        self._skipped = multiprocessing.sharedctypes.RawValue('i', 0)
        self._tests = self.list(tests)
        self._failed_tests = self.list([])
        self._num_tests = len(self._tests)
        self._timing = self.loadTiming()

        port_range = getOption("PortRange", "1024-65535")
        port_range_lo = int(port_range.split("-")[0])
        port_range_hi = int(port_range.split("-")[1])
        if port_range_lo > port_range_hi:
            error("invalid PortRange value: {0}".format(port_range))

        max_test_ports = 0
        test_with_most_ports = None

        for t in self._tests:
            if len(t.ports) > max_test_ports:
                max_test_ports = len(t.ports)
                test_with_most_ports = t

        if max_test_ports > port_range_hi - port_range_lo + 1:
            error("PortRange {0} cannot satisfy requirement of {1} ports in test {2}".format(
                port_range, max_test_ports, test_with_most_ports.name))

        self._ports = self.list([p for p in range(port_range_lo, port_range_hi + 1)])

        threads = []

        # With interactive input possibly required, we run tests
        # directly. This avoids noisy output appearing from detached
        # processes post-btest-exit when using CTRL-C during the input
        # stage.
        if Options.mode == "UPDATE_INTERACTIVE":
            self.threadRun(0)
        else:
            try:
                for i in range(Options.threads):
                    t = multiprocessing.Process(name="#%d" % (i + 1),
                                                target=lambda: self.threadRun(i))
                    t.start()
                    threads += [t]

                for t in threads:
                    t.join()
            except KeyboardInterrupt:
                for t in threads:
                    t.terminate()
                    t.join()

        if Options.abort_on_failure and self._failed.value > 0 and self._failed.value > self._failed_expected.value:
            # Signal abort. The child processes will already have
            # finished because the join() above still ran.
            raise Abort("Aborted after first failure.")

        # Record failed tests if not updating.
        if Options.mode != "UPDATE" and Options.mode != "UPDATE_INTERACTIVE":
            try:
                state = open(StateFile, "w")
            except IOError:
                error("cannot open state file %s" % StateFile)

            for t in sorted(self._failed_tests):
                print(t, file=state)

            state.close()

        return (self._succeeded.value, self._failed.value, self._skipped.value,
                self._unstable.value, self._failed_expected.value)

    def percentage(self):
        if not self._num_tests:
            return 0

        count = self._succeeded.value + self._failed.value + self._skipped.value
        return 100.0 * count / self._num_tests

    def threadRun(self, thread_num):
        signal.signal(signal.SIGINT, signal.SIG_IGN)

        all_tests = []

        while True:
            tests = self.nextTests(thread_num)
            if tests is None:
                # No more work for us.
                return

            all_tests += tests

            for t in tests:
                t.run(self)
                self.testReplayOutput(t)

            if Options.update_times:
                self.saveTiming(all_tests)

    def rerun(self, test):
        test.reruns += 1
        self._tests += [test.clone(increment=False)]

    def nextTests(self, thread_num):
        with self._lock:
            if Options.abort_on_failure and self._failed.value > 0 and self._failed.value > self._failed_expected.value:
                # Don't hand out any more tests if we are to abort after the
                # first failure. Doing so will let all the processes terminate.
                return None

            for i in range(len(self._tests)):
                t = self._tests[i]

                if not t:
                    continue

                if t.serialize and hash(t.serialize) % Options.threads != thread_num:
                    # Not ours.
                    continue

                # We'll execute it, delete from queue.
                del self._tests[i]

                if Options.alternatives:
                    tests = []

                    for alternative in Options.alternatives:
                        if alternative in t.ignore_alternatives:
                            continue

                        if t.include_alternatives and alternative not in t.include_alternatives:
                            continue

                        alternative_test = copy.deepcopy(t)

                        if alternative == "-":
                            alternative = ""

                        alternative_test.setAlternative(alternative)
                        tests += [alternative_test]
                else:
                    if t.include_alternatives and "default" not in t.include_alternatives:
                        tests = []
                    elif "default" in t.ignore_alternatives:
                        tests = []
                    else:
                        tests = [t]

                return tests

            # No more tests for us.
            return None

    def returnPorts(self, ports):
        with self._lock:
            for p in ports:
                self._ports.append(p)

    def getAvailablePorts(self, count):
        with self._lock:
            if count > len(self._ports):
                return []

            first_port = -1
            rval = []

            for _ in range(count):
                while True:
                    if len(self._ports) == 0:
                        # Return the ports we grabbed so far to the pool.
                        for s in rval:
                            self._ports.append(s.getsockname()[1])
                            s.close()
                        return []

                    next_port = self._ports[0]

                    if next_port == first_port:
                        # Looped over port pool once, bail out.
                        for s in rval:
                            self._ports.append(s.getsockname()[1])
                            s.close()
                        return []

                    if first_port == -1:
                        first_port = next_port

                    del self._ports[0]

                    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

                    # Setting REUSEADDR would allow ports to be recycled
                    # more quickly, but on macOS, seems to also have the
                    # effect of allowing multiple sockets to bind to the
                    # same port, even if REUSEPORT is off, so just try to
                    # ensure both are off.
                    if hasattr(socket, 'SO_REUSEADDR'):
                        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 0)
                    if hasattr(socket, 'SO_REUSEPORT'):
                        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 0)

                    try:
                        sock.bind(('', next_port))
                    except:
                        self._ports.append(next_port)
                        continue
                    else:
                        break

                rval.append(sock)

            return rval

    def lock(self):
        return self._lock

    def testStart(self, test):
        with self._lock:
            self._output_handler.testStart(test)

    def testCommand(self, test, cmdline):
        with self._lock:
            self._output_handler.testCommand(test, cmdline)

    def testProgress(self, test, msg):
        with self._lock:
            self._output_handler.testProgress(test, msg)

    def testSucceeded(self, test):
        test.parseProgress()
        msg = "ok"

        if test.known_failure:
            msg += " (but expected to fail)"

        msg += test.timePostfix()

        with self._lock:
            if test.reruns == 0:
                self._succeeded.value += 1
                self._output_handler.testSucceeded(test, msg)
            else:
                self._failed.value -= 1
                if test.known_failure:
                    self._failed_expected.value -= 1

                self._unstable.value += 1
                msg += " on retry #{0}, unstable".format(test.reruns)
                self._output_handler.testUnstable(test, msg)

            self._output_handler.testFinished(test, msg)

    def testFailed(self, test):
        test.parseProgress()
        msg = "failed"

        if test.reruns > 0:
            msg += " on retry #{0}".format(test.reruns)

        if test.known_failure:
            msg += " (expected)"

        msg += test.timePostfix()

        with self._lock:
            self._output_handler.testFailed(test, msg)
            self._output_handler.testFinished(test, msg)

            if test.reruns == 0:
                self._failed.value += 1

                if test.known_failure:
                    self._failed_expected.value += 1
                else:
                    self._failed_tests += [test.name]

            if test.reruns < Options.retries and not test.known_failure:
                self.rerun(test)

    def testSkipped(self, test):
        msg = "not available, skipped"
        with self._lock:
            self._output_handler.testSkipped(test, msg)
            self._skipped.value += 1

    def testReplayOutput(self, test):
        with self._lock:
            self._output_handler.replayOutput(test)

    def testTimingBaseline(self, test):
        return self._timing.get(test.name, -1)

    # Returns the name of the file to store the timing baseline in for this
    # host.
    def timingPath(self):
        id = uuid.uuid3(uuid.NAMESPACE_DNS, str(uuid.getnode()))
        return os.path.abspath(os.path.join(BaselineTimingDir, id.hex))

    # Loads baseline timing information for this host if available. Returns an
    # empty dictionary if not.
    def loadTiming(self):
        timing = {}

        with self._lock:
            path = self.timingPath()
            if not os.path.exists(path):
                return {}

            for line in open(path):
                (k, v) = line.split()
                timing[k] = float(v)

        return timing

    # Updates the timing baseline for the given tests on this host.
    def saveTiming(self, tests):
        with self._lock:
            changed = False
            timing = self.loadTiming()

            for t in tests:
                if t and t.measure_time and t.utime >= 0:
                    changed = True
                    timing[t.name] = t.utime

            if not changed:
                return

            path = self.timingPath()
            (dir, base) = os.path.split(path)
            mkdir(dir)

            out = open(path, "w")
            for (k, v) in timing.items():
                print("%s %u" % (k, v), file=out)
            out.close()


class CmdLine:
    """A single command to invoke.

    These commands can be provided by @TEST-{EXEC,REQUIRES} instructions, an
    Initializer, Finalizer, or Teardown, or their part-specific equivalents.
    """
    def __init__(self, cmdline, expect_success, part, file):
        self.cmdline = cmdline
        self.expect_success = expect_success
        self.part = part
        self.file = file


class CmdSeq:
    """A sequence of commands, with potential subsequent teardown.

    Tracking the teardown separately allows us to skip to it when commands
    fail. Commands can be individual CmdLines or CmdSeq instances. Test.run()
    processes the latter recursively.
    """
    def __init__(self):
        self.cmds = []  # CmdLine or CmdSeq instances
        self.teardown = None


# One test.
class Test(object):
    def __init__(self, file=None, directory=None):
        # Allow dir to be directly defined.
        if file is not None:
            self.dir = os.path.abspath(os.path.dirname(file))
        else:
            self.dir = directory

        self.alternative = None
        self.baselines = []
        self.basename = None
        self.bound_ports = []
        self.cloned = False
        self.cmdseqs = []
        self.contents = []
        self.copy_files = []
        self.diag = None
        self.diagmsgs = []
        self.doc = []
        self.files = []
        self.groups = set()
        self.ignore_alternatives = []
        self.include_alternatives = []
        self.known_failure = False
        self.log = None
        self.measure_time = False
        self.mgr = None
        self.monitor = None
        self.monitor_quit = None
        self.name = None
        self.number = 1
        self.part = -1
        self.ports = set()
        self.progress_lock = None
        self.requires = []
        self.reruns = 0
        self.serialize = []
        self.start = None
        self.stdout = None
        self.stderr = None
        self.tmpdir = None
        self.utime = -1
        self.utime_base = -1
        self.utime_exceeded = False
        self.utime_perc = 0.0
        self.verbose = None

    def __lt__(self, value):
        return self.name and value.name and self.name < value.name

    def displayName(self):
        name = self.name

        if self.alternative:
            name = "%s [%s]" % (name, self.alternative)

        return name

    def setAlternative(self, alternative):
        self.alternative = alternative

    # Parse the test's content.
    def parse(self, content, file):
        cmds = {}

        for line in content:
            m = RE_IGNORE.search(line)
            if m:
                # Ignore this file.
                return False

            for (tag, regexp, multiple, optional, group1, group2) in Commands:
                m = regexp.search(line)

                if m:
                    value = None

                    if group1 >= 0:
                        value = m.group(group1)

                    if group2 >= 0:
                        value = (value, m.group(group2))

                    if not multiple:
                        if tag in cmds:
                            error("%s: %s defined multiple times." % (file, tag))

                        cmds[tag] = value
                    else:
                        try:
                            cmds[tag] += [value]
                        except KeyError:
                            cmds[tag] = [value]

        # Make sure all non-optional commands are there.
        for (tag, regexp, multiple, optional, group1, group2) in Commands:
            if not optional and tag not in cmds:
                if tag == "exec":
                    error("%s: mandatory keyword '@TEST-EXEC' or '@TEST-EXEC-FAIL' is missing." % file)
                else:
                    error("%s: mandatory %s command not found." % (file, tag))

        basename = file
        part = 1

        m = RE_PART.match(file)
        if m:
            basename = m.group(1)
            part = int(m.group(2))

        name = os.path.relpath(basename, TestBase)
        (name, ext) = os.path.splitext(name)
        name = name.replace("/", ".")

        while name.startswith("."):
            name = name[1:]

        self.name = name
        self.part = part
        self.basename = name
        self.contents += [(file, content)]

        seq = CmdSeq()

        if PartInitializer:
            seq.cmds.append(CmdLine("%s %s" % (PartInitializer, self.name), True, part, ""))

        for (cmd, success) in cmds["exec"]:
            seq.cmds.append(CmdLine(cmd.strip(), success != "-FAIL", part, file))

        if PartFinalizer:
            seq.cmds.append(CmdLine("%s %s" % (PartFinalizer, self.name), True, part, ""))

        if PartTeardown:
            seq.teardown = CmdLine("%s %s" % (PartTeardown, self.name), True, part, "")

        self.cmdseqs.append(seq)

        if "serialize" in cmds:
            self.serialize = cmds["serialize"]

        if "port" in cmds:
            self.ports |= set(cmd.strip() for cmd in cmds['port'])

        if "group" in cmds:
            self.groups |= set(cmd.strip() for cmd in cmds["group"])

        if "requires" in cmds:
            for cmd in cmds["requires"]:
                self.requires.append(CmdLine(cmd.strip(), True, part, file))

        if "copy-file" in cmds:
            self.copy_files += [cmd.strip() for cmd in cmds["copy-file"]]

        if "alternative" in cmds:
            self.include_alternatives = [cmd.strip() for cmd in cmds["alternative"]]

        if "not-alternative" in cmds:
            self.ignore_alternatives = [cmd.strip() for cmd in cmds["not-alternative"]]

        if "known-failure" in cmds:
            self.known_failure = True

        if "measure-time" in cmds:
            self.measure_time = True

        if "doc" in cmds:
            self.doc = cmds["doc"]

        return True

    # Copies all control information over to a new Test, replacing the test's
    # content with a new one.
    def clone(self, content=None, increment=True):
        clone = Test("")
        clone.number = self.number
        clone.basename = self.basename
        clone.name = self.basename

        if increment:
            clone.number = self.number + 1
            clone.name = "%s-%d" % (self.basename, clone.number)

        clone.requires = self.requires
        clone.reruns = self.reruns
        clone.serialize = self.serialize
        clone.ports = self.ports
        clone.groups = self.groups
        clone.cmdseqs = self.cmdseqs
        clone.known_failure = self.known_failure
        clone.measure_time = self.measure_time
        clone.doc = self.doc

        if content:
            assert len(self.contents) == 1
            clone.contents = [(self.contents[0][0], content)]
        else:
            clone.contents = self.contents

        clone.files = self.files
        clone.dir = self.dir
        self.cloned = True
        return clone

    def mergePart(self, part):
        if self.cloned or part.cloned:
            error("cannot use @TEST-START-NEXT with tests split across parts (%s)" % self.basename)

        self.serialize += part.serialize
        self.ports |= part.ports
        self.groups |= part.groups
        self.cmdseqs += part.cmdseqs
        self.ignore_alternatives += part.ignore_alternatives
        self.include_alternatives += part.include_alternatives
        self.files += part.files
        self.requires += part.requires
        self.copy_files += part.copy_files
        self.contents += part.contents
        self.doc += part.doc
        self.known_failure |= part.known_failure
        self.measure_time |= part.measure_time

    def getPorts(self, mgr, count):
        if not count:
            return []

        attempts = 5

        while True:
            rval = mgr.getAvailablePorts(count)
            if rval:
                return rval

            attempts -= 1
            if attempts == 0:
                error("failed to obtain {0} ports for test {1}".format(count, self.name))

            warning("failed to obtain {0} ports for test {1}, will try {2} more times".format(
                count, self.name, attempts))
            time.sleep(15)

    def run(self, mgr):
        bound_sockets = self.getPorts(mgr, len(self.ports))
        self.bound_ports = [s.getsockname()[1] for s in bound_sockets]
        for bs in bound_sockets:
            bs.close()

        self.progress_lock = threading.Lock()
        self.start = time.time()
        self.mgr = mgr
        mgr.testStart(self)

        self.tmpdir = os.path.abspath(os.path.join(TmpDir, self.name))
        self.diag = os.path.join(self.tmpdir, ".diag")
        self.verbose = os.path.join(self.tmpdir, ".verbose")
        self.baselines = [os.path.abspath(os.path.join(d, self.name)) for d in BaselineDirs]
        self.diagmsgs = []
        self.utime = -1
        self.utime_base = self.mgr.testTimingBaseline(self)
        self.utime_perc = 0.0
        self.utime_exceeded = False

        self.rmTmp()
        mkdir(self.tmpdir)

        for d in self.baselines:
            mkdir(d)

        for (fname, lines) in self.files:
            fname = os.path.join(self.tmpdir, fname)
            subdir = os.path.dirname(fname)
            if subdir != "":
                mkdir(subdir)

            try:
                ffile = open(fname, "w")
            except IOError as e:
                error("cannot write test's additional file '%s'" % fname)

            for line in lines:
                ffile.write(line)

            ffile.close()

        for file in self.copy_files:
            src = replaceEnvs(file)
            try:
                shutil.copy2(src, self.tmpdir)
            except IOError as e:
                error("cannot copy %s: %s" % (src, e))

        for (file, content) in self.contents:
            localfile = os.path.join(self.tmpdir, os.path.basename(file))
            out = io.open(localfile, "w", encoding=getDefaultBtestEncoding())
            try:
                for line in content:
                    out.write(line)
            except UnicodeEncodeError as e:
                error("unicode encode error in file %s: %s" % (localfile, e))
            out.close()

        self.log = open(os.path.join(self.tmpdir, ".log"), "w")
        self.stdout = open(os.path.join(self.tmpdir, ".stdout"), "w")
        self.stderr = open(os.path.join(self.tmpdir, ".stderr"), "w")

        for cmd in self.requires:
            (success, rc) = self.execute(cmd, apply_alternative=self.alternative)
            if not success:
                self.mgr.testSkipped(self)

                if not Options.tmps:
                    self.rmTmp()

                self.finish()
                return

        # Spawn thread that monitors for progress updates.
        # Note: We do indeed spawn a thread here, not a process, so
        # that the callback can modify the test object to maintain
        # state.
        def monitor_cb():
            while not self.monitor_quit.is_set():
                self.parseProgress()
                time.sleep(0.1)

        self.monitor = threading.Thread(target=monitor_cb)
        self.monitor_quit = threading.Event()
        self.monitor.start()

        # Run test's commands.
        # First, construct a series of command sequences: each sequence
        # consists of test commands with an optional teardown that always
        # runs, regardless of prior test failures.
        seq = CmdSeq()

        if Initializer:
            seq.cmds.append(CmdLine("%s %s" % (Initializer, self.name), True, 1, ""))

        seq.cmds += self.cmdseqs

        if Finalizer:
            seq.cmds.append(CmdLine("%s %s" % (Finalizer, self.name), True, 1, ""))

        if Teardown:
            seq.teardown = CmdLine("%s %s" % (Teardown, self.name), True, 1, "")

        failures = 0
        rc = 0

        # Executes the provided CmdSeq command sequence. Helper function, so
        # we can recurse when a CmdSeq's command list includes other
        # sequences.
        def run_cmdseq(seq):
            nonlocal failures, rc
            need_teardown = False

            # Run commands only when successful so far, if the most recent
            # command asked to continue despite error (code 100), or in Sphinx
            # mode.
            if failures == 0 or rc == 100 or Options.sphinx:
                skip_part = -1

                for cmd in seq.cmds:
                    # If the next command is a CmdSeq, process it recursively
                    # first. This processes teardowns for those sequences as
                    # needed, and skips them when nothing was actually run in
                    # a CmdSeq.
                    if isinstance(cmd, CmdSeq):
                        need_teardown |= run_cmdseq(cmd)
                        continue

                    if skip_part >= 0 and skip_part == cmd.part:
                        continue

                    (success, rc) = self.execute(cmd, apply_alternative=self.alternative)
                    need_teardown = True

                    if not success:
                        failures += 1

                        if Options.sphinx:
                            # We still execute the remaining commands and
                            # raise a failure for each one that fails.
                            self.mgr.testFailed(self)
                            skip_part = cmd.part
                            continue

                        if failures == 1:
                            self.mgr.testFailed(self)

                        if rc != 100:
                            break

            if need_teardown and seq.teardown:
                (success, teardown_rc) = self.execute(seq.teardown,
                                                      apply_alternative=self.alternative,
                                                      addl_envs={
                                                          'TEST_FAILED': int(failures > 0),
                                                          'TEST_LAST_RETCODE': rc
                                                      })

                # A teardown can fail an otherwise successful test run, with
                # the same special-casing of return codes 100 and 200. When
                # failing on top of an already failing run, the return code
                # will override the previous one.
                # If a failing teardown wants to preserve the run's existing
                # failing error code, it has access to it via the
                # TEST_LAST_RETCODE environment variable.
                if not success:
                    rc = teardown_rc
                    failures += 1
                    if Options.sphinx or failures == 1:
                        self.mgr.testFailed(self)

            return need_teardown

        run_cmdseq(seq)

        # Return code 200 aborts further processing, now that any teardowns
        # have run. btest-diff uses this code when we run with
        # --update-interactive and the user aborts the run.
        if rc == 200:
            # Abort all tests.
            self.monitor_quit.set()
            # Flush remaining command output prior to exit:
            mgr.testReplayOutput(self)
            sys.exit(1)

        self.utime_perc = 0.0
        self.utime_exceeded = False

        if failures == 0:
            # If we don't have a timing baseline, we silently ignore that so
            # that on systems that can't measure execution time, the test
            # will just pass.
            if self.utime_base >= 0 and self.utime >= 0:
                delta = getOption("TimingDeltaPerc", "1.0")
                self.utime_perc = (100.0 * (self.utime - self.utime_base) / self.utime_base)
                self.utime_exceeded = (abs(self.utime_perc) > float(delta))

            if self.utime_exceeded and not Options.update_times:
                self.diagmsgs += [
                    "'%s' exceeded permitted execution time deviation%s" %
                    (self.name, self.timePostfix())
                ]
                self.mgr.testFailed(self)
            else:
                self.mgr.testSucceeded(self)

            if not Options.tmps and self.reruns == 0:
                self.rmTmp()

        self.finish()

    def finish(self):
        if self.bound_ports:
            self.mgr.returnPorts([p for p in self.bound_ports])
            self.bound_ports = []

        for d in self.baselines:
            try:
                # Try removing the baseline directory. If it works, it's
                # empty, i.e., no baseline was created.
                os.rmdir(d)
            except OSError:
                pass

        self.log.close()
        self.stdout.close()
        self.stderr.close()

        if self.monitor:
            self.monitor_quit.set()
            self.monitor.join()

    def execute(self, cmd, apply_alternative=None, addl_envs=None):
        filter_cmd = None
        cmdline = cmd.cmdline
        env = {}

        # Apply alternative if requested.
        if apply_alternative:
            alt = Alternatives[apply_alternative]

            try:
                (path, executable) = os.path.split(cmdline.split()[0])
                filter_cmd = alt.filters[executable]
            except LookupError:
                pass

            for (key, val) in alt.substitutions.items():
                cmdline = re.sub("\\b" + re.escape(key) + "\\b", val, cmdline)

            env = alt.envs

        localfile = os.path.join(self.tmpdir, os.path.basename(cmd.file))

        if filter_cmd and cmd.expect_success:
            # Do not apply filter if we expect failure.
            # This is not quite correct as it does not necessarily need to be
            # the %INPUT file which we are filtering ...
            filtered = os.path.join(self.tmpdir, "filtered-%s" % os.path.basename(localfile))

            filter = CmdLine("%s %s %s" % (filter_cmd, localfile, filtered), True, 1, "")
            (success, rc) = self.execute(filter, apply_alternative=None)
            if not success:
                return (False, rc)

            mv = CmdLine("mv %s %s" % (filtered, localfile), True, 1, "")
            (success, rc) = self.execute(mv, apply_alternative=None)
            if not success:
                return (False, rc)

        self.mgr.testCommand(self, cmd)

        # Replace special names.
        if localfile:
            cmdline = RE_INPUT.sub(localfile, cmdline)

        cmdline = RE_DIR.sub(self.dir, cmdline)

        print("%s (expect %s)" % (cmdline, ("failure", "success")[cmd.expect_success]),
              file=self.log)

        # Additional environment variables provided by the caller override
        # any existing ones, but are generally not assumed to collide:
        if addl_envs:
            env.update(addl_envs)

        env = self.prepareEnv(cmd, env)
        measure_time = self.measure_time and (Options.update_times or self.utime_base >= 0)

        (success, rc, utime) = runTestCommandLine(cmdline, measure_time, cwd=self.tmpdir,
                                                  shell=True, env=env, stderr=self.stderr,
                                                  stdout=self.stdout)

        if utime > 0:
            self.utime += utime

        if success:
            if cmd.expect_success:
                return (True, rc)

            self.diagmsgs += ["'%s' succeeded unexpectedly (exit code 0)" % cmdline]
            return (False, 0)
        else:
            if not cmd.expect_success:
                return (True, rc)

            self.diagmsgs += ["'%s' failed unexpectedly (exit code %s)" % (cmdline, rc)]
            return (False, rc)

    def rmTmp(self):
        try:
            if os.path.isfile(self.tmpdir):
                os.remove(self.tmpdir)

            if os.path.isdir(self.tmpdir):
                subprocess.call("rm -rf %s 2>/dev/null" % self.tmpdir, shell=True)
        except OSError as e:
            error("cannot remove tmp directory %s: %s" % (self.tmpdir, e))

    # Prepares the environment for the child processes.
    def prepareEnv(self, cmd, addl={}):
        env = copy.deepcopy(os.environ)
        env["TEST_BASELINE"] = ":".join(self.baselines)
        env["TEST_DIAGNOSTICS"] = self.diag
        env["TEST_MODE"] = Options.mode.upper()
        env["TEST_NAME"] = self.name
        env["TEST_VERBOSE"] = self.verbose
        env["TEST_PART"] = str(cmd.part)
        env["TEST_BASE"] = TestBase

        for (key, val) in addl.items():
            # Convert val to string since otherwise os.environ (and our
            # clone) trigger a TypeError upon insertion, and the caller may
            # be unaware.
            env[key.upper()] = str(val)

        for idx, key in enumerate(sorted(self.ports)):
            env[key] = str(self.bound_ports[idx]) + "/tcp"

        return env

    def addFiles(self, files):
        # files is a list of tuples (fname, lines).
        self.files = files

    # If timing information is requested and available, returns a string
    # that summarizes the time spent for the test. Otherwise, returns an
    # empty string.
    def timePostfix(self):
        if self.utime_base >= 0 and self.utime >= 0:
            return " (%+.1f%%)" % self.utime_perc
        else:
            return ""

    # Picks up any progress output that a test has written out.
    def parseProgress(self):
        with self.progress_lock:
            path = os.path.join(self.tmpdir, ".progress.*")

            for file in sorted(glob.glob(path)):
                try:
                    for line in open(file):
                        msg = line.strip()
                        self.mgr.testProgress(self, msg)

                    os.unlink(file)
                except (IOError, OSError):
                    pass


### Output handlers.

class OutputHandler:
    def __init__(self, options):
        """Base class for reporting progress and results to the user.

        We derive several classes from this one, with the one being used
        depending on which output the user wants.

        A handler's methods are called from the TestMgr and may be called
        interleaved from different tests. However, the TestMgr locks before
        each call so that it's guaranteed that two calls don't run
        concurrently.

        options: An optparser with the global options.
        """
        self._buffered_output = {}
        self._options = options

    def prepare(self, mgr):
        """The TestManager calls this with itself as an argument just before
        it starts running tests."""
        pass

    def options(self):
        """Returns the current optparser instance."""
        return self._options

    def threadPrefix(self):
        """With multiple threads, returns a string with the thread's name in
        a form suitable to prefix output with. With a single thread, returns
        the empty string."""
        if self.options().threads > 1:
            return "[%s]" % multiprocessing.current_process().name
        else:
            return ""

    def _output(self, msg, nl=True, file=None):
        if not file:
            file = sys.stderr

        if nl:
            print(msg, file=file)
        else:
            if msg:
                print(msg, end=" ", file=file)

    def output(self, test, msg, nl=True, file=None):
        """Output one line of output to the user.
        Unless we're just using a single thread, this will be buffered until
        the test has finished; then all output is printed as a block.

        This should only be called from other members of this class, or
        derived classes, not from tests.
        """
        if self.options().threads < 2:
            self._output(msg, nl, file)
            return

        try:
            self._buffered_output[test.name] += [(msg, nl, file)]
        except KeyError:
            self._buffered_output[test.name] = [(msg, nl, file)]

    def replayOutput(self, test):
        """Prints out all output buffered in threaded mode by output()."""
        if test.name not in self._buffered_output:
            return

        for (msg, nl, file) in self._buffered_output[test.name]:
            self._output(msg, nl, file)

        self._buffered_output[test.name] = []

    # Methods to override.

    def testStart(self, test):
        """Called just before a test begins."""

    def testCommand(self, test, cmdline):
        """Called just before a command line is executed for a trace."""

    def testProgress(self, test, msg):
        """Called when a test signals having made progress."""

    def testSucceeded(self, test, msg):
        """Called when a test was successful."""

    def testFailed(self, test, msg):
        """Called when a test failed."""

    def testSkipped(self, test, msg):
        """Called when a test is skipped because its dependencies aren't met."""

    def testFinished(self, test, msg):
        """
        Called just after a test has finished being processed, independent of
        success or failure. Not called for skipped tests.
        """

    def testUnstable(self, test, msg):
        """Called when a test failed initially but succeeded in a retry."""

    def finished(self):
        """Called when all tests have been executed."""


class Forwarder(OutputHandler):
    """
    Forwards output to several other handlers.

    options: An optparser with the global options.
    handlers: List of output handlers to forward to.
""" def __init__(self, options, handlers): OutputHandler.__init__(self, options) self._handlers = handlers def prepare(self, mgr): """Called just before test manager starts running tests.""" for h in self._handlers: h.prepare(mgr) def testStart(self, test): """Called just before a test begins.""" for h in self._handlers: h.testStart(test) def testCommand(self, test, cmdline): """Called just before a command line is exected for a trace.""" for h in self._handlers: h.testCommand(test, cmdline) def testProgress(self, test, msg): """Called when a test signals having made progress.""" for h in self._handlers: h.testProgress(test, msg) def testSucceeded(self, test, msg): """Called when a test was successful.""" for h in self._handlers: h.testSucceeded(test, msg) def testFailed(self, test, msg): """Called when a test failed.""" for h in self._handlers: h.testFailed(test, msg) def testSkipped(self, test, msg): for h in self._handlers: h.testSkipped(test, msg) def testFinished(self, test, msg): for h in self._handlers: h.testFinished(test, msg) def testUnstable(self, test, msg): """Called when a test failed initially but succeeded in a retry.""" for h in self._handlers: h.testUnstable(test, msg) def replayOutput(self, test): for h in self._handlers: h.replayOutput(test) def finished(self): for h in self._handlers: h.finished() class Standard(OutputHandler): def testStart(self, test): self.output(test, self.threadPrefix(), nl=False) self.output(test, "%s ..." 
% test.displayName(), nl=False) test._std_nl = False def testCommand(self, test, cmdline): pass def testProgress(self, test, msg): """Called when a test signals having made progress.""" if not test._std_nl: self.output(test, "") self.output(test, " - " + msg) test._std_nl = True def testSucceeded(self, test, msg): sys.stdout.flush() self.finalMsg(test, msg) def testFailed(self, test, msg): self.finalMsg(test, msg) def testSkipped(self, test, msg): self.finalMsg(test, msg) def finalMsg(self, test, msg): if test._std_nl: self.output(test, self.threadPrefix(), nl=False) self.output(test, "%s ..." % test.displayName(), nl=False) self.output(test, msg) def testUnstable(self, test, msg): self.finalMsg(test, msg) class Console(OutputHandler): """ Output handler that writes colorful progress report to the console. This handler works well in settings that can handle coloring but not cursor placement commands (for example because moving to the beginning of the line overwrites other surrounding output); it's what the ``--show-all`` output uses. In contrast, the *CompactConsole* handler uses cursor placement in addition for a more space-efficient output. """ Green = "\033[32m" Red = "\033[31m" Yellow = "\033[33m" Gray = "\033[37m" DarkGray = "\033[1;30m" Normal = "\033[0m" def __init__(self, options): OutputHandler.__init__(self, options) self.show_all = True def testStart(self, test): msg = "[%3d%%] %s ..." 
% (test.mgr.percentage(), test.displayName()) self._consoleOutput(test, msg, False) def testProgress(self, test, msg): """Called when a test signals having made progress.""" msg = self.DarkGray + "(%s)" % msg + self.Normal self._consoleOutput(test, msg, True) def testSucceeded(self, test, msg): if test.known_failure: msg = self.Yellow + msg + self.Normal else: msg = self.Green + msg + self.Normal self._consoleOutput(test, msg, self.show_all) def testFailed(self, test, msg): if test.known_failure: msg = self.Yellow + msg + self.Normal else: msg = self.Red + msg + self.Normal self._consoleOutput(test, msg, True) def testUnstable(self, test, msg): msg = self.Yellow + msg + self.Normal self._consoleOutput(test, msg, True) def testSkipped(self, test, msg): msg = self.Gray + msg + self.Normal self._consoleOutput(test, msg, self.show_all) def finished(self): sys.stdout.flush() def _consoleOutput(self, test, msg, sticky): self._consoleWrite(test, msg, sticky) def _consoleWrite(self, test, msg, sticky): sys.stdout.write(msg.strip() + " ") if sticky: sys.stdout.write("\n") sys.stdout.flush() class CompactConsole(Console): """ Output handler that writes compact, colorful progress report to the console while also keeping the output compact by keeping output only for failing tests. This handler adds cursor mods and navigation to the coloring provided by the Console class and hence needs settings that can handle both. 
""" CursorOff = "\033[?25l" CursorOn = "\033[?25h" EraseToEndOfLine = "\033[2K" def __init__(self, options): Console.__init__(self, options) self.show_all = False def cleanup(): sys.stdout.write(self.CursorOn) atexit.register(cleanup) def testStart(self, test): test.console_last_line = None self._consoleOutput(test, "", False) sys.stdout.write(self.CursorOff) def testProgress(self, test, msg): """Called when a test signals having made progress.""" msg = " " + self.DarkGray + "(%s)" % msg + self.Normal self._consoleAugment(test, msg) def testFinished(self, test, msg): test.console_last_line = None def finished(self): sys.stdout.write(self.EraseToEndOfLine) sys.stdout.write("\r") sys.stdout.write(self.CursorOn) sys.stdout.flush() def _consoleOutput(self, test, msg, sticky): line = "[%3d%%] %s ..." % (test.mgr.percentage(), test.displayName()) if msg: line += " " + msg test.console_last_line = line self._consoleWrite(test, line, sticky) def _consoleAugment(self, test, msg): sys.stdout.write(self.EraseToEndOfLine) sys.stdout.write(" %s" % msg.strip()) sys.stdout.write("\r%s" % test.console_last_line) sys.stdout.flush() def _consoleWrite(self, test, msg, sticky): sys.stdout.write(chr(27) + '[2K') sys.stdout.write("\r%s" % msg.strip()) if sticky: sys.stdout.write("\n") test.console_last_line = None sys.stdout.flush() class Brief(OutputHandler): """Output handler for producing the brief output format.""" def testStart(self, test): pass def testCommand(self, test, cmdline): pass def testProgress(self, test, msg): """Called when a test signals having made progress.""" pass def testSucceeded(self, test, msg): pass def testFailed(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.output(test, "%s ... %s" % (test.displayName(), msg)) def testUnstable(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.output(test, "%s ... 
%s" % (test.displayName(), msg)) def testSkipped(self, test, msg): pass class Verbose(OutputHandler): """Output handler for producing the verbose output format.""" def testStart(self, test): self.output(test, self.threadPrefix(), nl=False) self.output(test, "%s ..." % test.displayName()) def testCommand(self, test, cmdline): part = "" if cmdline.part > 1: part = " [part #%d]" % cmdline.part self.output(test, self.threadPrefix(), nl=False) self.output(test, " > %s%s" % (cmdline.cmdline, part)) def testProgress(self, test, msg): """Called when a test signals having made progress.""" self.output(test, " - " + msg) def testSucceeded(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.showTestVerbose(test) self.output(test, "... %s %s" % (test.displayName(), msg)) def testFailed(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.showTestVerbose(test) self.output(test, "... %s %s" % (test.displayName(), msg)) def testUnstable(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.showTestVerbose(test) self.output(test, "... %s %s" % (test.displayName(), msg)) def testSkipped(self, test, msg): self.output(test, self.threadPrefix(), nl=False) self.showTestVerbose(test) self.output(test, "... %s %s" % (test.displayName(), msg)) def showTestVerbose(self, test): if not os.path.exists(test.verbose): return for line in open(test.verbose): self.output(test, " > [test-verbose] %s" % line.strip()) class Diag(OutputHandler): def __init__(self, options, all=False, file=None): """Output handler for producing the diagnostic output format. options: An optparser with the global options. all: Print diagnostics also for succeeding tests. file: Output into given file rather than console. 
""" OutputHandler.__init__(self, options) self._all = all self._file = file def showDiag(self, test): """Generates diagnostics for a test.""" for line in test.diagmsgs: self.output(test, " % " + line, True, self._file) for f in (test.diag, os.path.join(test.tmpdir, ".stderr")): if not f: continue if os.path.isfile(f): self.output(test, " % cat " + os.path.basename(f), True, self._file) for line in open(f): self.output(test, " " + line.rstrip(), True, self._file) self.output(test, "", True, self._file) if self.options().wait and not self._file: self.output(test, " ...") try: sys.stdin.readline() except KeyboardInterrupt: sys.exit(1) def testCommand(self, test, cmdline): pass def testSucceeded(self, test, msg): if self._all: if self._file: self.output(test, "%s ... %s" % (test.displayName(), msg), True, self._file) self.showDiag(test) def testFailed(self, test, msg): if self._file: self.output(test, "%s ... %s" % (test.displayName(), msg), True, self._file) if (not test.known_failure) or self._all: self.showDiag(test) def testUnstable(self, test, msg): if self._file: self.output(test, "%s ... %s" % (test.displayName(), msg), True, self._file) def testSkipped(self, test, msg): if self._file: self.output(test, "%s ... %s" % (test.displayName(), msg), True, self._file) class SphinxOutput(OutputHandler): def __init__(self, options, all=False, file=None): """Output handler for producing output when running from Sphinx. The main point here is that we save all diagnostic output to $BTEST_RST_OUTPUT. options: An optparser with the global options. 
""" OutputHandler.__init__(self, options) self._output = None try: self._rst_output = os.environ["BTEST_RST_OUTPUT"] except KeyError: print("warning: environment variable BTEST_RST_OUTPUT not set, will not produce output", file=sys.stderr) self._rst_output = None def testStart(self, test): self._output = None def testCommand(self, test, cmdline): if not self._rst_output: return self._output = "%s#%s" % (self._rst_output, cmdline.part) self._part = cmdline.part def testSucceeded(self, test, msg): pass def testFailed(self, test, msg): if not self._output: return out = open(self._output, "a") print("\n.. code-block:: none ", file=out) print("\n ERROR executing test '%s' (part %s)\n" % (test.displayName(), self._part), file=out) for line in test.diagmsgs: print(" % " + line, file=out) test.diagmsgs = [] for f in (test.diag, os.path.join(test.tmpdir, ".stderr")): if not f: continue if os.path.isfile(f): print(" % cat " + os.path.basename(f), file=out) for line in open(f): print(" %s" % line.strip(), file=out) print(file=out) def testUnstable(self, test, msg): pass def testSkipped(self, test, msg): pass class XMLReport(OutputHandler): RESULT_PASS = "pass" RESULT_FAIL = "failure" RESULT_SKIP = "skipped" RESULT_UNSTABLE = "unstable" def __init__(self, options, xmlfile): """Output handler for producing an XML report of test results. options: An optparser with the global options. 
file: Output into given file """ OutputHandler.__init__(self, options) self._file = xmlfile self._start = time.time() self._timestamp = datetime.now().isoformat() def prepare(self, mgr): self._results = mgr.list([]) def testStart(self, test): pass def testCommand(self, test, cmdline): pass def makeTestCaseElement(self, doc, testsuite, name, duration): parts = name.split('.') if len(parts) > 1: classname = ".".join(parts[:-1]) name = parts[-1] else: classname = parts[0] name = parts[0] e = doc.createElement("testcase") e.setAttribute("classname", classname) e.setAttribute("name", name) e.setAttribute("time", str(duration)) testsuite.appendChild(e) return e def getContext(self, test, context_file): context = "" for line in test.diagmsgs: context += " % " + line + "\n" for f in (test.diag, os.path.join(test.tmpdir, context_file)): if not f: continue if os.path.isfile(f): context += " % cat " + os.path.basename(f) + "\n" for line in open(f): context += " " + line.strip() + "\n" return context def addTestResult(self, test, status): context = "" if status != self.RESULT_PASS: context = self.getContext(test, ".stderr") res = { "name": test.displayName(), "status": status, "context": context, "duration": time.time() - test.start, } self._results.append(res) def testSucceeded(self, test, msg): self.addTestResult(test, self.RESULT_PASS) def testFailed(self, test, msg): self.addTestResult(test, self.RESULT_FAIL) def testUnstable(self, test, msg): self.addTestResult(test, self.RESULT_UNSTABLE) def testSkipped(self, test, msg): self.addTestResult(test, self.RESULT_SKIP) def finished(self): num_tests = 0 num_failures = 0 doc = xml.dom.minidom.Document() testsuite = doc.createElement("testsuite") doc.appendChild(testsuite) for res in self._results: test_case = self.makeTestCaseElement(doc, testsuite, res["name"], res["duration"]) if res["status"] != self.RESULT_PASS: e = doc.createElement(res["status"]) e.setAttribute("type", res["status"]) text_node = 
doc.createTextNode(res["context"])
                e.appendChild(text_node)
                test_case.appendChild(e)

                if res["status"] == self.RESULT_FAIL:
                    num_failures += 1

            num_tests += 1

        # These attributes are set in sorted order so that the resulting XML
        # output is stable across Python versions. Before Python 3.8, attributes
        # appear in sorted order. After Python 3.8, attributes appear in the
        # order specified by the user. It would be best to use an XML canonifier
        # method here, and Python 3.8+ does provide one, except earlier versions
        # would need to rely on a third-party lib to do the same. References:
        # https://bugs.python.org/issue34160
        # https://mail.python.org/pipermail/python-dev/2019-March/156709.html
        testsuite.setAttribute("errors", str(0))
        testsuite.setAttribute("failures", str(num_failures))
        testsuite.setAttribute("hostname", socket.gethostname())
        testsuite.setAttribute("tests", str(num_tests))
        testsuite.setAttribute("time", str(time.time() - self._start))
        testsuite.setAttribute("timestamp", self._timestamp)

        print(doc.toprettyxml(indent=" "), file=self._file)
        self._file.close()


class ChromeTracing(OutputHandler):
    """Output in Chrome tracing format. Output files can be loaded into the
    Chrome browser under about:tracing, or converted to standalone HTML files
    with `trace2html`.
    """

    def __init__(self, options, tracefile):
        OutputHandler.__init__(self, options)
        self._file = tracefile

    def prepare(self, mgr):
        self._results = mgr.list([])

    def testFinished(self, test, _):
        self._results.append({
            "name": test.name,
            "ts": test.start * 1e6,
            "tid": multiprocessing.current_process().pid,
            "pid": 1,
            "ph": "X",
            "cat": "test",
            "dur": (time.time() - test.start) * 1e6,
        })

    def finished(self):
        print(json.dumps(list(self._results)), file=self._file)
        self._file.close()


### Timing measurements.

# Base class for all timers.
class TimerBase:
    # Returns true if time measurements are supported by this class on the
    # current platform. Must be overridden by derived classes.
    def available(self):
        raise NotImplementedError("Timer.available not implemented")

    # Runs a subprocess and measures its execution time. Arguments are as with
    # runSubprocess. Return value is the same as with runTestCommandLine().
    # This method must only be called if available() returns True. Must be
    # overridden by derived classes.
    def timeSubprocess(self, *args, **kwargs):
        raise NotImplementedError("Timer.timeSubprocess not implemented")


# Linux version of time measurements. Uses "perf".
class LinuxTimer(TimerBase):
    def __init__(self):
        self.perf = getOption("PerfPath", which("perf"))

    def available(self):
        if not platform() == "Linux":
            return False

        if not self.perf or not os.path.exists(self.perf):
            return False

        # Make sure it works.
        (success, rc) = runSubprocess("%s stat -o /dev/null true 2>/dev/null" % self.perf,
                                      shell=True)
        return success and rc == 0

    def timeSubprocess(self, *args, **kwargs):
        assert self.perf

        cargs = args
        ckwargs = kwargs

        targs = [self.perf, "stat", "-o", ".timing", "-x", " ", "-e", "instructions",
                 "sh", "-c"]
        targs += [" ".join(cargs)]
        cargs = [targs]
        del ckwargs["shell"]

        (success, rc) = runSubprocess(*cargs, **ckwargs)

        utime = -1
        try:
            cwd = kwargs["cwd"] if "cwd" in kwargs else "."
            for line in open(os.path.join(cwd, ".timing")):
                if "instructions" in line and "not supported" not in line:
                    try:
                        m = line.split()
                        utime = int(m[0])
                    except ValueError:
                        pass
        except IOError:
            pass

        return (success, rc, utime)


# Walk the given directory and return all test files.
def findTests(paths, expand_globs=False):
    tests = []

    ignore_files = getOption("IgnoreFiles", "").split()
    ignore_dirs = getOption("IgnoreDirs", "").split()

    expanded = set()
    for p in paths:
        p = os.path.join(TestBase, p)
        if expand_globs:
            for d in glob.glob(p):
                if os.path.isdir(d):
                    expanded.add(d)
        else:
            expanded.add(p)

    for path in expanded:
        rpath = os.path.relpath(path, TestBase)

        if os.path.isdir(path) and os.path.basename(path) in ignore_dirs:
            continue

        ignores = [os.path.join(path, dir) for dir in ignore_dirs]

        m = RE_PART.match(rpath)
        if m:
            error("Do not specify files with part numbers directly, use the base test name (%s)"
                  % rpath)

        if os.path.isfile(path):
            tests += readTestFile(path)

            # See if there are more parts.
            for part in glob.glob("%s#*" % rpath):
                tests += readTestFile(part)

        elif os.path.isdir(path):
            for (dirpath, dirnames, filenames) in os.walk(path):
                ign = os.path.join(dirpath, ".btest-ignore")
                if os.path.isfile(os.path.join(ign)):
                    del dirnames[0:len(dirnames)]
                    continue

                for file in filenames:
                    for gl in ignore_files:
                        if fnmatch.fnmatch(file, gl):
                            break
                    else:
                        tests += readTestFile(os.path.join(dirpath, file))

                # Don't recurse into these.
                for (dir, path) in [(dir, os.path.join(dirpath, dir)) for dir in dirnames]:
                    for skip in ignores:
                        if path == skip:
                            dirnames.remove(dir)

        else:
            # See if we have test(s) named like this in our configured set.
            found = False
            for t in Config.configured_tests:
                if t and rpath == t.name:
                    tests += [t]
                    found = True

            if not found:
                # See if there are parts.
                for part in glob.glob("%s#*" % rpath):
                    tests += readTestFile(part)
                    found = True

            if not found:
                error("cannot read %s" % path)

    return tests


# Merge parts belonging to the same test into one.
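The directory walk above keeps a file only if it matches none of the `IgnoreFiles` glob patterns, using Python's `for`/`else` idiom (the `else` branch runs only when the loop completes without a `break`). A reduced, self-contained sketch of that filter (`keep_files` is an illustrative name, not part of btest):

```python
# Sketch of the IgnoreFiles glob filter used inside findTests() above.
# A filename survives only if it matches none of the ignore patterns.
import fnmatch


def keep_files(filenames, ignore_globs):
    kept = []
    for name in filenames:
        for gl in ignore_globs:
            if fnmatch.fnmatch(name, gl):
                break  # matched an ignore pattern; drop this file
        else:
            kept.append(name)  # loop never broke: no pattern matched
    return kept


print(keep_files(["t1.test", "t1.test~", ".#t2.test", "t3.test"],
                 ["*~", ".#*"]))
# ['t1.test', 't3.test']
```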
def mergeTestParts(tests):
    def key(t):
        return (t.basename, t.number, t.part)

    out = {}
    for t in sorted(tests, key=key):
        try:
            other = out[t.name]
            assert t.part != other.part
            out[t.name].mergePart(t)
        except KeyError:
            out[t.name] = t

    return sorted([t for t in out.values()], key=key)


# Read the given test file and instantiate one or more tests from it.
def readTestFile(filename):
    def newTest(content, previous):
        if not previous:
            t = Test(filename)
            if t.parse(content, filename):
                return t
            else:
                return None
        else:
            return previous.clone(content)

    if os.path.basename(filename) == ".btest-ignore":
        return []

    try:
        input = io.open(filename, encoding=getDefaultBtestEncoding(), newline='')
    except IOError as e:
        error("cannot read test file: %s" % e)

    tests = []
    files = []
    content = []
    previous = None
    file = (None, [])
    state = "test"

    try:
        lines = [line for line in input]
    except UnicodeDecodeError as e:
        # This error is caused by either a test file with an invalid UTF-8 byte
        # sequence, or by python making the wrong assumption about the encoding
        # of a test file (this can happen if a test file has valid UTF-8 but
        # none of the locale environment variables LANG, LC_CTYPE, or LC_ALL
        # were defined prior to running btest). However, if all test files
        # are ASCII, then this error should never occur.
error("unicode decode error in file %s: %s" % (filename, e)) for line in lines: if state == "test": m = RE_START_FILE.search(line) if m: state = "file" file = (m.group(1), []) continue m = RE_END_FILE.search(line) if m: error("%s: unexpected %sEND-FILE" % (filename, CommandPrefix)) m = RE_START_NEXT_TEST.search(line) if not m: content += [line] continue t = newTest(content, previous) if not t: return [] tests += [t] previous = t content = [] elif state == "file": m = RE_END_FILE.search(line) if m: state = "test" files += [file] file = (None, []) continue file = (file[0], file[1] + [line]) else: error("internal: unknown state %s" % state) if state == "file": files += [file] input.close() t = newTest(content, previous) if t: tests.append(t) for t in tests: if t: t.addFiles(files) return tests def jOption(option, _, __, parser): val = multiprocessing.cpu_count() if parser.rargs and not parser.rargs[0].startswith('-'): try: # Next argument should be the non-negative number of threads. # Turn 0 into 1, for backward compatibility. val = max(1, int(parser.rargs[0])) parser.rargs.pop(0) except ValueError: # Default to using all CPUs. Flagging this as error risks # confusing subsequent non-option arguments with arguments # intended for -j. pass setattr(parser.values, option.dest, val) # Output markup language documenting tests. def outputDocumentation(tests, fmt): def indent(i): return " " * i def doc(t): return t.doc if t.doc else ["No documentation."] # The "sectionlist" ensures that sections are output in same order as # they appear in the "tests" list. 
sectionlist = [] sections = {} for t in tests: ids = t.name.split(".") path = ".".join(ids[:-1]) if path not in sectionlist: sectionlist.append(path) s = sections.setdefault(path, []) s.append(t) for s in sectionlist: tests = sections[s] if fmt == "rst": print("%s" % s) print("-" * len(s)) print() for t in tests: print("%s``%s``:" % (indent(1), t.name)) for d in doc(t): print("%s%s" % (indent(2), d)) print() if fmt == "md": print("# %s" % s) print() for t in tests: print("* `%s`:" % t.name) for d in doc(t): print("%s%s" % (indent(1), d)) print() ### Main if __name__ == '__main__': # Python 3.8+ on macOS no longer uses "fork" as the default start-method # See https://github.com/zeek/btest/issues/26 pyver_maj = sys.version_info[0] pyver_min = sys.version_info[1] if (pyver_maj == 3 and pyver_min >= 8) or pyver_maj > 3: multiprocessing.set_start_method('fork') optparser = optparse.OptionParser(usage="%prog [options] ", version=VERSION) optparser.add_option("-U", "--update-baseline", action="store_const", dest="mode", const="UPDATE", help="create a new baseline from the tests' output") optparser.add_option("-u", "--update-interactive", action="store_const", dest="mode", const="UPDATE_INTERACTIVE", help="interactively asks whether to update baseline for a failed test") optparser.add_option("-d", "--diagnostics", action="store_true", dest="diag", default=False, help="show diagnostic output for failed tests") optparser.add_option("-D", "--diagnostics-all", action="store_true", dest="diagall", default=False, help="show diagnostic output for ALL tests") optparser.add_option( "-f", "--file-diagnostics", action="store", type="string", dest="diagfile", default="", help="write diagnostic output for failed tests into file; if file exists, it is overwritten") optparser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False, help="show commands as they are executed") optparser.add_option("-w", "--wait", action="store_true", dest="wait", default=False, 
                         help="wait for <enter> after each failed (with -d) or all (with -D) tests")
    optparser.add_option("-b", "--brief", action="store_true", dest="brief", default=False,
                         help="outputs only failed tests")
    optparser.add_option("-c", "--config", action="store", type="string", dest="config",
                         default=ConfigDefault, help="configuration file")
    optparser.add_option("-t", "--tmp-keep", action="store_true", dest="tmps", default=False,
                         help="do not delete tmp files created for running tests")
    optparser.add_option(
        "-j", "--jobs", action="callback", callback=jOption, dest="threads", default=1,
        help="number of threads running tests in parallel; with no argument will use all CPUs")
    optparser.add_option("-g", "--groups", action="store", type="string", dest="groups",
                         default="",
                         help="execute only tests of given comma-separated list of groups")
    optparser.add_option("-r", "--rerun", action="store_true", dest="rerun", default=False,
                         help="execute commands for tests that failed last time")
    optparser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False,
                         help="suppress information output other than about failed tests")
    optparser.add_option(
        "-x", "--xml", action="store", type="string", dest="xmlfile", default="",
        help="write a report of test results in JUnit XML format to file; if file exists, "
        "it is overwritten")
    optparser.add_option("-a", "--alternative", action="store", type="string",
                         dest="alternatives", default=None,
                         help="activate given alternative")
    optparser.add_option("-S", "--sphinx", action="store_true", dest="sphinx", default=False,
                         help="indicates that we're running from inside Sphinx; for internal purposes")
    optparser.add_option("-T", "--update-times", action="store_true", dest="update_times",
                         default=False,
                         help="create a new timing baseline for tests being measured")
    optparser.add_option("-R", "--documentation", action="store", type="choice", dest="doc",
                         choices=("rst", "md"), metavar="format", default=None,
                         help="Output documentation for tests, supported
formats: rst, md")
    optparser.add_option(
        "-A", "--show-all", action="store_true", default=False,
        help="For console output, show one-liners for passing/skipped tests in addition to "
        "any failing ones")
    optparser.add_option("-z", "--retries", action="store", dest="retries", type="int", default=0,
                         help="Retry failed tests this many times to determine if they are unstable")
    optparser.add_option("--trace-file", action="store", dest="tracefile", default="",
                         help="write Chrome tracing file to file; if file exists, it is overwritten")
    optparser.add_option("-F", "--abort-on-failure", action="store_true", dest="abort_on_failure",
                         help="terminate after first test failure")
    optparser.add_option("-l", "--list", action="store_true", dest="list", default=False,
                         help="list available tests instead of executing them")

    optparser.set_defaults(mode="TEST")
    (Options, args) = optparser.parse_args()

    # Update-interactive mode implies single-threaded operation.
    if Options.mode == "UPDATE_INTERACTIVE" and Options.threads > 1:
        warning("ignoring requested parallelism in interactive-update mode")
        Options.threads = 1

    if not os.path.exists(Options.config):
        error("configuration file '%s' not found" % Options.config)

    # The defaults come from environment variables, plus a few additional items.
    defaults = {}
    # Changes to defaults should not change os.environ.
    defaults.update(os.environ)
    defaults["default_path"] = os.environ["PATH"]

    dirname = os.path.dirname(Options.config)
    if not dirname:
        dirname = os.getcwd()

    # If the BTEST_TEST_BASE environment var is set, we'll use that as the testbase.
    # If not, we'll use the current directory.
    TestBase = os.path.abspath(os.environ.get("BTEST_TEST_BASE", dirname))
    defaults["testbase"] = TestBase
    defaults["baselinedir"] = os.path.abspath(
        os.environ.get("BTEST_BASELINE_DIR", os.path.join(TestBase, "Baseline")))

    # Parse our config.
    Config = getcfgparser(defaults)
    Config.read(Options.config)

    defaults["baselinedir"] = getOption("BaselineDir", defaults["baselinedir"])

    min_version = getOption("MinVersion", None)
    if min_version:
        validate_version_requirement(min_version, VERSION)

    if Options.alternatives:
        # Preprocess to split into list.
        Options.alternatives = [alt.strip() for alt in Options.alternatives.split(",")
                                if alt != "-"]

        # Helper function that, if an option wasn't explicitly specified as an
        # environment variable, checks if an alternative sets it through
        # its own environment section. If so, we make that value our new default.
        # If multiple alternatives set it, we pick the value from the first.
        def get_env_from_alternative(env, opt, default, transform=None):
            for tag in Options.alternatives:
                value = getOption(env, None, section="environment-%s" % tag)
                if value is not None:
                    if transform:
                        value = transform(value)

                    defaults[opt] = value

                    # At this point, our defaults have changed, so we
                    # reread the configuration.
                    new_config = getcfgparser(defaults)
                    new_config.read(Options.config)
                    return new_config, value

            return Config, default

        (Config, TestBase) = get_env_from_alternative("BTEST_TEST_BASE", "testbase", TestBase,
                                                      lambda x: os.path.abspath(x))
        # Need to update BaselineDir - it may be interpolated from testbase.
        defaults["baselinedir"] = getOption("BaselineDir", defaults["baselinedir"])
        (Config, _) = get_env_from_alternative("BTEST_BASELINE_DIR", "baselinedir", None)

    os.chdir(TestBase)

    if Options.sphinx:
        Options.quiet = True

    if Options.quiet:
        Options.brief = True

    # Determine output handlers to use.
output_handlers = []

if Options.verbose:
    output_handlers += [Verbose(Options)]
elif Options.brief:
    output_handlers += [Brief(Options)]
else:
    if sys.stdout.isatty():
        if Options.show_all:
            output_handlers += [Console(Options)]
        else:
            output_handlers += [CompactConsole(Options)]
    else:
        output_handlers += [Standard(Options)]

if Options.diagall:
    output_handlers += [Diag(Options, True, None)]
elif Options.diag:
    output_handlers += [Diag(Options, False, None)]

if Options.diagfile:
    try:
        diagfile = open(Options.diagfile, "w", 1)
        output_handlers += [Diag(Options, Options.diagall, diagfile)]
    except IOError as e:
        print("cannot open %s: %s" % (Options.diagfile, e), file=sys.stderr)

if Options.sphinx:
    output_handlers += [SphinxOutput(Options)]

if Options.xmlfile:
    try:
        xmlfile = open(Options.xmlfile, "w", 1)
        output_handlers += [XMLReport(Options, xmlfile)]
    except IOError as e:
        print("cannot open %s: %s" % (Options.xmlfile, e), file=sys.stderr)

if Options.tracefile:
    try:
        tracefile = open(Options.tracefile, "w", 1)
        output_handlers += [ChromeTracing(Options, tracefile)]
    except IOError as e:
        print("cannot open %s: %s" % (Options.tracefile, e), file=sys.stderr)

output_handler = Forwarder(Options, output_handlers)

# Determine Timer to use.
Timer = None

if platform() == "Linux":
    t = LinuxTimer()
    if t.available():
        Timer = t

if Options.update_times and not Timer:
    warning("unable to create timing baseline because timer is not available")

# Evaluate other command line options.
if Config.has_section("environment"):
    # Here we don't want to include items from defaults.
    for (name, value) in Config.itemsNoDefaults("environment"):
        os.environ[name.upper()] = value

Alternatives = {}

if Options.alternatives:
    for tag in Options.alternatives:
        a = Alternative(tag)

        try:
            for (name, value) in Config.itemsNoDefaults("filter-%s" % tag):
                a.filters[name] = value
        except configparser.NoSectionError:
            pass

        try:
            for (name, value) in Config.itemsNoDefaults("substitution-%s" % tag):
                a.substitutions[name] = value
        except configparser.NoSectionError:
            pass

        try:
            for (name, value) in Config.itemsNoDefaults("environment-%s" % tag):
                a.envs[name] = value
        except configparser.NoSectionError:
            pass

        Alternatives[tag] = a

CommandPrefix = getOption("CommandPrefix", "@TEST-")

RE_INPUT = re.compile(r"%INPUT")
RE_DIR = re.compile(r"%DIR")
RE_ENV = re.compile(r"\$\{(\w+)\}")
RE_PART = re.compile(r"^(.*)#([0-9]+)$")

RE_IGNORE = re.compile(CommandPrefix + "IGNORE")
RE_START_NEXT_TEST = re.compile(CommandPrefix + "START-NEXT")
RE_START_FILE = re.compile(CommandPrefix + "START-FILE +([^\r\n ]*)")
RE_END_FILE = re.compile(CommandPrefix + "END-FILE")

# Commands as tuple (tag, regexp, more-than-one-is-ok, optional, group-main, group-add).
# pylint: disable=bad-whitespace
# yapf: disable
RE_EXEC = ("exec", re.compile(CommandPrefix + "EXEC(-FAIL)?: *(.*)"), True, False, 2, 1)
RE_REQUIRES = ("requires", re.compile(CommandPrefix + "REQUIRES: *(.*)"), True, True, 1, -1)
RE_GROUP = ("group", re.compile(CommandPrefix + "GROUP: *(.*)"), True, True, 1, -1)
RE_SERIALIZE = ("serialize", re.compile(CommandPrefix + "SERIALIZE: *(.*)"), False, True, 1, -1)
RE_PORT = ("port", re.compile(CommandPrefix + "PORT: *(.*)"), True, True, 1, -1)
RE_INCLUDE_ALTERNATIVE = ("alternative", re.compile(CommandPrefix + "ALTERNATIVE: *(.*)"), True, True, 1, -1)
RE_IGNORE_ALTERNATIVE = ("not-alternative", re.compile(CommandPrefix + "NOT-ALTERNATIVE: *(.*)"), True, True, 1, -1)
RE_COPY_FILE = ("copy-file", re.compile(CommandPrefix + "COPY-FILE: *(.*)"), True, True, 1, -1)
RE_KNOWN_FAILURE = ("known-failure", re.compile(CommandPrefix + "KNOWN-FAILURE"), False, True, -1, -1)
RE_MEASURE_TIME = ("measure-time", re.compile(CommandPrefix + "MEASURE-TIME"), False, True, -1, -1)
RE_DOC = ("doc", re.compile(CommandPrefix + "DOC: *(.*)"), True, True, 1, -1)
# yapf: enable
# pylint: enable=bad-whitespace

Commands = (RE_EXEC, RE_REQUIRES, RE_GROUP, RE_SERIALIZE, RE_PORT, RE_INCLUDE_ALTERNATIVE,
            RE_IGNORE_ALTERNATIVE, RE_COPY_FILE, RE_KNOWN_FAILURE, RE_MEASURE_TIME, RE_DOC)

StateFile = os.path.abspath(
    getOption("StateFile", os.path.join(defaults["testbase"], ".btest.failed.dat")))
TmpDir = os.path.abspath(getOption("TmpDir", os.path.join(defaults["testbase"], ".tmp")))
BaselineDirs = [os.path.abspath(dir) for dir in defaults["baselinedir"].split(":")]
BaselineTimingDir = os.path.abspath(
    getOption("TimingBaselineDir", os.path.join(BaselineDirs[0], "_Timing")))

Initializer = getOption("Initializer", "")
Finalizer = getOption("Finalizer", "")
Teardown = getOption("Teardown", "")
PartInitializer = getOption("PartInitializer", "")
PartFinalizer = getOption("PartFinalizer", "")
PartTeardown = getOption("PartTeardown", "")

Config.configured_tests = []
testdirs = getOption("TestDirs", "").split()

if testdirs:
    Config.configured_tests = findTests(testdirs, True)

if args:
    tests = findTests(args)
else:
    if Options.rerun:
        (success, tests) = readStateFile()
        if success:
            if not tests:
                output("no tests failed last time")
                sys.exit(0)
        else:
            warning("cannot read state file, executing all tests")
            tests = Config.configured_tests
    else:
        tests = Config.configured_tests

if Options.groups:
    groups = Options.groups.split(",")
    Options.groups = set([g for g in groups if not g.startswith("-")])
    Options.no_groups = set([g[1:] for g in groups if g.startswith("-")])

    def rightGroup(t):
        if not t:
            return True

        if t.groups & Options.groups:
            return True

        if "" in Options.no_groups:
            if not t.groups:
                return True

        elif Options.no_groups:
            if t.groups & Options.no_groups:
                return False
            return True

        return False

    tests = [t for t in tests if rightGroup(t)]

if not tests:
    output("no tests to execute")
    sys.exit(0)

tests = mergeTestParts(tests)

if Options.doc:
    outputDocumentation(tests, Options.doc)
    sys.exit(0)

for d in BaselineDirs:
    mkdir(d)

mkdir(TmpDir)

# Building our own path to avoid "error: AF_UNIX path too long" on
# some platforms. See BIT-862.
sname = "btest-socket-%d" % os.getpid()
addr = os.path.join(tempfile.gettempdir(), sname)

# Check if the pathname is too long to fit in struct sockaddr_un (the
# maximum length is system-dependent, so here we just use 100, which seems
# a safe default choice).
if len(addr) > 100:
    # Try relative path to TmpDir (which would usually be ".tmp").
    addr = os.path.join(os.path.relpath(TmpDir), sname)
    # If the path is still too long, then use the global tmp directory.
    if len(addr) > 100:
        addr = os.path.join("/tmp", sname)

mgr = TestManager(address=addr)

try:
    if Options.list:
        for test in sorted(tests):
            if test.name:
                print(test.name)
        sys.exit(0)
    else:
        (succeeded, failed, skipped, unstable,
         failed_expected) = mgr.run(copy.deepcopy(tests), output_handler)
        total = succeeded + failed + skipped
        output_handler.finished()

# Ctrl-C can lead to broken pipe (e.g. FreeBSD), so include IOError here:
except (Abort, KeyboardInterrupt, IOError) as exc:
    output_handler.finished()
    print(str(exc) or "Aborted with %s." % type(exc).__name__, file=sys.stderr)
    sys.stderr.flush()
    # Explicitly shut down sync manager to avoid leaking manager
    # processes, particularly with --abort-on-failure:
    mgr.shutdown()
    os._exit(1)

skip = (", %d skipped" % skipped) if skipped > 0 else ""
unstablestr = (", %d unstable" % unstable) if unstable > 0 else ""
failed_expectedstr = (" (with %d expected to fail)" %
                      failed_expected) if failed_expected > 0 else ""

if failed > 0:
    if not Options.quiet:
        output("%d of %d test%s failed%s%s%s" %
               (failed, total, "s" if total > 1 else "", failed_expectedstr, skip, unstablestr))

    if failed == failed_expected:
        sys.exit(0)
    else:
        sys.exit(1)

elif skipped > 0 or unstable > 0:
    if not Options.quiet:
        output("%d test%s successful%s%s" %
               (succeeded, "s" if succeeded != 1 else "", skip, unstablestr))
    sys.exit(0)

else:
    if not Options.quiet:
        output("all %d tests successful" % total)
    sys.exit(0)

==== File: btest-0.72/btest-ask-update ====

#! /usr/bin/env bash
#
# Helper script that asks whether the user wants to update a baseline.
#
# Return code:
#
#     0: Yes, update and continue.
#     1: No, don't update but continue.
#   200: No, don't update and abort.

while true; do
    printf "\033[0K" >>/dev/tty # Delete any augmented output.
    echo " failed" >>/dev/tty
    echo ">> Type 'c' to continue, 'd' to see diagnostics, 'u' to update baseline, and 'a' to abort." >/dev/tty

    read -r -s -n 1 key </dev/tty

    case "$key" in
        [uU])
            echo ">> Updating baseline ..." >/dev/tty
            exit 0
            ;;

        [cC])
            echo ">> Continuing ..." >/dev/tty
            exit 1
            ;;

        [aA])
            echo ">> Aborting ..." >/dev/tty
            exit 200
            ;;

        [dD])
            if [ "$TEST_DIAGNOSTICS" != "" ] && [ "$TEST_DIAGNOSTICS" != "/dev/stdout" ]; then
                less -S "$TEST_DIAGNOSTICS" </dev/tty >/dev/tty
            else
                echo "Do not have diagnostics." >/dev/tty
            fi
            ;;

        *)
            echo ">> Answer not recognized, try again ..." >/dev/tty
            ;;
    esac
done

==== File: btest-0.72/btest-bg-run ====

#! /usr/bin/env bash
#
# Usage: btest-bg-run <dir> <cmdline>
#
# Creates a new empty working directory <dir> within the current directory
# and spawns <cmdline> in there in the background. It also records
# a set of meta information that btest-bg-wait will read.

if [ "$#" -le 1 ]; then
    echo "usage: $(basename "$0") <dir> <cmdline>"
    exit 1
fi

cwd=$(pwd)
cd "$(dirname "$0")" || exit 1
helper=$(pwd)/btest-bg-run-helper
setsid=$(pwd)/btest-setsid
cd "$cwd" || exit 1

dir=$1
shift

if [ -e "$dir" ]; then
    echo "directory '$dir' already exists" >&2
    exit 1
fi

echo "$dir" >>.bgprocs

mkdir "$dir"
cd "$dir" || exit 1
echo "$@" >.cmdline
$setsid "$helper" "$@" >.stdout 2>.stderr &

sleep 1

==== File: btest-0.72/btest-bg-run-helper ====

#! /usr/bin/env bash
#
# Internal helper for btest-bg-run.

cleanup() {
    if [ ! -e .exitcode ]; then
        echo 15 >.exitcode
        kill 0 &>/dev/null
        if [ -n "$pid" ]; then
            kill -0 "$pid" &>/dev/null && kill "$pid"
            sleep 1
            kill -0 "$pid" &>/dev/null && kill -9 "$pid" && echo 9 >.exitcode
        fi
    fi
}

trap "cleanup" EXIT

eval "$* &"
pid=$!
echo $$ >.pid
wait $pid
echo $? >.exitcode
pid=""

==== File: btest-0.72/btest-bg-wait ====

#! /usr/bin/env bash
#
# Usage: btest-bg-wait [-k] <timeout>
#
# Waits until all of the background processes spawned by btest-bg-run
# have finished, or the given timeout (in seconds) has been exceeded.
#
# If the timeout triggers, all remaining processes are killed. If -k
# is not given, this is considered an error and the script aborts with
# error code 1. If -k is given, a timeout is not considered an error.
#
# Once all processes have finished (or were killed), the script
# merges their stdout and stderr. If one of them returned an error,
# this script does so as well.

if [ "$1" == "-k" ]; then
    timeout_ok=1
    shift
else
    timeout_ok=0
fi

if [ $# != 1 ]; then
    echo "usage: $(basename "$0") [-k] <timeout>"
    exit 1
fi

timeout=$1

procs=$(cat .bgprocs)

rm -f .timeout
touch .timeout

function check_procs {
    for p in $procs; do
        if [ ! -e "$p/.exitcode" ]; then
            return 1
        fi
    done

    # All done.
    return 0
}

function kill_procs {
    for p in $procs; do
        if [ ! -e "$p/.exitcode" ]; then
            kill -1 "$(cat "$p/.pid")" 2>/dev/null
            cat "$p"/.cmdline >>.timeout
            if [ "$1" == "timeout" ]; then
                touch "$p/.timeout"
            fi
        fi
    done

    sleep 1

    for p in $procs; do
        if [ ! -e "$p/.exitcode" ]; then
            kill -9 "$(cat "$p/.pid")" 2>/dev/null
            sleep 1
        fi
    done
}

function collect_output {
    rm -f .stdout .stderr

    if [ $timeout_ok != 1 ] && [ -s .timeout ]; then
        {
            echo "The following processes did not terminate:"
            echo
            cat .timeout
            echo
            echo "-----------"
        } >>.stderr
    fi

    for p in $procs; do
        pid=$(cat "$p/.pid")
        cmdline=$(cat "$p/.cmdline")

        {
            printf "<<< [%s] %s\\n" "$pid" "$cmdline"
            cat "$p"/.stdout
            echo ">>>"
        } >>.stdout

        {
            printf "<<< [%s] %s\\n" "$pid" "$cmdline"
            cat "$p"/.stderr
            echo ">>>"
        } >>.stderr
    done
}

trap kill_procs EXIT

while true; do
    if check_procs; then
        # All done.
        break
    fi

    timeout=$((timeout - 1))

    if [ $timeout -le 0 ]; then
        # Timeout exceeded.
        kill_procs timeout

        if [ $timeout_ok == 1 ]; then
            # Just continue.
            break
        fi

        # Exit with error.
        collect_output
        exit 1
    fi

    sleep 1
done

trap - EXIT

# All terminated either by themselves, or with a benign timeout.
collect_output

# See if any returned an error.
result=0

for p in $procs; do
    if [ -e "$p/.timeout" ]; then
        # We're here because timeouts are ok, so don't mind the exit code
        # if we initiated killing the process due to timeout.
        continue
    fi

    rc=$(cat "$p/.exitcode")
    pid=$(cat "$p/.pid")
    cmdline=$(cat "$p/.cmdline")

    if [ "$rc" != 0 ]; then
        echo ">>> process $pid failed with exitcode $rc: $cmdline" >>.stderr
        result=1
    fi
done

exit $result

==== File: btest-0.72/btest-diff ====

#! /usr/bin/env bash
#
# Usage: btest-diff [options] <file>
#
# These environment variables are set by btest:
#   TEST_MODE={TEST|UPDATE|UPDATE_INTERACTIVE}
#   TEST_BASELINE
#   TEST_DIAGNOSTICS
#   TEST_NAME
#
# A test can optionally set these environment variables:
#   TEST_DIFF_CANONIFIER
#   TEST_DIFF_BRIEF
#   TEST_DIFF_FILE_MAX_LINES
#
# This script has the following exit codes:
#
# When TEST_MODE is TEST:
#     0 - Comparison succeeded, files are the same
#     1 - Problems with input file/args or running TEST_DIFF_CANONIFIER, or file contents differ
#     2 - Other diffing trouble (inherited from diff)
#   100 - No baseline to compare to available
#
# When TEST_MODE is UPDATE:
#     0 - Baseline updated
#     1 - Problems with input file/args or running TEST_DIFF_CANONIFIER
#
# When TEST_MODE is UPDATE_INTERACTIVE:
#     0 - Baseline updated, or nothing to update
#     1 - Problems with input file/args or running TEST_DIFF_CANONIFIER, or user skips a deviating baseline
#   200 - User asks to abort after a deviating baseline
#
# Otherwise: exits with 1

# It's okay to check $? explicitly:
# shellcheck disable=SC2181

# Maximum number of lines to show from mismatching input file by default.
MAX_LINES=100

# Header line we tuck onto new baselines generated by
# btest-diff. Serves both as a warning and as an indicator that the
# baseline has been run through the TEST_DIFF_CANONIFIER (if any).
HEADER="### BTest baseline data generated by btest-diff. Do not edit. Use \"btest -U/-u\" to update. Requires BTest >= 0.63."

# btest-diff supports a binary mode to simplify the handling of files
# that are better treated as binary blobs rather than text files. In
# binary mode, we treat the input file as-is, meaning:
#
# - only check whether input and baseline are identical
# - don't prepend our btest header line when updating baseline
# - don't canonify when updating baseline
#
BINARY_MODE=

is_binary_mode() {
    test -n "$BINARY_MODE"
}

# Predicate, succeeds if the given baseline is canonicalized.
is_canon_baseline() {
    local input="$1"

    # The baseline is canonicalized when we find our header in it. To
    # allow for some wiggle room in updating the wording in the header
    # in the future, we don't fix the exact string, and end after the
    # "Do not edit." sentence.
    local header
    header=$(echo "$HEADER" | sed -E 's/Do not edit\..*/Do not edit./')

    if head -n 1 "$input" | grep -q -F "$header" 2>/dev/null; then
        return 0
    fi

    return 1
}

# Prints the requested baseline to standard out if it is canonicalized
# or we're using binary mode. Otherwise fails and prints nothing.
get_baseline() {
    local input="$1"

    if is_binary_mode; then
        cat "$input"
        return 0
    fi

    ! is_canon_baseline "$input" && return 1

    tail -n +2 "$input"
}

# Updates the given baseline to the given filename inside the *first*
# baseline directory. Prepends our header if we're not in binary mode.
update_baseline() {
    local input="$1"
    local output="${baseline_dirs[0]}/$2"

    if ! is_binary_mode; then
        echo "$HEADER" >"$output"
        cat "$input" >>"$output"
    else
        cat "$input" >"$output"
    fi
}

# ---- Main program ----------------------------------------------------

while [ "$1" != "" ]; do
    case "$1" in
        "--binary")
            BINARY_MODE=1
            shift
            ;;
        *)
            break
            ;;
    esac
done

if [ -n "$TEST_DIFF_FILE_MAX_LINES" ]; then
    MAX_LINES=$TEST_DIFF_FILE_MAX_LINES
fi

if [ "$TEST_DIAGNOSTICS" = "" ]; then
    TEST_DIAGNOSTICS=/dev/stdout
fi

if [ "$#" -lt 1 ]; then
    echo "btest-diff: wrong number of arguments" >$TEST_DIAGNOSTICS
    exit 1
fi

# Split string with baseline directories into array.
IFS=':' read -ra baseline_dirs <<<"$TEST_BASELINE"

input="$1"
# shellcheck disable=SC2001
canon=$(echo "$input" | sed 's#/#.#g')
shift

if [ ! -f "$input" ]; then
    echo "btest-diff: input $input does not exist." >$TEST_DIAGNOSTICS
    exit 1
fi

tmpfiles=""

delete_tmps() {
    # shellcheck disable=SC2086
    rm -f $tmpfiles 2>/dev/null
}

trap delete_tmps 0

# First available baseline across directories.
baseline=""
for dir in "${baseline_dirs[@]}"; do
    test -f "$dir/$canon" && baseline="$dir/$canon" && break
done

result=2
rm -f $TEST_DIAGNOSTICS 2>/dev/null

echo "== File ===============================" >>$TEST_DIAGNOSTICS

if [ -z "$baseline" ]; then
    cat "$input" >>$TEST_DIAGNOSTICS
elif [ -n "$TEST_DIFF_BRIEF" ]; then
    echo "" >>$TEST_DIAGNOSTICS
else
    if [ "$(wc -l "$input" | awk '{print $1}')" -le "$MAX_LINES" ]; then
        cat "$input" >>$TEST_DIAGNOSTICS
    else
        head -n "$MAX_LINES" "$input" >>$TEST_DIAGNOSTICS
        echo "[... File too long, truncated ...]" >>$TEST_DIAGNOSTICS
    fi
fi

# If no canonifier is defined, just copy. Simplifies code layout.
# In binary mode, always just copy.
if [ -z "$TEST_DIFF_CANONIFIER" ] || is_binary_mode; then
    TEST_DIFF_CANONIFIER="cat"
fi

canon_output=/tmp/test-diff.$$.$canon.tmp
tmpfiles="$tmpfiles $canon_output"
error=0

# Canonicalize the new test output.
# shellcheck disable=SC2094
eval "$TEST_DIFF_CANONIFIER" "$input" <"$input" >"$canon_output"

if [ $? -ne 0 ]; then
    echo "== Error ==============================" >>$TEST_DIAGNOSTICS
    echo "btest-diff: TEST_DIFF_CANONIFIER failed on file '$input'" >>$TEST_DIAGNOSTICS
    error=1
    result=1
fi

if [ -n "$baseline" ]; then
    canon_baseline=/tmp/test-diff.$$.$canon.baseline.tmp
    tmpfiles="$tmpfiles $canon_baseline"

    # Prepare the baseline. When created by a recent btest-diff, we
    # don't need to re-canonicalize, otherwise we do.
    if ! get_baseline "$baseline" >"$canon_baseline"; then
        # It's an older uncanonicalized baseline, so canonicalize
        # it now prior to comparison. Future updates via btest
        # -U/-u will then store it canonicalized.
        # shellcheck disable=SC2094
        eval "$TEST_DIFF_CANONIFIER" "$baseline" <"$baseline" >"$canon_baseline"
        if [ $? -ne 0 ]; then
            echo "== Error ==============================" >>$TEST_DIAGNOSTICS
            echo "btest-diff: TEST_DIFF_CANONIFIER failed on file '$baseline'" >>$TEST_DIAGNOSTICS
            error=1
            result=1
        fi
    fi

    if [ $error -eq 0 ]; then
        echo "== Diff ===============================" >>$TEST_DIAGNOSTICS
        if is_binary_mode; then
            diff -s "$@" "$canon_baseline" "$canon_output" >>$TEST_DIAGNOSTICS
        else
            diff -au "$@" "$canon_baseline" "$canon_output" >>$TEST_DIAGNOSTICS
        fi
        result=$?
    fi

elif [ "$TEST_MODE" = "TEST" ]; then
    echo "== Error ==============================" >>$TEST_DIAGNOSTICS
    echo "test-diff: no baseline found." >>$TEST_DIAGNOSTICS
    result=100
fi

echo "=======================================" >>$TEST_DIAGNOSTICS

if [ "$TEST_MODE" = "TEST" ]; then
    exit $result

elif [ "$TEST_MODE" = "UPDATE_INTERACTIVE" ]; then
    # We had a problem running the canonifier.
    if [ "$error" != 0 ]; then
        exit 1
    fi

    # There's no change to the baseline, so skip user interaction.
    if [ "$result" = 0 ]; then
        exit 0
    fi

    btest-ask-update
    rc=$?
    echo -n "$TEST_NAME ..." >/dev/tty

    if [ $rc = 0 ]; then
        update_baseline "$canon_output" "$canon"
        exit 0
    fi

    exit $rc

elif [ "$TEST_MODE" = "UPDATE" ]; then
    # We had a problem running the canonifier.
    if [ "$error" != 0 ]; then
        exit 1
    fi

    update_baseline "$canon_output" "$canon"
    exit 0
fi

echo "test-diff: unknown test mode $TEST_MODE" >$TEST_DIAGNOSTICS
exit 1

==== File: btest-0.72/btest-progress ====

#! /usr/bin/env bash

function usage {
    cat <<EOF >&2
usage: $(basename "$0") [-q] [-T] <message>

    -q: Do not print message to standard output or standard error.
    -T: Do not include timestamp on standard error message.
EOF
    exit 1
}

### Main.

quiet=0
time=1

while getopts ":qT" opt; do
    case $opt in
        q)
            quiet=1
            shift
            ;;
        T)
            time=0
            shift
            ;;
        *)
            usage
            ;;
    esac
done

test $# != 0 || usage

msg="[btest] -- $*"

if [ "${quiet}" -eq 0 ]; then
    echo "${msg}"

    if [ "${time}" -eq 0 ]; then
        echo "${msg}" >&2
    else
        echo "${msg} -- $(date -u +'%Y-%m-%dT%H:%M:%S.%3NZ') " >&2
    fi
fi

TIME=$(python3 -c 'import time; print(int(time.time() * 1e9))')
file=$(mktemp ".progress.${TIME}.XXXXXX") || exit 1
echo "$@" >>"${file}"

==== File: btest-0.72/btest-setsid ====

#! /usr/bin/env python3

import os
import sys

try:
    os.setsid()
except:
    pass

prog = sys.argv[1]
args = sys.argv[1:]

os.execvp(prog, args)

==== File: btest-0.72/btest.cfg.example ====

[btest]
TestDirs = examples
TmpDir = %(testbase)s/.tmp
BaselineDir = %(testbase)s/Baseline
IgnoreDirs = .svn CVS .tmp
IgnoreFiles = *.tmp *.swp #*
MinVersion = 0.63

[environment]
CFLAGS=-O3
PATH=%(testbase)s/bin:%(default_path)s

[filter-myalternative]
cat=%(testbase)s/examples/my-filter

[substitution-myalternative]
original=filtered

[environment-myalternative]
MYALT=1

==== File: btest-0.72/examples/alternative ====

# @TEST-EXEC: cat %INPUT >output
# @TEST-EXEC: cat output | grep original
# @TEST-EXEC: set | grep MY >envs

original input

==== File: btest-0.72/examples/my-filter ====

# @TEST-IGNORE

cat $1 | sed 's/original/filtered/g' >$2

==== File: btest-0.72/examples/sphinx/.gitignore ====

_*
.btest.failed.dat

==== File: btest-0.72/examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#1 ====

.. code-block:: none

    # echo Hello, world!
    Hello, world!

==== File: btest-0.72/examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#2 ====

.. code-block:: none

    # echo Hello, world! Again.
    Hello, world! Again.

==== File: btest-0.72/examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#3 ====

.. code-block:: none

    # echo Hello, world! Again. Again.
    Hello, world! Again. Again.

==== File: btest-0.72/examples/sphinx/Makefile ====

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

all: html

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/* .tmp

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/BTest-SphinxDemo.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/BTest-SphinxDemo.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/BTest-SphinxDemo"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/BTest-SphinxDemo"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

==== File: btest-0.72/examples/sphinx/btest.cfg ====

[btest]
TestDirs = tests
TmpDir = %(testbase)s/.tmp
BaselineDir = %(testbase)s/Baseline
Finalizer = btest-diff-rst

[environment]
PATH=%(testbase)s/../../:%(testbase)s/../../sphinx:%(default_path)s

==== File: btest-0.72/examples/sphinx/conf.py ====

# -*- coding: utf-8 -*-
#
# BTest-Sphinx Demo documentation build configuration file, created by
# sphinx-quickstart on Wed May 8 15:22:37 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
# # All configuration values have a default; values that are commented out # serve to show the default. import sys, os sys.path.append("../../sphinx") sys.path.append("../../../sphinx") # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [] ### BTest extensions += ["btest-sphinx"] btest_base = "." # Relative to Sphinx-root. btest_tests = "tests/sphinx" # Relative to btest_base. ### # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'BTest-Sphinx Demo' copyright = u'2013, Foo Bar' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '1.0' # The full version, including alpha/beta/rc tags. release = '1.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. 
#today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'BTest-SphinxDemodoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'BTest-SphinxDemo.tex', u'BTest-Sphinx Demo Documentation', u'Foo Bar', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [('index', 'btest-sphinxdemo', u'BTest-Sphinx Demo Documentation', [u'Foo Bar'], 1)] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'BTest-SphinxDemo', u'BTest-Sphinx Demo Documentation', u'Foo Bar', 'BTest-SphinxDemo', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/examples/sphinx/index.rst0000644000076500000240000000244414165076445016307 0ustar00timstaff.. 
BTest-Sphinx Demo documentation master file, created by sphinx-quickstart on Wed May 8 15:22:37 2013. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to BTest-Sphinx Demo's documentation! ============================================= Contents: .. toctree:: :maxdepth: 2 Testing ======= .. btest:: hello-world @TEST-EXEC: btest-rst-cmd echo "Hello, world!" .. btest:: hello-world @TEST-EXEC: btest-rst-cmd echo "Hello, world! Again." .. btest:: hello-world @TEST-EXEC: btest-rst-cmd echo "Hello, world! Again. Again." .. btest:: hello-world-fail @TEST-EXEC: btest-rst-cmd echo "This will fail soon!" This should fail and include the diag output instead: .. btest:: hello-world-fail @TEST-EXEC: echo StDeRr >&2; echo 1 | grep -q 2 This should succeed: .. btest:: hello-world-fail @TEST-EXEC: btest-rst-cmd echo "This succeeds again!" This should fail again and include the diag output instead: .. btest:: hello-world-fail @TEST-EXEC: echo StDeRr >&2; echo 3 | grep -q 4 .. btest:: hello-world-fail @TEST-EXEC: btest-rst-cmd echo "This succeeds again!" .. btest-include:: btest.cfg Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.601463 btest-0.72/examples/sphinx/tests/0000755000076500000240000000000014246443553015602 5ustar00timstaff././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6532485 btest-0.72/examples/sphinx/tests/sphinx/0000755000076500000240000000000014246443553017113 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/sphinx/tests/sphinx/hello-world.btest0000644000076500000240000000005714072112013022365 0ustar00timstaff@TEST-EXEC: btest-rst-cmd echo "Hello, world!" 
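The `hello-world.btest` file above consists solely of `@TEST-EXEC:` lines, which illustrates btest's core contract: each `@TEST-EXEC` command is run in order, and the test passes only if every command exits 0. The sketch below emulates that contract in plain shell; it is a simplified illustration, not btest's actual implementation, and the `run_test` helper and `t.btest` file name are made up for this example.

```shell
# Hypothetical sketch of btest's pass/fail rule: every @TEST-EXEC
# command in a test file must exit 0, or the whole test fails.
run_test() {
  while IFS= read -r cmd; do
    case "$cmd" in
      "@TEST-EXEC: "*)
        # Strip the marker and run the remainder of the line.
        eval "${cmd#@TEST-EXEC: }" || { echo "fail: $cmd"; return 1; }
        ;;
    esac
  done < "$1"
  echo "ok"
}

# A minimal test file in the same style as examples/t1 above.
cat > t.btest <<'EOF'
@TEST-EXEC: echo "Foo" | grep -q Foo
@TEST-EXEC: test -d .
EOF

run_test t.btest
```

Run against the two-command file above, both commands exit 0, so the emulation reports success — mirroring how `examples/t1` passes while `examples/t2` (with its `test -d DOESNOTEXIST` line) fails.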
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/sphinx/tests/sphinx/hello-world.btest#20000644000076500000240000000006614072112013022512 0ustar00timstaff@TEST-EXEC: btest-rst-cmd echo "Hello, world! Again." ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/sphinx/tests/sphinx/hello-world.btest#30000644000076500000240000000007514072112013022513 0ustar00timstaff@TEST-EXEC: btest-rst-cmd echo "Hello, world! Again. Again." ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t10000644000076500000240000000007314072112013013353 0ustar00timstaff@TEST-EXEC: echo "Foo" | grep -q Foo @TEST-EXEC: test -d . ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t20000644000076500000240000000010614072112013013351 0ustar00timstaff@TEST-EXEC: echo "Foo" | grep -q Foo @TEST-EXEC: test -d DOESNOTEXIST ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t3.sh0000644000076500000240000000006214072112013013764 0ustar00timstaff# @TEST-EXEC: sh %INPUT ls /etc | grep -q passwd ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t4.awk0000644000076500000240000000013514072112013014136 0ustar00timstaff# @TEST-EXEC: ls -a ~ | awk -f %INPUT >dots # @TEST-EXEC: btest-diff dots /\.*/ { print $1 } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/examples/t5.sh0000644000076500000240000000023614165076445014016 0ustar00timstaff# @TEST-EXEC: cat %INPUT | wc -c >output # @TEST-EXEC: btest-diff output This is the first test input in this file. # @TEST-START-NEXT ... and the second. 
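The `t5.sh` example above uses `@TEST-START-NEXT`, which lets one file carry several test inputs: btest splits the file at each marker and reuses the leading command block for every part (hence the separate `examples.t5` and `examples.t5-2` baselines). The awk one-liner below only emulates the split step, under the assumption that the marker always sits on its own line; the `multi.sh` and `part-N` names are illustrative, and real btest additionally copies the `@TEST-EXEC` block into each part.

```shell
# Hypothetical emulation of the @TEST-START-NEXT split: write each
# segment of the file to a numbered part file (part-0, part-1, ...).
cat > multi.sh <<'EOF'
# @TEST-EXEC: cat %INPUT | wc -c >output
This is the first test input in this file.
# @TEST-START-NEXT
... and the second.
EOF

awk '/@TEST-START-NEXT/ { n++; next } { print > ("part-" n) }' n=0 multi.sh

wc -l part-0 part-1
```

After the split, `part-0` holds the command line plus the first input and `part-1` holds the second input, matching the two tests btest derives from the single source file.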
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t6.sh0000644000076500000240000000024614072112013013773 0ustar00timstaff# @TEST-EXEC: awk -f %INPUT output # @TEST-EXEC: btest-diff output { lines += 1; } END { print lines; } @TEST-START-FILE foo.dat 1 2 3 @TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t70000644000076500000240000000013414072112013013357 0ustar00timstaff@TEST-EXEC: echo "Foo" | grep -q Foo @TEST-EXEC: awk 'BEGIN{for(i=0; i < 50000000; i++){}}' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t7.sh#10000644000076500000240000000005414072112013014115 0ustar00timstaff# @TEST-EXEC: echo Part 1 - %INPUT >>output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t7.sh#20000644000076500000240000000005414072112013014116 0ustar00timstaff# @TEST-EXEC: echo Part 2 - %INPUT >>output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/examples/t7.sh#30000644000076500000240000000004014072112013014112 0ustar00timstaff# @TEST-EXEC: btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/examples/unstable.sh0000644000076500000240000000011614165076445015300 0ustar00timstaff# @TEST-EXEC: echo $(($RANDOM/2**14)) >output # @TEST-EXEC: btest-diff output ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.8039043 btest-0.72/setup.cfg0000644000076500000240000000004614246443553013132 0ustar00timstaff[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1654277936.0 btest-0.72/setup.py0000644000076500000240000000240514246443460013021 0ustar00timstaff#! 
/usr/bin/env python from distutils.core import setup, Extension # When making changes to the following list, remember to keep # CMakeLists.txt in sync. scripts = [ "btest", "btest-ask-update", "btest-bg-run", "btest-bg-run-helper", "btest-bg-wait", "btest-diff", "btest-setsid", "btest-progress", "sphinx/btest-diff-rst", "sphinx/btest-rst-cmd", "sphinx/btest-rst-include", "sphinx/btest-rst-pipe", ] py_modules = ["btest-sphinx"] setup( name='btest', version="0.72", # Filled in automatically. description='A powerful system testing framework', long_description='See https://github.com/zeek/btest', author='Robin Sommer', author_email='robin@icir.org', url='https://github.com/zeek/btest', scripts=scripts, package_dir={"": "sphinx"}, py_modules=py_modules, license='3-clause BSD License', keywords='system tests testing framework baselines', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Environment :: Console', 'License :: OSI Approved :: BSD License', 'Operating System :: POSIX :: Linux', 'Operating System :: MacOS :: MacOS X', 'Programming Language :: Python :: 3', 'Topic :: Utilities', ], ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6573663 btest-0.72/sphinx/0000755000076500000240000000000014246443553012622 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/sphinx/btest-diff-rst0000755000076500000240000000065214165076445015412 0ustar00timstaff#! /usr/bin/env bash # # A finalizer to diff the output generated by one of the other btest-rst-* # commands. This does nothing if we're running from inside Sphinx. # btest-rst-cmd generates these files if run from inside btest (but not Sphinx). 
if [ "$BTEST_RST_OUTPUT" != "" ]; then exit 0 fi while IFS= read -r -d '' output; do echo "$(pwd)/$output" btest-diff "$output" done < <(find btest-"${TEST_NAME}"*) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/sphinx/btest-rst-cmd0000755000076500000240000000671614165076445015254 0ustar00timstaff#! /usr/bin/env bash # # Executes a command and formats the command and its stdout in reST. # trap cleanup INT TERM EXIT function usage() { echo echo "$(basename "$0") [options] " echo echo " -d Do not actually execute command; just format the command line." echo " -h Show this help." echo " -r Insert into output, rather than stdout." echo " -o Do not include command into output." echo " -c Show in output instead of the one actually executed." echo " -f Run command on command output (or file) before including." echo " -n Include only n lines of output, adding a [...] marker if there's more." echo exit 1 } function apply_filter() { eval "$filter_env" | eval "$filter_opt" } # Strip leading white-space and then indent to 6 space. 
function indent() { python3 -c " from __future__ import print_function import sys input = sys.stdin.readlines() n = 1e10 for i in input: n = min(n, len(i) - len(i.lstrip())) for i in input: print(' ' + i[n:], end='') " } function cleanup() { # shellcheck disable=SC2086 rm -f $tmps exit } stdout=$(mktemp -t "$(basename "$0")".XXXXXX) cmd_out=$(mktemp -t "$(basename "$0")".XXXXXX) filter_out=$(mktemp -t "$(basename "$0")".XXXXXX) tmps="$stdout $cmd_out $filter_out" include=$cmd_out show_command=1 cmd_display="" dry=0 lines=0 filter_env=${BTEST_RST_FILTER} while getopts "odhr:f:c:n:" opt; do case $opt in h) usage ;; o) show_command=0 ;; r) include=$OPTARG ;; d) dry=1 include="" ;; c) cmd_display=$OPTARG ;; f) filter_opt=$OPTARG ;; n) lines=$OPTARG ;; *) exit 1 ;; esac done shift $((OPTIND - 1)) cmd=$* test "$cmd_display" == "" && cmd_display=$cmd test "$filter_opt" == "" && filter_opt="cat" test "$filter_env" == "" && filter_env="cat" test "$cmd" == "" && usage if [ "$dry" != "1" ]; then if ! eval "$cmd" >"$cmd_out"; then exit 1 fi fi # Generate reST output. if [ "$show_command" == "1" ]; then { echo ".. rst-class:: btest-cmd" echo echo " .. code-block:: none" echo " :linenos:" echo " :emphasize-lines: 1,1" echo echo " # $cmd_display" | apply_filter } >>"$stdout" else { echo ".. rst-class:: btest-include" echo echo " .. code-block:: guess" echo " :linenos:" echo } >>"$stdout" fi for i in $include; do echo " $(basename "$i")" >>"$filter_out" echo "" >>"$filter_out" cat "$i" | apply_filter | indent >"$filter_out" if [ "$lines" = 0 ]; then cat "$filter_out" >>"$stdout" else cat "$filter_out" | head -n "$lines" >>"$stdout" if [ "$(wc -l <"$filter_out")" -gt "$lines" ]; then echo ' [...]' >>"$stdout" fi fi rm -f "$filter_out" done echo >>"$stdout" # Branch depending on where this script was started from. if [ "$BTEST_RST_OUTPUT" != "" ]; then # Running from inside Sphinx, just output to where it tells us. 
cat "$stdout" >>"${BTEST_RST_OUTPUT}#${TEST_PART}" elif [ "$TEST_NAME" ]; then # Running from inside BTest, output into file that btest-diff-rst will pickup. cat "$stdout" >>"btest-${TEST_NAME}#${TEST_PART}" else # Running from command line, just print out. cat "$stdout" fi ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/sphinx/btest-rst-include0000755000076500000240000000063214165076445016123 0ustar00timstaff#! /usr/bin/env bash base=$(dirname "$0") function usage() { echo "usage: $(basename "$0") [-n ] " exit 1 } lines="" while getopts "n:" opt; do case $opt in n) lines=$OPTARG ;; *) usage ;; esac done shift $((OPTIND - 1)) if [ "$1" = "" ]; then usage fi if [ "$lines" != "" ]; then lines="-n $lines" fi "$base/btest-rst-cmd" "$lines" -o cat "$1" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/sphinx/btest-rst-pipe0000755000076500000240000000023414165076445015433 0ustar00timstaff#! 
/usr/bin/env bash base=$(dirname "$0") if [ "$#" = 0 ]; then echo "usage: $(basename "$0") " exit 1 fi "$base/btest-rst-cmd" -o "$@" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/sphinx/btest-sphinx.py0000644000076500000240000001713714165076445015637 0ustar00timstaffimport os import os.path import tempfile import subprocess import re from docutils import nodes, statemachine, utils from docutils.parsers.rst import directives, Directive, DirectiveError, Parser from docutils.transforms import TransformError, Transform from sphinx.util.console import bold, purple, darkgreen, red, term_width_line from sphinx.errors import SphinxError from sphinx.directives.code import LiteralInclude from sphinx.util import logging logger = logging.getLogger(__name__) Initialized = False App = None Reporter = None BTestBase = None BTestTests = None BTestTmp = None Tests = {} Includes = set() # Maps file name extensiosn to Pygments formatter. ExtMappings = {"bro": "bro", "rst": "rest", "c": "c", "cc": "cc", "py": "python"} def init(settings, reporter): global Initialized, App, Reporter, BTestBase, BTestTests, BTestTmp Initialized = True Reporter = reporter BTestBase = settings.env.config.btest_base BTestTests = settings.env.config.btest_tests BTestTmp = settings.env.config.btest_tmp if not BTestBase: raise SphinxError("error: btest_base not set in config") if not BTestTests: raise SphinxError("error: btest_tests not set in config") if not os.path.exists(BTestBase): raise SphinxError("error: btest_base directory '%s' does not exists" % BTestBase) joined = os.path.join(BTestBase, BTestTests) if not os.path.exists(joined): raise SphinxError("error: btest_tests directory '%s' does not exists" % joined) if not BTestTmp: BTestTmp = os.path.join(App.outdir, ".tmp/rst_output") BTestTmp = os.path.abspath(BTestTmp) if not os.path.exists(BTestTmp): os.makedirs(BTestTmp) def parsePartial(rawtext, settings): parser = Parser() document = 
utils.new_document("") document.settings = settings parser.parse(rawtext, document) return document.children class Test(object): def __init__(self): self.has_run = False def run(self): if self.has_run: return logger.info("running test %s ..." % darkgreen(self.path)) self.rst_output = os.path.join(BTestTmp, "%s" % self.tag) os.environ["BTEST_RST_OUTPUT"] = self.rst_output self.cleanTmps() try: subprocess.check_call("btest -S %s" % self.path, shell=True) except (OSError, IOError, subprocess.CalledProcessError) as e: # Equivalent to Directive.error(); we don't have an # directive object here and can't pass it in because # it doesn't pickle. logger.warning(red("BTest error: %s" % e)) def cleanTmps(self): subprocess.call("rm %s#* 2>/dev/null" % self.rst_output, shell=True) class BTestTransform(Transform): default_priority = 800 def apply(self): pending = self.startnode (test, part) = pending.details os.chdir(BTestBase) if not test.tag in BTestTransform._run: test.run() BTestTransform._run.add(test.tag) try: rawtext = open("%s#%d" % (test.rst_output, part)).read() except IOError as e: rawtext = "" settings = self.document.settings content = parsePartial(rawtext, settings) pending.replace_self(content) _run = set() class BTest(Directive): required_arguments = 1 final_argument_whitespace = True has_content = True def error(self, msg): self.state.document.settings.env.note_reread() msg = red(msg) msg = self.state.document.reporter.error(str(msg), line=self.lineno) return [msg] def message(self, msg): Reporter.info(msg) def run(self): if not Initialized: # FIXME: Better way to handle one-time initialization? 
init(self.state.document.settings, self.state.document.reporter) os.chdir(BTestBase) self.assert_has_content() document = self.state_machine.document tag = self.arguments[0] if not tag in Tests: import sys test = Test() test.tag = tag test.path = os.path.join(BTestTests, tag + ".btest") test.parts = 0 Tests[tag] = test test = Tests[tag] test.parts += 1 part = test.parts # Save the test. if part == 1: file = test.path else: file = test.path + "#%d" % part out = open(file, "w") for line in self.content: out.write("%s\n" % line) out.close() details = (test, part) pending = nodes.pending(BTestTransform, details, rawsource=self.block_text) document.note_pending(pending) return [pending] class BTestInclude(LiteralInclude): def __init__(self, *args, **kwargs): super(BTestInclude, self).__init__(*args, **kwargs) def error(self, msg): self.state.document.settings.env.note_reread() msg = red(msg) msg = self.state.document.reporter.error(str(msg), line=self.lineno) return [msg] def message(self, msg): Reporter.info(msg) def run(self): if not Initialized: # FIXME: Better way to handle one-time initialization? init(self.state.document.settings, self.state.document.reporter) document = self.state.document if not document.settings.file_insertion_enabled: return [document.reporter.warning('File insertion disabled', line=self.lineno)] env = document.settings.env expanded_arg = os.path.expandvars(self.arguments[0]) sphinx_src_relation = os.path.relpath(expanded_arg, env.srcdir) self.arguments[0] = os.path.join(os.sep, sphinx_src_relation) (root, ext) = os.path.splitext(self.arguments[0]) if ext.startswith("."): ext = ext[1:] if ext in ExtMappings: self.options["language"] = ExtMappings[ext] else: # Note that we always need to set a language, otherwise the lineos/emphasis don't seem to work. 
self.options["language"] = "none" self.options["linenos"] = True self.options["prepend"] = "%s\n" % os.path.basename(self.arguments[0]) self.options["emphasize-lines"] = "1,1" self.options["style"] = "X" retnode = super(BTestInclude, self).run() os.chdir(BTestBase) tag = os.path.normpath(self.arguments[0]) tag = os.path.relpath(tag, BTestBase) tag = re.sub("[^a-zA-Z0-9-]", "_", tag) tag = re.sub("__+", "_", tag) if tag.startswith("_"): tag = tag[1:] test_path = ("include-" + tag + ".btest") if BTestTests: test_path = os.path.join(BTestTests, test_path) test_path = os.path.abspath(test_path) i = 1 (base, ext) = os.path.splitext(test_path) while test_path in Includes: i += 1 test_path = "%s@%d" % (base, i) if ext: test_path += ext Includes.add(test_path) out = open(test_path, "w") out.write("# @TEST-EXEC: cat %INPUT >output && btest-diff output\n\n") for i in retnode: out.write(i.rawsource) out.close() for node in retnode: node["classes"] += ["btest-include"] return retnode directives.register_directive('btest', BTest) directives.register_directive('btest-include', BTestInclude) def setup(app): global App App = app app.add_config_value('btest_base', None, 'env') app.add_config_value('btest_tests', None, 'env') app.add_config_value('btest_tmp', None, 'env') ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.660741 btest-0.72/sphinx/btest.egg-info/0000755000076500000240000000000014246443553015435 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1654277994.0 btest-0.72/sphinx/btest.egg-info/PKG-INFO0000644000076500000240000000122214246443552016526 0ustar00timstaffMetadata-Version: 2.1 Name: btest Version: 0.72 Summary: A powerful system testing framework Home-page: https://github.com/zeek/btest Author: Robin Sommer Author-email: robin@icir.org License: 3-clause BSD License Keywords: system tests testing framework baselines Platform: UNKNOWN Classifier: Development Status :: 5 - 
Production/Stable Classifier: Environment :: Console Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: POSIX :: Linux Classifier: Operating System :: MacOS :: MacOS X Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Utilities License-File: COPYING See https://github.com/zeek/btest ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1654277994.0 btest-0.72/sphinx/btest.egg-info/SOURCES.txt0000644000076500000240000002000614246443552017316 0ustar00timstaffCHANGES COPYING MANIFEST MANIFEST.in Makefile README VERSION btest btest-ask-update btest-bg-run btest-bg-run-helper btest-bg-wait btest-diff btest-progress btest-setsid btest.cfg.example setup.py Baseline/examples.t4/dots Baseline/examples.t5/output Baseline/examples.t5-2/output Baseline/examples.t6/output Baseline/examples.t7/output Baseline/examples.unstable/output examples/alternative examples/my-filter examples/t1 examples/t2 examples/t3.sh examples/t4.awk examples/t5.sh examples/t6.sh examples/t7 examples/t7.sh#1 examples/t7.sh#2 examples/t7.sh#3 examples/unstable.sh examples/sphinx/.gitignore examples/sphinx/Makefile examples/sphinx/btest.cfg examples/sphinx/conf.py examples/sphinx/index.rst examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#1 examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#2 examples/sphinx/Baseline/tests.sphinx.hello-world/btest-tests.sphinx.hello-world#3 examples/sphinx/tests/sphinx/hello-world.btest examples/sphinx/tests/sphinx/hello-world.btest#2 examples/sphinx/tests/sphinx/hello-world.btest#3 sphinx/btest-diff-rst sphinx/btest-rst-cmd sphinx/btest-rst-include sphinx/btest-rst-pipe sphinx/btest-sphinx.py sphinx/btest.egg-info/PKG-INFO sphinx/btest.egg-info/SOURCES.txt sphinx/btest.egg-info/dependency_links.txt sphinx/btest.egg-info/top_level.txt testing/.gitignore testing/Makefile testing/btest.cfg testing/btest.tests.cfg 
testing/Baseline/tests.abort-on-failure/output testing/Baseline/tests.abort-on-failure-with-only-known-fails/output testing/Baseline/tests.alternatives-environment/child-output testing/Baseline/tests.alternatives-environment/output testing/Baseline/tests.alternatives-filter/child-output testing/Baseline/tests.alternatives-filter/output testing/Baseline/tests.alternatives-keywords/output testing/Baseline/tests.alternatives-substitution/child-output testing/Baseline/tests.alternatives-substitution/output testing/Baseline/tests.alternatives-testbase/output testing/Baseline/tests.brief/out1 testing/Baseline/tests.brief/out2 testing/Baseline/tests.btest-cfg/abspath testing/Baseline/tests.btest-cfg/nopath testing/Baseline/tests.btest-cfg/relpath testing/Baseline/tests.console/output testing/Baseline/tests.crlf-line-terminators/crlfs.dat testing/Baseline/tests.crlf-line-terminators/input testing/Baseline/tests.diag/output testing/Baseline/tests.diag-all/output testing/Baseline/tests.diag-file/diag testing/Baseline/tests.diag-file/output testing/Baseline/tests.diff-brief/output testing/Baseline/tests.diff-max-lines/output1 testing/Baseline/tests.diff-max-lines/output2 testing/Baseline/tests.doc/md testing/Baseline/tests.doc/rst testing/Baseline/tests.environment/output testing/Baseline/tests.exit-codes/out1 testing/Baseline/tests.exit-codes/out2 testing/Baseline/tests.groups/output testing/Baseline/tests.ignore/output testing/Baseline/tests.known-failure/output testing/Baseline/tests.known-failure-and-success/output testing/Baseline/tests.known-failure-succeeds/output testing/Baseline/tests.list/out testing/Baseline/tests.macros/output testing/Baseline/tests.measure-time/output testing/Baseline/tests.measure-time-options/output testing/Baseline/tests.multiple-baseline-dirs/fail.log testing/Baseline/tests.parts/output testing/Baseline/tests.parts-error-part/output testing/Baseline/tests.parts-error-start-next/output testing/Baseline/tests.parts-glob/output 
testing/Baseline/tests.parts-initializer-finalizer/output testing/Baseline/tests.parts-skipping/output testing/Baseline/tests.parts-teardown/output testing/Baseline/tests.progress/output testing/Baseline/tests.progress-back-to-back/output testing/Baseline/tests.quiet/out1 testing/Baseline/tests.quiet/out2 testing/Baseline/tests.requires/output testing/Baseline/tests.requires-with-start-next/output testing/Baseline/tests.rerun/output testing/Baseline/tests.sphinx.rst-cmd/output testing/Baseline/tests.sphinx.run-sphinx/_build.text.index.txt testing/Baseline/tests.start-file/output testing/Baseline/tests.start-next/output testing/Baseline/tests.start-next-dir/output testing/Baseline/tests.statefile/mystate1 testing/Baseline/tests.statefile/mystate2 testing/Baseline/tests.statefile-sorted/mystate testing/Baseline/tests.teardown/output testing/Baseline/tests.testdirs/out1 testing/Baseline/tests.testdirs/out2 testing/Baseline/tests.threads/output.j0 testing/Baseline/tests.threads/output.j1 testing/Baseline/tests.threads/output.j5 testing/Baseline/tests.tracing/output testing/Baseline/tests.unstable/output testing/Baseline/tests.unstable-dir/output testing/Baseline/tests.verbose/output testing/Baseline/tests.versioning/output testing/Baseline/tests.xml/output-j2.xml testing/Baseline/tests.xml/output.xml testing/Files/local_alternative/btest.tests.cfg testing/Files/local_alternative/Baseline/tests.local-alternative-show-env/output testing/Files/local_alternative/Baseline/tests.local-alternative-show-test-baseline/output testing/Files/local_alternative/Baseline/tests.local-alternative-show-testbase/output testing/Files/local_alternative/tests/local-alternative-found.test testing/Files/local_alternative/tests/local-alternative-show-env.test testing/Files/local_alternative/tests/local-alternative-show-test-baseline.test testing/Files/local_alternative/tests/local-alternative-show-testbase.test testing/Scripts/diff-remove-abspath testing/Scripts/dummy-script 
testing/Scripts/script-command testing/Scripts/strip-iso8601-date testing/Scripts/strip-test-base testing/Scripts/test-filter testing/Scripts/test-perf testing/tests/abort-on-failure-with-only-known-fails.btest testing/tests/abort-on-failure.btest testing/tests/alternatives-baseline-dir.test testing/tests/alternatives-environment.test testing/tests/alternatives-filter.test testing/tests/alternatives-keywords.test testing/tests/alternatives-overwrite-env.test testing/tests/alternatives-reread-config-baselinedir.test testing/tests/alternatives-reread-config.test testing/tests/alternatives-substitution.test testing/tests/alternatives-testbase.test testing/tests/baseline-dir-env.test testing/tests/basic-fail.test testing/tests/basic-succeed.test testing/tests/binary-mode.test testing/tests/brief.test testing/tests/btest-cfg.test testing/tests/canonifier-cmdline.test testing/tests/canonifier-conversion.test testing/tests/canonifier-fail.test testing/tests/canonifier.test testing/tests/console.test testing/tests/copy-file.test testing/tests/crlf-line-terminators.test testing/tests/diag-all.test testing/tests/diag-file.test testing/tests/diag.test testing/tests/diff-brief.test testing/tests/diff-max-lines.test testing/tests/diff.test testing/tests/doc.test testing/tests/duplicate-selection.test testing/tests/environment.test testing/tests/exit-codes.test testing/tests/finalizer.test testing/tests/groups.test testing/tests/ignore.test testing/tests/initializer.test testing/tests/known-failure-and-success.btest testing/tests/known-failure-succeeds.btest testing/tests/known-failure.btest testing/tests/list.test testing/tests/macros.test testing/tests/measure-time-options.test testing/tests/measure-time.tests testing/tests/multiple-baseline-dirs.test testing/tests/parts-error-part.test testing/tests/parts-error-start-next.test testing/tests/parts-glob.test testing/tests/parts-initializer-finalizer.test testing/tests/parts-skipping.tests testing/tests/parts-teardown.test 
testing/tests/parts.tests testing/tests/ports.test testing/tests/progress-back-to-back.test testing/tests/progress.test testing/tests/quiet.test testing/tests/requires-with-start-next.test testing/tests/requires.test testing/tests/rerun.test testing/tests/start-file.test testing/tests/start-next-dir.test testing/tests/start-next-naming.test testing/tests/start-next.test testing/tests/statefile-sorted.test testing/tests/statefile.test testing/tests/teardown.test testing/tests/test-base.test testing/tests/testdirs.test testing/tests/threads.test testing/tests/tmps.test testing/tests/tracing.test testing/tests/unstable-dir.test testing/tests/unstable.test testing/tests/verbose.test testing/tests/versioning.test testing/tests/xml.test testing/tests/sphinx/rst-cmd.sh testing/tests/sphinx/run-sphinx././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1654277994.0 btest-0.72/sphinx/btest.egg-info/dependency_links.txt0000644000076500000240000000000114246443552021502 0ustar00timstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1654277994.0 btest-0.72/sphinx/btest.egg-info/top_level.txt0000644000076500000240000000001514246443552020162 0ustar00timstaffbtest-sphinx ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6644137 btest-0.72/testing/0000755000076500000240000000000014246443553012766 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/.gitignore0000644000076500000240000000004014072112013014725 0ustar00timstaff.tmp .btest.failed.dat diag.log ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6159098 btest-0.72/testing/Baseline/0000755000076500000240000000000014246443553014510 5ustar00timstaff././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6653628 
btest-0.72/testing/Baseline/tests.abort-on-failure/0000755000076500000240000000000014246443553021017 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.abort-on-failure/output0000644000076500000240000000037714072112013022266 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. test1 ... ok % cat .stderr test2 ... failed % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr Aborted after first failure. ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.666257 btest-0.72/testing/Baseline/tests.abort-on-failure-with-only-known-fails/0000755000076500000240000000000014246443553025175 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.abort-on-failure-with-only-known-fails/output0000644000076500000240000000053314072112013026436 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. test1 ... ok % cat .stderr test2 ... failed (expected) % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr test3 ... failed % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr Aborted after first failure. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6680477 btest-0.72/testing/Baseline/tests.alternatives-environment/0000755000076500000240000000000014246443553022714 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-environment/child-output0000644000076500000240000000044614072112013025241 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
Foo: Foo2: ------------- Foo: BAR Foo2: ------------- Foo: BAR Foo2: ------------- Foo: Foo2: BAR2 ------------- Foo: BAR Foo2: ------------- Foo: Foo2: BAR2 ------------- ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-environment/output0000644000076500000240000000065714072112013024164 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. alternatives-environment ... ok all 1 tests successful alternatives-environment [foo] ... ok all 1 tests successful alternatives-environment [foo] ... ok alternatives-environment [foo2] ... ok all 2 tests successful alternatives-environment [foo] ... ok alternatives-environment [foo2] ... ok all 2 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6699321 btest-0.72/testing/Baseline/tests.alternatives-filter/0000755000076500000240000000000014246443553021635 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.alternatives-filter/child-output0000644000076500000240000000101314165076445024176 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
# %TEST-EXEC: btest %INPUT >>output 2>&1 # %TEST-EXEC: btest -a foo %INPUT >>output 2>&1 # %TEST-EXEC: btest-diff output # %TEST-EXEC: btest-diff child-output @TEST-EXEC: cat %INPUT >>../../child-output # %T*ST-*X*C: btest %INPUT >>output 2>&1 # %T*ST-*X*C: btest -a foo %INPUT >>output 2>&1 # %T*ST-*X*C: btest-diff output # %T*ST-*X*C: btest-diff child-output @T*ST-*X*C: cat %INPUT >>../../child-output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-filter/output0000644000076500000240000000033514072112013023076 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. alternatives-filter ... ok all 1 tests successful alternatives-filter [foo] ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.670922 btest-0.72/testing/Baseline/tests.alternatives-keywords/0000755000076500000240000000000014246443553022217 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-keywords/output0000644000076500000240000000056014072112013023460 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. default1 ... ok notfoo1 ... ok all 2 tests successful default1 ... ok notfoo1 ... ok all 2 tests successful foo1 [foo] ... ok notdefault1 [foo] ... ok all 2 tests successful notdefault1 [notexist] ... ok notfoo1 [notexist] ... 
ok all 2 tests successful ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.672653 btest-0.72/testing/Baseline/tests.alternatives-substitution/0000755000076500000240000000000014246443553023124 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-substitution/child-output0000644000076500000240000000021014072112013025436 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. World! Hello, World! ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-substitution/output0000644000076500000240000000035114072112013024363 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. alternatives-substitution ... ok all 1 tests successful alternatives-substitution [foo] ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6736577 btest-0.72/testing/Baseline/tests.alternatives-testbase/0000755000076500000240000000000014246443553022162 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.alternatives-testbase/output0000644000076500000240000000007214072112013023421 0ustar00timstafftests.basic-succeed [local] ... 
ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6755064 btest-0.72/testing/Baseline/tests.brief/0000755000076500000240000000000014246443553016740 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.brief/out10000644000076500000240000000021214072112013017523 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. all 2 tests successful ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.brief/out20000644000076500000240000000022514072112013017530 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t2 ... failed 1 of 2 tests failed ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6781363 btest-0.72/testing/Baseline/tests.btest-cfg/0000755000076500000240000000000014246443553017527 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.btest-cfg/abspath0000644000076500000240000000030314072112013021045 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. btest-cfg ... ok all 1 tests successful btest-cfg ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.btest-cfg/nopath0000644000076500000240000000042414072112013020720 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. btest-cfg ... ok all 1 tests successful btest-cfg ... ok all 1 tests successful btest-cfg ... 
ok all 1 tests successful configuration file 'btest.cfg' not found ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.btest-cfg/relpath0000644000076500000240000000030314072112013021062 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. btest-cfg ... ok all 1 tests successful btest-cfg ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.679018 btest-0.72/testing/Baseline/tests.console/0000755000076500000240000000000014246443553017313 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.console/output0000644000076500000240000000016714072112013020557 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 2 4 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6807585 btest-0.72/testing/Baseline/tests.crlf-line-terminators/0000755000076500000240000000000014246443553022071 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.crlf-line-terminators/crlfs.dat0000644000076500000240000000017514072112013023654 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 1 2 3 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.crlf-line-terminators/input0000644000076500000240000000067514072112013023140 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. # %TEST-DOC: Check that CRLF line-terminators are preserved in test files. 
# Note that this test file itself has CRLF line endings and .gitattributes # has an entry to explicitly say this file uses CRLF. # %TEST-EXEC: cp %INPUT input # %TEST-EXEC: btest-diff input # %TEST-EXEC: btest-diff crlfs.dat one two three ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6815867 btest-0.72/testing/Baseline/tests.diag/0000755000076500000240000000000014246443553016555 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diag/output0000644000076500000240000000133414072112013020016 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diag ... failed % 'btest-diff output' failed unexpectedly (exit code 100) % cat .diag == File =============================== Hello, World! == Error ============================== test-diff: no baseline found. ======================================= % cat .stderr 1 of 1 test failed diag ... failed % 'btest-diff output' failed unexpectedly (exit code 1) % cat .diag == File =============================== Hello, World! == Diff =============================== @@ -1 +1 @@ -Wrong baseline +Hello, World! ======================================= % cat .stderr 1 of 1 test failed ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6823637 btest-0.72/testing/Baseline/tests.diag-all/0000755000076500000240000000000014246443553017323 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diag-all/output0000644000076500000240000000041014072112013020556 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok % cat .stderr Stderr output t2 ... 
ok % cat .diag This is the contents of the .diag file % cat .stderr all 2 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6841555 btest-0.72/testing/Baseline/tests.diag-file/0000755000076500000240000000000014246443553017472 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diag-file/diag0000644000076500000240000000033014072112013020272 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diag-file ... failed % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr Stderr output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diag-file/output0000644000076500000240000000023314072112013020730 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diag-file ... failed 1 of 1 test failed ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6849313 btest-0.72/testing/Baseline/tests.diff-brief/0000755000076500000240000000000014246443553017646 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diff-brief/output0000644000076500000240000000072714072112013021114 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diff-brief ... 
failed % 'TEST_DIFF_BRIEF=1 btest-diff child-output' failed unexpectedly (exit code 1) % cat .diag == File =============================== == Diff =============================== @@ -1 +1 @@ -This is the baseline +Hello world ======================================= % cat .stderr 1 of 1 test failed ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6865711 btest-0.72/testing/Baseline/tests.diff-max-lines/0000755000076500000240000000000014246443553020454 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diff-max-lines/output10000644000076500000240000000113414072112013021774 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diff-max-lines ... failed % 'TEST_DIFF_FILE_MAX_LINES=2 btest-diff child-output' failed unexpectedly (exit code 100) % cat .diag == File =============================== Output line 1 Output line 2 Output line 3 Output line 4 Output line 5 Output line 6 Output line 7 Output line 8 Output line 9 Output line 10 == Error ============================== test-diff: no baseline found. ======================================= % cat .stderr 1 of 1 test failed ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.diff-max-lines/output20000644000076500000240000000126214072112013021777 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. diff-max-lines ... failed % 'TEST_DIFF_FILE_MAX_LINES=2 btest-diff child-output' failed unexpectedly (exit code 1) % cat .diag == File =============================== Output line 1 Output line 2 [... File too long, truncated ...] 
== Diff =============================== @@ -1 +1,10 @@ -This is the baseline +Output line 1 +Output line 2 +Output line 3 +Output line 4 +Output line 5 +Output line 6 +Output line 7 +Output line 8 +Output line 9 +Output line 10 ======================================= % cat .stderr 1 of 1 test failed ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.688418 btest-0.72/testing/Baseline/tests.doc/0000755000076500000240000000000014246443553016416 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.doc/md0000644000076500000240000000154014072112013016716 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. # alternatives * `alternatives.default`: This test runs if no alternative is given. * `alternatives.myalternate`: This test runs only for the given alternative, but it will always appear in the documentation output. # misc * `misc.requires`: This is a skipped test. # misc.subdir * `misc.subdir.subtest`: This test is in a subdirectory. # multipart * `multipart.multi`: 1st test part. 2nd test part. 3rd test part. # testdoc * `testdoc.badtestdoc`: No documentation. * `testdoc.multiline`: This is a multi-line TEST-DOC comment. This is the 2nd line. This is the 3rd line. * `testdoc.singleline`: This is a single-line TEST-DOC comment. * `testdoc.testdocmissing`: No documentation. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.doc/rst0000644000076500000240000000174614072112013017136 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. alternatives ------------ ``alternatives.default``: This test runs if no alternative is given. 
``alternatives.myalternate``: This test runs only for the given alternative, but it will always appear in the documentation output. misc ---- ``misc.requires``: This is a skipped test. misc.subdir ----------- ``misc.subdir.subtest``: This test is in a subdirectory. multipart --------- ``multipart.multi``: 1st test part. 2nd test part. 3rd test part. testdoc ------- ``testdoc.badtestdoc``: No documentation. ``testdoc.multiline``: This is a multi-line TEST-DOC comment. This is the 2nd line. This is the 3rd line. ``testdoc.singleline``: This is a single-line TEST-DOC comment. ``testdoc.testdocmissing``: No documentation. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6896198 btest-0.72/testing/Baseline/tests.environment/0000755000076500000240000000000014246443553020215 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.environment/output0000644000076500000240000000131514072112013021455 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
Foo testbase is correct 42 macro expansion within backticks is correct default_path is correct <...>/.tmp/tests.environment/.tmp/environment/.diag TEST <...>/.tmp/tests.environment/Baseline/environment environment <...>/.tmp/tests.environment/.tmp/environment/.verbose <...>/.tmp/tests.environment 1 Foo testbase is correct 42 macro expansion within backticks is correct default_path is correct <...>/.tmp/tests.environment/.tmp/environment/.diag UPDATE <...>/.tmp/tests.environment/Baseline/environment environment <...>/.tmp/tests.environment/.tmp/environment/.verbose <...>/.tmp/tests.environment 1 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6912215 btest-0.72/testing/Baseline/tests.exit-codes/0000755000076500000240000000000014246443553017715 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.exit-codes/out10000644000076500000240000000023714072112013020507 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 1.1 1.2 2.1 4.1 4.2 5.1 6.1 7.1 7.2 8.1 9.1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.exit-codes/out20000644000076500000240000000041714072112013020510 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... failed t2 ... failed 2 of 2 tests failed t4 ... ok t5 ... failed t6 ... ok 1 of 3 tests failed t7 ... ok t8 ... failed t9 ... 
ok 1 of 3 tests failed ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.691971 btest-0.72/testing/Baseline/tests.groups/0000755000076500000240000000000014246443553017170 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.groups/output0000644000076500000240000000073414072112013020434 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t2 ... ok all 2 tests successful t1 ... ok t2 ... ok t3 ... ok all 3 tests successful t4 ... ok all 1 tests successful t1 ... ok t2 ... ok t4 ... ok all 3 tests successful t1 ... ok t2 ... ok t3 ... ok t4 ... ok all 4 tests successful t3 ... ok t4 ... ok all 2 tests successful t1 ... ok t2 ... ok t3 ... ok t4 ... ok t5 ... ok all 5 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6927583 btest-0.72/testing/Baseline/tests.ignore/0000755000076500000240000000000014246443553017134 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.ignore/output0000644000076500000240000000027014072112013020373 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. all 3 tests successful all.sub.t3 ... ok all.t1 ... ok all.t2 ... ok ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6936493 btest-0.72/testing/Baseline/tests.known-failure/0000755000076500000240000000000014246443553020432 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.known-failure/output0000644000076500000240000000052614072112013021675 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. 
Use "btest -U/-u" to update. Requires BTest >= 0.63. known-failure ... failed (expected) % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr test2 ... failed % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr 2 of 2 tests failed (with 1 expected to fail) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6946552 btest-0.72/testing/Baseline/tests.known-failure-and-success/0000755000076500000240000000000014246443553022640 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.known-failure-and-success/output0000644000076500000240000000032014072112013024073 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. known-failure-and-success ... failed (expected) 1 of 1 test failed (with 1 expected to fail) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.695531 btest-0.72/testing/Baseline/tests.known-failure-succeeds/0000755000076500000240000000000014246443553022226 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.known-failure-succeeds/output0000644000076500000240000000027714072112013023474 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. known-failure-succeeds ... 
ok (but expected to fail) all 1 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6966164 btest-0.72/testing/Baseline/tests.list/0000755000076500000240000000000014246443553016624 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.list/out0000644000076500000240000000017414165076445017362 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 t2 t3 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6974719 btest-0.72/testing/Baseline/tests.macros/0000755000076500000240000000000014246443553017135 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.macros/output0000644000076500000240000000023014072112013020370 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. macros ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6984663 btest-0.72/testing/Baseline/tests.measure-time/0000755000076500000240000000000014246443553020246 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.measure-time/output0000644000076500000240000000077014072112013021512 0ustar00timstaffmeasure-time ... ok % cat .stderr all 1 tests successful ----- measure-time ... ok % cat .stderr all 1 tests successful ----- measure-time ... ok (+xx.x%) % cat .stderr all 1 tests successful ----- measure-time ... failed (+xx.x%) % 'measure-time' exceeded permitted execution time deviation (+xx.x%) % cat .stderr 1 of 1 test failed ----- measure-time ... 
ok (+xx.x%) % cat .stderr all 1 tests successful ----- measure-time ... ok (+xx.x%) % cat .stderr all 1 tests successful ----- ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6993012 btest-0.72/testing/Baseline/tests.measure-time-options/0000755000076500000240000000000014246443553021737 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.measure-time-options/output0000644000076500000240000000120014072112013023170 0ustar00timstaffmeasure-time-options ... ok % cat .stderr all 1 tests successful ----- measure-time-options ... ok % cat .stderr all 1 tests successful ----- measure-time-options ... ok (+xx.x%) % cat .stderr all 1 tests successful ----- measure-time-options ... failed (+xx.x%) % 'measure-time-options' exceeded permitted execution time deviation (+xx.x%) % cat .stderr 1 of 1 test failed ----- measure-time-options ... ok (+xx.x%) % cat .stderr all 1 tests successful ----- measure-time-options ... failed (+xx.x%) % 'measure-time-options' exceeded permitted execution time deviation (+xx.x%) % cat .stderr 1 of 1 test failed ----- ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7002184 btest-0.72/testing/Baseline/tests.multiple-baseline-dirs/0000755000076500000240000000000014246443553022223 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.multiple-baseline-dirs/fail.log0000644000076500000240000000056714072112013023626 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t4 ... 
failed % 'btest-diff output' failed unexpectedly (exit code 1) % cat .diag == File =============================== 4 == Diff =============================== @@ -1 +1 @@ -XXX +4 ======================================= % cat .stderr ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7010102 btest-0.72/testing/Baseline/tests.parts/0000755000076500000240000000000014246443553017002 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.parts/output0000644000076500000240000000034314072112013020242 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 1 Hello, world! (<...>/test) 2 Hello, world! Again. (<...>/test#2) 3 Hello, world! Again. Again. (<...>/test#3) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7018209 btest-0.72/testing/Baseline/tests.parts-error-part/0000755000076500000240000000000014246443553021075 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.parts-error-part/output0000644000076500000240000000030414072112013022332 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Do not specify files with part numbers directly, use the base test name (test#3) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7026737 btest-0.72/testing/Baseline/tests.parts-error-start-next/0000755000076500000240000000000014246443553022240 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.parts-error-start-next/output0000644000076500000240000000026414072112013023502 0ustar00timstaff### BTest baseline data generated by btest-diff. 
Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. cannot use @TEST-START-NEXT with tests split across parts (test) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7035482 btest-0.72/testing/Baseline/tests.parts-glob/0000755000076500000240000000000014246443553017723 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.parts-glob/output0000644000076500000240000000036314072112013021165 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Hello, world!. Hello, world! Again. Hello, world! Again. Again. Hello, world!. Hello, world! Again. Hello, world! Again. Again. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7043605 btest-0.72/testing/Baseline/tests.parts-initializer-finalizer/0000755000076500000240000000000014246443553023304 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.parts-initializer-finalizer/output0000644000076500000240000000055314165076445024574 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Initializer 1 t.test PartInitializer 1 t.test Hello, world!. PartFinalizer 1 t.test PartInitializer 2 t.test Hello, world! Again. PartFinalizer 2 t.test PartInitializer 3 t.test Hello, world! Again. Again. 
PartFinalizer 3 t.test Finalizer 1 t.test ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7053328 btest-0.72/testing/Baseline/tests.parts-skipping/0000755000076500000240000000000014246443553020624 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.parts-skipping/output0000644000076500000240000000026314072112013022065 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Hello, world!. Hello, world! Again. Hello, world! Again. Again. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7062914 btest-0.72/testing/Baseline/tests.parts-teardown/0000755000076500000240000000000014246443553020623 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.parts-teardown/output0000644000076500000240000000030714165076445022110 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Hello, world!. Teardown 1 t.test Hello, world! Again, with error. Teardown 2 t.test ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7070892 btest-0.72/testing/Baseline/tests.progress/0000755000076500000240000000000014246443553017515 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.progress/output0000644000076500000240000000077014072112013020761 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. progress ... - Foo 1 - Foo 2 - Foo 3 - Foo 4 progress ... 
ok all 1 tests successful --- % cat .stderr [btest] -- Foo 1 [btest] -- Foo 1 -- XXX [btest] -- Foo 3 [btest] -- Foo 3 -- XXX [btest] -- Foo 4 [btest] -- Foo 4 all 1 tests successful --- progress ... > bash %INPUT >&2 - Foo 1 - Foo 2 - Foo 3 - Foo 4 ... progress ok all 1 tests successful --- --- ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.707898 btest-0.72/testing/Baseline/tests.progress-back-to-back/0000755000076500000240000000000014246443553021731 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.progress-back-to-back/output0000644000076500000240000000114014072112013023165 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. progress-back-to-back ... - Foo 1 - Foo 2 - Foo 3 - Foo 4 progress-back-to-back ... ok all 1 tests successful --- % cat .stderr [btest] -- Foo 1 [btest] -- Foo 1 -- XXX [btest] -- Foo 2 [btest] -- Foo 2 -- XXX [btest] -- Foo 3 [btest] -- Foo 3 -- XXX [btest] -- Foo 4 [btest] -- Foo 4 -- XXX all 1 tests successful --- progress-back-to-back ... > bash %INPUT >&2 - Foo 1 - Foo 2 - Foo 3 - Foo 4 ... progress-back-to-back ok all 1 tests successful --- --- ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7095463 btest-0.72/testing/Baseline/tests.quiet/0000755000076500000240000000000014246443553017000 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.quiet/out10000644000076500000240000000016314072112013017570 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.quiet/out20000644000076500000240000000020114072112013017562 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t2 ... failed ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7103372 btest-0.72/testing/Baseline/tests.requires/0000755000076500000240000000000014246443553017510 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.requires/output0000644000076500000240000000034114072112013020746 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t2 ... not available, skipped t3 ... not available, skipped t4 ... ok 2 tests successful, 2 skipped ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.711197 btest-0.72/testing/Baseline/tests.requires-with-start-next/0000755000076500000240000000000014246443553022570 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.requires-with-start-next/output0000644000076500000240000000034514072112013024032 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t1-2 ... ok t2 ... not available, skipped t2-2 ... 
not available, skipped 2 tests successful, 2 skipped ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7120526 btest-0.72/testing/Baseline/tests.rerun/0000755000076500000240000000000014246443553017004 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.rerun/output0000644000076500000240000000031214072112013020240 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t2 ... failed t3 ... ok 1 of 3 tests failed t2 ... failed 1 of 1 test failed ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.713017 btest-0.72/testing/Baseline/tests.sphinx.rst-cmd/0000755000076500000240000000000014246443553020532 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.sphinx.rst-cmd/output0000644000076500000240000000163614072112013022000 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. .. rst-class:: btest-cmd .. code-block:: none :linenos: :emphasize-lines: 1,1 # echo Hello Hello .. rst-class:: btest-include .. code-block:: guess :linenos: Hello 2, no command .. rst-class:: btest-cmd .. code-block:: none :linenos: :emphasize-lines: 1,1 # Different command Hello 3, no command .. rst-class:: btest-cmd .. code-block:: none :linenos: :emphasize-lines: 1,1 # echo Hello 4, no output .. rst-class:: btest-cmd .. code-block:: none :linenos: :emphasize-lines: 1,1 # Xcho HXllo 5, filtXr HXllo 5, filtXr .. rst-class:: btest-cmd .. code-block:: none :linenos: :emphasize-lines: 1,1 # echo Hello 6, file Example file. 
Line 2 ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.713879 btest-0.72/testing/Baseline/tests.sphinx.run-sphinx/0000755000076500000240000000000014246443553021274 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.sphinx.run-sphinx/_build.text.index.txt0000644000076500000240000000255614165076445025376 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. Welcome to BTest-Sphinx Demo's documentation! ********************************************* Contents: Testing ******* # echo Hello, world! Hello, world! # echo Hello, world! Again. Hello, world! Again. # echo Hello, world! Again. Again. Hello, world! Again. Again. # echo This will fail soon! This will fail soon! This should fail and include the diag output instead: ERROR executing test 'tests.sphinx.hello-world-fail' (part 2) % 'echo StDeRr >&2; echo 1 | grep -q 2' failed unexpectedly (exit code 1) % cat .stderr StDeRr This should succeed: # echo This succeeds again! This succeeds again! This should fail again and include the diag output instead: ERROR executing test 'tests.sphinx.hello-world-fail' (part 4) % 'echo StDeRr >&2; echo 3 | grep -q 4' failed unexpectedly (exit code 1) % cat .stderr StDeRr StDeRr # echo This succeeds again! This succeeds again! 
btest.cfg [btest] TestDirs = tests TmpDir = %(testbase)s/.tmp BaselineDir = %(testbase)s/Baseline Finalizer = btest-diff-rst [environment] PATH=%(testbase)s/../../:%(testbase)s/../../sphinx:%(default_path)s Indices and tables ****************** * Index * Module Index * Search Page ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.714872 btest-0.72/testing/Baseline/tests.start-file/0000755000076500000240000000000014246443553017723 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.start-file/output0000644000076500000240000000016714072112013021167 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 3 4 ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.715725 btest-0.72/testing/Baseline/tests.start-next/0000755000076500000240000000000014246443553017762 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.start-next/output0000644000076500000240000000017514072112013021225 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 168 20 19 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7165418 btest-0.72/testing/Baseline/tests.start-next-dir/0000755000076500000240000000000014246443553020536 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.start-next-dir/output0000644000076500000240000000030414072112013021773 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
tests.start-next-dir/mydir tests.start-next-dir/mydir tests.start-next-dir/mydir ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7181947 btest-0.72/testing/Baseline/tests.statefile/0000755000076500000240000000000014246443553017631 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.statefile/mystate10000644000076500000240000000016314072112013021300 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.statefile/mystate20000644000076500000240000000017114072112013021300 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t2 t3 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7191148 btest-0.72/testing/Baseline/tests.statefile-sorted/0000755000076500000240000000000014246443553021127 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.statefile-sorted/mystate0000644000076500000240000000017114165076445022541 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 t2 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7198617 btest-0.72/testing/Baseline/tests.teardown/0000755000076500000240000000000014246443553017474 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Baseline/tests.teardown/output0000644000076500000240000000041714165076445020763 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. 
Use "btest -U/-u" to update. Requires BTest >= 0.63. success Teardown tests.success 0 0 tests.success success failure Teardown tests.failure 1 42 tests.failure success Teardown tests.success 0 0 (failing now) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7213356 btest-0.72/testing/Baseline/tests.testdirs/0000755000076500000240000000000014246443553017512 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854987.0 btest-0.72/testing/Baseline/tests.testdirs/out10000644000076500000240000000024414072112013020302 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. d1.t1 ... ok d2.t2 ... ok all 2 tests successful ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.testdirs/out20000644000076500000240000000023214072112014020301 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. testdirs ... ok all 1 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7239945 btest-0.72/testing/Baseline/tests.threads/0000755000076500000240000000000014246443553017303 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.threads/output.j00000644000076500000240000000027414072112014021057 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t2 ... ok t3 ... ok t4 ... ok t5 ... 
ok all 5 tests successful ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.threads/output.j10000644000076500000240000000027414072112014021060 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... ok t2 ... ok t3 ... ok t4 ... ok t5 ... ok all 5 tests successful ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.threads/output.j50000644000076500000240000000017014072112014021057 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 4 5 ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.724754 btest-0.72/testing/Baseline/tests.tracing/0000755000076500000240000000000014246443553017300 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.tracing/output0000644000076500000240000000024614072112014020543 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 3 ['cat', 'dur', 'name', 'ph', 'pid', 'tid', 'ts'] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7255206 btest-0.72/testing/Baseline/tests.unstable/0000755000076500000240000000000014246443553017466 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.unstable/output0000644000076500000240000000040014072112014020721 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. test1 ... failed test1 ... failed on retry #1 test1 ... failed on retry #2 test1 ... 
ok on retry #3, unstable 0 tests successful, 1 unstable ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7263834 btest-0.72/testing/Baseline/tests.unstable-dir/0000755000076500000240000000000014246443553020242 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.unstable-dir/output0000644000076500000240000000043014072112014021500 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. mydir.test1 ... failed mydir.test1 ... failed on retry #1 mydir.test1 ... failed on retry #2 mydir.test1 ... ok on retry #3, unstable 0 tests successful, 1 unstable ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7271938 btest-0.72/testing/Baseline/tests.verbose/0000755000076500000240000000000014246443553017316 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.verbose/output0000644000076500000240000000054214072112014020560 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. t1 ... > echo "Hello, World!" ... t1 ok t2 ... > echo "This is the contents of the .verbose file" > ${TEST_VERBOSE} > echo "Hello, World!" > [test-verbose] This is the contents of the .verbose file ... t2 ok all 2 tests successful ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7279036 btest-0.72/testing/Baseline/tests.versioning/0000755000076500000240000000000014246443553020034 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.versioning/output0000644000076500000240000000027414072112014021300 0ustar00timstaff### BTest baseline data generated by btest-diff. 
Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. btest.cfg requires at least BTest 99999.99, this is XXX. Please upgrade. ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7293434 btest-0.72/testing/Baseline/tests.xml/0000755000076500000240000000000014246443553016451 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.xml/output-j2.xml0000644000076500000240000000066614072112014021032 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. % 'exit 1' failed unexpectedly (exit code 1) % cat .stderr ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Baseline/tests.xml/output.xml0000644000076500000240000000066614072112014020521 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
% 'exit 1' failed unexpectedly (exit code 1) % cat .stderr ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.616247 btest-0.72/testing/Files/0000755000076500000240000000000014246443553014030 5ustar00timstaff././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.730065 btest-0.72/testing/Files/local_alternative/0000755000076500000240000000000014246443553017520 5ustar00timstaff././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.6172676 btest-0.72/testing/Files/local_alternative/Baseline/0000755000076500000240000000000014246443553021242 5ustar00timstaff././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1654277994.730788 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-env/0000755000076500000240000000000014246443553027555 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-env/output0000644000076500000240000000051214072112014031014 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. 
ENV1=Foo ENV2=<...>/.tmp/tests.alternatives-reread-config/local_alternative ENV3=42 ENV4=(<...>/.tmp/tests.alternatives-reread-config/local_alternative=<...>/.tmp/tests.alternatives-reread-config/local_alternative) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.7316408 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-test-baseline/0000755000076500000240000000000014246443553031524 5ustar00timstaff././@PaxHeader0000000000000000000000000000020500000000000010212 xustar00111 path=btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-test-baseline/output 22 mtime=1625854988.0 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-test-baseline/outpu0000644000076500000240000000037714072112014032610 0ustar00timstaff### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63. TEST_BASELINE=<...>/.tmp/tests.alternatives-reread-config-baselinedir/local_alternative/Baseline/tests.local-alternative-show-test-baseline ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1654277994.73253 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-testbase/0000755000076500000240000000000014246443553030577 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/Files/local_alternative/Baseline/tests.local-alternative-show-testbase/output0000644000076500000240000000004214072112014032034 0ustar00timstaffBTEST_TEST_BASE=local_alternative ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/Files/local_alternative/btest.tests.cfg0000644000076500000240000000060514165076445022466 0ustar00timstaff# # Configuration file used by individual tests. # # This is set so that all files will be created inside the current # sandbox. 
[btest]
TmpDir = .tmp
BaselineDir = Baseline

[environment]
PATH=%(default_path)s
ENV1=Foo
ENV2=%(testbase)s
ENV3=`expr 42`

[environment-foo]
FOO=BAR

[filter-foo]
cat=%(testbase)s/../../Scripts/test-filter

[substitution-foo]
printf=printf 'Hello, %s'

--- btest-0.72/testing/Files/local_alternative/tests/local-alternative-found.test ---
# We're just testing if we can find this test. The test itself will
# always just return true.
#
# @TEST-EXEC: true

--- btest-0.72/testing/Files/local_alternative/tests/local-alternative-show-env.test ---
# Output the ENV variables set from [environment] and ensure they
# have been re-interpolated with "local_alternative" in them.
#
# @TEST-EXEC: env | grep '^ENV[1234]' | sort | strip-test-base > output
# @TEST-EXEC: btest-diff output

--- btest-0.72/testing/Files/local_alternative/tests/local-alternative-show-test-baseline.test ---
# Output the TEST_BASELINE environment value, it should
# contain the local_alternative part
#
# @TEST-EXEC: env | grep ^TEST_BASELINE | sort | strip-test-base > output
# @TEST-EXEC: btest-diff output

--- btest-0.72/testing/Files/local_alternative/tests/local-alternative-show-testbase.test ---
# Output the BTEST_TEST_BASE environment value, it should not
# contain the ignore/this/base path
#
# @TEST-EXEC: env | grep BTEST_TEST_BASE | strip-test-base > output
# @TEST-EXEC: btest-diff output

--- btest-0.72/testing/Makefile ---
all:
	@../btest -j -d -f diag.log
	@if [ -d .tmp ] && [ "$$(ls -A .tmp)" ]; then \
		echo "ERROR: Left-over files in .tmp"; \
		ls -lA .tmp; \
		exit 1; \
	fi

cleanup:
	@rm -f diag.log
	@rm -f .btest.failed.dat
	@rm -rf .tmp

--- btest-0.72/testing/Scripts/diff-remove-abspath ---
#! /usr/bin/env bash
#
# Replace absolute paths with the basename.
sed 's#/\([^/]\{1,\}/\)\{1,\}\([^/]\{1,\}\)#<...>/\2#g'

--- btest-0.72/testing/Scripts/dummy-script ---
# A dummy file used with the copy-file script.

--- btest-0.72/testing/Scripts/script-command ---
# This is a wrapper for the "script" command, which has different
# options depending on the OS.
if ! script -q -c ls /dev/null >/dev/null 2>&1; then
    # FreeBSD and macOS
    script -q /dev/null "$@"
else
    # Linux
    script -qfc "$@" /dev/null
fi

--- btest-0.72/testing/Scripts/strip-iso8601-date ---
#!/bin/sh -
exec sed 's/-- [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]T[0-9][0-9]:[0-9][0-9]:[0-9][0-9][^ ]*Z$/-- XXX/g'

--- btest-0.72/testing/Scripts/strip-test-base ---
#! /usr/bin/env bash
#
dir=$(dirname "$0")
testbase=$(cd "$dir/.." && pwd)
sed "s#${testbase}#<...>#g"

--- btest-0.72/testing/Scripts/test-filter ---
# Test filter used by the alternatives-filter test.
sed 's/E/*/g' <$1 >$2

--- btest-0.72/testing/Scripts/test-perf ---
#! /usr/bin/env bash
#
# This script imitates the behavior of the Linux "perf" command. Useful
# for testing purposes because this script produces consistent and
# predictable results.
#
# NOTE: if this script is in PATH, then it should not be named "perf", because
# we want to use the real perf command for some tests.

# Only "perf stat" is supported.
if [ "$1" != "stat" ]; then
    exit 1
fi
shift

# Ignore all options except "-o".
while getopts "o:x:e:" arg; do
    case $arg in
        o) fname=$OPTARG ;;
        *) ;;
    esac
done
shift $((OPTIND - 1))

# Use a hard-coded message so that we get predictable results
msg="1000 instructions"

# Write the message to a file (if specified), or stderr
if [ -n "$fname" ]; then
    echo "$msg" >"$fname"
else
    echo "$msg" >&2
fi

# Run the specified command
"$@"

--- btest-0.72/testing/btest.cfg ---
# Configuration file for running btest's test suite.

[btest]
TestDirs = tests
TmpDir = %(testbase)s/.tmp
BaselineDir = %(testbase)s/Baseline
IgnoreDirs = .svn CVS .tmp
IgnoreFiles = *.tmp *.swp #*
CommandPrefix = %%TEST-
Initializer = test -f btest.cfg || cp %(testbase)s/btest.tests.cfg btest.cfg; echo >/dev/null

[environment]
PATH=%(testbase)s/..:%(testbase)s/../sphinx:%(testbase)s/Scripts:%(default_path)s
SCRIPTS=%(testbase)s/Scripts
TMPDIR=%(testbase)s/.tmp
# BTEST_CFG=%(testbase)s/btest.tests.cfg

--- btest-0.72/testing/btest.tests.cfg ---
#
# Configuration file used by individual tests.
#
# This is set so that all files will be created inside the current
# sandbox.
[btest]
TmpDir = `echo .tmp`
BaselineDir = %(testbase)s/Baseline

[environment]
ORIGPATH=%(default_path)s
ENV1=Foo
ENV2=%(testbase)s
ENV3=`expr 42`
ENV4=`echo \(%(testbase)s=%(testbase)s\)`

[environment-foo]
FOO=BAR

[filter-foo]
cat=%(testbase)s/../../Scripts/test-filter

[substitution-foo]
printf=printf 'Hello, %%s'

[environment-foo2]
FOO2=`echo BAR2`

[environment-local]
BTEST_TEST_BASE=local_alternative

--- btest-0.72/testing/tests/abort-on-failure-with-only-known-fails.btest ---
# %TEST-DOC: Ensure --abort-on-failure does not trigger for known failures.
#
# The TMPDIR assignment in the following prevents leakage of Python
# multiprocessing state into btest's .tmp folder on some platforms.
# %TEST-EXEC-FAIL: TMPDIR=$PWD btest -FD test1 test2 test3 test4 >output 2>&1
# %TEST-EXEC: btest-diff output

# %TEST-START-FILE test1
@TEST-EXEC: exit 0
# %TEST-END-FILE

# %TEST-START-FILE test2
@TEST-EXEC: exit 1
@TEST-KNOWN-FAILURE: This test is expected to fail, and hence not abort.
# %TEST-END-FILE

# %TEST-START-FILE test3
@TEST-EXEC: exit 1
# %TEST-END-FILE

# %TEST-START-FILE test4
@TEST-EXEC: exit 0
# %TEST-END-FILE

--- btest-0.72/testing/tests/abort-on-failure.btest ---
# %TEST-DOC: Ensure --abort-on-failure properly shortcuts testing.
#
# The TMPDIR assignment in the following prevents leakage of Python
# multiprocessing state into btest's .tmp folder on some platforms.
# %TEST-EXEC-FAIL: TMPDIR=$PWD btest -FD test1 test2 test3 >output 2>&1
# %TEST-EXEC: btest-diff output

# %TEST-START-FILE test1
@TEST-EXEC: exit 0
# %TEST-END-FILE

# %TEST-START-FILE test2
@TEST-EXEC: exit 1
# %TEST-END-FILE

# %TEST-START-FILE test3
@TEST-EXEC: exit 0
# %TEST-END-FILE

--- btest-0.72/testing/tests/alternatives-baseline-dir.test ---
# %TEST-DOC: Check that we can change the baseline directory from inside an alternative by setting BTEST_BASELINE_DIR there.
#
# %TEST-EXEC-FAIL: btest -a baseline %INPUT
# %TEST-EXEC: test ! -f mydir/alternatives-baseline-dir/output
# %TEST-EXEC: btest -a baseline -U %INPUT
# %TEST-EXEC: test ! -e Baseline
# %TEST-EXEC: test -f mydir/alternatives-baseline-dir/output
# %TEST-EXEC: btest -a baseline %INPUT

@TEST-EXEC: echo Hello, World! >output
@TEST-EXEC: btest-diff output

%TEST-START-FILE btest.cfg
[btest]
TmpDir = .tmp

[environment-baseline]
BTEST_BASELINE_DIR=mydir
%TEST-END-FILE

--- btest-0.72/testing/tests/alternatives-environment.test ---
# %TEST-EXEC: btest %INPUT >>output 2>&1
# %TEST-EXEC: btest -a foo %INPUT >>output 2>&1
# %TEST-EXEC: btest -a foo,foo2 %INPUT >>output 2>&1
# %TEST-EXEC: btest -a foo,-,foo2 %INPUT >>output 2>&1
# %TEST-EXEC: btest-diff output
# %TEST-EXEC: btest-diff child-output

@TEST-EXEC: echo "Foo: ${FOO}" >>../../child-output
@TEST-EXEC: echo "Foo2: ${FOO2}" >>../../child-output
@TEST-EXEC: echo "-------------" >>../../child-output

--- btest-0.72/testing/tests/alternatives-filter.test ---
# %TEST-EXEC: btest %INPUT >>output 2>&1
# %TEST-EXEC: btest -a foo %INPUT >>output 2>&1
# %TEST-EXEC: btest-diff output
# %TEST-EXEC: btest-diff child-output

@TEST-EXEC: cat %INPUT >>../../child-output

--- btest-0.72/testing/tests/alternatives-keywords.test ---
# %TEST-EXEC: btest foo1 default1 notfoo1 notdefault1 >>output 2>&1
# %TEST-EXEC: btest -a - foo1 default1 notfoo1 notdefault1 >>output 2>&1
# %TEST-EXEC: btest -a foo foo1 default1 notfoo1 notdefault1 >>output 2>&1
# %TEST-EXEC: btest -a notexist foo1 default1 notfoo1 notdefault1 >>output 2>&1
# %TEST-EXEC: btest-diff output

%TEST-START-FILE foo1
@TEST-ALTERNATIVE: foo
@TEST-EXEC: exit 0
%TEST-END-FILE

%TEST-START-FILE default1
@TEST-ALTERNATIVE: default
@TEST-EXEC: exit 0
%TEST-END-FILE

%TEST-START-FILE notfoo1
@TEST-NOT-ALTERNATIVE: foo
@TEST-EXEC: exit 0
%TEST-END-FILE

%TEST-START-FILE notdefault1
@TEST-NOT-ALTERNATIVE: default
@TEST-EXEC: exit 0
%TEST-END-FILE

--- btest-0.72/testing/tests/alternatives-overwrite-env.test ---
# %TEST-EXEC: cp -r ../../Files/local_alternative .
# %TEST-EXEC: BTEST_TEST_BASE=/ignore/this/base btest -a local -d tests/local-alternative-show-testbase.test

--- btest-0.72/testing/tests/alternatives-reread-config-baselinedir.test ---
# %TEST-EXEC: cp -r ../../Files/local_alternative .
# %TEST-EXEC: btest -a local -d tests/local-alternative-show-test-baseline.test

--- btest-0.72/testing/tests/alternatives-reread-config.test ---
# %TEST-EXEC: cp -r ../../Files/local_alternative .
# %TEST-EXEC: btest -a local -d tests/local-alternative-show-env.test

--- btest-0.72/testing/tests/alternatives-substitution.test ---
# %TEST-EXEC: btest %INPUT >>output 2>&1
# %TEST-EXEC: btest -a foo %INPUT >>output 2>&1
# %TEST-EXEC: btest-diff output
# %TEST-EXEC: btest-diff child-output

@TEST-EXEC: printf 'World!' >>../../child-output
@TEST-EXEC: echo >>../../child-output

--- btest-0.72/testing/tests/alternatives-testbase.test ---
# %TEST-EXEC: cp -r ../../Files/local_alternative .
# %TEST-EXEC: btest -a local tests/local-alternative-found.test

--- btest-0.72/testing/tests/baseline-dir-env.test ---
# %TEST-DOC: Check that we can change the baseline directory externally by setting BTEST_BASELINE_DIR.
#
# %TEST-EXEC-FAIL: BTEST_BASELINE_DIR=mydir btest %INPUT
# %TEST-EXEC: test ! -e mydir/baseline-dir-env/output
# %TEST-EXEC: BTEST_BASELINE_DIR=mydir btest -U %INPUT
# %TEST-EXEC: test ! -e Baseline
# %TEST-EXEC: test -f mydir/baseline-dir-env/output
# %TEST-EXEC: BTEST_BASELINE_DIR=mydir btest %INPUT

@TEST-EXEC: echo Hello, World! >output
@TEST-EXEC: btest-diff output

%TEST-START-FILE btest.cfg
[btest]
TmpDir = .tmp
%TEST-END-FILE

--- btest-0.72/testing/tests/basic-fail.test ---
# %TEST-EXEC-FAIL: btest t1 t2

%TEST-START-FILE t1
@TEST-EXEC: echo Hello, World!
@TEST-EXEC: exit 1
%TEST-END-FILE

%TEST-START-FILE t2
@TEST-EXEC: echo Hello, World!
@TEST-EXEC-FAIL: exit 0
%TEST-END-FILE

--- btest-0.72/testing/tests/basic-succeed.test ---
# %TEST-EXEC: btest t1 t2

%TEST-START-FILE t1
@TEST-EXEC: echo Hello, World!
@TEST-EXEC: exit 0
%TEST-END-FILE

%TEST-START-FILE t2
@TEST-EXEC: echo Hello, World!
@TEST-EXEC-FAIL: exit 1
%TEST-END-FILE

--- btest-0.72/testing/tests/binary-mode.test ---
# This tests btest-diff's binary mode. In binary mode we treat test
# outputs and baselines as blobs, compare them only byte-by-byte,
# don't canonify them, and don't prefix our baseline header.
#
# Running the below directly should fail: there's no baseline yet.
# %TEST-EXEC-FAIL: btest %INPUT
#
# Verify that none got created either:
# %TEST-EXEC: test ! -e mydir/binary-mode/output
#
# Now update the baseline:
# %TEST-EXEC: btest -U %INPUT
#
# Verify it exists:
# %TEST-EXEC: test -f mydir/binary-mode/output
#
# btest should now successfully compare against the baseline:
# %TEST-EXEC: btest %INPUT
#
# And finally, verify that the updated baseline hasn't changed:
# %TEST-EXEC: printf "\00\01\02" >output
# %TEST-EXEC: diff -s output mydir/binary-mode/output
#
# Update once more, to ensure we refresh the baseline properly
# %TEST-EXEC: btest -U %INPUT
# %TEST-EXEC: diff -s output mydir/binary-mode/output

@TEST-EXEC: printf "\00\01\02" >output

# Setting a non-existent canonifier here verifies that binary mode
# indeed does not canonify.
@TEST-EXEC: TEST_DIFF_CANONIFIER=doesnotexist btest-diff --binary output

%TEST-START-FILE btest.cfg
[btest]
TmpDir = .tmp
BaselineDir = mydir
%TEST-END-FILE

--- btest-0.72/testing/tests/brief.test ---
# %TEST-EXEC: btest -b t1 t3 >out1 2>&1
# %TEST-EXEC-FAIL: btest -b t1 t2 >out2 2>&1
# %TEST-EXEC: btest-diff out1
# %TEST-EXEC: btest-diff out2

%TEST-START-FILE t1
@TEST-EXEC: exit 0
%TEST-END-FILE

%TEST-START-FILE t2
@TEST-EXEC: exit 1
%TEST-END-FILE

%TEST-START-FILE t3
@TEST-EXEC: exit 0
%TEST-END-FILE

--- btest-0.72/testing/tests/btest-cfg.test ---
# %TEST-EXEC: mv btest.cfg myfile
# %TEST-EXEC: btest -c myfile %INPUT > nopath 2>&1
# %TEST-EXEC: BTEST_CFG=myfile btest %INPUT >> nopath 2>&1
# %TEST-EXEC: BTEST_CFG=notexist btest -c myfile %INPUT >> nopath 2>&1
# %TEST-EXEC-FAIL: btest %INPUT >> nopath 2>&1
# %TEST-EXEC: btest-diff nopath
# %TEST-EXEC: mkdir z
# %TEST-EXEC: mv myfile z/btest.cfg
# %TEST-EXEC: btest -c z/btest.cfg %INPUT >> relpath 2>&1
# %TEST-EXEC: BTEST_CFG=z/btest.cfg
btest %INPUT >> relpath 2>&1 # %TEST-EXEC: btest-diff relpath # %TEST-EXEC: btest -c `pwd`/z/btest.cfg %INPUT >> abspath 2>&1 # %TEST-EXEC: BTEST_CFG=`pwd`/z/btest.cfg btest %INPUT >> abspath 2>&1 # %TEST-EXEC: btest-diff abspath @TEST-EXEC: exit 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/canonifier-cmdline.test0000644000076500000240000000130614165076445020561 0ustar00timstaff# Verify that btest-diff returns nonzero when a file differs from the baseline # when a canonifier is applied. Two types of canonifiers are tested: one that # reads input only from stdin, and one that ignores stdin when a filename is # provided as a cmd-line argument. # %TEST-EXEC: chmod +x ignore-cmdline-args # %TEST-EXEC: btest -d %INPUT %TEST-START-FILE Baseline/canonifier-cmdline/output ABC 123 DEF %TEST-END-FILE %TEST-START-FILE ignore-cmdline-args sed 's/[0-9][0-9][0-9]/XXX/' %TEST-END-FILE @TEST-EXEC: echo ABC DEF >output @TEST-EXEC-FAIL: TEST_DIFF_CANONIFIER="../../ignore-cmdline-args" btest-diff output @TEST-EXEC-FAIL: TEST_DIFF_CANONIFIER="sed 's/[0-9][0-9][0-9]/XXX/'" btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/canonifier-conversion.test0000644000076500000240000000332214072112014021307 0ustar00timstaff# This test verifies that we only canonicalize baselines once, namely # when canonicalizing new test output, and that baselines get # converted over to our header-tagged format. # # Test prep: make our canonifier executable # %TEST-EXEC: chmod 755 ./diff-double-x # # Verify that without an existing baseline, we canonify a new one. 
# %TEST-EXEC: btest -U %INPUT # %TEST-EXEC: head -1 Baseline/canonifier-conversion/output | grep -q 'Do not edit' # %TEST-EXEC: tail -1 Baseline/canonifier-conversion/output | grep xx # %TEST-EXEC: cp Baseline/canonifier-conversion/output base.1 # # For testing conversion, first create a "dated" baseline. # %TEST-EXEC: echo x >Baseline/canonifier-conversion/output # # Verify that it succeeds: # %TEST-EXEC: btest %INPUT # # Update the baseline. This should prefix btest-diff's header and canonify: # %TEST-EXEC: btest -U %INPUT # # Verify that these have happened and preserve baseline: # %TEST-EXEC: head -1 Baseline/canonifier-conversion/output | grep -q 'Do not edit' # %TEST-EXEC: tail -1 Baseline/canonifier-conversion/output | grep xx # %TEST-EXEC: cp Baseline/canonifier-conversion/output base.2 # # Verify that it still succeeds: # %TEST-EXEC: btest %INPUT # # Update the baseline again. # %TEST-EXEC: btest -U %INPUT # %TEST-EXEC: cp Baseline/canonifier-conversion/output base.3 # # Verify the updated baselines remained identical. # %TEST-EXEC: test "$(cat base.1)" = "$(cat base.2)" && test "$(cat base.2)" = "$(cat base.3)" @TEST-EXEC: echo x >output @TEST-EXEC: btest-diff output %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp [environment] TEST_DIFF_CANONIFIER=%(testbase)s/diff-double-x %TEST-END-FILE %TEST-START-FILE diff-double-x #! /usr/bin/env bash sed 's/x/xx/g' %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/canonifier-fail.test0000644000076500000240000000112114165076445020054 0ustar00timstaff# Verify that btest-diff returns nonzero when a canonifier returns nonzero for # any reason, even if the canonified result matches the baseline. 
# %TEST-EXEC: chmod +x test-canonifier # %TEST-EXEC: btest -d %INPUT %TEST-START-FILE Baseline/canonifier-fail/output ABC 123 DEF %TEST-END-FILE %TEST-START-FILE test-canonifier awk 'NF == 3 { print $1,"XXX",$3; if($2 == "000") exit 1;}' %TEST-END-FILE @TEST-EXEC: echo ABC 000 DEF >output @TEST-EXEC-FAIL: TEST_DIFF_CANONIFIER="../../test-canonifier" btest-diff output @TEST-EXEC-FAIL: TEST_DIFF_CANONIFIER="./does-not-exist" btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/canonifier.test0000644000076500000240000000056614165076445017157 0ustar00timstaff# %TEST-EXEC: chmod +x test-canonifier # %TEST-EXEC: btest -d %INPUT %TEST-START-FILE Baseline/canonifier/output ABC 123 DEF %TEST-END-FILE %TEST-START-FILE test-canonifier sed 's/[0-9][0-9][0-9]/XXX/g' %TEST-END-FILE @TEST-EXEC: echo ABC 890 DEF >output @TEST-EXEC-FAIL: btest-diff output @TEST-EXEC: TEST_DIFF_CANONIFIER="sh -c ../../test-canonifier" btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/console.test0000644000076500000240000000210114072112014016443 0ustar00timstaff# %TEST-DOC: Test the "--show-all" option. # # %TEST-REQUIRES: which script # # This test doesn't work on OpenBSD, because the "script" command lacks the # necessary options. # %TEST-REQUIRES: test `uname` != "OpenBSD" # # The following uses the "script" command for two btest invocations, in order # to verify that "btest --show-all" works correctly when btest's stdout is # a TTY. # # With one failing test, one succeeding, and one skipped, plus an additional # summary line, this makes two lines of output for the first invocation and # four for the second. 
# # %TEST-EXEC: script-command "btest t1 t2 t3" | wc -l | awk '{print $1}' >output 2>&1 # %TEST-EXEC: script-command "btest --show-all t1 t2 t3" | wc -l | awk '{print $1}' >>output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-EXEC: echo "A successful dummy test" @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo "A failing dummy test" @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-REQUIRES: false @TEST-EXEC: echo "A skipped dummy test" @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/copy-file.test0000644000076500000240000000025614072112014016701 0ustar00timstaff# %TEST-EXEC: btest %INPUT @TEST-COPY-FILE: ${ENV2}/../../Scripts/dummy-script @TEST-EXEC: test -e dummy-script @TEST-EXEC: cmp dummy-script %DIR/../../Scripts/dummy-script ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/crlf-line-terminators.test0000644000076500000240000000057614072112014021237 0ustar00timstaff# %TEST-DOC: Check that CRLF line-terminators are preserved in test files. # Note that this test file itself has CRLF line endings and .gitattributes # has an entry to explicitly say this file uses CRLF. # %TEST-EXEC: cp %INPUT input # %TEST-EXEC: btest-diff input # %TEST-EXEC: btest-diff crlfs.dat one two three %TEST-START-FILE crlfs.dat 1 2 3 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diag-all.test0000644000076500000240000000050114072112014016455 0ustar00timstaff# %TEST-EXEC: btest -D t1 t2 >output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-EXEC: echo Hello, World! @TEST-EXEC: echo Stderr output >&2 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo Hello, again! 
@TEST-EXEC: echo This is the contents of the .diag file > ${TEST_DIAGNOSTICS} %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diag-file.test0000644000076500000240000000025214072112014016627 0ustar00timstaff# %TEST-EXEC-FAIL: btest -f diag %INPUT >output 2>&1 # %TEST-EXEC: btest-diff output # %TEST-EXEC: btest-diff diag @TEST-EXEC: echo Stderr output >&2 @TEST-EXEC: exit 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diag.test0000644000076500000240000000051214072112014015711 0ustar00timstaff# %TEST-EXEC-FAIL: btest -d %INPUT 2>>raw # %TEST-EXEC: mkdir Baseline/diag # %TEST-EXEC: echo Wrong baseline >Baseline/diag/output # %TEST-EXEC-FAIL: btest -d %INPUT 2>>raw # %TEST-EXEC: cat raw | egrep -v '\+\+\+|---' >output # %TEST-EXEC: btest-diff output @TEST-EXEC: echo Hello, World! >output @TEST-EXEC: btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diff-brief.test0000644000076500000240000000100714072112014017002 0ustar00timstaff# Verify that setting TEST_DIFF_BRIEF causes btest-diff to not show the # file contents when it doesn't match the baseline (but the diff is still # shown). 
# %TEST-EXEC: mkdir -p Baseline/diff-brief # %TEST-EXEC: echo "This is the baseline" > Baseline/diff-brief/child-output # %TEST-EXEC-FAIL: btest -d %INPUT >raw 2>&1 # %TEST-EXEC: cat raw | grep -v '+++' | grep -v '\-\-\-' >output # %TEST-EXEC: btest-diff output @TEST-EXEC: echo "Hello world" > child-output @TEST-EXEC: TEST_DIFF_BRIEF=1 btest-diff child-output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diff-max-lines.test0000644000076500000240000000143714072112014017617 0ustar00timstaff# Verify that when btest-diff fails due to a file not matching a baseline, # btest-diff will not show more than the first TEST_DIFF_FILE_MAX_LINES lines # of the file. However, when there is no baseline, then the entire file is # shown. # %TEST-EXEC-FAIL: btest -d %INPUT >raw 2>&1 # %TEST-EXEC: cat raw | grep -v '+++' | grep -v '\-\-\-' >output1 # %TEST-EXEC: btest-diff output1 # %TEST-EXEC: mkdir -p Baseline/diff-max-lines # %TEST-EXEC: echo "This is the baseline" >Baseline/diff-max-lines/child-output # %TEST-EXEC-FAIL: btest -d %INPUT >raw 2>&1 # %TEST-EXEC: cat raw | grep -v '+++' | grep -v '\-\-\-' >output2 # %TEST-EXEC: btest-diff output2 @TEST-EXEC: awk 'BEGIN{for(i=1;i<=10;i++) print "Output line",i}' >child-output @TEST-EXEC: TEST_DIFF_FILE_MAX_LINES=2 btest-diff child-output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/diff.test0000644000076500000240000000055414072112014015723 0ustar00timstaff# %TEST-EXEC-FAIL: btest %INPUT # %TEST-EXEC: test ! -e mydir/diff/output # %TEST-EXEC: btest -U %INPUT # %TEST-EXEC: test ! -e Baseline # %TEST-EXEC: test -f mydir/diff/output # %TEST-EXEC: btest %INPUT @TEST-EXEC: echo Hello, World! 
>output @TEST-EXEC: btest-diff output %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = mydir %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/doc.test0000644000076500000240000000412614072112014015557 0ustar00timstaff# %TEST-DOC: Test the --documentation option and @TEST-DOC. # %TEST-EXEC: btest -R rst >rst # %TEST-EXEC: btest-diff rst # %TEST-EXEC: btest -R md >md # %TEST-EXEC: btest-diff md %TEST-START-FILE btest.cfg [btest] TestDirs = testdoc misc multipart emptydir alternatives TmpDir = .tmp BaselineDir = Baseline %TEST-END-FILE %TEST-START-FILE testdoc/singleline @TEST-DOC: This is a single-line TEST-DOC comment. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE testdoc/testdocmissing # This test does not have a TEST-DOC keyword. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE testdoc/multiline @TEST-DOC: This is a multi-line TEST-DOC comment. @TEST-DOC: This is the 2nd line. @TEST-EXEC: exit 0 @TEST-DOC: This is the 3rd line. %TEST-END-FILE %TEST-START-FILE testdoc/badtestdoc @TEST-DOC This comment will not appear in output (missing colon after TEST-DOC). @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE misc/requires.test # This test will not run due to TEST-REQUIRES, but is always documented. @TEST-REQUIRES: false @TEST-DOC: This is a skipped test. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE misc/subdir/subtest.test @TEST-DOC: This test is in a subdirectory. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE multipart/multi.test # Test if TEST-DOC works for a multi-part test. @TEST-DOC: 1st test part. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE multipart/multi.test#2 @TEST-DOC: 2nd test part. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE multipart/multi.test#3 @TEST-DOC: 3rd test part. @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE emptydir/ignoretest @TEST-DOC: This file is ignored (and does not appear in documentation output). 
@TEST-IGNORE @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE alternatives/myalternate @TEST-DOC: This test runs only for the given alternative, but it will always @TEST-DOC: appear in the documentation output. @TEST-ALTERNATIVE: myalternative @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE alternatives/default @TEST-DOC: This test runs if no alternative is given. @TEST-ALTERNATIVE: default @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1649434079.0 btest-0.72/testing/tests/duplicate-selection.test0000644000076500000240000000037214224056737020770 0ustar00timstaff# %TEST-DOC: Validates that if the same test is requested multiple times, it still only runs once. # # %TEST-EXEC: btest -j t t t >>out 2>&1 # %TEST-EXEC: test $(grep -c 't \.\.\. ok' out) = 1 %TEST-START-FILE t @TEST-EXEC: echo test %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/environment.test0000644000076500000240000000175514072112014017363 0ustar00timstaff# %TEST-EXEC: btest -d %INPUT # %TEST-EXEC: btest -U %INPUT # %TEST-EXEC: btest-diff output @TEST-REQUIRES: test -n "${ENV2}" @TEST-EXEC-FAIL: test -z "${ENV2}" @TEST-EXEC: echo ${ENV1} >>../../output @TEST-EXEC: echo ${ENV2} >1 @TEST-EXEC: set >>1 @TEST-EXEC: test "${ENV2}" = `cd ../.. 
&& pwd` && echo "testbase is correct" >>../../output @TEST-EXEC: echo ${ENV3} >>../../output @TEST-EXEC: test "${ENV4}" = "(${TEST_BASE}=${TEST_BASE})" && echo "macro expansion within backticks is correct" >>../../output @TEST-EXEC: test "${ORIGPATH}" = "${PATH}" && echo "default_path is correct" >>../../output @TEST-EXEC: echo ${TEST_DIAGNOSTICS} | strip-test-base >>../../output @TEST-EXEC: echo ${TEST_MODE} >>../../output @TEST-EXEC: echo ${TEST_BASELINE} | strip-test-base >>../../output @TEST-EXEC: echo ${TEST_NAME} >>../../output @TEST-EXEC: echo ${TEST_VERBOSE} | strip-test-base >>../../output @TEST-EXEC: echo ${TEST_BASE} | strip-test-base >>../../output @TEST-EXEC: echo ${TEST_PART} >>../../output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/exit-codes.test0000644000076500000240000000221014072112014017046 0ustar00timstaff# %TEST-EXEC-FAIL: btest t1 t2 t3 >>out2 2>&1 # %TEST-EXEC-FAIL: btest t4 t5 t6 >>out2 2>&1 # %TEST-EXEC-FAIL: btest t7 t8 t9 >>out2 2>&1 # %TEST-EXEC: btest-diff out1 # %TEST-EXEC: btest-diff out2 %TEST-START-FILE t1 @TEST-EXEC: echo 1.1 >>../../out1 @TEST-EXEC: exit 100 @TEST-EXEC: echo 1.2 >>../../out1 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo 2.1 >>../../out1 @TEST-EXEC: exit 200 @TEST-EXEC: echo 2.2 >>../../out1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: echo 3.1 >>../../out1 %TEST-END-FILE %TEST-START-FILE t4 @TEST-EXEC: echo 4.1 >>../../out1 @TEST-EXEC: exit 0 @TEST-EXEC: echo 4.2 >>../../out1 %TEST-END-FILE %TEST-START-FILE t5 @TEST-EXEC: echo 5.1 >>../../out1 @TEST-EXEC: exit 1 @TEST-EXEC: echo 5.2 >>../../out1 %TEST-END-FILE %TEST-START-FILE t6 @TEST-EXEC: echo 6.1 >>../../out1 %TEST-END-FILE %TEST-START-FILE t7 @TEST-EXEC: echo 7.1 >>../../out1 @TEST-EXEC-FAIL: exit 1 @TEST-EXEC: echo 7.2 >>../../out1 %TEST-END-FILE %TEST-START-FILE t8 @TEST-EXEC: echo 8.1 >>../../out1 @TEST-EXEC-FAIL: exit 0 @TEST-EXEC: echo 8.2 >>../../out1 %TEST-END-FILE 
%TEST-START-FILE t9 @TEST-EXEC: echo 9.1 >>../../out1 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/finalizer.test0000644000076500000240000000034014072112014016767 0ustar00timstaff# %TEST-EXEC: btest -t %INPUT # %TEST-EXEC: test -f finalized @TEST-EXEC: rm -f ../../finalized %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = Baseline Finalizer = touch ../../finalized %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/groups.test0000644000076500000240000000140414165076445016351 0ustar00timstaff# %TEST-EXEC: btest -g G1 t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest -g G1,G2 t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest -g - t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest -g G1,- t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest --groups=-G3 t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest --groups=-G3,-G1 t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest t1 t2 t3 t4 t5 >>output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-GROUP: G1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-GROUP: G1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t3 @TEST-GROUP: G2 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t4 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t5 @TEST-GROUP: G3 @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/ignore.test0000644000076500000240000000160114072112014016270 0ustar00timstaff# %TEST-EXEC: btest -t 2>&1 | sort >output # %TEST-EXEC: btest-diff output @TEST-EXEC: test -f ../../initialized %TEST-START-FILE all/t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE all/t2 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE all/sub/t3 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE all/not-this-one/t4 @TEST-EXEC: exit 0 
%TEST-END-FILE %TEST-START-FILE all/sub/neither-this-one/t5 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE all/not-this-one.txt @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE all/skip @TEST-EXEC: exit 0 @TEST-IGNORE %TEST-END-FILE %TEST-START-FILE all/sub2/.btest-ignore %TEST-END-FILE %TEST-START-FILE all/sub2/skip-this @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = Baseline TestDirs = all IgnoreDirs = not-this-one sub/neither-this-one IgnoreFiles = *.txt %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/initializer.test0000644000076500000240000000035214072112014017332 0ustar00timstaff# %TEST-EXEC: btest -t %INPUT # %TEST-EXEC: test -f initialized @TEST-EXEC: test -f ../../initialized %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = Baseline Initializer = touch ../../initialized %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/known-failure-and-success.btest0000644000076500000240000000025714165076445022170 0ustar00timstaff# %TEST-EXEC: btest %INPUT >output 2>&1 # %TEST-EXEC: btest-diff output @TEST-EXEC: echo Hello, World! @TEST-EXEC: exit 1 @TEST-KNOWN-FAILURE: This test is expected to fail. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/known-failure-succeeds.btest0000644000076500000240000000027514072112014021532 0ustar00timstaff# %TEST-EXEC: btest %INPUT >output 2>&1 # %TEST-EXEC: btest-diff output @TEST-EXEC: echo Hello, World! @TEST-EXEC: exit 0 @TEST-KNOWN-FAILURE: This test is expected to fail, but succeeds. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/known-failure.btest0000644000076500000240000000041214165076445017753 0ustar00timstaff# %TEST-EXEC-FAIL: btest -D %INPUT test2 >output 2>&1 # %TEST-EXEC: btest-diff output @TEST-EXEC: echo Hello, World! @TEST-EXEC: exit 1 @TEST-KNOWN-FAILURE: This test is expected to fail. # %TEST-START-FILE test2 @TEST-EXEC: echo Hello, World! @TEST-EXEC: exit 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/list.test0000644000076500000240000000040314165076445016003 0ustar00timstaff# %TEST-EXEC: btest -l t1 t2 t3 >>out # %TEST-EXEC: btest --list >>out # %TEST-EXEC: btest-diff out %TEST-START-FILE t1 @TEST-EXEC: true %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: true %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: true %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/macros.test0000644000076500000240000000033714072112014016276 0ustar00timstaff# %TEST-EXEC: btest -d %INPUT >output 2>&1 # %TEST-EXEC: btest-diff output @TEST-REQUIRES: test -d %DIR @TEST-REQUIRES: test -f %INPUT @TEST-EXEC: cmp %DIR/macros.test %INPUT @TEST-EXEC-FAIL: ! cmp %DIR/macros.test %INPUT ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/measure-time-options.test0000644000076500000240000000317314165076445021125 0ustar00timstaff# %TEST-REQUIRES: test "`uname`" = "Linux" # %TEST-REQUIRES: which perf # %TEST-REQUIRES: perf stat -o /dev/null true 2> /dev/null # %TEST-REQUIRES: perf stat -x " " -e instructions true 2>&1 | grep -vq "not supported" # Tests of TimingBaselineDir # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: test '!' -e Baseline/_Timing # %TEST-EXEC: test '!' 
-e mytimings # %TEST-EXEC: btest -DT %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: test '!' -e Baseline/_Timing # %TEST-EXEC: test -d mytimings # Tests of TimingDeltaPerc and PerfPath (for all of these, the runtime is 1000) # %TEST-EXEC: echo measure-time-options 897 >`echo mytimings/*` # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: echo measure-time-options 895 >`echo mytimings/*` # %TEST-EXEC-FAIL: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: echo measure-time-options 1128 >`echo mytimings/*` # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: echo measure-time-options 1131 >`echo mytimings/*` # %TEST-EXEC-FAIL: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: cat output | sed 's/ ([-+%.0-9]*)/ (+xx.x%)/g' >tmp # %TEST-EXEC: mv tmp output # %TEST-EXEC: btest-diff output @TEST-MEASURE-TIME @TEST-EXEC: awk 'BEGIN { for ( i = 1; i < 100000; i++ ) x += i; print x; }; done' /dev/null # %TEST-REQUIRES: perf stat -x " " -e instructions true 2>&1 | grep -vq "not supported" # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: test '!' 
-e Baseline/_Timing # %TEST-EXEC: btest -DT %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: test -d Baseline/_Timing # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: echo measure-time 42 >`echo Baseline/_Timing/*` # %TEST-EXEC-FAIL: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: btest -DT %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: btest -D %INPUT >>output 2>&1 # %TEST-EXEC: echo ----- >>output # %TEST-EXEC: cat output | sed 's/ ([-+%.0-9]*)/ (+xx.x%)/g' >tmp # %TEST-EXEC: mv tmp output # %TEST-EXEC: btest-diff output @TEST-MEASURE-TIME @TEST-EXEC: awk 'BEGIN { for ( i = 1; i < 100000; i++ ) x += i; print x; }; done' baseline1/t1/output # %TEST-EXEC: echo 2 >baseline2/t2/output # %TEST-EXEC: echo 3 >baseline3/t3/output # %TEST-EXEC: echo XXX >baseline3/t4/output # # %TEST-EXEC: btest -d t1 # %TEST-EXEC: btest -d t2 # %TEST-EXEC: btest -d t3 # # %TEST-EXEC-FAIL: btest -d -f fail.tmp t4 # %TEST-EXEC: cat fail.tmp | grep -v '\(---\|+++\)' >fail.log # %TEST-EXEC: btest-diff fail.log # %TEST-EXEC: btest -U t4 # %TEST-EXEC: test -f baseline1/t4/output # %TEST-EXEC: btest t4 # # %TEST-EXEC: test ! 
-d Baseline %TEST-START-FILE t1 @TEST-EXEC: echo 1 >output @TEST-EXEC: btest-diff output %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo 2 >output @TEST-EXEC: btest-diff output %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: echo 3 >output @TEST-EXEC: btest-diff output %TEST-END-FILE %TEST-START-FILE t4 @TEST-EXEC: echo 4 >output @TEST-EXEC: btest-diff output %TEST-END-FILE %TEST-START-FILE btest.cfg [btest] BaselineDir = baseline1:baseline2:baseline3 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/parts-error-part.test0000644000076500000240000000025014165076445020254 0ustar00timstaff# %TEST-EXEC-FAIL: btest test#3 >output 2>&1 # %TEST-EXEC: btest-diff output # %TEST-START-FILE test @TEST-EXEC: echo "Hello, world!." >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/parts-error-start-next.test0000644000076500000240000000043514165076445021424 0ustar00timstaff# %TEST-EXEC-FAIL: btest test >output 2>&1 # %TEST-EXEC: btest-diff output # %TEST-START-FILE test @TEST-EXEC: echo "Hello, world!." >>../../output @TEST-START-NEXT # %TEST-END-FILE # %TEST-START-FILE test#2 @TEST-EXEC: echo "Hello, world!. Again" >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/parts-glob.test0000644000076500000240000000076514072112014017071 0ustar00timstaff# %TEST-EXEC: btest t/test # %TEST-EXEC: btest t.test # %TEST-EXEC: btest-diff output # %TEST-START-FILE btest.cfg [btest] TestDirs = t TmpDir = .tmp BaselineDir = Baseline # %TEST-END-FILE # %TEST-START-FILE t/test @TEST-EXEC: echo "Hello, world!." >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test#2 @TEST-EXEC: echo "Hello, world! Again." >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test#3 @TEST-EXEC: echo "Hello, world! Again. Again." 
>>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/parts-initializer-finalizer.test0000644000076500000240000000146514165076445022474 0ustar00timstaff# This test verifies the invocation order of initializers, finalizers, and their # part-specific equivalents. # %TEST-EXEC: btest t/test # %TEST-EXEC: btest-diff output # %TEST-START-FILE btest.cfg [btest] TestDirs = t TmpDir = .tmp BaselineDir = Baseline Initializer = echo Initializer $TEST_PART >>../../output Finalizer = echo Finalizer $TEST_PART >>../../output PartInitializer = echo PartInitializer $TEST_PART >>../../output PartFinalizer = echo PartFinalizer $TEST_PART >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test @TEST-EXEC: echo "Hello, world!." >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test#2 @TEST-EXEC: echo "Hello, world! Again." >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test#3 @TEST-EXEC: echo "Hello, world! Again. Again." >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/parts-skipping.tests0000644000076500000240000000054314072112014020147 0ustar00timstaff# %TEST-EXEC: btest test # %TEST-EXEC: btest-diff output # %TEST-START-FILE test @TEST-EXEC: echo "Hello, world!." >>../../output # %TEST-END-FILE # %TEST-START-FILE test#67 @TEST-EXEC: echo "Hello, world! Again." >>../../output # %TEST-END-FILE # %TEST-START-FILE test#89 @TEST-EXEC: echo "Hello, world! Again. Again." >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/parts-teardown.test0000644000076500000240000000144114165076445020005 0ustar00timstaff# This test consists of three parts, with the second one triggering an error. # We define a PartTeardown that should kick in after the first and second # parts. 
Since the third part never runs, no point in calling its teardown. # %TEST-EXEC-FAIL: btest t/test # %TEST-EXEC: btest-diff output # %TEST-START-FILE btest.cfg [btest] TestDirs = t TmpDir = .tmp BaselineDir = Baseline PartTeardown = echo Teardown $TEST_PART >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test @TEST-EXEC: echo "Hello, world!." >>../../output # %TEST-END-FILE # %TEST-START-FILE t/test#2 @TEST-EXEC: echo "Hello, world! Again, with error." >>../../output && false # %TEST-END-FILE # %TEST-START-FILE t/test#3 @TEST-EXEC: echo "Hello, world! Again, but you won't see this." >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/parts.tests0000644000076500000240000000072414072112014016326 0ustar00timstaff# %TEST-EXEC: btest test # %TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff output # %TEST-START-FILE test @TEST-EXEC: echo "${TEST_PART} Hello, world! (%INPUT)" >>../../output # %TEST-END-FILE # %TEST-START-FILE test#2 @TEST-EXEC: echo "${TEST_PART} Hello, world! Again. (%INPUT)" >>../../output # %TEST-END-FILE # %TEST-START-FILE test#3 @TEST-EXEC: echo "${TEST_PART} Hello, world! Again. Again. 
(%INPUT)" >>../../output # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/ports.test0000644000076500000240000000041314072112014016154 0ustar00timstaff # %TEST-EXEC: btest -t %INPUT # %TEST-EXEC: test 3 -eq `wc -l .tmp/ports/output | awk '{print $1}'` @TEST-PORT: MYPORT1 @TEST-PORT: MYPORT2 @TEST-PORT: MYPORT3 @TEST-EXEC: echo $MYPORT1 >>output @TEST-EXEC: echo $MYPORT2 >>output @TEST-EXEC: echo $MYPORT3 >>output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/progress-back-to-back.test0000644000076500000240000000107114072112014021066 0ustar00timstaff# %TEST-DOC: Ensures that "btest-progress" functions correctly. # %TEST-EXEC: btest %INPUT >output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -bD %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -v %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -q %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: TEST_DIFF_CANONIFIER=%DIR/../Scripts/strip-iso8601-date btest-diff output # @TEST-EXEC: bash %INPUT >&2 btest-progress Foo 1 btest-progress Foo 2 btest-progress Foo 3 btest-progress Foo 4 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/progress.test0000644000076500000240000000107714165076445016704 0ustar00timstaff# %TEST-DOC: Ensures that "btest-progress" functions correctly. 
# %TEST-EXEC: btest %INPUT >output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -bD %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -v %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: btest -q %INPUT >>output 2>&1 # %TEST-EXEC: echo --- >>output # %TEST-EXEC: TEST_DIFF_CANONIFIER=%DIR/../Scripts/strip-iso8601-date btest-diff output # @TEST-EXEC: bash %INPUT >&2 btest-progress Foo 1 btest-progress -q Foo 2 btest-progress Foo 3 btest-progress -T Foo 4 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/quiet.test0000644000076500000240000000046614072112014016144 0ustar00timstaff# %TEST-EXEC: btest -q t1 t3 >out1 2>&1 # %TEST-EXEC-FAIL: btest -q t1 t2 >out2 2>&1 # %TEST-EXEC: btest-diff out1 # %TEST-EXEC: btest-diff out2 %TEST-START-FILE t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/requires-with-start-next.test0000644000076500000240000000066114072112014021731 0ustar00timstaff# %TEST-EXEC: btest t1 t2 >output 2>&1 # %TEST-EXEC: btest-diff output # # %TEST-DOC: Check that TEST-REQUIRES is applied to tests replicated with TEST-START-NEXT # We expect two succeeding tests for t1.
%TEST-START-FILE t1 @TEST-REQUIRES: exit 0 @TEST-EXEC: echo 0 @TEST-START-NEXT %TEST-END-FILE # We expect two skipped tests for t2. %TEST-START-FILE t2 @TEST-REQUIRES: exit 1 @TEST-EXEC: echo 0 @TEST-START-NEXT %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/requires.test0000644000076500000240000000067714072112014016660 0ustar00timstaff# %TEST-EXEC: btest t1 t2 t3 t4 >output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-REQUIRES: exit 0 @TEST-EXEC: echo Foo1 %TEST-END-FILE %TEST-START-FILE t2 @TEST-REQUIRES: exit 1 @TEST-EXEC: echo Foo2 %TEST-END-FILE %TEST-START-FILE t3 @TEST-REQUIRES: exit 0 @TEST-REQUIRES: exit 1 @TEST-EXEC: echo Foo3 %TEST-END-FILE %TEST-START-FILE t4 @TEST-REQUIRES: exit 0 @TEST-REQUIRES: exit 0 @TEST-EXEC: echo Foo4 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/rerun.test0000644000076500000240000000043714072112014016146 0ustar00timstaff# %TEST-EXEC-FAIL: btest t1 t2 t3 >>output 2>&1 # %TEST-EXEC-FAIL: btest -r >>output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1654277994.8019085 btest-0.72/testing/tests/sphinx/0000755000076500000240000000000014246443553015441 5ustar00timstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/sphinx/rst-cmd.sh0000755000076500000240000000070214165076445017352 0ustar00timstaff# %TEST-EXEC: bash %INPUT %TEST-START-FILE file.txt Example file.
Line 2 %TEST-END-FILE unset TEST_NAME btest-rst-cmd echo Hello >>output btest-rst-cmd -o echo "Hello 2, no command" >>output btest-rst-cmd -c "Different command" echo "Hello 3, no command" >>output btest-rst-cmd -d echo "Hello 4, no output" >>output btest-rst-cmd -f 'tr e X' echo "Hello 5, filter" >>output btest-rst-cmd -r file.txt echo "Hello 6, file" >>output btest-diff output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/sphinx/run-sphinx0000644000076500000240000000026214165076445017501 0ustar00timstaff# %TEST-REQUIRES: which sphinx-build # # %TEST-EXEC: cp -r %DIR/../../../examples/sphinx/* . # %TEST-EXEC: make clean && make text # %TEST-EXEC: btest-diff _build/text/index.txt ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/start-file.test0000644000076500000240000000045514165076445017111 0ustar00timstaff# %TEST-EXEC: btest %INPUT # %TEST-EXEC: btest-diff output # @TEST-EXEC: awk -f %INPUT >../../output # @TEST-EXEC: awk -f %INPUT >../../output { lines += 1; } END { print lines; } @TEST-START-FILE foo.dat 1 2 3 @TEST-END-FILE @TEST-START-FILE bar.dat A B C D @TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/start-next-dir.test0000644000076500000240000000033614072112014017676 0ustar00timstaff# %TEST-EXEC: btest mydir/mytest # %TEST-EXEC: btest-diff output %TEST-START-FILE mydir/mytest @TEST-EXEC: echo $(basename $(dirname %DIR))/$(basename %DIR) >>../../output @TEST-START-NEXT @TEST-START-NEXT %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/start-next-naming.test0000644000076500000240000000060414165076445020413 0ustar00timstaff# %TEST-EXEC: btest -d %INPUT # # %TEST-START-FILE Baseline/start-next-naming/output X 1 # %TEST-END-FILE # %TEST-START-FILE 
Baseline/start-next-naming-2/output X 2 # %TEST-END-FILE # %TEST-START-FILE Baseline/start-next-naming-3/output X 3 # %TEST-END-FILE @TEST-EXEC: cat %INPUT | grep '^X.[0-9]' >output @TEST-EXEC: btest-diff output X 1 # @TEST-START-NEXT X 2 # @TEST-START-NEXT X 3 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/start-next.test0000644000076500000240000000036514072112014017124 0ustar00timstaff# %TEST-EXEC: btest %INPUT # %TEST-EXEC: btest-diff output @TEST-EXEC: cat %INPUT | wc -c | awk '{print $1}' >>../../output This is the first test input in this file. # @TEST-START-NEXT ... and the second. # @TEST-START-NEXT ... and the third. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/statefile-sorted.test0000644000076500000240000000074014165076445020310 0ustar00timstaff# %TEST-EXEC-FAIL: btest -j t1 t2 # %TEST-EXEC: btest-diff mystate # # %TEST-DOC: Tests that the StateFile is always sorted. # This test fails last (and also ends up in the list of failed tests last), but # should be the first to be listed in the statefile. # %TEST-START-FILE t1 # @TEST-EXEC: sleep 1 && exit 1 # %TEST-END-FILE # %TEST-START-FILE t2 # @TEST-EXEC: exit 1 # %TEST-END-FILE # %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp StateFile = mystate # %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/statefile.test0000644000076500000240000000104114072112014016763 0ustar00timstaff# %TEST-EXEC: btest t1 # %TEST-EXEC: test !
-e .btest.failed.dat # %TEST-EXEC: test -f mystate # %TEST-EXEC: mv mystate mystate1 # %TEST-EXEC: btest-diff mystate1 # %TEST-EXEC-FAIL: btest t1 t2 t3 # %TEST-EXEC: mv mystate mystate2 # %TEST-EXEC: btest-diff mystate2 %TEST-START-FILE t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = Baseline StateFile = mystate %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/teardown.test0000644000076500000240000000237514165076445016655 0ustar00timstaff# This test verifies the basic properties of teardowns: they get called after # commands regardless of their outcome, they receive TEST_FAILED and # TEST_LAST_RETCODE environment variables, and they can fail otherwise # successful tests. # Succeeding tests: teardown runs # %TEST-EXEC: btest -t tests/success # Failing test: teardown runs, run fails # %TEST-EXEC-FAIL: btest -t tests/failure # Succeeding test: teardown introduces failure, run fails # %TEST-EXEC-FAIL: btest -c btest.failing-teardown.cfg -t tests/success # %TEST-EXEC: btest-diff output %TEST-START-FILE btest.cfg [btest] TestDirs = tests TmpDir = .tmp BaselineDir = Baseline Teardown = echo "Teardown $TEST_NAME $TEST_FAILED $TEST_LAST_RETCODE" >>../../output %TEST-END-FILE %TEST-START-FILE btest.failing-teardown.cfg [btest] TestDirs = tests TmpDir = .tmp BaselineDir = Baseline Teardown = echo "Teardown $TEST_NAME $TEST_FAILED $TEST_LAST_RETCODE (failing now)" >>../../output && false %TEST-END-FILE # %TEST-START-FILE tests/success @TEST-EXEC: echo "success" >>../../output # %TEST-END-FILE # %TEST-START-FILE tests/failure @TEST-EXEC: echo "success" >>../../output @TEST-EXEC: echo "failure" >>../../output && exit 42 @TEST-EXEC: echo "not reached" >>../../output # %TEST-END-FILE
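# The teardown.test above relies on btest exporting TEST_NAME, TEST_FAILED, and
# TEST_LAST_RETCODE into the environment of the configured Teardown command.
# That contract can be sketched standalone; run_teardown below is a hypothetical
# stand-in for btest's invocation, not part of btest itself.

```shell
#!/bin/sh
# Hypothetical stand-in for how btest might invoke the configured
# Teardown command: the three variables are placed in its environment.
run_teardown() {
    TEST_NAME="$1" TEST_FAILED="$2" TEST_LAST_RETCODE="$3" \
        sh -c 'echo "Teardown $TEST_NAME $TEST_FAILED $TEST_LAST_RETCODE"'
}

run_teardown tests/success 0 0    # all commands succeeded
run_teardown tests/failure 1 42   # last command exited with status 42
# prints:
# Teardown tests/success 0 0
# Teardown tests/failure 1 42
```

# As in the test's btest.failing-teardown.cfg, a teardown that itself exits
# nonzero turns an otherwise successful test into a failure.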
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/test-base.test0000644000076500000240000000034314072112014016676 0ustar00timstaff# %TEST-EXEC: mkdir -p x/y/z # %TEST-EXEC: mv %INPUT x/y/z/new-test # %TEST-EXEC: BTEST_TEST_BASE=x/y btest z/new-test # %TEST-EXEC: BTEST_TEST_BASE=`pwd`/x/y btest z/new-test @TEST-EXEC: echo Hello, World! @TEST-EXEC: exit 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/testdirs.test0000644000076500000240000000057714072112014016661 0ustar00timstaff# %TEST-EXEC: btest >out1 2>&1 # %TEST-EXEC: btest %INPUT >out2 2>&1 # %TEST-EXEC: btest-diff out1 # %TEST-EXEC: btest-diff out2 @TEST-EXEC: exit 0 %TEST-START-FILE d1/t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE d2/t2.test @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp BaselineDir = Baseline TestDirs = d1 d2 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/threads.test0000644000076500000240000000317614072112014016450 0ustar00timstaff# This test verifies test case parallelization and control over that # parallelization via serializers. # # %TEST-EXEC: chmod +x normalize-output # # 5 tests with 5 threads parallelize completely, but tests 4 and 5 are # serialized, so must end up in a single thread. The following groups # all test numbers run by a given thread on a single line, and confirms # that one of them includes 4, then 5. # %TEST-EXEC: btest -j 5 t1 t2 t3 t4 t5 2>&1 | ./normalize-output | grep "4.*5" | sed "s/[0-36-9] //" >output.j5 # # Single-thread operation is the default, so "-j 1" should yield same output. 
# %TEST-EXEC: btest -j 1 t1 t2 t3 t4 t5 2>&1 | cat >output.j1 # %TEST-EXEC: btest -j 0 t1 t2 t3 t4 t5 2>&1 | cat >output.j0 # # %TEST-EXEC: btest-diff output.j5 # %TEST-EXEC: btest-diff output.j1 # %TEST-EXEC: btest-diff output.j0 # In multithreaded output, this looks for lines with the thread-number # prefix (e.g. "[#2] t4 ... ok"), transforms them into lines separating # thread and test numbers by whitespace (e.g. "test 4 thread 2"), groups # tests run by the same thread into the same line, and sorts this. %TEST-START-FILE normalize-output grep '\#' | \ sed 's/.#\([0-9]\). .\([0-9]\).*/test \2 thread \1/g' | \ awk '{t[$4] = t[$4] " " $2} END{ for ( i in t ) print t[i];}' | \ sort %TEST-END-FILE %TEST-START-FILE t1 @TEST-EXEC: echo t1 >output %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo t2 >output %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: echo t3 >output %TEST-END-FILE %TEST-START-FILE t4 @TEST-SERIALIZE: Foo @TEST-EXEC: echo t4 >output %TEST-END-FILE %TEST-START-FILE t5 @TEST-SERIALIZE: Foo @TEST-EXEC: echo t5 >output %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1641315621.0 btest-0.72/testing/tests/tmps.test0000644000076500000240000000015714165076445016021 0ustar00timstaff# %TEST-EXEC: btest -t %INPUT # %TEST-EXEC: test -f .tmp/tmps/output @TEST-EXEC: echo "Hello, World!" 
>output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/tracing.test0000644000076500000240000000064114072112014016437 0ustar00timstaff# %TEST-EXEC-FAIL: btest -d --trace-file=trace.json t1 t2 t3 # %TEST-EXEC: cat trace.json | python -c 'import json, sys; xs = json.load(sys.stdin); print(len(xs)); print(sorted([str(x) for x in xs[0].keys()]))' > output # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: exit 0 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/unstable-dir.test0000644000076500000240000000154014072112014017400 0ustar00timstaff# %TEST-EXEC: btest -t -z 4 mydir/test1 >output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE Baseline/mydir.test1/test1-output ran more ran more ran more ran more %TEST-END-FILE %TEST-START-FILE Baseline/mydir.test1/test1-dir-output tests.unstable-dir/mydir tests.unstable-dir/mydir tests.unstable-dir/mydir tests.unstable-dir/mydir %TEST-END-FILE %TEST-START-FILE mydir/test1 @TEST-START-FILE single-output1 ran @TEST-END-FILE @TEST-START-FILE single-output2 more @TEST-END-FILE @TEST-EXEC: echo $(basename $(dirname %DIR))/$(basename %DIR) >> ../../persist-dirs @TEST-EXEC: cat single-output1 >> ../../persist @TEST-EXEC: cat single-output2 >> ../../persist @TEST-EXEC: cat ../../persist > test1-output @TEST-EXEC: cat ../../persist-dirs > test1-dir-output @TEST-EXEC: btest-diff test1-output @TEST-EXEC: btest-diff test1-dir-output %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/unstable.test0000644000076500000240000000073414072112014016630 0ustar00timstaff# %TEST-EXEC: btest -z 4 test1 >output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE Baseline/test1/output ran 
more ran more ran more ran more %TEST-END-FILE %TEST-START-FILE test1 @TEST-START-FILE single-output1 ran @TEST-END-FILE @TEST-START-FILE single-output2 more @TEST-END-FILE @TEST-EXEC: cat single-output1 >> ../../persist @TEST-EXEC: cat single-output2 >> ../../persist @TEST-EXEC: cat ../../persist > output @TEST-EXEC: btest-diff output %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/verbose.test0000644000076500000240000000044314072112014016455 0ustar00timstaff# %TEST-EXEC: btest -v t1 t2 >output 2>&1 # %TEST-EXEC: btest-diff output %TEST-START-FILE t1 @TEST-EXEC: echo "Hello, World!" %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: echo "This is the contents of the .verbose file" > ${TEST_VERBOSE} @TEST-EXEC: echo "Hello, World!" %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/versioning.test0000644000076500000240000000062514072112014017175 0ustar00timstaff# This test verifies that btest correctly errors out when the minimum # version requested in the config file is greater than our current one. # # %TEST-EXEC-FAIL: btest -c btest.cfg 2>output.tmp # %TEST-EXEC: cat output.tmp | sed 's/this is .*Please/this is XXX. 
Please/' >output # %TEST-EXEC: btest-diff output %TEST-START-FILE btest.cfg [btest] TmpDir = .tmp Minversion = 99999.99 %TEST-END-FILE ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1625854988.0 btest-0.72/testing/tests/xml.test0000644000076500000240000000130514072112014015606 0ustar00timstaff# %TEST-EXEC-FAIL: btest -d -x output.raw.xml t1 t2 t3 # %TEST-EXEC: cat output.raw.xml | sed 's/hostname[^"]*"[^"]*"/XXX/g' | sed 's/time[^"]*"[^"]*"/XXX/g' | sed '/^$/d' | sed "s/> />~/" | tr '~' '\n' | sed 's/^[ ]*//' >output.xml # %TEST-EXEC: btest-diff output.xml # %TEST-EXEC-FAIL: btest -d -x output.raw.xml t1 t2 t3 # %TEST-EXEC: cat output.raw.xml | sed 's/hostname[^"]*"[^"]*"/XXX/g' | sed 's/time[^"]*"[^"]*"/XXX/g' | sed '/^$/d' | sed "s/> />~/" | tr '~' '\n' | sed 's/^[ ]*//' >output-j2.xml # %TEST-EXEC: btest-diff output-j2.xml %TEST-START-FILE t1 @TEST-EXEC: exit 0 %TEST-END-FILE %TEST-START-FILE t2 @TEST-EXEC: exit 1 %TEST-END-FILE %TEST-START-FILE t3 @TEST-EXEC: exit 0 %TEST-END-FILE
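# The sed pipeline in xml.test canonicalizes the JUnit-style XML that
# "btest -x" writes by blanking out run-dependent hostname and time
# attributes. A minimal sketch of that substitution step, using an
# invented sample line (the attribute values are illustrative only):

```shell
#!/bin/sh
# Hypothetical sample of a testsuite element as btest -x might emit it.
raw='<testsuite hostname="host1" timestamp="2021-07-09T00:00:00">'

# Same canonifier substitutions as in xml.test: replace any
# hostname="..." or time*="..." attribute with XXX so the output is
# stable across hosts and runs.
echo "$raw" \
    | sed 's/hostname[^"]*"[^"]*"/XXX/g' \
    | sed 's/time[^"]*"[^"]*"/XXX/g'
# prints: <testsuite XXX XXX>
```

# The remaining sed/tr stages in the test only reflow the XML onto one
# element per line so btest-diff compares it deterministically.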