===== doit-0.30.3/.coveragerc =====

[run]
source = doit, tests

===== doit-0.30.3/.gitignore =====

*.pyc
.doit.db
doit.egg-info
.coverage
.cache
dist
MANIFEST.in
revision.txt
tests/data/*
doc/_build
doc/tutorial/*.o
doc/tutorial/*.in
doc/tutorial/*.out
doc/tutorial/file*

===== doit-0.30.3/.travis.yml =====

language: python
python:
  - "3.3"
  - "3.4"
  - "3.5"
  - "3.6"
# - "pypy3"  # pypy3 implements py3.2, not supported anymore
sudo: false
addons:
  apt:
    packages:
      - strace
before_install:
  - pip install -U pip setuptools
install:
  - pip install .
  - pip install -r dev_requirements.txt python-coveralls
branches:
  only:
    - master
    - test
script:
  - doit pyflakes
  - py.test --ignore-flaky
  - if [[ $TRAVIS_PYTHON_VERSION == '3.5' ]]; then doit coverage; fi
after_success:
  - if [[ $TRAVIS_PYTHON_VERSION == '3.5' ]]; then coveralls; fi
notifications:
  email:
    on_success: change
    on_failure: change

===== doit-0.30.3/AUTHORS =====

(in chronological order)

* Eduardo Schettino - schettino72 gmail com
* Javier Collado - https://launchpad.net/~javier.collado
* Philipp Tölke - https://launchpad.net/~toelke+lp
* Daniel Hjelm - doit-d hjelm eu
* Damiro - https://launchpad.net/~damiro
* Charlie Guo - https://launchpad.net/~charlie.guo
* Michael Gliwinski - https://launchpad.net/~tzeentch-gm
* Vadim Fint - mocksoul gmail com
* Thomas Kluyver - https://bitbucket.org/takluyver
* Rob Beagrie - http://rob.beagrie.com
* Miguel Angel Garcia - http://magmax.org
* Roland Puntaier - roland puntaier gmail com
* Vincent Férotin - vincent ferotin gmail com
* Chris Warrick - Kwpolska - http://chriswarrick.com/
* Ronan Le Gallic - rolegic gmail com
* Simon Conseil - contact saimon org
* Kostis Anagnostopoulos - ankostis gmail com
* Randall Schwager - schwager hsph harvard edu
* Pavel Platto - hinidu gmail com
* Gerlad Storer - https://github.com/gstorer
* Simon Mutch - https://github.com/smutch
* Michael Milton - https://github.com/tmiguelt

===== doit-0.30.3/CHANGES =====

=======
Changes
=======

0.30.3 (*2017-02-20*)
=====================

- Revert usage of setuptools environment markers (feature too new)

0.30.2 (*2017-02-16*)
=====================

- Fix dependency on `pathlib` from PyPI

0.30.1 (*2017-02-16*)
=====================

- Fix GH-#159: KeyError on `doit list --status` when missing file dependency
- add python 3.6 support

0.30.0 (*2016-11-22*)
=====================

- BACKWARD INCOMPATIBLE: #112 drop python2 compatibility
- GH-#94: option to read output from CmdAction line- or byte-buffered
- GH-#114: `file_dep`, `targets` and `CmdAction` support pathlib
- fix GH-#100: make cmd `completion` output deterministic
- fix GH-#99: positional argument on tasks not specified from cmd-line
- fix GH-#97: `list` command does not display task-doc for `DelayedTask` when `creates` is specified
- fix GH-#131: race condition in doit.tools.create_folder
- fix `auto` command on OS-X systems
- fix GH-#117: give error when user tries to use equal sign on task name

0.29.0 (*2015-08-16*)
=====================

- BACKWARD INCOMPATIBLE: revert - `result_dep` to create an implicit `task_dep`
- fix GH-#59: command `list` issue with unicode names
- fix GH-#72: cmd `completion` escaping of apostrophes in zsh
- fix GH-#74: Task actions handle python3 callables with keyword-only args
- fix GH-#50: executing tasks in parallel (multi-process) fails on Windows
- fix GH-#71 #92: better error messages for invalid command line tasks/commands
- fix issue with `--always-execute` and `setup` tasks
- GH-#67: multiprocess runner handles closures in tasks (using cloudpickle)
- GH-#58: add `DelayedLoader` parameter `target_regex`
- GH-#30: add `DelayedLoader` parameter `creates`
- GH-#58: cmd `Run` add option `--auto-delayed-regex`
- GH-#24: cmd `info` add option `--status` to show reason a task is not up-to-date
- GH-#66: cmd `auto` support custom (user specified) commands to be executed after each task execution
- GH-#61: speed up sqlite3 backend (use internal memory cache)

0.28.0 (*2015-04-22*)
=====================

- BACKWARD INCOMPATIBLE: signature for custom DB backend changed
- BACKWARD INCOMPATIBLE: `DoitMain` API change
- BACKWARD INCOMPATIBLE: `Command` API change
- BACKWARD INCOMPATIBLE: `default` reporter renamed to `console`
- GH-#25: add a `reset-dep` command to recompute dependencies state
- GH-#22: allow to customize how file_dep are checked
- GH-#31: add IPython `%doit` magic-function loading tasks from its global namespace
- read configuration options from INI files
- GH-#32: plugin system
- plugin support: COMMAND - add new commands
- plugin support: LOADER - add custom task loaders
- plugin support: REPORTER - add custom reporter for `run` command
- plugin support: BACKEND - add custom DB persistence backend
- GH-#36: PythonAction recognizes returned TaskError or TaskFailed
- GH-#37: CmdParse support for arguments of type list
- GH-#47: CmdParse support for choices
- fix issue when using unicode strings to specify `minversion` on python 2
- fix GH-#27: auto command in conjunction with task arguments
- fix GH-#44: fix the `list -s` command when result_dep is used
- fix GH-#45: make sure all `uptodate` checks are executed (no short-circuit)

0.27.0 (*2015-01-30*)
=====================

- BACKWARD INCOMPATIBLE: drop python 2.6 support
- BACKWARD INCOMPATIBLE: removed unmaintained genstandalone script
- BACKWARD INCOMPATIBLE: removed runtests.py script and support to run tests through setup.py
- BACKWARD INCOMPATIBLE: `result_dep` creates an implicit `setup` (was `task_dep`)
- BACKWARD INCOMPATIBLE: GH-#9 `getargs` creates an implicit `result_dep`
- BACKWARD INCOMPATIBLE: `CmdAction` would always decode process output using `errors='strict'`; default changed to `replace`
- allow task-creators to return/yield Task instances
- fix GH-#14: add support for delayed task creation
- fix GH-#15: `auto` (linux) inotify also listen for `MOVE_TO` events
- GH-#4: `CmdAction` added parameters `encoding` and `decode_error`
- GH-#6: `loader.task_loader()` accepts methods as *task creators*

0.26.0 (*2014-08-30*)
=====================

- moved development to git/github
- `uptodate` callable "magic" arguments `task` and `values` are now optional
- added command `info` to display task metadata
- command `clean` smarter execution order
- remove `strace` short option `-k` because it conflicts with `run` option
- fix zsh tab-completion script when not `doit` script
- fix #79: use setuptools and `entry_points`
- order of yielded tasks is preserved
- #68: pass positional args to tasks
- fix tab-completion on BASH for sub-commands that take file arguments

0.25.0 (*2014-03-26*)
=====================

- BACKWARD INCOMPATIBLE: use function `doit.get_initial_workdir()` instead of variable `doit.initial_workdir`
- DEPRECATED: `tools.InteractiveAction` renamed to `tools.LongRunning`
- fix: `strace` raises `InvalidCommand` instead of using `assert`
- #28: task `uptodate` support string to be executed as shell command
- added `tools.Interactive` for use with interactive commands
- #69: added doit.run() to make it easier to turn a dodo file into executable
- #70: added option "--pdb" to command `run`
- added option "--single" to command `run`
- include list of file_dep as an implicit dependency

0.24.0 (*2013-11-24*)
=====================

- reporter added `initialize()`
- cmd `list`: added option `--template`
- dodo.py can specify minimum required doit version with DOIT_CONFIG['minversion']
- #62: added the absolute path from which doit is invoked `doit.initial_workdir`
- fix #36: added method `isatty()` to `action.Writer`
- added command `tabcompletion` for bash and zsh
- fix #56: allow python actions to have default values for task parameters

0.23.0 (*2013-09-20*)
=====================

- support definition of group tasks using basename without any task
- added task property `watch` to specify extra files/folders in auto command
- CmdAction support for all arguments of subprocess.Popen, except stdout and stderr
- added command option `-k` as short for `--seek-file`
- task action can be specified as a list of strings (executed using subprocess.Popen shell=False)
- fix #60: result of calc_dep only considered if not run yet
- fix #61: test failures involving DBM
- fix: do not allow duplicate task names

0.22.1 (*2013-08-04*)
=====================

- fix reporter output in py3 being displayed as bytes instead of string
- fix pr#12: read file in chunks when calculating MD5
- fix #54: remove distribute bootstrapping during installation

0.22.0 (*2013-07-05*)
=====================

- fix #49: skip unicode tests on systems with non utf8 locale
- fix #51: bash completion does not mess up with global COMP_WORDBREAKS
- fix docs spelling and added task to check spelling
- fix #47: Task.options can always be accessed from `uptodate` code
- fix #45: cmd forget, added option -s/--follow-sub to forget task_dep too

0.21.1 (*2013-05-21*)
=====================

- fix tests on python3.3.1
- fix race condition on CmdAction (affected only python>=3.3.1)

0.21.0 (*2013-04-29*)
=====================

- fix #38: `doit.tools.create_folder()` raise error if file exists in path
- `create_doit_tasks` not called for unbound methods
- support execution using "python -m doit"
- fix #33: failing to clean a group of task(s) with sub-tasks
- python-actions can take a magic "task" parameter as reference to task
- expose task.clean_targets
- tools.PythonInteractiveAction saves "result" and "values"
- fix #40: added option to use threads for parallel running of tasks
- same code base for python 2 & 3 (no need to use tool `2to3`)
- add sqlite3 DB backend
- added option to select backend

0.20.0 (*2013-01-09*)
=====================

- added command `dumpdb`
- added `CmdAction.save_out` param
- `CmdAction` support for callable that returns a command string
- BACKWARD INCOMPATIBLE: `getargs` for a group task gets a dict where each key is the name of subtasks (previously it was a list)
- added command `strace`
- cmd `auto` run tasks on separate process
- support unicode for task name

0.19.0 (*2012-12-18*)
=====================

- support for `doit help <task-name>`
- added support to load tasks using `create_doit_tasks`
- dropped python 2.5 support

0.18.1 (*2012-12-03*)
=====================

- fix bug cmd option --continue not being recognized

0.18.0 (*2012-11-27*)
=====================

- remove DEPRECATED `Task.insert_action`, `result_dep` and `getargs` using strings
- fix #10: --continue does not execute tasks that have failed dependencies
- fix: --always-execute does not execute "ignored" tasks
- fix #29: python3 cmd-actions issue
- fix #30: tests pass on all dbm backends
- API to add new sub-commands to doit
- API to modify task loader
- API to make dodo.py executable
- added ZeroReporter

0.17.0 (*2012-09-20*)
=====================

- fix #12: Action.out and Action.err not set when using multiprocessing
- fix #16: fix `forget` command on gdbm backend
- fix #14: improve parallel execution (better process utilization)
- fix #9: calc_dep create implicit task_dep if a file_dep returned is also a target
- added tools.result_dep
- fix #15: tools.result_dep supports group-tasks
- DEPRECATE task attribute `result_dep` (use tools.result_dep)
- DEPRECATE `getargs` specification using strings (must use 2-element tuple)
- several changes on `uptodate`
- DEPRECATE `Task.insert_action` (replaced by `Task.value_savers`)
- fix #8: `clean` cleans all subtasks from a group-task
- fix #8: `clean` added flag `--all` to clean all tasks
- fix #8: `clean` when no task is specified set --clean-dep and clean default tasks

0.16.1 (*2012-05-13*)
=====================

- fix multiprocessing/parallel bug
- fix unicode bug on tools.config_changed
- convert tools uptodate stuff to a class, so it can be used with multi-processing

0.16.0 (*2012-04-23*)
=====================

- added task parameter ``basename``
- added support for task generators yielding nested python generators
- ``doit`` process return value ``3`` in case tasks do not start executing (reporter is not used)
- task parameter ``getargs`` take a tuple with 2 values (task_id, key_name)
- DEPRECATE ``getargs`` being specified as ``task_id.key_name``
- ``getargs`` can take all values from task if specified as (task_id, None)
- ``getargs`` will pass values from all sub-tasks if specified task is a group task
- result_dep on PythonAction support checking for dict values
- added ``doit.tools.PythonInteractiveAction``

0.15.0 (*2012-01-10*)
=====================

- added option --db-file (#909520)
- added option --no-continue (#586651)
- added genstandalone.py to create a standalone ``doit`` script (#891935)
- fix doit.tools.set_trace to not modify sys.stdout

0.14.0 (*2011-11-05*)
=====================

- added tools.InteractiveAction (#865290)
- bash completion script
- sub-command list: tasks in alphabetical order, better formatting (#872829)
- fix ``uptodate`` to accept instance-method callables (#871967)
- added command line option ``--seek-file``
- added ``tools.check_unchanged_timestamp`` (#862606)
- fix bug: subclasses of BaseAction should get a task reference

0.13.0 (*2011-07-18*)
=====================

- performance speed improvements
- fix bug on unicode output when task fails
- ConsoleReporter does not output task's title for successful tasks that start with an ``_``
- added ``tools.config_changed`` (to be used with ``uptodate``)
- ``teardown`` actions are executed in reverse order they were registered
- added ``doit.get_var`` to get variables passed from command line
- getargs creates implicit "setup" task, not a "task_dep"

0.12.0 (*2011-05-29*)
=====================

- fix bug #770150 - error on task dependency from target
- fix bug #773579 - unicode output problems
- task parameter ``uptodate`` accepts callables
- deprecate task attribute run_once; use tools.run_once on uptodate instead
- added doit.tools.timeout

0.11.0 (*2011-04-20*)
=====================

- no more support for python2.4
- support for python 3.2
- fix bug on unicode filenames & unicode output (#737904)
- fix bug when using getargs together with multiprocess (#742953)
- fix for dumbdbm backend
- fix task execution order when using "auto" command
- fix getargs when used with sub-tasks
- fix calc_dep when used with "auto" command
- "auto" command now supports verbosity control option

0.10.0 (*2011-01-24*)
=====================

- add task parameter "uptodate"
- add task parameter "run_once"
- deprecate file_dep bool values and None
- fix issues with error reporting for JSON Reporter
- "Reporter" API changes
- ".doit.db" now uses a DBM file format by default (speed optimization)

0.9.0 (*2010-06-08*)
====================

- support for dynamic calculated dependencies "calc_dep"
- support for user defined reporters
- support "auto" command on mac
- fix installer on mac; installer aware of different python versions
- deprecate 'dependencies'; use file_dep, task_dep, result_dep

0.8.0 (*2010-05-16*)
====================

- parallel execution of tasks (multi-process support)
- sub-command "list" option "--deps", show list of file dependencies
- select task by wildcard (fnmatch), i.e. test:folderXXX/*
- task-setup can be another task
- task property "teardown" substitute of setup-objects cleanup
- deprecate setup-objects

0.7.0 (*2010-04-08*)
====================

- configure options on dodo file (deprecate DEFAULT_TASKS) (#524387)
- clean and forget act only on default tasks (not all tasks) (#444243)
- sub-command "clean" option "clean-dep" to follow dependencies (#444247)
- task dependency "False" means never up-to-date, "None" ignored
- sub-command "list" by default does not show tasks starting with an underscore, added option (-p/--private)
- new sub-command "auto"

0.6.0 (*2010-01-25*)
====================

- improve (speed optimization) check if file modified (#370920)
- sub-command "clean" dry-run option (-n/--dry-run) (#444246)
- sub-command "clean" has a more verbose output (#444245)
- sub-command "list" option to show task status (-s/--status) (#497661)
- sub-command "list" filter tasks passed as positional parameters
- tools.set_trace, PDB with stdout redirection (#494903)
- accept command line optional parameters passed before sub-command (#494901)
- give a clear error message if .doit.db file is corrupted (#500269)
- added task option "getargs"; actions can use computed values from other tasks (#486569)
- python-action might return a dictionary on success

0.5.1 (*2009-12-03*)
====================

- fix: task-result-dependencies should also be added as task-dependency to force its execution

0.5.0 (*2009-11-30*)
====================

- task parameter 'clean' == True cleans empty folders, and displays warning for non-empty folders
- added command line option --continue; execute all tasks even if tasks fail
- added command line option --reporter to select result output reporter
- added executed-only reporter
- added json reporter
- support for task-result dependency (#438174)
- added sub-command ignore task
- added command line option --outfile; write output to specified file path
- added support for passing arguments to tasks on cmd line
- added command line option --dir (-d) to set current working directory
- removed dodo-sample sub-command
- added task field 'verbosity'
- added task field 'title'
- modified default way a task is printed on console (just show ". name"), old way added to doit.tools.task_title_with_actions

0.4.0 (*2009-10-05*)
====================

- deprecate anything other than boolean values as return of python actions
- sub-cmd clean (#421450)
- remove support for task generators returning action (not documented behavior)
- setup parameter for a task should be a list - single value deprecated (#437225)
- PythonAction support 'dependencies', 'targets', 'changed' parameters
- added tools.create_folder (#421453)
- deprecate folder-dependency
- CmdActions reference to dependencies, targets and changed dependencies (#434327)
- print task description when printing through doit list (#425811)
- action as list of commands/python (#421445)
- deprecate "action"; use "actions"

0.3.0 (*2009-08-30*)
====================

- added subcommand "forget" to clear successful runs status (#370911)
- save run results in text file using JSON (removed dbm)
- added support for DEFAULT_TASKS in dodo file
- targets md5 is not checked anymore; if target exists, task is up-to-date; it also supports folders
- cmd line sub-commands (#370909)
- remove hashlib dependency on python 2.4
- sub-cmd to create dodo template
- cmd-task supports a list of shell commands
- setup/cleanup for task (#370905)

0.2.0 (*2009-04-16*)
====================

- docs generated using sphinx
- execute once (dependency = True)
- group task
- support python 2.4 and 2.6
- folder dependency

0.1.0 (*2008-04-14*)
====================

- initial release

===== doit-0.30.3/CONTRIBUTING.md =====

# Contributing to doit

## issues/bugs

If you find issues using `doit`, please report them at
[github issues](https://github.com/pydoit/doit/issues).

All issues should contain a sample minimal `dodo.py` and the command line
used to reproduce the problem.

## questions

Please ask questions in the discussion
[forum](http://groups.google.co.in/group/python-doit).
Do not use the github issue tracker!

`doit` has extensive online documentation; please read the docs!

When asking a question it is appreciated if you introduce yourself, and
mention how long you have been using doit and what you are using it for.

A good question with a code example greatly increases the chance of it
getting a reply.

Unless you are looking for paid support, do **not** send private emails to
the project maintainer.

## feature request

Users are expected to implement new features themselves.
You are welcome to add a request on the github tracker, but if you are not
willing to spend your time on it, probably nobody else will...

Before you start implementing anything, it is a good idea to discuss its
implementation in the discussion forum.
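The contribution guide above asks bug reports to include a sample minimal `dodo.py`. As a hedged illustration of what such a file looks like (the task name and file names below are made up, not taken from the doit docs), a dodo file is just Python functions named `task_*` returning task metadata as a dict:

```python
# dodo.py -- a minimal doit task file (names here are illustrative only).
# doit collects every `task_*` function; the returned dict describes
# the task's actions and targets.

def task_hello():
    """produce hello.txt with a greeting"""
    return {
        'actions': ['echo hello > hello.txt'],  # shell command run by doit
        'targets': ['hello.txt'],               # file created by the action
    }
```

Attaching a file like this, together with the exact `doit` command line used, makes an issue reproducible.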
===== doit-0.30.3/LICENSE =====

The MIT License

Copyright (c) 2008-2014 Eduardo Naufel Schettino

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

===== doit-0.30.3/README.rst =====

================
README
================

.. display some badges

.. image:: https://img.shields.io/pypi/v/doit.svg
   :target: https://pypi.python.org/pypi/doit

.. image:: https://travis-ci.org/pydoit/doit.png?branch=master
   :target: https://travis-ci.org/pydoit/doit

.. image:: https://ci.appveyor.com/api/projects/status/f7f97iywo8y7fe4d/branch/master?svg=true
   :target: https://ci.appveyor.com/project/schettino72/doit/branch/master

.. image:: https://coveralls.io/repos/pydoit/doit/badge.png?branch=master
   :target: https://coveralls.io/r/pydoit/doit?branch=master

.. image:: https://badges.gitter.im/Join%20Chat.svg
   :alt: Join the chat at https://gitter.im/pydoit/doit
   :target: https://gitter.im/pydoit/doit?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge


doit - automation tool
======================

*doit* comes from the idea of bringing the power of build-tools to
execute any kind of task


Project Details
===============

- Website & docs - http://pydoit.org
- Project management on github - https://github.com/pydoit/doit
- Discussion group - https://groups.google.com/forum/#!forum/python-doit
- News/twitter - https://twitter.com/py_doit
- Plugins, extensions and projects based on doit - https://github.com/pydoit/doit/wiki/powered-by-doit

license
=======

The MIT License
Copyright (c) 2008-2015 Eduardo Naufel Schettino

see LICENSE file


developers / contributors
==========================

see AUTHORS file


install
=======

*doit* is tested on python 3.3, 3.4, 3.5.

The last version supporting python 2 is version 0.29.

::

  $ python setup.py install


dependencies
=============

- cloudpickle
- pyinotify (linux)
- macfsevents (mac)

Tools required for development:

- git * VCS
- py.test * unit-tests
- mock * unit-tests
- coverage * code coverage
- epydoc * API doc generator
- sphinx * doc tool
- pyflakes * syntax checker
- doit-py * helper to run dev tasks


development setup
==================

The best way to set up an environment to develop *doit* itself is to
create a virtualenv...

::

  doit$ virtualenv dev
  (dev)doit$ dev/bin/activate

install ``doit`` as "editable", and add development dependencies
from `dev_requirements.txt`::

  (dev)doit$ pip install --editable .
  (dev)doit$ pip install --requirement dev_requirements.txt

.. note::

    Windows developers: Due to a bug in `wheel` distributions,
    `pytest` must not be installed from a `wheel`, e.g.::

      pip install pytest --no-use-wheel

    See for more information:

    - https://github.com/pytest-dev/pytest/issues/749
    - https://bitbucket.org/pytest-dev/pytest/issues/749/


tests
=======

Use py.test - http://pytest.org

::

  $ py.test


documentation
=============

``doc`` folder contains ReST documentation based on Sphinx.

::

  doc$ make html

They are the base for creating the website. The only difference is
that the website includes analytics tracking.

To create it (after installing *doit*)::

  $ doit website

The website will also include epydoc-generated API documentation.


spell checking
--------------

All documentation is spell checked using the task `spell`::

  $ doit spell

It is a bit annoying that code snippets and names always fail the check;
these words must be added into the file `doc/dictionary.txt`.

The spell checker currently uses `hunspell`; to install it on debian-based
systems install the hunspell package: `apt-get install hunspell`.


profiling
---------

::

  python -m cProfile -o output.pstats `which doit` list
  gprof2dot -f pstats output.pstats | dot -Tpng -o output.png


contributing
==============

On github create pull requests using a named feature branch.

===== doit-0.30.3/TODO.txt =====

see https://github.com/pydoit/doit/issues

0.X
----------

. setup/task single process/all processes
. better terminal output (#5)

wishlist
----------

. tools - profile
. tools - code coverage
. color output on the terminal
. option dont save successful results
. forget a dependency, not a task
. task name alias
. action to be executed when ctrl-c is hit on auto mode

big refactorings
------------------

. Task into TaskDep + Task

===== doit-0.30.3/appveyor.yml =====

build: false

branches:
  only:
    - master
    - test

environment:
  matrix:
    - PYTHON: "C:/Python33"
    - PYTHON: "C:/Python34"
    - PYTHON: "C:/Python35"
    - PYTHON: "C:/Python36"

init:
  - "ECHO %PYTHON%"
  - ps: "ls C:/Python*"

install:
  #- ps: (new-object net.webclient).DownloadFile('https://raw.github.com/pypa/pip/master/contrib/get-pip.py', 'C:/get-pip.py')
  #- "%PYTHON%/python.exe C:/get-pip.py"
  #- "%PYTHON%/Scripts/pip.exe install --upgrade setuptools"
  # Explicitly install pytest from NOT the wheel distribution to ensure it
  # works with Python 2.7.
  # See: https://github.com/pytest-dev/pytest/issues/749
  #      https://bitbucket.org/pytest-dev/pytest/issues/749/
  - "%PYTHON%/python.exe -m pip install -U pip"
  - "%PYTHON%/Scripts/pip.exe install -U setuptools pytest --no-binary all"
  - "%PYTHON%/Scripts/pip.exe install ."
  - "%PYTHON%/Scripts/pip.exe install -r dev_requirements.txt"

test_script:
  - "set path=%PYTHON%/Scripts;%path%"
  - "%PYTHON%/python.exe --version"
  - "%PYTHON%/Scripts/pip.exe --version"
  - "doit pyflakes"
  - "py.test"

===== doit-0.30.3/bash_completion_doit =====

# bash completion for doit
# auto-generated by `doit tabcompletion`
# to activate it you need to 'source' the generated script
# $ source

# reference => http://www.debian-administration.org/articles/317
# patch => http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=711879

_doit()
{
    local cur prev words cword basetask sub_cmds tasks i dodof
    COMPREPLY=() # contains list of words with suitable completion
    # remove colon from word separator list because doit uses colon on task names
    _get_comp_words_by_ref -n : cur prev words cword
    # list of sub-commands
    sub_cmds="auto clean dumpdb forget help ignore info list reset-dep run strace tabcompletion"

    # options that take file/dir as values should complete file-system
    if [[ "$prev" == "-f" || "$prev" == "-d" || "$prev" == "-o" ]]; then
        _filedir
        return 0
    fi
    if [[ "$cur" == *=* ]]; then
        prev=${cur/=*/}
        cur=${cur/*=/}
        if [[ "$prev" == "--file=" || "$prev" == "--dir=" || "$prev" == "--output-file=" ]]; then
            _filedir -o nospace
            return 0
        fi
    fi

    # get name of the dodo file
    for (( i=0; i < ${#words[@]}; i++)); do
        case "${words[i]}" in
        -f)
            dodof=${words[i+1]}
            break
            ;;
        --file=*)
            dodof=${words[i]/*=/}
            break
            ;;
        esac
    done
    # dodo file not specified, use default
    if [ ! $dodof ]
    then
        dodof="dodo.py"
    fi

    # get task list
    # if there is a colon it is getting a subtask, complete only subtask names
    if [[ "$cur" == *:* ]]; then
        # extract base task name (remove everything after colon)
        basetask=${cur%:*}
        # sub-tasks
        tasks=$(doit list --file="$dodof" --quiet --all ${basetask} 2>/dev/null)
        COMPREPLY=( $(compgen -W "${tasks}" -- ${cur}) )
        __ltrim_colon_completions "$cur"
        return 0
    # without colons get only top tasks
    else
        tasks=$(doit list --file="$dodof" --quiet 2>/dev/null)
    fi

    # match for first parameter must be sub-command or task
    # FIXME doit accepts options "-" in the first parameter but we ignore this case
    if [[ ${cword} == 1 ]] ; then
        COMPREPLY=( $(compgen -W "${sub_cmds} ${tasks}" -- ${cur}) )
        return 0
    fi

    case ${words[1]} in
        auto)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        clean)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        dumpdb)
            COMPREPLY=( $(compgen -f -- $cur) )
            return 0
            ;;
        forget)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        help)
            COMPREPLY=( $(compgen -W "${tasks} ${sub_cmds}" -- $cur) )
            return 0
            ;;
        ignore)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        info)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        list)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        reset-dep)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        run)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        strace)
            COMPREPLY=( $(compgen -W "${tasks}" -- $cur) )
            return 0
            ;;
        tabcompletion)
            COMPREPLY=( $(compgen -f -- $cur) )
            return 0
            ;;
    esac

    # if there is already one parameter match only tasks (no commands)
    COMPREPLY=( $(compgen -W "${tasks}" -- ${cur}) )
}
complete -o filenames -F _doit doit

===== doit-0.30.3/dev_requirements.txt =====

# modules required for development only
# $ pip install --requirement dev_requirements.txt
pyflakes
pytest>=2.8.0
pytest-ignore-flaky
mock
coverage>=4.0
doit-py>=0.4.0

===== doit-0.30.3/doc/Makefile =====

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/doit.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/doit.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/doit"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/doit"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
doit-0.30.3/doc/_static/ [binary image files omitted: doit-text-160x60.png, doit.png, external.png, favico.ico, pygarden.png, python-powered-w-100x40.png] doit-0.30.3/doc/_templates/layout.html {% extends "!layout.html" %} {% block extrahead %} {% if include_analytics %} {% endif %} {% endblock %} {% block rootrellink %}
  • Home | 
  • Documentation | 
  • Code | 
  • Issues | 
  • Download | 
  • Forum | 
  • Twitter
  • {% endblock %} {% block header %}
    doit logo
    {% endblock %} {% block sidebarsearch %} {{ super() }} {% if include_donate %}

    Funding Campaign

    {% endif %}

    Sponsors

    pygarden.com logo

    Donate

    {% if include_donate %}
    {% endif %}
    If you use doit and think it is a useful project, please consider making a donation to support the developer/maintainer with maintenance tasks (bug fixes, merging patches, replying to users) and further development.
    {# logo - python #}

    python logo

{% endblock %} {% block sidebarrel %} {% endblock %} doit-0.30.3/doc/blog.txt000066400000000000000000000142541305250115000147630ustar00rootroot00000000000000========================= DoIt - a build-tool tale ========================= DoIt is a build-tool written in python. In this post I explain my motivation for writing yet another build tool. If you just want to use it, please check the `website `_ Build-tool ---------- I started working on a web project... As a good TDD disciple I have lots of tests spawning everywhere. There are plain python unittest, twisted's trial tests and Django specific unit tests. That's all for python, but I also have unit tests for javascript (using a home grown unit test framework) and regression tests using Selenium. Running lint tools (`JavaScriptLint `_ and `PyFlakes `_) is just as important. So I have seven tools to help me keep the project healthy. But I need one more to control the seven tools! Actually there are more. I am not counting the javascript compression tool, the documentation generator... I am not looking for continuous integration (at least right now). I want to execute the tests in an efficient way and catch problems before committing the code to a VCS. | - What tool do we use to automate running tasks? | - GNU Make. Or any other build tool. SCons ----- I had the misfortune to (try to) debug some `Makefile `_'s before. XML based was never really an `option `_ to me. Since I work with python, `SCons `_ looked like a good bet. SCons. Writing the rules/tasks in python helps a lot. But the configuration (construct) file is not as simple and intuitive as I would expect. Maybe too powerful for my needs. That's ok, I don't have to write new "Builders" that often. Things went ok for a while... but things started to get too slow. Normal python tests are fast enough not to bother about it. But the execution time of Django tests using postgres does bother. The javascript tests run on the browser. 
So it needs to start the server, launch the browser, load and execute the tests... uuoooohhhh. Most of the time I *really* need to execute just a subset of tests/tasks. The whole point of build tools is to keep track of dependencies and re-build only what is necessary, right? The problem with tests is that actually I am not *building* anything. I am executing tasks (in this case tests). Building something is a "task" with "target" file(s), but running a test is a "task" with no "target". The problem is that build tools were designed to keep track of target/file dependencies, not task dependencies. Yes, I know you can `use `_ `hacks `_ to pretend that every task has a target file. But I was not really willing to do this... I was not using any of the great SCons features. Actually at some point I easily substituted it with a simple (but lengthy) python script using the `subprocess `_ module. Of course this didn't solve the speed problem. DoIt ---- `DoIt `_. I want a tool to automatically execute any kind of task, having a target or not. It must keep track of the dependencies and re-do (or re-execute) tasks only if necessary (like every build tool does for target files). And of course it shouldn't get in my way while specifying the tasks. Requirements: . keep track of dependencies. but they must be specified by the user, no automatic dependency analysis. (i.e. nearly every build tool supports this) . easy to create new task rules. (i.e. write them in python) . get out of your way, avoid boiler-plate code. (i.e. something like what nose does to unittest) . dependencies by tasks, not on files/targets. The only distinctive requirement is item 4. I guess any tool that implements dependency on targets could support dependency on tasks without much effort. You just need to save the signature of the dependent files on successful completion of the task. If none of the dependencies changed its signature, the task doesn't need to be executed again. 
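The signature check described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not doit's actual implementation; the function names are made up:

```python
import hashlib


def file_signature(path):
    """Return an MD5 signature for the content of a file."""
    with open(path, 'rb') as fobj:
        return hashlib.md5(fobj.read()).hexdigest()


def up_to_date(dependencies, saved_signatures):
    """A task is up-to-date if every dependency still has the
    signature saved on the task's last successful run."""
    return all(saved_signatures.get(dep) == file_signature(dep)
               for dep in dependencies)


def record_success(dependencies, saved_signatures):
    """Save the current signatures after the task completes successfully."""
    for dep in dependencies:
        saved_signatures[dep] = file_signature(dep)
```

With that in place, a runner only executes a task when `up_to_date` returns False, and calls `record_success` afterwards.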
Since it is not required to have a target, the tasks need to be uniquely identified. But that's an implementation detail... So what does it look like? Extracted from the `tutorial `_: compressing javascript files, and combining them into a single file. I will use `shrinksafe `_. ``compressjs.py`` ::

    """ dodo file - compress javascript files """

    import os

    jsPath = "./"
    jsFiles = ["file1.js", "file2.js"]
    sourceFiles = [jsPath + f for f in jsFiles]
    compressedFiles = [jsPath + "build/" + f + ".compressed" for f in jsFiles]

    def create_folder(path):
        """Create folder given by "path" if it doesn't exist"""
        if not os.path.exists(path):
            os.mkdir(path)
        return True

    def task_create_build_folder():
        buildFolder = jsPath + "build"
        return {'action': create_folder,
                'args': (buildFolder,)}

    def task_shrink_js():
        for jsFile, compFile in zip(sourceFiles, compressedFiles):
            action = 'java -jar custom_rhino.jar -c %s > %s' % (jsFile, compFile)
            yield {'action': action,
                   'name': jsFile,
                   'dependencies': (":create_build_folder", jsFile,),
                   'targets': (compFile,)}

    def task_pack_js():
        output = jsPath + 'compressed.js'
        input = compressedFiles
        action = "cat %s > %s" % (" ".join(input), output)
        return {'action': action,
                'dependencies': input,
                'targets': [output]}

Running::

    doit -f compressjs.py

Let's start from the end. ``task_pack_js`` will combine all compressed javascript files into a single file. ``task_shrink_js`` compresses a single javascript file and saves the result in the "build" folder. ``task_create_build_folder`` is used to create a *build* folder to store the compressed javascript files (if the folder doesn't exist yet). Note that this task will always be executed because it doesn't have dependencies. But even though it is a dependency for every "shrink_js" task, it will be executed only once per DoIt run. The same task is never executed twice. 
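The "executed only once per run" behaviour boils down to remembering which tasks already ran in the current session. A rough sketch of the idea (illustrative only, not doit's code; the `task_dep` key and function names are made up for this example):

```python
def run_once(task, all_tasks, executed):
    """Run a task's task-dependencies and then the task itself,
    skipping anything that already ran in this session."""
    if task['name'] in executed:
        return
    # run task-dependencies first (depth-first)
    for dep in task.get('task_dep', []):
        run_once(all_tasks[dep], all_tasks, executed)
    task['action']()
    executed.add(task['name'])
```

Even if many tasks share the same dependency, the shared task's action fires only on the first visit; later visits return early because its name is already in the `executed` set.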
And that's all. doit-0.30.3/doc/changes.rst000077700000000000000000000000001305250115000166342../CHANGESustar00rootroot00000000000000doit-0.30.3/doc/cmd_other.rst000066400000000000000000000222541305250115000157740ustar00rootroot00000000000000================ Other Commands ================ .. note:: Not all options/arguments are documented below. Always check `doit help ` to see a complete list of options. Let's use a more complex example to demonstrate the command line features. The example below is used to manage a very simple C project. .. literalinclude:: tutorial/cproject.py .. _cmd-help: help ------- `doit` comes with several commands. `doit help` will list all available commands. You can also get help from each available command, e.g. `doit help run`. `doit help task` will display information on all fields/attributes a task dictionary from a `dodo` file accepts. .. _cmd-list: list ------ *list* is used to show all tasks available in a *dodo* file. Tasks are listed in alphabetical order, not by order of execution. .. code-block:: console $ doit list compile : compile C files install : install executable (TODO) link : create binary program By default the task name and description are listed. The task description is taken from the first line of the task function's doc-string. You can also set it using the *doc* attribute on the task dictionary. It is possible to omit the description using the option *-q*/*--quiet*. By default sub-tasks are not listed. Sub-tasks can be listed using the option *--all*. By default task names that start with an underscore (*_*) are not listed. They are listed if the option *-p*/*--private* is used. Task status can be printed using the option *-s*/*--status*. A task's file-dependencies can be printed using the option *--deps*. info ------- You can check a task's meta-data using the *info* command. This might be useful when you have some complex code generating the task meta-data. ..
code-block:: console $ doit info link name:'link' file_dep:set(['command.o', 'kbd.o', 'main.o']) targets:['edit'] Use the option `--status` to check the reason a task is not up-to-date. .. code-block:: console $ doit info link Task is not up-to-date: * The following file dependencies have changed: - main.o - kbd.o - command.o forget ------- Suppose you change the compilation parameters in the compile action. Or you changed the code of a python-action. *doit* will think your task is up-to-date based on the dependencies but actually it is not! In this case you can use the *forget* command to make sure the given task will be executed again even with no changes in the dependencies. If you do not specify any task, the default tasks are *forgotten*. .. code-block:: console $ doit forget .. note:: *doit* keeps track of which tasks are successful in the file ``.doit.db``. clean ------ A common scenario is a task that needs to "revert" its actions. A task may include a *clean* attribute. This attribute can be ``True`` to remove all of its target files. If there is a folder as a target it will be removed if the folder is empty, otherwise it will display a warning message. The *clean* attribute can also be a list of actions; again, an action can be a string with a shell command or a tuple with a python callable. If you want to clean the targets and add some custom clean actions, you can include `doit.task.clean_targets` instead of passing `True`: .. literalinclude:: tutorial/clean_mix.py You can specify which task to *clean*. If no task is specified, the clean operations of the default tasks are executed. .. code-block:: console $ doit clean By default, if a task contains task-dependencies, those are not automatically cleaned. You can enable this using the option *-c*/*--clean-dep*. If you are executing the default tasks this flag is automatically set. .. note:: By default only the clean operations of the default tasks are executed, not those of all tasks. 
You can clean all tasks using the *-a*/*--all* argument. If you want to check which tasks the clean operation would affect, you can use the option *-n*/*--dry-run*. ignore ------- It is possible to set a task to be ignored/skipped (that is, not executed). This is useful, for example, when you are performing checks in several files and you want to skip the check in some of them temporarily. .. literalinclude:: tutorial/subtasks.py .. code-block:: console $ doit . create_file:file0.txt . create_file:file1.txt . create_file:file2.txt $ doit ignore create_file:file1.txt ignoring create_file:file1.txt $ doit . create_file:file0.txt !! create_file:file1.txt . create_file:file2.txt Note the ``!!``; it means that the task was ignored. To reverse the `ignore`, use the `forget` sub-command. .. _cmd-auto: auto (watch) ------------- .. note:: Supported on Linux and Mac only. The `auto` sub-command is an alternative way of executing your tasks. It is a long-running process that only terminates when it is interrupted (Ctrl-C). When started it will execute the given tasks. After that it will watch the file system for modifications in the file-dependencies. When a file is modified the tasks are re-executed. .. code-block:: console $ doit auto .. note:: The `dodo` file is actually re-loaded/executed in a separate process every time tasks need to be re-executed. callbacks ^^^^^^^^^ It is possible to specify shell commands to be executed after every cycle of task execution. This can be used to display desktop notifications, so you do not need to keep an eye on the terminal to notice when tasks succeed or fail. Example of sound and desktop notification on Ubuntu. Contents of a `doit.cfg` file: ..
code-block:: INI [auto] success_callback = notify-send -u low -i /usr/share/icons/gnome/16x16/emotes/face-smile.png "doit: success"; aplay -q /usr/share/sounds/purple/send.wav failure_callback = notify-send -u normal -i /usr/share/icons/gnome/16x16/status/error.png "doit: fail"; aplay -q /usr/share/sounds/purple/alert.wav ``watch`` parameter ^^^^^^^^^^^^^^^^^^^^^ Apart from ``file_dep`` you can use the parameter ``watch`` to pass extra paths to be watched (including folders). The ``watch`` parameter can also be specified for a group of "sub-tasks". .. literalinclude:: tutorial/empty_subtasks.py .. _tabcompletion: tabcompletion ---------------- This command creates a completion for bash or zsh. The generated script is written to stdout. bash ^^^^^^ To use a completion script you need to `source` it first. .. code-block:: console $ doit tabcompletion > bash_completion_doit $ source bash_completion_doit zsh ^^^^^ zsh completion scripts should be placed in a folder in the "autoload" path. .. code-block:: sh # add folder with completion scripts fpath=(~/.zsh/tabcompletion $fpath) # Use modern completion system autoload -Uz compinit compinit .. code-block:: console $ doit tabcompletion --shell zsh > _doit $ cp _doit ~/.zsh/tabcompletion/_doit hard-coding tasks ^^^^^^^^^^^^^^^^^^^^ If you are creating an application based on `doit` or if your tasks take a long time to load, you may create a completion script that includes the list of tasks from your dodo.py. .. code-block:: console $ my_app tabcompletion --hardcode-tasks > _my_app dumpdb -------- `doit` saves internal data in a file (`.doit.db` by default). It uses a binary format (whatever python's dbm is using in your system). This command will simply dump its content in readable text format to the output. .. code-block:: console $ doit dumpdb strace -------- This command uses the `strace `_ utility to help you verify which files are being used by a given task. 
The output is a list of files prefixed with `R` for open in read mode or `W` for open in write mode. The files are listed in chronological order. This is a debugging feature with many limitations. * can strace only one task at a time * can only strace CmdAction * the process being traced itself might have some kind of cache; that means it might not write a target file if it exists * does not handle chdir So this is NOT 100% reliable, use with care! .. code-block:: console $ doit strace reset-dep --------- This command allows you to recompute the information on file dependencies (timestamp, md5sum, ... depending on the ``check_file_uptodate`` setting), and save it in the database, without executing the actions. The command runs on all tasks by default, but it is possible to specify a list of tasks to work on. This is useful when the targets of your tasks already exist, and you want doit to consider your tasks as up-to-date. One use-case for this command is when you change the ``check_file_uptodate`` setting, which causes doit to consider all your tasks as not up-to-date. It is also useful if you start using doit while some of your data has already been computed, or when you add a file dependency to a task that has already run. .. code-block:: console $ doit reset-dep .. warning:: `reset-dep` will **NOT** recalculate task `values` and `result`. This might not be the correct behavior for your tasks! It is safe to use `reset-dep` if your tasks rely only on files to control their up-to-date status. So only use this command if you are sure it is OK for your tasks. If the DB already has any saved `values` or `result` they will be preserved; otherwise they will not be set at all. doit-0.30.3/doc/cmd_run.rst000066400000000000000000000245541305250115000154630ustar00rootroot00000000000000============================== Command line interface - run ============================== A general `doit` command goes like this: ..
.. code-block:: console

    $ doit [run] [] [ ]* []

The `doit` command line contains several sub-commands. Most of the time you just want to execute your tasks; that's what *run* does. Since it is by far the most common operation it is also the default, so if you don't specify any sub-command to *doit* it will execute *run*. So ``$ doit`` and ``$ doit run`` do the same thing.

The basics of task selection were introduced in :ref:`Task Selection `.


`python -m doit`
-----------------

`doit` can also be executed without using the `doit` script.

.. code-block:: console

    $ python -m doit

This is especially useful when testing `doit` with different python versions.


dodo file
----------

By default all commands are relative to ``dodo.py`` in the current folder. You can specify a different *dodo* file containing tasks with the flag ``-f``. This flag is valid for all sub-commands.

.. code-block:: console

    $ doit -f release.py

*doit* can search for the ``dodo.py`` file in parent folders if the option ``--seek-file`` is specified.


as an executable file
-----------------------

using a hashbang
^^^^^^^^^^^^^^^^^^^^^

If you have `doit` installed on ``/usr/bin`` use the following hashbang:

.. code-block:: bash

    #! /usr/bin/doit -f

using the API
^^^^^^^^^^^^^^

It is possible to make a ``dodo`` file executable on its own by calling ``doit.run()``; you need to pass the ``globals``:

.. literalinclude:: tutorial/executable.py

.. note::

   The ``doit.run()`` method calls ``sys.exit()``, so any code after it will not be executed.

The ``doit.run()`` parameter will be passed to a :ref:`ModuleTaskLoader ` to find your tasks.

from IPython
------------------

You can install and use the `%doit` magic function to load tasks defined directly in IPython's global namespace (:ref:`more `).

returned value
------------------

The ``doit`` process returns:

* 0 => all tasks executed successfully
* 1 => task failed
* 2 => error executing task
* 3 => error before task execution starts (in this case the reporter is not used)


DB backend
--------------

`doit` saves the results of your task runs in a "DB-file". It supports different backends:

- `dbm`: (default) It uses the `python dbm module `_. The actual DBM used depends on what is available on your machine/platform.

- `json`: Plain text using a json structure; it is slow but good for debugging.

- `sqlite3`: Supports concurrent access (the DB is updated only once, when the process terminates, for better performance).

From the command line you can select the backend using the ``--backend`` option.

It is quite easy to add a new backend for any key-value store.


DB-file
----------

Option ``--db-file`` sets the name of the file to save the "DB", default is ``.doit.db``. Note that DBM backends might save more than one file; in this case the specified name is used as a base name.

To configure this in a `dodo` file the field name is ``dep_file``:

.. code-block:: python

    DOIT_CONFIG = {
        'backend': 'json',
        'dep_file': 'doit-db.json',
    }


.. _verbosity_option:

verbosity
-----------

Option to change the default global task :ref:`verbosity` value.

.. code-block:: console

    $ doit --verbosity 2


output buffering
----------------

The output (`stdout` and `stderr`) is by default line-buffered for `CmdAction`. You can change that by specifying the `buffering` parameter when creating a `CmdAction`. The value zero (the default) means line-buffered; positive integers are the number of bytes to be read per call. Note this controls the buffering between the `doit` process and the terminal; it is not to be confused with subprocess.Popen ``bufsize``.
.. code-block:: python

    from doit.action import CmdAction

    def task_progress():
        return {
            'actions': [CmdAction("progress_bar", buffering=1)],
        }


dir (cwd)
-----------

By default the directory of the `dodo` file is used as the "current working directory" on python execution. You can specify a different *cwd* with the *-d*/*--dir* option.

.. code-block:: console

    $ doit --dir path/to/another/cwd

.. note::

   It is possible to get a reference to the initial current working directory (the location where the command line was executed) using :ref:`initial_workdir`.


continue
---------

By default the execution of tasks is halted on the first task failure or error. You can force it to continue execution with the option --continue/-c

.. code-block:: console

    $ doit --continue


single task execution
----------------------

The option ``-s/--single`` can be used to execute a task without executing its task dependencies.

.. code-block:: console

    $ doit -s do_something


.. _parallel-execution:

parallel execution
-------------------

`doit` supports parallel execution --process/-n. This allows different tasks to be run in parallel, as long as their dependencies are met. By default the `multiprocessing `_ module is used, so the same restrictions also apply to the use of multiprocessing in `doit`.

.. code-block:: console

    $ doit -n 3

You can also execute in parallel using threads by specifying the option `--parallel-type/-P`.

.. code-block:: console

    $ doit -n 3 -P thread

.. note::

   The actions of a single task are always run sequentially; only tasks and sub-tasks are affected by the parallel execution option.

.. warning::

   On Windows, due to some limitations on how `multiprocessing` works, there are stricter requirements for task properties being picklable than on other platforms.


.. _reporter:

reporter
---------

`doit` provides different "*reporters*" to display info about running tasks on the console. Use the option --reporter/-r to choose a reporter. Apart from the default it also includes:

* executed-only: Produces zero output if no task is executed
* json: Output results in JSON format
* zero: display only error messages (does not display info on tasks being executed/skipped). This is used when you only want to see the output generated by the tasks' execution.

.. code-block:: console

    $ doit --reporter json

.. _custom_reporter:

custom reporter
-----------------

It is possible to define your own custom reporter. Check the code on `doit/reporter.py `_ ... It is easy to get started by sub-classing the default reporter as shown below. The custom reporter can be enabled directly in the DOIT_CONFIG dict.

.. literalinclude:: tutorial/custom_reporter.py

It is also possible to distribute/use a custom reporter as a :ref:`plugin `.

Note that the ``reporter`` has no control over the *real time* output from a task while it is being executed; this is controlled by the ``verbosity`` param.

check_file_uptodate
-------------------

`doit` provides different options to check if dependency files are up to date (see :ref:`file-dep`). Use the option ``--check_file_uptodate`` to choose:

* `md5`: use the md5sum.
* `timestamp`: use the timestamp.

.. note::

   The `timestamp` checker considers a file not up-to-date if there is **any** change in the modified time (`mtime`); it does not matter if the new time is in the future or past of the original timestamp.

You can set this option from the command line, but you probably want to set it for all commands using `DOIT_CONFIG`.

.. code-block:: python

    DOIT_CONFIG = {'check_file_uptodate': 'timestamp'}

custom check_file_uptodate
^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to define your own custom up-to-date checker. Check the code on `doit/dependency.py `_ ... Sub-class ``FileChangedChecker`` and define the 2 required methods as shown below. The custom checker must be configured using the DOIT_CONFIG dict.
.. code-block:: python

    from doit.dependency import FileChangedChecker

    class MyChecker(FileChangedChecker):
        """With this checker, files are always out of date."""
        def check_modified(self, file_path, file_stat, state):
            return True
        def get_state(self, dep, current_state):
            pass

    DOIT_CONFIG = {'check_file_uptodate': MyChecker}


output-file
------------

The option --output-file/-o lets you output the result to a file.

.. code-block:: console

    $ doit --output-file result.txt


pdb
-------

If the option ``--pdb`` is used, a post-mortem debugger will be launched in case of an unhandled exception while loading tasks.


.. _initial_workdir:

get_initial_workdir()
---------------------

By default, when `doit` executes it will use the location of `dodo.py` as the current working directory (unless --dir is specified). The value of `doit.get_initial_workdir()` will contain the path from where `doit` was invoked. This can be used, for example, to set which tasks will be executed:

.. literalinclude:: tutorial/initial_workdir.py


minversion
-------------

`minversion` can be used to specify the minimum/oldest `doit` version that can be used with a `dodo.py` file.

For example, say your `dodo.py` makes use of a feature added in `doit` version `X` and you distribute it. If another user tries this `dodo.py` with a version older than `X`, doit will display an error warning the user to update `doit`.

`minversion` can be specified as a string or a 3-element tuple with integer values. If specified as a string, any part that is not a number, i.e. (dev0, a2, b4), will be converted to -1.

.. code-block:: python

    DOIT_CONFIG = {
        'minversion': '0.24.0',
    }

.. note::

   This feature was added in `doit` 0.24.0. Older versions will not check or display error messages.


.. _auto-delayed-regex:

automatic regex for delayed task loaders
------------------------------------------

When specifying a target for `doit run`, *doit* usually only considers usual tasks and :ref:`delayed tasks ` which have a target regex specified.
Any task generated by a delayed task loader which has :ref:`no target regex specified ` will not be considered. By specifying `--auto-delayed-regex`, every delayed task loader having no target regex specified is assumed to have `.*` specified, a regex which matches any target. doit-0.30.3/doc/conf.py000066400000000000000000000241351305250115000145750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # doit documentation build configuration file, created by # sphinx-quickstart on Wed Apr 2 22:40:41 2014. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.viewcode', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'contents' # General information about the project. 
project = u'doit' copyright = u'2008-2015, Eduardo Schettino' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.30' # The full version, including alpha/beta/rc tags. release = '0.30' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build', 'presentation.rst'] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'sphinxdoc' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. 
#html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". html_title = "doit - automation tool" # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = '_static/favico.ico' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. html_use_index = False # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. 
html_show_sourcelink = False # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'doitdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('contents', 'doit.tex', u'doit Documentation', u'Eduardo Schettino', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). 
man_pages = [ ('contents', 'doit', u'doit Documentation', [u'Eduardo Schettino'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('contents', 'doit', u'doit Documentation', u'Eduardo Schettino', 'doit', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # -- Options for Epub output ---------------------------------------------- # Bibliographic Dublin Core info. epub_title = u'doit' epub_author = u'Eduardo Schettino' epub_publisher = u'Eduardo Schettino' epub_copyright = u'2014, Eduardo Schettino' # The basename for the epub file. It defaults to the project name. #epub_basename = u'doit' # The HTML theme for the epub output. Since the default themes are not optimized # for small screen space, using the same theme for HTML and epub output is # usually not wise. This defaults to 'epub', a theme designed to save visual # space. #epub_theme = 'epub' # The language of the text. It defaults to the language option # or en if the language is not set. #epub_language = '' # The scheme of the identifier. Typical schemes are ISBN or URL. #epub_scheme = '' # The unique identifier of the text. This can be a ISBN number # or the project homepage. #epub_identifier = '' # A unique identification for the text. #epub_uid = '' # A tuple containing the cover image and cover page html template filenames. 
#epub_cover = () # A sequence of (type, uri, title) tuples for the guide element of content.opf. #epub_guide = () # HTML files that should be inserted before the pages created by sphinx. # The format is a list of tuples containing the path and title. #epub_pre_files = [] # HTML files that should be inserted after the pages created by sphinx. # The format is a list of tuples containing the path and title. #epub_post_files = [] # A list of files that should not be packed into the epub file. #epub_exclude_files = [] # The depth of the table of contents in toc.ncx. #epub_tocdepth = 3 # Allow duplicate toc entries. #epub_tocdup = True # Choose between 'default' and 'includehidden'. #epub_tocscope = 'default' # Fix unsupported image types using the PIL. #epub_fix_images = False # Scale large images. #epub_max_image_width = 0 # How to display URL addresses: 'footnote', 'no', or 'inline'. #epub_show_urls = 'inline' # If false, no index is generated. #epub_use_index = True

.. file: doit-0.30.3/doc/configuration.rst

Configuration
=============

doit.cfg
--------

`doit` uses an INI style configuration file (see `configparser `_). Note: key/value entries can be separated only by the equal sign `=`.

If a file named `doit.cfg` is present in the current working directory, it is processed. It supports 3 kinds of sections:

- a `GLOBAL` section
- a section for each command
- a section for each plugin category

GLOBAL section
^^^^^^^^^^^^^^

The `GLOBAL` section may contain command line options that will be used (if applicable) by any command. Example setting the DB backend type::

    [GLOBAL]
    backend = json

All commands that have a `backend` option (*run*, *clean*, *forget*, etc.) will use this option without the need to pass it on the command line.

commands section
^^^^^^^^^^^^^^^^

To configure options for a specific command, use a section with the command name::

    [list]
    status = True
    subtasks = True

.. note::

   The key name is the internal option name; it might not be the same as the string used in the command line, i.e. `subtasks` above refers to `--all`.

plugins sections
^^^^^^^^^^^^^^^^

Check the :ref:`plugins ` section for an introduction to the available plugin categories.

configuration at *dodo.py*
--------------------------

As a convenience you can also set `GLOBAL` options directly in a `dodo.py`. Just put the option in the `DOIT_CONFIG` dict. The example below sets the default tasks to be run, the ``continue`` option, and a different reporter.

.. literalinclude:: tutorial/doit_config.py

So if you just execute

.. code-block:: console

    $ doit

it will have the same effect as executing

.. code-block:: console

    $ doit --continue --reporter json my_task_1 my_task_2

.. note::

   Not all options can be set in the `dodo.py` file. The parameters ``--file`` and ``--dir`` cannot be used in the config because they control how the *dodo* file itself is loaded. Also, if a command does not read the `dodo.py` file, the config there obviously will not be used.

.. file: doit-0.30.3/doc/contents.rst

.. doit documentation master file, created by
   sphinx-quickstart on Wed Apr 2 22:40:41 2014.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

`doit` documentation
====================

`doit` :ref:`documentation` introduces the concepts of *every* feature one by one with examples. It is the preferred way to learn `doit` and it also serves as a complete reference. You may skip parts of advanced task configuration used in non-trivial work-flows on a first read, but make sure you reach the docs describing the command line... The total reading time for the whole documentation is about one hour.

articles & tutorials
--------------------

For a short introduction check the articles below:

* Software Carpentry: `Automating an analysis pipeline using doit `_.
* Create a command line program using `doit` as a lib: `Power up your tools `_

.. _main-doc-toc:

documentation
-------------

.. toctree::
   :maxdepth: 2

   install
   tasks
   dependencies
   cmd_run
   cmd_other
   configuration
   task_args
   task_creation
   uptodate
   tools
   extending
   faq
   stories
   changes
   related

.. file: doit-0.30.3/doc/dependencies.rst

=====================
More on dependencies
=====================

.. _attr-uptodate:

uptodate
----------

Apart from file dependencies you can extend `doit` to support other ways to determine if a task is up-to-date, through the attribute ``uptodate``. This can be used in cases where you need some kind of calculation to determine if the task is up-to-date or not.

``uptodate`` is a list where each element can be True, False, None, a callable or a command (string).

* ``False`` indicates that the task is NOT up-to-date
* ``True`` indicates that the task is up-to-date
* ``None`` values will just be ignored. This is used when the value is dynamically calculated

.. note::

   An ``uptodate`` value equal to ``True`` does not override other up-to-date checks. It is one more way to check if a task is **not** up-to-date, i.e. if uptodate==True but a file_dep changes, the task is still considered **not** up-to-date.

If an ``uptodate`` item is a string it will be executed on the shell. If the process exits with the code ``0``, it is considered up-to-date. All other values are considered not up-to-date.

``uptodate`` elements can also be a callable that will be executed at runtime (not when the task is being created). The section ``custom-uptodate`` explains in detail how to extend `doit` by writing your own callables for ``uptodate``. These callables will typically compare a value at the present time with a value calculated on the last successful execution.

.. note::

   There is no guarantee ``uptodate`` callables or commands will be executed.
`doit` short-circuits the checks: if it is already determined that the task is not `up-to-date`, it will not execute the remaining ``uptodate`` checks.

`doit` includes several implementations to be used as ``uptodate``. They are all included in module `doit.tools` and will be discussed in detail :ref:`later `:

* :ref:`result_dep `: check if the result of another task has changed
* :ref:`run_once `: execute a task only once (used for tasks without dependencies)
* :ref:`timeout `: indicate that a task should "expire" after a certain time interval
* :ref:`config_changed `: check for changes in a "configuration" string or dictionary
* :ref:`check_timestamp_unchanged`: check access, status change/create or modify timestamp of a given file/directory

.. _up-to-date-def:

doit up-to-date definition
-----------------------------

A task is **not** up-to-date if any of:

* an :ref:`uptodate ` item is (or evaluates to) `False`
* a file is added to or removed from `file_dep`
* a `file_dep` changed since the last successful execution
* a `target` path does not exist
* a task has no `file_dep` and no `uptodate` item equal to `True`

It means that if a task does not explicitly define any *input* (dependency) it will never be considered `up-to-date`.

Note that since a `target` represents an *output* of the task, a missing `target` is enough to determine that a task is not `up-to-date`. But its existence by itself is not enough to mark a task `up-to-date`.

In some situations it is useful to define a task with targets but no dependencies. If you want to re-execute this task only when targets are missing, you must explicitly add a dependency: you could add an ``uptodate`` item with ``True`` value or use :ref:`run_once() ` to force at least one execution managed by `doit`. Example:
.. literalinclude:: tutorial/touch.py

Apart from ``file_dep`` and ``uptodate``, used to determine whether a task is up-to-date or not, ``doit`` also includes other kinds of dependencies (introduced below) to help you combine tasks so they are executed in the appropriate order.

task-dependency
---------------

It is used to enforce that tasks are executed in the desired order. By default tasks are executed in the same order as they were defined in the `dodo` file. To define a dependency on another task use the task name (whatever comes after ``task_`` in the function name) in the "task_dep" attribute.

.. note::

   A *task-dependency* **only** indicates that another task should be "executed" before itself. The task-dependency might not really be executed if it is *up-to-date*.

.. note::

   *task-dependencies* are **not** used to determine if a task is up-to-date or not. If a task defines only a *task-dependency* it will always be executed.

In this example we make sure we include a file with the latest revision number of the mercurial repository in the tar file.

.. literalinclude:: tutorial/tar.py

.. code-block:: console

    $ doit
    .  version
    .  tar

groups
^^^^^^^

You can define a group of tasks by adding tasks as dependencies and setting its `actions` to ``None``.

.. literalinclude:: tutorial/group.py

Note that tasks are never executed twice in the same "run".

.. _attr-calc_dep:

calculated-dependencies
------------------------

Calculating dependencies might be an expensive operation, so it is not suitable to be done at load time by task-creators. For this situation it is better to delegate the calculation of dependencies to another task. The task calculating dependencies must have a python-action returning a dictionary with `file_dep`, `task_dep`, `uptodate` or another `calc_dep`.

In the example below ``mod_deps`` prints on the screen all direct dependencies of a module.
The dependencies themselves are calculated in task ``get_dep`` (note: get_dep has a fake implementation where the results are taken from a dict).

.. literalinclude:: tutorial/calc_dep.py

setup-task
-------------

Some tasks may require some kind of environment setup. In this case they can define a list of "setup" tasks.

* the setup-task will be executed only if the task is to be executed (not up-to-date)
* setup-tasks are just normal tasks that follow all other task behavior

.. note::

   A *task-dependency* is executed before checking if the task is up-to-date. A *setup-task* is executed after the check if the task is up-to-date, and only if the task is not up-to-date and will be executed.

teardown
^^^^^^^^^^^

Tasks may also define 'teardown' actions. These actions are executed after all tasks have finished their execution. They are executed in the reverse order in which their tasks were executed.

Example:

.. literalinclude:: tutorial/tsetup.py

.. code-block:: console

    $ doit withenvX
    .  setup_sample:setupX
    start setupX
    .  withenvX:c
    x c
    .  withenvX:b
    x b
    .  withenvX:a
    x a
    stop setupX
    $ doit withenvY
    .  setup_sample:setupY
    start setupY
    .  withenvY
    y
    stop setupY

saving computed values
------------------------

Tasks can save computed values by returning a dictionary in their python-actions. The values must be JSON encodable.

A cmd-action can also save its output. But for this you will need to explicitly import `CmdAction` and set its `save_out` parameter with the *name* used to save the output in *values*.

.. literalinclude:: tutorial/save_out.py

These values can be used in uptodate_ and getargs_. Check those sections for examples.

getargs
--------

`getargs` provides a way to use values computed in one task in another task. The values are taken from "saved computed values" (the returned dict from a python-action). For a *cmd-action* use dictionary-based string formatting. For a *python-action* the action callable parameter names must match the keys from `getargs`.
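To make this mechanism concrete, here is a minimal sketch (the task and value names are invented for illustration): the `'getargs'` entry maps an action parameter name to a ``(task_name, value_name)`` pair, and `doit` wires the value saved by the first task's python-action into the second task's action.

.. code-block:: python

    # Sketch of passing a computed value between tasks via ``getargs``.

    def compute_version():
        # a python-action returning a dict saves these computed values
        return {'version': '1.0.2'}

    def write_banner(version):
        # parameter name matches the key used in ``getargs`` below
        print('building release %s' % version)

    def task_version():
        return {'actions': [compute_version]}

    def task_build():
        return {
            'actions': [write_banner],
            # argument 'version' <- value 'version' saved by task 'version'
            'getargs': {'version': ('version', 'version')},
        }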
`getargs` is a dictionary where the key is the argument name used on actions, and the value is a tuple with 2 strings: task name, "value name". .. literalinclude:: tutorial/getargs.py The values are being passed on to a python-action you can pass the whole dict by specifying the value name as ``None``. .. literalinclude:: tutorial/getargs_dict.py If a group-task is used, the values from all its sub-tasks are passed as a dict. .. literalinclude:: tutorial/getargs_group.py .. note:: ``getargs`` creates an implicit setup-task. doit-0.30.3/doc/dictionary.txt000066400000000000000000000053241305250115000162030ustar00rootroot00000000000000' '0 'a' 'actions' 'b' 'backend' 'c' 'cc 'changed' 'check 'choice' 'clean' 'command 'command' 'dep 'dependencies' 'display' 'doit 'doit' 'echo 'echo' 'edit' 'file 'files' 'input 'insert' 'java 'json' 'kbd 'kbd' 'library' 'link' 'main 'main' 'make' 'minversion' 'my 'myscript' 'name' 'notavalidchoice' 'params' 'result 'run 'setup' 'standard' 'strict' 'success 'targets' 'teardown' 'that' 'this' 'timestamp' 'title' 'values' 'verbosity' 'version' 'x' 'y' 0' 2to3 API BaseAction Biomechanics Blog CFEngine CMake CamelCase CmdAction CmdAction's CmdActions CmdParse Config ConsoleReporter Ctrl DelayedLoader DelayedTask Dembia DoIt DoitCmdBase DoitMain FPGA FileChangedChecker FooCmd GH Gliwinski Guo INI IPython IPython's InteractiveAction InvalidCommand JSON KeyError KeyboardInterrupt L485 LongRunning MD5 Metagenomics ModuleTaskLoader MyChecker MyCustomTask2 MyLoader Naufel PDB PYTHONPATH Popen PyPi PyPy PythonAction PythonInteractiveAction Pythonic README RSS ReST SCons Schettino Segata SetupSample SkOink Subclassing TODO TaskError TaskFailed TaskLoader Tpng Trento UptodateCalculator Uz VCS WORDBREAKS Waf ZeroReporter a2 abc api aplay app args atime attr autoclass autoload b4 backend backends backport bar' basename basenames bioinformatic biomechanics bitbucket blog bool c' cProfile calc callables cfg chdir cid cloudpickle cmd cmp codebase compFile 
compinit compressed1 config configparser cp cproject cron ctime cwd dbm debian defs dep dep' deps dev dev0 dict dict's dir doit dumbdbm dumpdb efg encodable env epydoc eq faq file' file0 file1 file2 file3 flagoff fnmatch folderXXX foo foo' fpath gdbm genstandalone getargs gif github gitter gmail google gprof2dot hardcode hashbang hashlib hello2 helloworld hg hggit hgrc html hunspell img init inotify interactiveaction internet ipython isatty iteritems iterkeys java js jsFile json json' kbd kwargs lelele letsdoit linux literalinclude longrunning lovin macfsevents makedirs maxdepth md5 md5sum metadata microbiome minversion mortem msg mtime mycmd mygroup mytask namespace nikola notavalidchoice o' once' online os outfile param param1 param2 params pathlib pdb perl picklable plugin plugins png popen pos pos1 pre prev programmatically pstats py py' py3 pyFiles pychecker pycon pyflakes pygments pyinotify pylogo pytest python2 python3 quickstart regex repr rst2s5 runtests s' s5defs schettino72 scons selecttasks settrace setupX setupY setuptools shrinksafe sourcecode sourceforge sqlite3 startswith stderr stdlib stdout str strace sudo t1 t2 t3 tabcompletion task1 task2 taskorder taskresult tasks' teardown time' timedelta timestamp titlewithactions toc toctree tsetup txt txt' unhandled unicode unix uptodate uptodate' utf8 utm virtualenv wget wildcard withenvX withenvY workdir workflow x' xxx xyz zsh

doit-0.30.3/doc/epydoc.config

[epydoc]
name: doit
modules: doit
output: html
frames: no
imports: yes

doit-0.30.3/doc/extending.rst

=========================
Extending `doit`
=========================

.. _extending:

`doit` is built to be extended, and this can be done at several levels.
So far we have seen:

1) Users can create new ways to define when a task is up-to-date using the `uptodate` task parameter (:ref:`more `)

2) You can customize how tasks are executed by creating new Action types (:ref:`more `)

3) Tasks can be created in different styles by creating custom task creators (:ref:`more `)

4) The output can be configured by creating custom reporters (:ref:`more `)

Apart from those, `doit` also provides a plugin system and exposes its internal API so you can create new applications on top of `doit`.

.. _custom_loader:

task loader customization
===========================

The task loader controls the source/creation of tasks. Normally `doit` tasks are defined in a `dodo.py` file. This file is loaded, and the list of tasks is created from the dicts containing task meta-data returned by the *task-creator* functions.

Subclass TaskLoader to create a custom loader:

.. autoclass:: doit.cmd_base.TaskLoader
   :members: load_tasks

The main program is implemented in `DoitMain`. Its constructor takes an instance of the task loader to be used.

Example: pre-defined task
----------------------------

In the full example below an application is created where the only task available is defined using a dict (so no `dodo.py` will be used).

.. literalinclude:: tutorial/custom_loader.py

.. _ModuleTaskLoader:

Example: load tasks from a module
-------------------------------------

The `ModuleTaskLoader` can be used to load tasks from a specified module, where this module specifies tasks in the same way as in `dodo.py`. `ModuleTaskLoader` is included in the `doit` source.

.. literalinclude:: tutorial/module_loader.py

`ModuleTaskLoader` can also take a `dict` where its items are functions or methods of an object.

.. _custom_command:

command customization
=====================

In `doit` a command usually performs some kind of operation on tasks: `run` to execute tasks, `list` to display available tasks, etc.
Most of the time you should really be creating tasks, but when developing a custom application on top of `doit` it may make sense to provide some extra commands...

To create a new command, subclass `doit.cmd_base.Command`, set some class variables and implement the `execute` method.

.. autoclass:: doit.cmd_base.Command
   :members: execute

``cmd_options`` uses the same format as :ref:`task parameters `.

If the command needs to access tasks it should sub-class `doit.cmd_base.DoitCmdBase`.

Example: scaffolding
----------------------

A common example is applications that provide some kind of scaffolding when creating new projects.

.. literalinclude:: tutorial/custom_cmd.py

.. _plugins:

plugins
=======

The `doit` plugin system is based on the use of *entry points*; a plugin does not need to implement any kind of "plugin interface". It needs only to implement the API of the component it is extending.

Plugins can be enabled in 2 different ways:

- *local plugins* are enabled through the `doit.cfg` file.
- plugins installed with *setuptools* (that provide an entry point) are automatically enabled on installation.

Check this `sample plugin `_ for details on how to create a plugin.

config plugin
-------------

To enable a plugin, create a section named after the plugin category. The value is an entry point to the python class/function/object that implements the plugin, in the format ``module-name:attribute-name``. Example of a command plugin implemented in the *class* `FooCmd`, located in the module `my_plugins.py`::

    [COMMAND]
    foo = my_plugins:FooCmd

.. note:: The python module containing the plugin must be in the *PYTHONPATH*.

category COMMAND
----------------

Creates a new sub-command. Check the :ref:`command ` section for details on how to create a new command.

category BACKEND
----------------

Implements the internal `doit` DB storage system. Check the module `doit/dependency.py` to see the existing implementation / API.

..
_plugin_reporter:

category REPORTER
-----------------

Register a custom reporter as introduced in the :ref:`custom reporter` section.

category LOADER
----------------

Creates a custom task loader. Check the :ref:`loader ` section for details on how to create a new loader. Apart from getting the plugin, you also need to indicate which loader will be used in the `GLOBAL` section of your config file.

.. code-block:: INI

    [GLOBAL]
    loader = my_loader

    [LOADER]
    my_loader = my_plugins:MyLoader

doit-0.30.3/doc/faq.rst

=======
FAQ
=======

Why is `doit` written in all lowercase instead of CamelCase?
-------------------------------------------------------------

At first it was written in CamelCase, `DoIt`, but depending on the font some people would read it as `dolt `_ with an `L` instead of an `I`. So I just set it as lowercase to avoid confusion.

*doit* is too verbose, why don't you use decorators?
-----------------------------------------------------

`doit` is designed to be extensible. A simple dictionary is actually the most flexible representation. It is possible to create different interfaces on top of it. Check this `blog post `_ for some examples.

`dodo.py` file itself should be a `file_dep` for all tasks
-----------------------------------------------------------

If I edit my `dodo.py` file and re-run *doit*, and my tasks are otherwise up-to-date, the modified tasks are not re-run. While developing your tasks it is recommended to use ``doit forget`` after you change your tasks, or to use ``doit --always-run``. If you really want this behavior, you will need to explicitly add `dodo.py` to the `file_dep` of your tasks manually.

If `dodo.py` were an implicit `file_dep`:

* how would you disable it?
* should imported files from your `dodo.py` also be a `file_dep`?

Why can't `file_dep` depend on a directory/folder?
------------------------------------------------------

A `file_dep` is considered to not be up-to-date when the content of the file changes. But what is a folder change? Some people expect it to be a change in any of the files it contains (for this case, see the question below). Others expect it to be whether the folder exists or not, or whether a new file was added to or removed from the folder (for these cases you should implement a custom ``uptodate``; :ref:`check the API`).

How to make a dependency on all files in a folder?
----------------------------------------------------

``file_dep`` does NOT support folders. If you want to specify all files from a folder you can use a third party library like `pathlib `_ (`pathlib` was added to the stdlib in python 3.4).

doit-0.30.3/doc/index.rst

:orphan:

.. rubric:: `doit` is a task management & automation tool

.. rubric:: `doit` comes from the idea of bringing the power of build-tools to execute any kind of **task**

`doit` is a modern open-source build-tool written in python, designed to be simple to use and flexible enough to deal with complex work-flows. It is specially suitable for building and managing custom work-flows where there is no out-of-the-box solution available.

`doit` has been successfully used on: systems test/integration automation, scientific computational pipelines, content generation, configuration management, etc. Check some `success stories `_ ...

introduction
============

A **task** describes some computation to be done (*actions*), and contains some extra meta-data.

.. code-block:: python

    def task_example():
        return {
            'actions': ['myscript'],
            'file_dep': ['my_input_file'],
            'targets': ['result_file'],
        }

**actions**:

- can be external programs (executed as shell commands) or python functions.
- a single task may define more than one action.
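To illustrate that last point, here is a minimal sketch (the helper function and file names are illustrative, not part of the tutorial) of a single task combining a shell command and a python function as its actions:

```python
def _count_lines():
    # python action: print the number of lines of the input file
    with open('my_input_file') as stream:
        print(sum(1 for _ in stream))

def task_example_multi():
    return {
        # actions run in order: first the shell command, then the callable
        'actions': ['wc -c my_input_file', _count_lines],
        'file_dep': ['my_input_file'],
    }
```

Each entry in ``actions`` is executed in sequence, and the task only succeeds if all of them succeed.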
**task meta-data**:

- task meta-data includes a description of the input files for the *actions* (**dependencies**) and the result files (**targets**)
- there are many other meta-data fields to control how and when a task is executed...

*doit* uses the task's meta-data to:

.. topic:: cache task results (aka *incremental-builds*)

   `doit` checks if the task is **up-to-date** and skips its execution if the task would produce the same result as a previous execution.

.. topic:: correct execution order

   By checking the inter-dependency between tasks `doit` ensures that tasks will be executed in the correct order.

.. topic:: parallel execution

   built-in support for parallel (threaded or multi-process) task execution (:ref:`more `)

Traditional build-tools were created mainly to deal with the compile/link process of source code. `doit` was designed to solve a broader range of tasks.

.. topic:: powerful dependency system

   - the *up-to-date* check is not restricted to looking for file modification on dependencies, it can be customized for each task (:ref:`more `)
   - *target* files are not required in order to check if a task is up-to-date (:ref:`more `)
   - *dependencies* can be dynamically calculated by other tasks (:ref:`more `)

A task's metadata (actions, dependencies, targets...) is better described in a declarative way, but often you want to create this metadata programmatically.

.. topic:: flexible task definition

   `doit` uses plain python modules to create tasks (and their meta-data)

.. topic:: customizable task definition

   By default tasks are described by a python `dict`, but this can be easily customized. (:ref:`more `)

.. topic:: debugger

   Since plain python is used to define your tasks, the python debugger (`pdb`) is available as in any other python application.

Other features...

.. topic:: self documented

   the `doit` command allows you to list and obtain help/documentation for tasks (:ref:`more `)

..
topic:: inotify integration

   built-in support for a long-running process that automatically re-executes tasks based on file changes made by external processes (linux/mac only) (:ref:`more `)

.. topic:: custom output

   process output can be completely customized through *reporters* (:ref:`more `)

.. topic:: tab-completion

   built-in tab-completion support for commands/tasks (supports bash and zsh) (:ref:`more `)

.. topic:: IPython integration

   provides a `%doit` magic function that loads tasks defined directly in IPython's global namespace (:ref:`more `)

.. topic:: extensible

   Apart from using `doit` to automate your project, it also exposes its API so you can create new applications/tools using `doit` functionality (:ref:`more `)

Check the `documentation `_ for more features...

What people are saying about `doit`
=====================================

Congratulations! Your tool follows the KISS principle very closely. I always wondered why build tools had to be that complicated. - `Elena `_

Let me start by saying I'm really lovin doit, at first the interface seemed verbose but quickly changed my mind when I started using it and realized the flexibility. Many thanks for the great software! - `Michael Gliwinski `_

I love all the traditional unix power tools, like cron, make, perl, ..., I also like new comprehensive configuration management tools like CFEngine and Puppet. But I find doit to be so versatile and so productive. - `Charlie Guo `_

I went back and forth on different Pythonic build tools for awhile. Scons is pretty great if you're doing 'standard' sorts of builds, but I found it a little heavy for my tastes and really hard to customize to my tool flow (in FPGA land, there are all kinds of nonstandard vendor tools that all need to play together). I've been using doit more and more over the past few months, and I'm continually impressed by the tool (aside from the goofy name). It works amazingly well for automating tricky/exotic build processes. Check it out!
`SkOink `_ I needed a sort of 'make' tool to glue things together and after trying out all kinds, doit ... has actually turned out to be beautiful. Its easy to add and manage tasks, even complex ones-- gluing things together with decorators and 'library' functions I've written to do certain similar things. - `Matthew `_ Some time ago, I grew frustrated with Make and Ant and started porting my build files to every build tool I found (SCons, Waf, etc.). Each time, as soon as I stepped out of already available rules, I ran into some difficult to overcome stumbling blocks. Then I discovered this little gem of simplicity: doit. It's Python-based. It doesn't try to be smart, it does not try to be cool, it just works. If you are looking for a flexible little build tool for different languages and tasks, give it a chance. (...) - `lelele `_ `Success Stories... `_ Project Details =============== * This is an open-source project (`MIT license `_) written in python. Runs on Python 3.3 through 3.6 (including PyPy support). For python 2 support please use *doit* version 0.29. * Download from `PyPi `_ * Please check the community `guidelines `_ before asking questions and reporting issues. * Project management (bug tracker, feature requests and source code ) on `github `_. * `doit projects `_ contains a collection of third-party projects, plugins, extensions, non-trivial examples and re-usable task creators for `doit`. * Questions and feedback on `Google group `_. Please do **not** send questions to my private email. * This web site is hosted on http://pages.github.com * Professional support and consulting services available from `doit` creator & maintainer (*schettino72* at gmail.com). Status ====== This blog `post `_ explains how everything started in 2008. `doit` is under active development. Version 0.30 released on 2016-11. `doit` core features are quite stable. If there is no recent development, it does NOT mean `doit` is not being maintained... 
The project has 100% unit-test code coverage. Development is done based on real world use cases. It is well designed and has a small code base, so adding new features is not hard. Patches are welcome.

doit-0.30.3/doc/install.rst

==========
Installing
==========

* Using `pip `_::

    $ pip install doit

  The latest version of `doit` supports only python 3. If you are using python 2::

    $ pip install doit==0.29.0

* `Download `_ the source and::

    $ python setup.py install

* Get the latest development version::

    $ git clone https://github.com/pydoit/doit.git

.. note::

  * `doit` depends on the packages `pyinotify `_ (for linux), `macfsevents `_ (mac).

doit-0.30.3/doc/make.bat

@ECHO OFF
REM Command file for Sphinx documentation

set SPHINXBUILD=sphinx-build
set ALLSPHINXOPTS=-d _build/doctrees %SPHINXOPTS% .
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^` where ^ is one of
	echo.  html      to make standalone HTML files
	echo.  dirhtml   to make HTML files named index.html in directories
	echo.  pickle    to make pickle files
	echo.  json      to make JSON files
	echo.  htmlhelp  to make HTML files and a HTML help project
	echo.  qthelp    to make HTML files and a qthelp project
	echo.  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  changes   to make an overview over all changed/added/deprecated items
	echo.  linkcheck to check all external links for integrity
	echo.  doctest   to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (_build\*) do rmdir /q /s %%i
	del /q /s _build\*
	goto end
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% _build/html
	echo.
	echo.Build finished. The HTML pages are in _build/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% _build/dirhtml
	echo.
	echo.Build finished. The HTML pages are in _build/dirhtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% _build/pickle
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% _build/json
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% _build/htmlhelp
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in _build/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% _build/qthelp
	echo.
	echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in _build/qthelp, like this:
	echo.^> qcollectiongenerator _build\qthelp\doit.qhcp
	echo.To view the help file:
	echo.^> assistant -collectionFile _build\qthelp\doit.ghc
	goto end
)

if "%1" == "latex" (
	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% _build/latex
	echo.
	echo.Build finished; the LaTeX files are in _build/latex.
	goto end
)

if "%1" == "changes" (
	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% _build/changes
	echo.
	echo.The overview file is in _build/changes.
	goto end
)

if "%1" == "linkcheck" (
	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% _build/linkcheck
	echo.
	echo.Link check complete; look for any errors in the above output ^
or in _build/linkcheck/output.txt.
	goto end
)

if "%1" == "doctest" (
	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% _build/doctest
	echo.
	echo.Testing of doctests in the sources finished, look at the ^
results in _build/doctest/output.txt.
	goto end
)

:end

doit-0.30.3/doc/presentation.rst

..
include::

======================================================================
doit: bringing the power of build tools to execute any kind of task
======================================================================

"Anything worth repeating is worth automating"

:Author: Eduardo Schettino

build tools
============

Tools that manage repetitive tasks and their dependencies

* 1977: Make
* C & other compiled languages

Make - how it works
===================

* rules::

    target: dependencies ...
        commands ...

* simple (and fragile) dependency checking (timestamps)

Make - how does it look?
========================

.. class:: tiny

.. sourcecode:: make

    helloworld: helloworld.o
    	cc -o $@ $<

    helloworld.o: helloworld.c
    	cc -c -o $@ $<

    .PHONY: clean
    clean:
    	rm -f helloworld helloworld.o

Make - problems
===============

* Make is not a programming language (how can I do a loop?)
* compact syntax (but hard to understand and remember)
* hard to debug

other build tools
==================

* CMake
* java/XML: ant, maven
* Rake
* SCons

dynamic languages
=================

* who needs a build tool?
* unit-test

*web development*

* heavy use of database
* tests on browsers are slow
* need environment setup (start servers, reset DB...)

doit - design
=============

* nice language => python
* get out of your way
* we-don't-need-no-stinking-API (learned from pytest)
* dependencies by task not on file/targets (unique feature)

doit - do what?
================

it's up to you!

doit is a tool to help you execute **your** tasks in an efficient way.

doit - how it works
===================

* actions => what the task does

  * python (portable)
  * shell commands (fast, easy to use other programs)

* targets => what this task creates
* dependencies => what this task uses as input

doit - how does it look? (1)
============================

.. class:: tiny

..
sourcecode:: python

    DEFAULT_TASKS = ['edit']

    # map source file to dependencies
    SOURCE = {
        'main': ["defs.h"],
        'kbd': ["defs.h command.h"],
        'command': ["defs.h command.h"],
        'display': ["defs.h buffer.h"],
        'insert': ["defs.h buffer.h"],
        'files': ["defs.h buffer.h command.h"],
    }

    OBJECTS = ["%s.o" % module for module in SOURCE.iterkeys()]

doit - how does it look? (2)
============================

.. class:: tiny

.. sourcecode:: python

    def task_edit():
        return {'actions': ['cc -o edit %s' % " ".join(OBJECTS)],
                'dependencies': OBJECTS,
                'targets': ['edit']}

    def task_object():
        for module, dep in SOURCE.iteritems():
            dependencies = dep + ['%s.c' % module]
            yield {'name': module,
                   'actions': ["cc -c %s.c" % module],
                   'targets': ["%s.o" % module],
                   'dependencies': dependencies,
                   }

doit - how does it look? (3)
============================

.. class:: tiny

.. sourcecode:: python

    import os

    def task_clean():
        for f in ['edit'] + OBJECTS:
            yield {'name': f,
                   'actions': [(os.remove, f)],
                   'dependencies': [f]}

doit - no targets
=================

.. class:: tiny

.. sourcecode:: python

    import glob; pyFiles = glob.glob('*.py')

    def task_checker():
        for f in pyFiles:
            yield {'actions': ["pychecker %s" % f],
                   'name': f,
                   'dependencies': (f,)}

doit - run once
===============

.. class:: tiny

.. sourcecode:: python

    URL = "http://svn.dojotoolkit.org/src/util/trunk/shrinksafe/shrinksafe.jar"
    shrinksafe = "shrinksafe.jar"
    jsFile = "file1.js"
    compFile = "compressed1.js"

    def task_shrink():
        return {'actions': ['java -jar %s %s > %s' % (shrinksafe, jsFile, compFile)],
                'dependencies': [shrinksafe]}

    def task_get_shrinksafe():
        return {'actions': ["wget %s" % URL],
                'targets': [shrinksafe],
                'dependencies': [True]}

doit - groups
=============

.. class:: tiny

.. sourcecode:: python

    def task_foo():
        return {'actions': ["echo foo"]}

    def task_bar():
        return {'actions': ["echo bar"]}

    def task_mygroup():
        return {'actions': None,
                'dependencies': [':foo', ':bar']}

doit - environment setup (1)
============================

..
class:: tiny

.. sourcecode:: python

    ### task setup env. good for functional tests!
    class SetupSample(object):
        def __init__(self, server):
            self.server = server

        def setup(self):
            # start server
            pass

        def cleanup(self):
            # stop server
            pass

doit - environment setup (2)
============================

.. class:: tiny

.. sourcecode:: python

    setupX = SetupSample('x')
    setupY = SetupSample('y')

    def task_withenvX():
        for fin in ('a', 'b', 'c'):
            yield {'name': fin, 'actions': ['echo x'], 'setup': setupX}

    def task_withenvY():
        return {'actions': ['echo x'], 'setup': setupY}

doit - cmd line
===============

* run
* list
* forget

doit - future
=============

* community > 1
* support clean task
* command line parameters
* specific support for common tasks (C compilation)
* dependency scanners
* speed improvements

thanks
===========

Questions?

doit website: http://python-doit.sourceforge.net

references:

- http://software-carpentry.org/
- http://www.gnu.org/software/make/

presentation written in ReST/rst2s5 + pygments

doit-0.30.3/doc/related.rst

================
Related Projects
================

These are the main build tools in use today.

- `make `_
- `ant `_
- `SCons `_
- `Rake `_

There are `many `_ more... In this `post `_ I briefly explained my motivation to start another build-tool-like project.

doit-0.30.3/doc/stories.rst

Success Stories
===============

Do you have a success story? Please share it! Send a pull-request on github describing your project, how `doit` is used, and why `doit` was chosen.

.. contents::
   :local:

Scientific
----------

Software Carpentry
^^^^^^^^^^^^^^^^^^

The `Software Carpentry Foundation `_ is a non-profit membership organization devoted to improving basic computing skills among researchers in science, engineering, medicine, and other disciplines.
`doit` is introduced in the Software Carpentry workshop lesson: `Automating an analysis pipeline using doit `_. Biomechanics Lab / Stanford University, USA ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ by `Christopher Dembia `_ (2014-04-03) I am a graduate student in a biomechanics lab that studies how humans coordinate their muscles in movements like walking or running. We record someone's motion using reflective motion capture markers and by recording the force their feet exert on the ground. We put this data into our software, which gives back an estimate of the force each muscle (92 muscles) was generating throughout the observed motion. In a typical study, we record about 100 walking motions. To analyze a single walking motion, we need to run 4 different executables in sequence. Each executable requires a handful of input files, and generates a handful of output files that a subsequent executable uses as input. So, a study entails about 1000 files, some of which contain raw data, but most of which are intermediate files (output of one executable and input to another executable). Typically, a researcher manages this workflow manually. However, that is prone to error, as a researcher may forget to properly modify all relevant files if an error is noticed in, for example, a raw data file. With `doit`, I am automating this workflow for my current study. This allows me to avoid errors and avoid unnecessary duplication of files. Most importantly, if I learn that I must modify something in a file that is an input toward the beginning of this workflow, `doit` will allow me to automatically update all my results without missing a step. I tried to do this with `Make` first. `Make` just wasn't made to do what I want. Also, my lab's software has python bindings, so my entire workflow can be in python. Also, the ability to script anything directly into the workflow is important, and `Make` can't do that. `CMake` was another option, but that's not general enough. 
`doit` is just completely generic, and the interface is simple yet very flexible. `Computational Metagenomics Lab `_ / University of Trento, Italy ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ by `Nicola Segata `_ (2015-01-20) My laboratory of Computational Metagenomics at University of Trento studies the human microbiome, i.e. the huge microbial diversity populating our body. Our analyses involve processing several thousands of microbial genomes (long sequences of DNA) with a series of computational steps (mostly written in python) on subset of those genomes. Genomes are organized in a hierarchical structure (the taxonomy) and many steps of our pipelines need to take into account these dependencies. `doit` is just the right way to do this. We actually even tried to implement something like this on our own, but we are now switching to `doit` because it has all the features we need and it is intuitive to use. We particularly love the possibility of specifying dependencies "on the fly". We are now thinking to convert all our pipelines to the `doit` format. Given the importance of `doit` for our research and its potential for bioinformatic pipelines we are happy to support this project scientifically (by citing it in our papers, mentioning it in our funding requests, etc). Thanks for developing `doit`, it's just wonderful for computational biology (and for many other tasks, of course, but this is our research field:)! Content Generation ------------------ Nikola ^^^^^^ by `the Nikola team `_ `Nikola `_ is a Static Site and Blog Generator. doit is used to process all the tasks required for building the website (HTML files, indexes, RSS, copying files…). Use of doit makes Nikola unique: unlike other static site generators, Nikola regenerates only the files that were changed since last build (and not all files in the site!). ``nikola build``, the centerpiece of Nikola, is basically the usual ``doit run`` command. 
doit is what makes Nikola extremely fast, even for large sites. Only a handful of files actually *change* on a rebuild. Using the dependency architecture of doit (for files and configuration), we are able to rebuild only what is needed.

Nikola is an `open-source `_ project with many users and contributors.

doit-0.30.3/doc/svg/ (SVG logo files: doit-text-full.svg, doit-text-sq.svg, doit-text.svg, doit.svg)

doit-0.30.3/doc/task_args.rst

Passing Task Arguments from the command line
============================================

.. _parameters:

arguments
-----------

It is possible to pass option parameters to the task through the command line. Just add a ``params`` field to the task dictionary. ``params`` must be a list of dictionaries where every entry is an option parameter. Each parameter must define a name and a default value. It can optionally define "short" and "long" names to be used from the command line (it follows unix command line conventions). It may also specify additional attributes, such as `type` and `help` (see :ref:`below `).

See the example:

.. literalinclude:: tutorial/parameters.py

For python-actions the python function must define arguments with the same name as the task parameters.

.. code-block:: console

    $ doit py_params -p abc --param2 4
    .  py_params
    abc
    9

Need a list in your python function? Specify an option with ``type`` set to ``list``.

..
code-block:: console

    $ doit py_params_list -l milk -l eggs -l bread
    .  py_params_list
    milk
    eggs
    bread

Choices can be set by specifying an option with ``choices`` set to a sequence of 2-element tuples. The first element is the choice value. The second element is the choice description; if no description is required, use an empty string.

.. code-block:: console

    $ doit py_params_choice -c that
    .  py_params_choice
    that

Invalid choices are detected and reported back to the user.

.. code-block:: console

    $ doit py_params_choice -c notavalidchoice
    ERROR: Error parsing parameter 'choice'. Provided 'notavalidchoice' but available choices are: 'this', 'that'.

For cmd-actions use python string substitution notation:

.. code-block:: console

    $ doit cmd_params -f "-c --other value"
    .  cmd_params
    mycmd -c --other value xxx

.. _parameters-attributes:

All parameters attributes
^^^^^^^^^^^^^^^^^^^^^^^^^

Here is the list of all attributes ``param`` accepts:

``name``

    Name of the parameter, the identifier used as the name of the parameter in python code. It should be unique among the others.

    :required: True
    :type: `str`

``default``

    Default value, used when the parameter is not set through the command-line.

    :required: True

``short``

    Short parameter form, used for e.g. ``-p value``.

    :required: optional
    :type: `str`

``long``

    Long parameter form, used for e.g. ``--parameter value``.

    :required: optional
    :type: `str`

``type``

    Actually it can be any python callable. It converts the string value received from the command line to whatever value is to be used in python code. If the ``type`` is ``bool`` the parameter is treated as an *option flag* where no value should be specified, and the value is set to ``True``. Example: ``doit mytask --flag``.

    :required: optional
    :type: `callable` (e.g. a `function`)
    :default: `str`

``choices``

    List of accepted value choices for the option. The first tuple element is the value name, the second tuple element is a help description for the value.
    :required: optional
    :type: list of 2-tuple strings

``help``
    Help message associated with this parameter, shown when :ref:`help ` is called for this task, e.g. ``doit help mytask``.

    :required: optional
    :type: `str`

``inverse``
    [only for `bool` parameter] Set the long parameter name of an inverse flag; when used, the value will be set to ``False`` (see example below).

    :required: optional
    :type: `str`

Example, given the following code:

.. literalinclude:: tutorial/parameters_inverse.py

calls to task `with_flag` show the flag on or off:

.. code-block:: console

    $ doit with_flag
    .  with_flag
    Flag On
    $ doit with_flag --flagoff
    .  with_flag
    Flag Off

positional arguments
------------------------

Tasks might also get positional arguments from the command line as standard unix commands do, with positional arguments *after* optional arguments.

.. literalinclude:: tutorial/pos.py

.. code-block:: console

    $ doit pos_args -p 4 foo bar
    .  pos_args
    param1 is: 4
    positional-0: foo
    positional-1: bar

.. warning::

    If a task accepts positional arguments, it is not allowed to pass other tasks after it in the command line. For example if `task1` takes positional arguments you can not call::

        $ doit task1 pos1 task2

    As the string `task2` would be interpreted as a positional argument of `task1`, not as another task name.

.. _command line variables:

command line variables (*doit.get_var*)
-----------------------------------------

It is possible to pass variable values to be used in dodo.py from the command line.

.. literalinclude:: tutorial/get_var.py

.. code-block:: console

    $ doit
    .  echo
    hi {abc: NO}
    $ doit abc=xyz x=3
    .  echo
    hi {abc: xyz}

doit-0.30.3/doc/task_creation.rst

More on Task creation
=====================

importing tasks
---------------

The *doit* loader will look at **all** objects in the namespace of the *dodo*. It will look for functions starting with ``task_`` and objects with ``create_doit_tasks``.
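The two discovery rules just described (the name starts with ``task_``, or the object exposes a ``create_doit_tasks`` callable) can be sketched in plain python. The ``discover`` function below is an illustrative stand-in for the loader's scan, not actual doit code, and ``task_hello``/``Sample`` are made-up examples:

```python
def task_hello():
    """A task-creator: found because its name starts with 'task_'."""
    return {'actions': ['echo hello']}

class Sample:
    """Found because it exposes a `create_doit_tasks` callable."""
    def create_doit_tasks(self):
        return {'basename': 'sample', 'actions': ['echo hi']}

def discover(namespace):
    """Collect names of task-creators using the same two rules as the loader."""
    found = []
    for name, obj in namespace.items():
        if name.startswith('task_') and callable(obj):
            found.append(name)
        elif hasattr(obj, 'create_doit_tasks'):
            found.append(name)
    return sorted(found)

# discover({'task_hello': task_hello, 'sample': Sample(), 'other': 42})
# → ['sample', 'task_hello']
```

Note that any other object in the namespace (plain helpers, constants) is simply ignored, which is why importing task definitions from other modules works.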
So it is also possible to load task definitions from other modules just by importing them into your *dodo* file. .. literalinclude:: tutorial/import_tasks.py .. code-block:: console $ doit list echo hello sample .. note:: Importing tasks from different modules is useful if you want to split your task definitions in different modules. The best way to create re-usable tasks that can be used in several projects is to call functions that return task dict's. For example take a look at a reusable *pyflakes* `task generator `_. Check the project `doit-py `_ for more examples. .. _delayed-task-creation: delayed task creation --------------------- `doit` execution model is divided in two phases: - *task-loading* : search for task-creator functions (that starts with string `task_`) and create task metadata - *task-execution* : check which tasks are out-of-date and execute them Normally *task-loading* is completed before the *task-execution* starts. `doit` allows some task metadata to be modified during *task-execution* with `calc_deps` and on `uptodate`, but those are restricted to modifying already created tasks... Sometimes it is not possible to know all tasks that should be created before some tasks are executed. For these cases `doit` supports *delayed task creation*, that means *task-execution* starts before *task-loading* is completed. When *task-creator* function is decorated with `doit.create_after`, its evaluation to create the tasks will be delayed to happen after the execution of the specified task in the `executed` param. .. literalinclude:: tutorial/delayed.py .. _specify-target-regex: .. note:: To be able to specify targets created by delayed task loaders to `doit run`, it is possible to also specify a regular expression (regex) for every delayed task loader. If specified, this regex should match any target name possibly generated by this delayed task generator. It can be specified via the additional *task-generator* argument `target_regex`. 
In the above example, the regex `.*\\.out` matches every target name ending with `.out`. It is possible to match every possible target name by specifying `.*`. Alternatively, one can use the command line option `--auto-delayed-regex` to `doit run`; see :ref:`here ` for more information.

Parameter: `creates`
++++++++++++++++++++

In case the task created by a `DelayedTask` has a different *basename* than the creator function, or creates several tasks with different *basenames*, you should pass the parameter `creates`. Since `doit` will only execute the body of the task-creator function on demand, the task names must be explicitly specified...

Example:

.. literalinclude:: tutorial/delayed_creates.py

.. _create-doit-tasks:

custom task definition
------------------------

Apart from collecting functions that start with the name `task_`, the *doit* loader will also execute the ``create_doit_tasks`` callable from any object that contains this attribute.

.. literalinclude:: tutorial/custom_task_def.py

The `project letsdoit `_ has some real-world implementations. For simple examples to help you create your own, check this `blog post `_.

doit-0.30.3/doc/tasks.rst

========
Tasks
========

Intro
-------

`doit` is all about automating task dependency management and execution. Tasks can execute external shell commands/scripts or python functions (actually any callable). So a task can be anything you can code :)

Tasks are defined in a plain `python `_ module with some conventions.

.. note::

    You should be comfortable with python basics. If you don't know python yet check the `Python tutorial `_.

A function that starts with the name `task_` defines a *task-creator* recognized by `doit`. These functions must return (or yield) dictionaries representing a *task*. A python module/file that defines *tasks* for `doit` is called a **dodo** file (that is something like a `Makefile` for `make`).
Take a look at this example (file dodo.py): .. literalinclude:: tutorial/hello.py When `doit` is executed without any parameters it will look for tasks in a file named `dodo.py` in the current folder and execute its tasks. .. code-block:: console $ doit . hello On the output it displays which tasks were executed. In this case the `dodo` file has only one task, `hello`. Actions -------- Every *task* must define **actions**. It can optionally define other attributes like `targets`, `file_dep`, `verbosity`, `doc` ... Actions define what the task actually does. *Actions* is always a list that can have any number of elements. The actions of a task are always run sequentially. There are 2 basic kinds of `actions`: *cmd-action* and *python-action*. The action "result" is used to determine if task execution was successful or not. python-action ^^^^^^^^^^^^^^ If `action` is a python callable or a tuple `(callable, *args, **kwargs)` - only `callable` is required. The callable must be a function, method or callable object. Classes and built-in functions are not allowed. ``args`` is a sequence and ``kwargs`` is a dictionary that will be used as positional and keywords arguments for the callable. see `Keyword Arguments `_. The result of the task is given by the returned value of the ``action`` function. For **successful** completion it must return one of: * `True` * `None` * a dictionary * a string For **unsuccessful** completion it must return one of: * `False` indicates the task generally failed * if it raises any exception, it will be considered an error * it can also explicitly return an instance of :py:class:`TaskFailed` or :py:class:`TaskError` If the action returns a type other than the types already discussed, the action will be considered a failure, although this behavior might change in future versions. .. literalinclude:: tutorial/tutorial_02.py The function `task_hello` is a *task-creator*, not the task itself. 
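The success/failure rules for python-action return values listed above can be summarized in a small sketch. This is an illustration of the rules only, not doit's actual implementation, and it leaves out the cases handled separately (raised exceptions, and explicit ``TaskFailed``/``TaskError`` instances):

```python
def classify_action_result(result):
    """Classify a python-action return value per the rules above.

    Raised exceptions and TaskFailed/TaskError instances are handled
    separately by doit and are not covered by this sketch.
    """
    if result is None or result is True:
        return 'success'
    if isinstance(result, (dict, str)):
        return 'success'  # dict/str results also count as successful
    return 'failure'      # False, or any other unexpected type

# classify_action_result(True)     → 'success'
# classify_action_result({'a': 1}) → 'success'
# classify_action_result(False)    → 'failure'
```

Note that ``True`` must be checked before the ``isinstance`` test, since an integer-like ``1`` would otherwise be misclassified; any plain number returned by an action is treated as a failure.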
The body of the task-creator function is always executed when the dodo file is loaded. .. topic:: task-creators vs actions The body of task-creators are executed even if the task is not going to be executed. The body of task-creators should be used to create task metadata only, not execute tasks! From now on when the documentation says that a *task* is executed, read "the task's actions are executed". `action` parameters can be passed as ``kwargs``. .. literalinclude:: tutorial/task_kwargs.py cmd-action ^^^^^^^^^^^ CmdAction's are executed in a subprocess (using python `subprocess.Popen `_). If `action` is a string, the command will be executed through the shell. (Popen argument shell=True). Note that the string must be escaped according to `python string formatting `_. It is easy to include dynamic (on-the-fly) behavior to your tasks with python code from the `dodo` file. Let's take a look at another example: .. literalinclude:: tutorial/cmd_actions.py .. note:: The body of the *task-creator* is always executed, so in this example the line `msg = 3 * "hi! "` will always be executed. If `action` is a list of strings and instances of any Path class from `pathlib `_, by default it will be executed **without the shell** (Popen argument shell=False). .. literalinclude:: tutorial/cmd_actions_list.py For complex commands it is also possible to pass a callable that returns the command string. In this case you must explicit import CmdAction. .. literalinclude:: tutorial/cmd_from_callable.py You might also explicitly import ``CmdAction`` in case you want to pass extra parameters to ``Popen`` like ``cwd``. All keyword parameter from ``Popen`` can be used on ``CmdAction`` (except ``stdout`` and ``stderr``). .. note:: Different from `subprocess.Popen`, `CmdAction` `shell` argument defaults to `True`. All other `Popen` arguments can also be passed in `CmdAction` except `stdout` and `stderr` The result of the task follows the shell convention. 
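That shell convention is easy to demonstrate with plain ``subprocess``, independent of doit (this is a standalone illustration, not doit code): an exit status of `0` means success, anything else is a failure.

```python
import subprocess
import sys

def cmd_succeeded(args):
    """Run a command and apply the shell convention: status 0 == success."""
    return subprocess.run(args).returncode == 0

# Use the current python interpreter so the example is portable.
ok = cmd_succeeded([sys.executable, '-c', 'raise SystemExit(0)'])
failed = cmd_succeeded([sys.executable, '-c', 'raise SystemExit(1)'])
# ok is True, failed is False
```

`CmdAction` applies exactly this check to the subprocess it spawns when deciding whether the task succeeded.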
If the process exits with the value `0` it is successful. Any other value means the task failed. .. _custom-actions: custom actions ^^^^^^^^^^^^^^^^^^^ It is possible to create other type of actions, check :ref:`tools.LongRunning` as an example. keywords on actions -------------------- It is common situation to use task information such as *targets*, *dependencies*, or *changed* in its own actions. Note: Dependencies here refers only to *file-dependencies*. For *cmd-action* you can use the python notation for keyword substitution on strings. The string will contain all values separated by a space (" "). For *python-action* create a parameter in the function, `doit` will take care of passing the value when the function is called. The values are passed as list of strings. .. literalinclude:: tutorial/hello.py You can also pass the keyword *task* to have a reference to all task metadata. .. literalinclude:: tutorial/meta.py .. note:: Note that the *task* argument is a `Task` object instance, not the metadata *dict*. It is possible not only to retrieve task's attributes but also to modify them while the action is running! task name ------------ By default a task name is taken from the name of the python function that generates the task. For example a `def task_hello` would create a task named ``hello``. It is possible to explicitly set a task name with the parameter ``basename``. .. literalinclude:: tutorial/task_name.py .. code-block:: console $ doit . hello . hello2 When explicit using ``basename`` the task-creator is not limited to create only one task. Using ``yield`` it can generate several tasks at once. It is also possible to ``yield`` a generator that generate tasks. This is useful to write some generic/reusable task-creators. .. literalinclude:: tutorial/task_reusable.py .. code-block:: console $ doit . t2 . t1 sub-tasks --------- Most of the time we want to apply the same task several times in different contexts. 
The task function can return a python-generator that yields dictionaries. Since each sub-task must be uniquely identified it requires an additional field ``name``. .. literalinclude:: tutorial/subtasks.py .. code-block:: console $ doit . create_file:file0.txt . create_file:file1.txt . create_file:file2.txt avoiding empty sub-tasks ^^^^^^^^^^^^^^^^^^^^^^^^^^ If you are not sure sub-tasks will be created for a given ``basename`` but you want to make sure that a task exist, you can yield a sub-task with ``name`` equal to ``None``. This can also be used to set the task ``doc`` and ``watch`` attributes. .. literalinclude:: tutorial/empty_subtasks.py .. code-block:: console $ doit $ doit list do_x docs for X Dependencies & Targets ------------------------- One of the main ideas of `doit` (and other build-tools) is to check if the tasks/targets are **up-to-date**. In case there is no modification in the dependencies and the targets already exist, it skips the task execution to save time, as it would produce the same output from the previous run. Dependency A dependency indicates an input to the task execution. Target A *target* is the result/output file produced by the task execution. i.e. In a compilation task the source file is a *file_dep*, the object file is a *target*. .. literalinclude:: tutorial/compile.py `doit` automatically keeps track of file dependencies. It saves the signature (MD5) of the dependencies every time the task is completed successfully. So if there are no modifications to the dependencies and you run `doit` again. The execution of the task's actions is skipped. .. code-block:: console $ doit . compile $ doit -- compile Note the ``--`` (2 dashes, one space) on the command output on the second time it is executed. It means, this task was up-to-date and not executed. .. _file-dep: file_dep (file dependency) ----------------------------- Different from most build-tools dependencies are on tasks, not on targets. 
So `doit` can take advantage of the "execute only if not up-to-date" feature even for tasks that don't define targets.

Let's say you work with a dynamic language (python in this example). You don't need to compile anything but you probably want to apply a lint-like tool (i.e. `pyflakes `_) to your source code files. You can define the source code as a dependency to the task.

Every dependency in the file_dep list should be a string or an instance of any Path class from `pathlib `_.

.. literalinclude:: tutorial/checker.py

.. code-block:: console

    $ doit
    .  checker
    $ doit
    -- checker

`doit` checks if `file_dep` was modified or not (by comparing the file content's MD5). If there are no changes the action is not executed again, as it would produce the same result. Note the ``--`` again to indicate the execution was skipped.

Traditional build-tools can only handle files as "dependencies". `doit` has several other ways to check for dependencies; those will be introduced later.

.. note::

    `doit` saves the MD5 of a `file_dep` after the actions are executed. Be careful about editing a `file_dep` while a task is running, because `doit` might save the MD5 of a version of the file that is different from the one actually used to execute the task.

targets
-------

Targets can be any file path (a file or folder). If a target doesn't exist the task will be executed. There is no limitation on the number of targets a task may define. Two different tasks can not have the same target.

A target can be specified as a string or as an instance of any Path class from `pathlib `_.

Let's take the compilation example again.

.. literalinclude:: tutorial/compile.py

* If there are no changes in the dependency the task execution is skipped.
* But if the target is removed the task is executed again.
* But only if it does not exist: if the target is modified while the dependencies do not change, the task is not executed again.

.. code-block:: console

    $ doit
    .  compile
    $ doit
    -- compile
    $ rm main.o
    $ doit
    .
compile $ echo xxx > main.o $ doit -- compile execution order ----------------- If your tasks interact in a way where the target (output) of one task is a file_dep (input) of another task, `doit` will make sure your tasks are executed in the correct order. .. literalinclude:: tutorial/taskorder.py .. code-block:: console $ doit . create . modify .. note:: `doit` compares the path (string) of the file of `file_dep` and `targets`. So although `my_file` and `./my_file` are actually the same file, `doit` will think they are different files. .. _task-selection: Task selection ---------------- By default all tasks are executed in the same order as they were defined (the order may change to satisfy dependencies). You can control which tasks will run in 2 ways. Another example .. literalinclude:: tutorial/selecttasks.py DOIT_CONFIG -> default_tasks ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ *dodo* file defines a dictionary ``DOIT_CONFIG`` with ``default_tasks``, a list of strings where each element is a task name. .. code-block:: console $ doit . t1 . t3 Note that only the task *t3* was specified to be executed by default. But its dependencies include a target of another task (t1). So that task was automatically executed also. command line selection ^^^^^^^^^^^^^^^^^^^^^^^ From the command line you can control which tasks are going to be execute by passing its task name. Any number of tasks can be passed as positional arguments. .. code-block:: console $ doit t2 . t2 You can also specify which task to execute by its target: .. code-block:: console $ doit task1 . t1 sub-task selection ^^^^^^^^^^^^^^^^^^^^^ You can select sub-tasks from the command line specifying its full name. .. literalinclude:: tutorial/subtasks.py .. code-block:: console $ doit create_file:file2.txt . create_file:file2.txt wildcard selection ^^^^^^^^^^^^^^^^^^^^ You can also select tasks to be executed using a `glob `_ like syntax (it must contains a ``*``). .. code-block:: console $ doit create_file:file* . 
create_file:file1.txt . create_file:file2.txt . create_file:file3.txt private/hidden tasks --------------------- If task name starts with an underscore '_', it will not be included in the output. title ------- By default when you run `doit` only the task name is printed out on the output. You can customize the output passing a "title" function to the task: .. literalinclude:: tutorial/title.py .. code-block:: console $ doit . executing... Cmd: echo abc efg .. _verbosity: verbosity ----------- By default the stdout from a task is captured and its stderr is sent to the console. If the task fails or there is an error the stdout and a traceback (if any) is displayed. There are 3 levels of verbosity: 0: capture (do not print) stdout/stderr from task. 1 (default): capture stdout only. 2: do not capture anything (print everything immediately). You can control the verbosity by: * task attribute verbosity .. literalinclude:: tutorial/verbosity.py .. code-block:: console $ doit . print hello * from command line, see :ref:`verbosity option`. pathlib -------- `doit` supports `pathlib `_: file_dep, targets and CmdAction specified as a list can take as elements not only strings but also instances of any Path class from pathlib. Lets take the compilation example and modify it to work with any number of header and source files in current directory using pathlib. .. literalinclude:: tutorial/compile_pathlib.py doit-0.30.3/doc/tools.rst000066400000000000000000000056671305250115000152010ustar00rootroot00000000000000====== Tools ====== `doit.tools` includes some commonly used code. These are not used by the `doit` core, you can see it as a "standard library". The functions/class used with `uptodate` were already introduced in the previous section. create_folder (action) ------------------------- Creates a folder if it does not exist yet. Uses `os.makedirs() _`. .. 
literalinclude:: tutorial/folder.py

title_with_actions (title)
----------------------------

Return the task name and its actions. This function can be used as the 'title' attribute of a task dictionary to provide more detailed information about the action being executed.

.. literalinclude:: tutorial/titlewithactions.py

.. _tools.LongRunning:

LongRunning (action)
-----------------------------

.. autoclass:: doit.tools.LongRunning

This is useful for executing a long running process like a web-server.

.. literalinclude:: tutorial/longrunning.py

Interactive (action)
----------------------------------

.. autoclass:: doit.tools.Interactive

PythonInteractiveAction (action)
----------------------------------

.. autoclass:: doit.tools.PythonInteractiveAction

set_trace
-----------

`doit` by default redirects stdout and stderr. Because of this, when you try to use the python debugger with ``pdb.set_trace``, it does not work properly. To make sure you get a proper PDB shell you should use doit.tools.set_trace instead of ``pdb.set_trace``.

.. literalinclude:: tutorial/settrace.py

.. _tools.IPython:

IPython integration
----------------------

A handy possibility for interactive experimentation is to define tasks from within *ipython* sessions and use the ``%doit`` `magic function `_ to discover and execute them.

First you need to register the new magic function into the IPython shell.

.. code-block:: pycon

    >>> from doit.tools import register_doit_as_IPython_magic
    >>> register_doit_as_IPython_magic()

.. Tip::

    To permanently add this magic-function to your IPython include it on your `profile `_, create a new script inside your startup-profile (i.e. :file:`~/.ipython/profile_default/startup/doit_magic.ipy`) with the following content::

        from doit.tools import register_doit_as_IPython_magic
        register_doit_as_IPython_magic()

Then you can define your `task_creator` functions and invoke them with the `%doit` magic-function, instead of invoking the cmd-line script with a :file:`dodo.py` file.
Examples: .. code-block:: pycon >>> %doit --help ## Show help for options and arguments. >>> def task_foo(): return {'actions': ["echo hi IPython"], 'verbosity': 2} >>> %doit list ## List any tasks discovered. foo >>> %doit ## Run any tasks. . foo hi IPython doit-0.30.3/doc/tutorial/000077500000000000000000000000001305250115000151345ustar00rootroot00000000000000doit-0.30.3/doc/tutorial/calc_dep.py000066400000000000000000000015201305250115000172360ustar00rootroot00000000000000DOIT_CONFIG = {'verbosity': 2} MOD_IMPORTS = {'a': ['b','c'], 'b': ['f','g'], 'c': [], 'f': ['a'], 'g': []} def print_deps(mod, dependencies): print("%s -> %s" % (mod, dependencies)) def task_mod_deps(): """task that depends on all direct imports""" for mod in MOD_IMPORTS.keys(): yield {'name': mod, 'actions': [(print_deps,(mod,))], 'file_dep': [mod], 'calc_dep': ["get_dep:%s" % mod], } def get_dep(mod): # fake implementation return {'file_dep': MOD_IMPORTS[mod]} def task_get_dep(): """get direct dependencies for each module""" for mod in MOD_IMPORTS.keys(): yield {'name': mod, 'actions':[(get_dep,[mod])], 'file_dep': [mod], } doit-0.30.3/doc/tutorial/check_timestamp_unchanged.py000066400000000000000000000007031305250115000226620ustar00rootroot00000000000000from doit.tools import check_timestamp_unchanged def task_create_foo(): return { 'actions': ['touch foo', 'chmod 750 foo'], 'targets': ['foo'], 'uptodate': [True], } def task_on_foo_changed(): # will execute if foo or its metadata is modified return { 'actions': ['echo foo modified'], 'task_dep': ['create_foo'], 'uptodate': [check_timestamp_unchanged('foo', 'ctime')], } doit-0.30.3/doc/tutorial/checker.py000066400000000000000000000002041305250115000171060ustar00rootroot00000000000000from pathlib import Path def task_checker(): return {'actions': ["pyflakes sample.py"], 'file_dep': ["sample.py"]} doit-0.30.3/doc/tutorial/clean_mix.py000066400000000000000000000003231305250115000174430ustar00rootroot00000000000000from doit.task import 
clean_targets def simple(): print("ok") def task_poo(): return { 'actions': ['touch poo'], 'targets': ['poo'], 'clean': [clean_targets, simple], } doit-0.30.3/doc/tutorial/cmd_actions.py000066400000000000000000000002611305250115000177700ustar00rootroot00000000000000def task_hello(): """hello cmd """ msg = 3 * "hi! " return { 'actions': ['echo %s ' % msg + ' > %(targets)s',], 'targets': ["hello.txt"], } doit-0.30.3/doc/tutorial/cmd_actions_list.py000066400000000000000000000001401305250115000210170ustar00rootroot00000000000000def task_python_version(): return { 'actions': [['python', '--version']] } doit-0.30.3/doc/tutorial/cmd_from_callable.py000066400000000000000000000003441305250115000211140ustar00rootroot00000000000000from doit.action import CmdAction def task_hello(): """hello cmd """ def create_cmd_string(): return "echo hi" return { 'actions': [CmdAction(create_cmd_string)], 'verbosity': 2, } doit-0.30.3/doc/tutorial/command.c000066400000000000000000000000761305250115000167210ustar00rootroot00000000000000#include void command(int a){ printf("geez"); }; doit-0.30.3/doc/tutorial/command.h000066400000000000000000000000251305250115000167200ustar00rootroot00000000000000void command(int a); doit-0.30.3/doc/tutorial/compile.py000066400000000000000000000002331305250115000171340ustar00rootroot00000000000000def task_compile(): return {'actions': ["cc -c main.c"], 'file_dep': ["main.c", "defs.h"], 'targets': ["main.o"] } doit-0.30.3/doc/tutorial/compile_pathlib.py000066400000000000000000000007621305250115000206460ustar00rootroot00000000000000from pathlib import Path def task_compile(): working_directory = Path('.') # Path.glob returns an iterator so turn it into a list headers = list(working_directory.glob('*.h')) for source_file in working_directory.glob('*.c'): object_file = source_file.with_suffix('.o') yield { 'name': object_file.name, 'actions': [['cc', '-c', source_file]], 'file_dep': [source_file] + headers, 'targets': [object_file], } 
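The source-to-object mapping that the `compile_pathlib.py` tutorial task relies on is just `Path.with_suffix`. It can be exercised on its own, independent of doit (a standalone sketch):

```python
from pathlib import Path

def object_target(source):
    """Map a C source path to its object-file target, as the task above does."""
    return Path(source).with_suffix('.o')

# object_target('main.c').name    → 'main.o'
# object_target('sub/kbd.c').name → 'kbd.o'  (parent directory is preserved)
```

Because `with_suffix` keeps the parent directory, object files land next to their sources, which is also why the task uses `object_file.name` as the sub-task name.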
doit-0.30.3/doc/tutorial/config_params.py000066400000000000000000000003261305250115000203170ustar00rootroot00000000000000from doit.tools import config_changed option = "AB" def task_with_params(): return {'actions': ['echo %s' % option], 'uptodate': [config_changed(option)], 'verbosity': 2, } doit-0.30.3/doc/tutorial/cproject.py000066400000000000000000000017451305250115000173260ustar00rootroot00000000000000DOIT_CONFIG = {'default_tasks': ['link']} # map source file to dependencies SOURCE = { 'main': ["defs.h"], 'kbd': ["defs.h", "command.h"], 'command': ["defs.h", "command.h"], } def task_link(): "create binary program" OBJECTS = ["%s.o" % module for module in SOURCE.keys()] return {'actions': ['cc -o %(targets)s %(dependencies)s'], 'file_dep': OBJECTS, 'targets': ['edit'], 'clean': True } def task_compile(): "compile C files" for module, dep in SOURCE.items(): dependencies = dep + ['%s.c' % module] yield {'name': module, 'actions': ["cc -c %s.c" % module], 'targets': ["%s.o" % module], 'file_dep': dependencies, 'clean': True } def task_install(): "install" return {'actions': ['echo install comes here...'], 'task_dep': ['link'], 'doc': 'install executable (TODO)' } doit-0.30.3/doc/tutorial/custom_cmd.py000066400000000000000000000005211305250115000176410ustar00rootroot00000000000000from doit.cmd_base import Command class Init(Command): doc_purpose = 'create a project scaffolding' doc_usage = '' doc_description = """This is a multiline command description. It will be displayed on `doit help init`""" def execute(self, opt_values, pos_args): print("TODO: create some files for my project") doit-0.30.3/doc/tutorial/custom_loader.py000066400000000000000000000010621305250115000203450ustar00rootroot00000000000000#! 
/usr/bin/env python import sys from doit.task import dict_to_task from doit.cmd_base import TaskLoader from doit.doit_cmd import DoitMain my_builtin_task = { 'name': 'sample_task', 'actions': ['echo hello from built in'], 'doc': 'sample doc', } class MyLoader(TaskLoader): @staticmethod def load_tasks(cmd, opt_values, pos_args): task_list = [dict_to_task(my_builtin_task)] config = {'verbosity': 2} return task_list, config if __name__ == "__main__": sys.exit(DoitMain(MyLoader()).run(sys.argv[1:])) doit-0.30.3/doc/tutorial/custom_reporter.py000066400000000000000000000005631305250115000207460ustar00rootroot00000000000000from doit.reporter import ConsoleReporter class MyReporter(ConsoleReporter): def execute_task(self, task): self.outstream.write('MyReporter --> %s\n' % task.title()) DOIT_CONFIG = {'reporter': MyReporter, 'verbosity': 2} def task_sample(): for x in range(3): yield {'name': str(x), 'actions': ['echo out %d' % x]} doit-0.30.3/doc/tutorial/custom_task_def.py000066400000000000000000000003411305250115000206560ustar00rootroot00000000000000def make_task(func): """make decorated function a task-creator""" func.create_doit_tasks = func return func @make_task def sample(): return { 'verbosity': 2, 'actions': ['echo hi'], } doit-0.30.3/doc/tutorial/defs.h000066400000000000000000000000311305250115000162200ustar00rootroot00000000000000 static int SIZE = 20; doit-0.30.3/doc/tutorial/delayed.py000066400000000000000000000011131305250115000171110ustar00rootroot00000000000000import glob from doit import create_after @create_after(executed='early', target_regex='.*\.out') def task_build(): for inf in glob.glob('*.in'): yield { 'name': inf, 'actions': ['cp %(dependencies)s %(targets)s'], 'file_dep': [inf], 'targets': [inf[:-3] + '.out'], 'clean': True, } def task_early(): """a task that create some files...""" inter_files = ('a.in', 'b.in', 'c.in') return { 'actions': ['touch %(targets)s'], 'targets': inter_files, 'clean': True, } 
doit-0.30.3/doc/tutorial/delayed_creates.py000066400000000000000000000005371305250115000206300ustar00rootroot00000000000000import sys from doit import create_after def say_hello(your_name): sys.stderr.write("Hello from {}!\n".format(your_name)) def task_a(): return { "actions": [ (say_hello, ["a"]) ] } @create_after("a", creates=['b']) def task_another_task(): return { "basename": "b", "actions": [ (say_hello, ["b"]) ], } doit-0.30.3/doc/tutorial/doit_config.py000066400000000000000000000002001305250115000177620ustar00rootroot00000000000000DOIT_CONFIG = {'default_tasks': ['my_task_1', 'my_task_2'], 'continue': True, 'reporter': 'json'} doit-0.30.3/doc/tutorial/download.py000066400000000000000000000003651305250115000173210ustar00rootroot00000000000000from doit.tools import run_once def task_get_pylogo(): url = "http://python.org/images/python-logo.gif" return {'actions': ["wget %s" % url], 'targets': ["python-logo.gif"], 'uptodate': [run_once], } doit-0.30.3/doc/tutorial/empty_subtasks.py000066400000000000000000000006171305250115000205670ustar00rootroot00000000000000import glob def task_xxx(): """my doc""" LIST = glob.glob('*.xyz') # might be empty yield { 'basename': 'do_x', 'name': None, 'doc': 'docs for X', 'watch': ['.'], } for item in LIST: yield { 'basename': 'do_x', 'name': item, 'actions': ['echo %s' % item], 'verbosity': 2, } doit-0.30.3/doc/tutorial/executable.py000077500000000000000000000002741305250115000176350ustar00rootroot00000000000000#! 
/usr/bin/env python def task_echo(): return { 'actions': ['echo hi'], 'verbosity': 2, } if __name__ == '__main__': import doit doit.run(globals()) doit-0.30.3/doc/tutorial/folder.py000066400000000000000000000003641305250115000167640ustar00rootroot00000000000000from doit.tools import create_folder BUILD_PATH = "_build" def task_build(): return {'actions': [(create_folder, [BUILD_PATH]), 'touch %(targets)s'], 'targets': ["%s/file.o" % BUILD_PATH] } doit-0.30.3/doc/tutorial/get_var.py000066400000000000000000000002551305250115000171370ustar00rootroot00000000000000from doit import get_var config = {"abc": get_var('abc', 'NO')} def task_echo(): return {'actions': ['echo hi %s' % config], 'verbosity': 2, } doit-0.30.3/doc/tutorial/getargs.py000066400000000000000000000012031305250115000171360ustar00rootroot00000000000000DOIT_CONFIG = {'default_tasks': ['use_cmd', 'use_python']} def task_compute(): def comp(): return {'x':5,'y':10, 'z': 20} return {'actions': [(comp,)]} def task_use_cmd(): return {'actions': ['echo x=%(x)s, z=%(z)s'], 'getargs': {'x': ('compute', 'x'), 'z': ('compute', 'z')}, 'verbosity': 2, } def task_use_python(): return {'actions': [show_getargs], 'getargs': {'x': ('compute', 'x'), 'y': ('compute', 'z')}, 'verbosity': 2, } def show_getargs(x, y): print("this is x:%s" % x) print("this is y:%s" % y) doit-0.30.3/doc/tutorial/getargs_dict.py000066400000000000000000000004551305250115000201510ustar00rootroot00000000000000def task_compute(): def comp(): return {'x':5,'y':10, 'z': 20} return {'actions': [(comp,)]} def show_getargs(values): print(values) def task_args_dict(): return {'actions': [show_getargs], 'getargs': {'values': ('compute', None)}, 'verbosity': 2, } doit-0.30.3/doc/tutorial/getargs_group.py000066400000000000000000000007041305250115000203570ustar00rootroot00000000000000def task_compute(): def comp(x): return {'x':x} yield {'name': '5', 'actions': [ (comp, [5]) ] } yield {'name': '7', 'actions': [ (comp, [7]) ] } def show_getargs(values): 
print(values) assert sum(v['x'] for v in values.values()) == 12 def task_args_dict(): return {'actions': [show_getargs], 'getargs': {'values': ('compute', None)}, 'verbosity': 2, } doit-0.30.3/doc/tutorial/group.py000066400000000000000000000003051305250115000166400ustar00rootroot00000000000000def task_foo(): return {'actions': ["echo foo"]} def task_bar(): return {'actions': ["echo bar"]} def task_mygroup(): return {'actions': None, 'task_dep': ['foo', 'bar']} doit-0.30.3/doc/tutorial/hello.py000066400000000000000000000004071305250115000166120ustar00rootroot00000000000000def task_hello(): """hello""" def python_hello(targets): with open(targets[0], "a") as output: output.write("Python says Hello World!!!\n") return { 'actions': [python_hello], 'targets': ["hello.txt"], } doit-0.30.3/doc/tutorial/import_tasks.py000066400000000000000000000003061305250115000202240ustar00rootroot00000000000000# import task_ functions from get_var import task_echo # import tasks with create_doit_tasks callable from custom_task_def import sample def task_hello(): return {'actions': ['echo hello']} doit-0.30.3/doc/tutorial/initial_workdir.py000066400000000000000000000012621305250115000207010ustar00rootroot00000000000000### README # Sample to test doit.get_initial_workdir # First create a folder named 'sub1'. # Invoking doit from the root folder will execute both tasks 'base' and 'sub1'. 
# Invoking 'doit -k' from path 'sub1' will execute only task 'sub1' ################## import os import doit DOIT_CONFIG = { 'verbosity': 2, 'default_tasks': None, # all by default } # change default tasks based on dir from where doit was run sub1_dir = os.path.join(os.path.dirname(__file__), 'sub1') if doit.get_initial_workdir() == sub1_dir: DOIT_CONFIG['default_tasks'] = ['sub1'] def task_base(): return {'actions': ['echo root']} def task_sub1(): return {'actions': ['echo sub1']} doit-0.30.3/doc/tutorial/kbd.c000066400000000000000000000000501305250115000160330ustar00rootroot00000000000000#include "defs.h" #include "command.h" doit-0.30.3/doc/tutorial/longrunning.py000066400000000000000000000001601305250115000200430ustar00rootroot00000000000000from doit.tools import LongRunning def task_top(): cmd = "top" return {'actions': [LongRunning(cmd)],} doit-0.30.3/doc/tutorial/main.c000066400000000000000000000001201305250115000162150ustar00rootroot00000000000000#include int main() { printf("\nHello World\n"); return 0; } doit-0.30.3/doc/tutorial/meta.py000066400000000000000000000002771305250115000164420ustar00rootroot00000000000000def who(task): print('my name is', task.name) print(task.targets) def task_x(): return { 'actions': [who], 'targets': ['asdf'], 'verbosity': 2, } doit-0.30.3/doc/tutorial/module_loader.py000066400000000000000000000003771305250115000203300ustar00rootroot00000000000000#! 
/usr/bin/env python import sys from doit.cmd_base import ModuleTaskLoader from doit.doit_cmd import DoitMain if __name__ == "__main__": import my_module_with_tasks sys.exit(DoitMain(ModuleTaskLoader(my_module_with_tasks)).run(sys.argv[1:])) doit-0.30.3/doc/tutorial/my_dodo.py000066400000000000000000000020001305250115000171300ustar00rootroot00000000000000 DOIT_CONFIG = {'verbosity': 2} TASKS_MODULE = __import__('my_tasks') def task_do(): # get functions that are tasks from module for name in dir(TASKS_MODULE): item = getattr(TASKS_MODULE, name) if not hasattr(item, 'task_metadata'): continue # get task metadata attached to the function metadata = item.task_metadata # get name of task from function name metadata['name'] = item.__name__ # *I* dont like the names file_dep, targets. So I use 'input', 'output' class Sentinel(object): pass input_ = metadata.pop('input', Sentinel) output_ = metadata.pop('output', Sentinel) args = [] if input_ != Sentinel: metadata['file_dep'] = input_ args.append(input_) if output_ != Sentinel: metadata['targets'] = output_ args.append(output_) # the action is the function iteself metadata['actions'] = [(item, args)] yield metadata doit-0.30.3/doc/tutorial/my_module_with_tasks.py000066400000000000000000000001541305250115000217400ustar00rootroot00000000000000 def task_sample(): return {'actions': ['echo hello from module loader'], 'verbosity': 2,} doit-0.30.3/doc/tutorial/my_tasks.py000066400000000000000000000013711305250115000173420ustar00rootroot00000000000000def task(*fn, **kwargs): # decorator without parameters if fn: function = fn[0] function.task_metadata = {} return function # decorator with parameters def wrap(function): function.task_metadata = kwargs return function return wrap @task def simple(): print("thats all folks") @task(output=['my_input.txt']) def pre(to_create): with open(to_create[0], 'w') as fp: fp.write('foo') @task(output=['out1.txt', 'out2.txt']) def create(to_be_created): print("I should create these files: %s" % 
" ".join(to_be_created)) @task(input=['my_input.txt'], output=['my_output_result.txt']) def process(in_, out_): print("processing %s" % in_[0]) print("creating %s" % out_[0]) doit-0.30.3/doc/tutorial/parameters.py000066400000000000000000000033041305250115000176510ustar00rootroot00000000000000def task_py_params(): def show_params(param1, param2): print(param1) print(5 + param2) return {'actions':[(show_params,)], 'params':[{'name':'param1', 'short':'p', 'default':'default value'}, {'name':'param2', 'long':'param2', 'type': int, 'default':0}], 'verbosity':2, } def task_py_params_list(): def print_a_list(list): for item in list: print(item) return {'actions':[(print_a_list,)], 'params':[{'name':'list', 'short':'l', 'long': 'list', 'type': list, 'default': [], 'help': 'Collect a list with multiple -l flags'}], 'verbosity':2, } def task_py_params_choice(): def print_choice(choice): print(choice) return {'actions':[(print_choice,)], 'params':[{'name':'choice', 'short':'c', 'long': 'choice', 'type': str, 'choices': (('this', ''), ('that', '')), 'default': 'this', 'help': 'Choose between this and that'}], 'verbosity':2,} def task_cmd_params(): return {'actions':["echo mycmd %(flag)s xxx"], 'params':[{'name':'flag', 'short':'f', 'long': 'flag', 'default': '', 'help': 'helpful message about this flag'}], 'verbosity': 2 } doit-0.30.3/doc/tutorial/parameters_inverse.py000066400000000000000000000005701305250115000214060ustar00rootroot00000000000000def task_with_flag(): def _task(flag): print("Flag {0}".format("On" if flag else "Off")) return { 'params': [{ 'name': 'flag', 'long': 'flagon', 'short': 'f', 'type': bool, 'default': True, 'inverse': 'flagoff'}], 'actions': [(_task, )], 'verbosity': 2 } doit-0.30.3/doc/tutorial/pos.py000066400000000000000000000007341305250115000163130ustar00rootroot00000000000000def task_pos_args(): def show_params(param1, pos): print('param1 is: {0}'.format(param1)) for index, pos_arg in enumerate(pos): print('positional-{0}: {1}'.format(index, 
pos_arg)) return {'actions':[(show_params,)], 'params':[{'name':'param1', 'short':'p', 'default':'default value'}, ], 'pos_arg': 'pos', 'verbosity': 2, } doit-0.30.3/doc/tutorial/run_once.py000066400000000000000000000002551305250115000173200ustar00rootroot00000000000000 def run_once(task, values): def save_executed(): return {'run-once': True} task.value_savers.append(save_executed) return values.get('run-once', False) doit-0.30.3/doc/tutorial/sample.py000066400000000000000000000000171305250115000167650ustar00rootroot00000000000000print("hello") doit-0.30.3/doc/tutorial/save_out.py000066400000000000000000000002741305250115000173360ustar00rootroot00000000000000from doit.action import CmdAction def task_save_output(): return { 'actions': [CmdAction("echo x1", save_out='out')], } # The task values will contain: {'out': u'x1'} doit-0.30.3/doc/tutorial/selecttasks.py000066400000000000000000000004241305250115000200330ustar00rootroot00000000000000 DOIT_CONFIG = {'default_tasks': ['t3']} def task_t1(): return {'actions': ["touch task1"], 'targets': ['task1']} def task_t2(): return {'actions': ["echo task2"]} def task_t3(): return {'actions': ["echo task3"], 'file_dep': ['task1']} doit-0.30.3/doc/tutorial/settrace.py000066400000000000000000000002451305250115000173210ustar00rootroot00000000000000 def need_to_debug(): # some code here from doit import tools tools.set_trace() # more code def task_X(): return {'actions':[(need_to_debug,)]} doit-0.30.3/doc/tutorial/subtasks.py000066400000000000000000000002471305250115000173500ustar00rootroot00000000000000def task_create_file(): for i in range(3): filename = "file%d.txt" % i yield {'name': filename, 'actions': ["touch %s" % filename]} doit-0.30.3/doc/tutorial/tar.py000066400000000000000000000003371305250115000162770ustar00rootroot00000000000000def task_tar(): return {'actions': ["tar -cf foo.tar *"], 'task_dep':['version'], 'targets':['foo.tar']} def task_version(): return {'actions': ["hg tip --template '{rev}' > revision.txt"]} 
doit-0.30.3/doc/tutorial/task_kwargs.py000077500000000000000000000005151305250115000200320ustar00rootroot00000000000000def func_with_args(arg_first, arg_second): print(arg_first) print(arg_second) return True def task_call_func(): return { 'actions': [(func_with_args, [], { 'arg_second': 'This is a second argument.', 'arg_first': 'This is a first argument.'}) ], 'verbosity': 2, } doit-0.30.3/doc/tutorial/task_name.py000066400000000000000000000002641305250115000174520ustar00rootroot00000000000000def task_hello(): return { 'actions': ['echo hello'] } def task_xxx(): return { 'basename': 'hello2', 'actions': ['echo hello2'] } doit-0.30.3/doc/tutorial/task_reusable.py000066400000000000000000000003031305250115000203260ustar00rootroot00000000000000 def gen_many_tasks(): yield {'basename': 't1', 'actions': ['echo t1']} yield {'basename': 't2', 'actions': ['echo t2']} def task_all(): yield gen_many_tasks() doit-0.30.3/doc/tutorial/taskorder.py000066400000000000000000000003441305250115000175050ustar00rootroot00000000000000def task_modify(): return {'actions': ["echo bar > foo.txt"], 'file_dep': ["foo.txt"], } def task_create(): return {'actions': ["touch foo.txt"], 'targets': ["foo.txt"] } doit-0.30.3/doc/tutorial/taskresult.py000066400000000000000000000003531305250115000177100ustar00rootroot00000000000000from doit.tools import result_dep def task_version(): return {'actions': ["hg tip --template '{rev}:{node}'"]} def task_send_email(): return {'actions': ['echo "TODO: send an email"'], 'uptodate': [result_dep('version')]} doit-0.30.3/doc/tutorial/timeout.py000066400000000000000000000003561305250115000172000ustar00rootroot00000000000000import datetime from doit.tools import timeout def task_expire(): return { 'actions': ['echo test expire; date'], 'uptodate': [timeout(datetime.timedelta(minutes=5))], 'verbosity': 2, } doit-0.30.3/doc/tutorial/title.py000066400000000000000000000002411305250115000166240ustar00rootroot00000000000000 def show_cmd(task): return "executing... 
%s" % task.name def task_custom_display(): return {'actions':['echo abc efg'], 'title': show_cmd} doit-0.30.3/doc/tutorial/titlewithactions.py000066400000000000000000000002261305250115000211040ustar00rootroot00000000000000from doit.tools import title_with_actions def task_with_details(): return {'actions': ['echo abc 123'], 'title': title_with_actions} doit-0.30.3/doc/tutorial/touch.py000066400000000000000000000003501305250115000166260ustar00rootroot00000000000000def task_touch(): return { 'actions': ['touch foo.txt'], 'targets': ['foo.txt'], # force doit to always mark the task # as up-to-date (unless target removed) 'uptodate': [True], } doit-0.30.3/doc/tutorial/tsetup.py000066400000000000000000000013411305250115000170310ustar00rootroot00000000000000### task setup env. good for functional tests! DOIT_CONFIG = {'verbosity': 2, 'default_tasks': ['withenvX', 'withenvY']} def start(name): print("start %s" % name) def stop(name): print("stop %s" % name) def task_setup_sample(): for name in ('setupX', 'setupY'): yield {'name': name, 'actions': [(start, (name,))], 'teardown': [(stop, (name,))], } def task_withenvX(): for fin in ('a','b','c'): yield {'name': fin, 'actions':['echo x %s' % fin], 'setup': ['setup_sample:setupX'], } def task_withenvY(): return {'actions':['echo y'], 'setup': ['setup_sample:setupY'], } doit-0.30.3/doc/tutorial/tutorial_02.py000066400000000000000000000004241305250115000176520ustar00rootroot00000000000000def task_hello(): """hello py """ def python_hello(times, text, targets): with open(targets[0], "a") as output: output.write(times * text) return {'actions': [(python_hello, [3, "py!\n"])], 'targets': ["hello.txt"], } doit-0.30.3/doc/tutorial/uptodate_callable.py000066400000000000000000000004271305250115000211550ustar00rootroot00000000000000 def fake_get_value_from_db(): return 5 def check_outdated(): total = fake_get_value_from_db() return total > 10 def task_put_more_stuff_in_db(): def put_stuff(): pass return {'actions': [put_stuff], 
'uptodate': [check_outdated],
           }
doit-0.30.3/doc/tutorial/verbosity.py000066400000000000000000000001251305250115000175320ustar00rootroot00000000000000def task_print():
    return {'actions': ['echo hello'],
            'verbosity': 2}
doit-0.30.3/doc/uptodate.rst000066400000000000000000000217671305250115000156610ustar00rootroot00000000000000================
custom uptodate
================

The basics of `uptodate` were already :ref:`introduced `. Here we look in more
detail at some implementations shipped with `doit`, and at the API used by those.

.. _result_dep:

result-dependency
----------------------

In some cases you can not determine if a task is "up-to-date" only based on
input files; the input could come from a database or an external process.
*doit* defines a "result-dependency" to deal with these cases without the need
to create an intermediate file with the results of the process.

i.e. Suppose you want to send an email every time you run *doit* on a mercurial
repository that contains a new revision number.

.. literalinclude:: tutorial/taskresult.py

Note the `result_dep` with the name of the task ('version'). `doit` will keep
track of the output of the task *version* and will execute *send_email* only
when the mercurial repository has a new version since the last time *doit* was
executed.

The "result" of the dependent task that is compared between different runs is
given by its last action. The content for a python-action is the value of the
returned string or dict. For cmd-actions it is the output sent to stdout plus
stderr.

`result_dep` also supports group-tasks. In this case it will check that the
result of all subtasks did not change, and also that the set of existing
sub-tasks is still the same.


.. _run_once:

run_once()
---------------

Sometimes there is no dependency for a task but you do not want to execute it
all the time. With "run_once" the task will not be executed again after the
first successful run. This is mostly used together with targets.
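The pattern looks like this in a minimal `dodo.py` sketch (the task and file
names here are illustrative, not part of the tutorial):

```python
from doit.tools import run_once

def task_initial_setup():
    # hypothetical one-off task: executed on the first run only;
    # it will run again only if the target file is removed
    return {
        'actions': ['touch setup.done'],
        'targets': ['setup.done'],
        'uptodate': [run_once],
    }
```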
Suppose you need to download something from the internet.
There is no dependency, but you do not want to download it many times.

.. literalinclude:: tutorial/download.py

Note that even with *run_once* the file will be downloaded again in case the
target is removed.

.. code-block:: console

    $ doit
    .  get_pylogo
    $ doit
    -- get_pylogo
    $ rm python-logo.gif
    $ doit
    .  get_pylogo


.. _timeout:

timeout()
-----------

``timeout`` is used to expire a task after a certain time interval.

i.e. You want to re-execute a task only if the time elapsed since the last
time it was executed is bigger than 5 minutes.

.. literalinclude:: tutorial/timeout.py

``timeout`` is a function that takes an ``int`` (seconds) or ``timedelta`` as a
parameter. It returns a callable suitable to be used as an ``uptodate``
callable.


.. _config_changed:

config_changed()
-----------------

``config_changed`` is used to check if any "configuration" value for the task
has changed. Config values can be a string or dict.

For dicts, the values are converted to string (actually it uses python's
`repr()`) and only a digest/checksum of the dictionary's keys and values is
saved.

.. literalinclude:: tutorial/config_params.py


.. _check_timestamp_unchanged:

check_timestamp_unchanged()
-----------------------------

``check_timestamp_unchanged`` is used to check if the specified timestamp of a
given file/dir is unchanged since the last run.

The timestamp field to check defaults to ``mtime``, but can be selected by
passing the ``time`` parameter, which can be one of: ``atime``, ``ctime``,
``mtime`` (or their aliases ``access``, ``status``, ``modify``).

Note that ``ctime`` or ``status`` is platform dependent. On Unix it is the time
of the most recent metadata change, on Windows it is the time of creation.
See `Python library documentation for os.stat`__ and the Linux man page for
stat(2) for details.

__ http://docs.python.org/library/os.html#os.stat

It also accepts a ``cmp_op`` parameter which defaults to ``operator.eq`` (==).
To use it, pass a callable which takes two parameters (prev_time, current_time)
and returns True if the task should be considered up-to-date, False otherwise.
Here ``prev_time`` is the time from the last successful run and
``current_time`` is the time obtained in the current run.

If the specified file does not exist, an exception will be raised.
If a file is a target of another task you should probably add a ``task_dep``
on that task to ensure the file is created before it is checked.

.. literalinclude:: tutorial/check_timestamp_unchanged.py


.. _uptodate_api:

uptodate API
--------------

This section will explain how to extend ``doit`` by writing an ``uptodate``
implementation. So unless you need to write an ``uptodate`` implementation
you can skip this.

Let's start with a trivial example. `uptodate` is a function that returns a
boolean value.

.. literalinclude:: tutorial/uptodate_callable.py

You could also execute this function in the task-creator and pass the value to
`uptodate`. The advantage of just passing the callable is that this check will
not be executed at all if the task was not selected to be executed.


Example: run-once implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Most of the time an `uptodate` implementation will compare the current value of
something with the value it had the last time the task was executed.

We already saw how tasks can save values by returning a dict from their
actions. But usually the "value" we want to check is independent of the task
actions. So the first step is to add a callable to the task so it can save
some extra values. These values are not used by the task itself, they are only
used for dependency checking.

The Task has a property called ``value_savers`` that contains a list of
callables. These callables should return a dict that will be saved together
with other task values. The ``value_savers`` will be executed after all
actions.

The second step is to actually compare the saved value with its "current" value.
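These two steps can be exercised in isolation with a stand-in task object.
``FakeTask`` below is only an illustration (doit's real ``Task`` class carries
much more state); the ``run_once`` body is the same implementation discussed in
this section:

```python
# FakeTask is a stand-in for doit's Task object; only ``value_savers`` matters here.
class FakeTask:
    def __init__(self):
        self.value_savers = []

def run_once(task, values):
    def save_executed():
        return {'run-once': True}
    task.value_savers.append(save_executed)
    return values.get('run-once', False)

# First run: nothing saved yet, so the task is not up-to-date.
task = FakeTask()
first = run_once(task, {})

# After a successful run doit calls the registered value_savers
# and stores the returned dicts as the task "values".
values = {}
for saver in task.value_savers:
    values.update(saver())

# Second run: the saved entry is found, so the task is up-to-date.
second = run_once(FakeTask(), values)
print(first, second)  # False True
```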
The `uptodate` callable can take two positional parameters ``task`` and
``values``. The callable can also be represented by a tuple
(callable, args, kwargs).

- the ``task`` parameter will give you access to the task object. So you have
  access to its metadata and the opportunity to modify the task itself!
- ``values`` is a dictionary with the computed values saved in the last
  successful execution of the task.

Let's take a look at the ``run_once`` implementation.

.. literalinclude:: tutorial/run_once.py

The function ``save_executed`` returns a dict. In this case it is not checking
for any value because it just checks if the task was ever executed.

On the next line we use the ``task`` parameter, adding ``save_executed`` to
``task.value_savers``. So whenever this task is executed, the task value
'run-once' will be saved.

Finally, the return value should be a boolean to indicate whether the task is
up-to-date or not. Remember that the ``values`` parameter contains the dict
with the values saved from the last successful execution of the task. So it
just checks if this task was executed before by looking for the ``run-once``
entry in ``values``.


Example: timeout implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's look at another example, ``timeout``. The main difference is that we
actually pass the parameter ``timeout_limit``. Here we present a simplified
version that only accepts integers (seconds) as a parameter.

.. code-block:: python

    class timeout(object):

        def __init__(self, timeout_limit):
            self.limit_sec = timeout_limit

        def __call__(self, task, values):
            def save_now():
                return {'success-time': time_module.time()}
            task.value_savers.append(save_now)
            last_success = values.get('success-time', None)
            if last_success is None:
                return False
            return (time_module.time() - last_success) < self.limit_sec

This is a class-based implementation where the objects are made callable by
implementing a ``__call__`` method. On ``__init__`` we just save the
``timeout_limit`` as an attribute.
The ``__call__`` is very similar to the ``run-once`` implementation.
First it defines a function (``save_now``) that is registered into
``task.value_savers``. Then it compares the current time with the time
that was saved on the last successful execution.


Example: result_dep implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``result_dep`` is more complicated due to two factors. It needs to modify
the task's ``task_dep``, and it needs to check the saved values and metadata
of a task different from the one it is being applied to.

A ``result_dep`` implies that its dependency is also a ``task_dep``.
We have seen that the callable takes a `task` parameter that we used to modify
the task object. The problem is that modifying ``task_dep`` when the callable
gets called would be "too late" according to the way `doit` works.

When an object is passed to ``uptodate`` and this object's class has a method
named ``configure_task``, it will be called during the task creation.

The base class ``dependency.UptodateCalculator`` gives access to an attribute
named ``tasks_dict`` containing a dictionary with all task objects, where the
``key`` is the task name (this is used to get all sub-tasks from a
task-group), and also a method called ``get_val`` to access the saved values
and results from any task.

See the `result_dep` `source `_.
doit-0.30.3/doc_requirements.txt000066400000000000000000000001511305250115000166320ustar00rootroot00000000000000# modules required to generate documentation
# $ pip install --requirement doc_requirements.txt
sphinx
doit-0.30.3/dodo.py000077500000000000000000000071161305250115000140310ustar00rootroot00000000000000"""dodo file.
test + management stuff""" import glob import os import pytest from doitpy.pyflakes import Pyflakes from doitpy.coverage import Config, Coverage, PythonPackage from doitpy import docs from doitpy.package import Package DOIT_CONFIG = { 'minversion': '0.24.0', 'default_tasks': ['pyflakes', 'ut'], # 'backend': 'sqlite3', } CODE_FILES = glob.glob("doit/*.py") TEST_FILES = glob.glob("tests/test_*.py") TESTING_FILES = glob.glob("tests/*.py") PY_FILES = CODE_FILES + TESTING_FILES def task_pyflakes(): flaker = Pyflakes() yield flaker('dodo.py') yield flaker.tasks('doit/*.py') yield flaker.tasks('tests/*.py') def run_test(test): return not bool(pytest.main(test)) #return not bool(pytest.main("-v " + test)) def task_ut(): """run unit-tests""" for test in TEST_FILES: yield {'name': test, 'actions': [(run_test, (test,))], 'file_dep': PY_FILES, 'verbosity': 0} def task_coverage(): """show coverage for all modules including tests""" cov = Coverage([PythonPackage('doit', 'tests')], config=Config(branch=False, parallel=True, concurrency='multiprocessing', omit=['tests/myecho.py', 'tests/sample_process.py'],) ) yield cov.all() yield cov.src() yield cov.by_module() ############################ website DOC_ROOT = 'doc/' DOC_BUILD_PATH = DOC_ROOT + '_build/html/' def task_docs(): doc_files = glob.glob('doc/*.rst') + ['README.rst', 'CONTRIBUTING.md'] yield docs.spell(doc_files, 'doc/dictionary.txt') sphinx_opts = "-A include_analytics=1 -A include_donate=1" yield docs.sphinx(DOC_ROOT, DOC_BUILD_PATH, sphinx_opts=sphinx_opts, task_dep=['spell']) def task_tutorial_check(): """check tutorial sample are at least runnuable without error""" black_list = [ 'longrunning.py', # long running doesn't terminate on its own 'settrace.py', 'download.py', # uses network 'taskresult.py', # uses mercurial 'tar.py', # uses mercurial 'calc_dep.py', # uses files not created by the script 'doit_config.py', # no tasks defined ] exclude = set('doc/tutorial/{}'.format(m) for m in black_list) arguments = 
{'doc/tutorial/pos.py': 'pos_args -p 4 foo bar'} for sample in glob.glob("doc/tutorial/*.py"): if sample in exclude: continue args = arguments.get(sample, '') yield { 'name': sample, 'actions': ['doit -f {} {}'.format(sample, args)], } def task_website(): """dodo file create website html files""" return {'actions': None, 'task_dep': ['sphinx', 'tutorial_check'], } def task_website_update(): """update website on SITE_PATH website is hosted on github-pages this task just copy the generated content to SITE_PATH, need to commit/push to deploy site. """ SITE_PATH = '../doit-website' SITE_URL = 'pydoit.org' return { 'actions': [ "rsync -avP %s %s" % (DOC_BUILD_PATH, SITE_PATH), "echo %s > %s" % (SITE_URL, os.path.join(SITE_PATH, 'CNAME')), "touch %s" % os.path.join(SITE_PATH, '.nojekyll'), ], 'task_dep': ['website'], } def task_package(): """create/upload package to pypi""" pkg = Package() yield pkg.revision_git() yield pkg.manifest_git() yield pkg.sdist() yield pkg.sdist_upload() # doit -f ../doit-recipes/deps/deps.py -d . --reporter=executed-only doit-0.30.3/doit/000077500000000000000000000000001305250115000134635ustar00rootroot00000000000000doit-0.30.3/doit/__init__.py000066400000000000000000000027151305250115000156010ustar00rootroot00000000000000"""doit - Automation Tool The MIT License Copyright (c) 2008-2013 Eduardo Naufel Schettino Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ from doit.version import VERSION __version__ = VERSION from doit import loader from doit.loader import create_after from doit.doit_cmd import get_var from doit.api import run __all__ = ['get_var', 'run', 'create_after'] def get_initial_workdir(): """working-directory from where the doit command was invoked on shell""" return loader.initial_workdir doit-0.30.3/doit/__main__.py000066400000000000000000000003651305250115000155610ustar00rootroot00000000000000# lazy way to ignore coverage in this file if True: # pragma: no cover def main(): import sys from doit.doit_cmd import DoitMain sys.exit(DoitMain().run(sys.argv[1:])) if __name__ == '__main__': main() doit-0.30.3/doit/action.py000066400000000000000000000412261305250115000153170ustar00rootroot00000000000000"""Implements actions used by doit tasks """ import os import subprocess, sys from io import StringIO import inspect from pathlib import PurePath from threading import Thread from .exceptions import InvalidTask, TaskFailed, TaskError def normalize_callable(ref): """return a list with (callabe, *args, **kwargs) ref can be a simple callable or a tuple """ if isinstance(ref, tuple): return list(ref) return [ref, (), {}] # Actions class BaseAction(object): """Base class for all actions""" # must implement: # def execute(self, out=None, err=None) @staticmethod def _prepare_kwargs(task, func, args, kwargs): """ Prepare keyword arguments (targets, dependencies, changed, cmd line options) Inspect python callable and add missing arguments: - that the callable expects 
        - have not been passed (as a regular arg or as a keyword arg)
        - are available internally through the task object
        """
        # Return just what was passed in the task generator dictionary
        # if the task isn't available
        if not task:
            return kwargs

        func_sig = inspect.signature(func)
        sig_params = func_sig.parameters.values()
        func_has_kwargs = any(p.kind==p.VAR_KEYWORD for p in sig_params)

        # use task meta information as extra_args
        meta_args = {
            'task': task,
            'targets': task.targets,
            'dependencies': task.file_dep,
            'changed': task.dep_changed,
        }
        extra_args = dict(meta_args)
        # tasks parameter options
        extra_args.update(task.options)
        if task.pos_arg is not None:
            extra_args[task.pos_arg] = task.pos_arg_val
        kwargs = kwargs.copy()

        bound_args = func_sig.bind_partial(*args)
        for key in extra_args.keys():
            # check if key is a positional parameter
            if key in func_sig.parameters:
                sig_param = func_sig.parameters[key]
                # it is forbidden to use default values for these arguments
                # because the user might be unaware of this magic.
                if (key in meta_args and sig_param.default!=sig_param.empty):
                    msg = ("Task %s, action %s(): The argument '%s' is not "
                           "allowed to have a default value (reserved by doit)"
                           % (task.name, func.__name__, key))
                    raise InvalidTask(msg)
                # if value not taken from a positional parameter
                if key not in bound_args.arguments:
                    kwargs[key] = extra_args[key]
            # if function has **kwargs include extra_arg on it
            elif func_has_kwargs and key not in kwargs:
                kwargs[key] = extra_args[key]
        return kwargs


class CmdAction(BaseAction):
    """
    Command line action. Spawns a new process.

    @ivar action(str,list,callable): subprocess command string or string list,
         see subprocess.Popen first argument. It may also be a callable that
         generates the command string. Strings may contain python mappings with
         the keys: dependencies, changed and targets. ie.
"zip %(targets)s %(changed)s" @ivar task(Task): reference to task that contains this action @ivar save_out: (str) name used to save output in `values` @ivar shell: use shell to execute command see subprocess.Popen `shell` attribute @ivar encoding (str): encoding of the process output @ivar decode_error (str): value for decode() `errors` param while decoding process output @ivar pkwargs: Popen arguments except 'stdout' and 'stderr' """ def __init__(self, action, task=None, save_out=None, shell=True, encoding='utf-8', decode_error='replace', buffering=0, **pkwargs): #pylint: disable=W0231 ''' :ivar buffering: (int) stdout/stderr buffering. Not to be confused with subprocess buffering - 0 -> line buffering - positive int -> number of bytes ''' for forbidden in ('stdout', 'stderr'): if forbidden in pkwargs: msg = "CmdAction can't take param named '{0}'." raise InvalidTask(msg.format(forbidden)) self._action = action self.task = task self.out = None self.err = None self.result = None self.values = {} self.save_out = save_out self.shell = shell self.encoding = encoding self.decode_error = decode_error self.pkwargs = pkwargs self.buffering = buffering @property def action(self): if isinstance(self._action, (str, list)): return self._action else: # action can be a callable that returns a string command ref, args, kw = normalize_callable(self._action) kwargs = self._prepare_kwargs(self.task, ref, args, kw) return ref(*args, **kwargs) def _print_process_output(self, process, input_, capture, realtime): """Reads 'input_' untill process is terminated. 
Writes 'input_' content to 'capture' (string) and 'realtime' stream """ if self.buffering: read = lambda: input_.read(self.buffering) else: # line buffered read = lambda: input_.readline() while True: try: line = read().decode(self.encoding, self.decode_error) except: # happens when fails to decoded input process.terminate() input_.read() raise if not line: break capture.write(line) if realtime: realtime.write(line) realtime.flush() # required if on byte buffering mode def execute(self, out=None, err=None): """ Execute command action both stdout and stderr from the command are captured and saved on self.out/err. Real time output is controlled by parameters @param out: None - no real time output a file like object (has write method) @param err: idem @return failure: - None: if successful - TaskError: If subprocess return code is greater than 125 - TaskFailed: If subprocess return code isn't zero (and not greater than 125) """ try: action = self.expand_action() except Exception as exc: return TaskError( "CmdAction Error creating command string", exc) # set environ to change output buffering env = None if self.buffering: env = os.environ.copy() env['PYTHONUNBUFFERED'] = '1' # spawn task process process = subprocess.Popen( action, shell=self.shell, #bufsize=2, # ??? 
no effect use PYTHONUNBUFFERED instead
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            env=env,
            **self.pkwargs)

        output = StringIO()
        errput = StringIO()
        t_out = Thread(target=self._print_process_output,
                       args=(process, process.stdout, output, out))
        t_err = Thread(target=self._print_process_output,
                       args=(process, process.stderr, errput, err))
        t_out.start()
        t_err.start()
        t_out.join()
        t_err.join()

        self.out = output.getvalue()
        self.err = errput.getvalue()
        self.result = self.out + self.err

        # make sure process really terminated
        process.wait()

        # task error - based on:
        # http://www.gnu.org/software/bash/manual/bashref.html#Exit-Status
        # it doesn't make much difference to return as Error or Failed anyway
        if process.returncode > 125:
            return TaskError("Command error: '%s' returned %s" %
                             (action, process.returncode))

        # task failure
        if process.returncode != 0:
            return TaskFailed("Command failed: '%s' returned %s" %
                              (action, process.returncode))

        # save stdout in values
        if self.save_out:
            self.values[self.save_out] = self.out

    def expand_action(self):
        """Expand action using task meta information if action is a string.
        Convert `Path` elements to `str` if action is a list.

        @returns: string -> expanded string if action is a string
                  list - string -> expanded list of command elements
        """
        if not self.task:
            return self.action

        # can't expand keywords if action is a list of strings
        if isinstance(self.action, list):
            action = []
            for element in self.action:
                if isinstance(element, str):
                    action.append(element)
                elif isinstance(element, PurePath):
                    action.append(str(element))
                else:
                    msg = ("%s. CmdAction element must be a str " +
                           "or Path from pathlib.
Got '%r' (%s)")
                    raise InvalidTask(
                        msg % (self.task.name, element, type(element)))
            return action

        subs_dict = {'targets': " ".join(self.task.targets),
                     'dependencies': " ".join(self.task.file_dep)}
        # only include 'changed' if it is set
        if self.task.dep_changed is not None:
            subs_dict['changed'] = " ".join(self.task.dep_changed)
        # task option parameters
        subs_dict.update(self.task.options)
        # convert positional parameters from list to space-separated string
        if self.task.pos_arg:
            if self.task.pos_arg_val:
                pos_val = ' '.join(self.task.pos_arg_val)
            else:
                pos_val = ''
            subs_dict[self.task.pos_arg] = pos_val
        return self.action % subs_dict

    def __str__(self):
        return "Cmd: %s" % self._action

    def __repr__(self):
        return "<CmdAction: '%s'>" % str(self._action)


class Writer(object):
    """write to many streams"""
    def __init__(self, *writers):
        """@param writers - file stream like objects"""
        self.writers = []
        self._isatty = True
        for writer in writers:
            self.add_writer(writer)

    def add_writer(self, stream, isatty=None):
        """adds a stream to the list of writers
        @param isatty: (bool) if specified overwrites real isatty from stream
        """
        self.writers.append(stream)
        isatty = stream.isatty() if (isatty is None) else isatty
        self._isatty = self._isatty and isatty

    def write(self, text):
        """write 'text' to all streams"""
        for stream in self.writers:
            stream.write(text)

    def flush(self):
        """flush all streams"""
        for stream in self.writers:
            stream.flush()

    def isatty(self):
        return self._isatty


class PythonAction(BaseAction):
    """Python action. Execute a python callable.
@ivar py_callable: (callable) Python callable @ivar args: (sequence) Extra arguments to be passed to py_callable @ivar kwargs: (dict) Extra keyword arguments to be passed to py_callable @ivar task(Task): reference to task that contains this action """ def __init__(self, py_callable, args=None, kwargs=None, task=None): #pylint: disable=W0231 self.py_callable = py_callable self.task = task self.out = None self.err = None self.result = None self.values = {} if args is None: self.args = [] else: self.args = args if kwargs is None: self.kwargs = {} else: self.kwargs = kwargs # check valid parameters if not hasattr(self.py_callable, '__call__'): msg = "%r PythonAction must be a 'callable' got %r." raise InvalidTask(msg % (self.task, self.py_callable)) if inspect.isclass(self.py_callable): msg = "%r PythonAction can not be a class got %r." raise InvalidTask(msg % (self.task, self.py_callable)) if inspect.isbuiltin(self.py_callable): msg = "%r PythonAction can not be a built-in got %r." raise InvalidTask(msg % (self.task, self.py_callable)) if type(self.args) is not tuple and type(self.args) is not list: msg = "%r args must be a 'tuple' or a 'list'. got '%s'." raise InvalidTask(msg % (self.task, self.args)) if type(self.kwargs) is not dict: msg = "%r kwargs must be a 'dict'. got '%s'" raise InvalidTask(msg % (self.task, self.kwargs)) def _prepare_kwargs(self): return BaseAction._prepare_kwargs(self.task, self.py_callable, self.args, self.kwargs) def execute(self, out=None, err=None): """Execute command action both stdout and stderr from the command are captured and saved on self.out/err. 
Real time output is controlled by parameters @param out: None - no real time output a file like object (has write method) @param err: idem @return failure: see CmdAction.execute """ # set std stream old_stdout = sys.stdout output = StringIO() out_writer = Writer() # capture output but preserve isatty() from original stream out_writer.add_writer(output, old_stdout.isatty()) if out: out_writer.add_writer(out) sys.stdout = out_writer old_stderr = sys.stderr errput = StringIO() err_writer = Writer() err_writer.add_writer(errput, old_stderr.isatty()) if err: err_writer.add_writer(err) sys.stderr = err_writer kwargs = self._prepare_kwargs() # execute action / callable try: returned_value = self.py_callable(*self.args, **kwargs) except Exception as exception: return TaskError("PythonAction Error", exception) finally: # restore std streams /log captured streams sys.stdout = old_stdout sys.stderr = old_stderr self.out = output.getvalue() self.err = errput.getvalue() # if callable returns false. Task failed if returned_value is False: return TaskFailed("Python Task failed: '%s' returned %s" % (self.py_callable, returned_value)) elif returned_value is True or returned_value is None: pass elif isinstance(returned_value, str): self.result = returned_value elif isinstance(returned_value, dict): self.values = returned_value self.result = returned_value elif isinstance(returned_value, (TaskFailed, TaskError)): return returned_value else: return TaskError("Python Task error: '%s'. 
It must return:\n"
                             "False for failed task.\n"
                             "True, None, string or dict for successful task\n"
                             "returned %s (%s)" %
                             (self.py_callable, returned_value,
                              type(returned_value)))

    def __str__(self):
        # get object description excluding runtime memory address
        return "Python: %s" % str(self.py_callable)[1:].split(' at ')[0]

    def __repr__(self):
        return "<PythonAction: '%s'>" % (repr(self.py_callable))


def create_action(action, task_ref):
    """
    Create action using proper constructor based on the parameter type

    @param action: Action to be created
    @type action: L{BaseAction} subclass object, str, tuple or callable
    @raise InvalidTask: If action parameter type isn't valid
    """
    if isinstance(action, BaseAction):
        action.task = task_ref
        return action

    if isinstance(action, str):
        return CmdAction(action, task_ref, shell=True)

    if isinstance(action, list):
        return CmdAction(action, task_ref, shell=False)

    if isinstance(action, tuple):
        if len(action) > 3:
            msg = "Task '%s': invalid 'actions' tuple length. got:%r %s"
            raise InvalidTask(msg % (task_ref.name, action, type(action)))
        py_callable, args, kwargs = (list(action) + [None]*(3-len(action)))
        return PythonAction(py_callable, args, kwargs, task_ref)

    if hasattr(action, '__call__'):
        return PythonAction(action, task=task_ref)

    msg = "Task '%s': invalid 'actions' type.
got:%r %s"
    raise InvalidTask(msg % (task_ref.name, action, type(action)))


# ---- doit-0.30.3/doit/api.py ----

"""Definition of stuff that can be used directly by a user
   in a dodo.py file."""

import sys

from doit.cmd_base import ModuleTaskLoader
from doit.doit_cmd import DoitMain


def run(task_creators):
    """run doit using task_creators

    @param task_creators: module or dict containing task creators
    """
    sys.exit(DoitMain(ModuleTaskLoader(task_creators)).run(sys.argv[1:]))


# ---- doit-0.30.3/doit/cmd_auto.py ----

"""starts a long-running process that watches the file system and
automatically executes tasks when file dependencies change"""

import os
import time
import sys
from multiprocessing import Process
from subprocess import call

from .exceptions import InvalidCommand
from .cmdparse import CmdParse
from .filewatch import FileModifyWatcher
from .cmd_base import tasks_and_deps_iter
from .cmd_base import DoitCmdBase
from .cmd_run import opt_verbosity, Run


opt_reporter = {
    'name': 'reporter',
    'short': None,
    'long': None,
    'type': str,
    'default': 'executed-only',
}

opt_success = {
    'name': 'success_callback',
    'short': None,
    'long': 'success',
    'type': str,
    'default': '',
}

opt_failure = {
    'name': 'failure_callback',
    'short': None,
    'long': 'failure',
    'type': str,
    'default': '',
}


class Auto(DoitCmdBase):
    """the main process will never load tasks,
    delegates execution to a forked process.

    python caches imported modules,
    but using a different process we can have dependencies on python modules
    making sure the newest module will be used.
""" doc_purpose = "automatically execute tasks when a dependency changes" doc_usage = "[TASK ...]" doc_description = None execute_tasks = True cmd_options = (opt_verbosity, opt_reporter, opt_success, opt_failure) @staticmethod def _find_file_deps(tasks, sel_tasks): """find all file deps @param tasks (dict) @param sel_tasks(list - str) """ deps = set() for task in tasks_and_deps_iter(tasks, sel_tasks): deps.update(task.file_dep) deps.update(task.watch) return deps @staticmethod def _dep_changed(watch_files, started, targets): """check if watched files was modified since execution started""" for watched in watch_files: # assume that changes to targets were done by doit itself if watched in targets: continue if os.stat(watched).st_mtime > started: return True return False @staticmethod def _run_callback(result, success_callback, failure_callback): '''run callback if any after task execution''' if result == 0: if success_callback: call(success_callback, shell=True) else: if failure_callback: call(failure_callback, shell=True) def run_watch(self, params, args): """Run tasks and wait for file system event This method is executed in a forked process. The process is terminated after a single event. """ started = time.time() # execute tasks using Run Command arun = Run(task_loader=self.loader) params.add_defaults(CmdParse(arun.get_options()).parse([])[0]) try: result = arun.execute(params, args) # ??? actually tested but coverage doesnt get it... 
except InvalidCommand as err: # pragma: no cover
            sys.stderr.write("ERROR: %s\n" % str(err))
            sys.exit(3)

        # user custom callbacks for result
        self._run_callback(result,
                           params.pop('success_callback', None),
                           params.pop('failure_callback', None))

        # get list of files to watch on file system
        watch_files = self._find_file_deps(arun.control.tasks,
                                           arun.control.selected_tasks)

        # Check for timestamp changes since run started,
        # if changed, restart straight away
        if not self._dep_changed(watch_files, started, arun.control.targets):
            # set event handler. just terminate process.
            class DoitAutoRun(FileModifyWatcher):
                def handle_event(self, event):
                    # print("FS EVENT -> {}".format(event))
                    sys.exit(result)
            file_watcher = DoitAutoRun(watch_files)
            # kick start watching process
            file_watcher.loop()

    def execute(self, params, args):
        """loop executing tasks until process is interrupted"""
        while True:
            try:
                proc = Process(target=self.run_watch, args=(params, args))
                proc.start()
                proc.join()
                # if error on given command line, terminate.
                if proc.exitcode == 3:
                    return 3
            except KeyboardInterrupt:
                return 0


# ---- doit-0.30.3/doit/cmd_base.py ----

import inspect
import sys
from collections import deque

from . import version
from .cmdparse import CmdOption, CmdParse
from .exceptions import InvalidCommand, InvalidDodoFile
from .dependency import CHECKERS, DbmDB, JsonDB, SqliteDB, Dependency
from .plugin import PluginDict
from .
import loader


def version_tuple(ver_in):
    """convert a version string or tuple into a 3-element tuple with ints

    Any part that is not a number (dev0, a2, b4) will be converted to -1
    """
    result = []
    if isinstance(ver_in, str):
        parts = ver_in.split('.')
    else:
        parts = ver_in
    for rev in parts:
        try:
            result.append(int(rev))
        except:
            result.append(-1)
    assert len(result) == 3
    return result


class Command(object):
    """third-party should subclass this for commands that do not use tasks

    :cvar name: (str) name of sub-cmd to be use from cmdline
    :cvar doc_purpose: (str) single line cmd description
    :cvar doc_usage: (str) describe accepted parameters
    :cvar doc_description: (str) long description/help for cmd
    :cvar cmd_options:
          (list of dict) see cmdparse.CmdOption for dict format
    """
    # if not specified uses the class name
    name = None

    # doc attributes, should be sub-classed
    doc_purpose = ''
    doc_usage = ''
    doc_description = None  # None value will completely omit line from doc

    # sequence of dicts
    cmd_options = tuple()

    # `execute_tasks` indicates whether this command executes task's actions.
    # This is used by the loader to indicate when delayed task creation
    # should be used.
    execute_tasks = False

    def __init__(self, config=None, **kwargs):
        """configure command

        :param config: dict
            Set extra configuration values, these vals can come from:
              * directly passed when using the API - through DoitMain.run()
              * from an INI configuration file
        """
        self.name = self.get_name()
        # config includes all option values and plugins
        self.config = config if config else {}
        self._cmdparser = None

        # config_vals contains cmd option values
        self.config_vals = {}
        if 'GLOBAL' in self.config:
            self.config_vals.update(self.config['GLOBAL'])
        if self.name in self.config:
            self.config_vals.update(self.config[self.name])

        # Use post-mortem PDB in case of error loading tasks.
        # Only available for `run` command.
self.pdb = False

    @classmethod
    def get_name(cls):
        """get command name as used from command line"""
        return cls.name or cls.__name__.lower()

    @property
    def cmdparser(self):
        """get CmdParse instance for this command"""
        if not self._cmdparser:
            self._cmdparser = CmdParse(self.get_options())
            self._cmdparser.overwrite_defaults(self.config_vals)
        return self._cmdparser

    def get_options(self):
        """@return list of CmdOption
        """
        return [CmdOption(opt) for opt in self.cmd_options]

    def execute(self, opt_values, pos_args): # pragma: no cover
        """execute command
        :param opt_values: (dict) with cmd_options values
        :param pos_args: (list) of cmd-line positional arguments
        """
        raise NotImplementedError()

    def parse_execute(self, in_args):
        """helper. just parse parameters and execute command

        @args: see method parse
        @returns: result of self.execute
        """
        params, args = self.cmdparser.parse(in_args)
        self.pdb = params.get('pdb', False)
        return self.execute(params, args)

    def help(self):
        """return help text"""
        text = []
        text.append("Purpose: %s" % self.doc_purpose)
        text.append("Usage: doit %s %s" % (self.name, self.doc_usage))
        text.append('')

        text.append("Options:")
        for opt in self.cmdparser.options:
            text.extend(opt.help_doc())

        if self.doc_description is not None:
            text.append("")
            text.append("Description:")
            text.append(self.doc_description)
        return "\n".join(text)


######################################################################

# choose internal dependency file.
opt_depfile = {
    'name': 'dep_file',
    'short': '',
    'long': 'db-file',
    'type': str,
    'default': ".doit.db",
    'help': "file used to save successful runs [default: %(default)s]"
}

# dependency file DB backend
opt_backend = {
    'name': 'backend',
    'short': '',
    'long': 'backend',
    'type': str,
    'default': "dbm",
    'help': ("Select dependency file backend.
[default: %(default)s]") } opt_check_file_uptodate = { 'name': 'check_file_uptodate', 'short': '', 'long': 'check_file_uptodate', 'type': str, 'default': 'md5', 'help': """\ Choose how to check if files have been modified. Available options [default: %(default)s]: 'md5': use the md5sum 'timestamp': use the timestamp """ } #### options related to dodo.py # select dodo file containing tasks opt_dodo = { 'name': 'dodoFile', 'short':'f', 'long': 'file', 'type': str, 'default': 'dodo.py', 'help':"load task from dodo FILE [default: %(default)s]" } # cwd opt_cwd = { 'name': 'cwdPath', 'short':'d', 'long': 'dir', 'type': str, 'default': None, 'help':("set path to be used as cwd directory (file paths on " + "dodo file are relative to dodo.py location).") } # seek dodo file on parent folders opt_seek_file = { 'name': 'seek_file', 'short': 'k', 'long': 'seek-file', 'type': bool, 'default': False, 'help': ("seek dodo file on parent folders " + "[default: %(default)s]") } class TaskLoader(object): """task-loader interface responsible of creating Task objects Subclasses must implement the method `load_tasks` :cvar cmd_options: (list of dict) see cmdparse.CmdOption for dict format """ cmd_options = () def __init__(self): # list of command names, used to detect clash of task names and commands self.cmd_names = [] self.config = None # reference to config object taken from Command def load_tasks(self, cmd, opt_values, pos_args): # pragma: no cover """load tasks and DOIT_CONFIG :return: (tuple) list of Task, dict with DOIT_CONFIG options :param cmd: (doit.cmd_base.Command) current command being executed :param opt_values: (dict) with values for cmd_options :param pos_args: (list str) positional arguments from command line """ raise NotImplementedError() @staticmethod def _load_from(cmd, namespace, cmd_list): """load task from a module or dict with module members""" if inspect.ismodule(namespace): members = dict(inspect.getmembers(namespace)) else: members = namespace task_list = 
loader.load_tasks(members, cmd_list, cmd.execute_tasks)
        doit_config = loader.load_doit_config(members)
        return task_list, doit_config


class ModuleTaskLoader(TaskLoader):
    """load tasks from a module/dictionary containing task generators

    Usage: `ModuleTaskLoader(my_module)` or `ModuleTaskLoader(globals())`
    """
    cmd_options = ()

    def __init__(self, mod_dict):
        super(ModuleTaskLoader, self).__init__()
        self.mod_dict = mod_dict

    def load_tasks(self, cmd, params, args):
        return self._load_from(cmd, self.mod_dict, self.cmd_names)


class DodoTaskLoader(TaskLoader):
    """default task-loader create tasks from a dodo.py file"""
    cmd_options = (opt_dodo, opt_cwd, opt_seek_file)

    def load_tasks(self, cmd, params, args):
        dodo_module = loader.get_module(params['dodoFile'],
                                        params['cwdPath'],
                                        params['seek_file'])
        return self._load_from(cmd, dodo_module, self.cmd_names)


class DoitCmdBase(Command):
    """
    subclass must define:
    cmd_options => list of option dictionary (see CmdOption)
    _execute => method, argument names must be option names
    """
    base_options = (opt_depfile, opt_backend, opt_check_file_uptodate)

    def __init__(self, task_loader=None, cmds=None, **kwargs):
        super(DoitCmdBase, self).__init__(**kwargs)
        self.sel_tasks = None  # selected tasks for command
        self.dep_manager = None
        self.outstream = sys.stdout
        self.loader = self._get_loader(task_loader, cmds)
        self._backends = self.get_backends()

    def get_options(self):
        """from base class - merge base_options, loader_options
        and cmd_options
        """
        opt_list = (self.base_options + self.loader.cmd_options +
                    self.cmd_options)
        return [CmdOption(opt) for opt in opt_list]

    def _execute(self): # pragma: no cover
        """to be subclassed - actual command implementation"""
        raise NotImplementedError

    @staticmethod
    def check_minversion(minversion):
        """check if this version of doit satisfies the minimum required version

        Minimum version specified by configuration on dodo.
""" if minversion: if version_tuple(minversion) > version_tuple(version.VERSION): msg = ('Please update doit. ' 'Minimum version required is {required}. ' 'You are using {actual}. ') raise InvalidDodoFile(msg.format(required=minversion, actual=version.VERSION)) @staticmethod def get_checker_cls(check_file_uptodate): """return checker class to be used by dep_manager""" if isinstance(check_file_uptodate, str): if check_file_uptodate not in CHECKERS: msg = ("No check_file_uptodate named '{}'." " Type 'doit help run' to see a list " "of available checkers.") raise InvalidCommand(msg.format(check_file_uptodate)) return CHECKERS[check_file_uptodate] else: # user defined class return check_file_uptodate def _get_loader(self, task_loader=None, cmds=None): """return task loader :param task_loader: a TaskLoader class :param cmds: dict of available commands """ loader = None if task_loader: loader = task_loader # task_loader set from the API elif 'loader' in self.config_vals: # a plugin loader loader_name = self.config_vals['loader'] plugins = PluginDict() plugins.add_plugins(self.config, 'LOADER') loader = plugins.get_plugin(loader_name)() else: loader = DodoTaskLoader() # default loader if cmds: loader.cmd_names = list(sorted(cmds.keys())) loader.config = self.config return loader def get_backends(self): """return PluginDict of DB backends, including core and plugins""" backend_map = {'dbm': DbmDB, 'json': JsonDB, 'sqlite3': SqliteDB} # add plugins plugins = PluginDict() plugins.add_plugins(self.config, 'BACKEND') backend_map.update(plugins.to_dict()) # set choices, sub-classes might not have this option if 'backend' in self.cmdparser: choices = {k: getattr(v, 'desc', '') for k,v in backend_map.items()} self.cmdparser['backend'].choices = choices return backend_map def execute(self, params, args): """load dodo.py, set attributes and call self._execute :param params: instance of cmdparse.DefaultUpdate :param args: list of string arguments (containing task names) """ 
self.task_list, dodo_config = self.loader.load_tasks(
            self, params, args)

        # merge config values from dodo.py into params
        params.update_defaults(dodo_config)
        self.check_minversion(params.get('minversion'))

        # set selected tasks for command
        self.sel_tasks = args or params.get('default_tasks')

        # create dep manager
        db_class = self._backends.get(params['backend'])
        checker_cls = self.get_checker_cls(params['check_file_uptodate'])
        # note the command has the responsibility to call dep_manager.close()
        self.dep_manager = Dependency(db_class, params['dep_file'],
                                      checker_cls)

        # hack to pass parameter into _execute() calls that are not part
        # of command line options
        params['pos_args'] = args
        params['continue_'] = params.get('continue')

        # magic - create dict based on signature of _execute() method.
        # this is done so that _execute() has a nice API with named parameters
        # instead of just taking a dict.
        args_name = list(inspect.signature(self._execute).parameters.keys())
        exec_params = dict((n, params[n]) for n in args_name)
        return self._execute(**exec_params)


# helper functions to find list of tasks

def check_tasks_exist(tasks, name_list):
    """check task exist"""
    if not name_list:
        return
    for task_name in name_list:
        if task_name not in tasks:
            msg = "'%s' is not a task."
            raise InvalidCommand(msg % task_name)


# this is used by commands that do not execute tasks (list, clean, forget...)
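The "magic" in `DoitCmdBase.execute` above inspects the signature of `_execute()` and passes only the matching entries of the params dict as keyword arguments. A minimal standalone sketch of that pattern (the helper and sample names here are illustrative, not part of doit's API):

```python
import inspect

def call_with_named_params(func, params):
    """Call `func` passing only the entries of `params` whose keys match
    the parameter names in `func`'s signature; extra entries are ignored.
    This mirrors how `exec_params` is built from `inspect.signature`."""
    names = list(inspect.signature(func).parameters.keys())
    kwargs = {name: params[name] for name in names}
    return func(**kwargs)

def _execute(dryrun, cleandep):
    # stand-in for a subclass implementation with named parameters
    return (dryrun, cleandep)

params = {'dryrun': True, 'cleandep': False, 'backend': 'dbm'}
result = call_with_named_params(_execute, params)
# 'backend' is silently dropped because _execute does not declare it
```

Note a missing key still raises `KeyError`, so every declared parameter name of `_execute` must exist in params, which is why `pos_args` and `continue_` are injected by hand above.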
def tasks_and_deps_iter(tasks, sel_tasks, yield_duplicates=False):
    """iterator of select_tasks and its dependencies

    @param tasks (dict - Task)
    @param sel_tasks(list - str)
    """
    processed = set()  # str - task name
    to_process = deque(sel_tasks)  # str - task name
    # get initial task
    while to_process:
        task = tasks[to_process.popleft()]
        processed.add(task.name)
        yield task
        # FIXME this does not take calc_dep into account
        for task_dep in task.task_dep + task.setup_tasks:
            if (task_dep not in processed) and (task_dep not in to_process):
                to_process.append(task_dep)
            elif yield_duplicates:
                yield tasks[task_dep]


def subtasks_iter(tasks, task):
    """find all subtasks for a given task

    @param tasks (dict - Task)
    @param task (Task)
    """
    for name in task.task_dep:
        dep = tasks[name]
        if dep.is_subtask:
            yield dep


# ---- doit-0.30.3/doit/cmd_clean.py ----

from .cmd_base import DoitCmdBase
from .cmd_base import check_tasks_exist, tasks_and_deps_iter, subtasks_iter


opt_clean_dryrun = {
    'name': 'dryrun',
    'short': 'n',  # like make dry-run
    'long': 'dry-run',
    'type': bool,
    'default': False,
    'help': 'print actions without really executing them',
}

opt_clean_cleandep = {
    'name': 'cleandep',
    'short': 'c',  # clean
    'long': 'clean-dep',
    'type': bool,
    'default': False,
    'help': 'clean task dependencies too',
}

opt_clean_cleanall = {
    'name': 'cleanall',
    'short': 'a',  # all
    'long': 'clean-all',
    'type': bool,
    'default': False,
    'help': 'clean all tasks',
}


class Clean(DoitCmdBase):
    doc_purpose = "clean action / remove targets"
    doc_usage = "[TASK ...]"
    doc_description = ("If no task is specified clean default tasks and "
                       "set --clean-dep automatically.")

    cmd_options = (opt_clean_cleandep, opt_clean_cleanall, opt_clean_dryrun)

    def clean_tasks(self, tasks, dryrun):
        """ensure task clean-action is executed only once"""
        cleaned = set()
        for task in tasks:
            if task.name not in cleaned:
                cleaned.add(task.name)
                task.clean(self.outstream, dryrun)

    def
_execute(self, dryrun, cleandep, cleanall, pos_args=None):
        """Clean tasks
        @param task_list (list - L{Task}): list of all tasks from dodo file
        @ivar dryrun (bool): if True clean tasks are not executed
                             (just print out what would be executed)
        @param cleandep (bool): execute clean from task_dep
        @param cleanall (bool): clean all tasks
        @var default_tasks (list - string): list of default tasks
        @var selected_tasks (list - string): list of tasks selected
                                             from cmd-line
        """
        tasks = dict([(t.name, t) for t in self.task_list])
        # behaviour of cleandep is different if selected_tasks comes from
        # command line or DOIT_CONFIG.default_tasks
        selected_tasks = pos_args
        check_tasks_exist(tasks, selected_tasks)

        # get base list of tasks to be cleaned
        if cleanall:
            clean_list = [t.name for t in self.task_list]
        elif selected_tasks:
            clean_list = selected_tasks
        else:
            if self.sel_tasks is None:
                clean_list = [t.name for t in self.task_list]
            else:
                clean_list = self.sel_tasks
            # if cleaning default tasks enable clean_dep automatically
            cleandep = True

        # include dependencies in list
        if cleandep:
            # including repeated entries will guarantee that deps are listed
            # first when the list is reversed
            to_clean = list(tasks_and_deps_iter(tasks, clean_list, True))
        # include only subtasks in list
        else:
            to_clean = []
            for name in reversed(clean_list):
                task = tasks[name]
                to_clean.append(task)
                to_clean.extend(subtasks_iter(tasks, task))
        to_clean.reverse()
        self.clean_tasks(to_clean, dryrun)


# ---- doit-0.30.3/doit/cmd_completion.py ----

"""generate shell script with tab completion code for doit commands/tasks"""

import sys
from string import Template

from .exceptions import InvalidCommand
from .cmd_base import DoitCmdBase


opt_shell = {
    'name': 'shell',
    'short': 's',
    'long': 'shell',
    'type': str,
    'choices': (('bash', ''), ('zsh', '')),
    'default': 'bash',
    'help': 'Completion code for SHELL.
[default: %(default)s]', } opt_hardcode_tasks = { 'name': 'hardcode_tasks', 'short': '', 'long': 'hardcode-tasks', 'type': bool, 'default': False, 'help': 'Hardcode tasks from current task list.', } class TabCompletion(DoitCmdBase): """generate scripts for tab-completion If hardcode-tasks options is chosen it will get the task list from the current dodo file and include in the completion script. Otherwise the script will dynamically call `doit list` to get the list of tasks. If it is completing a sub-task (contains ':' in the name), it will always call doit while evaluating the options. """ doc_purpose = "generate script for tab-completion" doc_usage = "" doc_description = None cmd_options = (opt_shell, opt_hardcode_tasks, ) def __init__(self, cmds=None, **kwargs): super(TabCompletion, self).__init__(cmds=cmds, **kwargs) self.init_kwargs = kwargs self.init_kwargs['cmds'] = cmds if cmds: self.cmds = cmds.to_dict() # dict name - Command class def execute(self, opt_values, pos_args): if opt_values['shell'] == 'bash': self._generate_bash(opt_values, pos_args) elif opt_values['shell'] == 'zsh': self._generate_zsh(opt_values, pos_args) else: msg = 'Invalid option for --shell "{0}"' raise InvalidCommand(msg.format(opt_values['shell'])) @classmethod def _bash_cmd_args(cls, cmd): """return case item for completion of specific sub-command""" comp = [] if 'TASK' in cmd.doc_usage: comp.append('${tasks}') if 'COMMAND' in cmd.doc_usage: comp.append('${sub_cmds}') if comp: completion = '-W "{0}"'.format(' '.join(comp)) else: completion = '-f' # complete file return bash_subcmd_arg.format(cmd_name=cmd.name, completion=completion) def _generate_bash(self, opt_values, pos_args): # some applications built with doit do not use dodo.py files for opt in self.get_options(): if opt.name == 'dodoFile': get_dodo_part = bash_get_dodo pt_list_param = '--file="$dodof"' break else: get_dodo_part = '' pt_list_param = '' # dict with template values pt_bin_name = sys.argv[0].split('/')[-1] 
tmpl_vars = { 'pt_bin_name': pt_bin_name, 'pt_cmds': ' '.join(sorted(self.cmds)), 'pt_list_param': pt_list_param, } # if hardcode tasks if opt_values['hardcode_tasks']: self.task_list, _ = self.loader.load_tasks( self, opt_values, pos_args) task_names = (t.name for t in self.task_list if not t.is_subtask) tmpl_vars['pt_tasks'] = '"{0}"'.format(' '.join(sorted(task_names))) else: tmpl_list_cmd = "$({0} list {1} --quiet 2>/dev/null)" tmpl_vars['pt_tasks'] = tmpl_list_cmd.format(pt_bin_name, pt_list_param) # case statement to complete sub-commands cmds_args = [] for name in sorted(self.cmds): cmd_class = self.cmds[name] cmd = cmd_class(**self.init_kwargs) cmds_args.append(self._bash_cmd_args(cmd)) comp_subcmds = ("\n case ${words[1]} in\n" + "".join(cmds_args) + "\n esac\n") template = Template(bash_start + bash_opt_file + get_dodo_part + bash_task_list + bash_first_arg + comp_subcmds + bash_end) self.outstream.write(template.safe_substitute(tmpl_vars)) @staticmethod def _zsh_arg_line(opt): """create a text line for completion of a command arg""" # '(-c|--continue)'{-c,--continue}'[continue executing tasks...]' \ # '--db-file[file used to save successful runs]' \ if opt.short and opt.long: tmpl = ('"(-{0.short}|--{0.long})"{{-{0.short},--{0.long}}}"' '[{help}]" \\') elif not opt.short and opt.long: tmpl = '"--{0.long}[{help}]" \\' elif opt.short and not opt.long: tmpl = '"-{0.short}[{help}]" \\' else: # without short or long options cant be really used return '' ohelp = opt.help.replace(']', r'\]').replace('"', r'\"') return tmpl.format(opt, help=ohelp).replace('\n', ' ') @classmethod def _zsh_arg_list(cls, cmd): """return list of arguments lines for zsh completion""" args = [] for opt in cmd.get_options(): args.append(cls._zsh_arg_line(opt)) if 'TASK' in cmd.doc_usage: args.append("'*::task:(($tasks))'") if 'COMMAND' in cmd.doc_usage: args.append("'::cmd:(($commands))'") return args @classmethod def _zsh_cmd_args(cls, cmd): """create the content for "case" statement 
with all command options """ arg_lines = cls._zsh_arg_list(cmd) tmpl = """ ({cmd_name}) _command_args=( {args_body} '' ) ;; """ args_body = '\n '.join(arg_lines) return tmpl.format(cmd_name=cmd.name, args_body=args_body) # TODO: # detect correct dodo-file location # complete sub-tasks # task options def _generate_zsh(self, opt_values, pos_args): # deal with doit commands cmds_desc = [] cmds_args = [] for name in sorted(self.cmds): cmd_class = self.cmds[name] cmd = cmd_class(**self.init_kwargs) cmds_desc.append(" '{0}: {1}'".format(cmd.name, cmd.doc_purpose)) cmds_args.append(self._zsh_cmd_args(cmd)) template_vars = { 'pt_bin_name': sys.argv[0].split('/')[-1], 'pt_cmds':'\n '.join(cmds_desc), 'pt_cmds_args':'\n'.join(cmds_args), } if opt_values['hardcode_tasks']: self.task_list, _ = self.loader.load_tasks( self, opt_values, pos_args) lines = [] for task in self.task_list: if not task.is_subtask: lines.append("'{0}: {1}'".format(task.name, task.doc)) template_vars['pt_tasks'] = '(\n{0}\n)'.format( '\n'.join(sorted(lines))) else: tmp_tasks = Template( '''("${(f)$($pt_bin_name list --template '{name}: {doc}')}")''') template_vars['pt_tasks'] = tmp_tasks.safe_substitute(template_vars) template = Template(zsh_start) self.outstream.write(template.safe_substitute(template_vars)) ############## templates # Variables starting with 'pt_' belongs to the Python Template # to generate the script. # Remaining are shell variables used in the script. 
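The templates below are filled with `string.Template.safe_substitute`, so only the `pt_*` placeholders are expanded while shell-only syntax survives. A small self-contained sketch of why `safe_substitute` (and not `substitute`) is the right call here; the snippet text is a made-up miniature, not one of the real templates:

```python
from string import Template

# 'pt_*' names are Python Template placeholders;
# '${cur}', '${tasks}' and '$(compgen ...)' are shell syntax.
snippet = Template("""
_$pt_bin_name() {
    sub_cmds="$pt_cmds"
    COMPREPLY=( $(compgen -W "${tasks}" -- ${cur}) )
}
""")
# safe_substitute fills the known 'pt_*' names and leaves unknown or
# invalid placeholders untouched, whereas substitute() would raise
# KeyError/ValueError on the shell constructs.
filled = snippet.safe_substitute({'pt_bin_name': 'doit',
                                  'pt_cmds': 'run list clean'})
```

After substitution `filled` contains `_doit()` and `sub_cmds="run list clean"`, while `${cur}` and `$(compgen ...)` are emitted verbatim for the shell to evaluate.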
################################################################ ############### bash template bash_start = """# bash completion for $pt_bin_name # auto-generate by `$pt_bin_name tabcompletion` # to activate it you need to 'source' the generate script # $ source # reference => http://www.debian-administration.org/articles/317 # patch => http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=711879 _$pt_bin_name() { local cur prev words cword basetask sub_cmds tasks i dodof COMPREPLY=() # contains list of words with suitable completion # remove colon from word separator list because doit uses colon on task names _get_comp_words_by_ref -n : cur prev words cword # list of sub-commands sub_cmds="$pt_cmds" """ # FIXME - wont be necessary after adding support for options with type bash_opt_file = """ # options that take file/dir as values should complete file-system if [[ "$prev" == "-f" || "$prev" == "-d" || "$prev" == "-o" ]]; then _filedir return 0 fi if [[ "$cur" == *=* ]]; then prev=${cur/=*/} cur=${cur/*=/} if [[ "$prev" == "--file=" || "$prev" == "--dir=" || "$prev" == "--output-file=" ]]; then _filedir -o nospace return 0 fi fi """ bash_get_dodo = """ # get name of the dodo file for (( i=0; i < ${#words[@]}; i++)); do case "${words[i]}" in -f) dodof=${words[i+1]} break ;; --file=*) dodof=${words[i]/*=/} break ;; esac done # dodo file not specified, use default if [ ! 
$dodof ] then dodof="dodo.py" fi """ bash_task_list = """ # get task list # if it there is colon it is getting a subtask, complete only subtask names if [[ "$cur" == *:* ]]; then # extract base task name (remove everything after colon) basetask=${cur%:*} # sub-tasks tasks=$($pt_bin_name list $pt_list_param --quiet --all ${basetask} 2>/dev/null) COMPREPLY=( $(compgen -W "${tasks}" -- ${cur}) ) __ltrim_colon_completions "$cur" return 0 # without colons get only top tasks else tasks=$pt_tasks fi """ bash_first_arg = """ # match for first parameter must be sub-command or task # FIXME doit accepts options "-" in the first parameter but we ignore this case if [[ ${cword} == 1 ]] ; then COMPREPLY=( $(compgen -W "${sub_cmds} ${tasks}" -- ${cur}) ) return 0 fi """ bash_subcmd_arg = """ {cmd_name}) COMPREPLY=( $(compgen {completion} -- $cur) ) return 0 ;;""" bash_end = """ # if there is already one parameter match only tasks (no commands) COMPREPLY=( $(compgen -W "${tasks}" -- ${cur}) ) } complete -o filenames -F _$pt_bin_name $pt_bin_name """ ################################################################ ############### zsh template zsh_start = """#compdef $pt_bin_name _$pt_bin_name() { local -a commands tasks # format is 'completion:description' commands=( $pt_cmds ) # split output by lines to create an array tasks=$pt_tasks # complete command or task name if (( CURRENT == 2 )); then _arguments -A : '::cmd:(($commands))' '::task:(($tasks))' return fi # revome program name from $words and decrement CURRENT local curcontext context state state_desc line _arguments -C '*:: :->' # complete sub-command or task options local -a _command_args case "$words[1]" in $pt_cmds_args # default completes task names (*) _command_args='*::task:(($tasks))' ;; esac # -A no options will be completed after the first non-option argument _arguments -A : $_command_args return 0 } _$pt_bin_name """ 
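The `_zsh_arg_line` helper above escapes `]` and `"` in an option's help text before embedding it in a zsh `_arguments` spec, since `]` closes the description and `"` closes the spec string. A standalone sketch of just that escaping step (the function name is illustrative):

```python
def escape_zsh_help(help_text):
    # ']' closes the [description] and '"' closes the quoted spec,
    # so both must be backslash-escaped (as in _zsh_arg_line above)
    return help_text.replace(']', r'\]').replace('"', r'\"')

# a help string containing both troublesome characters
print(escape_zsh_help('limit [default: "all"]'))
```

Note that `[` needs no escaping: only the closing bracket can prematurely terminate the description field.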
doit-0.30.3/doit/cmd_dumpdb.py

import pprint
import json
import dbm
from dbm import whichdb

from .exceptions import InvalidCommand
from .cmd_base import Command, opt_depfile


def dbm_iter(db):
    # try dictionary interface - ok in python2 and dumbdb
    try:
        return db.items()
    except:  # pragma: no cover
        pass

    # try firstkey/nextkey - ok for py3 dbm.gnu
    try:  # pragma: no cover
        db.firstkey
        def iter_gdbm(db):
            k = db.firstkey()
            while k != None:
                yield k, db[k]
                k = db.nextkey(k)
        return iter_gdbm(db)
    except:  # pragma: no cover
        raise InvalidCommand("It seems your DB backend doesn't support "
                             "iterating through all elements")


class DumpDB(Command):
    """dump dependency DB"""
    doc_purpose = 'dump dependency DB'
    doc_usage = ''
    doc_description = None

    cmd_options = (opt_depfile,)

    def execute(self, opt_values, pos_args):
        dep_file = opt_values['dep_file']
        db_type = whichdb(dep_file)
        print("DBM type is '%s'" % db_type)
        if db_type in ('dbm', 'dbm.ndbm'):  # pragma: no cover
            raise InvalidCommand('ndbm does not support iteration of elements')
        data = dbm.open(dep_file)
        for key, value_str in dbm_iter(data):
            value_dict = json.loads(value_str.decode('utf-8'))
            value_fmt = pprint.pformat(value_dict, indent=4, width=100)
            print("{key} -> {value}".format(key=key, value=value_fmt))


doit-0.30.3/doit/cmd_forget.py

from .cmd_base import DoitCmdBase, check_tasks_exist
from .cmd_base import tasks_and_deps_iter, subtasks_iter


opt_forget_taskdep = {
    'name': 'forget_sub',
    'short': 's',
    'long': 'follow-sub',
    'type': bool,
    'default': False,
    'help': 'forget task dependencies too',
}


class Forget(DoitCmdBase):
    doc_purpose = "clear successful run status from internal DB"
    doc_usage = "[TASK ...]"
    doc_description = None

    cmd_options = (opt_forget_taskdep, )

    def _execute(self, forget_sub):
        """remove saved data of successful runs from DB
        """
        # no task specified.
forget all if not self.sel_tasks: self.dep_manager.remove_all() self.outstream.write("forgetting all tasks\n") # forget tasks from list else: tasks = dict([(t.name, t) for t in self.task_list]) check_tasks_exist(tasks, self.sel_tasks) forget_list = self.sel_tasks if forget_sub: to_forget = list(tasks_and_deps_iter(tasks, forget_list, True)) else: to_forget = [] for name in forget_list: task = tasks[name] to_forget.append(task) to_forget.extend(subtasks_iter(tasks, task)) for task in to_forget: # forget it - remove from dependency file self.dep_manager.remove(task.name) self.outstream.write("forgetting %s\n" % task.name) self.dep_manager.close() doit-0.30.3/doit/cmd_help.py000066400000000000000000000117561305250115000156220ustar00rootroot00000000000000from .exceptions import InvalidDodoFile from .cmdparse import TaskParse, CmdOption from .cmd_base import DoitCmdBase HELP_TASK = """ Task Dictionary parameters -------------------------- Tasks are defined by functions starting with the string ``task_``. It must return a dictionary describing the task with the following fields: actions [required]: - type: Python-Task -> callable or tuple (callable, `*args`, `**kwargs`) - type: Cmd-Task -> string or list of strings (each item is a different command). to be executed by shell. - type: Group-Task -> None. basename: - type: string. if present use it as task name instead of taking name from python function name [required for sub-task]: - type: string. sub-task identifier file_dep: - type: list. items: * file (string) path relative to the dodo file task_dep: - type: list. items: * task name (string) setup: - type: list. items: * task name (string) targets: - type: list of strings - each item is file-path relative to the dodo file (accepts both files and folders) uptodate: - type: list. items: * None - None values are just ignored * bool - False indicates task is not up-to-date * callable - returns bool or None. 
must take 2 positional parameters (task, values) calc_dep: - type: list. items: * task name (string) getargs: - type: dictionary * key: string with the name of the function argument (used in a python-action) * value: tuple of (, ) teardown: - type: (list) of actions (see above) doc: - type: string -> the description text clean: - type: (True bool) remove target files - type: (list) of actions (see above) params: - type: (list) of dictionaries containing: - name [required] (string) parameter identifier - default [required] default value for parameter - short [optional] (string - 1 letter) short option string - long [optional] (string) long option string - type [optional] (callable) the option will be converted to this type - choices [optional] (list of 2-tuple str) limit option values, second tuple element is a help description for value - help [optional] (string) description displayed by help command - inverse [optional] (string) for a bool parameter set value to False pos_arg: - type: string -> name of the function argument to receive list of positional arguments verbosity: - type: int - 0: capture (do not print) stdout/stderr from task. - 1: (default) capture stdout only. - 2: do not capture anything (print everything immediately). title: - type: callable taking one parameter as argument (the task reference) watch: - type: list. 
items:
     * (string) path to be watched when using the `auto` command
"""


class Help(DoitCmdBase):
    doc_purpose = "show help"
    doc_usage = "[TASK] [COMMAND]"
    doc_description = None

    def __init__(self, cmds=None, **kwargs):
        self.init_kwargs = kwargs
        super(Help, self).__init__(cmds=cmds, **kwargs)
        self.cmds = cmds.to_dict()  # dict name - Command class

    @staticmethod
    def print_usage(cmds):
        """print doit "usage" (basic help) instructions

        :var cmds: dict name -> Command class
        """
        print("doit -- automation tool")
        print("http://pydoit.org")
        print('')
        print("Commands")
        for cmd_name in sorted(cmds.keys()):
            cmd = cmds[cmd_name]
            print("  doit {:16s}  {}".format(cmd_name, cmd.doc_purpose))
        print("")
        print("  doit help              show help / reference")
        print("  doit help task         show help on task dictionary fields")
        print("  doit help <command>    show command usage")
        print("  doit help <task-name>  show task usage")

    @staticmethod
    def print_task_help():
        """print help for 'task' usage """
        print(HELP_TASK)

    def _execute(self, pos_args):
        """execute help for specific task"""
        task_name = pos_args[0]
        tasks = dict([(t.name, t) for t in self.task_list])
        task = tasks.get(task_name, None)
        if not task:
            return False
        print("%s  %s" % (task.name, task.doc))
        taskcmd = TaskParse([CmdOption(opt) for opt in task.params])
        for opt in taskcmd.options:
            print("\n".join(opt.help_doc()))
        return True

    def execute(self, params, args):
        """execute cmd 'help' """
        if len(args) != 1:
            self.print_usage(self.cmds)
        elif args[0] == 'task':
            self.print_task_help()
        # help on command
        elif args[0] in self.cmds:
            cmd = self.cmds[args[0]](**self.init_kwargs)
            print(cmd.help())
        else:
            # help of specific task
            try:
                # call base class implementation to execute _execute()
                if not DoitCmdBase.execute(self, params, args):
                    self.print_usage(self.cmds)
            except InvalidDodoFile:
                self.print_usage(self.cmds)
        return 0


doit-0.30.3/doit/cmd_ignore.py

from .cmd_base import DoitCmdBase, check_tasks_exist, subtasks_iter
class Ignore(DoitCmdBase):
    doc_purpose = "ignore task (skip) on subsequent runs"
    doc_usage = "TASK [TASK ...]"
    doc_description = None

    cmd_options = ()

    def _execute(self, pos_args):
        """mark tasks to be ignored
        @param ignore_tasks: (list - str) tasks to be ignored.
        """
        ignore_tasks = pos_args
        # no task specified.
        if not ignore_tasks:
            msg = "You can't ignore all tasks! Please select a task.\n"
            self.outstream.write(msg)
            return

        tasks = dict([(t.name, t) for t in self.task_list])
        check_tasks_exist(tasks, ignore_tasks)
        for task_name in ignore_tasks:
            # for group tasks also remove all tasks from group
            sub_list = [t.name for t in subtasks_iter(tasks, tasks[task_name])]
            for to_ignore in [task_name] + sub_list:
                # ignore it - remove from dependency file
                self.dep_manager.ignore(tasks[to_ignore])
                self.outstream.write("ignoring %s\n" % to_ignore)
        self.dep_manager.close()


doit-0.30.3/doit/cmd_info.py

"""command doit info - display info on task metadata"""

import pprint

from .cmd_base import DoitCmdBase
from .exceptions import InvalidCommand


opt_show_execute_status = {
    'name': 'show_execute_status',
    'short': 's',
    'long': 'status',
    'type': bool,
    'default': False,
    'help': """Shows reasons why this task would be executed.
[default: %(default)s]"""
}


class Info(DoitCmdBase):
    """command doit info"""

    doc_purpose = "show info about a task"
    doc_usage = "TASK"
    doc_description = None

    cmd_options = (opt_show_execute_status, )

    def _execute(self, pos_args, show_execute_status=False):
        if len(pos_args) != 1:
            msg = ('doit info failed, must select *one* task.'
'\nCheck `doit help info`.') raise InvalidCommand(msg) task_name = pos_args[0] # dict of all tasks tasks = dict([(t.name, t) for t in self.task_list]) printer = pprint.PrettyPrinter(indent=4, stream=self.outstream) task = tasks[task_name] task_attrs = ( 'name', 'file_dep', 'task_dep', 'setup_tasks', 'calc_dep', 'targets', # these fields usually contains reference to python functions # 'actions', 'clean', 'uptodate', 'teardown', 'title' 'getargs', 'params', 'verbosity', 'watch' ) for attr in task_attrs: value = getattr(task, attr) # by default only print fields that have non-empty value if value: self.outstream.write('\n{0}:'.format(attr)) printer.pprint(getattr(task, attr)) # print reason task is not up-to-date if show_execute_status: status = self.dep_manager.get_status(task, tasks, get_log=True) if status.status == 'up-to-date': self.outstream.write('\nTask is up-to-date.\n') return 0 else: # status.status == 'run' or status.status == 'error' self.outstream.write('\nTask is not up-to-date:\n') self.outstream.write(self.get_reasons(status.reasons)) self.outstream.write('\n') return 1 @staticmethod def get_reasons(reasons): '''return string with description of reason task is not up-to-date''' lines = [] if reasons['has_no_dependencies']: lines.append(' * The task has no dependencies.') if reasons['uptodate_false']: lines.append(' * The following uptodate objects evaluate to false:') for utd, utd_args, utd_kwargs in reasons['uptodate_false']: msg = ' - {} (args={}, kwargs={})' lines.append(msg.format(utd, utd_args, utd_kwargs)) if reasons['checker_changed']: msg = ' * The file_dep checker changed from {0} to {1}.' 
lines.append(msg.format(*reasons['checker_changed'])) sentences = { 'missing_target': 'The following targets do not exist:', 'changed_file_dep': 'The following file dependencies have changed:', 'missing_file_dep': 'The following file dependencies are missing:', 'removed_file_dep': 'The following file dependencies were removed:', 'added_file_dep': 'The following file dependencies were added:', } for reason, sentence in sentences.items(): entries = reasons.get(reason) if entries: lines.append(' * {}'.format(sentence)) for item in entries: lines.append(' - {}'.format(item)) return '\n'.join(lines) doit-0.30.3/doit/cmd_list.py000066400000000000000000000103611305250115000156340ustar00rootroot00000000000000from .cmd_base import DoitCmdBase, check_tasks_exist, subtasks_iter opt_listall = { 'name': 'subtasks', 'short':'', 'long': 'all', 'type': bool, 'default': False, 'help': "list include all sub-tasks from dodo file" } opt_list_quiet = { 'name': 'quiet', 'short': 'q', 'long': 'quiet', 'type': bool, 'default': False, 'help': 'print just task name (less verbose than default)' } opt_list_status = { 'name': 'status', 'short': 's', 'long': 'status', 'type': bool, 'default': False, 'help': 'print task status (R)un, (U)p-to-date, (I)gnored' } opt_list_private = { 'name': 'private', 'short': 'p', 'long': 'private', 'type': bool, 'default': False, 'help': "print private tasks (start with '_')" } opt_list_dependencies = { 'name': 'list_deps', 'short': '', 'long': 'deps', 'type': bool, 'default': False, 'help': ("print list of dependencies " "(file dependencies only)") } opt_template = { 'name': 'template', 'short': '', 'long': 'template', 'type': str, 'default': None, 'help': "display entries with template" } class List(DoitCmdBase): doc_purpose = "list tasks from dodo file" doc_usage = "[TASK ...]" doc_description = None cmd_options = (opt_listall, opt_list_quiet, opt_list_status, opt_list_private, opt_list_dependencies, opt_template) STATUS_MAP = {'ignore': 'I', 'up-to-date': 
'U', 'run': 'R', 'error': 'E'} def _print_task(self, template, task, status, list_deps, tasks): """print a single task""" line_data = {'name': task.name, 'doc':task.doc} # FIXME group task status is never up-to-date if status: # FIXME: 'ignore' handling is ugly if self.dep_manager.status_is_ignore(task): task_status = 'ignore' else: task_status = self.dep_manager.get_status(task, tasks).status line_data['status'] = self.STATUS_MAP[task_status] self.outstream.write(template.format(**line_data)) # print dependencies if list_deps: for dep in task.file_dep: self.outstream.write(" - %s\n" % dep) self.outstream.write("\n") @staticmethod def _list_filtered(tasks, filter_tasks, include_subtasks): """return list of task based on selected 'filter_tasks' """ check_tasks_exist(tasks, filter_tasks) # get task by name print_list = [] for name in filter_tasks: task = tasks[name] print_list.append(task) if include_subtasks: print_list.extend(subtasks_iter(tasks, task)) return print_list def _list_all(self, include_subtasks): """list of tasks""" print_list = [] for task in self.task_list: if (not include_subtasks) and task.is_subtask: continue print_list.append(task) return print_list def _execute(self, subtasks=False, quiet=True, status=False, private=False, list_deps=False, template=None, pos_args=None): """List task generators, in the order they were defined. 
""" filter_tasks = pos_args # dict of all tasks tasks = dict([(t.name, t) for t in self.task_list]) if filter_tasks: # list only tasks passed on command line print_list = self._list_filtered(tasks, filter_tasks, subtasks) else: print_list = self._list_all(subtasks) # exclude private tasks if not private: print_list = [t for t in print_list if not t.name.startswith('_')] # set template if template is None: max_name_len = 0 if print_list: max_name_len = max(len(t.name) for t in print_list) template = '{name:<' + str(max_name_len + 3) + '}' if not quiet: template += '{doc}' if status: template = '{status} ' + template template += '\n' # print list of tasks for task in sorted(print_list): self._print_task(template, task, status, list_deps, tasks) return 0 doit-0.30.3/doit/cmd_resetdep.py000066400000000000000000000055541305250115000165040ustar00rootroot00000000000000from .cmd_base import DoitCmdBase, check_tasks_exist from .cmd_base import subtasks_iter import os class ResetDep(DoitCmdBase): name = "reset-dep" doc_purpose = ("recompute and save the state of file dependencies without " "executing actions") doc_usage = "[TASK ...]" cmd_options = () doc_description = """ This command allows to recompute the informations on file dependencies (timestamp, md5sum, ... depending on the ``check_file_uptodate`` setting), and save this in the database, without executing the actions. The command run on all tasks by default, but it is possible to specify a list of tasks to work on. This is useful when the targets of your tasks already exist, and you want doit to consider your tasks as up-to-date. One use-case for this command is when you change the ``check_file_uptodate`` setting, which cause doit to consider all your tasks as not up-to-date. It is also useful if you start using doit while some of your data as already been computed, or when you add a file dependency to a task that has already run. 
""" def _execute(self, pos_args=None): filter_tasks = pos_args # dict of all tasks tasks = dict([(t.name, t) for t in self.task_list]) # select tasks that command will be applied to if filter_tasks: # list only tasks passed on command line check_tasks_exist(tasks, filter_tasks) # get task by name task_list = [] for name in filter_tasks: task = tasks[name] task_list.append(task) task_list.extend(subtasks_iter(tasks, task)) else: task_list = self.task_list write = self.outstream.write for task in task_list: # Get these now because dep_manager.get_status will remove the task # from the db if the checker changed. values = self.dep_manager.get_values(task.name) result = self.dep_manager.get_result(task.name) missing_deps = [dep for dep in task.file_dep if not os.path.exists(dep)] if len(missing_deps) > 0: write("failed {} (Dependent file '{}' does not " "exist.)\n".format(task.name, "', '".join(missing_deps))) continue res = self.dep_manager.get_status(task, tasks) # An 'up-to-date' status means that it is useless to recompute the # state: file deps and targets exists, the state has not changed, # there is nothing more to do. if res.status == 'up-to-date': write("skip {}\n".format(task.name)) continue task.values = values self.dep_manager.save_success(task, result_hash=result) write("processed {}\n".format(task.name)) self.dep_manager.close() doit-0.30.3/doit/cmd_run.py000066400000000000000000000164041305250115000154710ustar00rootroot00000000000000import sys import codecs from .exceptions import InvalidCommand from .plugin import PluginDict from .task import Task from .control import TaskControl from .runner import Runner, MRunner, MThreadRunner from .cmd_base import DoitCmdBase from . import reporter # verbosity opt_verbosity = { 'name':'verbosity', 'short':'v', 'long':'verbosity', 'type':int, 'default': None, 'help': """0 capture (do not print) stdout/stderr from task. 1 capture stdout only. 2 do not capture anything (print everything immediately). 
[default: 1]""" } # select output file opt_outfile = { 'name': 'outfile', 'short':'o', 'long': 'output-file', 'type': str, 'default': sys.stdout, 'help':"write output into file [default: stdout]" } # always execute task opt_always = { 'name': 'always', 'short': 'a', 'long': 'always-execute', 'type': bool, 'default': False, 'help': "always execute tasks even if up-to-date [default: %(default)s]", } # continue executing tasks even after a failure opt_continue = { 'name': 'continue', 'short': 'c', 'long': 'continue', 'inverse': 'no-continue', 'type': bool, 'default': False, 'help': ("continue executing tasks even after a failure " + "[default: %(default)s]"), } opt_single = { 'name': 'single', 'short': 's', 'long': 'single', 'type': bool, 'default': False, 'help': ("Execute only specified tasks ignoring their task_dep " + "[default: %(default)s]"), } opt_num_process = { 'name': 'num_process', 'short': 'n', 'long': 'process', 'type': int, 'default': 0, 'help': "number of subprocesses [default: %(default)s]" } # reporter opt_reporter = { 'name':'reporter', 'short':'r', 'long':'reporter', 'type':str, 'default': 'console', 'help': """Choose output reporter.\n[default: %(default)s]""" } opt_parallel_type = { 'name':'par_type', 'short':'P', 'long':'parallel-type', 'type':str, 'default': 'process', 'help': """Tasks can be executed in parallel in different ways: 'process': uses python multiprocessing module 'thread': uses threads [default: %(default)s] """ } # pdb post-mortem opt_pdb = { 'name':'pdb', 'short':'', 'long':'pdb', 'type': bool, 'default': None, 'help': """get into PDB (python debugger) post-mortem in case of unhandled exception""" } # use ".*" as default regex for delayed tasks without explicitly specified regex opt_auto_delayed_regex = { 'name': 'auto_delayed_regex', 'short': '', 'long': 'auto-delayed-regex', 'type': bool, 'default': False, 'help': """Uses the default regex ".*" for every delayed task loader for which no regex was explicitly defined""" } class 
Run(DoitCmdBase): doc_purpose = "run tasks" doc_usage = "[TASK/TARGET...]" doc_description = None execute_tasks = True cmd_options = (opt_always, opt_continue, opt_verbosity, opt_reporter, opt_outfile, opt_num_process, opt_parallel_type, opt_pdb, opt_single, opt_auto_delayed_regex) def __init__(self, **kwargs): super(Run, self).__init__(**kwargs) self.reporters = self.get_reporters() # dict def get_reporters(self): """return dict of all available reporters Also set CmdOption choices. """ # built-in reporters reporters = { 'console': reporter.ConsoleReporter, 'executed-only': reporter.ExecutedOnlyReporter, 'json': reporter.JsonReporter, 'zero': reporter.ZeroReporter, } # plugins plugins = PluginDict() plugins.add_plugins(self.config, 'REPORTER') reporters.update(plugins.to_dict()) # set choices for reporter cmdoption # sub-classes might not have this option if 'reporter' in self.cmdparser: choices = {k: v.desc for k,v in reporters.items()} self.cmdparser['reporter'].choices = choices return reporters def _execute(self, outfile, verbosity=None, always=False, continue_=False, reporter='console', num_process=0, par_type='process', single=False, auto_delayed_regex=False): """ @param reporter: (str) one of provided reporters or ... 
(class) user defined reporter class (can only be specified from DOIT_CONFIG - never from command line) (reporter instance) - only used in unittests """ # get tasks to be executed # self.control is saved on instance to be used by 'auto' command self.control = TaskControl(self.task_list, auto_delayed_regex=auto_delayed_regex) self.control.process(self.sel_tasks) if single: for task_name in self.sel_tasks: task = self.control.tasks[task_name] if task.has_subtask: for task_name in task.task_dep: sub_task = self.control.tasks[task_name] sub_task.task_dep = [] else: task.task_dep = [] # reporter if isinstance(reporter, str): reporter_cls = self.reporters[reporter] else: # user defined class reporter_cls = reporter # verbosity if verbosity is None: use_verbosity = Task.DEFAULT_VERBOSITY else: use_verbosity = verbosity show_out = use_verbosity < 2 # show on error report # outstream if isinstance(outfile, str): outstream = codecs.open(outfile, 'w', encoding='utf-8') else: # outfile is a file-like object (like StringIO or sys.stdout) outstream = outfile self.outstream = outstream # run try: # FIXME stderr will be shown twice in case of task error/failure if isinstance(reporter_cls, type): reporter_obj = reporter_cls(outstream, {'show_out': show_out, 'show_err': True}) else: # also accepts reporter instances reporter_obj = reporter_cls run_args = [self.dep_manager, reporter_obj, continue_, always, verbosity] if num_process == 0: RunnerClass = Runner else: if par_type == 'process': RunnerClass = MRunner if not MRunner.available(): RunnerClass = MThreadRunner sys.stderr.write( "WARNING: multiprocessing module not available, " + "running in parallel using threads.") elif par_type == 'thread': RunnerClass = MThreadRunner else: msg = "Invalid parallel type %s" raise InvalidCommand(msg % par_type) run_args.append(num_process) runner = RunnerClass(*run_args) return runner.run_all(self.control.task_dispatcher()) finally: if isinstance(outfile, str): outstream.close() 
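The branching at the end of `Run._execute` that chooses a runner class reduces to a small decision table. The sketch below mirrors that logic with plain strings (an assumption for illustration: the real code returns the `Runner`/`MRunner`/`MThreadRunner` classes from `doit.runner`, checks `MRunner.available()`, and raises `InvalidCommand` rather than `ValueError`):

```python
def pick_runner(num_process, par_type, multiprocessing_ok=True):
    """Sketch of the runner selection in Run._execute: serial runner for
    num_process == 0, otherwise a process- or thread-based parallel runner,
    falling back to threads when multiprocessing is unavailable."""
    if num_process == 0:
        return 'Runner'
    if par_type == 'process':
        return 'MRunner' if multiprocessing_ok else 'MThreadRunner'
    if par_type == 'thread':
        return 'MThreadRunner'
    raise ValueError("Invalid parallel type %s" % par_type)

print(pick_runner(0, 'process'))  # serial execution
print(pick_runner(4, 'process'))  # parallel via multiprocessing
```

Note that `par_type` is only consulted once parallel execution was requested with `-n`/`--process`.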
doit-0.30.3/doit/cmd_strace.py000066400000000000000000000113351305250115000161440ustar00rootroot00000000000000import sys import os import re from .exceptions import InvalidCommand from .action import CmdAction from .task import Task from .cmd_run import Run # filter to display only files from cwd opt_show_all = { 'name':'show_all', 'short':'a', 'long':'all', 'type': bool, 'default': False, 'help': "display all files (not only from within CWD path)", } opt_keep_trace = { 'name':'keep_trace', 'long':'keep', 'type': bool, 'default': False, 'help': "save strace command output into strace.txt", } class Strace(Run): doc_purpose = "use strace to list file_deps and targets" doc_usage = "TASK" doc_description = """ The output is a list of files prefixed with 'R' for open in read mode or 'W' for open in write mode. The files are listed in chronological order. This is a debugging feature with many limitations. * can strace only one task at a time * can only strace CmdAction * the process being traced itself might have some kind of cache, that means it might not write a target file if it exist * does not handle chdir So this is NOT 100% reliable, use with care! """ cmd_options = (opt_show_all, opt_keep_trace) TRACE_CMD = "strace -f -e trace=file %s 2>>%s " TRACE_OUT = 'strace.txt' def execute(self, params, args): """remove existing output file if any and do sanity checking""" if os.path.exists(self.TRACE_OUT): # pragma: no cover os.unlink(self.TRACE_OUT) if len(args) != 1: msg = ('doit strace failed, must select *one* task to strace.' 
'\nCheck `doit help strace`.')
            raise InvalidCommand(msg)
        result = Run.execute(self, params, args)
        if (not params['keep_trace']) and os.path.exists(self.TRACE_OUT):
            os.unlink(self.TRACE_OUT)
        return result

    def _execute(self, show_all):
        """1) wrap the original action with strace and save output in file
        2) add a second task that will generate the report from temp file
        """
        # find task to trace and wrap it
        selected = self.sel_tasks[0]
        for task in self.task_list:
            if task.name == selected:
                self.wrap_strace(task)
                break

        # add task to print report
        report_strace = Task(
            'strace_report',
            actions=[(find_deps, [self.outstream, self.TRACE_OUT, show_all])],
            verbosity=2,
            task_dep=[selected],
            uptodate=[False],
        )
        self.task_list.append(report_strace)
        self.sel_tasks.append(report_strace.name)

        # clear strace file
        return Run._execute(self, sys.stdout)

    @classmethod
    def wrap_strace(cls, task):
        """wrap task actions into strace command"""
        wrapped_actions = []
        for action in task.actions:
            if isinstance(action, CmdAction):
                cmd = cls.TRACE_CMD % (action._action, cls.TRACE_OUT)
                wrapped = CmdAction(cmd, task, save_out=action.save_out)
                wrapped_actions.append(wrapped)
            else:
                wrapped_actions.append(action)
        task._action_instances = wrapped_actions
        # task should be always executed
        task._extend_uptodate([False])


def find_deps(outstream, strace_out, show_all):
    """read file with strace output, return dict with deps, targets"""
    # sample strace line:
    # 7978  open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    # get "mode" file was open with, until ')' is closed; ignore rest of line
    #   .*\(                # ignore text until '('
    #   "(?P<file>[^"]*)"   # get "file" name inside "
    #   , (\[.*\])*         # ignore elements inside [] - used by execve
    #   (?P<mode>[^)]*)\)   # get mode opening file
    #   = [^-].*            # check syscall was successful
    regex = re.compile(r'.*\("(?P<file>[^"]*)",'
                       r' (\[.*\])*(?P<mode>[^)]*)\) = [^-].*')
    read = set()
    write = set()
    cwd = os.getcwd()
    if not os.path.exists(strace_out):
        return
    with open(strace_out) as text:
        for line in text:
            # ignore non file operation
            match =
regex.match(line) if not match: continue rel_name = match.group('file') name = os.path.abspath(rel_name) # ignore files out of cwd if not show_all: if not name.startswith(cwd): continue if 'WR' in match.group('mode'): if name not in write: write.add(name) outstream.write("W %s\n" % name) else: if name not in read: read.add(name) outstream.write("R %s\n" % name) doit-0.30.3/doit/cmdparse.py000066400000000000000000000264601305250115000156430ustar00rootroot00000000000000"""Parse command line options and execute it. Built on top of getopt. optparse can't handle sub-commands. """ import getopt import copy from collections import OrderedDict class DefaultUpdate(dict): """A dictionary that has an "update_defaults" method where only items with default values are updated. This is used when you have a dict that has multiple source of values (i.e. hardcoded, config file, command line). And values are updated beggining from the source with higher priority. A default value is added with the method set_default or add_defaults. 
""" def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) # set of keys that have a non-default value self._non_default_keys = set() def set_default(self, key, value): """set default value for given key""" dict.__setitem__(self, key, value) def add_defaults(self, source): """add default values from another dict @param source: (dict)""" for key, value in source.items(): if key not in self: self.set_default(key, value) def update_defaults(self, update_dict): """like dict.update but do not update items that have a non-default value""" for key, value in update_dict.items(): if key in self._non_default_keys: continue self[key] = value def __setitem__(self, key, value): """overwrite to keep track of _non_default_keys""" try: self._non_default_keys.add(key) # http://bugs.python.org/issue826897 except AttributeError: self._non_default_keys = set() self._non_default_keys.add(key) dict.__setitem__(self, key, value) class CmdParseError(Exception): """Error parsing options """ class CmdOption(object): """a command line option - name (string) : variable name - default (value from its type): default value - type (type): type of the variable. must be able to be initialized taking a single string parameter. if type is bool. option is just a flag. and if present its value is set to True. - short (string): argument short name - long (string): argument long name - inverse (string): argument long name to be the inverse of the default value (only used by boolean options) - choices(list - 2-tuple str): sequence of 2-tuple of choice name, choice description. 
- help (string): option description """ def __init__(self, opt_dict): # options must contain 'name' and 'default' value opt_dict = opt_dict.copy() for field in ('name', 'default',): if field not in opt_dict: msg = "CmdOption dict %r missing required property '%s'" raise CmdParseError(msg % (opt_dict, field)) self.name = opt_dict.pop('name') self.type = opt_dict.pop('type', str) self.set_default(opt_dict.pop('default')) self.short = opt_dict.pop('short', '') self.long = opt_dict.pop('long', '') self.inverse = opt_dict.pop('inverse', '') self.choices = dict(opt_dict.pop('choices', [])) self.help = opt_dict.pop('help', '') # TODO add some hint for tab-completion scripts # options can not contain any unrecognized field if opt_dict: msg = "CmdOption dict contains invalid property '%s'" raise CmdParseError(msg % list(opt_dict.keys())) def __repr__(self): tmpl = ("{0}({{'name':{1.name!r}, 'short':{1.short!r}," + "'long':{1.long!r} }})") return tmpl.format(self.__class__.__name__, self) def set_default(self, val): """set default value, value is already the expected type""" if self.type is list: self.default = copy.copy(val) else: self.default = val def validate_choice(self, given_value): """raise error is value is not a valid choice""" if given_value not in self.choices: msg = ("Error parsing parameter '{}'. 
" "Provided '{}' but available choices are: {}.") choices = ("'{}'".format(k) for k in self.choices.keys()) choices_str = ", ".join(choices) raise CmdParseError(msg.format(self.name, given_value, choices_str)) _boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True, '0': False, 'no': False, 'false': False, 'off': False} def str2boolean(self, str_val): """convert string to boolean""" try: return self._boolean_states[str_val.lower()] except: raise ValueError('Not a boolean: {}'.format(str_val)) def str2type(self, str_val): """convert string value to option type value""" try: # no coversion if value is not a string if not isinstance(str_val, str): val = str_val elif self.type is bool: val = self.str2boolean(str_val) elif self.type is list: parts = [p.strip() for p in str_val.split(',')] val = [p for p in parts if p] # remove empty strings else: val = self.type(str_val) except ValueError as exception: msg = "Error parsing parameter '{}' {}.\n{}\n" raise CmdParseError(msg.format(self.name, self.type, str(exception))) if self.choices: self.validate_choice(val) return val @staticmethod def _print_2_columns(col1, col2): """print using a 2-columns format """ column1_len = 24 column2_start = 28 left = (col1).ljust(column1_len) right = col2.replace('\n', '\n'+ column2_start * ' ') return " %s %s" % (left, right) def help_param(self): """return string of option's short and long name i.e.: -f ARG, --file=ARG """ # TODO replace 'ARG' with metavar (copy from optparse) opts_str = [] if self.short: if self.type is bool: opts_str.append('-%s' % self.short) else: opts_str.append('-%s ARG' % self.short) if self.long: if self.type is bool: opts_str.append('--%s' % self.long) else: opts_str.append('--%s=ARG' % self.long) return ', '.join(opts_str) def help_choices(self): """return string with help for option choices""" if not self.choices: return '' # if choice has a description display one choice per line... 
if any(self.choices.values()): items = [] for choice in sorted(self.choices): items.append("\n{}: {}".format(choice, self.choices[choice])) return "\nchoices:" + "".join(items) # ... otherwise display in a single line else: return "\nchoices: " + ", ".join(sorted(self.choices.keys())) def help_doc(self): """return list of string of option's help doc""" # ignore option that cant be modified on cmd line if not (self.short or self.long): return [] text = [] opt_str = self.help_param() # TODO It should always display option's default value opt_help = self.help % {'default': self.default} opt_choices = self.help_choices() text.append(self._print_2_columns(opt_str, opt_help + opt_choices)) # print bool inverse option if self.inverse: opt_str = '--%s' % self.inverse opt_help = 'opposite of --%s' % self.long text.append(self._print_2_columns(opt_str, opt_help)) return text class CmdParse(object): """Process string with command options @ivar options: (list - CmdOption) """ _type = "Command" def __init__(self, options): self._options = OrderedDict((o.name, o) for o in options) def __contains__(self, key): return key in self._options def __getitem__(self, key): return self._options[key] @property def options(self): """return list of options for backward compatibility""" return list(self._options.values()) def get_short(self): """return string with short options for getopt""" short_list = "" for opt in self._options.values(): if not opt.short: continue short_list += opt.short # ':' means option takes a value if opt.type is not bool: short_list += ':' return short_list def get_long(self): """return list with long options for getopt""" long_list = [] for opt in self._options.values(): long_name = opt.long if not long_name: continue # '=' means option takes a value if opt.type is not bool: long_name += '=' long_list.append(long_name) if opt.inverse: long_list.append(opt.inverse) return long_list def get_option(self, opt_str): """return tuple - CmdOption from matching opt_str. 
        or None - (bool) matched inverse
        """
        for opt in self._options.values():
            if opt_str in ('-' + opt.short, '--' + opt.long):
                return opt, False
            if opt_str == '--' + opt.inverse:
                return opt, True
        return None, None

    def overwrite_defaults(self, new_defaults):
        """overwrite self.options default values

        These values typically come from an INI file
        """
        for key, val in new_defaults.items():
            if key in self._options:
                opt = self._options[key]
                opt.set_default(opt.str2type(val))

    def parse(self, in_args):
        """parse arguments into options(params) and positional arguments

        @param in_args (list - string): typically sys.argv[1:]
        @return params, args
            params (dict): the actual values from the options,
                           where the key is the name of the option.
            pos_args (list - string): positional arguments
        """
        params = DefaultUpdate()
        # add default values
        for opt in self._options.values():
            params.set_default(opt.name, opt.default)

        # parse options using getopt
        try:
            opts, args = getopt.getopt(in_args, self.get_short(),
                                       self.get_long())
        except Exception as error:
            msg = "Error parsing %s: %s (parsing options: %s)"
            raise CmdParseError(msg % (self._type, str(error), in_args))

        # update params with values from command line
        for opt, val in opts:
            this, inverse = self.get_option(opt)
            if this.type is bool:
                params[this.name] = not inverse
            elif this.type is list:
                params[this.name].append(val)
            else:
                params[this.name] = this.str2type(val)

        return params, args


class TaskParse(CmdParse):
    """Process string with command options (for tasks)"""
    _type = "Task"


# file: doit-0.30.3/doit/compat.py

"""stuff dealing with incompatibilities between python versions"""

def get_platform_system():
    """return platform.system

    platform module has many regexp, so importing it is slow...
    import only if required
    """
    import platform
    return platform.system()


# file: doit-0.30.3/doit/control.py

"""Control tasks execution order"""

import fnmatch
from collections import deque
from collections import OrderedDict
import re

from .exceptions import InvalidTask, InvalidCommand, InvalidDodoFile
from .cmdparse import TaskParse, CmdOption
from .task import Task, DelayedLoaded
from .loader import generate_tasks


class RegexGroup(object):
    '''Helper to keep track of all delayed-tasks whose regexp target
    matches the target specified from the command line.
    '''
    def __init__(self, target, tasks):
        # target name specified in command line
        self.target = target
        # set of delayed-tasks names (string)
        self.tasks = tasks
        # keep track if the target was already found
        self.found = False


class TaskControl(object):
    """Manages tasks inter-relationship

    There are 3 phases:
      1) the constructor gets a list of tasks and does initialization
      2) 'process' the command line options for tasks are processed
      3) 'task_dispatcher' dispatches tasks to the runner

    Process dependencies and targets to find out the order tasks
    should be executed. Also apply filter to exclude tasks from
    execution. And parse task cmd line options.

    @ivar tasks: (dict) Key: task name ([taskgen.]name)
                        Value: L{Task} instance
    @ivar targets: (dict) Key: fileName
                          Value: task_name
    """

    def __init__(self, task_list, auto_delayed_regex=False):
        self.tasks = OrderedDict()
        self.targets = {}
        self.auto_delayed_regex = auto_delayed_regex

        # names of tasks in the order to be executed
        # this is the order as in the dodo file. the real execution
        # order might be different if the dependencies require so.
        self._def_order = []
        # list of tasks selected to be executed
        self.selected_tasks = None

        # sanity check and create tasks dict
        for task in task_list:
            # task must be a Task
            if not isinstance(task, Task):
                msg = "Task must be an instance of Task class. 
%s"
                raise InvalidTask(msg % (task.__class__))
            # task name must be unique
            if task.name in self.tasks:
                msg = "Task names must be unique. %s"
                raise InvalidDodoFile(msg % task.name)
            self.tasks[task.name] = task
            self._def_order.append(task.name)

        # expand wild-card task-dependencies
        for task in self.tasks.values():
            for pattern in task.wild_dep:
                task.task_dep.extend(self._get_wild_tasks(pattern))

        self._check_dep_names()
        self.set_implicit_deps(self.targets, task_list)

    def _check_dep_names(self):
        """check if user input a task_dep or setup_task that doesn't exist"""
        # check task-dependencies exist.
        for task in self.tasks.values():
            for dep in task.task_dep:
                if dep not in self.tasks:
                    msg = "%s. Task dependency '%s' does not exist."
                    raise InvalidTask(msg % (task.name, dep))

            for setup_task in task.setup_tasks:
                if setup_task not in self.tasks:
                    msg = "Task '%s': invalid setup task '%s'."
                    raise InvalidTask(msg % (task.name, setup_task))

    @staticmethod
    def set_implicit_deps(targets, task_list):
        """set/add task_dep based on file_dep on a target from another task

        @param targets: (dict) fileName -> task_name
        @param task_list: (list - Task) tasks with newly added file_dep
        """
        # 1) create a dictionary associating every target->task, where the
        # task builds that target.
        for task in task_list:
            for target in task.targets:
                if target in targets:
                    msg = ("Two different tasks can't have a common target. "
                           "'%s' is a target for %s and %s.")
                    raise InvalidTask(msg % (target, task.name,
                                             targets[target]))
                targets[target] = task.name

        # 2) now go through all dependencies and check if they are a target
        # from another task.
        # FIXME - when used with delayed tasks needs to check if
        # any new target matches any old file_dep.
for task in task_list: TaskControl.add_implicit_task_dep(targets, task, task.file_dep) @staticmethod def add_implicit_task_dep(targets, task, deps_list): """add implicit task_dep for `task` for newly added `file_dep` @param targets: (dict) fileName -> task_name @param task: (Task) task with newly added file_dep @param dep_list: (list - str): list of file_dep for task """ for dep in deps_list: if (dep in targets and targets[dep] not in task.task_dep): task.task_dep.append(targets[dep]) def _get_wild_tasks(self, pattern): """get list of tasks that match pattern""" wild_list = [] for t_name in self._def_order: if fnmatch.fnmatch(t_name, pattern): wild_list.append(t_name) return wild_list def _process_filter(self, task_selection): """process cmd line task options [task_name [-task_opt [opt_value]] ...] ... @param task_selection: list of strings with task names/params or target @return list of task names. Expanding glob and removed params """ filter_list = [] def add_filtered_task(seq, f_name): """add task to list `filter_list` and set task.options from params @return list - str: of elements not yet """ filter_list.append(f_name) # only tasks specified by name can contain parameters if f_name in self.tasks: # parse task_selection the_task = self.tasks[f_name] # remaining items are other tasks not positional options taskcmd = TaskParse([CmdOption(opt) for opt in the_task.params]) the_task.options, seq = taskcmd.parse(seq) # if task takes positional parameters set all as pos_arg_val if the_task.pos_arg is not None: the_task.pos_arg_val = seq seq = [] return seq # process... seq = task_selection[:] # process cmd_opts until nothing left while seq: f_name = seq.pop(0) # always start with a task/target name # select tasks by task-name pattern if '*' in f_name: for task_name in self._get_wild_tasks(f_name): add_filtered_task((), task_name) else: seq = add_filtered_task(seq, f_name) return filter_list def _filter_tasks(self, task_selection): """Select tasks specified by filter. 
@param task_selection: list of strings with task names/params or target @return (list) of string. where elements are task name. """ selected_task = [] filter_list = self._process_filter(task_selection) for filter_ in filter_list: # by task name if filter_ in self.tasks: selected_task.append(filter_) continue # by target if filter_ in self.targets: selected_task.append(self.targets[filter_]) continue # if can not find name check if it is a sub-task of a delayed basename = filter_.split(':', 1)[0] if basename in self.tasks: loader = self.tasks[basename].loader if not loader: raise InvalidCommand(not_found=filter_) loader.basename = basename self.tasks[filter_] = Task(filter_, None, loader=loader) selected_task.append(filter_) continue # check if target matches any regex delayed_matched = [] # list of Task for task in list(self.tasks.values()): if not task.loader: continue if task.name.startswith('_regex_target'): continue if task.loader.target_regex: if re.match(task.loader.target_regex, filter_): delayed_matched.append(task) elif self.auto_delayed_regex: delayed_matched.append(task) delayed_matched_names = [t.name for t in delayed_matched] regex_group = RegexGroup(filter_, set(delayed_matched_names)) # create extra tasks to load delayed tasks matched by regex for task in delayed_matched: loader = task.loader loader.basename = task.name name = '{}_{}:{}'.format('_regex_target', filter_, task.name) loader.regex_groups[name] = regex_group self.tasks[name] = Task(name, None, loader=loader, file_dep=[filter_]) selected_task.append(name) if not delayed_matched: # not found raise InvalidCommand(not_found=filter_) return selected_task def process(self, task_selection): """ @param task_selection: list of strings with task names/params @return (list - string) each element is the name of a task """ # execute only tasks in the filter in the order specified by filter if task_selection is not None: self.selected_tasks = self._filter_tasks(task_selection) else: # if no filter is 
defined execute all tasks # in the order they were defined. self.selected_tasks = self._def_order def task_dispatcher(self): """return a TaskDispatcher generator """ assert self.selected_tasks is not None, \ "must call 'process' before this" return TaskDispatcher(self.tasks, self.targets, self.selected_tasks) class ExecNode(object): """Each task will have an instace of this This used to keep track of waiting events and the generator for dep nodes @ivar run_status (str): contains the result of Dependency.get_status().status modified by runner, value can be: - None: not processed yet - run: task is selected to be executed (it might be running or waiting for setup) - ignore: task wont be executed (user forced deselect) - up-to-date: task wont be executed (no need) - done: task finished its execution """ def __init__(self, task, parent): self.task = task # list of dependencies not processed by _add_task yet self.task_dep = task.task_dep[:] self.calc_dep = task.calc_dep.copy() # ancestors are used to detect cyclic references. 
# it does not contain a list of tasks that depends on this node # for that check the attribute waiting_me self.ancestors = [] if parent: self.ancestors.extend(parent.ancestors) self.ancestors.append(task.name) # Wait for a task to be selected to its execution # checking if it is up-to-date self.wait_select = False # Wait for a task to finish its execution self.wait_run = set() # task names self.wait_run_calc = set() # task names self.waiting_me = set() # ExecNode self.run_status = None # all ancestors that failed self.bad_deps = [] self.ignored_deps = [] # generator from TaskDispatcher._add_task self.generator = None def reset_task(self, task, generator): """reset task & generator after task is created by its own `loader`""" self.task = task self.task_dep = task.task_dep[:] self.calc_dep = task.calc_dep.copy() self.generator = generator def parent_status(self, parent_node): if parent_node.run_status == 'failure': self.bad_deps.append(parent_node) elif parent_node.run_status == 'ignore': self.ignored_deps.append(parent_node) def __repr__(self): return "%s(%s)" % (self.__class__.__name__, self.task.name) def step(self): """get node's next step""" try: return next(self.generator) except StopIteration: return None def no_none(decorated): """decorator for a generator to discard/filter-out None values""" def _func(*args, **kwargs): """wrap generator""" for value in decorated(*args, **kwargs): if value is not None: yield value return _func class TaskDispatcher(object): """Dispatch another task to be selected/executed, mostly handle with MP Note that a dispatched task might not be ready to be executed. 
""" def __init__(self, tasks, targets, selected_tasks): self.tasks = tasks self.targets = targets self.nodes = {} # key task-name, value: ExecNode # queues self.waiting = set() # of ExecNode self.ready = deque() # of ExecNode self.generator = self._dispatcher_generator(selected_tasks) def _gen_node(self, parent, task_name): """return ExecNode for task_name if not created yet""" node = self.nodes.get(task_name, None) # first time, create node if node is None: node = ExecNode(self.tasks[task_name], parent) node.generator = self._add_task(node) self.nodes[task_name] = node return node # detect cyclic/recursive dependencies if parent and task_name in parent.ancestors: msg = "Cyclic/recursive dependencies for task %s: [%s]" cycle = " -> ".join(parent.ancestors + [task_name]) raise InvalidDodoFile(msg % (task_name, cycle)) def _node_add_wait_run(self, node, task_list, calc=False): """updates node.wait_run @param node (ExecNode) @param task_list (list - str) tasks that node should wait for @param calc (bool) task_list is for calc_dep """ # wait_for: contains tasks that `node` needs to wait for and # were not executed yet. wait_for = set() for name in task_list: dep_node = self.nodes[name] if (not dep_node) or dep_node.run_status in (None, 'run'): wait_for.add(name) else: # if dep task was already executed: # a) set parent status node.parent_status(dep_node) # b) update dependencies from calc_dep results if calc: self._process_calc_dep_results(dep_node, node) # update ExecNode setting parent/dependent relationship for name in wait_for: self.nodes[name].waiting_me.add(node) if calc: node.wait_run_calc.update(wait_for) else: node.wait_run.update(wait_for) @no_none def _add_task(self, node): """@return a generator that produces: - ExecNode for task dependencies - 'wait' to wait for an event (i.e. 
a dep task run) - Task when ready to be dispatched to runner (run or be selected) - None values are of no interest and are filtered out by the decorator no_none note that after a 'wait' is sent it is the reponsability of the caller to ensure the current ExecNode cleared all its waiting before calling `next()` again on this generator """ this_task = node.task # skip this task if task belongs to a regex_group that already # executed the task used to build the given target if this_task.loader: regex_group = this_task.loader.regex_groups.get(this_task.name, None) if regex_group and regex_group.found: return # add calc_dep & task_dep until all processed # calc_dep may add more deps so need to loop until nothing left while True: calc_dep_list = list(node.calc_dep) node.calc_dep.clear() task_dep_list = node.task_dep[:] node.task_dep = [] for calc_dep in calc_dep_list: yield self._gen_node(node, calc_dep) self._node_add_wait_run(node, calc_dep_list, calc=True) # add task_dep for task_dep in task_dep_list: yield self._gen_node(node, task_dep) self._node_add_wait_run(node, task_dep_list) # do not wait until all possible task_dep are created if (node.calc_dep or node.task_dep): continue # pragma: no cover # coverage cant catch this #198 elif (node.wait_run or node.wait_run_calc): yield 'wait' else: break # generate tasks from a DelayedLoader if this_task.loader: ref = this_task.loader.creator to_load = this_task.loader.basename or this_task.name this_loader = self.tasks[to_load].loader if this_loader and not this_loader.created: new_tasks = generate_tasks(to_load, ref(), ref.__doc__) TaskControl.set_implicit_deps(self.targets, new_tasks) for nt in new_tasks: if not nt.loader: nt.loader = DelayedLoaded self.tasks[nt.name] = nt # check itself for implicit dep (used by regex_target) TaskControl.add_implicit_task_dep( self.targets, this_task, this_task.file_dep) # remove file_dep since generated tasks are not required # to really create the target (support multiple matches) if 
regex_group: this_task.file_dep = {} if regex_group.target in self.targets: regex_group.found = True else: regex_group.tasks.remove(this_task.loader.basename) if len(regex_group.tasks) == 0: # In case no task is left, we cannot find a task # generating this target. Print an error message! raise InvalidCommand(not_found=regex_group.target) # mark this loader to not be executed again this_task.loader.created = True this_task.loader = DelayedLoaded # this task was placeholder to execute the loader # now it needs to be re-processed with the real task yield "reset generator" assert False, "This generator can not be used again" # add itself yield this_task # tasks that contain setup-tasks need to be yielded twice if this_task.setup_tasks: # run_status None means task is waiting for other tasks # in order to check if up-to-date. so it needs to wait # before scheduling its setup-tasks. if node.run_status is None: node.wait_select = True yield "wait" # if this task should run, so schedule setup-tasks before itself if node.run_status == 'run': for setup_task in this_task.setup_tasks: yield self._gen_node(node, setup_task) self._node_add_wait_run(node, this_task.setup_tasks) if node.wait_run: yield 'wait' # re-send this task after setup_tasks are sent yield this_task def _get_next_node(self, ready, tasks_to_run): """get ExecNode from (in order): .1 ready .2 tasks_to_run (list in reverse order) """ if ready: return ready.popleft() # get task group from tasks_to_run while tasks_to_run: task_name = tasks_to_run.pop() node = self._gen_node(None, task_name) if node: return node def _update_waiting(self, processed): """updates 'ready' and 'waiting' queues after processed @param processed (ExecNode) or None """ # no task processed, just ignore if processed is None: return node = processed # if node was waiting select must only receive select event if node.wait_select: self.ready.append(node) self.waiting.remove(node) node.wait_select = False # status == run means this was not just 
select completed if node.run_status == 'run': return for waiting_node in node.waiting_me: waiting_node.parent_status(node) # is_ready indicates if node.generator can be invoked again task_name = node.task.name # node wait_run will be ready if there are nothing left to wait if task_name in waiting_node.wait_run: waiting_node.wait_run.remove(task_name) is_ready = not (waiting_node.wait_run or waiting_node.wait_run_calc) # node wait_run_calc else: assert task_name in waiting_node.wait_run_calc waiting_node.wait_run_calc.remove(task_name) # calc_dep might add new deps that can be run without # waiting for the completion of the remaining deps is_ready = True self._process_calc_dep_results(node, waiting_node) # this node can be further processed if is_ready and (waiting_node in self.waiting): self.ready.append(waiting_node) self.waiting.remove(waiting_node) def _process_calc_dep_results(self, node, waiting_node): # refresh this task dependencies with values got from calc_dep values = node.task.values len_task_deps = len(waiting_node.task.task_dep) old_calc_dep = waiting_node.task.calc_dep.copy() waiting_node.task.update_deps(values) TaskControl.add_implicit_task_dep( self.targets, waiting_node.task, values.get('file_dep', [])) # update node's list of non-processed dependencies new_task_dep = waiting_node.task.task_dep[len_task_deps:] waiting_node.task_dep.extend(new_task_dep) new_calc_dep = waiting_node.task.calc_dep - old_calc_dep waiting_node.calc_dep.update(new_calc_dep) def _dispatcher_generator(self, selected_tasks): """return generator dispatching tasks""" # each selected task will create a tree (from dependencies) of # tasks to be processed tasks_to_run = list(reversed(selected_tasks)) node = None # current active ExecNode while True: # get current node if not node: node = self._get_next_node(self.ready, tasks_to_run) if not node: if self.waiting: # all tasks are waiting, hold on processed = (yield "hold on") self._update_waiting(processed) continue # we are done! 
return
            # get next step from current node
            next_step = node.step()
            # got None, nothing left for this generator
            if next_step is None:
                node = None
                continue
            # got a task, send ExecNode to runner
            if isinstance(next_step, Task):
                processed = (yield self.nodes[next_step.name])
                self._update_waiting(processed)
            # got new ExecNode, add to ready_queue
            elif isinstance(next_step, ExecNode):
                self.ready.append(next_step)
            # node just performed a delayed creation of tasks, restart
            elif next_step == "reset generator":
                node.reset_task(self.tasks[node.task.name],
                                self._add_task(node))
            # got 'wait', add ExecNode to waiting queue
            else:
                assert next_step == "wait"
                self.waiting.add(node)
                node = None


# file: doit-0.30.3/doit/dependency.py

"""Manage (save/check) task dependency-on-files data."""

import os
import hashlib
import subprocess
import inspect
from collections import defaultdict
from dbm import dumb
import dbm as ddbm
# uncomment imports below to run tests on all dbm backends...
#import dumbdbm as ddbm
#import dbm as ddbm
#import gdbm as ddbm
# note: to check which DBM backend is being used (in py2):
# >>> anydbm._defaultmod
import json


class DatabaseException(Exception):
    """Exception class for whatever backend exception"""
    pass


def get_md5(input_data):
    """return md5 from string or unicode"""
    byte_data = input_data.encode("utf-8")
    return hashlib.md5(byte_data).hexdigest()


def get_file_md5(path):
    """Calculate the md5 sum from file content.
@param path: (string) file path @return: (string) md5 """ with open(path, 'rb') as file_data: md5 = hashlib.md5() block_size = 128 * md5.block_size while True: data = file_data.read(block_size) if not data: break md5.update(data) return md5.hexdigest() class JsonDB(object): """Backend using a single text file with JSON content""" def __init__(self, name): """Open/create a DB file""" self.name = name if not os.path.exists(self.name): self._db = {} else: self._db = self._load() def _load(self): """load db content from file""" db_file = open(self.name, 'r') try: try: return json.load(db_file) except ValueError as error: # file contains corrupted json data msg = (error.args[0] + "\nInvalid JSON data in %s\n" % os.path.abspath(self.name) + "To fix this problem, you can just remove the " + "corrupted file, a new one will be generated.\n") error.args = (msg,) raise DatabaseException(msg) finally: db_file.close() def dump(self): """save DB content in file""" try: db_file = open(self.name, 'w') json.dump(self._db, db_file) finally: db_file.close() def set(self, task_id, dependency, value): """Store value in the DB.""" if task_id not in self._db: self._db[task_id] = {} self._db[task_id][dependency] = value def get(self, task_id, dependency): """Get value stored in the DB. @return: (string) or (None) if entry not found """ if task_id in self._db: return self._db[task_id].get(dependency, None) def in_(self, task_id): """@return bool if task_id is in DB""" return task_id in self._db def remove(self, task_id): """remove saved dependecies from DB for taskId""" if task_id in self._db: del self._db[task_id] def remove_all(self): """remove saved dependecies from DB for all tasks""" self._db = {} class DbmDB(object): """Backend using a DBM file with individual values encoded in JSON On initialization all items are read from DBM file and loaded on _dbm. During execution whenever an item is read ('get' method) the json value is cached on _db. 
If a item is modified _db is update and the id is added to the 'dirty' set. Only on 'dump' all dirty items values are encoded in json into _dbm and the DBM file is saved. @ivar name: (str) file name/path @ivar _dbm: (dbm) items with json encoded values @ivar _db: (dict) items with python-dict as value @ivar dirty: (set) id of modified tasks """ DBM_CONTENT_ERROR_MSG = 'db type could not be determined' def __init__(self, name): """Open/create a DB file""" self.name = name try: self._dbm = ddbm.open(self.name, 'c') except ddbm.error as exception: message = str(exception) if message == self.DBM_CONTENT_ERROR_MSG: # When a corrupted/old format database is found # suggest the user to just remove the file new_message = ( 'Dependencies file in %(filename)s seems to use ' 'an old format or is corrupted.\n' 'To fix the issue you can just remove the database file(s) ' 'and a new one will be generated.' % {'filename': repr(self.name)}) raise DatabaseException(new_message) else: # Re-raise any other exceptions raise DatabaseException(message) self._db = {} self.dirty = set() def dump(self): """save/close DBM file""" for task_id in self.dirty: self._dbm[task_id] = json.dumps(self._db[task_id]) self._dbm.close() def set(self, task_id, dependency, value): """Store value in the DB.""" if task_id not in self._db: self._db[task_id] = {} self._db[task_id][dependency] = value self.dirty.add(task_id) def _in_dbm(self, key): """ should be just:: return key in self._dbm for get()/set() key is convert to bytes but not for 'in' """ return key.encode('utf-8') in self._dbm def get(self, task_id, dependency): """Get value stored in the DB. 
        @return: (string) or (None) if entry not found
        """
        # optimization, just try to get it without checking it exists
        if task_id in self._db:
            return self._db[task_id].get(dependency, None)
        else:
            try:
                task_data = self._dbm[task_id]
            except KeyError:
                return
            self._db[task_id] = json.loads(task_data.decode('utf-8'))
            return self._db[task_id].get(dependency, None)

    def in_(self, task_id):
        """@return bool if task_id is in DB"""
        return self._in_dbm(task_id) or task_id in self.dirty

    def remove(self, task_id):
        """remove saved dependencies from DB for taskId"""
        if task_id in self._db:
            del self._db[task_id]
        if self._in_dbm(task_id):
            del self._dbm[task_id]
        if task_id in self.dirty:
            self.dirty.remove(task_id)

    def remove_all(self):
        """remove saved dependencies from DB for all tasks"""
        self._db = {}
        # dumb dbm always opens file in update mode
        if isinstance(self._dbm, dumb._Database): # pragma: no cover
            self._dbm._index = {}
        self._dbm.close()
        # gdbm can not be running on 2 instances on same thread
        # see https://bitbucket.org/schettino72/doit/issue/16/
        del self._dbm
        self._dbm = ddbm.open(self.name, 'n')
        self.dirty = set()


class SqliteDB(object):
    """ sqlite3 json backend """
    def __init__(self, name):
        self.name = name
        self._conn = self._sqlite3(self.name)
        self._cache = {}
        self._dirty = set()

    @staticmethod
    def _sqlite3(name):
        """Open/create a sqlite3 DB file"""
        # Import sqlite here so it's only imported when required
        import sqlite3

        def dict_factory(cursor, row):
            """convert row to dict"""
            data = {}
            for idx, col in enumerate(cursor.description):
                data[col[0]] = row[idx]
            return data

        def converter(data):
            return json.loads(data.decode('utf-8'))

        sqlite3.register_adapter(list, json.dumps)
        sqlite3.register_adapter(dict, json.dumps)
        sqlite3.register_converter("json", converter)
        conn = sqlite3.connect(
            name,
            detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES,
            isolation_level='DEFERRED')
        conn.row_factory = dict_factory
        sqlscript = """
            create table if not exists doit (
                task_id text not null primary key,
                task_data json
            );"""
        try:
            conn.execute(sqlscript)
        except sqlite3.DatabaseError as exception:
            new_message = (
                'Dependencies file in %(filename)s seems to use '
                'a bad format or is corrupted.\n'
                'To fix the issue you can just remove the database file(s) '
                'and a new one will be generated.\n'
                'Original error: %(msg)s'
                % {'filename': repr(name), 'msg': str(exception)})
            raise DatabaseException(new_message)
        return conn

    def get(self, task_id, dependency):
        """Get value stored in the DB.

        @return: (string) or (None) if entry not found
        """
        if task_id in self._cache:
            return self._cache[task_id].get(dependency, None)
        else:
            data = self._cache[task_id] = self._get_task_data(task_id)
            return data.get(dependency, None)

    def _get_task_data(self, task_id):
        data = self._conn.execute('select task_data from doit where task_id=?',
                                  (task_id,)).fetchone()
        return data['task_data'] if data else {}

    def set(self, task_id, dependency, value):
        """Store value in the DB."""
        if task_id not in self._cache:
            self._cache[task_id] = {}
        self._cache[task_id][dependency] = value
        self._dirty.add(task_id)

    def in_(self, task_id):
        if task_id in self._cache:
            return True
        if self._conn.execute('select task_id from doit where task_id=?',
                              (task_id,)).fetchone():
            return True
        return False

    def dump(self):
        """save/close sqlite3 DB file"""
        for task_id in self._dirty:
            self._conn.execute('insert or replace into doit values (?,?)',
                               (task_id, json.dumps(self._cache[task_id])))
        self._conn.commit()
        self._conn.close()
        self._dirty = set()

    def remove(self, task_id):
        """remove saved dependencies from DB for taskId"""
        if task_id in self._cache:
            del self._cache[task_id]
        if task_id in self._dirty:
            self._dirty.remove(task_id)
        self._conn.execute('delete from doit where task_id=?', (task_id,))

    def remove_all(self):
        """remove saved dependencies from DB for all tasks"""
        self._conn.execute('delete from doit')
        self._cache = {}
        self._dirty = set()


class FileChangedChecker(object):
    """Base checker for dependencies, must be inherited."""

    def check_modified(self, file_path, file_stat, state):
        """Check if file in file_path is modified from previous "state".

        @param file_path (string): file path
        @param file_stat: result of os.stat() of file_path
        @param state: state that was previously saved with ``get_state()``
        @returns (bool): True if dep is modified
        """
        raise NotImplementedError()

    def get_state(self, dep, current_state):
        """Compute the state of a task after it has been successfully executed.

        @param dep (str): path of the dependency file.
        @param current_state (tuple): the current state, saved from a previous
            execution of the task (None if the task was never run).
        @returns: the new state. Return None if the state is unchanged.

        The parameter `current_state` is passed to allow speed optimization,
        see MD5Checker.get_state().
        """
        raise NotImplementedError()


class MD5Checker(FileChangedChecker):
    """MD5 checker, uses the md5sum.

    This is the default checker used by doit.

    As an optimization the check uses (timestamp, file-size, md5).
    If the timestamp is the same it considers that the file has the same
    content. If file size is different its content certainly is modified.
    Finally the md5 is used for a different timestamp with the same size.
    """

    def check_modified(self, file_path, file_stat, state):
        """Check if file in file_path is modified from previous "state".
        """
        timestamp, size, file_md5 = state
        # 1 - if timestamp is not modified file is the same
        if file_stat.st_mtime == timestamp:
            return False
        # 2 - if size is different file is modified
        if file_stat.st_size != size:
            return True
        # 3 - check md5
        return file_md5 != get_file_md5(file_path)

    def get_state(self, dep, current_state):
        timestamp = os.path.getmtime(dep)
        # time optimization.
if dep is already saved with current timestamp, skip calculating md5
        if current_state and current_state[0] == timestamp:
            return
        size = os.path.getsize(dep)
        md5 = get_file_md5(dep)
        return timestamp, size, md5


class TimestampChecker(FileChangedChecker):
    """Checker that uses only the timestamp."""

    def check_modified(self, file_path, file_stat, state):
        return file_stat.st_mtime != state

    def get_state(self, dep, current_state):
        """@returns float: mtime for file `dep`"""
        return os.path.getmtime(dep)


# name of available checker classes
CHECKERS = {'md5': MD5Checker,
            'timestamp': TimestampChecker}


class DependencyStatus(object):
    """Result object for Dependency.get_status.

    @ivar status: (str) one of "run", "up-to-date" or "error"
    """
    def __init__(self, get_log):
        self.get_log = get_log
        self.status = 'up-to-date'
        # save reason task is not up-to-date
        self.reasons = defaultdict(list)
        self.error_reason = None

    def add_reason(self, reason, arg, status='run'):
        """set state and append reason for not being up-to-date
        :return boolean: processing should be interrupted
        """
        self.status = status
        if self.get_log:
            self.reasons[reason].append(arg)
        return not self.get_log

    def set_reason(self, reason, arg):
        """set state and reason for not being up-to-date
        :return boolean: processing should be interrupted
        """
        self.status = 'run'
        if self.get_log:
            self.reasons[reason] = arg
        return not self.get_log

    def get_error_message(self):
        '''return str with error message'''
        return self.error_reason


class Dependency(object):
    """Manage tasks dependencies.

    Each dependency is saved in "db". The "db" can have json or dbm
    format where there is a dictionary for every task. Each task has a
    dictionary where the key is a dependency (abs file path), and the
    value is the dependency signature.
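The two-method checker protocol above (`check_modified` / `get_state`) is what custom checkers plug into. Below is a minimal standalone sketch of that protocol, a checker that looks only at file size; `SizeChecker` is a hypothetical name for illustration, not part of doit:

```python
import os


class SizeChecker:
    """Illustrative checker: a file counts as modified only when its
    size changes. Same two-method interface as FileChangedChecker."""

    def check_modified(self, file_path, file_stat, state):
        # `state` is whatever get_state() returned on the previous run
        return file_stat.st_size != state

    def get_state(self, dep, current_state):
        # new state to be saved for the next run
        return os.path.getsize(dep)
```

A checker like this trades accuracy for speed: unlike MD5Checker it never reads file contents, so edits that keep the size unchanged go undetected.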
Apart from dependencies other values are also saved on the task dictionary * 'result:', 'task:', 'ignore:' * user(task) defined values are defined in '_values_:' sub-dict @ivar name: (string) filepath of the DB file @ivar _closed: (bool) DB was flushed to file """ def __init__(self, db_class, backend_name, checker_cls=MD5Checker): self._closed = False self.checker = checker_cls() self.db_class = db_class self.backend = db_class(backend_name) self._set = self.backend.set self._get = self.backend.get self.remove = self.backend.remove self.remove_all = self.backend.remove_all self._in = self.backend.in_ self.name = self.backend.name def close(self): """Write DB in file""" if not self._closed: self.backend.dump() self._closed = True ####### task specific def save_success(self, task, result_hash=None): """save info after a task is successfuly executed :param result_hash: (str) explicitly set result_hash """ # save task values self._set(task.name, "_values_:", task.values) # save task result md5 if result_hash is not None: self._set(task.name, "result:", result_hash) elif task.result: if isinstance(task.result, dict): self._set(task.name, "result:", task.result) else: self._set(task.name, "result:", get_md5(task.result)) # file-dep self._set(task.name, 'checker:', self.checker.__class__.__name__) for dep in task.file_dep: state = self.checker.get_state(dep, self._get(task.name, dep)) if state is not None: self._set(task.name, dep, state) # save list of file_deps self._set(task.name, 'deps:', tuple(task.file_dep)) def get_values(self, task_name): """get all saved values from a task @return dict """ values = self._get(task_name, '_values_:') return values or {} def get_value(self, task_id, key_name): """get saved value from task @param task_id (str) @param key_name (str): key result dict of the value """ if not self._in(task_id): # FIXME do not use generic exception raise Exception("taskid '%s' has no computed value!" 
% task_id) values = self.get_values(task_id) if key_name not in values: msg = "Invalid arg name. Task '%s' has no value for '%s'." raise Exception(msg % (task_id, key_name)) return values[key_name] def get_result(self, task_name): """get the result saved from a task @return dict or md5sum """ return self._get(task_name, 'result:') def remove_success(self, task): """remove saved info from task""" self.remove(task.name) def ignore(self, task): """mark task to be ignored""" self._set(task.name, 'ignore:', '1') def status_is_ignore(self, task): """check if task is marked to be ignored""" return self._get(task.name, "ignore:") def get_status(self, task, tasks_dict, get_log=False): """Check if task is up to date. set task.dep_changed If the checker class changed since the previous run, the task is deleted, to be sure that its state is not re-used. @param task: (Task) @param tasks_dict: (dict: Task) passed to objects used on uptodate @param get_log: (bool) if True, adds all reasons to the return object why this file will be rebuild. @return: (DependencyStatus) a status object with possible status values up-to-date, run or error task.dep_changed (list-strings): file-dependencies that are not up-to-date if task not up-to-date because of a target, returned value will contain all file-dependencies reagrdless they are up-to-date or not. """ result = DependencyStatus(get_log) task.dep_changed = [] # check uptodate bool/callables uptodate_result_list = [] for utd, utd_args, utd_kwargs in task.uptodate: # if parameter is a callable if hasattr(utd, '__call__'): # FIXME control verbosity, check error messages # 1) setup object with global info all tasks if isinstance(utd, UptodateCalculator): utd.setup(self, tasks_dict) # 2) add magic positional args for `task` and `values` # if present. 
spec_args = list(inspect.signature(utd).parameters.keys()) magic_args = [] for i, name in enumerate(spec_args): if i == 0 and name == 'task': magic_args.append(task) elif i == 1 and name == 'values': magic_args.append(self.get_values(task.name)) args = magic_args + utd_args # 3) call it and get result uptodate_result = utd(*args, **utd_kwargs) elif isinstance(utd, str): # TODO py3.3 has subprocess.DEVNULL with open(os.devnull, 'wb') as null: uptodate_result = subprocess.call( utd, shell=True, stderr=null, stdout=null) == 0 # parameter is a value else: uptodate_result = utd # None means uptodate was not really calculated and should be # just ignored if uptodate_result is None: continue uptodate_result_list.append(uptodate_result) if not uptodate_result: result.add_reason('uptodate_false', (utd, utd_args, utd_kwargs)) # any uptodate check is false if not get_log and result.status == 'run': return result # no dependencies means it is never up to date. if not (task.file_dep or uptodate_result_list): if result.set_reason('has_no_dependencies', True): return result # if target file is not there, task is not up to date for targ in task.targets: if not os.path.exists(targ): task.dep_changed = list(task.file_dep) if result.add_reason('missing_target', targ): return result # check for modified file_dep checker previous = self._get(task.name, 'checker:') checker_name = self.checker.__class__.__name__ if previous and previous != checker_name: task.dep_changed = list(task.file_dep) # remove all saved values otherwise they might be re-used by # some optmization on MD5Checker.get_state() self.remove(task.name) if result.set_reason('checker_changed', (previous, checker_name)): return result # check for modified file_dep previous = self._get(task.name, 'deps:') previous_set = set(previous) if previous else None if previous_set and previous_set != task.file_dep: if get_log: added_files = sorted(list(task.file_dep - previous_set)) removed_files = sorted(list(previous_set - 
task.file_dep)) result.set_reason('added_file_dep', added_files) result.set_reason('removed_file_dep', removed_files) result.status = 'run' # list of file_dep that changed check_modified = self.checker.check_modified changed = [] for dep in task.file_dep: state = self._get(task.name, dep) try: file_stat = os.stat(dep) except OSError: error_msg = "Dependent file '{}' does not exist.".format(dep) result.error_reason = error_msg.format(dep) if result.add_reason('missing_file_dep', dep, 'error'): return result else: if state is None or check_modified(dep, file_stat, state): changed.append(dep) task.dep_changed = changed if len(changed) > 0: result.set_reason('changed_file_dep', changed) return result ############# class UptodateCalculator(object): """Base class for 'uptodate' that need access to all tasks """ def __init__(self): self.get_val = None # Dependency._get self.tasks_dict = None # dict with all tasks def setup(self, dep_manager, tasks_dict): """@param""" self.get_val = dep_manager._get self.tasks_dict = tasks_dict doit-0.30.3/doit/doit_cmd.py000066400000000000000000000133051305250115000156210ustar00rootroot00000000000000"""doit CLI (command line interface)""" import os import sys import traceback from collections import defaultdict from configparser import ConfigParser from .version import VERSION from .plugin import PluginDict from .exceptions import InvalidDodoFile, InvalidCommand, InvalidTask from .cmdparse import CmdParseError from .cmd_help import Help from .cmd_run import Run from .cmd_clean import Clean from .cmd_list import List from .cmd_info import Info from .cmd_forget import Forget from .cmd_ignore import Ignore from .cmd_auto import Auto from .cmd_dumpdb import DumpDB from .cmd_strace import Strace from .cmd_completion import TabCompletion from .cmd_resetdep import ResetDep # used to save variable values passed from command line _CMDLINE_VARS = None def reset_vars(): global _CMDLINE_VARS _CMDLINE_VARS = {} def get_var(name, default=None): return 
_CMDLINE_VARS.get(name, default) def set_var(name, value): _CMDLINE_VARS[name] = value class DoitMain(object): # core doit commands BIN_NAME = sys.argv[0].split('/')[-1] DOIT_CMDS = (Help, Run, List, Info, Clean, Forget, Ignore, Auto, DumpDB, Strace, TabCompletion, ResetDep) def __init__(self, task_loader=None, config_filenames='doit.cfg', extra_config=None): self.task_loader = task_loader # combine config option from INI files and API self.config = defaultdict(dict) if extra_config: for section, items in extra_config.items(): self.config[section].update(items) ini_config = self.load_config_ini(config_filenames) for section in ini_config.sections(): self.config[section].update(ini_config[section].items()) @staticmethod def load_config_ini(filenames): """read config from INI files :param files: str or list of str. Like ConfigParser.read() param filenames """ cfg_parser = ConfigParser(allow_no_value=True, delimiters=('=',)) cfg_parser.optionxform = str # preserve case of option names cfg_parser.read(filenames) return cfg_parser @staticmethod def print_version(): """print doit version (includes path location)""" print(".".join([str(i) for i in VERSION])) print("lib @", os.path.dirname(os.path.abspath(__file__))) def get_cmds(self): """get all sub-commands :return dict: name - Command class """ sub_cmds = PluginDict() # core doit commands for cmd_cls in self.DOIT_CMDS: sub_cmds[cmd_cls.get_name()] = cmd_cls # plugin commands sub_cmds.add_plugins(self.config, 'COMMAND') return sub_cmds def process_args(self, cmd_args): """process cmd line set "global" variables/parameters return list of args without processed variables """ # get cmdline variables from args reset_vars() args_no_vars = [] for arg in cmd_args: if (arg[0] != '-') and ('=' in arg): name, value = arg.split('=', 1) set_var(name, value) else: args_no_vars.append(arg) return args_no_vars def get_commands(self): # pragma: no cover '''Notice for application subclassing DoitMain with old API''' msg = ('ERROR: You 
are using doit version {}, it is too new! '
               'This application requires version <= 0.27.\n')
        sys.stderr.write(msg.format('.'.join(str(v) for v in VERSION)))
        sys.exit(3)

    def run(self, cmd_args):
        """entry point for all commands

        :param cmd_args: list of string arguments from command line
        :param extra_config: dict of extra argument values (by argument name).
                             This parameter is only used by explicit API call.

        return codes:
          0: tasks executed successfully
          1: one or more tasks failed
          2: error while executing a task
          3: error before task execution starts.
             In this case the Reporter is not used,
             so be aware if you expect a different formatting (like JSON)
             from the Reporter.
        """
        # get list of available commands
        sub_cmds = self.get_cmds()

        # special parameters that don't run anything
        if cmd_args:
            if cmd_args[0] == "--version":
                self.print_version()
                return 0
            if cmd_args[0] == "--help":
                Help.print_usage(sub_cmds.to_dict())
                return 0

        # get "global vars" from cmd-line
        args = self.process_args(cmd_args)

        # get specified sub-command or use default='run'
        if len(args) == 0 or args[0] not in sub_cmds:
            specified_run = False
            cmd_name = 'run'
        else:
            specified_run = True
            cmd_name = args.pop(0)

        # execute command
        command = sub_cmds.get_plugin(cmd_name)(
            task_loader=self.task_loader,
            cmds=sub_cmds,
            config=self.config,)
        try:
            return command.parse_execute(args)
        # don't show traceback for user errors.
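The command-line variable handling in `DoitMain.process_args` (any `name=value` argument not starting with `-` is pulled out as a variable) can be shown with a standalone sketch; `split_cmd_vars` is a hypothetical re-implementation for illustration, not doit's API:

```python
def split_cmd_vars(cmd_args):
    """Minimal sketch of DoitMain.process_args' variable handling:
    any arg containing '=' that does not start with '-' is taken as a
    `name=value` command-line variable; everything else is kept."""
    cmd_vars = {}
    remaining = []
    for arg in cmd_args:
        if arg[0] != '-' and '=' in arg:
            name, value = arg.split('=', 1)  # split only on first '='
            cmd_vars[name] = value
        else:
            remaining.append(arg)
    return cmd_vars, remaining
```

Note that option-style arguments such as `--verbosity=2` are left alone because of the leading `-`, so only bare `name=value` pairs become doit variables.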
except (CmdParseError, InvalidDodoFile, InvalidCommand, InvalidTask) as err: if isinstance(err, InvalidCommand): err.cmd_used = cmd_name if specified_run else None err.bin_name = self.BIN_NAME sys.stderr.write("ERROR: %s\n" % str(err)) return 3 except Exception: if command.pdb: # pragma: no cover import pdb pdb.post_mortem(sys.exc_info()[2]) sys.stderr.write(traceback.format_exc()) return 3 doit-0.30.3/doit/exceptions.py000066400000000000000000000055441305250115000162260ustar00rootroot00000000000000"""Handle exceptions generated from 'user' code""" import sys import traceback class InvalidCommand(Exception): """Invalid command line argument.""" def __init__(self, *args, **kwargs): self.not_found = kwargs.pop('not_found', None) super(InvalidCommand, self).__init__(*args, **kwargs) self.cmd_used = None self.bin_name = 'doit' # default but might be overwriten def __str__(self): if self.not_found is None: return super(InvalidCommand, self).__str__() if self.cmd_used: msg_task_not_found = ( 'command `{cmd_used}` invalid parameter: "{not_found}".' + ' Must be a task, or a target.\n' + 'Type "{bin_name} list" to see available tasks') return msg_task_not_found.format(**self.__dict__) else: msg_cmd_task_not_found = ( 'Invalid parameter: "{not_found}".' + ' Must be a command, task, or a target.\n' + 'Type "{bin_name} help" to see available commands.\n' + 'Type "{bin_name} list" to see available tasks.\n') return msg_cmd_task_not_found.format(**self.__dict__) class InvalidDodoFile(Exception): """Invalid dodo file""" pass class InvalidTask(Exception): """Invalid task instance. 
User error on specifying the task."""
    pass


class CatchedException(object):
    """Used to save info from caught exceptions.
    The traceback from the original exception is saved.
    """
    def __init__(self, msg, exception=None):
        self.message = msg
        self.traceback = ''
        if isinstance(exception, CatchedException):
            self.traceback = exception.traceback
        elif exception is not None:
            # TODO remove doit-code part from traceback
            self.traceback = traceback.format_exception(
                exception.__class__, exception, sys.exc_info()[2])

    def get_msg(self):
        """return full exception description (includes traceback)"""
        return "%s\n%s" % (self.message, "".join(self.traceback))

    def get_name(self):
        """get Exception name"""
        return self.__class__.__name__

    def __repr__(self):
        return "(<%s> %s)" % (self.get_name(), self.message)

    def __str__(self):
        return "%s\n%s" % (self.get_name(), self.get_msg())


class TaskFailed(CatchedException):
    """Task execution was not successful."""
    pass


class UnmetDependency(TaskFailed):
    """Task was not executed because a dependent task failed or is ignored"""
    pass


class TaskError(CatchedException):
    """Error while trying to execute task."""
    pass


class SetupError(CatchedException):
    """Error while trying to execute setup object"""
    pass


class DependencyError(CatchedException):
    """Error while trying to check if task is up-to-date"""
    pass

doit-0.30.3/doit/filewatch.py

"""Watch for file-system modifications, used by the cmd_auto module"""

import os.path

from .compat import get_platform_system


class FileModifyWatcher(object):
    """Use inotify to watch the file-system for file modifications

    Usage:
    1) subclass the method handle_event, action to be performed
    2) create an object passing a list of files to be watched
    3) call the loop method
    """
    supported_platforms = ('Darwin', 'Linux')

    def __init__(self, path_list):
        """@param path_list (list-str): files to be watched"""
        self.file_list = set()
        self.watch_dirs = set()  # all
dirs to be watched self.notify_dirs = set() # dirs that generate notification whatever file for filename in path_list: path = os.path.abspath(filename) if os.path.isfile(path): self.file_list.add(path) self.watch_dirs.add(os.path.dirname(path)) else: self.notify_dirs.add(path) self.watch_dirs.add(path) self.platform = get_platform_system() if self.platform not in self.supported_platforms: msg = "Unsupported platform '%s'\n" % self.platform msg += ("'auto' command is supported only on %s" % (self.supported_platforms,)) raise Exception(msg) def _handle(self, event): """calls platform specific handler""" if self.platform == 'Darwin': # pragma: no cover filename = event.name elif self.platform == 'Linux': filename = event.pathname if (filename in self.file_list or os.path.dirname(filename) in self.notify_dirs): self.handle_event(event) def handle_event(self, event): """this should be sub-classed """ raise NotImplementedError def _loop_darwin(self): # pragma: no cover """loop implementation for darwin platform""" from fsevents import Observer #pylint: disable=F0401 from fsevents import Stream #pylint: disable=F0401 from fsevents import IN_MODIFY #pylint: disable=F0401 observer = Observer() handler = self._handle def fsevent_callback(event): if event.mask == IN_MODIFY: handler(event) for watch_this in self.watch_dirs: stream = Stream(fsevent_callback, watch_this, file_events=True) observer.schedule(stream) observer.daemon = True observer.run() def _loop_linux(self, loop_callback): """loop implementation for linux platform""" import pyinotify handler = self._handle class EventHandler(pyinotify.ProcessEvent): def process_default(self, event): handler(event) watch_manager = pyinotify.WatchManager() event_handler = EventHandler() notifier = pyinotify.Notifier(watch_manager, event_handler) mask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO for watch_this in self.watch_dirs: watch_manager.add_watch(watch_this, mask) notifier.loop(loop_callback) def loop(self, 
loop_callback=None):
        """Infinite loop watching for file modifications

        @loop_callback: used to stop loop on unittests
        """
        if self.platform == 'Darwin':  # pragma: no cover
            self._loop_darwin()
        elif self.platform == 'Linux':
            self._loop_linux(loop_callback)

doit-0.30.3/doit/loader.py

"""Load a dodo file (a python module) and convert it to 'tasks'"""

import os
import sys
import inspect
import importlib
from collections import OrderedDict

from .exceptions import InvalidTask, InvalidCommand, InvalidDodoFile
from .task import DelayedLoader, Task, dict_to_task


# Directory path from where doit was executed.
# Set by loader, to be used on dodo.py by users.
initial_workdir = None

# TASK_STRING: (string) prefix used to identify python functions
# that are task generators in a dodo file.
TASK_STRING = "task_"


def flat_generator(gen, gen_doc=''):
    """return only values from generators
    if any generator yields another generator it is recursively called
    """
    for item in gen:
        if inspect.isgenerator(item):
            item_doc = item.gi_code.co_consts[0]
            for value, value_doc in flat_generator(item, item_doc):
                yield value, value_doc
        else:
            yield item, gen_doc


def get_module(dodo_file, cwd=None, seek_parent=False):
    """
    Find the python module defining tasks, called the "dodo" file.
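The recursive unwrapping done by `flat_generator` can be shown with a simplified standalone sketch (the docstring pairing is dropped here; `flatten` and `outer` are hypothetical names for illustration):

```python
import inspect


def flatten(gen):
    """Simplified sketch of loader.flat_generator: recursively yield
    plain values out of arbitrarily nested generators."""
    for item in gen:
        if inspect.isgenerator(item):
            yield from flatten(item)  # unwrap nested generator
        else:
            yield item


def outer():
    yield 1
    yield (x for x in (2, 3))  # a nested generator is unwrapped in place
    yield 4
```

This is what lets a task-creator yield other generators and still have every task dict reach the loader as a flat stream.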
@param dodo_file(str): path to file containing the tasks @param cwd(str): path to be used cwd, if None use path from dodo_file @param seek_parent(bool): search for dodo_file in parent paths if not found @return (module) dodo module """ global initial_workdir initial_workdir = os.getcwd() def exist_or_raise(path): """raise exception if file on given path doesnt exist""" if not os.path.exists(path): msg = ("Could not find dodo file '%s'.\n" + "Please use '-f' to specify file name.\n") raise InvalidDodoFile(msg % path) # get absolute path name if os.path.isabs(dodo_file): dodo_path = dodo_file exist_or_raise(dodo_path) else: if not seek_parent: dodo_path = os.path.abspath(dodo_file) exist_or_raise(dodo_path) else: # try to find file in any folder above current_dir = initial_workdir dodo_path = os.path.join(current_dir, dodo_file) file_name = os.path.basename(dodo_path) parent = os.path.dirname(dodo_path) while not os.path.exists(dodo_path): new_parent = os.path.dirname(parent) if new_parent == parent: # reached root path exist_or_raise(dodo_file) parent = new_parent dodo_path = os.path.join(parent, file_name) ## load module dodo file and set environment base_path, file_name = os.path.split(dodo_path) # make sure dodo path is on sys.path so we can import it sys.path.insert(0, base_path) if cwd is None: # by default cwd is same as dodo.py base path full_cwd = base_path else: # insert specified cwd into sys.path full_cwd = os.path.abspath(cwd) if not os.path.isdir(full_cwd): msg = "Specified 'dir' path must be a directory.\nGot '%s'(%s)." 
raise InvalidCommand(msg % (cwd, full_cwd)) sys.path.insert(0, full_cwd) # file specified on dodo file are relative to cwd os.chdir(full_cwd) # get module containing the tasks return importlib.import_module(os.path.splitext(file_name)[0]) def create_after(executed=None, target_regex=None, creates=None): """Annotate a task-creator function with delayed loader info""" def decorated(func): func.doit_create_after = DelayedLoader( func, executed=executed, target_regex=target_regex, creates=creates ) return func return decorated def load_tasks(namespace, command_names=(), allow_delayed=False): """Find task-creators and create tasks @param namespace: (dict) containing the task creators, it might contain other stuff @param command_names: (list - str) blacklist for task names @param load_all: (bool) if True ignore doit_crate_after['executed'] `load_all == False` is used by the runner to delay the creation of tasks until a dependent task is executed. This is only used by the `run` command, other commands should always load all tasks since it wont execute any task. 
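In a dodo.py, `create_after` is applied to a task-creator function. The sketch below reproduces the decorator mechanics standalone with a minimal `DelayedLoader` stand-in (only the fields used here, not doit's real class), to show which attribute gets attached:

```python
class DelayedLoader:
    """Minimal stand-in for doit.task.DelayedLoader (illustration only)."""
    def __init__(self, creator, executed=None, target_regex=None,
                 creates=None):
        self.creator = creator
        self.executed = executed
        self.target_regex = target_regex
        self.creates = creates


def create_after(executed=None, target_regex=None, creates=None):
    """Same shape as loader.create_after: attach delayed-loading info
    to a task-creator function via the `doit_create_after` attribute."""
    def decorated(func):
        func.doit_create_after = DelayedLoader(
            func, executed=executed, target_regex=target_regex,
            creates=creates)
        return func
    return decorated


@create_after(executed='gen_files', creates=['process'])
def task_process():
    """created only after `gen_files` has executed"""
    yield {'name': 'x', 'actions': None}
```

`load_tasks` later inspects this attribute: with `creates` set it registers placeholder tasks by name; otherwise the creator itself is deferred.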
@return task_list (list) of Tasks in the order they were defined on the file """ funcs = _get_task_creators(namespace, command_names) # sort by the order functions were defined (line number) # TODO: this ordering doesnt make sense when generators come # from different modules funcs.sort(key=lambda obj: obj[2]) task_list = [] def _process_gen(): task_list.extend(generate_tasks(name, ref(), ref.__doc__)) def _add_delayed(tname): task_list.append(Task(tname, None, loader=delayed, doc=delayed.creator.__doc__)) for name, ref, _ in funcs: delayed = getattr(ref, 'doit_create_after', None) if not delayed: # not a delayed task, just run creator _process_gen() elif delayed.creates: # delayed with explicit task basename for tname in delayed.creates: _add_delayed(tname) elif allow_delayed: # delayed no explicit name, cmd run _add_delayed(name) else: # delayed no explicit name, cmd list (run creator) _process_gen() return task_list def _get_task_creators(namespace, command_names): """get functions defined in the `namespace` and select the task-creators A task-creator is a function that: - name starts with the string TASK_STRING - has the attribute `create_doit_tasks` @return (list - func) task-creators """ funcs = [] prefix_len = len(TASK_STRING) # get all functions that are task-creators for name, ref in namespace.items(): # function is a task creator because of its name if ((inspect.isfunction(ref) or inspect.ismethod(ref)) and name.startswith(TASK_STRING)): # remove TASK_STRING prefix from name task_name = name[prefix_len:] # object is a task creator because it contains the special method elif hasattr(ref, 'create_doit_tasks'): ref = ref.create_doit_tasks # If create_doit_tasks is a method, it should be called only # if it is bounded to an object. # This avoids calling it for the class definition. 
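The name-based discovery rule in `_get_task_creators` (functions whose name starts with `TASK_STRING`, with the prefix stripped) can be sketched standalone; `find_creators` below is a hypothetical helper that omits the `create_doit_tasks` attribute path:

```python
import inspect

TASK_STRING = "task_"


def find_creators(namespace):
    """Sketch of the name-based part of _get_task_creators: keep
    functions named `task_*` and strip the prefix to get the task name."""
    found = []
    for name, ref in namespace.items():
        if inspect.isfunction(ref) and name.startswith(TASK_STRING):
            found.append((name[len(TASK_STRING):], ref))
    return found


def task_compile():
    pass


def helper():
    pass
```

Only `task_compile` qualifies here; `helper` is ignored because its name lacks the prefix.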
if inspect.signature(ref).parameters: continue task_name = name # ignore functions that are not a task creator else: # pragma: no cover # coverage can't get "else: continue" continue # tasks can't have the same name of a commands if task_name in command_names: msg = ("Task can't be called '%s' because this is a command name."+ " Please choose another name.") raise InvalidDodoFile(msg % task_name) # get line number where function is defined line = inspect.getsourcelines(ref)[1] # add to list task generator functions funcs.append((task_name, ref, line)) return funcs def load_doit_config(dodo_module): """ @param dodo_module (dict) dict with module members """ doit_config = dodo_module.get('DOIT_CONFIG', {}) if not isinstance(doit_config, dict): msg = ("DOIT_CONFIG must be a dict. got:'%s'%s") raise InvalidDodoFile(msg % (repr(doit_config), type(doit_config))) return doit_config def _generate_task_from_return(func_name, task_dict, gen_doc): """generate a single task from a dict return'ed by a task generator""" if 'name' in task_dict: raise InvalidTask("Task '%s'. Only subtasks use field name." 
% func_name) task_dict['name'] = task_dict.pop('basename', func_name) # Use task generator docstring # if no doc present in task dict if not 'doc' in task_dict: task_dict['doc'] = gen_doc return dict_to_task(task_dict) def _generate_task_from_yield(tasks, func_name, task_dict, gen_doc): """generate a single task from a dict yield'ed by task generator @param tasks: dictionary with created tasks @return None: the created task is added to 'tasks' dict """ # check valid input if not isinstance(task_dict, dict): raise InvalidTask("Task '%s' must yield dictionaries" % func_name) msg_dup = "Task generation '%s' has duplicated definition of '%s'" basename = task_dict.pop('basename', None) # if has 'name' this is a sub-task if 'name' in task_dict: basename = basename or func_name # if subname is None attributes from group task if task_dict['name'] is None: task_dict['name'] = basename task_dict['actions'] = None group_task = dict_to_task(task_dict) group_task.has_subtask = True tasks[basename] = group_task return # name is '.' full_name = "%s:%s"% (basename, task_dict['name']) if full_name in tasks: raise InvalidTask(msg_dup % (func_name, full_name)) task_dict['name'] = full_name sub_task = dict_to_task(task_dict) sub_task.is_subtask = True # get/create task group group_task = tasks.get(basename) if group_task: if not group_task.has_subtask: raise InvalidTask(msg_dup % (func_name, basename)) else: group_task = Task(basename, None, doc=gen_doc, has_subtask=True) tasks[basename] = group_task group_task.task_dep.append(sub_task.name) tasks[sub_task.name] = sub_task # NOT a sub-task else: if not basename: raise InvalidTask( "Task '%s' must contain field 'name' or 'basename'. 
%s"% (func_name, task_dict)) if basename in tasks: raise InvalidTask(msg_dup % (func_name, basename)) task_dict['name'] = basename # Use task generator docstring if no doc present in task dict if not 'doc' in task_dict: task_dict['doc'] = gen_doc tasks[basename] = dict_to_task(task_dict) def generate_tasks(func_name, gen_result, gen_doc=None): """Create tasks from a task generator result. @param func_name: (string) name of taskgen function @param gen_result: value returned by a task generator function it can be a dict or generator (generating dicts) @param gen_doc: (string/None) docstring from the task generator function @return: (list - Task) """ # a task instance, just return it without any processing if isinstance(gen_result, Task): return (gen_result,) # task described as a dictionary if isinstance(gen_result, dict): return [_generate_task_from_return(func_name, gen_result, gen_doc)] # a generator if inspect.isgenerator(gen_result): tasks = OrderedDict() # task_name: task # the generator return subtasks as dictionaries for task_dict, x_doc in flat_generator(gen_result, gen_doc): if isinstance(task_dict, Task): tasks[task_dict.name] = task_dict else: _generate_task_from_yield(tasks, func_name, task_dict, x_doc) if tasks: return list(tasks.values()) else: # special case task_generator did not generate any task # create an empty group task return [Task(func_name, None, doc=gen_doc, has_subtask=True)] if gen_result is None: return () raise InvalidTask( "Task '%s'. Must return a dictionary or generator. Got %s" % (func_name, type(gen_result))) doit-0.30.3/doit/plugin.py000066400000000000000000000055521305250115000153420ustar00rootroot00000000000000import importlib class PluginEntry(object): """A Plugin entry point The entry-point is not loaded/imported on creation. Use the method `get()` to import the module and get the attribute. 
""" class Sentinel(object): pass # indicate the entry-point object is not loaded yet NOT_LOADED = Sentinel() def __init__(self, category, name, location): """ :param category str: plugin category name :param name str: plugin name (as used by doit) :param location str: python object location as : """ self.obj = self.NOT_LOADED self.category = category self.name = name self.location = location def __repr__(self): return "PluginEntry('{}', '{}', '{}')".format( self.category, self.name, self.location) def get(self): """return obj, get from cache or load""" if self.obj is self.NOT_LOADED: self.obj = self.load() return self.obj def load(self): """load/import reference to obj from named module/obj""" module_name, obj_name = self.location.split(':') try: module = importlib.import_module(module_name) except ImportError: raise Exception('Plugin {} module `{}` not found.'.format( self.category, module_name)) try: obj = getattr(module, obj_name) except AttributeError: raise Exception('Plugin {}:{} module `{}` has no {}.'.format( self.category, self.name, module_name, obj_name)) return obj class PluginDict(dict): """A dict where item values *might* be a PluginEntry""" def add_plugins(self, cfg_parser, section): """read all items from a ConfigParser section containing plugins""" # plugins from INI file if section in cfg_parser: for name, location in cfg_parser[section].items(): self[name] = PluginEntry(section, name, location) # plugins from pkg_resources try: import pkg_resources group = "doit.{}".format(section) for point in pkg_resources.iter_entry_points(group=group): name = point.name location = "{}:{}".format(point.module_name, point.attrs[0]) self[name] = PluginEntry(section, name, location) except ImportError: # pragma: no cover pass # ignore, if setuptools is not installed def get_plugin(self, key): """load and return a single plugin""" val = self[key] if isinstance(val, PluginEntry): val.name = key # overwrite obj name attribute return val.get() else: return val def 
to_dict(self): """return a standard dict with all plugins loaded""" return {k: self.get_plugin(k) for k in self.keys()} doit-0.30.3/doit/reporter.py000066400000000000000000000211641305250115000157030ustar00rootroot00000000000000"""Reports doit execution status/results""" import sys import time import datetime import json from io import StringIO class ConsoleReporter(object): """Default reporter. print results on console/terminal (stdout/stderr) @ivar show_out (bool): include captured stdout on failure report @ivar show_err (bool): include captured stderr on failure report """ # short description, used by the help system desc = 'console output' def __init__(self, outstream, options): # save non-succesful result information (include task errors) self.failures = [] self.runtime_errors = [] self.show_out = options.get('show_out', True) self.show_err = options.get('show_err', True) self.outstream = outstream def write(self, text): self.outstream.write(text) def initialize(self, tasks): """called just after tasks have benn loaded before execution starts""" pass def get_status(self, task): """called when task is selected (check if up-to-date)""" pass def execute_task(self, task): """called when excution starts""" # ignore tasks that do not define actions # ignore private/hidden tasks (tasks that start with an underscore) if task.actions and (task.name[0] != '_'): self.write('. %s\n' % task.title()) def add_failure(self, task, exception): """called when excution finishes with a failure""" self.failures.append({'task': task, 'exception':exception}) def add_success(self, task): """called when excution finishes successfuly""" pass def skip_uptodate(self, task): """skipped up-to-date task""" if task.name[0] != '_': self.write("-- %s\n" % task.title()) def skip_ignore(self, task): """skipped ignored task""" self.write("!! 
%s\n" % task.title()) def cleanup_error(self, exception): """error during cleanup""" sys.stderr.write(exception.get_msg()) def runtime_error(self, msg): """error from doit (not from a task execution)""" # saved so they are displayed after task failures messages self.runtime_errors.append(msg) def teardown_task(self, task): """called when starts the execution of teardown action""" pass def complete_run(self): """called when finshed running all tasks""" # if test fails print output from failed task for result in self.failures: self.write("#"*40 + "\n") msg = '%s - taskid:%s\n' % (result['exception'].get_name(), result['task'].name) self.write(msg) self.write(result['exception'].get_msg()) self.write("\n") task = result['task'] if self.show_out: out = "".join([a.out for a in task.actions if a.out]) self.write("%s\n" % out) if self.show_err: err = "".join([a.err for a in task.actions if a.err]) self.write("%s\n" % err) if self.runtime_errors: self.write("#"*40 + "\n") self.write("Execution aborted.\n") self.write("\n".join(self.runtime_errors)) self.write("\n") class ExecutedOnlyReporter(ConsoleReporter): """No output for skipped (up-to-date) and group tasks Produces zero output unless a task is executed """ desc = 'console, no output for skipped (up-to-date) and group tasks' def skip_uptodate(self, task): """skipped up-to-date task""" pass def skip_ignore(self, task): """skipped ignored task""" pass class ZeroReporter(ConsoleReporter): """Report only internal errors from doit""" desc = 'report only inetrnal errors from doit' def _just_pass(self, *args): """over-write base to do nothing""" pass get_status = execute_task = add_failure = add_success \ = skip_uptodate = skip_ignore = teardown_task = complete_run \ = _just_pass def runtime_error(self, msg): sys.stderr.write(msg) class TaskResult(object): """result object used by JsonReporter""" # FIXME what about returned value from python-actions ? 
    def __init__(self, task):
        self.task = task
        self.result = None       # fail, success, up-to-date, ignore
        self.out = None          # stdout from task
        self.err = None          # stderr from task
        self.error = None        # error from doit (exception traceback)
        self.started = None      # datetime when task execution started
        self.elapsed = None      # time (in secs) taken to execute task
        self._started_on = None  # timestamp
        self._finished_on = None # timestamp

    def start(self):
        """called when task starts its execution"""
        self._started_on = time.time()

    def set_result(self, result, error=None):
        """called when task finishes its execution"""
        self._finished_on = time.time()
        self.result = result
        line_sep = "\n<------------------------------------------------>\n"
        self.out = line_sep.join([a.out for a in self.task.actions if a.out])
        self.err = line_sep.join([a.err for a in self.task.actions if a.err])
        self.error = error

    def to_dict(self):
        """convert result data to dictionary"""
        if self._started_on is not None:
            started = datetime.datetime.utcfromtimestamp(self._started_on)
            self.started = str(started)
            self.elapsed = self._finished_on - self._started_on
        return {'name': self.task.name,
                'result': self.result,
                'out': self.out,
                'err': self.err,
                'error': self.error,
                'started': self.started,
                'elapsed': self.elapsed}


class JsonReporter(object):
    """output results in JSON format

    - out (str)
    - err (str)
    - tasks (list - dict):
         - name (str)
         - result (str)
         - out (str)
         - err (str)
         - error (str)
         - started (str)
         - elapsed (float)
    """
    desc = 'output in JSON format'

    def __init__(self, outstream, options=None): #pylint: disable=W0613
        # options parameter is not used
        # json result is sent to stdout when doit finishes running
        self.t_results = {}
        # when using the json reporter, output can not contain any other
        # output than the json data. so anything that is sent to stdout/err
        # needs to be captured.
        self._old_out = sys.stdout
        sys.stdout = StringIO()
        self._old_err = sys.stderr
        sys.stderr = StringIO()
        self.outstream = outstream
        # runtime and cleanup errors
        self.errors = []

    def get_status(self, task):
        """called when task is selected (check if up-to-date)"""
        self.t_results[task.name] = TaskResult(task)

    def execute_task(self, task):
        """called when execution starts"""
        self.t_results[task.name].start()

    def add_failure(self, task, exception):
        """called when execution finishes with a failure"""
        self.t_results[task.name].set_result('fail', exception.get_msg())

    def add_success(self, task):
        """called when execution finishes successfully"""
        self.t_results[task.name].set_result('success')

    def skip_uptodate(self, task):
        """skipped up-to-date task"""
        self.t_results[task.name].set_result('up-to-date')

    def skip_ignore(self, task):
        """skipped ignored task"""
        self.t_results[task.name].set_result('ignore')

    def cleanup_error(self, exception):
        """error during cleanup"""
        self.errors.append(exception.get_msg())

    def runtime_error(self, msg):
        """error from doit (not from a task execution)"""
        self.errors.append(msg)

    def teardown_task(self, task):
        """called when the execution of a teardown action starts"""
        pass

    def complete_run(self):
        """called when finished running all tasks"""
        # restore stdout
        log_out = sys.stdout.getvalue()
        sys.stdout = self._old_out
        log_err = sys.stderr.getvalue()
        sys.stderr = self._old_err

        # add errors together with stderr output
        if self.errors:
            log_err += "\n".join(self.errors)

        task_result_list = [tr.to_dict() for tr in self.t_results.values()]
        json_data = {'tasks': task_result_list,
                     'out': log_out,
                     'err': log_err}
        # indent not available on simplejson 1.3 (debian etch)
        # json.dump(json_data, sys.stdout, indent=4)
        json.dump(json_data, self.outstream)


# ---- doit-0.30.3/doit/runner.py ----

"""Task runner"""

import sys
from multiprocessing import Process, Queue as MQueue
from threading import Thread
import pickle
import queue

import cloudpickle

from .exceptions import InvalidTask, CatchedException
from .exceptions import TaskFailed, SetupError, DependencyError, UnmetDependency
from .task import DelayedLoaded


# execution result.
SUCCESS = 0
FAILURE = 1
ERROR = 2


class Runner(object):
    """Task runner

    run_all()
      run_tasks():
        for each task:
            select_task()
            execute_task()
            process_task_result()
      finish()
    """
    def __init__(self, dep_manager, reporter,
                 continue_=False, always_execute=False, verbosity=0):
        """
        @param dep_manager: DependencyBase
        @param reporter: reporter object to be used
        @param continue_: (bool) execute all tasks even after a task failure
        @param always_execute: (bool) execute even if up-to-date or ignored
        @param verbosity: (int) 0,1,2 see Task.execute
        """
        self.dep_manager = dep_manager
        self.reporter = reporter
        self.continue_ = continue_
        self.always_execute = always_execute
        self.verbosity = verbosity

        self.teardown_list = []      # list of tasks to be torn down
        self.final_result = SUCCESS  # until something fails
        self._stop_running = False

    def _handle_task_error(self, node, catched_excp):
        """handle all task failures/errors

        called whenever there is an error before executing a task or
        its execution is not successful.
        """
        assert isinstance(catched_excp, CatchedException)
        node.run_status = "failure"
        self.dep_manager.remove_success(node.task)
        self.reporter.add_failure(node.task, catched_excp)
        # only return FAILURE if no errors happened.
        if isinstance(catched_excp, TaskFailed) and self.final_result != ERROR:
            self.final_result = FAILURE
        else:
            self.final_result = ERROR
        if not self.continue_:
            self._stop_running = True

    def _get_task_args(self, task, tasks_dict):
        """get values from other tasks"""
        task.init_options()

        def get_value(task_id, key_name):
            """get single value or dict from task's saved values"""
            if key_name is None:
                return self.dep_manager.get_values(task_id)
            return self.dep_manager.get_value(task_id, key_name)

        # selected just need to get values from other tasks
        for arg, value in task.getargs.items():
            task_id, key_name = value

            if tasks_dict[task_id].has_subtask:
                # if a group task, pass values from all sub-tasks
                arg_value = {}
                base_len = len(task_id) + 1  # length of base name string
                for sub_id in tasks_dict[task_id].task_dep:
                    name = sub_id[base_len:]
                    arg_value[name] = get_value(sub_id, key_name)
            else:
                arg_value = get_value(task_id, key_name)
            task.options[arg] = arg_value

    def select_task(self, node, tasks_dict):
        """Returns bool, task should be executed
         * side-effect: set task.options

        Tasks should be executed if they are not up-to-date.

        Tasks that contain setup-tasks must be selected twice,
        so it gives a chance for dependency tasks to be executed after
        checking it is not up-to-date.
        """
        task = node.task

        # if run_status is not None, it was already calculated
        if node.run_status is None:
            self.reporter.get_status(task)

            # check if task should be ignored (user controlled)
            if node.ignored_deps or self.dep_manager.status_is_ignore(task):
                node.run_status = 'ignore'
                self.reporter.skip_ignore(task)
                return False

            # check task_deps
            if node.bad_deps:
                bad_str = " ".join(n.task.name for n in node.bad_deps)
                self._handle_task_error(node, UnmetDependency(bad_str))
                return False

            # check if task is up-to-date
            res = self.dep_manager.get_status(task, tasks_dict)
            if res.status == 'error':
                msg = "ERROR: Task '{}' checking dependencies: {}".format(
                    task.name, res.get_error_message())
                self._handle_task_error(node, DependencyError(msg))
                return False

            # set node.run_status
            if self.always_execute:
                node.run_status = 'run'
            else:
                node.run_status = res.status

            # if task is up-to-date skip it
            if node.run_status == 'up-to-date':
                self.reporter.skip_uptodate(task)
                task.values = self.dep_manager.get_values(task.name)
                return False

            if task.setup_tasks:
                # dont execute now, execute setup first...
                return False
        else:
            # sanity checks
            assert node.run_status == 'run', \
                "%s:%s" % (task.name, node.run_status)
            assert task.setup_tasks

        try:
            self._get_task_args(task, tasks_dict)
        except Exception as exception:
            msg = ("ERROR getting value for argument\n" + str(exception))
            self._handle_task_error(node, DependencyError(msg))
            return False

        return True

    def execute_task(self, task):
        """execute task's actions"""
        # register cleanup/teardown
        if task.teardown:
            self.teardown_list.append(task)

        # finally execute it!
        self.reporter.execute_task(task)
        return task.execute(sys.stdout, sys.stderr, self.verbosity)

    def process_task_result(self, node, catched_excp):
        """handles result"""
        task = node.task
        # save execution successful
        if catched_excp is None:
            node.run_status = "successful"
            task.save_extra_values()
            self.dep_manager.save_success(task)
            self.reporter.add_success(task)
        # task error
        else:
            self._handle_task_error(node, catched_excp)

    def run_tasks(self, task_dispatcher):
        """This will actually run/execute the tasks.
        It will check file dependencies to decide if task should be executed
        and save info on successful runs.
        It also deals with output to stdout/stderr.

        @param task_dispatcher: L{TaskDispatcher}
        """
        node = None
        while True:
            if self._stop_running:
                break

            try:
                node = task_dispatcher.generator.send(node)
            except StopIteration:
                break

            if not self.select_task(node, task_dispatcher.tasks):
                continue

            catched_excp = self.execute_task(node.task)
            self.process_task_result(node, catched_excp)

    def teardown(self):
        """run teardown from all tasks"""
        for task in reversed(self.teardown_list):
            self.reporter.teardown_task(task)
            catched = task.execute_teardown(sys.stdout, sys.stderr,
                                            self.verbosity)
            if catched:
                msg = "ERROR: task '%s' teardown action" % task.name
                error = SetupError(msg, catched)
                self.reporter.cleanup_error(error)

    def finish(self):
        """finish running tasks"""
        # flush update dependencies
        self.dep_manager.close()
        self.teardown()

        # report final results
        self.reporter.complete_run()
        return self.final_result

    def run_all(self, task_dispatcher):
        """entry point to run tasks
        @ivar task_dispatcher (TaskDispatcher)
        """
        try:
            if hasattr(self.reporter, 'initialize'):
                self.reporter.initialize(task_dispatcher.tasks)
            self.run_tasks(task_dispatcher)
        except InvalidTask as exception:
            self.reporter.runtime_error(str(exception))
            self.final_result = ERROR
        finally:
            self.finish()
        return self.final_result


# JobXXX objects sent from main process to sub-process for execution
class JobHold(object):
    """Indicates there is no task ready to be executed"""
    type = object()


class JobTask(object):
    """Contains a Task object"""
    type = object()

    def __init__(self, task):
        self.name = task.name
        try:
            self.task_pickle = cloudpickle.dumps(task)
        except pickle.PicklingError as excp:
            msg = """Error on Task: `{}`.
Task created at execution time that has an attribute that can not be pickled,
so not feasible to be used with multi-processing. To fix this issue make sure
the task is picklable or just do not use multi-processing execution.

Original exception {}: {}
"""
            raise InvalidTask(msg.format(self.name, excp.__class__, excp))


class JobTaskPickle(object):
    """dict of Task object excluding attributes that might be unpicklable"""
    type = object()

    def __init__(self, task):
        self.task_dict = task.pickle_safe_dict() # actually a dict to be pickled

    @property
    def name(self):
        return self.task_dict['name']


class MReporter(object):
    """send reported messages to master process

    puts a dictionary {'name': <task-name>, 'reporter': <method-name>}
    on runner's 'result_q'
    """
    def __init__(self, runner, reporter_cls):
        self.runner = runner
        self.reporter_cls = reporter_cls

    def __getattr__(self, method_name):
        """substitute any reporter method with a dispatching method"""
        if not hasattr(self.reporter_cls, method_name):
            raise AttributeError(method_name)

        def rep_method(task):
            self.runner.result_q.put({'name': task.name,
                                      'reporter': method_name})
        return rep_method

    def complete_run(self):
        """ignore this on MReporter"""
        pass


class MRunner(Runner):
    """MultiProcessing Runner"""
    Queue = staticmethod(MQueue)
    Child = staticmethod(Process)

    @staticmethod
    def available():
        """check if multiprocessing module is available"""
        # see: https://bitbucket.org/schettino72/doit/issue/17
        #      http://bugs.python.org/issue3770
        # not available on BSD systems
        try:
            import multiprocessing.synchronize
            multiprocessing  # pyflakes
        except ImportError:  # pragma: no cover
            return False
        else:
            return True

    def __init__(self, dep_manager, reporter,
                 continue_=False, always_execute=False,
                 verbosity=0, num_process=1):
        Runner.__init__(self, dep_manager, reporter, continue_=continue_,
                        always_execute=always_execute, verbosity=verbosity)
        self.num_process = num_process

        self.free_proc = 0           # number of free processes
        self.task_dispatcher = None  # TaskDispatcher retrieves tasks
        self.tasks = None            # dict of task instances by name
        self.result_q = None

    def __getstate__(self):
        # multiprocessing on Windows will try to pickle self.
        # These attributes are actually not used by the spawned process so
        # safe to be removed.
        pickle_dict = self.__dict__.copy()
        pickle_dict['reporter'] = None
        pickle_dict['task_dispatcher'] = None
        pickle_dict['dep_manager'] = None
        return pickle_dict

    def get_next_job(self, completed):
        """get next task to be dispatched to sub-process

        On MP needs to check if the dependencies finished their execution
        @returns : - None -> no more tasks to be executed
                   - JobXXX
        """
        if self._stop_running:
            return None  # gentle stop
        node = completed
        while True:
            # get next task from controller
            try:
                node = self.task_dispatcher.generator.send(node)
                if node == "hold on":
                    self.free_proc += 1
                    return JobHold()
            # no more tasks from controller...
            except StopIteration:
                # ... terminate one sub-process if no other task waiting
                return None

            # send a task to be executed
            if self.select_task(node, self.tasks):
                # If sub-process already contains the Task object send
                # only safe pickle data, otherwise send whole object.
                task = node.task
                if task.loader is DelayedLoaded and self.Child == Process:
                    return JobTask(task)
                else:
                    return JobTaskPickle(task)

    def _run_tasks_init(self, task_dispatcher):
        """initialization for run_tasks"""
        self.task_dispatcher = task_dispatcher
        self.tasks = task_dispatcher.tasks

    def _run_start_processes(self, job_q, result_q):
        """create and start sub-processes
        @param job_q: (multiprocessing.Queue) tasks to be executed
        @param result_q: (multiprocessing.Queue) collect task results
        @return list of Process
        """
        # #### DEBUG PICKLE ERRORS
        # class MyPickler (pickle._Pickler):
        #     def save(self, obj):
        #         print('pickling object {} of type {}'.format(obj, type(obj)))
        #         try:
        #             Pickler.save(self, obj)
        #         except:
        #             print('error. skipping...')
        # from io import BytesIO
        # pickler = MyPickler(BytesIO())
        # pickler.dump(self)
        # ### END DEBUG
        proc_list = []
        for _ in range(self.num_process):
            next_job = self.get_next_job(None)
            if next_job is None:
                break  # do not start more processes than tasks
            job_q.put(next_job)
            process = self.Child(
                target=self.execute_task_subprocess,
                args=(job_q, result_q, self.reporter.__class__))
            process.start()
            proc_list.append(process)
        return proc_list

    def _process_result(self, node, task, result):
        """process result received from sub-process"""
        if 'failure' in result:
            catched_excp = result['failure']
        else:
            # on success set values taken from subprocess result
            catched_excp = None
            task.update_from_pickle(result['task'])
            for action, output in zip(task.actions, result['out']):
                action.out = output
            for action, output in zip(task.actions, result['err']):
                action.err = output
        self.process_task_result(node, catched_excp)

    def run_tasks(self, task_dispatcher):
        """controls subprocesses task dispatching and result collection"""
        # result queue - results collected from sub-processes
        result_q = self.Queue()
        # task queue - tasks ready to be dispatched to sub-processes
        job_q = self.Queue()
        self._run_tasks_init(task_dispatcher)
        proc_list = self._run_start_processes(job_q, result_q)

        # wait for all processes to terminate
        proc_count = len(proc_list)
        try:
            while proc_count:
                # wait until there is a result to be consumed
                result = result_q.get()

                if 'exit' in result:
                    raise result['exit'](result['exception'])
                node = task_dispatcher.nodes[result['name']]
                task = node.task
                if 'reporter' in result:
                    getattr(self.reporter, result['reporter'])(task)
                    continue
                self._process_result(node, task, result)

                # update number of free processes
                free_proc = self.free_proc + 1
                self.free_proc = 0
                # tries to get as many tasks as there are free processes
                completed = node
                for _ in range(free_proc):
                    next_job = self.get_next_job(completed)
                    completed = None
                    if next_job is None:
                        proc_count -= 1
                    job_q.put(next_job)
                # check for cyclic dependencies
                assert len(proc_list) > self.free_proc
        except (SystemExit, KeyboardInterrupt, Exception):
            if self.Child == Process:
                for proc in proc_list:
                    proc.terminate()
            raise

        # we are done, join all processes
        for proc in proc_list:
            proc.join()

        # get teardown results
        while not result_q.empty():  # safe because subprocesses joined
            result = result_q.get()
            assert 'reporter' in result
            task = task_dispatcher.tasks[result['name']]
            getattr(self.reporter, result['reporter'])(task)

    def execute_task_subprocess(self, job_q, result_q, reporter_class):
        """executed on child processes
        @param job_q: task queue,
            * None elements indicate process can terminate
            * JobHold indicates process should wait for next task
            * JobTask / JobTaskPickle task to be executed
        """
        self.result_q = result_q
        if self.Child == Process:
            self.reporter = MReporter(self, reporter_class)
        try:
            while True:
                job = job_q.get()

                if job is None:
                    self.teardown()
                    return  # no more tasks to execute, finish this process

                # job is an incomplete Task obj when pickled, attributes
                # that might contain unpicklable data were removed.
                # so we need to get the task from this process and update it
                # to get dynamic task attributes.
                if job.type is JobTaskPickle.type:
                    task = self.tasks[job.name]
                    if self.Child == Process:  # pragma: no cover ...
                        # ... actually covered but subprocess doesn't get it.
                        task.update_from_pickle(job.task_dict)

                elif job.type is JobTask.type:
                    task = pickle.loads(job.task_pickle)

                # do nothing. this is used to start the subprocess even
                # if no task is available when the process is created.
                else:
                    assert job.type is JobHold.type
                    continue  # pragma: no cover

                result = {'name': task.name}
                t_result = self.execute_task(task)

                if t_result is None:
                    result['task'] = task.pickle_safe_dict()
                    result['out'] = [a.out for a in task.actions]
                    result['err'] = [a.err for a in task.actions]
                else:
                    result['failure'] = t_result
                result_q.put(result)
        except (SystemExit, KeyboardInterrupt, Exception) as exception:
            # error, blow-up everything. send exception info to master process
            result_q.put({
                'exit': exception.__class__,
                'exception': str(exception)})


class MThreadRunner(MRunner):
    """Parallel runner using threads"""
    Queue = staticmethod(queue.Queue)

    class DaemonThread(Thread):
        """daemon thread to make sure the process is terminated if there is
        an uncaught exception and threads are not correctly joined.
        """
        def __init__(self, *args, **kwargs):
            Thread.__init__(self, *args, **kwargs)
            self.daemon = True
    Child = staticmethod(DaemonThread)

    @staticmethod
    def available():
        return True


# ---- doit-0.30.3/doit/task.py ----

"""Tasks are the main abstractions managed by doit"""

import types
import os
import sys
import inspect
from collections import OrderedDict
from functools import partial
from pathlib import PurePath

from .cmdparse import CmdOption, TaskParse
from .exceptions import CatchedException, InvalidTask
from .action import create_action, PythonAction
from .dependency import UptodateCalculator


def first_line(doc):
    """extract first non-blank line from text, to extract docstring title"""
    if doc is not None:
        for line in doc.splitlines():
            striped = line.strip()
            if striped:
                return striped
    return ''


class DelayedLoader(object):
    """contains info for delayed creation of tasks from a task-creator

    :ivar creator: reference to task-creator function
    :ivar task_dep: (str) name of task that should be executed before the
                    loader calls the creator function
    :ivar basename: (str) basename used when creating tasks
                    This is used when doit creates new tasks to handle
                    tasks and targets specified on command line
    :ivar target_regex: (str) regex for all targets that this loader's tasks
                        will create
    :ivar created: (bool) whether this creator was already executed or not
    """
    def __init__(self, creator, executed=None, target_regex=None, creates=None):
        self.creator = creator
        self.task_dep = executed
        self.basename = None
        self.created = False
        self.target_regex = target_regex
        self.creates = creates[:] if creates else []
        self.regex_groups = OrderedDict()  # task_name:RegexGroup


# used to indicate that a task had a DelayedLoader but was already created
DelayedLoaded = False


class Task(object):
    """Task

    @ivar name string
    @ivar actions: list - L{BaseAction}
    @ivar clean_actions: list - L{BaseAction}
    @ivar loader (DelayedLoader)
    @ivar teardown
(list - L{BaseAction}) @ivar targets: (list -string) @ivar task_dep: (list - string) @ivar wild_dep: (list - string) task dependency using wildcard * @ivar file_dep: (set - string) @ivar calc_dep: (set - string) reference to a task @ivar dep_changed (list - string): list of file-dependencies that changed (are not up_to_date). this must be set before @ivar uptodate: (list - bool/None) use bool/computed value instead of checking dependencies @ivar value_savers (list - callables) that return dicts to be added to task values. Always executed on main process. To be used by `uptodate` implementations. @ivar setup_tasks (list - string): references to task-names @ivar is_subtask: (bool) indicate this task is a subtask @ivar has_subtask: (bool) indicate this task has subtasks @ivar result: (str) last action "result". used to check task-result-dep @ivar values: (dict) values saved by task that might be used by other tasks @ivar getargs: (dict) values from other tasks @ivar doc: (string) task documentation @ivar options: (dict) calculated params values (from getargs and taskopt) @ivar taskopt: (cmdparse.CmdParse) @ivar pos_arg: (str) name of parameter in action to receive positional parameters from command line @ivar pos_arg_val: (list - str) list of positional parameters values @ivar custom_title: function reference that takes a task object as parameter and returns a string. """ DEFAULT_VERBOSITY = 1 string_types = (str, ) # list of valid types/values for each task attribute. 
valid_attr = {'basename': (string_types, ()), 'name': (string_types, ()), 'actions': ((list, tuple), (None,)), 'file_dep': ((list, tuple), ()), 'task_dep': ((list, tuple), ()), 'uptodate': ((list, tuple), ()), 'calc_dep': ((list, tuple), ()), 'targets': ((list, tuple), ()), 'setup': ((list, tuple), ()), 'clean': ((list, tuple), (True,)), 'teardown': ((list, tuple), ()), 'doc': (string_types, (None,)), 'params': ((list, tuple,), ()), 'pos_arg': (string_types, (None,)), 'verbosity': ((), (None, 0, 1, 2,)), 'getargs': ((dict,), ()), 'title': ((types.FunctionType,), (None,)), 'watch': ((list, tuple), ()), } def __init__(self, name, actions, file_dep=(), targets=(), task_dep=(), uptodate=(), calc_dep=(), setup=(), clean=(), teardown=(), is_subtask=False, has_subtask=False, doc=None, params=(), pos_arg=None, verbosity=None, title=None, getargs=None, watch=(), loader=None): """sanity checks and initialization @param params: (list of dict for parameters) see cmdparse.CmdOption """ getargs = getargs or {} #default self.check_attr(name, 'name', name, self.valid_attr['name']) self.check_attr(name, 'actions', actions, self.valid_attr['actions']) self.check_attr(name, 'file_dep', file_dep, self.valid_attr['file_dep']) self.check_attr(name, 'task_dep', task_dep, self.valid_attr['task_dep']) self.check_attr(name, 'uptodate', uptodate, self.valid_attr['uptodate']) self.check_attr(name, 'calc_dep', calc_dep, self.valid_attr['calc_dep']) self.check_attr(name, 'targets', targets, self.valid_attr['targets']) self.check_attr(name, 'setup', setup, self.valid_attr['setup']) self.check_attr(name, 'clean', clean, self.valid_attr['clean']) self.check_attr(name, 'teardown', teardown, self.valid_attr['teardown']) self.check_attr(name, 'doc', doc, self.valid_attr['doc']) self.check_attr(name, 'params', params, self.valid_attr['params']) self.check_attr(name, 'pos_arg', pos_arg, self.valid_attr['pos_arg']) self.check_attr(name, 'verbosity', verbosity, self.valid_attr['verbosity']) 
self.check_attr(name, 'getargs', getargs, self.valid_attr['getargs']) self.check_attr(name, 'title', title, self.valid_attr['title']) self.check_attr(name, 'watch', watch, self.valid_attr['watch']) if '=' in name: msg = "Task '{}': name must not use the char '=' (equal sign)." raise InvalidTask(msg.format(name)) self.name = name self.params = params # save just for use on command `info` self.options = None self.pos_arg = pos_arg self.pos_arg_val = None # to be set when parsing command line self.setup_tasks = list(setup) # actions self._action_instances = None if actions is None: self._actions = [] else: self._actions = list(actions[:]) self._init_deps(file_dep, task_dep, calc_dep) # loaders create an implicity task_dep self.loader = loader if self.loader and self.loader.task_dep: self.task_dep.append(loader.task_dep) uptodate = uptodate if uptodate else [] self.getargs = getargs if self.getargs: uptodate.extend(self._init_getargs()) self.value_savers = [] self.uptodate = self._init_uptodate(uptodate) self.targets = self._init_targets(targets) self.is_subtask = is_subtask self.has_subtask = has_subtask self.result = None self.values = {} self.verbosity = verbosity self.custom_title = title # clean if clean is True: self._remove_targets = True self.clean_actions = () else: self._remove_targets = False self.clean_actions = [create_action(a, self) for a in clean] self.teardown = [create_action(a, self) for a in teardown] self.doc = self._init_doc(doc) self.watch = watch def _init_deps(self, file_dep, task_dep, calc_dep): """init for dependency related attributes""" self.dep_changed = None # file_dep self.file_dep = set() self._expand_file_dep(file_dep) # task_dep self.task_dep = [] self.wild_dep = [] if task_dep: self._expand_task_dep(task_dep) # calc_dep self.calc_dep = set() if calc_dep: self._expand_calc_dep(calc_dep) def _init_targets(self, items): """convert valid targets to `str`""" targets = [] for target in items: if isinstance(target, str): 
targets.append(target)
            elif isinstance(target, PurePath):
                targets.append(str(target))
            else:
                msg = ("%s. target must be a str or Path from pathlib. " +
                       "Got '%r' (%s)")
                raise InvalidTask(msg % (self.name, target, type(target)))
        return targets

    def _init_uptodate(self, items):
        """wrap uptodate callables"""
        uptodate = []
        for item in items:
            # configure task
            if hasattr(item, 'configure_task'):
                item.configure_task(self)

            # check/append uptodate value to task
            if isinstance(item, bool) or item is None:
                uptodate.append((item, None, None))
            elif hasattr(item, '__call__'):
                uptodate.append((item, [], {}))
            elif isinstance(item, tuple):
                call = item[0]
                args = list(item[1]) if len(item) > 1 else []
                kwargs = item[2] if len(item) > 2 else {}
                uptodate.append((call, args, kwargs))
            elif isinstance(item, str):
                uptodate.append((item, [], {}))
            else:
                msg = ("%s. task invalid 'uptodate' item '%r'. " +
                       "Must be bool, None, str, callable or tuple " +
                       "(callable, args, kwargs).")
                raise InvalidTask(msg % (self.name, item))
        return uptodate

    def _expand_file_dep(self, file_dep):
        """put input into file_dep"""
        for dep in file_dep:
            if isinstance(dep, str):
                self.file_dep.add(dep)
            elif isinstance(dep, PurePath):
                self.file_dep.add(str(dep))
            else:
                msg = ("%s. file_dep must be a str or Path from pathlib. " +
                       "Got '%r' (%s)")
                raise InvalidTask(msg % (self.name, dep, type(dep)))

    def _expand_task_dep(self, task_dep):
        """convert task_dep input into actual task_dep and wild_dep"""
        for dep in task_dep:
            if "*" in dep:
                self.wild_dep.append(dep)
            else:
                self.task_dep.append(dep)

    def _expand_calc_dep(self, calc_dep):
        """calc_dep input"""
        for dep in calc_dep:
            if dep not in self.calc_dep:
                self.calc_dep.add(dep)

    def _extend_uptodate(self, uptodate):
        """add/extend uptodate values"""
        self.uptodate.extend(self._init_uptodate(uptodate))

    # FIXME should support setup also
    _expand_map = {
        'task_dep': _expand_task_dep,
        'file_dep': _expand_file_dep,
        'calc_dep': _expand_calc_dep,
        'uptodate': _extend_uptodate,
    }

    def update_deps(self, deps):
        """expand all kinds of dep input"""
        for dep, dep_values in deps.items():
            if dep not in self._expand_map:
                continue
            self._expand_map[dep](self, dep_values)

    def init_options(self):
        """Put default values on options.

        This will only be used if params options were not passed
        on the command line.
""" if self.options is None: taskcmd = TaskParse([CmdOption(opt) for opt in self.params]) # ignore positional parameters self.options = taskcmd.parse('')[0] def _init_getargs(self): """task getargs attribute define implicit task dependencies""" check_result = set() for arg_name, desc in self.getargs.items(): # tuple (task_id, key_name) parts = desc if isinstance(parts, str) or len(parts) != 2: msg = ("Taskid '%s' - Invalid format for getargs of '%s'.\n" % (self.name, arg_name) + "Should be tuple with 2 elements " + "('', '') got '%s'\n" % desc) raise InvalidTask(msg) if parts[0] not in self.setup_tasks: check_result.add(parts[0]) return [result_dep(t, setup_dep=True) for t in check_result] @staticmethod def _init_doc(doc): """process task "doc" attribute""" # store just first non-empty line as documentation string return first_line(doc) @staticmethod def check_attr(task, attr, value, valid): """check input task attribute is correct type/value @param task (string): task name @param attr (string): attribute name @param value: actual input from user @param valid (list): of valid types/value accepted @raises InvalidTask if invalid input """ if type(value) in valid[0]: return if value in valid[1]: return # input value didnt match any valid type/value, raise execption msg = "Task '%s' attribute '%s' must be " % (task, attr) accept = ", ".join([getattr(v, '__name__', str(v)) for v in (valid[0] + valid[1])]) msg += "{%s} got:%r %s" % (accept, value, type(value)) raise InvalidTask(msg) @property def actions(self): """lazy creation of action instances""" if self._action_instances is None: self._action_instances = [ create_action(a, self) for a in self._actions] return self._action_instances def save_extra_values(self): """run value_savers updating self.values""" for value_saver in self.value_savers: self.values.update(value_saver()) def _get_out_err(self, out, err, verbosity): """select verbosity to be used""" priority = (verbosity, # use command line option self.verbosity, 
# or task default from dodo file self.DEFAULT_VERBOSITY) # or global default use_verbosity = [v for v in priority if v is not None][0] out_err = [(None, None), # 0 (None, err), # 1 (out, err)] # 2 return out_err[use_verbosity] def execute(self, out=None, err=None, verbosity=None): """Executes the task. @return failure: see CmdAction.execute """ self.init_options() task_stdout, task_stderr = self._get_out_err(out, err, verbosity) for action in self.actions: action_return = action.execute(task_stdout, task_stderr) if isinstance(action_return, CatchedException): return action_return self.result = action.result self.values.update(action.values) def execute_teardown(self, out=None, err=None, verbosity=None): """Executes task's teardown @return failure: see CmdAction.execute """ task_stdout, task_stderr = self._get_out_err(out, err, verbosity) for action in self.teardown: action_return = action.execute(task_stdout, task_stderr) if isinstance(action_return, CatchedException): return action_return def clean(self, outstream, dryrun): """Execute task's clean @ivar outstream: 'write' output into this stream @ivar dryrun (bool): if True clean tasks are not executed (just print out what would be executed) """ self.init_options() # if clean is True remove all targets if self._remove_targets is True: clean_targets(self, dryrun) else: # clean contains a list of actions... for action in self.clean_actions: msg = "%s - executing '%s'\n" outstream.write(msg % (self.name, action)) # add extra arguments used by clean actions if isinstance(action, PythonAction): action_sig = inspect.signature(action.py_callable) if 'dryrun' in action_sig.parameters: action.kwargs['dryrun'] = dryrun if not dryrun: result = action.execute(out=outstream) if isinstance(result, CatchedException): sys.stderr.write(str(result)) def title(self): """String representation on output. 
@return: (str) Task name and actions
        """
        if self.custom_title:
            return self.custom_title(self)
        return self.name

    def __repr__(self):
        return "<Task: %s>" % self.name

    def __getstate__(self):
        """remove attributes that are never used on processes
        that only execute tasks
        """
        to_pickle = self.__dict__.copy()
        # never executed in sub-process
        to_pickle['uptodate'] = None
        to_pickle['value_savers'] = None
        # can be re-created on demand
        to_pickle['_action_instances'] = None
        return to_pickle

    # when using multiprocessing Tasks are pickled
    def pickle_safe_dict(self):
        """remove attributes that might contain unpickleable content,
        most probably closures
        """
        to_pickle = self.__dict__.copy()
        del to_pickle['_actions']
        del to_pickle['_action_instances']
        del to_pickle['clean_actions']
        del to_pickle['teardown']
        del to_pickle['custom_title']
        del to_pickle['value_savers']
        del to_pickle['uptodate']
        return to_pickle

    def update_from_pickle(self, pickle_obj):
        """update self with data from pickled Task"""
        self.__dict__.update(pickle_obj)

    def __eq__(self, other):
        return self.name == other.name

    def __lt__(self, other):
        """used on default sorting of tasks (alphabetically by name)"""
        return self.name < other.name


def dict_to_task(task_dict):
    """Create a task instance from dictionary.

    The dictionary has the same format as returned by task-generators
    from dodo files.

    @param task_dict (dict): task representation as a dict.
    @raise InvalidTask: If unexpected fields were passed in task_dict
    """
    # check required fields
    if 'actions' not in task_dict:
        raise InvalidTask("Task %s must contain 'actions' field. %s" %
                          (task_dict['name'], task_dict))

    # user friendly. do not go ahead with invalid input.
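The two checks performed by `dict_to_task` above can be reproduced as a standalone sketch. `VALID_ATTRS` here is a small illustrative stand-in for `Task.valid_attr.keys()` (the real set is larger), and `validate_task_dict` is a hypothetical helper name, not part of doit:

```python
# Illustrative subset of Task.valid_attr.keys(); the real doit set
# includes many more fields (uptodate, clean, doc, params, ...).
VALID_ATTRS = {'name', 'actions', 'file_dep', 'targets', 'task_dep'}

class InvalidTask(Exception):
    """stand-in for doit's InvalidTask error"""

def validate_task_dict(task_dict):
    # required field: every task dict must define 'actions'
    if 'actions' not in task_dict:
        raise InvalidTask("Task %s must contain 'actions' field. %s"
                          % (task_dict['name'], task_dict))
    # reject unknown fields up-front instead of failing later
    for key in task_dict:
        if key not in VALID_ATTRS:
            raise InvalidTask("Task %s contains invalid field: '%s'"
                              % (task_dict['name'], key))

validate_task_dict({'name': 't1', 'actions': ['echo hi']})  # passes silently
try:
    validate_task_dict({'name': 't2', 'actions': [], 'bogus': 1})
except InvalidTask as err:
    print(err)
```

Failing fast on unknown fields is what makes a typo in a dodo file (say `file_deps` instead of `file_dep`) an immediate error rather than a silently ignored option.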
task_attrs = list(task_dict.keys()) valid_attrs = set(Task.valid_attr.keys()) for key in task_attrs: if key not in valid_attrs: raise InvalidTask("Task %s contains invalid field: '%s'"% (task_dict['name'], key)) return Task(**task_dict) def clean_targets(task, dryrun): """remove all targets from a task""" files = [path for path in task.targets if os.path.isfile(path)] dirs = [path for path in task.targets if os.path.isdir(path)] # remove all files for file_ in files: print("%s - removing file '%s'" % (task.name, file_)) if not dryrun: os.remove(file_) # remove all directories (if empty) for dir_ in dirs: if os.listdir(dir_): msg = "%s - cannot remove (it is not empty) '%s'" print(msg % (task.name, dir_)) else: msg = "%s - removing dir '%s'" print(msg % (task.name, dir_)) if not dryrun: os.rmdir(dir_) def _return_param(val): '''just return passed parameter - make a callable from any value''' return val # uptodate class result_dep(UptodateCalculator): """check if result of the given task was modified """ def __init__(self, dep_task_name, setup_dep=False): ''' :param setup_dep: controls if dependent task is task_dep or setup ''' self.dep_name = dep_task_name self.setup_dep = setup_dep self.result_name = '_result:%s' % self.dep_name def configure_task(self, task): """to be called by doit when create the task""" # result_dep creates an implicit task_dep if self.setup_dep: task.setup_tasks.append(self.dep_name) else: task.task_dep.append(self.dep_name) def _result_single(self): """get result from a single task""" return self.get_val(self.dep_name, 'result:') def _result_group(self, dep_task): """get result from a group task the result is the combination of results of all sub-tasks """ prefix = dep_task.name + ":" sub_tasks = {} for sub in dep_task.task_dep: if sub.startswith(prefix): sub_tasks[sub] = self.get_val(sub, 'result:') return sub_tasks def __call__(self, task, values): """return True if result is the same as last run""" dep_task = self.tasks_dict[self.dep_name] 
if not dep_task.has_subtask: dep_result = self._result_single() else: dep_result = self._result_group(dep_task) func = partial(_return_param, {self.result_name: dep_result}) task.value_savers.append(func) last_success = values.get(self.result_name) if last_success is None: return False return last_success == dep_result doit-0.30.3/doit/tools.py000066400000000000000000000231321305250115000151760ustar00rootroot00000000000000"""extra goodies to be used in dodo files""" import os import time as time_module import datetime import hashlib import operator import subprocess from . import exceptions from .action import CmdAction, PythonAction from .task import result_dep # imported for backward compatibility result_dep # pyflakes # action def create_folder(dir_path): """create a folder in the given path if it doesnt exist yet.""" os.makedirs(dir_path, exist_ok=True) # title def title_with_actions(task): """return task name task actions""" if task.actions: title = "\n\t".join([str(action) for action in task.actions]) # A task that contains no actions at all # is used as group task else: title = "Group: %s" % ", ".join(task.task_dep) return "%s => %s"% (task.name, title) # uptodate def run_once(task, values): """execute task just once used when user manually manages a dependency """ def save_executed(): return {'run-once': True} task.value_savers.append(save_executed) return values.get('run-once', False) # uptodate class config_changed(object): """check if passed config was modified @var config (str) or (dict) """ def __init__(self, config): self.config = config self.config_digest = None def _calc_digest(self): if isinstance(self.config, str): return self.config elif isinstance(self.config, dict): data = '' for key in sorted(self.config): data += key + repr(self.config[key]) byte_data = data.encode("utf-8") return hashlib.md5(byte_data).hexdigest() else: raise Exception(('Invalid type of config_changed parameter got %s' + ', must be string or dict') % (type(self.config),)) 
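The dict branch of `config_changed._calc_digest` above serializes keys in sorted order before hashing, so the digest is independent of dict ordering. A standalone sketch of that logic (the function name is illustrative; only `hashlib` is assumed):

```python
import hashlib

def calc_config_digest(config):
    # mirrors config_changed._calc_digest above: a str is used as-is,
    # a dict is serialized key-by-key in sorted order, then md5-hashed
    if isinstance(config, str):
        return config
    if isinstance(config, dict):
        data = ''
        for key in sorted(config):
            data += key + repr(config[key])
        return hashlib.md5(data.encode("utf-8")).hexdigest()
    raise TypeError("config must be string or dict, got %s" % type(config))

# key order does not matter -- both dicts produce the same digest
assert calc_config_digest({'a': 1, 'b': 2}) == calc_config_digest({'b': 2, 'a': 1})
```

The sorted-key pass is the design choice that makes `config_changed({'opt': 1, 'flag': True})` stable across runs even though Python dict iteration order is an implementation detail.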
def configure_task(self, task): task.value_savers.append(lambda: {'_config_changed':self.config_digest}) def __call__(self, task, values): """return True if confing values are UNCHANGED""" self.config_digest = self._calc_digest() last_success = values.get('_config_changed') if last_success is None: return False return (last_success == self.config_digest) # uptodate class timeout(object): """add timeout to task @param timeout_limit: (datetime.timedelta, int) in seconds if the time elapsed since last time task was executed is bigger than the "timeout" time the task is NOT up-to-date """ def __init__(self, timeout_limit): if isinstance(timeout_limit, datetime.timedelta): self.limit_sec = ((timeout_limit.days * 24 * 3600) + timeout_limit.seconds) elif isinstance(timeout_limit, int): self.limit_sec = timeout_limit else: msg = "timeout should be datetime.timedelta or int got %r " raise Exception(msg % timeout_limit) def __call__(self, task, values): def save_now(): return {'success-time': time_module.time()} task.value_savers.append(save_now) last_success = values.get('success-time', None) if last_success is None: return False return (time_module.time() - last_success) < self.limit_sec # uptodate class check_timestamp_unchanged(object): """check if timestamp of a given file/dir is unchanged since last run. The C{cmp_op} parameter can be used to customize when timestamps are considered unchanged, e.g. you could pass L{operator.ge} to also consider e.g. files reverted to an older copy as unchanged; or pass a custom function to completely customize what unchanged means. If the specified file does not exist, an exception will be raised. Note that if the file C{fn} is a target of another task you should probably add C{task_dep} on that task to ensure the file is created before checking it. 
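The `timeout` class above normalizes its limit to seconds, accepting either an `int` or a `datetime.timedelta`. That conversion can be sketched on its own (the helper name is illustrative):

```python
import datetime

def timeout_limit_seconds(timeout_limit):
    # mirrors tools.timeout.__init__ above: timedelta is reduced to
    # whole seconds via its (days, seconds) fields; ints pass through
    if isinstance(timeout_limit, datetime.timedelta):
        return timeout_limit.days * 24 * 3600 + timeout_limit.seconds
    if isinstance(timeout_limit, int):
        return timeout_limit
    raise TypeError("timeout should be datetime.timedelta or int, got %r"
                    % timeout_limit)

assert timeout_limit_seconds(datetime.timedelta(days=1, seconds=30)) == 86430
assert timeout_limit_seconds(120) == 120
```

Note that `timedelta` normalizes its constructor arguments into `days`/`seconds`/`microseconds`, so the two-field formula above is enough; sub-second precision is deliberately dropped.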
""" def __init__(self, file_name, time='mtime', cmp_op=operator.eq): """initialize the callable @param fn: (str) path to file/directory to check @param time: (str) which timestamp field to check, can be one of (atime, access, ctime, status, mtime, modify) @param cmp_op: (callable) takes two parameters (prev_time, current_time) should return True if the timestamp is considered unchanged @raises ValueError: if invalid C{time} value is passed """ if time in ('atime', 'access'): self._timeattr = 'st_atime' elif time in ('ctime', 'status'): self._timeattr = 'st_ctime' elif time in ('mtime', 'modify'): self._timeattr = 'st_mtime' else: raise ValueError('time can be one of: atime, access, ctime, ' 'status, mtime, modify (got: %r)' % time) self._file_name = file_name self._cmp_op = cmp_op self._key = '.'.join([self._file_name, self._timeattr]) def _get_time(self): return getattr(os.stat(self._file_name), self._timeattr) def __call__(self, task, values): """register action that saves the timestamp and check current timestamp @raises OSError: if cannot stat C{self._file_name} file (e.g. doesn't exist) """ def save_now(): return {self._key: self._get_time()} task.value_savers.append(save_now) prev_time = values.get(self._key) if prev_time is None: # this is first run return False current_time = self._get_time() return self._cmp_op(prev_time, current_time) # action class class LongRunning(CmdAction): """Action to handle a Long running shell process, usually a server or service. 
Properties: * the output is never captured * it is always successful (return code is not used) * "swallow" KeyboardInterrupt """ def execute(self, out=None, err=None): action = self.expand_action() process = subprocess.Popen(action, shell=self.shell, **self.pkwargs) try: process.wait() except KeyboardInterrupt: # normal way to stop interactive process pass # the name InteractiveAction is deprecated on 0.25 InteractiveAction = LongRunning class Interactive(CmdAction): """Action to handle Interactive shell process: * the output is never captured """ def execute(self, out=None, err=None): action = self.expand_action() process = subprocess.Popen(action, shell=self.shell, **self.pkwargs) process.wait() if process.returncode != 0: return exceptions.TaskFailed( "Interactive command failed: '%s' returned %s" % (action, process.returncode)) # action class class PythonInteractiveAction(PythonAction): """Action to handle Interactive python: * the output is never captured * it is successful unless a exeception is raised """ def execute(self, out=None, err=None): kwargs = self._prepare_kwargs() try: returned_value = self.py_callable(*self.args, **kwargs) except Exception as exception: return exceptions.TaskError("PythonAction Error", exception) if isinstance(returned_value, str): self.result = returned_value elif isinstance(returned_value, dict): self.values = returned_value self.result = returned_value # debug helper def set_trace(): # pragma: no cover """start debugger, make sure stdout shows pdb output. output is not restored. """ import pdb import sys debugger = pdb.Pdb(stdin=sys.__stdin__, stdout=sys.__stdout__) debugger.set_trace(sys._getframe().f_back) #pylint: disable=W0212 def register_doit_as_IPython_magic(): # pragma: no cover """ Defines a ``%doit`` magic function[1] that discovers and execute tasks from IPython's interactive variables (global namespace). It will fail if not invoked from within an interactive IPython shell. .. 
Tip:: To permanently add this magic-function to your IPython, create a new script inside your startup-profile (``~/.ipython/profile_default/startup/doit_magic.ipy``) with the following content: from doit.tools import register_doit_as_IPython_magic register_doit_as_IPython_magic() [1] http://ipython.org/ipython-doc/dev/interactive/tutorial.html#magic-functions """ from IPython.core.magic import register_line_magic from IPython.core.getipython import get_ipython from doit.cmd_base import ModuleTaskLoader from doit.doit_cmd import DoitMain @register_line_magic def doit(line): """ Run *doit* with `task_creators` from all interactive variables (IPython's global namespace). Examples: >>> %doit --help ## Show help for options and arguments. >>> def task_foo(): return {'actions': ['echo hi IPython'], 'verbosity': 2} >>> %doit list ## List any tasks discovered. foo >>> %doit ## Run any tasks. . foo hi IPython """ ip = get_ipython() # Override db-files location inside ipython-profile dir, # which is certainly writable. prof_dir = ip.profile_dir.location opt_vals = {'dep_file': os.path.join(prof_dir, 'db', '.doit.db')} commander = DoitMain(ModuleTaskLoader(ip.user_module), extra_config={'GLOBAL': opt_vals}) commander.run(line.split()) doit-0.30.3/doit/version.py000066400000000000000000000001401305250115000155150ustar00rootroot00000000000000"""doit version, defined out of __init__.py to avoid circular reference""" VERSION = (0, 30, 3) doit-0.30.3/pylintrc000066400000000000000000000165121305250115000143200ustar00rootroot00000000000000[MASTER] # Specify a configuration file. #rcfile= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Profiled execution. profile=no # Add to the black list. It should be a base name, not a # path. You may set this option multiple times. ignore=CVS # Pickle collected data for later comparisons. 
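`doit/version.py` above stores the version as a tuple, `VERSION = (0, 30, 3)`, to avoid a circular import. Turning that tuple into the usual dotted string (as done elsewhere in the package, outside this chunk) is a one-liner; the helper name here is illustrative:

```python
VERSION = (0, 30, 3)  # as defined in doit/version.py above

def version_string(version_tuple):
    # join the numeric tuple into a dotted version string, e.g. '0.30.3'
    return '.'.join(str(part) for part in version_tuple)

assert version_string(VERSION) == '0.30.3'
```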
persistent=yes # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time. #enable= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifier separated by comma (,) or put this option # multiple time (only on the command line, not in the configuration file where # it should appear only once). # :E1103: *%s %r has no %r member (but some types could not be inferred)* # :W0142: *Used * or ** magic* # :W0703: *Catch "Exception"* # :R0903: *Too few public methods (%s/%s)* # :R0922: *Abstract class is only referenced 1 times* # :E1101: *Used when a variable is accessed for an unexistent member. This message belongs to the typecheck checker.* (many false positive) disable=E1103,W0142,W0703,R0903,R0922,E1101 [REPORTS] # Set the output format. Available formats are text, parseable, colorized, msvs # (visual studio) and html output-format=text # Include message's id in output include-ids=yes # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells whether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. 
This is used by the global # evaluation report (RP0004). comment=yes [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=4 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes [TYPECHECK] # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # List of classes names for which member attributes should not be checked # (useful for classes with attributes dynamically set). ignored-classes=SQLObject # When zope mode is activated, add a predefined set of Zope acquired attributes # to generated-members. zope=no # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E0201 when accessed. generated-members=REQUEST,acl_users,aq_parent [VARIABLES] # Tells whether we should check for unused import in __init__ files. init-import=no # A regular expression matching the beginning of the name of dummy variables # (i.e. not used). dummy-variables-rgx=_|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO [FORMAT] # Maximum number of characters on a single line. max-line-length=80 # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). 
indent-string=' ' [BASIC] # Required attributes for module, separated by a comma required-attributes= # List of builtins function names that should not be used, separated by a comma bad-functions=map,filter,apply,input # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=(([A-Z_][A-Z0-9_]*)|[a-z_][a-z0-9_]{2,30}$|[A-Z_][a-zA-Z0-9]+$|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct instance attribute names attr-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # Regular expression which should only match functions or classes name which do # not require a docstring no-docstring-rgx=__.*__ [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,string,TERMIOS,Bastion,rexec # Create a graph of every (i.e. 
internal and external) dependencies in the # given file (report RP0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled) int-import-graph= [DESIGN] # Maximum number of arguments for function / method max-args=5 # Argument names that match this expression will be ignored. Default to name # with leading underscore ignored-argument-names=_.* # Maximum number of locals for function / method body max-locals=15 # Maximum number of return / yield for function / method body max-returns=6 # Maximum number of branch for function / method body max-branchs=12 # Maximum number of statements in function / method body max-statements=50 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=7 # Minimum number of public methods for a class (see R0903). min-public-methods=2 # Maximum number of public methods for a class (see R0904). max-public-methods=20 [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp doit-0.30.3/setup.py000077500000000000000000000066351305250115000142530ustar00rootroot00000000000000#! 
/usr/bin/env python import sys from setuptools import setup install_requires = ['cloudpickle'] ########### last version to support python2 is 0.29 #### if sys.version_info[0] < 3: sys.exit('This version of doit is only supported by Python 3.\n' + 'Please use doit==0.29.0 with Python 2.') ######################################################## ########### platform specific stuff ############# import platform platform_system = platform.system() # auto command dependencies to watch file-system if platform_system == "Darwin": install_requires.append('macfsevents') elif platform_system == "Linux": install_requires.append('pyinotify') ################################################## ######### python version specific stuff ########## # pathlib is the part of the Python standard library since 3.4 version. if sys.version_info < (3, 4): install_requires.append('pathlib') ################################################## long_description = """ `doit` is a task management & automation tool `doit` comes from the idea of bringing the power of build-tools to execute any kind of **task** `doit` is a modern open-source build-tool written in python designed to be simple to use and flexible to deal with complex work-flows. It is specially suitable for building and managing custom work-flows where there is no out-of-the-box solution available. `doit` has been successfully used on: systems test/integration automation, scientific computational pipelines, content generation, configuration management, etc. 
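The `setup.py` above builds `install_requires` at install time: a file-system watcher per platform (for the `auto` command) plus a `pathlib` backport on Python < 3.4. A standalone sketch of that selection logic (the function name is illustrative; the package names come from the code above):

```python
def build_install_requires(platform_system, python_version):
    # mirrors the platform/version branches in setup.py above
    requires = ['cloudpickle']
    # file-system event watchers used by the `auto` command
    if platform_system == "Darwin":
        requires.append('macfsevents')
    elif platform_system == "Linux":
        requires.append('pyinotify')
    # pathlib joined the stdlib in Python 3.4
    if python_version < (3, 4):
        requires.append('pathlib')
    return requires

assert build_install_requires("Linux", (3, 5)) == ['cloudpickle', 'pyinotify']
assert 'pathlib' in build_install_requires("Windows", (3, 3))
```

As the comment in `setup.py` notes, setuptools environment markers in `extras_require` would express this declaratively, but were avoided here (2017) because too few users had a new enough setuptools.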
`website/docs <http://pydoit.org>`_
"""

setup(name = 'doit',
      description = 'doit - Automation Tool',
      version = '0.30.3',
      license = 'MIT',
      author = 'Eduardo Naufel Schettino',
      author_email = 'schettino72@gmail.com',
      url = 'http://pydoit.org',
      classifiers = [
          'Development Status :: 5 - Production/Stable',
          'Environment :: Console',
          'License :: OSI Approved :: MIT License',
          'Natural Language :: English',
          'Operating System :: OS Independent',
          'Operating System :: POSIX',
          'Programming Language :: Python :: 3',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
          'Programming Language :: Python :: 3.5',
          'Programming Language :: Python :: 3.6',
          'Intended Audience :: Developers',
          'Intended Audience :: Information Technology',
          'Intended Audience :: Science/Research',
          'Intended Audience :: System Administrators',
          'Topic :: Software Development :: Build Tools',
          'Topic :: Software Development :: Testing',
          'Topic :: Software Development :: Quality Assurance',
          'Topic :: Scientific/Engineering',
      ],
      keywords = "build make task automation pipeline",
      packages = ['doit'],
      install_requires = install_requires,
      # extra_requires with environment markers can be used only
      # newer versions of setuptools that most users do not have
      # installed.
So wait for a while before use them (2017-02) # extras_require={ # ':python_version <= "3.3"': ['pathlib'], # ':sys.platform == "darwin"': ['macfsevents'], # ':sys.platform == "linux"': ['pyinotify'], # }, long_description = long_description, entry_points = { 'console_scripts': [ 'doit = doit.__main__:main' ] }, ) doit-0.30.3/tests/000077500000000000000000000000001305250115000136665ustar00rootroot00000000000000doit-0.30.3/tests/__init__.py000066400000000000000000000000001305250115000157650ustar00rootroot00000000000000doit-0.30.3/tests/conftest.py000066400000000000000000000120011305250115000160570ustar00rootroot00000000000000import os import time from dbm import whichdb import py import pytest from doit.dependency import DbmDB, Dependency, MD5Checker from doit.task import Task def get_abspath(relativePath): """ return abs file path relative to this file""" return os.path.join(os.path.dirname(__file__), relativePath) # fixture to create a sample file to be used as file_dep def dependency_factory(relative_path): @pytest.fixture def dependency(request): path = get_abspath(relative_path) if os.path.exists(path): os.remove(path) ff = open(path, "w") ff.write("whatever" + str(time.asctime())) ff.close() def remove_dependency(): if os.path.exists(path): os.remove(path) request.addfinalizer(remove_dependency) return path return dependency dependency1 = dependency_factory("data/dependency1") dependency2 = dependency_factory("data/dependency2") # fixture to create a sample file to be used as file_dep @pytest.fixture def target1(request): path = get_abspath("data/target1") if os.path.exists(path): # pragma: no cover os.remove(path) def remove_path(): if os.path.exists(path): os.remove(path) request.addfinalizer(remove_path) return path # fixture for "doit.db". 
create/remove for every test def remove_db(filename): """remove db file from anydbm""" # dbm on some systems add '.db' on others add ('.dir', '.pag') extensions = ['', #dbhash #gdbm '.bak', #dumbdb '.dat', #dumbdb '.dir', #dumbdb #dbm2 '.db', #dbm1 '.pag', #dbm2 ] for ext in extensions: if os.path.exists(filename + ext): os.remove(filename + ext) # dbm backends use different file extentions db_ext = {'dbhash': [''], 'gdbm': [''], 'dbm': ['.db', '.dir'], 'dumbdbm': ['.dat'], # for python3 'dbm.ndbm': ['.db'], } @pytest.fixture def depfile(request): if hasattr(request, 'param'): dep_class = request.param else: dep_class = DbmDB # copied from tempdir plugin name = request._pyfuncitem.name name = py.std.re.sub("[\W]", "_", name) my_tmpdir = request.config._tmpdirhandler.mktemp(name, numbered=True) dep_file = Dependency(dep_class, os.path.join(my_tmpdir.strpath, "testdb")) dep_file.whichdb = whichdb(dep_file.name) if dep_class is DbmDB else 'XXX' dep_file.name_ext = db_ext.get(dep_file.whichdb, ['']) def remove_depfile(): if not dep_file._closed: dep_file.close() remove_db(dep_file.name) request.addfinalizer(remove_depfile) return dep_file @pytest.fixture def depfile_name(request): # copied from tempdir plugin name = request._pyfuncitem.name name = py.std.re.sub("[\W]", "_", name) my_tmpdir = request.config._tmpdirhandler.mktemp(name, numbered=True) depfile_name = (os.path.join(my_tmpdir.strpath, "testdb")) def remove_depfile(): remove_db(depfile_name) request.addfinalizer(remove_depfile) return depfile_name @pytest.fixture def dep_manager(request, depfile_name): return Dependency(DbmDB, depfile_name) @pytest.fixture def restore_cwd(request): """restore cwd to its initial value after test finishes.""" previous = os.getcwd() def restore_cwd(): os.chdir(previous) request.addfinalizer(restore_cwd) # create a list of sample tasks def tasks_sample(): tasks_sample = [ # 0 Task("t1", [""], doc="t1 doc string"), # 1 Task("t2", [""], file_dep=['tests/data/dependency1'], doc="t2 
doc string"), # 2 Task("g1", None, doc="g1 doc string", has_subtask=True), # 3 Task("g1.a", [""], doc="g1.a doc string", is_subtask=True), # 4 Task("g1.b", [""], doc="g1.b doc string", is_subtask=True), # 5 Task("t3", [""], doc="t3 doc string", task_dep=["t1"]) ] tasks_sample[2].task_dep = ['g1.a', 'g1.b'] return tasks_sample def tasks_bad_sample(): """Create list of tasks that cause errors.""" bad_sample = [ Task("e1", [""], doc='e4 bad file dep', file_dep=['xxxx']) ] return bad_sample def CmdFactory(cls, outstream=None, task_loader=None, dep_file=None, backend=None, task_list=None, sel_tasks=None, dep_manager=None, config=None, cmds=None): """helper for test code, so test can call _execute() directly""" cmd = cls(task_loader=task_loader, config=config, cmds=cmds) if outstream: cmd.outstream = outstream if backend: assert backend == "dbm" # the only one used on tests cmd.dep_manager = Dependency(DbmDB, dep_file, MD5Checker) elif dep_manager: cmd.dep_manager = dep_manager cmd.dep_file = dep_file # (str) filename usually '.doit.db' cmd.task_list = task_list # list of tasks cmd.sel_tasks = sel_tasks # from command line or default_tasks return cmd doit-0.30.3/tests/data/000077500000000000000000000000001305250115000145775ustar00rootroot00000000000000doit-0.30.3/tests/data/README000066400000000000000000000001001305250115000154460ustar00rootroot00000000000000this folder is used to keep some temporary files used on tests. doit-0.30.3/tests/loader_sample.py000066400000000000000000000004301305250115000170440ustar00rootroot00000000000000 DOIT_CONFIG = {'verbose': 2} def task_xxx1(): """task doc""" return { 'actions': ['do nothing'], 'params': [{'name':'p1', 'default':'1', 'short':'p'}], } def task_yyy2(): return {'actions':None} def bad_seed(): # pragma: no cover pass doit-0.30.3/tests/myecho.py000066400000000000000000000003211305250115000155200ustar00rootroot00000000000000#! /usr/bin/env python # tests on CmdTask will use this script as an external process. 
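`tests/myecho.py` above is the external process the `CmdAction` tests shell out to: it just prints its arguments back, giving the tests predictable output to capture. Its core behavior, sketched as a plain function (the function name is illustrative):

```python
def myecho_output(argv):
    # mirrors tests/myecho.py above: echo all arguments after the
    # script name, joined by single spaces
    return " ".join(argv[1:])

assert myecho_output(["myecho.py", "hello", "world"]) == "hello world"
```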
# just print out all arguments
import sys

if __name__ == "__main__":
    print(" ".join(sys.argv[1:]))
    sys.exit(0)

==> doit-0.30.3/tests/sample.cfg <==
[GLOBAL]
optx = 6
opty = 7

[COMMAND]
foo = tests.sample_plugin:MyCmd

==> doit-0.30.3/tests/sample_md5.txt <==
MD5SUM(1)                        User Commands                       MD5SUM(1)

NAME
       md5sum - compute and check MD5 message digest

SYNOPSIS
       md5sum [OPTION] [FILE]...

DESCRIPTION
       Print or check MD5 (128-bit) checksums.  With no FILE, or when FILE is
       -, read standard input.

       -b, --binary
              read in binary mode

       -c, --check
              read MD5 sums from the FILEs and check them

       -t, --text
              read in text mode (default)

       The following two options are useful only when verifying checksums:

       --status
              don’t output anything, status code shows success

       -w, --warn
              warn about improperly formatted checksum lines

       --help display this help and exit

       --version
              output version information and exit

       The sums are computed as described in RFC 1321.  When checking, the
       input should be a former output of this program.  The default mode is
       to print a line with checksum, a character indicating type (‘*’ for
       binary, ‘ ’ for text), and name for each FILE.

AUTHOR
       Written by Ulrich Drepper, Scott Miller, and David Madore.

REPORTING BUGS
       Report bugs to <bug-coreutils@gnu.org>.

COPYRIGHT
       Copyright © 2006 Free Software Foundation, Inc.
       This is free software.  You may redistribute copies of it under the
       terms of the GNU General Public License.  There is NO WARRANTY, to the
       extent permitted by law.

SEE ALSO
       The full documentation for md5sum is maintained as a Texinfo manual.
       If the info and md5sum programs are properly installed at your site,
       the command

              info md5sum

       should give you access to the complete manual.
md5sum 5.97                      September 2007                      MD5SUM(1)

==> doit-0.30.3/tests/sample_plugin.py <==
from doit.cmd_base import Command

class MyCmd(Command):
    name = 'mycmd'
    doc_purpose = 'test extending doit commands'
    doc_usage = '[XXX]'
    doc_description = 'my command description'

    def execute(self, opt_values, pos_args):  # pragma: no cover
        print("this command does nothing!")

##############

from doit.task import dict_to_task
from doit.cmd_base import TaskLoader

my_builtin_task = {
    'name': 'sample_task',
    'actions': ['echo hello from built in'],
    'doc': 'sample doc',
}

class MyLoader(TaskLoader):
    def load_tasks(self, cmd, opt_values, pos_args):
        task_list = [dict_to_task(my_builtin_task)]
        config = {'verbosity': 2}
        return task_list, config

==> doit-0.30.3/tests/sample_process.py <==
#! /usr/bin/env python

# tests on CmdTask will use this script as an external process.
# 3 or more arguments. process return error exit (166)
# arguments "please fail". process return fail exit (11)
# first argument is sent to stdout
# second argument is sent to stderr

import sys

if __name__ == "__main__":
    # error
    if len(sys.argv) > 3:
        sys.exit(166)
    # fail
    if len(sys.argv) == 3 and sys.argv[1] == 'please' and sys.argv[2] == 'fail':
        sys.stdout.write("out ouch")
        sys.stderr.write("err output on failure")
        sys.exit(11)
    # ok
    if len(sys.argv) > 1:
        sys.stdout.write(sys.argv[1])
    if len(sys.argv) > 2:
        sys.stderr.write(sys.argv[2])
    sys.exit(0)

==> doit-0.30.3/tests/test___init__.py <==
import os

import doit
from doit.loader import get_module

def test_get_initial_workdir(restore_cwd):
    initial_wd = os.getcwd()
    fileName = os.path.join(os.path.dirname(__file__), "loader_sample.py")
    cwd = os.path.normpath(os.path.join(os.path.dirname(__file__), "data"))
    assert cwd != initial_wd  # make sure test is not too easy
    get_module(fileName, cwd)
    assert os.getcwd() == cwd, os.getcwd()
    assert doit.get_initial_workdir() == initial_wd

==> doit-0.30.3/tests/test___main__.py <==
import subprocess

def test_execute(depfile_name):
    assert 0 == subprocess.call(['python', '-m', 'doit', 'list',
                                 '--db-file', depfile_name])

==> doit-0.30.3/tests/test_action.py <==
import os
import sys
import tempfile
import textwrap
import locale
locale  # quiet pyflakes
from pathlib import PurePath, Path
from io import StringIO, BytesIO
from threading import Thread
import time

import pytest
from mock import Mock

from doit import action
from doit.exceptions import TaskError, TaskFailed

# path to test folder
TEST_PATH = os.path.dirname(__file__)
PROGRAM = "python %s/sample_process.py" % TEST_PATH

@pytest.fixture
def tmpfile(request):
    temp = tempfile.TemporaryFile('w+')
    request.addfinalizer(temp.close)
    return temp

class FakeTask(object):
    def __init__(self, file_dep, dep_changed, targets, options,
                 pos_arg=None, pos_arg_val=None):
        self.name = "Fake"
        self.file_dep = file_dep
        self.dep_changed = dep_changed
        self.targets = targets
        self.options = options
        self.pos_arg = pos_arg
        self.pos_arg_val = pos_arg_val

############# CmdAction

class TestCmdAction(object):
    # if nothing is raised it is successful
    def test_success(self):
        my_action = action.CmdAction(PROGRAM)
        got = my_action.execute()
        assert got is None

    def test_success_noshell(self):
        my_action = action.CmdAction(PROGRAM.split(), shell=False)
        got = my_action.execute()
        assert got is None

    def test_error(self):
        my_action = action.CmdAction("%s 1 2 3" % PROGRAM)
        got = my_action.execute()
        assert isinstance(got, TaskError)

    def test_failure(self):
        my_action = action.CmdAction("%s please fail" % PROGRAM)
        got = my_action.execute()
        assert isinstance(got, TaskFailed)

    def test_str(self):
        my_action = action.CmdAction(PROGRAM)
        assert "Cmd: %s" % PROGRAM == str(my_action)

    def test_unicode(self):
        action_str = PROGRAM + "中文"
        my_action = action.CmdAction(action_str)
        assert "Cmd: %s" % action_str == str(my_action)

    def test_repr(self):
        my_action = action.CmdAction(PROGRAM)
        expected = "<CmdAction: '%s'>" % PROGRAM
        assert expected == repr(my_action), repr(my_action)

    def test_result(self):
        my_action = action.CmdAction("%s 1 2" % PROGRAM)
        my_action.execute()
        assert "12" == my_action.result

    def test_values(self):
        # for cmdActions they are empty if save_out not specified
        my_action = action.CmdAction("%s 1 2" % PROGRAM)
        my_action.execute()
        assert {} == my_action.values

class TestCmdActionParams(object):
    def test_invalid_param_stdout(self):
        pytest.raises(action.InvalidTask, action.CmdAction,
                      [PROGRAM], stdout=None)

    def test_changePath(self, tmpdir):
        path = tmpdir.mkdir("foo")
        command = 'python -c "import os; print(os.getcwd())"'
        my_action = action.CmdAction(command, cwd=path.strpath)
        my_action.execute()
        assert path + os.linesep == my_action.out, repr(my_action.out)

    def test_noPathSet(self, tmpdir):
        path = tmpdir.mkdir("foo")
        command = 'python -c "import os; print(os.getcwd())"'
        my_action = action.CmdAction(command)
        my_action.execute()
        assert path.strpath + os.linesep != my_action.out, repr(my_action.out)

class TestCmdVerbosity(object):
    # Capture stderr
    def test_captureStderr(self):
        cmd = "%s please fail" % PROGRAM
        my_action = action.CmdAction(cmd)
        got = my_action.execute()
        assert isinstance(got, TaskFailed)
        assert "err output on failure" == my_action.err, repr(my_action.err)

    # Capture stdout
    def test_captureStdout(self):
        my_action = action.CmdAction("%s hi_stdout hi2" % PROGRAM)
        my_action.execute()
        assert "hi_stdout" == my_action.out, repr(my_action.out)

    # Do not capture stderr
    # test using a tempfile. it is not possible (at least i dont know)
    # how to test if the output went to the parent process,
    # faking sys.stderr with a StringIO doesnt work.
    def test_noCaptureStderr(self, tmpfile):
        my_action = action.CmdAction("%s please fail" % PROGRAM)
        action_result = my_action.execute(err=tmpfile)
        assert isinstance(action_result, TaskFailed)
        tmpfile.seek(0)
        got = tmpfile.read()
        assert "err output on failure" == got, repr(got)
        assert "err output on failure" == my_action.err, repr(my_action.err)

    # Do not capture stdout
    def test_noCaptureStdout(self, tmpfile):
        my_action = action.CmdAction("%s hi_stdout hi2" % PROGRAM)
        my_action.execute(out=tmpfile)
        tmpfile.seek(0)
        got = tmpfile.read()
        assert "hi_stdout" == got, repr(got)
        assert "hi_stdout" == my_action.out, repr(my_action.out)

class TestCmdExpandAction(object):

    def test_task_meta_reference(self):
        cmd = "python %s/myecho.py" % TEST_PATH
        cmd += " %(dependencies)s - %(changed)s - %(targets)s"
        dependencies = ["data/dependency1", "data/dependency2", ":dep_on_task"]
        targets = ["data/target", "data/targetXXX"]
        task = FakeTask(dependencies, ["data/dependency1"], targets, {})
        my_action = action.CmdAction(cmd, task)
        assert my_action.execute() is None
        got = my_action.out.split('-')
        assert task.file_dep == got[0].split(), got[0]
        assert task.dep_changed == got[1].split(), got[1]
        assert targets == got[2].split(), got[2]

    def test_task_options(self):
        cmd = "python %s/myecho.py" % TEST_PATH
        cmd += " %(opt1)s - %(opt2)s"
        task = FakeTask([], [], [], {'opt1':'3', 'opt2':'abc def'})
        my_action = action.CmdAction(cmd, task)
        assert my_action.execute() is None
        got = my_action.out.strip()
        assert "3 - abc def" == got

    def test_task_pos_arg(self):
        cmd = "python %s/myecho.py" % TEST_PATH
        cmd += " %(pos)s"
        task = FakeTask([], [], [], {}, 'pos', ['hi', 'there'])
        my_action = action.CmdAction(cmd, task)
        assert my_action.execute() is None
        got = my_action.out.strip()
        assert "hi there" == got

    def test_task_pos_arg_None(self):
        # pos_arg_val is None when the task is not specified from
        # command line but executed because it is a task_dep
        cmd = "python %s/myecho.py" % TEST_PATH
        cmd += " %(pos)s"
        task = FakeTask([], [], [], {}, 'pos', None)
        my_action = action.CmdAction(cmd, task)
        assert my_action.execute() is None
        got = my_action.out.strip()
        assert "" == got

    def test_callable_return_command_str(self):
        def get_cmd(opt1, opt2):
            cmd = "python %s/myecho.py" % TEST_PATH
            return cmd + " %s - %s" % (opt1, opt2)
        task = FakeTask([], [], [], {'opt1':'3', 'opt2':'abc def'})
        my_action = action.CmdAction(get_cmd, task)
        assert my_action.execute() is None
        got = my_action.out.strip()
        assert "3 - abc def" == got, repr(got)

    def test_callable_tuple_return_command_str(self):
        def get_cmd(opt1, opt2):
            cmd = "python %s/myecho.py" % TEST_PATH
            return cmd + " %s - %s" % (opt1, opt2)
        task = FakeTask([], [], [], {'opt1':'3'})
        my_action = action.CmdAction((get_cmd, [], {'opt2':'abc def'}), task)
        assert my_action.execute() is None
        got = my_action.out.strip()
        assert "3 - abc def" == got, repr(got)

    def test_callable_invalid(self):
        def get_cmd(blabla):
            pass
        task = FakeTask([], [], [], {'opt1':'3'})
        my_action = action.CmdAction(get_cmd, task)
        got = my_action.execute()
        assert isinstance(got, TaskError)

    def test_string_list_cant_be_expanded(self):
        cmd = ["python", "%s/myecho.py" % TEST_PATH]
        task = FakeTask([], [], [], {})
        my_action = action.CmdAction(cmd, task)
        assert cmd == my_action.expand_action()

    def test_list_can_contain_path(self):
        cmd = ["python", PurePath(TEST_PATH), Path("myecho.py")]
        task = FakeTask([], [], [], {})
        my_action = action.CmdAction(cmd, task)
        assert ["python", TEST_PATH, "myecho.py"] == my_action.expand_action()

    def test_list_should_contain_strings_or_paths(self):
        cmd = ["python", PurePath(TEST_PATH), 42, Path("myecho.py")]
        task = FakeTask([], [], [], {})
        my_action = action.CmdAction(cmd, task)
        assert pytest.raises(action.InvalidTask, my_action.expand_action)

class TestCmd_print_process_output_line(object):

    def test_non_unicode_string_error_strict(self):
        my_action = action.CmdAction("", decode_error='strict')
        not_unicode = BytesIO('\xa9'.encode("latin-1"))
        realtime = Mock()
        realtime.encoding = 'utf-8'
        pytest.raises(UnicodeDecodeError,
                      my_action._print_process_output,
                      Mock(), not_unicode, Mock(), realtime)

    def test_non_unicode_string_error_replace(self):
        my_action = action.CmdAction("")  # default is decode_error = 'replace'
        not_unicode = BytesIO('\xa9'.encode("latin-1"))
        realtime = Mock()
        realtime.encoding = 'utf-8'
        capture = StringIO()
        my_action._print_process_output(
            Mock(), not_unicode, capture, realtime)
        # get the replacement char
        expected = '�'
        assert expected == capture.getvalue()

    def test_non_unicode_string_ok(self):
        my_action = action.CmdAction("", encoding='iso-8859-1')
        not_unicode = BytesIO('\xa9'.encode("latin-1"))
        realtime = Mock()
        realtime.encoding = 'utf-8'
        capture = StringIO()
        my_action._print_process_output(
            Mock(), not_unicode, capture, realtime)
        # get the correct char from latin-1 encoding
        expected = '©'
        assert expected == capture.getvalue()

    # don't test unicode if system locale doesn't support unicode
    # see https://bitbucket.org/schettino72/doit/pull-request/11
    @pytest.mark.skipif('locale.getlocale()[1] is None')
    def test_unicode_string(self, tmpfile):
        my_action = action.CmdAction("")
        unicode_in = tempfile.TemporaryFile('w+b')
        unicode_in.write(" 中文".encode('utf-8'))
        unicode_in.seek(0)
        my_action._print_process_output(
            Mock(), unicode_in, Mock(), tmpfile)

    @pytest.mark.skipif('locale.getlocale()[1] is None')
    def test_unicode_string2(self, tmpfile):
        # this \uXXXX has a different behavior!
        my_action = action.CmdAction("")
        unicode_in = tempfile.TemporaryFile('w+b')
        unicode_in.write(" 中文 \u2018".encode('utf-8'))
        unicode_in.seek(0)
        my_action._print_process_output(
            Mock(), unicode_in, Mock(), tmpfile)

    def test_line_buffered_output(self):
        my_action = action.CmdAction("")
        out, inp = os.pipe()
        out, inp = os.fdopen(out, 'rb'), os.fdopen(inp, 'wb')
        inp.write('abcd\nline2'.encode('utf-8'))
        inp.flush()
        capture = StringIO()
        thread = Thread(target=my_action._print_process_output,
                        args=(Mock(), out, capture, None))
        thread.start()
        time.sleep(0.1)
        try:
            got = capture.getvalue()
            # 'line2' is not captured because of line buffering
            assert 'abcd\n' == got
            print('asserted')
        finally:
            inp.close()

    def test_unbuffered_output(self):
        my_action = action.CmdAction("", buffering=1)
        out, inp = os.pipe()
        out, inp = os.fdopen(out, 'rb'), os.fdopen(inp, 'wb')
        inp.write('abcd\nline2'.encode('utf-8'))
        inp.flush()
        capture = StringIO()
        thread = Thread(target=my_action._print_process_output,
                        args=(Mock(), out, capture, None))
        thread.start()
        time.sleep(0.1)
        try:
            got = capture.getvalue()
            assert 'abcd\nline2' == got
        finally:
            inp.close()

    def test_unbuffered_env(self, monkeypatch):
        my_action = action.CmdAction("", buffering=1)
        proc_mock = Mock()
        proc_mock.configure_mock(returncode=0)
        popen_mock = Mock(return_value=proc_mock)
        from doit.action import subprocess
        monkeypatch.setattr(subprocess, 'Popen', popen_mock)
        my_action._print_process_output = Mock()
        my_action.execute()
        env = popen_mock.call_args[-1]['env']
        assert env and env.get('PYTHONUNBUFFERED', False) == '1'

class TestCmdSaveOuput(object):
    def test_success(self):
        TEST_PATH = os.path.dirname(__file__)
        PROGRAM = "python %s/sample_process.py" % TEST_PATH
        my_action = action.CmdAction(PROGRAM + " x1 x2", save_out='out')
        my_action.execute()
        assert {'out': 'x1'} == my_action.values

class TestWriter(object):
    def test_write(self):
        w1 = StringIO()
        w2 = StringIO()
        writer = action.Writer(w1, w2)
        writer.flush()  # make sure flush is supported
        writer.write("hello")
        assert "hello" == w1.getvalue()
        assert "hello" == w2.getvalue()

    def test_isatty_true(self):
        w1 = StringIO()
        w1.isatty = lambda: True
        w2 = StringIO()
        writer = action.Writer(w1, w2)
        assert not writer.isatty()

    def test_isatty_false(self):
        w1 = StringIO()
        w1.isatty = lambda: True
        w2 = StringIO()
        w2.isatty = lambda: True
        writer = action.Writer(w1, w2)
        assert writer.isatty()

    def test_isatty_overwrite_yes(self):
        w1 = StringIO()
        w1.isatty = lambda: True
        w2 = StringIO()
        writer = action.Writer(w1)
        writer.add_writer(w2, True)

    def test_isatty_overwrite_no(self):
        w1 = StringIO()
        w1.isatty = lambda: True
        w2 = StringIO()
        w2.isatty = lambda: True
        writer = action.Writer(w1)
        writer.add_writer(w2, False)

############# PythonAction

class TestPythonAction(object):

    def test_success_bool(self):
        def success_sample(): return True
        my_action = action.PythonAction(success_sample)
        # nothing raised it was successful
        my_action.execute()

    def test_success_None(self):
        def success_sample(): return
        my_action = action.PythonAction(success_sample)
        # nothing raised it was successful
        my_action.execute()

    def test_success_str(self):
        def success_sample(): return ""
        my_action = action.PythonAction(success_sample)
        # nothing raised it was successful
        my_action.execute()

    def test_success_dict(self):
        def success_sample(): return {}
        my_action = action.PythonAction(success_sample)
        # nothing raised it was successful
        my_action.execute()

    def test_error_object(self):
        # anything but None, bool, string or dict
        def error_sample(): return object()
        my_action = action.PythonAction(error_sample)
        got = my_action.execute()
        assert isinstance(got, TaskError)

    def test_error_taskfail(self):
        # should get the same exception as was returned from the
        # user's function
        def error_sample():
            return TaskFailed("too bad")
        ye_olde_action = action.PythonAction(error_sample)
        ret = ye_olde_action.execute()
        assert isinstance(ret, TaskFailed)
        assert str(ret).endswith("too bad\n")

    def test_error_taskerror(self):
        def error_sample():
            return TaskError("so sad")
        ye_olde_action = action.PythonAction(error_sample)
        ret = ye_olde_action.execute()
        assert str(ret).endswith("so sad\n")

    def test_error_exception(self):
        def error_sample(): raise Exception("asdf")
        my_action = action.PythonAction(error_sample)
        got = my_action.execute()
        assert isinstance(got, TaskError)

    def test_fail_bool(self):
        def fail_sample(): return False
        my_action = action.PythonAction(fail_sample)
        got = my_action.execute()
        assert isinstance(got, TaskFailed)

    # any callable should work, not only functions
    def test_callable_obj(self):
        class CallMe:
            def __call__(self): return False
        my_action = action.PythonAction(CallMe())
        got = my_action.execute()
        assert isinstance(got, TaskFailed)

    # helper to test callable with parameters
    def _func_par(self, par1, par2, par3=5):
        if par1 == par2 and par3 > 10:
            return True
        else:
            return False

    def test_init(self):
        # default values
        action1 = action.PythonAction(self._func_par)
        assert action1.args == []
        assert action1.kwargs == {}

        # not a callable
        pytest.raises(action.InvalidTask, action.PythonAction, "abc")
        # args not a list
        pytest.raises(action.InvalidTask, action.PythonAction,
                      self._func_par, "c")
        # kwargs not a list
        pytest.raises(action.InvalidTask, action.PythonAction,
                      self._func_par, None, "a")

    # cant use a class as callable
    def test_init_callable_class(self):
        class CallMe(object):
            pass
        pytest.raises(action.InvalidTask, action.PythonAction, CallMe)

    # cant use built-ins
    def test_init_callable_builtin(self):
        pytest.raises(action.InvalidTask, action.PythonAction, any)

    def test_functionParametersArgs(self):
        my_action = action.PythonAction(self._func_par, args=(2, 2, 25))
        my_action.execute()

    def test_functionParametersKwargs(self):
        my_action = action.PythonAction(self._func_par,
                                        kwargs={'par1':2, 'par2':2, 'par3':25})
        my_action.execute()

    def test_functionParameters(self):
        my_action = action.PythonAction(self._func_par, args=(2, 2),
                                        kwargs={'par3':25})
        my_action.execute()

    def test_functionParametersFail(self):
        my_action = action.PythonAction(self._func_par, args=(2, 3),
                                        kwargs={'par3':25})
        got = my_action.execute()
        assert isinstance(got, TaskFailed)

    def test_str(self):
        def str_sample(): return True
        my_action = action.PythonAction(str_sample)
        assert "Python: function" in str(my_action)
        assert "str_sample" in str(my_action)

    def test_repr(self):
        def repr_sample(): return True
        my_action = action.PythonAction(repr_sample)
        assert "<PythonAction: '%s'>" % repr(repr_sample) == repr(my_action)

    def test_result(self):
        def vvv(): return "my value"
        my_action = action.PythonAction(vvv)
        my_action.execute()
        assert "my value" == my_action.result

    def test_result_dict(self):
        def vvv(): return {'xxx': "my value"}
        my_action = action.PythonAction(vvv)
        my_action.execute()
        assert {'xxx': "my value"} == my_action.result

    def test_values(self):
        def vvv(): return {'x': 5, 'y': 10}
        my_action = action.PythonAction(vvv)
        my_action.execute()
        assert {'x': 5, 'y': 10} == my_action.values

class TestPythonVerbosity(object):
    def write_stderr(self):
        sys.stderr.write("this is stderr S\n")

    def write_stdout(self):
        sys.stdout.write("this is stdout S\n")

    def test_captureStderr(self):
        my_action = action.PythonAction(self.write_stderr)
        my_action.execute()
        assert "this is stderr S\n" == my_action.err, repr(my_action.err)

    def test_captureStdout(self):
        my_action = action.PythonAction(self.write_stdout)
        my_action.execute()
        assert "this is stdout S\n" == my_action.out, repr(my_action.out)

    def test_noCaptureStderr(self, capsys):
        my_action = action.PythonAction(self.write_stderr)
        my_action.execute(err=sys.stderr)
        got = capsys.readouterr()[1]
        assert "this is stderr S\n" == got, repr(got)

    def test_noCaptureStdout(self, capsys):
        my_action = action.PythonAction(self.write_stdout)
        my_action.execute(out=sys.stdout)
        got = capsys.readouterr()[0]
        assert "this is stdout S\n" == got, repr(got)

    def test_redirectStderr(self):
        tmpfile = tempfile.TemporaryFile('w+')
        my_action = action.PythonAction(self.write_stderr)
        my_action.execute(err=tmpfile)
        tmpfile.seek(0)
        got = tmpfile.read()
        tmpfile.close()
        assert "this is stderr S\n" == got, got

    def test_redirectStdout(self):
        tmpfile = tempfile.TemporaryFile('w+')
        my_action = action.PythonAction(self.write_stdout)
        my_action.execute(out=tmpfile)
        tmpfile.seek(0)
        got = tmpfile.read()
        tmpfile.close()
        assert "this is stdout S\n" == got, got

class TestPythonActionPrepareKwargsMeta(object):

    @pytest.fixture
    def task_depchanged(self, request):
        return FakeTask(['dependencies'], ['changed'], ['targets'], {})

    def test_no_extra_args(self, task_depchanged):
        def py_callable():
            return True
        my_action = action.PythonAction(py_callable, task=task_depchanged)
        my_action.execute()

    def test_keyword_extra_args(self, task_depchanged):
        got = []
        def py_callable(arg=None, **kwargs):
            got.append(kwargs['targets'])
            got.append(kwargs['dependencies'])
            got.append(kwargs['changed'])
        my_action = action.PythonAction(py_callable, task=task_depchanged)
        my_action.execute()
        assert got == [['targets'], ['dependencies'], ['changed']], got

    def test_named_extra_args(self, task_depchanged):
        got = []
        def py_callable(targets, dependencies, changed, task):
            got.append(targets)
            got.append(dependencies)
            got.append(changed)
            got.append(task)
        my_action = action.PythonAction(py_callable, task=task_depchanged)
        my_action.execute()
        assert got == [['targets'], ['dependencies'], ['changed'],
                       task_depchanged]

    def test_mixed_args(self, task_depchanged):
        got = []
        def py_callable(a, b, changed):
            got.append(a)
            got.append(b)
            got.append(changed)
        my_action = action.PythonAction(py_callable, ('a', 'b'),
                                        task=task_depchanged)
        my_action.execute()
        assert got == ['a', 'b', ['changed']]

    def test_extra_arg_overwritten(self, task_depchanged):
        got = []
        def py_callable(a, b, changed):
            got.append(a)
            got.append(b)
            got.append(changed)
        my_action = action.PythonAction(py_callable, ('a', 'b', 'c'),
                                        task=task_depchanged)
        my_action.execute()
        assert got == ['a', 'b', 'c']

    def test_extra_kwarg_overwritten(self, task_depchanged):
        got = []
        def py_callable(a, b, **kwargs):
            got.append(a)
            got.append(b)
            got.append(kwargs['changed'])
        my_action = action.PythonAction(py_callable, ('a', 'b'),
                                        {'changed': 'c'}, task_depchanged)
        my_action.execute()
        assert got == ['a', 'b', 'c']

    def test_meta_arg_default_disallowed(self, task_depchanged):
        def py_callable(a, b, changed=None):
            pass
        my_action = action.PythonAction(py_callable, ('a', 'b'),
                                        task=task_depchanged)
        pytest.raises(action.InvalidTask, my_action.execute)

    def test_callable_obj(self, task_depchanged):
        got = []
        class CallMe(object):
            def __call__(self, a, b, changed):
                got.append(a)
                got.append(b)
                got.append(changed)
        my_action = action.PythonAction(CallMe(), ('a', 'b'),
                                        task=task_depchanged)
        my_action.execute()
        assert got == ['a', 'b', ['changed']]

    def test_method(self, task_depchanged):
        got = []
        class CallMe(object):
            def xxx(self, a, b, changed):
                got.append(a)
                got.append(b)
                got.append(changed)
        my_action = action.PythonAction(CallMe().xxx, ('a', 'b'),
                                        task=task_depchanged)
        my_action.execute()
        assert got == ['a', 'b', ['changed']]

    def test_task_options(self):
        got = []
        def py_callable(opt1, opt3):
            got.append(opt1)
            got.append(opt3)
        task = FakeTask([], [], [], {'opt1':'1', 'opt2':'abc def', 'opt3':3})
        my_action = action.PythonAction(py_callable, task=task)
        my_action.execute()
        assert ['1', 3] == got, repr(got)

    def test_task_pos_arg(self):
        got = []
        def py_callable(pos):
            got.append(pos)
        task = FakeTask([], [], [], {}, 'pos', ['hi', 'there'])
        my_action = action.PythonAction(py_callable, task=task)
        my_action.execute()
        assert [['hi', 'there']] == got, repr(got)

    def test_option_default_allowed(self, task_depchanged):
        got = []
        def py_callable(opt2='ABC'):
            got.append(opt2)
        task = FakeTask([], [], [], {'opt2':'123'})
        my_action = action.PythonAction(py_callable, task=task)
        my_action.execute()
        assert ['123'] == got, repr(got)

    def test_kwonlyargs_minimal(self, task_depchanged):
        got = []
        scope = {'got': got}
        exec(textwrap.dedent('''
            def py_callable(*args, kwonly=None):
                got.append(args)
                got.append(kwonly)
        '''), scope)
        my_action = action.PythonAction(
            scope['py_callable'], (1, 2, 3), {'kwonly': 4},
            task=task_depchanged)
        my_action.execute()
        assert [(1, 2, 3), 4] == got, repr(got)

    def test_kwonlyargs_full(self, task_depchanged):
        got = []
        scope = {'got': got}
        exec(textwrap.dedent('''
            def py_callable(pos, *args, kwonly=None, **kwargs):
                got.append(pos)
                got.append(args)
                got.append(kwonly)
                got.append(kwargs['foo'])
        '''), scope)
        my_action = action.PythonAction(
            scope['py_callable'], [1, 2, 3], {'kwonly': 4, 'foo': 5},
            task=task_depchanged)
        my_action.execute()
        assert [1, (2, 3), 4, 5] == got, repr(got)

    def test_action_modifies_task_attributes(self, task_depchanged):
        def py_callable(targets, dependencies, changed, task):
            targets.append('new_target')
            dependencies.append('new_dependency')
            changed.append('new_changed')
        my_action = action.PythonAction(py_callable, task=task_depchanged)
        my_action.execute()
        assert task_depchanged.file_dep == ['dependencies', 'new_dependency']
        assert task_depchanged.targets == ['targets', 'new_target']
        assert task_depchanged.dep_changed == ['changed', 'new_changed']

##############

class TestCreateAction(object):
    class TaskStub(object):
        name = 'stub'
    mytask = TaskStub()

    def testBaseAction(self):
        class Sample(action.BaseAction):
            pass
        my_action = action.create_action(Sample(), self.mytask)
        assert isinstance(my_action, Sample)
        assert self.mytask == my_action.task

    def testStringAction(self):
        my_action = action.create_action("xpto 14 7", self.mytask)
        assert isinstance(my_action, action.CmdAction)
        assert my_action.shell == True

    def testListStringAction(self):
        my_action = action.create_action(["xpto", 14, 7], self.mytask)
        assert isinstance(my_action, action.CmdAction)
        assert my_action.shell == False

    def testMethodAction(self):
        def dumb(): return
        my_action = action.create_action(dumb, self.mytask)
        assert isinstance(my_action, action.PythonAction)

    def testTupleAction(self):
        def dumb(): return
        my_action = action.create_action((dumb, [1, 2], {'a':5}), self.mytask)
        assert isinstance(my_action, action.PythonAction)

    def testTupleActionMoreThanThreeElements(self):
        def dumb(): return
        pytest.raises(action.InvalidTask, action.create_action,
                      (dumb, [1, 2], {'a':5}, 'oo'), self.mytask)

    def testInvalidActionNone(self):
        pytest.raises(action.InvalidTask, action.create_action,
                      None, self.mytask)

    def testInvalidActionObject(self):
        pytest.raises(action.InvalidTask, action.create_action,
                      self, self.mytask)

==> doit-0.30.3/tests/test_api.py <==
import sys

from doit.api import run

def test_execute(monkeypatch, depfile_name):
    monkeypatch.setattr(sys, 'argv', ['did', '--db-file', depfile_name])
    try:
        def hi():
            print('hi')
        def task_hi():
            return {'actions': [hi]}
        run(locals())
    except SystemExit as err:
        assert err.code == 0
    else:  # pragma: no cover
        assert False

==> doit-0.30.3/tests/test_cmd_auto.py <==
import time
from multiprocessing import Process

import pytest

from doit.cmdparse import DefaultUpdate
from doit.task import Task
from doit.cmd_base import TaskLoader
from doit import filewatch
from doit import cmd_auto
from .conftest import CmdFactory

# skip all tests in this module if platform not supported
platform = filewatch.get_platform_system()
pytestmark = pytest.mark.skipif(
    'platform not in filewatch.FileModifyWatcher.supported_platforms')

class TestFindFileDeps(object):
    def find_deps(self, sel_tasks):
        tasks = {
            't1': Task("t1", [""], file_dep=['f1']),
            't2': Task("t2", [""], file_dep=['f2'], task_dep=['t1']),
            't3': Task("t3", [""], file_dep=['f3'], setup=['t1']),
        }
        return cmd_auto.Auto._find_file_deps(tasks, sel_tasks)

    def test_find_file_deps(self):
        assert set(['f1']) == self.find_deps(['t1'])
        assert set(['f1', 'f2']) == self.find_deps(['t2'])
        assert set(['f1', 'f3']) == self.find_deps(['t3'])

class TestDepChanged(object):
    def test_changed(self, dependency1):
        started = time.time()
        assert not cmd_auto.Auto._dep_changed([dependency1], started, [])
        assert cmd_auto.Auto._dep_changed([dependency1], started - 100, [])
        assert not cmd_auto.Auto._dep_changed([dependency1], started - 100,
                                              [dependency1])

class FakeLoader(TaskLoader):
    def __init__(self, task_list, dep_file):
        self.task_list = task_list
        self.dep_file = dep_file
    def load_tasks(self, cmd, params, args):
        return self.task_list, {'verbosity': 2, 'dep_file': self.dep_file}

class TestAuto(object):
    def test_invalid_args(self, dependency1, depfile_name):
        t1 = Task("t1", [""], file_dep=[dependency1])
        task_loader = FakeLoader([t1], depfile_name)
        cmd = CmdFactory(cmd_auto.Auto, task_loader=task_loader)
        # terminates with error number
        assert cmd.parse_execute(['t2']) == 3

    def test_run_callback(self, monkeypatch):
        result = []
        def mock_cmd(callback, shell=None):
            result.append(callback)
        monkeypatch.setattr(cmd_auto, 'call', mock_cmd)

        # success
        result = []
        cmd_auto.Auto._run_callback(0, 'success', 'failure')
        assert 'success' == result[0]

        # failure
        result = []
        cmd_auto.Auto._run_callback(3, 'success', 'failure')
        assert 'failure' == result[0]

        # nothing executed
        result = []
        cmd_auto.Auto._run_callback(0, None, None)
        assert 0 == len(result)
        cmd_auto.Auto._run_callback(1, None, None)
        assert 0 == len(result)

    def test_run_wait(self, dependency1, target1, depfile_name):
        def ok():
            with open(target1, 'w') as fp:
                fp.write('ok')
        t1 = Task("t1", [ok], file_dep=[dependency1])
        cmd = CmdFactory(cmd_auto.Auto,
                         task_loader=FakeLoader([t1], depfile_name))
        run_wait_proc = Process(target=cmd.run_watch,
                                args=(DefaultUpdate(), []))
        run_wait_proc.start()

        # wait until task is executed
        for x in range(5):
            try:
                got = open(target1, 'r').read()
                print(got)
                if got == 'ok':
                    break
            except:
                print('busy')
            time.sleep(0.1)
        else:  # pragma: no cover
            raise Exception("target not created")

        # write on file to terminate process
        fd = open(dependency1, 'w')
        fd.write("hi" + str(time.asctime()))
        fd.close()

        run_wait_proc.join(.5)
        if run_wait_proc.is_alive():  # pragma: no cover
            # this test is very flaky so we give it one more chance...
            # write on file to terminate process
            fd = open(dependency1, 'w')
            fd.write("hi" + str(time.asctime()))
            fd.close()

            run_wait_proc.join(1)
            if run_wait_proc.is_alive():  # pragma: no cover
                run_wait_proc.terminate()
                raise Exception("process not terminated")
        assert 0 == run_wait_proc.exitcode

    def test_execute(self, monkeypatch):
        # use dumb operation instead of executing RUN command and waiting event
        def fake_run(self, params, args):  # pragma: no cover
            5 + 2
        monkeypatch.setattr(cmd_auto.Auto, 'run_watch', fake_run)

        # after join raise exception to stop AUTO command
        original = cmd_auto.Process.join
        def join_interrupt(self):
            original(self)
            raise KeyboardInterrupt()
        monkeypatch.setattr(cmd_auto.Process, 'join', join_interrupt)

        cmd = CmdFactory(cmd_auto.Auto)
        cmd.execute(None, None)

==> doit-0.30.3/tests/test_cmd_base.py <==
import os

import pytest

from doit import version
from doit.cmdparse import CmdParseError, CmdParse
from doit.exceptions import InvalidCommand, InvalidDodoFile
from doit.dependency import FileChangedChecker
from doit.task import Task
from doit.cmd_base import version_tuple, Command, DoitCmdBase
from doit.cmd_base import ModuleTaskLoader, DodoTaskLoader
from doit.cmd_base import check_tasks_exist, tasks_and_deps_iter, subtasks_iter

def test_version_tuple():
    assert [1, 2, 3] == version_tuple([1, 2, 3])
    assert [1, 2, 3] == version_tuple('1.2.3')
    assert [0, 2, 0] == version_tuple('0.2.0')
    assert [0, 2, -1] == version_tuple('0.2.dev1')

opt_bool = {'name': 'flag',
            'short': 'f',
            'long': 'flag',
            'inverse': 'no-flag',
            'type': bool,
            'default': False,
            'help': 'help for opt1'}

opt_rare = {'name': 'rare',
            'long': 'rare-bool',
            'type': bool,
            'default': False,
            'help': 'help for opt2 [default: %(default)s]'}

opt_int = {'name': 'num',
           'short': 'n',
           'long': 'number',
           'type': int,
           'default': 5,
           'help': 'help for opt3 [default: %(default)s]'}

opt_no = {'name': 'no',
          'short': '',
          'long': '',
          'type': int,
          'default': 5,
          'help': 'user cant modify me'}

class SampleCmd(Command):
    doc_purpose = 'PURPOSE'
    doc_usage = 'USAGE'
    doc_description = 'DESCRIPTION'
    cmd_options = [opt_bool, opt_rare, opt_int, opt_no]

    @staticmethod
    def execute(params, args):
        return params, args

class TestCommand(object):

    def test_configure(self):
        config = {'GLOBAL': {'foo':1, 'bar':'2'},
                  'whatever': {'xxx': 'yyy'},
                  'samplecmd': {'foo':4},
                  }
        cmd = SampleCmd(config=config)
        assert cmd.config == config
        assert cmd.config_vals == {'foo':4, 'bar':'2'}

    def test_call_value_cmd_line_arg(self):
        cmd = SampleCmd()
        params, args = cmd.parse_execute(['-n', '7', 'ppp'])
        assert ['ppp'] == args
        assert 7 == params['num']

    def test_call_value_option_default(self):
        cmd = SampleCmd()
        params, args = cmd.parse_execute([])
        assert 5 == params['num']

    def test_call_value_overwritten_default(self):
        cmd = SampleCmd(config={'GLOBAL': {'num': 20}})
        params, args = cmd.parse_execute([])
        assert 20 == params['num']

    def test_help(self):
        cmd = SampleCmd(config={'GLOBAL': {'num': 20}})
        text = cmd.help()
        assert 'PURPOSE' in text
        assert 'USAGE' in text
        assert 'DESCRIPTION' in text
        assert '-f' in text
        assert '--rare-bool' in text
        assert 'help for opt1' in text
        assert opt_no['name'] in [o.name for o in cmd.get_options()]
        # option without short and long are not displayed
        assert 'user cant modify me' not in text
        # default value is displayed
        assert "help for opt2 [default: False]" in text
        # overwritten default
        assert "help for opt3 [default: 20]" in text

    def test_failCall(self):
        cmd = SampleCmd()
        pytest.raises(CmdParseError, cmd.parse_execute, ['-x', '35'])

class TestModuleTaskLoader(object):
    def test_load_tasks(self):
        cmd = Command()
        members = {'task_xxx1':
                   lambda : {'actions':[]},
                   'task_no': 'strings are not tasks',
                   'blabla': lambda :None,
                   'DOIT_CONFIG': {'verbose': 2},
                   }
        loader = ModuleTaskLoader(members)
        task_list, config = loader.load_tasks(cmd, {}, [])
        assert ['xxx1'] == [t.name for t in task_list]
        assert {'verbose': 2} == config


class TestDodoTaskLoader(object):
    def test_load_tasks(self, restore_cwd):
        os.chdir(os.path.dirname(__file__))
        cmd = Command()
        params = {'dodoFile': 'loader_sample.py',
                  'cwdPath': None,
                  'seek_file': False,
                  }
        loader = DodoTaskLoader()
        task_list, config = loader.load_tasks(cmd, params, [])
        assert ['xxx1', 'yyy2'] == [t.name for t in task_list]
        assert {'verbose': 2} == config


class TestDoitCmdBase(object):
    class MyCmd(DoitCmdBase):
        doc_purpose = "fake for testing"
        doc_usage = "[TASK ...]"
        doc_description = None

        opt_my = {
            'name': 'my_opt',
            'short':'m',
            'long': 'mine',
            'type': str,
            'default': 'xxx',
            'help': "my option"
        }

        cmd_options = (opt_my,)

        def _execute(self, my_opt):
            return my_opt

    # command with lower level execute() method
    def test_new_cmd(self):
        class MyRawCmd(self.MyCmd):
            def execute(self, params, args):
                return params['my_opt']

        members = {'task_xxx1': lambda : {'actions':[]},}
        loader = ModuleTaskLoader(members)
        mycmd = MyRawCmd(task_loader=loader, cmds={'foo':None, 'bar':None})
        assert mycmd.loader.cmd_names == ['bar', 'foo']
        assert 'min' == mycmd.parse_execute(['--mine', 'min'])

    # loader gets a reference to config
    def test_loader_config(self, depfile_name):
        mycmd = self.MyCmd(config={'foo':{'bar':'x'}})
        assert mycmd.loader.config['foo'] == {'bar':'x'}

    # command with _execute() method
    def test_execute(self, depfile_name):
        members = {'task_xxx1': lambda : {'actions':[]},}
        loader = ModuleTaskLoader(members)
        mycmd = self.MyCmd(task_loader=loader)
        assert 'min' == mycmd.parse_execute([
            '--db-file', depfile_name, '--mine', 'min'])

    # command with _execute() method
    def test_minversion(self, depfile_name, monkeypatch):
        members = {
            'task_xxx1': lambda : {'actions':[]},
            'DOIT_CONFIG': {'minversion': '5.2.3'},
        }
        loader = ModuleTaskLoader(members)

        # version ok
        monkeypatch.setattr(version, 'VERSION', '7.5.8')
        mycmd = self.MyCmd(task_loader=loader)
        assert 'xxx' == mycmd.parse_execute(['--db-file', depfile_name])

        # version too old
        monkeypatch.setattr(version, 'VERSION', '5.2.1')
        mycmd = self.MyCmd(task_loader=loader)
        pytest.raises(InvalidDodoFile, mycmd.parse_execute, [])

    def testInvalidChecker(self):
        mycmd = self.MyCmd(task_loader=ModuleTaskLoader({}))
        params, args = CmdParse(mycmd.get_options()).parse([])
        params['check_file_uptodate'] = 'i dont exist'
        pytest.raises(InvalidCommand, mycmd.execute, params, args)

    def testCustomChecker(self, depfile_name):
        class MyChecker(FileChangedChecker):
            pass

        mycmd = self.MyCmd(task_loader=ModuleTaskLoader({}))
        params, args = CmdParse(mycmd.get_options()).parse([])
        params['check_file_uptodate'] = MyChecker
        params['dep_file'] = depfile_name
        mycmd.execute(params, args)
        assert isinstance(mycmd.dep_manager.checker, MyChecker)

    def testPluginBackend(self, depfile_name):
        mycmd = self.MyCmd(task_loader=ModuleTaskLoader({}),
                           config={'BACKEND': {'j2': 'doit.dependency:JsonDB'}})
        params, args = CmdParse(mycmd.get_options()).parse(['--backend', 'j2'])
        params['dep_file'] = depfile_name
        mycmd.execute(params, args)
        assert mycmd.dep_manager.db_class is mycmd._backends['j2']

    def testPluginLoader(self, depfile_name):
        entry_point = {'mod': 'tests.sample_plugin:MyLoader'}
        mycmd = self.MyCmd(config={'GLOBAL': {'loader': 'mod'},
                                   'LOADER': entry_point})
        assert mycmd.loader.__class__.__name__ == 'MyLoader'
        task_list, dodo_config = mycmd.loader.load_tasks(mycmd, {}, [])
        assert task_list[0].name == 'sample_task'
        assert dodo_config == {'verbosity': 2}


class TestCheckTasksExist(object):
    def test_None(self):
        check_tasks_exist({}, None)
        # nothing is raised

    def test_invalid(self):
        pytest.raises(InvalidCommand, check_tasks_exist, {}, 't2')

    def test_valid(self):
        tasks = {
            't1': Task("t1", [""] ),
            't2': Task("t2", [""], task_dep=['t1']),
        }
        check_tasks_exist(tasks, ['t2']) # nothing is raised


class TestTaskAndDepsIter(object):

    def test_dep_iter(self):
        tasks = {
            't1': Task("t1", [""] ),
            't2': Task("t2", [""], task_dep=['t1']),
            't3': Task("t3", [""], setup=['t1']),
            't4': Task("t4", [""], task_dep=['t3']),
        }

        def names(sel_tasks, repeated=False):
            task_list = tasks_and_deps_iter(tasks, sel_tasks, repeated)
            return [t.name for t in task_list]

        # no deps
        assert ['t1'] == names(['t1'])
        # with task_dep
        assert ['t2', 't1'] == names(['t2'])
        # with setup
        assert ['t3', 't1'] == names(['t3'])
        # two levels
        assert ['t4', 't3', 't1'] == names(['t4'])
        # select 2
        assert set(['t2', 't1']) == set(names(['t1', 't2']))
        # repeat deps
        got = names(['t1', 't2'], True)
        assert 3 == len(got)
        assert 't1' == got[-1]


class TestSubtaskIter(object):

    def test_sub_iter(self):
        tasks = {
            't1': Task("t1", [""] ),
            't2': Task("t2", [""], task_dep=['t1', 't2:a', 't2:b']),
            't2:a': Task("t2:a", [""], is_subtask=True),
            't2:b': Task("t2:b", [""], is_subtask=True),
        }

        def names(task_name):
            return [t.name for t in subtasks_iter(tasks, tasks[task_name])]

        assert [] == names('t1')
        assert ['t2:a', 't2:b'] == names('t2')


doit-0.30.3/tests/test_cmd_clean.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.task import Task
from doit.cmd_clean import Clean
from .conftest import CmdFactory


class TestCmdClean(object):

    @pytest.fixture
    def tasks(self, request):
        self.cleaned = []

        def myclean(name):
            self.cleaned.append(name)

        return [
            Task("t1", None, task_dep=['t2'], clean=[(myclean,('t1',))]),
            Task("t2", None, clean=[(myclean,('t2',))]),
            Task("t3", None, task_dep=['t3:a'], has_subtask=True,
                 clean=[(myclean,('t3',))]),
            Task("t3:a", None, clean=[(myclean,('t3:a',))], is_subtask=True),
            Task("t4", None, task_dep=['t1'], clean=[(myclean,('t4',))] ),
        ]

    def test_clean_all(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output,
                               task_list=tasks)
        cmd_clean._execute(False, False, True)
        assert ['t1','t2', 't3:a', 't3', 't4'] == self.cleaned

    def test_clean_default(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks,
                               sel_tasks=['t1'])
        cmd_clean._execute(False, False, False)
        # default enable --clean-dep by default
        assert ['t2', 't1'] == self.cleaned

    def test_clean_default_all(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks)
        cmd_clean._execute(False, False, False)
        # default enable --clean-dep by default
        assert set(['t1','t2', 't3:a', 't3', 't4']) == set(self.cleaned)

    def test_clean_selected(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks,
                               sel_tasks=['t1'])
        cmd_clean._execute(False, False, False, ['t2'])
        assert ['t2'] == self.cleaned

    def test_clean_taskdep(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks)
        cmd_clean._execute(False, True, False, ['t1'])
        assert ['t2', 't1'] == self.cleaned

    def test_clean_taskdep_recursive(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks)
        cmd_clean._execute(False, True, False, ['t4'])
        assert ['t2', 't1', 't4'] == self.cleaned

    def test_clean_subtasks(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks)
        cmd_clean._execute(False, False, False, ['t3'])
        assert ['t3:a', 't3'] == self.cleaned

    def test_clean_taskdep_once(self, tasks):
        # do not execute clean operation more than once
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks)
        cmd_clean._execute(False, True, False, ['t1', 't2'])
        assert ['t2', 't1'] == self.cleaned

    def test_clean_invalid_task(self, tasks):
        output = StringIO()
        cmd_clean = CmdFactory(Clean, outstream=output, task_list=tasks,
                               sel_tasks=['t1'])
        pytest.raises(InvalidCommand, cmd_clean._execute,
                      False, False, False, ['xxxx'])
doit-0.30.3/tests/test_cmd_completion.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.cmdparse import CmdOption
from doit.plugin import PluginDict
from doit.task import Task
from doit.cmd_base import Command, TaskLoader, DodoTaskLoader
from doit.cmd_completion import TabCompletion
from doit.cmd_help import Help
from .conftest import CmdFactory


# doesn't test the shell scripts. just test its creation!

class FakeLoader(TaskLoader):
    def load_tasks(self, cmd, params, args):
        task_list = [
            Task("t1", None, ),
            Task("t2", None, task_dep=['t2:a'], has_subtask=True, ),
            Task("t2:a", None, is_subtask=True),
        ]
        return task_list, {}


@pytest.fixture
def commands(request):
    sub_cmds = {}
    sub_cmds['tabcompletion'] = TabCompletion
    sub_cmds['help'] = Help
    return PluginDict(sub_cmds)


def test_invalid_shell_option():
    cmd = CmdFactory(TabCompletion)
    pytest.raises(InvalidCommand, cmd.execute,
                  {'shell':'another_shell', 'hardcode_tasks': False}, [])


class TestCmdCompletionBash(object):

    def test_with_dodo__dinamic_tasks(self, commands):
        output = StringIO()
        cmd = CmdFactory(TabCompletion, task_loader=DodoTaskLoader(),
                         outstream=output, cmds=commands)
        cmd.execute({'shell':'bash', 'hardcode_tasks': False}, [])
        got = output.getvalue()
        assert 'dodof' in got
        assert 't1' not in got
        assert 'tabcompletion' in got

    def test_no_dodo__hardcoded_tasks(self, commands):
        output = StringIO()
        cmd = CmdFactory(TabCompletion, task_loader=FakeLoader(),
                         outstream=output, cmds=commands)
        cmd.execute({'shell':'bash', 'hardcode_tasks': True}, [])
        got = output.getvalue()
        assert 'dodo.py' not in got
        assert 't1' in got

    def test_cmd_takes_file_args(self, commands):
        output = StringIO()
        cmd = CmdFactory(TabCompletion, task_loader=FakeLoader(),
                         outstream=output, cmds=commands)
        cmd.execute({'shell':'bash', 'hardcode_tasks': False}, [])
        got = output.getvalue()
        assert """help)
            COMPREPLY=( $(compgen -W "${tasks} ${sub_cmds}" -- $cur) )
            return 0""" in got
        assert """tabcompletion)
            COMPREPLY=( $(compgen -f -- $cur) )
            return 0""" in got


class TestCmdCompletionZsh(object):

    def test_zsh_arg_line(self):
        opt1 = CmdOption({'name':'o1', 'default':'', 'help':'my desc'})
        assert '' == TabCompletion._zsh_arg_line(opt1)

        opt2 = CmdOption({'name':'o2', 'default':'', 'help':'my desc',
                          'short':'s'})
        assert '"-s[my desc]" \\' == TabCompletion._zsh_arg_line(opt2)

        opt3 = CmdOption({'name':'o3', 'default':'', 'help':'my desc',
                          'long':'lll'})
        assert '"--lll[my desc]" \\' == TabCompletion._zsh_arg_line(opt3)

        opt4 = CmdOption({'name':'o4', 'default':'', 'help':'my desc [b]a',
                          'short':'s', 'long':'lll'})
        assert ('"(-s|--lll)"{-s,--lll}"[my desc [b\]a]" \\'
                == TabCompletion._zsh_arg_line(opt4))

        # escaping `"` test
        opt5 = CmdOption({'name':'o5', 'default':'', 'help':'''my "des'c [b]a''',
                          'short':'s', 'long':'lll'})
        assert ('''"(-s|--lll)"{-s,--lll}"[my \\"des'c [b\]a]" \\'''
                == TabCompletion._zsh_arg_line(opt5))

    def test_cmd_arg_list(self):
        no_args = TabCompletion._zsh_arg_list(Command())
        assert "'*::task:(($tasks))'" not in no_args
        assert "'::cmd:(($commands))'" not in no_args

        class CmdTakeTasks(Command):
            doc_usage = '[TASK ...]'
        with_task_args = TabCompletion._zsh_arg_list(CmdTakeTasks())
        assert "'*::task:(($tasks))'" in with_task_args
        assert "'::cmd:(($commands))'" not in with_task_args

        class CmdTakeCommands(Command):
            doc_usage = '[COMMAND ...]'
        with_cmd_args = TabCompletion._zsh_arg_list(CmdTakeCommands())
        assert "'*::task:(($tasks))'" not in with_cmd_args
        assert "'::cmd:(($commands))'" in with_cmd_args

    def test_cmds_with_params(self, commands):
        output = StringIO()
        cmd = CmdFactory(TabCompletion, task_loader=DodoTaskLoader(),
                         outstream=output, cmds=commands)
        cmd.execute({'shell':'zsh', 'hardcode_tasks': False}, [])
        got = output.getvalue()
        assert "tabcompletion: generate script" in got

    def test_hardcoded_tasks(self, commands):
        output = StringIO()
        cmd = CmdFactory(TabCompletion,
                         task_loader=FakeLoader(), outstream=output,
                         cmds=commands)
        cmd.execute({'shell':'zsh', 'hardcode_tasks': True}, [])
        got = output.getvalue()
        assert 't1' in got


doit-0.30.3/tests/test_cmd_dumpdb.py

import pytest

from doit.cmd_dumpdb import DumpDB


class TestCmdDumpDB(object):
    def testDefault(self, capsys, depfile):
        if depfile.whichdb in ('dbm', 'dbm.ndbm'): # pragma: no cover
            pytest.skip('%s not supported for this operation' % depfile.whichdb)
        # cmd_main(["help", "task"])
        depfile._set('tid', 'my_dep', 'xxx')
        depfile.close()
        cmd_dump = DumpDB()
        cmd_dump.execute({'dep_file': depfile.name}, [])
        out, err = capsys.readouterr()
        assert 'tid' in out
        assert 'my_dep' in out
        assert 'xxx' in out


doit-0.30.3/tests/test_cmd_forget.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.dependency import DbmDB, Dependency
from doit.cmd_forget import Forget
from .conftest import tasks_sample, CmdFactory


class TestCmdForget(object):

    @pytest.fixture
    def tasks(self, request):
        return tasks_sample()

    @staticmethod
    def _add_task_deps(tasks, testdb):
        """put some data on testdb"""
        dep = Dependency(DbmDB, testdb)
        for task in tasks:
            dep._set(task.name,"dep","1")
        dep.close()

        dep2 = Dependency(DbmDB, testdb)
        assert "1" == dep2._get("g1.a", "dep")
        dep2.close()

    def testForgetAll(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(Forget, outstream=output,
                                dep_file=depfile_name, backend='dbm',
                                task_list=tasks, sel_tasks=[])
        cmd_forget._execute(False)
        got = output.getvalue().split("\n")[:-1]
        assert ["forgetting all tasks"] == got, repr(output.getvalue())
        dep = Dependency(DbmDB, depfile_name)
        for task in tasks:
            assert None == dep._get(task.name, "dep")

    def testForgetOne(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(Forget, outstream=output,
                                dep_file=depfile_name, backend='dbm',
                                task_list=tasks, sel_tasks=["t2", "t1"])
        cmd_forget._execute(False)
        got = output.getvalue().split("\n")[:-1]
        assert ["forgetting t2", "forgetting t1"] == got
        dep = Dependency(DbmDB, depfile_name)
        assert None == dep._get("t1", "dep")
        assert None == dep._get("t2", "dep")
        assert "1" == dep._get("g1.a", "dep")

    def testForgetGroup(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(
            Forget, outstream=output, dep_file=depfile_name, backend='dbm',
            task_list=tasks, sel_tasks=["g1"])
        cmd_forget._execute(False)
        got = output.getvalue().split("\n")[:-1]
        assert "forgetting g1" == got[0]
        dep = Dependency(DbmDB, depfile_name)
        assert "1" == dep._get("t1", "dep")
        assert "1" == dep._get("t2", "dep")
        assert None == dep._get("g1", "dep")
        assert None == dep._get("g1.a", "dep")
        assert None == dep._get("g1.b", "dep")

    def testForgetTaskDependency(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(
            Forget, outstream=output, dep_file=depfile_name, backend='dbm',
            task_list=tasks, sel_tasks=["t3"])
        cmd_forget._execute(True)
        dep = Dependency(DbmDB, depfile_name)
        assert None == dep._get("t3", "dep")
        assert None == dep._get("t1", "dep")

    # if task dependency not from a group don't forget it
    def testDontForgetTaskDependency(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(
            Forget, outstream=output, dep_file=depfile_name, backend='dbm',
            task_list=tasks, sel_tasks=["t3"])
        cmd_forget._execute(False)
        dep = Dependency(DbmDB, depfile_name)
        assert None == dep._get("t3", "dep")
        assert "1" == dep._get("t1", "dep")

    def testForgetInvalid(self, tasks, depfile_name):
        self._add_task_deps(tasks, depfile_name)
        output = StringIO()
        cmd_forget = CmdFactory(
            Forget, outstream=output, dep_file=depfile_name, backend='dbm',
            task_list=tasks,
            sel_tasks=["XXX"])
        pytest.raises(InvalidCommand, cmd_forget._execute, False)


doit-0.30.3/tests/test_cmd_help.py

from doit.doit_cmd import DoitMain


def cmd_main(args, extra_config=None):
    if extra_config:
        extra_config = {'GLOBAL': extra_config}
    return DoitMain(extra_config=extra_config).run(args)


class TestHelp(object):

    def test_help_usage(self, capsys):
        returned = cmd_main(["help"])
        assert returned == 0
        out, err = capsys.readouterr()
        assert "doit list" in out

    def test_help_plugin_name(self, capsys):
        plugin = {'XXX': 'tests.sample_plugin:MyCmd'}
        returned = DoitMain(extra_config={'COMMAND':plugin}).run(["help"])
        assert returned == 0
        out, err = capsys.readouterr()
        assert "doit XXX " in out
        assert "test extending doit commands" in out, out

    def test_help_task_params(self, capsys):
        returned = cmd_main(["help", "task"])
        assert returned == 0
        out, err = capsys.readouterr()
        assert "Task Dictionary parameters" in out

    def test_help_cmd(self, capsys):
        returned = cmd_main(["help", "list"], {'dep_file': 'foo.db'})
        assert returned == 0
        out, err = capsys.readouterr()
        assert "Purpose: list tasks from dodo file" in out
        # overwritten defaults, are shown as default
        assert "file used to save successful runs [default: foo.db]" in out

    def test_help_task_name(self, capsys, restore_cwd, depfile_name):
        returned = cmd_main(["help", "-f", "tests/loader_sample.py",
                             "--db-file", depfile_name, "xxx1"])
        assert returned == 0
        out, err = capsys.readouterr()
        assert "xxx1" in out # name
        assert "task doc" in out # doc
        assert "-p" in out # params

    def test_help_wrong_name(self, capsys, restore_cwd, depfile_name):
        returned = cmd_main(["help", "-f", "tests/loader_sample.py",
                             "--db-file", depfile_name, "wrong_name"])
        assert returned == 0 # TODO return different value?
        out, err = capsys.readouterr()
        assert "doit list" in out

    def test_help_no_dodo_file(self, capsys):
        returned = cmd_main(["help", "-f", "no_dodo", "wrong_name"])
        assert returned == 0 # TODO return different value?
        out, err = capsys.readouterr()
        assert "doit list" in out


doit-0.30.3/tests/test_cmd_ignore.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.dependency import DbmDB, Dependency
from doit.cmd_ignore import Ignore
from .conftest import tasks_sample, CmdFactory


class TestCmdIgnore(object):

    @pytest.fixture
    def tasks(self, request):
        return tasks_sample()

    def testIgnoreAll(self, tasks, depfile_name):
        output = StringIO()
        cmd = CmdFactory(Ignore, outstream=output, dep_file=depfile_name,
                         backend='dbm', task_list=tasks)
        cmd._execute([])
        got = output.getvalue().split("\n")[:-1]
        assert ["You cant ignore all tasks! Please select a task."] == got, got
        dep = Dependency(DbmDB, depfile_name)
        for task in tasks:
            assert None == dep._get(task.name, "ignore:")

    def testIgnoreOne(self, tasks, depfile_name):
        output = StringIO()
        cmd = CmdFactory(Ignore, outstream=output, dep_file=depfile_name,
                         backend='dbm', task_list=tasks)
        cmd._execute(["t2", "t1"])
        got = output.getvalue().split("\n")[:-1]
        assert ["ignoring t2", "ignoring t1"] == got
        dep = Dependency(DbmDB, depfile_name)
        assert '1' == dep._get("t1", "ignore:")
        assert '1' == dep._get("t2", "ignore:")
        assert None == dep._get("t3", "ignore:")

    def testIgnoreGroup(self, tasks, depfile_name):
        output = StringIO()
        cmd = CmdFactory(Ignore, outstream=output, dep_file=depfile_name,
                         backend='dbm', task_list=tasks)
        cmd._execute(["g1"])
        got = output.getvalue().split("\n")[:-1]
        dep = Dependency(DbmDB, depfile_name)
        assert None == dep._get("t1", "ignore:"), got
        assert None == dep._get("t2", "ignore:")
        assert '1' == dep._get("g1", "ignore:")
        assert '1' == dep._get("g1.a", "ignore:")
        assert '1' == dep._get("g1.b", "ignore:")

    # if task dependency not from a group don't ignore it
    def testDontIgnoreTaskDependency(self, tasks, depfile_name):
        output = StringIO()
        cmd = CmdFactory(Ignore, outstream=output, dep_file=depfile_name,
                         backend='dbm', task_list=tasks)
        cmd._execute(["t3"])
        dep = Dependency(DbmDB, depfile_name)
        assert '1' == dep._get("t3", "ignore:")
        assert None == dep._get("t1", "ignore:")

    def testIgnoreInvalid(self, tasks, depfile_name):
        output = StringIO()
        cmd = CmdFactory(Ignore, outstream=output, dep_file=depfile_name,
                         backend='dbm', task_list=tasks)
        pytest.raises(InvalidCommand, cmd._execute, ["XXX"])


doit-0.30.3/tests/test_cmd_info.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.task import Task
from doit.cmd_info import Info
from .conftest import CmdFactory


class TestCmdInfo(object):

    def test_info(self, depfile):
        output = StringIO()
        task = Task("t1", [], file_dep=['tests/data/dependency1'])
        cmd = CmdFactory(Info, outstream=output, dep_file=depfile.name,
                         task_list=[task])
        cmd._execute(['t1'])
        assert """name:'t1'""" in output.getvalue()
        assert """'tests/data/dependency1'""" in output.getvalue()

    def test_invalid_command_args(self, depfile):
        output = StringIO()
        task = Task("t1", [], file_dep=['tests/data/dependency1'])
        cmd = CmdFactory(Info, outstream=output, dep_file=depfile.name,
                         task_list=[task])
        # fails if number of args != 1
        pytest.raises(InvalidCommand, cmd._execute, [])
        pytest.raises(InvalidCommand, cmd._execute, ['t1', 't2'])

    def test_execute_status_run(self, depfile, dependency1):
        output = StringIO()
        task = Task("t1", [], file_dep=['tests/data/dependency1'])
        cmd = CmdFactory(Info, outstream=output, dep_file=depfile.name,
                         task_list=[task], backend='dbm')
        return_val = cmd._execute(['t1'], show_execute_status=True)
        assert """name:'t1'""" in output.getvalue()
        assert return_val == 1 # indicates task is not up-to-date
        assert "Task is not up-to-date" in output.getvalue()
        assert """ - tests/data/dependency1""" in output.getvalue()

    def test_execute_status_uptodate(self, depfile, dependency1):
        output = StringIO()
        task = Task("t1", [], file_dep=['tests/data/dependency1'])
        cmd = CmdFactory(Info, outstream=output, dep_file=depfile.name,
                         task_list=[task], backend='dbm')
        cmd.dep_manager.save_success(task)
        return_val = cmd._execute(['t1'], show_execute_status=True)
        assert """name:'t1'""" in output.getvalue()
        assert return_val == 0 # indicates task is up-to-date
        assert "Task is up-to-date" in output.getvalue()

    def test_get_reasons_str(self):
        reasons = {
            'has_no_dependencies': True,
            'uptodate_false': [('func', 'arg', 'kwarg')],
            'checker_changed': ['foo', 'bar'],
            'missing_target': ['f1', 'f2'],
        }
        got = Info.get_reasons(reasons).splitlines()
        assert len(got) == 7
        assert got[0] == ' * The task has no dependencies.'
        assert got[1] == ' * The following uptodate objects evaluate to false:'
        assert got[2] == ' - func (args=arg, kwargs=kwarg)'
        assert got[3] == ' * The file_dep checker changed from foo to bar.'
        assert got[4] == ' * The following targets do not exist:'
        assert got[5] == ' - f1'
        assert got[6] == ' - f2'


doit-0.30.3/tests/test_cmd_list.py

from io import StringIO

import pytest

from doit.exceptions import InvalidCommand
from doit.task import Task
from doit.tools import result_dep
from doit.cmd_list import List
from tests.conftest import tasks_sample, tasks_bad_sample, CmdFactory


class TestCmdList(object):

    def testQuiet(self):
        output = StringIO()
        tasks = tasks_sample()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks)
        cmd_list._execute()
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = [t.name for t in tasks if not t.is_subtask]
        assert sorted(expected) == got

    def testDoc(self):
        output = StringIO()
        tasks = tasks_sample()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks)
        cmd_list._execute(quiet=False)
        got = [line for line in output.getvalue().split('\n') if line]
        expected = []
        for t in sorted(tasks):
            if not t.is_subtask:
                expected.append([t.name, t.doc])
        assert len(expected) == len(got)
        for exp1, got1 in zip(expected, got):
            assert exp1 == got1.split(None, 1)

    def testCustomTemplate(self):
        output = StringIO()
        tasks = tasks_sample()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks)
        cmd_list._execute(template='xxx {name} xxx {doc}')
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        assert 'xxx g1 xxx g1 doc string' == got[0]
        assert 'xxx t3 xxx t3 doc string' == got[3]

    def testDependencies(self):
        my_task = Task("t2", [""], file_dep=['d2.txt'])
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=[my_task])
        cmd_list._execute(list_deps=True)
        got = output.getvalue()
        assert "d2.txt" in got

    def testSubTask(self):
        output = StringIO()
        tasks = tasks_sample()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks)
        cmd_list._execute(subtasks=True)
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = [t.name for t in sorted(tasks)]
        assert expected == got

    def testFilter(self):
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks_sample())
        cmd_list._execute(pos_args=['g1', 't2'])
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = ['g1', 't2']
        assert expected == got

    def testFilterSubtask(self):
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks_sample())
        cmd_list._execute(pos_args=['g1.a'])
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = ['g1.a']
        assert expected == got

    def testFilterAll(self):
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks_sample())
        cmd_list._execute(subtasks=True, pos_args=['g1'])
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = ['g1', 'g1.a', 'g1.b']
        assert expected == got

    def testStatus(self, dependency1, depfile):
        task_list = tasks_sample()
        depfile.ignore(task_list[0]) # t1
        depfile.save_success(task_list[1]) # t2
        depfile.close()
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, dep_file=depfile.name,
                              backend='dbm', task_list=task_list)
        cmd_list._execute(status=True)
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        assert 'R g1' in got
        assert 'I t1' in got
        assert 'U t2' in got

    def testErrorStatus(self, dependency1, depfile):
        """Check that problematic tasks show an 'E' as status."""
        task_list = tasks_bad_sample()
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, dep_file=depfile.name,
                              backend='dbm', task_list=task_list)
        cmd_list._execute(status=True)
        for line in output.getvalue().split('\n'):
            if line:
                assert line.strip().startswith('E ')

    def testStatus_result_dep_bug_gh44(self, dependency1, depfile):
        # make sure task dict is passed when checking up-to-date
        task_list = [Task("t1", [""], doc="t1 doc string"),
                     Task("t2", [""], uptodate=[result_dep('t1')]),]
        depfile.save_success(task_list[0]) # t1
        depfile.close()
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, dep_file=depfile.name,
                              backend='dbm', task_list=task_list)
        cmd_list._execute(status=True)
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        assert 'R t1' in got
        assert 'R t2' in got

    def testNoPrivate(self):
        task_list = list(tasks_sample())
        task_list.append(Task("_s3", [""]))
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=task_list)
        cmd_list._execute(pos_args=['_s3'])
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = []
        assert expected == got

    def testWithPrivate(self):
        task_list = list(tasks_sample())
        task_list.append(Task("_s3", [""]))
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=task_list)
        cmd_list._execute(private=True, pos_args=['_s3'])
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = ['_s3']
        assert expected == got

    def testListInvalidTask(self):
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, task_list=tasks_sample())
        pytest.raises(InvalidCommand, cmd_list._execute, pos_args=['xxx'])

    def test_unicode_name(self, depfile):
        task_list = [Task("t做", [""], doc="t1 doc string 做"),]
        output = StringIO()
        cmd_list = CmdFactory(List, outstream=output, dep_file=depfile.name,
                              task_list=task_list)
        cmd_list._execute()
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        assert 't做' == got[0]


doit-0.30.3/tests/test_cmd_resetdep.py

from io import StringIO

import pytest

from doit.cmd_resetdep import ResetDep
from doit.dependency import TimestampChecker, get_md5, get_file_md5
from doit.exceptions import InvalidCommand
from doit.task import Task
from tests.conftest import tasks_sample, CmdFactory, get_abspath


class TestCmdResetDep(object):

    def test_execute(self, depfile, dependency1):
        output = StringIO()
        tasks = tasks_sample()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=tasks,
                               dep_manager=depfile)
        cmd_reset._execute()
        got = [line.strip() for line in output.getvalue().split('\n') if line]
        expected = ["processed %s" % t.name for t in tasks]
        assert sorted(expected) == sorted(got)

    def test_file_dep(self, depfile, dependency1):
        my_task = Task("t2", [""], file_dep=['tests/data/dependency1'])
        output = StringIO()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=[my_task],
                               dep_manager=depfile)
        cmd_reset._execute()
        got = output.getvalue()
        assert "processed t2\n" == got
        dep = list(my_task.file_dep)[0]
        timestamp, size, md5 = depfile._get(my_task.name, dep)
        assert get_file_md5(get_abspath("data/dependency1")) == md5

    def test_file_dep_up_to_date(self, depfile, dependency1):
        my_task = Task("t2", [""], file_dep=['tests/data/dependency1'])
        depfile.save_success(my_task)
        output = StringIO()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=[my_task],
                               dep_manager=depfile)
        cmd_reset._execute()
        got = output.getvalue()
        assert "skip t2\n" == got

    def test_file_dep_change_checker(self, depfile, dependency1):
        my_task = Task("t2", [""], file_dep=['tests/data/dependency1'])
        depfile.save_success(my_task)
        depfile.checker = TimestampChecker()
        output = StringIO()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=[my_task],
                               dep_manager=depfile)
        cmd_reset._execute()
        got = output.getvalue()
        assert "processed t2\n" == got

    def test_filter(self, depfile, dependency1):
        output = StringIO()
        tasks = tasks_sample()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=tasks,
                               dep_manager=depfile)
        cmd_reset._execute(pos_args=['t2'])
        got = output.getvalue()
        assert "processed t2\n" == got

    def test_invalid_task(self, depfile):
        output = StringIO()
        tasks = tasks_sample()
        cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=tasks,
                               dep_manager=depfile)
        pytest.raises(InvalidCommand, cmd_reset._execute, pos_args=['xxx'])

    def test_missing_file_dep(self,
depfile): my_task = Task("t2", [""], file_dep=['tests/data/missing']) output = StringIO() cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=[my_task], dep_manager=depfile) cmd_reset._execute() got = output.getvalue() assert ("failed t2 (Dependent file 'tests/data/missing' does not " "exist.)\n") == got def test_missing_dep_and_target(self, depfile, dependency1, dependency2): task_a = Task("task_a", [""], file_dep=['tests/data/dependency1'], targets=['tests/data/dependency2']) task_b = Task("task_b", [""], file_dep=['tests/data/dependency2'], targets=['tests/data/dependency3']) task_c = Task("task_c", [""], file_dep=['tests/data/dependency3'], targets=['tests/data/dependency4']) output = StringIO() tasks = [task_a, task_b, task_c] cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=tasks, dep_manager=depfile) cmd_reset._execute() got = output.getvalue() assert ("processed task_a\n" "processed task_b\n" "failed task_c (Dependent file 'tests/data/dependency3'" " does not exist.)\n") == got def test_values_and_results(self, depfile, dependency1): my_task = Task("t2", [""], file_dep=['tests/data/dependency1']) my_task.result = "result" my_task.values = {'x': 5, 'y': 10} depfile.save_success(my_task) depfile.checker = TimestampChecker() # trigger task update reseted = Task("t2", [""], file_dep=['tests/data/dependency1']) output = StringIO() cmd_reset = CmdFactory(ResetDep, outstream=output, task_list=[reseted], dep_manager=depfile) cmd_reset._execute() got = output.getvalue() assert "processed t2\n" == got assert {'x': 5, 'y': 10} == depfile.get_values(reseted.name) assert get_md5('result') == depfile.get_result(reseted.name) doit-0.30.3/tests/test_cmd_run.py000066400000000000000000000147611305250115000167370ustar00rootroot00000000000000import os from io import StringIO import pytest from mock import Mock from doit.exceptions import InvalidCommand from doit.task import Task from doit import reporter, runner from doit.cmd_run import Run from 
tests.conftest import tasks_sample, CmdFactory class TestCmdRun(object): def testProcessRun(self, dependency1, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample()) result = cmd_run._execute(output) assert 0 == result got = output.getvalue().split("\n")[:-1] assert [". t1", ". t2", ". g1.a", ". g1.b", ". t3"] == got @pytest.mark.skipif('not runner.MRunner.available()') def testProcessRunMP(self, dependency1, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample()) result = cmd_run._execute(output, num_process=1) assert 0 == result got = output.getvalue().split("\n")[:-1] assert [". t1", ". t2", ". g1.a", ". g1.b", ". t3"] == got def testProcessRunMThread(self, dependency1, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample()) result = cmd_run._execute(output, num_process=1, par_type='thread') assert 0 == result got = output.getvalue().split("\n")[:-1] assert [". t1", ". t2", ". g1.a", ". g1.b", ". t3"] == got def testInvalidParType(self, dependency1, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample()) pytest.raises(InvalidCommand, cmd_run._execute, output, num_process=1, par_type='not_exist') def testMP_not_available(self, dependency1, depfile_name, capsys, monkeypatch): # make sure MRunner wont be used monkeypatch.setattr(runner.MRunner, "available", Mock(return_value=False)) output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample()) result = cmd_run._execute(output, num_process=1) assert 0 == result got = output.getvalue().split("\n")[:-1] assert [". t1", ". t2", ". g1.a", ". g1.b", ". 
t3"] == got err = capsys.readouterr()[1] assert "WARNING:" in err assert "parallel using threads" in err def testProcessRunFilter(self, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample(), sel_tasks=["g1.a"]) cmd_run._execute(output) got = output.getvalue().split("\n")[:-1] assert [". g1.a"] == got def testProcessRunSingle(self, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample(), sel_tasks=["t3"]) cmd_run._execute(output, single=True) got = output.getvalue().split("\n")[:-1] # t1 is a depenendency of t3 but not included assert [". t3"] == got def testProcessRunSingleSubtasks(self, depfile_name): output = StringIO() task_list = tasks_sample() assert task_list[4].name == 'g1.b' task_list[4].task_dep = ['t3'] cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=task_list, sel_tasks=["g1"]) cmd_run._execute(output, single=True) got = output.getvalue().split("\n")[:-1] # t3 is a depenendency of g1.b but not included assert [". g1.a", ". 
g1.b"] == got def testProcessRunEmptyFilter(self, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample(), sel_tasks=[]) cmd_run._execute(output) got = output.getvalue().split("\n")[:-1] assert [] == got class MyReporter(reporter.ConsoleReporter): def get_status(self, task): self.outstream.write('MyReporter.start %s\n' % task.name) class TestCmdRunReporter(object): def testReporterInstance(self, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=[tasks_sample()[0]]) cmd_run._execute(output, reporter=MyReporter(output, {})) got = output.getvalue().split("\n")[:-1] assert 'MyReporter.start t1' == got[0] def testCustomReporter(self, depfile_name): output = StringIO() cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=[tasks_sample()[0]]) cmd_run._execute(output, reporter=MyReporter) got = output.getvalue().split("\n")[:-1] assert 'MyReporter.start t1' == got[0] def testPluginReporter(self, depfile_name): output = StringIO() cmd_run = CmdFactory( Run, backend='dbm', dep_file=depfile_name, task_list=[tasks_sample()[0]], config={'REPORTER':{'my': 'tests.test_cmd_run:MyReporter'}}) cmd_run._execute(output, reporter='my') got = output.getvalue().split("\n")[:-1] assert 'MyReporter.start t1' == got[0] class TestCmdRunOptions(object): def testSetVerbosity(self, depfile_name): output = StringIO() t = Task('x', None) used_verbosity = [] def my_execute(out, err, verbosity): used_verbosity.append(verbosity) t.execute = my_execute cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=[t]) cmd_run._execute(output, verbosity=2) assert 2 == used_verbosity[0], used_verbosity def test_outfile(self, depfile_name): cmd_run = CmdFactory(Run, backend='dbm', dep_file=depfile_name, task_list=tasks_sample(), sel_tasks=["g1.a"]) cmd_run._execute('test.out') try: outfile = open('test.out', 'r') got = outfile.read() 
outfile.close() assert ". g1.a\n" == got finally: if os.path.exists('test.out'): os.remove('test.out') doit-0.30.3/tests/test_cmd_strace.py000066400000000000000000000100111305250115000173740ustar00rootroot00000000000000import os.path from io import StringIO import mock import pytest from doit.exceptions import InvalidCommand from doit.cmdparse import DefaultUpdate from doit.task import Task from doit.cmd_strace import Strace from .conftest import CmdFactory @pytest.mark.skipif( "os.system('strace -V') != 0 or sys.platform in ['win32', 'cygwin']") class TestCmdStrace(object): def test_dep(self, dependency1, depfile_name): output = StringIO() task = Task("tt", ["cat %(dependencies)s"], file_dep=['tests/data/dependency1']) cmd = CmdFactory(Strace, outstream=output) cmd.loader.load_tasks = mock.Mock(return_value=([task], {})) params = DefaultUpdate(dep_file=depfile_name, show_all=False, keep_trace=False, backend='dbm', check_file_uptodate='md5') result = cmd.execute(params, ['tt']) assert 0 == result got = output.getvalue().split("\n") dep_path = os.path.abspath("tests/data/dependency1") assert "R %s" % dep_path in got[0] def test_opt_show_all(self, dependency1, depfile_name): output = StringIO() task = Task("tt", ["cat %(dependencies)s"], file_dep=['tests/data/dependency1']) cmd = CmdFactory(Strace, outstream=output) cmd.loader.load_tasks = mock.Mock(return_value=([task], {})) params = DefaultUpdate(dep_file=depfile_name, show_all=True, keep_trace=False, backend='dbm', check_file_uptodate='md5') result = cmd.execute(params, ['tt']) assert 0 == result got = output.getvalue().split("\n") assert "cat" in got[0] def test_opt_keep_trace(self, dependency1, depfile_name): output = StringIO() task = Task("tt", ["cat %(dependencies)s"], file_dep=['tests/data/dependency1']) cmd = CmdFactory(Strace, outstream=output) cmd.loader.load_tasks = mock.Mock(return_value=([task], {})) params = DefaultUpdate(dep_file=depfile_name, show_all=True, keep_trace=True, backend='dbm', 
check_file_uptodate='md5') result = cmd.execute(params, ['tt']) assert 0 == result got = output.getvalue().split("\n") assert "cat" in got[0] assert os.path.exists(cmd.TRACE_OUT) os.unlink(cmd.TRACE_OUT) def test_target(self, dependency1, depfile_name): output = StringIO() task = Task("tt", ["touch %(targets)s"], targets=['tests/data/dependency1']) cmd = CmdFactory(Strace, outstream=output) cmd.loader.load_tasks = mock.Mock(return_value=([task], {})) params = DefaultUpdate(dep_file=depfile_name, show_all=False, keep_trace=False, backend='dbm', check_file_uptodate='md5') result = cmd.execute(params, ['tt']) assert 0 == result got = output.getvalue().split("\n") tgt_path = os.path.abspath("tests/data/dependency1") assert "W %s" % tgt_path in got[0] def test_ignore_python_actions(self, dependency1, depfile_name): output = StringIO() def py_open(): with open(dependency1) as ignore: ignore task = Task("tt", [py_open]) cmd = CmdFactory(Strace, outstream=output) cmd.loader.load_tasks = mock.Mock(return_value=([task], {})) params = DefaultUpdate(dep_file=depfile_name, show_all=False, keep_trace=False, backend='dbm', check_file_uptodate='md5') result = cmd.execute(params, ['tt']) assert 0 == result def test_invalid_command_args(self): output = StringIO() cmd = CmdFactory(Strace, outstream=output) # fails if number of args != 1 pytest.raises(InvalidCommand, cmd.execute, {}, []) pytest.raises(InvalidCommand, cmd.execute, {}, ['t1', 't2']) doit-0.30.3/tests/test_cmdparse.py000066400000000000000000000251301305250115000170760ustar00rootroot00000000000000import pickle import pytest from doit.cmdparse import DefaultUpdate, CmdParseError, CmdOption, CmdParse class TestDefaultUpdate(object): def test(self): du = DefaultUpdate() du.set_default('a', 0) du.set_default('b', 0) assert 0 == du['a'] assert 0 == du['b'] du['b'] = 1 du.update_defaults({'a':2, 'b':2}) assert 2 == du['a'] assert 1 == du['b'] def test_add_defaults(self): du = DefaultUpdate() du.add_defaults({'a': 0, 'b':1}) 
du['c'] = 5 du.add_defaults({'a':2, 'c':2}) assert 0 == du['a'] assert 1 == du['b'] assert 5 == du['c'] # http://bugs.python.org/issue826897 def test_pickle(self): du = DefaultUpdate() du.set_default('x', 0) dump = pickle.dumps(du,2) pickle.loads(dump) class TestCmdOption(object): def test_repr(self): opt = CmdOption({'name':'opt1', 'default':'', 'short':'o', 'long':'other'}) assert "CmdOption(" in repr(opt) assert "'name':'opt1'" in repr(opt) assert "'short':'o'" in repr(opt) assert "'long':'other'" in repr(opt) def test_non_required_fields(self): opt1 = CmdOption({'name':'op1', 'default':''}) assert '' == opt1.long def test_invalid_field(self): opt_dict = {'name':'op1', 'default':'', 'non_existent':''} pytest.raises(CmdParseError, CmdOption, opt_dict) def test_missing_field(self): opt_dict = {'name':'op1', 'long':'abc'} pytest.raises(CmdParseError, CmdOption, opt_dict) class TestCmdOption_str2val(object): def test_str2boolean(self): opt = CmdOption({'name':'op1', 'default':'', 'type':bool, 'short':'b', 'long': 'bobo'}) assert True == opt.str2boolean('1') assert True == opt.str2boolean('yes') assert True == opt.str2boolean('Yes') assert True == opt.str2boolean('YES') assert True == opt.str2boolean('true') assert True == opt.str2boolean('on') assert False == opt.str2boolean('0') assert False == opt.str2boolean('false') assert False == opt.str2boolean('no') assert False == opt.str2boolean('off') assert False == opt.str2boolean('OFF') pytest.raises(ValueError, opt.str2boolean, '2') pytest.raises(ValueError, opt.str2boolean, None) pytest.raises(ValueError, opt.str2boolean, 'other') def test_non_string_values_are_not_converted(self): opt = CmdOption({'name':'op1', 'default':'', 'type':bool}) assert False == opt.str2type(False) assert True == opt.str2type(True) assert None == opt.str2type(None) def test_str(self): opt = CmdOption({'name':'op1', 'default':'', 'type':str}) assert 'foo' == opt.str2type('foo') assert 'bar' == opt.str2type('bar') def test_bool(self): opt = 
CmdOption({'name':'op1', 'default':'', 'type':bool}) assert False == opt.str2type('off') assert True == opt.str2type('on') def test_int(self): opt = CmdOption({'name':'op1', 'default':'', 'type':int}) assert 2 == opt.str2type('2') assert -3 == opt.str2type('-3') def test_list(self): opt = CmdOption({'name':'op1', 'default':'', 'type':list}) assert ['foo'] == opt.str2type('foo') assert [] == opt.str2type('') assert ['foo', 'bar'] == opt.str2type('foo , bar ') def test_invalid_value(self): opt = CmdOption({'name':'op1', 'default':'', 'type':int}) pytest.raises(CmdParseError, opt.str2type, 'not a number') class TestCmdOption_help_param(object): def test_bool_param(self): opt1 = CmdOption({'name':'op1', 'default':'', 'type':bool, 'short':'b', 'long': 'bobo'}) assert '-b, --bobo' == opt1.help_param() def test_non_bool_param(self): opt1 = CmdOption({'name':'op1', 'default':'', 'type':str, 'short':'s', 'long': 'susu'}) assert '-s ARG, --susu=ARG' == opt1.help_param() def test_no_long(self): opt1 = CmdOption({'name':'op1', 'default':'', 'type':str, 'short':'s'}) assert '-s ARG' == opt1.help_param() opt_bool = {'name': 'flag', 'short':'f', 'long': 'flag', 'inverse':'no-flag', 'type': bool, 'default': False, 'help': 'help for opt1'} opt_rare = {'name': 'rare', 'long': 'rare-bool', 'type': bool, 'default': False, 'help': 'help for opt2'} opt_int = {'name': 'num', 'short':'n', 'long': 'number', 'type': int, 'default': 5, 'help': 'help for opt3'} opt_no = {'name': 'no', 'short':'', 'long': '', 'type': int, 'default': 5, 'help': 'user cant modify me'} opt_append = { 'name': 'list', 'short': 'l', 'long': 'list', 'type': list, 'default': [], 'help': 'use many -l to make a list'} opt_choices_desc = {'name': 'choices', 'short':'c', 'long': 'choice', 'type': str, 'choices': (("yes", "signify affirmative"), ("no","signify negative")), 'default': "yes", 'help': 'User chooses [default %(default)s]'} opt_choices_nodesc = {'name': 'choicesnodesc', 'short':'C', 'long': 'achoice', 'type': 
str, 'choices': (("yes", ""), ("no", "")), 'default': "no", 'help': 'User chooses [default %(default)s]'} class TestCmdOption_help_doc(object): def test_param(self): opt1 = CmdOption(opt_bool) got = opt1.help_doc() assert '-f, --flag' in got[0] assert 'help for opt1' in got[0] assert '--no-flag' in got[1] assert 2 == len(got) def test_no_doc_param(self): opt1 = CmdOption(opt_no) assert 0 == len(opt1.help_doc()) def test_choices_desc_doc(self): the_opt = CmdOption(opt_choices_desc) doc = the_opt.help_doc()[0] assert 'choices:\n' in doc assert 'yes: signify affirmative' in doc assert 'no: signify negative' in doc def test_choices_nodesc_doc(self): the_opt = CmdOption(opt_choices_nodesc) doc = the_opt.help_doc()[0] assert "choices: no, yes" in doc class TestCommand(object): @pytest.fixture def cmd(self, request): opt_list = (opt_bool, opt_rare, opt_int, opt_no, opt_append, opt_choices_desc, opt_choices_nodesc) options = [CmdOption(o) for o in opt_list] cmd = CmdParse(options) return cmd def test_contains(self, cmd): assert 'flag' in cmd assert 'num' in cmd assert 'xxx' not in cmd def test_getitem(self, cmd): assert cmd['flag'].short == 'f' assert cmd['num'].default == 5 def test_option_list(self, cmd): opt_names = [o.name for o in cmd.options] assert ['flag', 'rare', 'num', 'no', 'list', 'choices', 'choicesnodesc']== opt_names def test_short(self, cmd): assert "fn:l:c:C:" == cmd.get_short(), cmd.get_short() def test_long(self, cmd): longs = ["flag", "no-flag", "rare-bool", "number=", "list=", "choice=", "achoice="] assert longs == cmd.get_long() def test_getOption(self, cmd): # short opt, is_inverse = cmd.get_option('-f') assert (opt_bool['name'], False) == (opt.name, is_inverse) # long opt, is_inverse = cmd.get_option('--rare-bool') assert (opt_rare['name'], False) == (opt.name, is_inverse) # inverse opt, is_inverse = cmd.get_option('--no-flag') assert (opt_bool['name'], True) == (opt.name, is_inverse) # not found opt, is_inverse = cmd.get_option('not-there') assert 
(None, None) == (opt, is_inverse) opt, is_inverse = cmd.get_option('--list') assert (opt_append['name'], False) == (opt.name, is_inverse) opt, is_inverse = cmd.get_option('--choice') assert (opt_choices_desc['name'], False) == (opt.name, is_inverse) opt, is_inverse = cmd.get_option('--achoice') assert (opt_choices_nodesc['name'], False) == (opt.name, is_inverse) def test_parseDefaults(self, cmd): params, args = cmd.parse([]) assert False == params['flag'] assert 5 == params['num'] assert [] == params['list'] assert "yes" == params['choices'] assert "no" == params['choicesnodesc'] def test_overwrite_defaults(self, cmd): cmd.overwrite_defaults({'num': 9, 'i_dont_exist': 1}) params, args = cmd.parse([]) assert 9 == params['num'] def test_overwrite_defaults_convert_type(self, cmd): cmd.overwrite_defaults({'num': '9', 'list': 'foo, bar', 'flag':'on'}) params, args = cmd.parse([]) assert 9 == params['num'] assert ['foo', 'bar'] == params['list'] assert True == params['flag'] def test_parseShortValues(self, cmd): params, args = cmd.parse(['-n','89','-f', '-l', 'foo', '-l', 'bar', '-c', 'no', '-C', 'yes']) assert True == params['flag'] assert 89 == params['num'] assert ['foo', 'bar'] == params['list'] assert "no" == params['choices'] assert "yes" == params['choicesnodesc'] def test_parseLongValues(self, cmd): params, args = cmd.parse(['--rare-bool','--num','89', '--no-flag', '--list', 'flip', '--list', 'flop', '--choice', 'no', '--achoice', 'yes']) assert True == params['rare'] assert False == params['flag'] assert 89 == params['num'] assert ['flip', 'flop'] == params['list'] assert "no" == params['choices'] assert "yes" == params['choicesnodesc'] def test_parsePositionalArgs(self, cmd): params, args = cmd.parse(['-f','p1','p2', '--sub-arg']) assert ['p1','p2', '--sub-arg'] == args def test_parseError(self, cmd): pytest.raises(CmdParseError, cmd.parse, ['--not-exist-param']) def test_parseWrongType(self, cmd): pytest.raises(CmdParseError, cmd.parse, ['--num','oi']) def 
test_parseWrongChoice(self, cmd): pytest.raises(CmdParseError, cmd.parse, ['--choice', 'maybe']) doit-0.30.3/tests/test_control.py000066400000000000000000000706771305250115000170000ustar00rootroot00000000000000from collections import deque import pytest from doit.exceptions import InvalidDodoFile, InvalidCommand from doit.task import InvalidTask, Task, DelayedLoader from doit.control import TaskControl, TaskDispatcher, ExecNode from doit.control import no_none class TestTaskControlInit(object): def test_addTask(self): t1 = Task("taskX", None) t2 = Task("taskY", None) tc = TaskControl([t1, t2]) assert 2 == len(tc.tasks) def test_targetDependency(self): t1 = Task("taskX", None,[],['intermediate']) t2 = Task("taskY", None,['intermediate'],[]) TaskControl([t1, t2]) assert ['taskX'] == t2.task_dep # 2 tasks can not have the same name def test_addTaskSameName(self): t1 = Task("taskX", None) t2 = Task("taskX", None) pytest.raises(InvalidDodoFile, TaskControl, [t1, t2]) def test_addInvalidTask(self): pytest.raises(InvalidTask, TaskControl, [666]) def test_userErrorTaskDependency(self): tasks = [Task('wrong', None, task_dep=["typo"])] pytest.raises(InvalidTask, TaskControl, tasks) def test_userErrorSetupTask(self): tasks = [Task('wrong', None, setup=["typo"])] pytest.raises(InvalidTask, TaskControl, tasks) def test_sameTarget(self): tasks = [Task('t1',None,[],["fileX"]), Task('t2',None,[],["fileX"])] pytest.raises(InvalidTask, TaskControl, tasks) def test_wild(self): tasks = [Task('t1',None, task_dep=['foo*']), Task('foo4',None,)] TaskControl(tasks) assert 'foo4' in tasks[0].task_dep def test_bug770150_task_dependency_from_target(self): t1 = Task("taskX", None, file_dep=[], targets=['intermediate']) t2 = Task("taskY", None, file_dep=['intermediate'], task_dep=['taskZ']) t3 = Task("taskZ", None) TaskControl([t1, t2, t3]) assert ['taskZ', 'taskX'] == t2.task_dep TASKS_SAMPLE = [Task("t1", [""], doc="t1 doc string"), Task("t2", [""], doc="t2 doc string"), Task("g1", None, 
doc="g1 doc string"), Task("g1.a", [""], doc="g1.a doc string", is_subtask=True), Task("g1.b", [""], doc="g1.b doc string", is_subtask=True), Task("t3", [""], doc="t3 doc string", params=[{'name':'opt1','long':'message','default':''}])] class TestTaskControlCmdOptions(object): def testFilter(self): filter_ = ['t2', 't3'] tc = TaskControl(TASKS_SAMPLE) assert filter_ == tc._filter_tasks(filter_) def testProcessSelection(self): filter_ = ['t2', 't3'] tc = TaskControl(TASKS_SAMPLE) tc.process(filter_) assert filter_ == tc.selected_tasks def testProcessAll(self): tc = TaskControl(TASKS_SAMPLE) tc.process(None) assert ['t1', 't2', 'g1', 'g1.a', 'g1.b', 't3'] == tc.selected_tasks def testFilterPattern(self): tc = TaskControl(TASKS_SAMPLE) assert ['t1', 'g1', 'g1.a', 'g1.b'] == tc._filter_tasks(['*1*']) def testFilterSubtask(self): filter_ = ["t1", "g1.b"] tc = TaskControl(TASKS_SAMPLE) assert filter_ == tc._filter_tasks(filter_) def testFilterTarget(self): tasks = list(TASKS_SAMPLE) tasks.append(Task("tX", [""],[],["targetX"])) tc = TaskControl(tasks) assert ['tX'] == tc._filter_tasks(["targetX"]) def test_filter_delayed_subtask(self): t1 = Task("taskX", None) t2 = Task("taskY", None, loader=DelayedLoader(lambda: None)) control = TaskControl([t1, t2]) control._filter_tasks(['taskY:foo']) assert isinstance(t2.loader, DelayedLoader) # sub-task will use same loader, and keep parent basename assert control.tasks['taskY:foo'].loader.basename == 'taskY' assert control.tasks['taskY:foo'].loader is t2.loader def test_filter_delayed_regex_single(self): t1 = Task("taskX", None) t2 = Task("taskY", None, loader=DelayedLoader(lambda: None, target_regex='a.*')) t3 = Task("taskZ", None, loader=DelayedLoader(lambda: None, target_regex='b.*')) t4 = Task("taskW", None, loader=DelayedLoader(lambda: None)) control = TaskControl([t1, t2, t3, t4], auto_delayed_regex=False) selected = control._filter_tasks(['abc']) assert isinstance(t2.loader, DelayedLoader) assert len(selected) == 1 assert 
selected[0] == '_regex_target_abc:taskY' sel_task = control.tasks['_regex_target_abc:taskY'] assert sel_task.file_dep == {'abc'} assert sel_task.loader.basename == 'taskY' assert sel_task.loader is t2.loader def test_filter_delayed_multi_select(self): t1 = Task("taskX", None) t2 = Task("taskY", None, loader=DelayedLoader(lambda: None, target_regex='a.*')) t3 = Task("taskZ", None, loader=DelayedLoader(lambda: None, target_regex='b.*')) t4 = Task("taskW", None, loader=DelayedLoader(lambda: None)) control = TaskControl([t1, t2, t3, t4], auto_delayed_regex=False) selected = control._filter_tasks(['abc', 'att']) assert isinstance(t2.loader, DelayedLoader) assert len(selected) == 2 assert selected[0] == '_regex_target_abc:taskY' assert selected[1] == '_regex_target_att:taskY' def test_filter_delayed_regex_multiple_match(self): t1 = Task("taskX", None) t2 = Task("taskY", None, loader=DelayedLoader(lambda: None, target_regex='a.*')) t3 = Task("taskZ", None, loader=DelayedLoader(lambda: None, target_regex='ab.')) t4 = Task("taskW", None, loader=DelayedLoader(lambda: None)) control = TaskControl([t1, t2, t3, t4], auto_delayed_regex=False) selected = control._filter_tasks(['abc']) assert len(selected) == 2 assert (sorted(selected) == ['_regex_target_abc:taskY', '_regex_target_abc:taskZ']) assert control.tasks['_regex_target_abc:taskY'].file_dep == {'abc'} assert control.tasks['_regex_target_abc:taskZ'].file_dep == {'abc'} assert (control.tasks['_regex_target_abc:taskY'].loader.basename == t2.name) assert (control.tasks['_regex_target_abc:taskZ'].loader.basename == t3.name) def test_filter_delayed_regex_auto(self): t1 = Task("taskX", None) t2 = Task("taskY", None, loader=DelayedLoader(lambda: None, target_regex='a.*')) t3 = Task("taskZ", None, loader=DelayedLoader(lambda: None)) control = TaskControl([t1, t2, t3], auto_delayed_regex=True) selected = control._filter_tasks(['abc']) assert len(selected) == 2 assert (sorted(selected) == ['_regex_target_abc:taskY', 
'_regex_target_abc:taskZ']) assert control.tasks['_regex_target_abc:taskY'].file_dep == {'abc'} assert control.tasks['_regex_target_abc:taskZ'].file_dep == {'abc'} assert (control.tasks['_regex_target_abc:taskY'].loader.basename == t2.name) assert (control.tasks['_regex_target_abc:taskZ'].loader.basename == t3.name) # filter a non-existent task raises an error def testFilterWrongName(self): tc = TaskControl(TASKS_SAMPLE) pytest.raises(InvalidCommand, tc._filter_tasks, ['no']) def testFilterWrongSubtaskName(self): t1 = Task("taskX", None) t2 = Task("taskY", None) tc = TaskControl([t1, t2]) pytest.raises(InvalidCommand, tc._filter_tasks, ['taskX:no']) def testFilterEmptyList(self): filter_ = [] tc = TaskControl(TASKS_SAMPLE) assert filter_ == tc._filter_tasks(filter_) def testOptions(self): options = ["t3", "--message", "hello option!", "t1"] tc = TaskControl(TASKS_SAMPLE) assert ['t3', 't1'] == tc._filter_tasks(options) assert "hello option!" == tc.tasks['t3'].options['opt1'] def testPosParam(self): tasks = list(TASKS_SAMPLE) tasks.append(Task("tP", [""],[],[], pos_arg='myp')) tc = TaskControl(tasks) args = ["tP", "hello option!", "t1"] assert ['tP',] == tc._filter_tasks(args) assert ["hello option!", "t1"] == tc.tasks['tP'].pos_arg_val class TestExecNode(object): def test_repr(self): node = ExecNode(Task('t1', None), None) assert 't1' in repr(node) def test_ready_select__not_waiting(self): task = Task("t1", None) node = ExecNode(task, None) assert False == node.wait_select def test_parent_status_failure(self): n1 = ExecNode(Task('t1', None), None) n2 = ExecNode(Task('t2', None), None) n1.run_status = 'failure' n2.parent_status(n1) assert [n1] == n2.bad_deps assert [] == n2.ignored_deps def test_parent_status_ignore(self): n1 = ExecNode(Task('t1', None), None) n2 = ExecNode(Task('t2', None), None) n1.run_status = 'ignore' n2.parent_status(n1) assert [] == n2.bad_deps assert [n1] == n2.ignored_deps def test_step(self): def my_gen(): yield 1 yield 2 task = Task("t1", 
None) node = ExecNode(task, None) node.generator = my_gen() assert 1 == node.step() assert 2 == node.step() assert None == node.step() class TestDecoratorNoNone(object): def test_filtering(self): def my_gen(): yield 1 yield None yield 2 gen = no_none(my_gen) assert [1, 2] == [x for x in gen()] class TestTaskDispatcher_GenNone(object): def test_create(self): tasks = {'t1': Task('t1', None)} td = TaskDispatcher(tasks, [], None) node = td._gen_node(None, 't1') assert isinstance(node, ExecNode) assert node == td.nodes['t1'] def test_already_created(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None) } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') td._gen_node(n1, 't2') assert None == td._gen_node(None, 't1') def test_cyclic(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None) } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(n1, 't2') pytest.raises(InvalidDodoFile, td._gen_node, n2, 't1') class TestTaskDispatcher_node_add_wait_run(object): def test_wait(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n1.wait_run.add('xxx') td._node_add_wait_run(n1, ['t2']) assert 2 == len(n1.wait_run) assert 't2' in n1.wait_run assert not n1.bad_deps assert n1 in n2.waiting_me def test_none(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n2.run_status = 'done' td._node_add_wait_run(n1, ['t2']) assert not n1.wait_run def test_deps_not_ok(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n2.run_status = 'failure' td._node_add_wait_run(n1, ['t2']) assert n1.bad_deps def test_calc_dep_already_executed(self): tasks = {'t1': Task('t1', None, calc_dep=['t2']), 't2': Task('t2', 
None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n2.run_status = 'done' n2.task.values = {'calc_dep': ['t3'], 'task_dep':['t5']} td._node_add_wait_run(n1, ['t2'], calc=True) # n1 is updated with results from t2 assert n1.calc_dep == set(['t2', 't3']) assert n1.task_dep == ['t5'] # n1 doesnt need to wait any calc_dep to be executed assert n1.wait_run_calc == set() class TestTaskDispatcher_add_task(object): def test_no_deps(self): tasks = {'t1': Task('t1', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') assert [tasks['t1']] == list(td._add_task(n1)) def test_task_deps(self): tasks = {'t1': Task('t1', None, task_dep=['t2', 't3']), 't2': Task('t2', None), 't3': Task('t3', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') gen = td._add_task(n1) n2 = next(gen) assert tasks['t2'] == n2.task n3 = next(gen) assert tasks['t3'] == n3.task assert 'wait' == next(gen) tasks['t2'].run_status = 'done' td._update_waiting(n2) tasks['t3'].run_status = 'done' td._update_waiting(n3) assert tasks['t1'] == next(gen) def test_task_deps_already_created(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') assert 'wait' == n1.step() assert 'wait' == n1.step() #tasks['t2'].run_status = 'done' td._update_waiting(n2) assert tasks['t1'] == n1.step() def test_task_deps_no_wait(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n2.run_status = 'done' gen = td._add_task(n1) assert tasks['t1'] == next(gen) def test_calc_dep(self): def calc_intermediate(): return {'file_dep': ['intermediate']} tasks = {'t1': Task('t1', None, calc_dep=['t2']), 't2': Task('t2', [calc_intermediate]), 't3': Task('t3', None, targets=['intermediate']), } td = 
TaskDispatcher(tasks, {'intermediate': 't3'}, None) n1 = td._gen_node(None, 't1') n2 = n1.step() assert tasks['t2'] == n2.task assert 'wait' == n1.step() # execute t2 to process calc_dep tasks['t2'].execute() td.nodes['t2'].run_status = 'done' td._update_waiting(n2) n3 = n1.step() assert tasks['t3'] == n3.task assert 'intermediate' in tasks['t1'].file_dep assert 't3' in tasks['t1'].task_dep # t3 was added by calc dep assert 'wait' == n1.step() n3.run_status = 'done' td._update_waiting(n3) assert tasks['t1'] == n1.step() def test_calc_dep_already_executed(self): tasks = {'t1': Task('t1', None, calc_dep=['t2']), 't2': Task('t2', None), 't3': Task('t3', None), } td = TaskDispatcher(tasks, {'intermediate': 't3'}, None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') n2.run_status = 'done' n2.task.values = {'calc_dep': ['t3']} assert 't3' == n1.step().task.name assert set() == n1.wait_run assert set() == n1.wait_run_calc #assert False def test_setup_task__run(self): tasks = {'t1': Task('t1', None, setup=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') gen = td._add_task(n1) assert tasks['t1'] == next(gen) # first time (just select) assert 'wait' == next(gen) # wait for select result n1.run_status = 'run' assert tasks['t2'] == next(gen).task # send setup task assert 'wait' == next(gen) assert tasks['t1'] == next(gen) # second time def test_delayed_creation(self): def creator(): yield Task('foo', None, loader=DelayedLoader(lambda : None)) delayed_loader = DelayedLoader(creator, executed='t2') tasks = {'t1': Task('t1', None, loader=delayed_loader), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') gen = td._add_task(n1) # first returned node is `t2` because it is an implicit task_dep n2 = next(gen) assert n2.task.name == 't2' # wait until t2 is finished n3 = next(gen) assert n3 == 'wait' # after t2 is done, generator is reseted td._update_waiting(n2) n4 = next(gen) 
assert n4 == "reset generator" # recursive loader is preserved assert isinstance(td.tasks['foo'].loader, DelayedLoader) pytest.raises(AssertionError, next, gen) def test_delayed_creation_sub_task(self): # usually a repeated loader is replaced by the real task # when it is first executed, the problem arises when the # the expected task is not actually created def creator(): yield Task('t1:foo', None) yield Task('t1:bar', None) delayed_loader = DelayedLoader(creator, executed='t2') tasks = { 't1': Task('t1', None, loader=delayed_loader), 't2': Task('t2', None),} # simulate a sub-task from delayed created added to task_list tasks['t1:foo'] = Task('t1:foo', None, loader=delayed_loader) tasks['t1:xxx'] = Task('t1:xxx', None, loader=delayed_loader) delayed_loader.basename = 't1' td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1:foo') gen = td._add_task(n1) # first returned node is `t2` because it is an implicit task_dep n1b = next(gen) assert n1b.task.name == 't2' # wait until t2 is finished n1c = next(gen) assert n1c == 'wait' # after t2 is done, generator is reseted n1b.run_status = 'successful' td._update_waiting(n1b) n1d = next(gen) assert n1d == "reset generator" assert 't1:foo' in td.tasks assert 't1:bar' in td.tasks # finish with t1:foo gen2 = td._add_task(n1) n1.reset_task(td.tasks[n1.task.name], gen2) n2 = next(gen2) assert n2.name == 't1:foo' pytest.raises(StopIteration, next, gen2) # try non-existent t1:xxx n3 = td._gen_node(None, 't1:xxx') gen3 = td._add_task(n3) # ? should raise a runtime error? 
assert next(gen3) == 'reset generator' def test_delayed_creation_target_regex(self): def creator(): yield Task('foo', None, targets=['tgt1']) delayed_loader = DelayedLoader(creator, executed='t2', target_regex='tgt1') tasks = { 't1': Task('t1', None, loader=delayed_loader), 't2': Task('t2', None), } tc = TaskControl(list(tasks.values())) selection = tc._filter_tasks(['tgt1']) assert ['_regex_target_tgt1:t1'] == selection td = TaskDispatcher(tc.tasks, tc.targets, selection) n1 = td._gen_node(None, '_regex_target_tgt1:t1') gen = td._add_task(n1) # first returned node is `t2` because it is an implicit task_dep n2 = next(gen) assert n2.task.name == 't2' # wait until t2 is finished n3 = next(gen) assert n3 == 'wait' # after t2 is done, generator is reseted n2.run_status = 'done' td._update_waiting(n2) n4 = next(gen) assert n4 == "reset generator" # manually reset generator n1.reset_task(td.tasks[n1.task.name], td._add_task(n1)) # get the delayed created task gen2 = n1.generator # n1 generator was reset / replaced # get t1 because of its target was a file_dep of _regex_target_tgt1 n5 = next(gen2) assert n5.task.name == 'foo' # get internal created task n5.run_status = 'done' td._update_waiting(n5) n6 = next(gen2) assert n6.name == '_regex_target_tgt1:t1' # file_dep is removed because foo might not be task # that creates this task (support for multi regex matches) assert n6.file_dep == {} def test_regex_group_already_created(self): # this is required to avoid loading more tasks than required, GH-#60 def creator1(): yield Task('foo1', None, targets=['tgt1']) delayed_loader1 = DelayedLoader(creator1, target_regex='tgt.*') def creator2(): # pragma: no cover yield Task('foo2', None, targets=['tgt2']) delayed_loader2 = DelayedLoader(creator2, target_regex='tgt.*') t1 = Task('t1', None, loader=delayed_loader1) t2 = Task('t2', None, loader=delayed_loader2) tc = TaskControl([t1, t2]) selection = tc._filter_tasks(['tgt1']) assert ['_regex_target_tgt1:t1', '_regex_target_tgt1:t2'] 
== selection td = TaskDispatcher(tc.tasks, tc.targets, selection) n1 = td._gen_node(None, '_regex_target_tgt1:t1') gen = td._add_task(n1) # delayed loader executed, so generator is reset n1b = next(gen) assert n1b == "reset generator" # manually reset generator n1.reset_task(td.tasks[n1.task.name], td._add_task(n1)) # get the delayed created task gen1b = n1.generator # n1 generator was reset / replaced # get t1 because of its target was a file_dep of _regex_target_tgt1 n1c = next(gen1b) assert n1c.task.name == 'foo1' # get internal created task n1c.run_status = 'done' td._update_waiting(n1c) n1d = next(gen1b) assert n1d.name == '_regex_target_tgt1:t1' ## go for second selected task n2 = td._gen_node(None, '_regex_target_tgt1:t2') gen2 = td._add_task(n2) # loader is not executed because target t1 was already found pytest.raises(StopIteration, next, gen2) def test_regex_not_found(self): def creator1(): yield Task('foo1', None, targets=['tgt1']) delayed_loader1 = DelayedLoader(creator1, target_regex='tgt.*') t1 = Task('t1', None, loader=delayed_loader1) tc = TaskControl([t1]) selection = tc._filter_tasks(['tgt666']) assert ['_regex_target_tgt666:t1'] == selection td = TaskDispatcher(tc.tasks, tc.targets, selection) n1 = td._gen_node(None, '_regex_target_tgt666:t1') gen = td._add_task(n1) # target not found after generating all tasks from regex group pytest.raises(InvalidCommand, next, gen) class TestTaskDispatcher_get_next_node(object): def test_none(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) assert None == td._get_next_node([], []) def test_ready(self): tasks = {'t1': Task('t1', None), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') ready = deque([n1]) assert n1 == td._get_next_node(ready, ['t2']) assert 0 == len(ready) def test_to_run(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], 
None) to_run = ['t2', 't1'] td._gen_node(None, 't1') # t1 was already created got = td._get_next_node([], to_run) assert isinstance(got, ExecNode) assert 't2' == got.task.name assert [] == to_run def test_to_run_none(self): tasks = {'t1': Task('t1', None), } td = TaskDispatcher(tasks, [], None) td._gen_node(None, 't1') # t1 was already created to_run = ['t1'] assert None == td._get_next_node([], to_run) assert [] == to_run class TestTaskDispatcher_update_waiting(object): def test_wait_select(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n2 = td._gen_node(None, 't2') n2.wait_select = True n2.run_status = 'run' td.waiting.add(n2) td._update_waiting(n2) assert False == n2.wait_select assert deque([n2]) == td.ready def test_wait_run(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') td._node_add_wait_run(n1, ['t2']) n2.run_status = 'done' td.waiting.add(n1) td._update_waiting(n2) assert not n1.bad_deps assert deque([n1]) == td.ready assert 0 == len(td.waiting) def test_wait_run_deps_not_ok(self): tasks = {'t1': Task('t1', None, task_dep=['t2']), 't2': Task('t2', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n2 = td._gen_node(None, 't2') td._node_add_wait_run(n1, ['t2']) n2.run_status = 'failure' td.waiting.add(n1) td._update_waiting(n2) assert n1.bad_deps assert deque([n1]) == td.ready assert 0 == len(td.waiting) def test_waiting_node_updated(self): tasks = {'t1': Task('t1', None, calc_dep=['t2'], task_dep=['t4']), 't2': Task('t2', None), 't3': Task('t3', None), 't4': Task('t4', None), } td = TaskDispatcher(tasks, [], None) n1 = td._gen_node(None, 't1') n1_gen = td._add_task(n1) n2 = next(n1_gen) assert 't2' == n2.task.name assert 't4' == next(n1_gen).task.name assert 'wait' == next(n1_gen) assert set() == n1.calc_dep assert td.waiting == 
set() n2.run_status = 'done' n2.task.values = {'calc_dep': ['t2', 't3'], 'task_dep':['t5']} assert n1.calc_dep == set() assert n1.task_dep == [] td._update_waiting(n2) assert n1.calc_dep == set(['t3']) assert n1.task_dep == ['t5'] class TestTaskDispatcher_dispatcher_generator(object): def test_normal(self): tasks = [Task("t1", None, task_dep=["t2"]), Task("t2", None,)] control = TaskControl(tasks) control.process(['t1']) gen = control.task_dispatcher().generator n2 = next(gen) assert tasks[1] == n2.task assert "hold on" == next(gen) assert "hold on" == next(gen) # hold until t2 is done assert tasks[0] == gen.send(n2).task pytest.raises(StopIteration, lambda gen: next(gen), gen) def test_delayed_creation(self): def creator(): yield {'name': 'foo1', 'actions': None, 'file_dep': ['bar']} yield {'name': 'foo2', 'actions': None, 'targets': ['bar']} delayed_loader = DelayedLoader(creator, executed='t2') tasks = [Task('t0', None, task_dep=['t1']), Task('t1', None, loader=delayed_loader), Task('t2', None)] control = TaskControl(tasks) control.process(['t0']) disp = control.task_dispatcher() gen = disp.generator nt2 = next(gen) assert nt2.task.name == "t2" # wait for t2 to be executed assert "hold on" == next(gen) assert "hold on" == next(gen) # hold until t2 is done # delayed creation of tasks for t1 does not mess existing info assert disp.nodes['t1'].waiting_me == set([disp.nodes['t0']]) nf2 = gen.send(nt2) assert disp.nodes['t1'].waiting_me == set([disp.nodes['t0']]) assert nf2.task.name == "t1:foo2" nf1 = gen.send(nf2) assert nf1.task.name == "t1:foo1" assert nf1.task.task_dep == ['t1:foo2'] # implicit dep added nt1 = gen.send(nf1) assert nt1.task.name == "t1" nt0 = gen.send(nt1) assert nt0.task.name == "t0" pytest.raises(StopIteration, lambda gen: next(gen), gen) doit-0.30.3/tests/test_dependency.py000066400000000000000000000571771305250115000174360ustar00rootroot00000000000000import os import time import sys import tempfile import uuid import pytest from doit.task 
import Task
from doit.dependency import get_md5, get_file_md5
from doit.dependency import DbmDB, JsonDB, SqliteDB, Dependency
from doit.dependency import DatabaseException, UptodateCalculator
from doit.dependency import FileChangedChecker, MD5Checker, TimestampChecker
from doit.dependency import DependencyStatus

from .conftest import get_abspath, depfile

# path to test folder
TEST_PATH = os.path.dirname(__file__)
PROGRAM = "python %s/sample_process.py" % TEST_PATH


def test_unicode_md5():
    data = "我"
    # no exception is raised
    assert get_md5(data)


def test_md5():
    filePath = os.path.join(os.path.dirname(__file__), "sample_md5.txt")
    # expected result obtained with the command-line tool md5sum
    expected = "45d1503cb985898ab5bd8e58973007dd"
    assert expected == get_file_md5(filePath)


def test_sqlite_import():
    """Check that the sqlite3 module is not imported until the
    SqliteDB class is instantiated.
    """
    filename = os.path.join(tempfile.gettempdir(), str(uuid.uuid4()))
    assert 'sqlite3' not in sys.modules
    SqliteDB(filename)
    assert 'sqlite3' in sys.modules
    os.remove(filename)


####
# Dependencies are files only (not other tasks).
#
# Whenever a task has a dependency, the runner checks whether that dependency
# was modified since the last successful run; if not, the task is skipped.
# Since more than one task might share the same dependency, and those tasks
# might have different results (success/failure), the signature is associated
# not only with the file but also with the task.
#
# Saved in the DB as (task - dependency - (timestamp, size, signature)):
#     taskId_dependency => signature(dependency)
# taskId is md5(CmdTask.task)

# test parametrization: execute tests for all DB backends.
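The comment block above describes the per-task fingerprint stored for each file dependency: a `(timestamp, size, signature)` triple. As a stdlib-only illustration of that idea (the function name and signature here are hypothetical, not doit's API), such a fingerprint can be computed like this:

```python
import hashlib
import os


def file_fingerprint(path, chunk_size=65536):
    """Return a (timestamp, size, md5) triple for *path*.

    Illustrative sketch of the state described in the comment above;
    not doit's actual implementation.
    """
    md5 = hashlib.md5()
    with open(path, 'rb') as stream:
        # hash in chunks so large dependencies need not fit in memory
        for chunk in iter(lambda: stream.read(chunk_size), b''):
            md5.update(chunk)
    stat = os.stat(path)
    return (stat.st_mtime, stat.st_size, md5.hexdigest())
```

Keeping the timestamp and size next to the digest lets a checker skip the comparatively expensive md5 step when neither has changed, which is the shortcut the `MD5Checker` tests below exercise.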
# create a separate fixture to be used only by this module # because only here it is required to test with all backends @pytest.fixture def pdepfile(request): return depfile(request) pytest.fixture(params=[JsonDB, DbmDB, SqliteDB])(pdepfile) # FIXME there was major refactor breaking classes from dependency, # unit-tests could be more specific to base classes. class TestDependencyDb(object): # adding a new value to the DB def test_get_set(self, pdepfile): pdepfile._set("taskId_X","dependency_A","da_md5") value = pdepfile._get("taskId_X","dependency_A") assert "da_md5" == value, value def test_get_set_unicode_name(self, pdepfile): pdepfile._set("taskId_我", "dependency_A", "da_md5") value = pdepfile._get("taskId_我", "dependency_A") assert "da_md5" == value, value # def test_dump(self, pdepfile): # save and close db pdepfile._set("taskId_X","dependency_A","da_md5") pdepfile.close() # open it again and check the value d2 = Dependency(pdepfile.db_class, pdepfile.name) value = d2._get("taskId_X","dependency_A") assert "da_md5" == value, value def test_corrupted_file(self, pdepfile): if pdepfile.whichdb is None: # pragma: no cover pytest.skip('dumbdbm too dumb to detect db corruption') # create some corrupted files for name_ext in pdepfile.name_ext: full_name = pdepfile.name + name_ext fd = open(full_name, 'w') fd.write("""{"x": y}""") fd.close() pytest.raises(DatabaseException, Dependency, pdepfile.db_class, pdepfile.name) def test_corrupted_file_unrecognized_excep(self, monkeypatch, pdepfile): if pdepfile.db_class is not DbmDB: pytest.skip('test doesnt apply to non DBM DB') if pdepfile.whichdb is None: # pragma: no cover pytest.skip('dumbdbm too dumb to detect db corruption') # create some corrupted files for name_ext in pdepfile.name_ext: full_name = pdepfile.name + name_ext fd = open(full_name, 'w') fd.write("""{"x": y}""") fd.close() monkeypatch.setattr(DbmDB, 'DBM_CONTENT_ERROR_MSG', 'xxx') pytest.raises(DatabaseException, Dependency, pdepfile.db_class, 
pdepfile.name) # _get must return None if entry doesnt exist. def test_getNonExistent(self, pdepfile): assert pdepfile._get("taskId_X","dependency_A") == None def test_in(self, pdepfile): pdepfile._set("taskId_ZZZ","dep_1","12") assert pdepfile._in("taskId_ZZZ") assert not pdepfile._in("taskId_hohoho") def test_remove(self, pdepfile): pdepfile._set("taskId_ZZZ","dep_1","12") pdepfile._set("taskId_ZZZ","dep_2","13") pdepfile._set("taskId_YYY","dep_1","14") pdepfile.remove("taskId_ZZZ") assert None == pdepfile._get("taskId_ZZZ","dep_1") assert None == pdepfile._get("taskId_ZZZ","dep_2") assert "14" == pdepfile._get("taskId_YYY","dep_1") # special test for DBM backend and "dirty"/caching mechanism def test_remove_from_non_empty_file(self, pdepfile): # 1 - put 2 tasks of file pdepfile._set("taskId_XXX", "dep_1", "x") pdepfile._set("taskId_YYY", "dep_1", "x") pdepfile.close() # 2 - re-open and remove one task reopened = Dependency(pdepfile.db_class, pdepfile.name) reopened.remove("taskId_YYY") reopened.close() # 3 - re-open again and check task was really removed reopened2 = Dependency(pdepfile.db_class, pdepfile.name) assert reopened2._in("taskId_XXX") assert not reopened2._in("taskId_YYY") def test_remove_all(self, pdepfile): pdepfile._set("taskId_ZZZ","dep_1","12") pdepfile._set("taskId_ZZZ","dep_2","13") pdepfile._set("taskId_YYY","dep_1","14") pdepfile.remove_all() assert None == pdepfile._get("taskId_ZZZ","dep_1") assert None == pdepfile._get("taskId_ZZZ","dep_2") assert None == pdepfile._get("taskId_YYY","dep_1") class TestSaveSuccess(object): def test_save_result(self, pdepfile): t1 = Task('t_name', None) t1.result = "result" pdepfile.save_success(t1) assert get_md5("result") == pdepfile._get(t1.name, "result:") assert get_md5("result") == pdepfile.get_result(t1.name) def test_save_result_hash(self, pdepfile): t1 = Task('t_name', None) t1.result = "result" pdepfile.save_success(t1, result_hash='abc') assert 'abc' == pdepfile._get(t1.name, "result:") def 
test_save_resultNone(self, pdepfile): t1 = Task('t_name', None) pdepfile.save_success(t1) assert None is pdepfile._get(t1.name, "result:") def test_save_result_dict(self, pdepfile): t1 = Task('t_name', None) t1.result = {'d': "result"} pdepfile.save_success(t1) assert {'d': "result"} == pdepfile._get(t1.name, "result:") def test_save_file_md5(self, pdepfile): # create a test dependency file filePath = get_abspath("data/dependency1") ff = open(filePath,"w") ff.write("i am the first dependency ever for doit") ff.close() # save it t1 = Task("taskId_X", None, [filePath]) pdepfile.save_success(t1) expected = "a1bb792202ce163b4f0d17cb264c04e1" value = pdepfile._get("taskId_X",filePath) assert os.path.getmtime(filePath) == value[0] # timestamp assert 39 == value[1] # size assert expected == value[2] # MD5 def test_save_skip(self, pdepfile, monkeypatch): #self.test_save_file_md5(pdepfile) filePath = get_abspath("data/dependency1") t1 = Task("taskId_X", None, [filePath]) pdepfile._set(t1.name, filePath, (345, 0, "fake")) monkeypatch.setattr(os.path, 'getmtime', lambda x: 345) # save but md5 is not modified pdepfile.save_success(t1) got = pdepfile._get("taskId_X", filePath) assert "fake" == got[2] def test_save_files(self, pdepfile): filePath = get_abspath("data/dependency1") ff = open(filePath,"w") ff.write("part1") ff.close() filePath2 = get_abspath("data/dependency2") ff = open(filePath2,"w") ff.write("part2") ff.close() assert pdepfile._get("taskId_X",filePath) is None assert pdepfile._get("taskId_X",filePath2) is None t1 = Task("taskId_X", None, [filePath,filePath2]) pdepfile.save_success(t1) assert pdepfile._get("taskId_X",filePath) is not None assert pdepfile._get("taskId_X",filePath2) is not None assert set(pdepfile._get("taskId_X", 'deps:')) == t1.file_dep def test_save_values(self, pdepfile): t1 = Task('t1', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) assert {'x':5, 'y':10} == pdepfile._get("t1", "_values_:") class TestGetValue(object): def 
test_all_values(self, pdepfile): t1 = Task('t1', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) assert {'x':5, 'y':10} == pdepfile.get_values('t1') def test_ok(self, pdepfile): t1 = Task('t1', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) assert 5 == pdepfile.get_value('t1', 'x') def test_ok_dot_on_task_name(self, pdepfile): t1 = Task('t1:a.ext', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) assert 5 == pdepfile.get_value('t1:a.ext', 'x') def test_invalid_taskid(self, pdepfile): t1 = Task('t1', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) pytest.raises(Exception, pdepfile.get_value, 'nonono', 'x') def test_invalid_key(self, pdepfile): t1 = Task('t1', None) t1.values = {'x':5, 'y':10} pdepfile.save_success(t1) pytest.raises(Exception, pdepfile.get_value, 't1', 'z') class TestRemoveSuccess(object): def test_save_result(self, pdepfile): t1 = Task('t_name', None) t1.result = "result" pdepfile.save_success(t1) assert get_md5("result") == pdepfile._get(t1.name, "result:") pdepfile.remove_success(t1) assert None is pdepfile._get(t1.name, "result:") class TestIgnore(object): def test_save_result(self, pdepfile): t1 = Task('t_name', None) pdepfile.ignore(t1) assert '1' == pdepfile._get(t1.name, "ignore:") class TestMD5Checker(object): def test_timestamp(self, dependency1): checker = MD5Checker() state = checker.get_state(dependency1, None) state2 = (state[0], state[1]+1, '') file_stat = os.stat(dependency1) # dep considered the same as long as timestamp is unchanged assert not checker.check_modified(dependency1, file_stat, state2) def test_size(self, dependency1): checker = MD5Checker() state = checker.get_state(dependency1, None) state2 = (state[0]+1, state[1]+1, state[2]) file_stat = os.stat(dependency1) # if size changed for sure modified (md5 is not checked) assert checker.check_modified(dependency1, file_stat, state2) def test_md5(self, dependency1): checker = MD5Checker() state = checker.get_state(dependency1, 
None) file_stat = os.stat(dependency1) # same size and md5 state2 = (state[0]+1, state[1], state[2]) assert not checker.check_modified(dependency1, file_stat, state2) # same size, different md5 state3 = (state[0]+1, state[1], 'not me') assert checker.check_modified(dependency1, file_stat, state3) class TestCustomChecker(object): def test_not_implemented(self, dependency1): class MyChecker(FileChangedChecker): pass checker = MyChecker() pytest.raises(NotImplementedError, checker.get_state, None, None) pytest.raises(NotImplementedError, checker.check_modified, None, None, None) class TestTimestampChecker(object): def test_timestamp(self, dependency1): checker = TimestampChecker() state = checker.get_state(dependency1, None) file_stat = os.stat(dependency1) assert not checker.check_modified(dependency1, file_stat, state) assert checker.check_modified(dependency1, file_stat, state+1) class TestDependencyStatus(object): def test_add_reason(self): result = DependencyStatus(True) assert 'up-to-date' == result.status assert not result.add_reason('changed_file_dep', 'f1') assert 'run' == result.status assert not result.add_reason('changed_file_dep', 'f2') assert ['f1', 'f2'] == result.reasons['changed_file_dep'] def test_add_reason_error(self): result = DependencyStatus(True) assert 'up-to-date' == result.status assert not result.add_reason('missing_file_dep', 'f1', 'error') assert 'error' == result.status assert ['f1'] == result.reasons['missing_file_dep'] def test_set_reason(self): result = DependencyStatus(True) assert 'up-to-date' == result.status assert not result.set_reason('has_no_dependencies', True) assert 'run' == result.status assert True == result.reasons['has_no_dependencies'] def test_no_log(self): result = DependencyStatus(False) assert 'up-to-date' == result.status assert result.set_reason('has_no_dependencies', True) assert 'run' == result.status def test_get_error_message(self): result = DependencyStatus(False) assert None == result.get_error_message() 
        result.error_reason = 'foo xxx'
        assert 'foo xxx' == result.get_error_message()


class TestGetStatus(object):

    def test_ignore(self, pdepfile):
        t1 = Task("t1", None)
        # before ignore
        assert not pdepfile.status_is_ignore(t1)
        # after ignore
        pdepfile.ignore(t1)
        assert pdepfile.status_is_ignore(t1)

    def test_fileDependencies(self, pdepfile):
        filePath = get_abspath("data/dependency1")
        ff = open(filePath, "w")
        ff.write("part1")
        ff.close()

        dependencies = [filePath]
        t1 = Task("t1", None, dependencies)

        # first time: execute
        assert 'run' == pdepfile.get_status(t1, {}).status
        assert dependencies == t1.dep_changed

        # second time: no
        pdepfile.save_success(t1)
        assert 'up-to-date' == pdepfile.get_status(t1, {}).status
        assert [] == t1.dep_changed

        # FIXME - mock timestamp
        time.sleep(1)  # required, otherwise the timestamp is not modified!
        # a small change on the file
        ff = open(filePath, "a")
        ff.write(" part2")
        ff.close()

        # execute again
        assert 'run' == pdepfile.get_status(t1, {}).status
        assert dependencies == t1.dep_changed

    def test_fileDependencies_changed(self, pdepfile):
        filePath = get_abspath("data/dependency1")
        ff = open(filePath, "w")
        ff.write("part1")
        ff.close()
        filePath2 = get_abspath("data/dependency2")
        ff = open(filePath, "w")
        ff.write("part1")
        ff.close()

        dependencies = [filePath, filePath2]
        t1 = Task("t1", None, dependencies)

        # first time: execute
        assert 'run' == pdepfile.get_status(t1, {}).status
        assert sorted(dependencies) == sorted(t1.dep_changed)

        # second time: no
        pdepfile.save_success(t1)
        assert 'up-to-date' == pdepfile.get_status(t1, {}).status
        assert [] == t1.dep_changed

        # remove dependency filePath2
        t1 = Task("t1", None, [filePath])
        # execute again
        assert 'run' == pdepfile.get_status(t1, {}).status
        assert [] == t1.dep_changed

    def test_fileDependencies_changed_get_log(self, pdepfile):
        filePath = get_abspath("data/dependency1")
        ff = open(filePath, "w")
        ff.write("part1")
        ff.close()
        filePath2 = get_abspath("data/dependency2")
        ff = open(filePath, "w")
        ff.write("part1")
        ff.close()
        t1 =
Task("t1", None, [filePath]) # first time execute result = pdepfile.get_status(t1, {}, get_log=True) assert 'run' == result.status assert [filePath] == t1.dep_changed pdepfile.save_success(t1) # second time t1b = Task("t1", None, [filePath2]) result = pdepfile.get_status(t1b, {}, get_log=True) assert 'run' == result.status assert [filePath2] == t1b.dep_changed assert [filePath] == result.reasons['removed_file_dep'] assert [filePath2] == result.reasons['added_file_dep'] def test_file_dependency_not_exist(self, pdepfile): filePath = get_abspath("data/dependency_not_exist") t1 = Task("t1", None, [filePath]) assert 'error' == pdepfile.get_status(t1, {}).status def test_change_checker(self, pdepfile, dependency1): t1 = Task("taskId_X", None, [dependency1]) pdepfile.checker = TimestampChecker() pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status # change of checker force `run` again pdepfile.checker = MD5Checker() assert 'run' == pdepfile.get_status(t1, {}).status pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status # if there is no dependency the task is always executed def test_noDependency(self, pdepfile): t1 = Task("t1", None) # first time execute assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed # second too pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed def test_UptodateFalse(self, pdepfile): filePath = get_abspath("data/dependency1") ff = open(filePath,"w") ff.write("part1") ff.close() t1 = Task("t1", None, file_dep=[filePath], uptodate=[False]) # first time execute assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed # second time execute too pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed def test_UptodateTrue(self, pdepfile): t1 = Task("t1", None, uptodate=[True]) pdepfile.save_success(t1) assert 'up-to-date' == 
pdepfile.get_status(t1, {}).status def test_UptodateNone(self, pdepfile): filePath = get_abspath("data/dependency1") ff = open(filePath,"w") ff.write("part1") ff.close() t1 = Task("t1", None, file_dep=[filePath], uptodate=[None]) # first time execute assert 'run' == pdepfile.get_status(t1, {}).status assert [filePath] == t1.dep_changed # second time execute too pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateFunction_True(self, pdepfile): def check(task, values): assert task.name == 't1' return True t1 = Task("t1", None, uptodate=[check]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateFunction_False(self, pdepfile): filePath = get_abspath("data/dependency1") ff = open(filePath,"w") ff.write("part1") ff.close() def check(task, values): return False t1 = Task("t1", None, file_dep=[filePath], uptodate=[check]) # first time execute assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed # second time execute too pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed def test_UptodateFunction_without_args_True(self, pdepfile): def check(): return True t1 = Task("t1", None, uptodate=[check]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_uptodate_call_all_even_if_some_False(self, pdepfile): checks = [] def check(): checks.append(1) return False t1 = Task("t1", None, uptodate=[check, check]) #pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert 2 == len(checks) def test_UptodateFunction_extra_args_True(self, pdepfile): def check(task, values, control): assert task.name == 't1' return control>30 t1 = Task("t1", None, uptodate=[ (check, [34]) ]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateCallable_True(self, pdepfile): class MyChecker(object): def __call__(self, 
task, values): assert task.name == 't1' return True t1 = Task("t1", None, uptodate=[ MyChecker() ]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateMethod_True(self, pdepfile): class MyChecker(object): def check(self, task, values): assert task.name == 't1' return True t1 = Task("t1", None, uptodate=[ MyChecker().check ]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateCallable_added_attributes(self, pdepfile): task_dict = "fake dict" class My_uptodate(UptodateCalculator): def __call__(self, task, values): # attributes were added to object before call'ing it assert task_dict == self.tasks_dict assert None == self.get_val('t1', None) return True check = My_uptodate() t1 = Task("t1", None, uptodate=[check]) assert 'up-to-date' == pdepfile.get_status(t1, task_dict).status def test_UptodateCommand_True(self, pdepfile): t1 = Task("t1", None, uptodate=[PROGRAM]) pdepfile.save_success(t1) assert 'up-to-date' == pdepfile.get_status(t1, {}).status def test_UptodateCommand_False(self, pdepfile): t1 = Task("t1", None, uptodate=[PROGRAM + ' please fail']) pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status # if target file does not exist, task is outdated. 
def test_targets_notThere(self, pdepfile, dependency1): target = get_abspath("data/target") if os.path.exists(target): os.remove(target) t1 = Task("task x", None, [dependency1], [target]) pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert [dependency1] == t1.dep_changed def test_targets(self, pdepfile, dependency1): filePath = get_abspath("data/target") ff = open(filePath,"w") ff.write("part1") ff.close() deps = [dependency1] targets = [filePath] t1 = Task("task X", None, deps, targets) pdepfile.save_success(t1) # up-to-date because target exist assert 'up-to-date' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed def test_targetFolder(self, pdepfile, dependency1): # folder not there. task is not up-to-date deps = [dependency1] folderPath = get_abspath("data/target-folder") if os.path.exists(folderPath): os.rmdir(folderPath) t1 = Task("task x", None, deps, [folderPath]) pdepfile.save_success(t1) assert 'run' == pdepfile.get_status(t1, {}).status assert deps == t1.dep_changed # create folder. 
task is up-to-date os.mkdir(folderPath) assert 'up-to-date' == pdepfile.get_status(t1, {}).status assert [] == t1.dep_changed doit-0.30.3/tests/test_doit_cmd.py000066400000000000000000000106201305250115000170600ustar00rootroot00000000000000import os import pytest from mock import Mock from doit.exceptions import InvalidCommand from doit.cmd_run import Run from doit.cmd_list import List from doit import doit_cmd def cmd_main(args): return doit_cmd.DoitMain().run(args) class TestRun(object): def test_version(self, capsys): cmd_main(["--version"]) out, err = capsys.readouterr() assert "lib" in out def test_usage(self, capsys): cmd_main(["--help"]) out, err = capsys.readouterr() assert "doit list" in out def test_run_is_default(self, monkeypatch): mock_run = Mock() monkeypatch.setattr(Run, "execute", mock_run) cmd_main([]) assert 1 == mock_run.call_count def test_run_other_subcommand(self, monkeypatch): mock_list = Mock() monkeypatch.setattr(List, "execute", mock_list) cmd_main(["list"]) assert 1 == mock_list.call_count def test_cmdline_vars(self, monkeypatch): mock_run = Mock() monkeypatch.setattr(Run, "execute", mock_run) cmd_main(['x=1', 'y=abc']) assert '1' == doit_cmd.get_var('x') assert 'abc' == doit_cmd.get_var('y') def test_cmdline_vars_not_opts(self, monkeypatch): mock_run = Mock() monkeypatch.setattr(Run, "execute", mock_run) cmd_main(['--z=5']) assert None == doit_cmd.get_var('--z') def test_task_loader_has_cmd_list(self, monkeypatch): cmd_names = [] def save_cmd_names(self, params, args): cmd_names.extend(self.loader.cmd_names) monkeypatch.setattr(Run, "execute", save_cmd_names) cmd_main([]) assert 'list' in cmd_names def test_extra_config(self, monkeypatch, depfile_name): outfile_val = [] def monkey_run(self, opt_values, pos_args): outfile_val.append(opt_values['outfile']) monkeypatch.setattr(Run, "execute", monkey_run) extra_config = { 'outfile': 'foo.txt', 'dep_file': depfile_name, } doit_cmd.DoitMain(extra_config={'GLOBAL': extra_config}).run([]) assert 
outfile_val[0] == 'foo.txt' class TestErrors(object): def test_interrupt(self, monkeypatch): def my_raise(*args): raise KeyboardInterrupt() mock_cmd = Mock(side_effect=my_raise) monkeypatch.setattr(Run, "execute", mock_cmd) pytest.raises(KeyboardInterrupt, cmd_main, []) def test_user_error(self, capsys, monkeypatch): mock_cmd = Mock(side_effect=InvalidCommand) monkeypatch.setattr(Run, "execute", mock_cmd) got = cmd_main([]) assert 3 == got out, err = capsys.readouterr() assert "ERROR" in err def test_internal_error(self, capsys, monkeypatch): mock_cmd = Mock(side_effect=Exception) monkeypatch.setattr(Run, "execute", mock_cmd) got = cmd_main([]) assert 3 == got out, err = capsys.readouterr() # traceback from Exception (this case code from mock lib) assert "mock.py" in err class TestConfig(object): def test_no_ini_config_file(self): main = doit_cmd.DoitMain(config_filenames=()) main.run(['--version']) def test_load_plugins_command(self): config_filename = os.path.join(os.path.dirname(__file__), 'sample.cfg') main = doit_cmd.DoitMain(config_filenames=config_filename) assert 1 == len(main.config['COMMAND']) # test loaded plugin command is actually used with plugin name assert 'foo' in main.get_cmds() def test_merge_api_ini_config(self): config_filename = os.path.join(os.path.dirname(__file__), 'sample.cfg') api_config = {'GLOBAL': {'opty':'10', 'optz':'10'}} main = doit_cmd.DoitMain(config_filenames=config_filename, extra_config=api_config) assert 1 == len(main.config['COMMAND']) # test loaded plugin command is actually used with plugin name assert 'foo' in main.get_cmds() # INI has higher preference the api_config assert main.config['GLOBAL'] == {'optx':'6', 'opty':'7', 'optz':'10'} def test_execute_command_plugin(self, capsys): config_filename = os.path.join(os.path.dirname(__file__), 'sample.cfg') main = doit_cmd.DoitMain(config_filenames=config_filename) main.run(['foo']) got = capsys.readouterr()[0] assert got == 'this command does nothing!\n' 
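The `TestRun` cases above depend on doit treating a bare command line as the `run` subcommand while still recognizing explicit subcommands such as `list` (and passing `x=1`-style command-line variables through to `run`). That dispatch behaviour, stripped of doit's machinery, can be sketched as follows — all names here are hypothetical, not doit's implementation:

```python
def dispatch(argv, commands, default='run'):
    """Pick a handler from *argv*; unrecognized input goes to *default*.

    Simplified stand-in for the behaviour exercised by
    test_run_is_default and test_run_other_subcommand above.
    """
    if argv and argv[0] in commands:
        return commands[argv[0]], argv[1:]
    # no recognized subcommand: the whole argv (e.g. `x=1` vars)
    # is handed to the default command
    return commands[default], argv


commands = {
    'run': lambda args: ('run', args),
    'list': lambda args: ('list', args),
}

handler, rest = dispatch([], commands)
assert handler(rest) == ('run', [])

handler, rest = dispatch(['list'], commands)
assert handler(rest) == ('list', [])
```

Note how the fallback branch is what makes `doit x=1 y=abc` equivalent to `doit run x=1 y=abc` in the tests above.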
doit-0.30.3/tests/test_exceptions.py

from doit import exceptions


class TestInvalidCommand(object):
    def test_just_string(self):
        exception = exceptions.InvalidCommand('whatever string')
        assert 'whatever string' == str(exception)

    def test_task_not_found(self):
        exception = exceptions.InvalidCommand(not_found='my_task')
        exception.cmd_used = 'build'
        assert 'command `build` invalid parameter: "my_task".' in str(exception)

    def test_param_not_found(self):
        exception = exceptions.InvalidCommand(not_found='my_task')
        exception.cmd_used = None
        want = 'Invalid parameter: "my_task". Must be a command,'
        assert want in str(exception)
        assert 'Type "doit help" to see' in str(exception)

    def test_custom_binary_name(self):
        exception = exceptions.InvalidCommand(not_found='my_task')
        exception.cmd_used = None
        exception.bin_name = 'my_tool'
        assert 'Type "my_tool help" to see ' in str(exception)


class TestCatchedException(object):
    def test_name(self):
        class XYZ(exceptions.CatchedException):
            pass
        my_excp = XYZ("hello")
        assert 'XYZ' == my_excp.get_name()
        assert 'XYZ' in str(my_excp)
        assert 'XYZ' in repr(my_excp)

    def test_msg_notraceback(self):
        my_excp = exceptions.CatchedException('got you')
        msg = my_excp.get_msg()
        assert 'got you' in msg

    def test_exception(self):
        try:
            raise IndexError('too big')
        except Exception as e:
            my_excp = exceptions.CatchedException('got this', e)
        msg = my_excp.get_msg()
        assert 'got this' in msg
        assert 'too big' in msg
        assert 'IndexError' in msg

    def test_catched(self):
        try:
            raise IndexError('too big')
        except Exception as e:
            my_excp = exceptions.CatchedException('got this', e)
        my_excp2 = exceptions.CatchedException('handle that', my_excp)
        msg = my_excp2.get_msg()
        assert 'handle that' in msg
        assert 'got this' not in msg  # could be there too...
        assert 'too big' in msg
        assert 'IndexError' in msg


class TestAllCatched(object):
    def test(self):
        assert issubclass(exceptions.TaskFailed, exceptions.CatchedException)
        assert issubclass(exceptions.TaskError, exceptions.CatchedException)
        assert issubclass(exceptions.SetupError, exceptions.CatchedException)
        assert issubclass(exceptions.DependencyError,
                          exceptions.CatchedException)

doit-0.30.3/tests/test_filewatch.py

import os
import time
import threading

import pytest

from doit.filewatch import FileModifyWatcher, get_platform_system


def testUnsuportedPlatform(monkeypatch):
    monkeypatch.setattr(FileModifyWatcher, 'supported_platforms', ())
    pytest.raises(Exception, FileModifyWatcher, [])

platform = get_platform_system()

@pytest.mark.skipif('platform not in FileModifyWatcher.supported_platforms')
class TestFileWatcher(object):
    def testInit(self, restore_cwd, tmpdir):
        dir1 = 'data3'
        files = ('data/w1.txt', 'data/w2.txt')
        tmpdir.mkdir('data')
        for fname in files:
            tmpdir.join(fname).open('a').close()
        os.chdir(tmpdir.strpath)

        fw = FileModifyWatcher((files[0], files[1], dir1))
        # file_list contains absolute paths
        assert 2 == len(fw.file_list)
        assert os.path.abspath(files[0]) in fw.file_list
        assert os.path.abspath(files[1]) in fw.file_list
        # watch_dirs
        assert 2 == len(fw.watch_dirs)
        assert tmpdir.join('data') in fw.watch_dirs
        assert tmpdir.join('data3') in fw.watch_dirs
        # notify_dirs
        assert 1 == len(fw.notify_dirs)
        assert tmpdir.join('data3') in fw.notify_dirs

    def testHandleEventNotSubclassed(self):
        fw = FileModifyWatcher([])
        pytest.raises(NotImplementedError, fw.handle_event, None)

    def testLoop(self, restore_cwd, tmpdir):
        files = ['data/w1.txt', 'data/w2.txt', 'data/w3.txt']
        stop_file = 'data/stop'
        tmpdir.mkdir('data')
        for fname in files + [stop_file]:
            tmpdir.join(fname).open('a').close()
        os.chdir(tmpdir.strpath)

        fw = FileModifyWatcher((files[0], files[1], stop_file))
        events = []
        should_stop = []
        started = []
        def handle_event(event):
            events.append(event.pathname)
            if event.pathname.endswith("stop"):
                should_stop.append(True)
        fw.handle_event = handle_event

        def loop_callback(notifier):
            started.append(True)
            # force loop to stop
            if should_stop:
                raise KeyboardInterrupt

        loop_thread = threading.Thread(target=fw.loop, args=(loop_callback,))
        loop_thread.daemon = True
        loop_thread.start()

        # wait for watcher to be ready
        while not started:  # pragma: no cover
            time.sleep(0.01)
        assert loop_thread.isAlive()

        # write in watched file
        fd = open(files[0], 'w')
        fd.write("hi")
        fd.close()
        # write in non-watched file
        fd = open(files[2], 'w')
        fd.write("hi")
        fd.close()
        # write in another watched file
        fd = open(files[1], 'w')
        fd.write("hi")
        fd.close()

        # tricky to stop watching
        fd = open(stop_file, 'w')
        fd.write("hi")
        fd.close()
        time.sleep(0.1)
        loop_thread.join(1)

        if loop_thread.isAlive():  # pragma: no cover
            # this test is very flaky so we give it one more chance...
            # write on file to terminate thread
            fd = open(stop_file, 'w')
            fd.write("hi")
            fd.close()
            loop_thread.join(1)
            if loop_thread.is_alive():  # pragma: no cover
                raise Exception("thread not terminated")

        assert os.path.abspath(files[0]) == events[0]
        assert os.path.abspath(files[1]) == events[1]

doit-0.30.3/tests/test_loader.py

import os
import inspect

import pytest

from doit.exceptions import InvalidDodoFile, InvalidCommand
from doit.task import InvalidTask, DelayedLoader, Task
from doit.loader import flat_generator, get_module
from doit.loader import load_tasks, load_doit_config, generate_tasks
from doit.loader import create_after


class TestFlatGenerator(object):
    def test_nested(self):
        def myg(items):
            for x in items:
                yield x
        flat = flat_generator(myg([1, myg([2, myg([3, myg([4, myg([5])])])])]))
        assert [1, 2, 3, 4, 5] == [f[0] for f in flat]


class TestGetModule(object):
    def testAbsolutePath(self, restore_cwd):
        fileName = os.path.join(os.path.dirname(__file__), "loader_sample.py")
        dodo_module = get_module(fileName)
        assert hasattr(dodo_module, 'task_xxx1')

    def testRelativePath(self, restore_cwd):
        # test relative import, but test should still work from any path
        # so change cwd.
        this_path = os.path.join(os.path.dirname(__file__), '..')
        os.chdir(os.path.abspath(this_path))
        fileName = "tests/loader_sample.py"
        dodo_module = get_module(fileName)
        assert hasattr(dodo_module, 'task_xxx1')

    def testWrongFileName(self):
        fileName = os.path.join(os.path.dirname(__file__), "i_dont_exist.py")
        pytest.raises(InvalidDodoFile, get_module, fileName)

    def testInParentDir(self, restore_cwd):
        os.chdir(os.path.join(os.path.dirname(__file__), "data"))
        fileName = "loader_sample.py"
        pytest.raises(InvalidDodoFile, get_module, fileName)
        get_module(fileName, seek_parent=True)
        # cwd is changed to location of dodo.py
        assert os.getcwd() == os.path.dirname(os.path.abspath(fileName))

    def testWrongFileNameInParentDir(self, restore_cwd):
        os.chdir(os.path.join(os.path.dirname(__file__), "data"))
        fileName = os.path.join("i_dont_exist.py")
        pytest.raises(InvalidDodoFile, get_module, fileName, seek_parent=True)

    def testInvalidCwd(self, restore_cwd):
        fileName = os.path.join(os.path.dirname(__file__), "loader_sample.py")
        cwd = os.path.join(os.path.dirname(__file__), "dataX")
        pytest.raises(InvalidCommand, get_module, fileName, cwd)


class TestLoadTasks(object):

    @pytest.fixture
    def dodo(self):
        def task_xxx1():
            """task doc"""
            return {'actions': ['do nothing']}

        def task_yyy2():
            return {'actions': None}

        def bad_seed():
            pass
        task_nono = 5
        task_nono  # pyflakes
        return locals()

    def testNormalCase(self, dodo):
        task_list = load_tasks(dodo)
        assert 2 == len(task_list)
        assert 'xxx1' == task_list[0].name
        assert 'yyy2' == task_list[1].name

    def testCreateAfterDecorator(self):
        @create_after('yyy2')
        def task_zzz3():  # pragma: no cover
            pass
        # create_after annotates the function
        assert isinstance(task_zzz3.doit_create_after, DelayedLoader)
        assert task_zzz3.doit_create_after.task_dep == 'yyy2'

    def testInitialLoadDelayedTask(self, dodo):
        @create_after('yyy2')
        def task_zzz3():  # pragma: no cover
            raise Exception("Can't be executed on load phase")
        dodo['task_zzz3'] = task_zzz3
        # placeholder task is created with `loader` attribute
        task_list = load_tasks(dodo, allow_delayed=True)
        z_task = [t for t in task_list if t.name == 'zzz3'][0]
        assert z_task.loader.task_dep == 'yyy2'
        assert z_task.loader.creator == task_zzz3

    def testInitialLoadDelayedTask_no_delayed(self, dodo):
        @create_after('yyy2')
        def task_zzz3():
            yield {'basename': 'foo', 'actions': None}
            yield {'basename': 'bar', 'actions': None}
        dodo['task_zzz3'] = task_zzz3
        # load tasks as done by the `list` command
        task_list = load_tasks(dodo, allow_delayed=False)
        tasks = {t.name: t for t in task_list}
        assert 'zzz3' not in tasks
        assert tasks['foo'].loader is None
        assert tasks['bar'].loader is None

    def testInitialLoadDelayedTask_creates(self, dodo):
        @create_after('yyy2', creates=['foo', 'bar'])
        def task_zzz3():  # pragma: no cover
            '''my task doc'''
            raise Exception("Can't be executed on load phase")
        dodo['task_zzz3'] = task_zzz3
        # placeholder task is created with `loader` attribute
        task_list = load_tasks(dodo, allow_delayed=True)
        tasks = {t.name: t for t in task_list}
        assert 'zzz3' not in tasks
        f_task = tasks['foo']
        assert f_task.loader.task_dep == 'yyy2'
        assert f_task.loader.creator == task_zzz3
        assert tasks['bar'].loader is tasks['foo'].loader
        assert tasks['foo'].doc == 'my task doc'

    def testNameInBlacklist(self):
        dodo_module = {'task_cmd_name': lambda: None}
        pytest.raises(InvalidDodoFile, load_tasks, dodo_module, ['cmd_name'])

    def testDocString(self, dodo):
        task_list = load_tasks(dodo)
        assert "task doc" == task_list[0].doc

    def testUse_create_doit_tasks(self):
        def original():
            pass
        def creator():
            return {'actions': ['do nothing'], 'file_dep': ['foox']}
        original.create_doit_tasks = creator
        task_list = load_tasks({'x': original})
        assert 1 == len(task_list)
        assert set(['foox']) == task_list[0].file_dep

    def testUse_create_doit_tasks_only_noargs_call(self):
        class Foo(object):
            def create_doit_tasks(self):
                return {'actions': ['do nothing'], 'file_dep': ['fooy']}
        task_list = load_tasks({'Foo': Foo, 'foo': Foo()})
        assert len(task_list) == 1
        assert task_list[0].file_dep == set(['fooy'])

    def testUse_object_methods(self):
        class Dodo(object):
            def foo(self):  # pragma: no cover
                pass
            def task_method1(self):
                return {'actions': None}
            def task_method2(self):
                return {'actions': None}
        methods = dict(inspect.getmembers(Dodo()))
        task_list = load_tasks(methods)
        assert 2 == len(task_list)
        assert 'method1' == task_list[0].name
        assert 'method2' == task_list[1].name


class TestDodoConfig(object):
    def testConfigType_Error(self):
        pytest.raises(InvalidDodoFile, load_doit_config, {'DOIT_CONFIG': 'abc'})

    def testConfigDict_Ok(self):
        config = load_doit_config({'DOIT_CONFIG': {'verbose': 2}})
        assert {'verbose': 2} == config

    def testDefaultConfig_Dict(self):
        config = load_doit_config({'whatever': 2})
        assert {} == config


class TestGenerateTaskInvalid(object):
    def testInvalidValue(self):
        pytest.raises(InvalidTask, generate_tasks, "dict", 'xpto 14')


class TestGenerateTaskNone(object):
    def testEmpty(self):
        tasks = generate_tasks('xx', None)
        assert len(tasks) == 0


class TestGenerateTasksSingle(object):
    def testDict(self):
        tasks = generate_tasks("my_name", {'actions': ['xpto 14']})
        assert isinstance(tasks[0], Task)
        assert "my_name" == tasks[0].name

    def testTaskObj(self):
        tasks = generate_tasks("foo", Task('bar', None))
        assert 1 == len(tasks)
        assert tasks[0].name == 'bar'

    def testBaseName(self):
        tasks = generate_tasks("function_name", {
            'basename': 'real_task_name',
            'actions': ['xpto 14'],
        })
        assert isinstance(tasks[0], Task)
        assert "real_task_name" == tasks[0].name
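The nested-generator flattening exercised by `TestFlatGenerator` above can be sketched with the stdlib alone. This is a simplified stand-in, not doit's real `flat_generator`: it only mirrors the observable behavior that values come out in depth-first order as `(value, doc)` pairs (hence the `f[0]` indexing in the test); the real implementation may differ in how it tracks docstrings:

```python
import inspect

def flat_generator(gen, gen_doc=''):
    """Depth-first flattening of arbitrarily nested generators.

    Simplified sketch: yields (value, doc) pairs so callers can take
    item[0] for the value, as the test above does.
    """
    for item in gen:
        if inspect.isgenerator(item):
            # recurse into nested generators
            yield from flat_generator(item, gen_doc)
        else:
            yield item, gen_doc

def myg(items):
    for x in items:
        yield x

flat = list(flat_generator(myg([1, myg([2, myg([3])])])))
assert [1, 2, 3] == [f[0] for f in flat]
```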
    # name field is only for subtasks.
    def testInvalidNameField(self):
        pytest.raises(InvalidTask, generate_tasks, "my_name",
                      {'actions': ['xpto 14'], 'name': 'bla bla'})

    def testUseDocstring(self):
        tasks = generate_tasks("dict", {'actions': ['xpto 14']}, "my doc")
        assert "my doc" == tasks[0].doc

    def testDocstringNotUsed(self):
        mytask = {'actions': ['xpto 14'], 'doc': 'from dict'}
        tasks = generate_tasks("dict", mytask, "from docstring")
        assert "from dict" == tasks[0].doc


class TestGenerateTasksGenerator(object):
    def testGenerator(self):
        def f_xpto():
            for i in range(3):
                yield {'name': str(i), 'actions': ["xpto -%d" % i]}
        tasks = generate_tasks("xpto", f_xpto())
        assert isinstance(tasks[0], Task)
        assert 4 == len(tasks)
        assert not tasks[0].is_subtask
        assert "xpto:0" == tasks[0].task_dep[0]
        assert "xpto:0" == tasks[1].name
        assert tasks[1].is_subtask

    def testMultiLevelGenerator(self):
        def f_xpto(base_name):
            """second level docstring"""
            for i in range(3):
                name = "%s-%d" % (base_name, i)
                yield {'name': name, 'actions': ["xpto -%d" % i]}
        def f_first_level():
            for i in range(2):
                yield f_xpto(str(i))
        tasks = generate_tasks("xpto", f_first_level())
        assert isinstance(tasks[0], Task)
        assert 7 == len(tasks)
        assert not tasks[0].is_subtask
        assert f_xpto.__doc__ == tasks[0].doc
        assert tasks[1].is_subtask
        assert "xpto:0-0" == tasks[1].name
        assert "xpto:1-2" == tasks[-1].name

    def testGeneratorReturnTaskObj(self):
        def foo(base_name):
            for i in range(3):
                name = "%s-%d" % (base_name, i)
                yield Task(name, actions=["xpto -%d" % i])
        tasks = generate_tasks("foo", foo('bar'))
        assert 3 == len(tasks)
        assert tasks[0].name == 'bar-0'
        assert tasks[1].name == 'bar-1'
        assert tasks[2].name == 'bar-2'

    def testGeneratorDoesntReturnDict(self):
        def f_xpto():
            for i in range(3):
                yield "xpto -%d" % i
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorDictMissingAction(self):
        def f_xpto():
            for i in range(3):
                yield {'name': str(i)}
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorDictMissingName(self):
        def f_xpto():
            for i in range(3):
                yield {'actions': ["xpto -%d" % i]}
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorBasename(self):
        def f_xpto():
            for i in range(3):
                yield {'basename': str(i), 'actions': ["xpto"]}
        tasks = sorted(generate_tasks("xpto", f_xpto()), key=lambda t: t.name)
        assert isinstance(tasks[0], Task)
        assert 3 == len(tasks)
        assert "0" == tasks[0].name
        assert not tasks[0].is_subtask
        assert not tasks[1].is_subtask

    def testGeneratorBasenameName(self):
        def f_xpto():
            for i in range(3):
                yield {'basename': 'xpto', 'name': str(i), 'actions': ["a"]}
        tasks = sorted(generate_tasks("f_xpto", f_xpto()))
        assert isinstance(tasks[0], Task)
        assert 4 == len(tasks)
        assert "xpto" == tasks[0].name
        assert "xpto:0" == tasks[1].name
        assert not tasks[0].is_subtask
        assert tasks[1].is_subtask

    def testGeneratorBasenameCanNotRepeat(self):
        def f_xpto():
            for i in range(3):
                yield {'basename': 'again', 'actions': ["xpto"]}
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorBasenameCanNotRepeatNonGroup(self):
        def f_xpto():
            yield {'basename': 'xpto', 'actions': ["a"]}
            for i in range(3):
                yield {'name': str(i), 'actions': ["a"]}
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorNameCanNotRepeat(self):
        def f_xpto():
            yield {'basename': 'bn', 'name': 'xxx', 'actions': ["xpto"]}
            yield {'basename': 'bn', 'name': 'xxx', 'actions': ["xpto2"]}
        pytest.raises(InvalidTask, generate_tasks, "xpto", f_xpto())

    def testGeneratorDocString(self):
        def f_xpto():
            "the doc"
            for i in range(3):
                yield {'name': str(i), 'actions': ["xpto -%d" % i]}
        tasks = sorted(generate_tasks("xpto", f_xpto(), f_xpto.__doc__))
        assert "the doc" == tasks[0].doc

    def testGeneratorWithNoTasks(self):
        def f_xpto():
            for x in []:
                yield x
        tasks = generate_tasks("xpto", f_xpto())
        assert 1 == len(tasks)
        assert "xpto" == tasks[0].name
        assert not tasks[0].is_subtask

    def testGeneratorBaseOnly(self):
        def f_xpto():
            yield {'basename': 'xpto', 'name': None, 'doc': 'xxx doc'}
        tasks = sorted(generate_tasks("f_xpto", f_xpto()))
        assert 1 == len(tasks)
        assert isinstance(tasks[0], Task)
        assert "xpto" == tasks[0].name
        assert tasks[0].has_subtask
        assert 'xxx doc' == tasks[0].doc

doit-0.30.3/tests/test_plugin.py

import pytest
from mock import Mock

from doit.plugin import PluginEntry, PluginDict


class TestPluginEntry(object):
    def test_repr(self):
        plugin = PluginEntry('category1', 'name1', 'mock:Mock')
        assert "PluginEntry('category1', 'name1', 'mock:Mock')" == repr(plugin)

    def test_get(self):
        plugin = PluginEntry('category1', 'name1', 'mock:Mock')
        got = plugin.get()
        assert got is Mock

    def test_load_error_module_not_found(self):
        plugin = PluginEntry('category1', 'name1', 'i_dont:exist')
        with pytest.raises(Exception) as exc_info:
            plugin.load()
        assert 'Plugin category1 module `i_dont`' in str(exc_info.value)

    def test_load_error_obj_not_found(self):
        plugin = PluginEntry('category1', 'name1', 'mock:i_dont_exist')
        with pytest.raises(Exception) as exc_info:
            plugin.load()
        assert 'Plugin category1:name1 module `mock`' in str(exc_info.value)
        assert 'i_dont_exist' in str(exc_info.value)


class TestPluginDict(object):

    @pytest.fixture
    def plugins(self):
        plugins = PluginDict()
        config_dict = {'name1': 'pytest:raises', 'name2': 'mock:Mock'}
        plugins.add_plugins({'category1': config_dict}, 'category1')
        return plugins

    def test_add_plugins_from_dict(self, plugins):
        assert len(plugins) == 2
        name1 = plugins['name1']
        assert isinstance(name1, PluginEntry)
        assert name1.category == 'category1'
        assert name1.name == 'name1'
        assert name1.location == 'pytest:raises'

    def test_add_plugins_from_pkg_resources(self, monkeypatch):
        # mock entry points
        import pkg_resources
        def fake_entries(group):
            yield pkg_resources.EntryPoint('name1', 'pytest', ('raises',))
        monkeypatch.setattr(pkg_resources, 'iter_entry_points', fake_entries)

        plugins = PluginDict()
        plugins.add_plugins({}, 'category2')
        name1 = plugins['name1']
        assert isinstance(name1, PluginEntry)
        assert name1.category == 'category2'
        assert name1.name == 'name1'
        assert name1.location == 'pytest:raises'

    def test_get_plugin_actual_plugin(self, plugins):
        assert plugins.get_plugin('name2') is Mock

    def test_get_plugin_not_a_plugin(self, plugins):
        my_val = 4
        plugins['builtin-item'] = my_val
        assert plugins.get_plugin('builtin-item') is my_val

    def test_to_dict(self, plugins):
        expected = {'name1': pytest.raises, 'name2': Mock}
        assert plugins.to_dict() == expected

doit-0.30.3/tests/test_reporter.py

import sys
import json
from io import StringIO

from doit import reporter
from doit.task import Task
from doit.exceptions import CatchedException


class TestConsoleReporter(object):

    def test_initialize(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.initialize([Task("t_name", None)])
        # no output on initialize
        assert "" in rep.outstream.getvalue()

    def test_startTask(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.get_status(Task("t_name", None))
        # no output on start task
        assert "" in rep.outstream.getvalue()

    def test_executeTask(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        def do_nothing(): pass
        t1 = Task("with_action", [(do_nothing,)])
        rep.execute_task(t1)
        assert ". with_action\n" == rep.outstream.getvalue()

    def test_executeTask_unicode(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        def do_nothing(): pass
        task_name = "中文 with_action"
        t1 = Task(task_name, [(do_nothing,)])
        rep.execute_task(t1)
        assert ". 中文 with_action\n" == rep.outstream.getvalue()

    def test_executeHidden(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        def do_nothing(): pass
        t1 = Task("_hidden", [(do_nothing,)])
        rep.execute_task(t1)
        assert "" == rep.outstream.getvalue()

    def test_executeGroupTask(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.execute_task(Task("t_name", None))
        assert "" == rep.outstream.getvalue()

    def test_skipUptodate(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.skip_uptodate(Task("t_name", None))
        assert "-- " in rep.outstream.getvalue()
        assert "t_name" in rep.outstream.getvalue()

    def test_skipUptodate_hidden(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.skip_uptodate(Task("_name", None))
        assert "" == rep.outstream.getvalue()

    def test_skipIgnore(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.skip_ignore(Task("t_name", None))
        assert "!! " in rep.outstream.getvalue()
        assert "t_name" in rep.outstream.getvalue()

    def test_cleanupError(self, capsys):
        rep = reporter.ConsoleReporter(StringIO(), {})
        exception = CatchedException("I got you")
        rep.cleanup_error(exception)
        err = capsys.readouterr()[1]
        assert "I got you" in err

    def test_teardownTask(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.teardown_task(Task("t_name", None))
        # no output on teardown task
        assert "" in rep.outstream.getvalue()

    def test_addSuccess(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        rep.add_success(Task("t_name", None))
        # no output on success task
        assert "" in rep.outstream.getvalue()

    def test_addFailure(self):
        rep = reporter.ConsoleReporter(StringIO(), {})
        try:
            raise Exception("original 中文 exception message here")
        except Exception as e:
            catched = CatchedException("catched exception there", e)
        rep.add_failure(Task("t_name", None), catched)
        rep.complete_run()
        got = rep.outstream.getvalue()
        # description
        assert "Exception: original 中文 exception message here" in got, got
        # traceback
        assert """raise Exception("original 中文 exception message here")""" in got
        # catched message
        assert "catched exception there" in got

    def test_runtime_error(self):
        msg = "runtime error"
        rep = reporter.ConsoleReporter(StringIO(), {})
        assert [] == rep.runtime_errors
        # no immediate output
        rep.runtime_error(msg)
        assert 1 == len(rep.runtime_errors)
        assert msg == rep.runtime_errors[0]
        assert "" in rep.outstream.getvalue()
        # runtime errors abort execution
        rep.complete_run()
        got = rep.outstream.getvalue()
        assert msg in got
        assert "Execution aborted" in got


class TestExecutedOnlyReporter(object):
    def test_skipUptodate(self):
        rep = reporter.ExecutedOnlyReporter(StringIO(), {})
        rep.skip_uptodate(Task("t_name", None))
        assert "" == rep.outstream.getvalue()

    def test_skipIgnore(self):
        rep = reporter.ExecutedOnlyReporter(StringIO(), {})
        rep.skip_ignore(Task("t_name", None))
        assert "" == rep.outstream.getvalue()


class TestZeroReporter(object):
    def test_executeTask(self):
        rep = reporter.ZeroReporter(StringIO(), {})
        def do_nothing(): pass
        t1 = Task("with_action", [(do_nothing,)])
        rep.execute_task(t1)
        assert "" == rep.outstream.getvalue()

    def test_runtime_error(self, capsys):
        msg = "zero runtime error"
        rep = reporter.ZeroReporter(StringIO(), {})
        # immediate output
        rep.runtime_error(msg)
        assert msg in capsys.readouterr()[1]


class TestTaskResult(object):
    def test(self):
        def sample():
            print("this is printed")
        t1 = Task("t1", [(sample,)])
        result = reporter.TaskResult(t1)
        result.start()
        t1.execute()
        result.set_result('success')
        got = result.to_dict()
        assert t1.name == got['name'], got
        assert 'success' == got['result'], got
        assert "this is printed\n" == got['out'], got
        assert "" == got['err'], got
        assert got['started']
        assert 'elapsed' in got


class TestJsonReporter(object):

    def test_normal(self):
        output = StringIO()
        rep = reporter.JsonReporter(output)
        t1 = Task("t1", None)
        t2 = Task("t2", None)
        t3 = Task("t3", None)
        t4 = Task("t4", None)
        expected = {'t1': 'fail', 't2': 'up-to-date',
                    't3': 'success', 't4': 'ignore'}
        # t1 fail
        rep.get_status(t1)
        rep.execute_task(t1)
        rep.add_failure(t1, CatchedException('t1 failed!'))
        # t2 skipped
        rep.get_status(t2)
        rep.skip_uptodate(t2)
        # t3 success
        rep.get_status(t3)
        rep.execute_task(t3)
        rep.add_success(t3)
        # t4 ignore
        rep.get_status(t4)
        rep.skip_ignore(t4)
        rep.teardown_task(t4)

        rep.complete_run()
        got = json.loads(output.getvalue())
        for task_result in got['tasks']:
            assert expected[task_result['name']] == task_result['result'], got
            if task_result['name'] == 't1':
                assert 't1 failed!' in task_result['error']

    def test_cleanup_error(self, capsys):
        output = StringIO()
        rep = reporter.JsonReporter(output)
        t1 = Task("t1", None)
        msg = "cleanup error"
        exception = CatchedException(msg)
        assert [] == rep.errors
        rep.get_status(t1)
        rep.execute_task(t1)
        rep.add_success(t1)
        rep.cleanup_error(exception)
        assert [msg + '\n'] == rep.errors
        assert "" in rep.outstream.getvalue()
        rep.complete_run()
        got = json.loads(output.getvalue())
        assert msg in got['err']

    def test_runtime_error(self):
        output = StringIO()
        rep = reporter.JsonReporter(output)
        t1 = Task("t1", None)
        msg = "runtime error"
        assert [] == rep.errors
        rep.get_status(t1)
        rep.execute_task(t1)
        rep.add_success(t1)
        rep.runtime_error(msg)
        assert [msg] == rep.errors
        assert "" in rep.outstream.getvalue()
        # runtime errors abort execution
        rep.complete_run()
        got = json.loads(output.getvalue())
        assert msg in got['err']

    def test_ignore_stdout(self):
        output = StringIO()
        rep = reporter.JsonReporter(output)
        sys.stdout.write("info that doesnt belong to any task...")
        sys.stderr.write('something on err')
        t1 = Task("t1", None)
        expected = {'t1': 'success'}
        rep.get_status(t1)
        rep.execute_task(t1)
        rep.add_success(t1)
        rep.complete_run()
        got = json.loads(output.getvalue())
        assert expected[got['tasks'][0]['name']] == got['tasks'][0]['result']
        assert "info that doesnt belong to any task..." == got['out']
        assert "something on err" == got['err']

doit-0.30.3/tests/test_runner.py

import os
import pickle
from multiprocessing import Queue
import platform

import pytest
from mock import Mock

from doit.exceptions import InvalidTask
from doit.dependency import DbmDB, Dependency
from doit.reporter import ConsoleReporter
from doit.task import Task, DelayedLoader
from doit.control import TaskDispatcher, ExecNode
from doit import runner

PLAT_IMPL = platform.python_implementation()

# sample actions
def my_print(*args):
    pass
def _fail():
    return False
def _error():
    raise Exception("I am the exception.\n")
def _exit():
    raise SystemExit()

def simple_result():
    return 'my-result'


class FakeReporter(object):
    """Just log everything in internal attribute - used on tests"""
    def __init__(self, outstream=None, options=None):
        self.log = []

    def get_status(self, task):
        self.log.append(('start', task))

    def execute_task(self, task):
        self.log.append(('execute', task))

    def add_failure(self, task, exception):
        self.log.append(('fail', task))

    def add_success(self, task):
        self.log.append(('success', task))

    def skip_uptodate(self, task):
        self.log.append(('up-to-date', task))

    def skip_ignore(self, task):
        self.log.append(('ignore', task))

    def cleanup_error(self, exception):
        self.log.append(('cleanup_error',))

    def runtime_error(self, msg):
        self.log.append(('runtime_error',))

    def teardown_task(self, task):
        self.log.append(('teardown', task))

    def complete_run(self):
        pass


@pytest.fixture
def reporter(request):
    return FakeReporter()


class TestRunner(object):
    def testInit(self, reporter, dep_manager):
        my_runner = runner.Runner(dep_manager, reporter)
        assert False == my_runner._stop_running
        assert runner.SUCCESS == my_runner.final_result


class TestRunner_SelectTask(object):
    def test_ready(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])])
        my_runner = runner.Runner(dep_manager, reporter)
        assert True == my_runner.select_task(ExecNode(t1, None), {})
        assert ('start', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_DependencyError(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])],
                  file_dep=["i_dont_exist"])
        my_runner = runner.Runner(dep_manager, reporter)
        assert False == my_runner.select_task(ExecNode(t1, None), {})
        assert ('start', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_upToDate(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])], file_dep=[__file__])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.dep_manager.save_success(t1)
        assert False == my_runner.select_task(ExecNode(t1, None), {})
        assert ('start', t1) == reporter.log.pop(0)
        assert ('up-to-date', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_ignore(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.dep_manager.ignore(t1)
        assert False == my_runner.select_task(ExecNode(t1, None), {})
        assert ('start', t1) == reporter.log.pop(0)
        assert ('ignore', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_alwaysExecute(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])], uptodate=[True])
        my_runner = runner.Runner(dep_manager, reporter, always_execute=True)
        my_runner.dep_manager.save_success(t1)
        n1 = ExecNode(t1, None)
        assert True == my_runner.select_task(n1, {})
        # run_status is set to run even if task is up-to-date
        assert n1.run_status == 'run'
        assert ('start', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_noSetup_ok(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])])
        my_runner = runner.Runner(dep_manager, reporter)
        assert True == my_runner.select_task(ExecNode(t1, None), {})
        assert ('start', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_withSetup(self, reporter, dep_manager):
        t1 = Task("taskX", [(my_print, ["out a"])], setup=["taskY"])
        my_runner = runner.Runner(dep_manager, reporter)
        # defer execution
        n1 = ExecNode(t1, None)
        assert False == my_runner.select_task(n1, {})
        assert ('start', t1) == reporter.log.pop(0)
        assert not reporter.log
        # trying to select again
        assert True == my_runner.select_task(n1, {})
        assert not reporter.log

    def test_getargs_ok(self, reporter, dep_manager):
        def ok():
            return {'x': 1}
        def check_x(my_x):
            return my_x == 1
        t1 = Task('t1', [(ok,)])
        n1 = ExecNode(t1, None)
        t2 = Task('t2', [(check_x,)], getargs={'my_x': ('t1', 'x')})
        n2 = ExecNode(t2, None)
        tasks_dict = {'t1': t1, 't2': t2}
        my_runner = runner.Runner(dep_manager, reporter)

        # t2 gives chance for setup tasks to be executed
        assert False == my_runner.select_task(n2, tasks_dict)
        assert ('start', t2) == reporter.log.pop(0)

        # execute task t1 to calculate value
        assert True == my_runner.select_task(n1, tasks_dict)
        assert ('start', t1) == reporter.log.pop(0)
        t1_result = my_runner.execute_task(t1)
        assert ('execute', t1) == reporter.log.pop(0)
        my_runner.process_task_result(n1, t1_result)
        assert ('success', t1) == reporter.log.pop(0)

        # t2.options are set on select_task
        assert True == my_runner.select_task(n2, tasks_dict)
        assert not reporter.log
        assert {'my_x': 1} == t2.options

    def test_getargs_fail(self, reporter, dep_manager):
        # invalid getargs. Exception will be raised and task will fail
        def check_x(my_x):
            return True
        t1 = Task('t1', [lambda: True])
        n1 = ExecNode(t1, None)
        t2 = Task('t2', [(check_x,)], getargs={'my_x': ('t1', 'x')})
        n2 = ExecNode(t2, None)
        tasks_dict = {'t1': t1, 't2': t2}
        my_runner = runner.Runner(dep_manager, reporter)

        # t2 gives chance for setup tasks to be executed
        assert False == my_runner.select_task(n2, tasks_dict)
        assert ('start', t2) == reporter.log.pop(0)
        # execute task t1 to calculate value
        assert True == my_runner.select_task(n1, tasks_dict)
        assert ('start', t1) == reporter.log.pop(0)
        t1_result = my_runner.execute_task(t1)
        assert ('execute', t1) == reporter.log.pop(0)
        my_runner.process_task_result(n1, t1_result)
        assert ('success', t1) == reporter.log.pop(0)
        # select_task t2 fails
        assert False == my_runner.select_task(n2, tasks_dict)
        assert ('fail', t2) == reporter.log.pop(0)
        assert not reporter.log

    def test_getargs_dict(self, reporter, dep_manager):
        def ok():
            return {'x': 1}
        t1 = Task('t1', [(ok,)])
        n1 = ExecNode(t1, None)
        t2 = Task('t2', None, getargs={'my_x': ('t1', None)})
        tasks_dict = {'t1': t1, 't2': t2}
        my_runner = runner.Runner(dep_manager, reporter)
        t1_result = my_runner.execute_task(t1)
        my_runner.process_task_result(n1, t1_result)
        # t2.options are set on _get_task_args
        my_runner._get_task_args(t2, tasks_dict)
        assert {'my_x': {'x': 1}} == t2.options

    def test_getargs_group(self, reporter, dep_manager):
        def ok():
            return {'x': 1}
        t1 = Task('t1', None, task_dep=['t1:a'], has_subtask=True)
        t1a = Task('t1:a', [(ok,)], is_subtask=True)
        t2 = Task('t2', None, getargs={'my_x': ('t1', None)})
        tasks_dict = {'t1': t1, 't1a': t1a, 't2': t2}
        my_runner = runner.Runner(dep_manager, reporter)
        t1a_result = my_runner.execute_task(t1a)
        my_runner.process_task_result(ExecNode(t1a, None), t1a_result)
        # t2.options are set on _get_task_args
        my_runner._get_task_args(t2, tasks_dict)
        assert {'my_x': {'a': {'x': 1}}} == t2.options

    def test_getargs_group_value(self, reporter, dep_manager):
        def ok():
            return {'x': 1}
        t1 = Task('t1', None, task_dep=['t1:a'], has_subtask=True)
        t1a = Task('t1:a', [(ok,)], is_subtask=True)
        t2 = Task('t2', None, getargs={'my_x': ('t1', 'x')})
        tasks_dict = {'t1': t1, 't1a': t1a, 't2': t2}
        my_runner = runner.Runner(dep_manager, reporter)
        t1a_result = my_runner.execute_task(t1a)
        my_runner.process_task_result(ExecNode(t1a, None), t1a_result)
        # t2.options are set on _get_task_args
        my_runner._get_task_args(t2, tasks_dict)
        assert {'my_x': {'a': 1}} == t2.options


class TestTask_Teardown(object):
    def test_ok(self, reporter, dep_manager):
        touched = []
        def touch():
            touched.append(1)
        t1 = Task('t1', [], teardown=[(touch,)])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.teardown_list = [t1]
        t1.execute()
        my_runner.teardown()
        assert 1 == len(touched)
        assert ('teardown', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_reverse_order(self, reporter, dep_manager):
        def do_nothing(): pass
        t1 = Task('t1', [], teardown=[do_nothing])
        t2 = Task('t2', [], teardown=[do_nothing])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.teardown_list = [t1, t2]
        t1.execute()
        t2.execute()
        my_runner.teardown()
        assert ('teardown', t2) == reporter.log.pop(0)
        assert ('teardown', t1) == reporter.log.pop(0)
        assert not reporter.log

    def test_errors(self, reporter, dep_manager):
        def raise_something(x):
            raise Exception(x)
        t1 = Task('t1', [], teardown=[(raise_something, ['t1 blow'])])
        t2 = Task('t2', [], teardown=[(raise_something, ['t2 blow'])])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.teardown_list = [t1, t2]
        t1.execute()
        t2.execute()
        my_runner.teardown()
        assert ('teardown', t2) == reporter.log.pop(0)
        assert ('cleanup_error',) == reporter.log.pop(0)
        assert ('teardown', t1) == reporter.log.pop(0)
        assert ('cleanup_error',) == reporter.log.pop(0)
        assert not reporter.log


class TestTask_RunAll(object):
    def test_reporter_runtime_error(self, reporter, dep_manager):
        t1 = Task('t1', [], calc_dep=['t2'])
        t2 = Task('t2', [lambda: {'file_dep': [1]}])
        my_runner = runner.Runner(dep_manager, reporter)
        my_runner.run_all(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert runner.ERROR == my_runner.final_result
        assert ('start', t2) == reporter.log.pop(0)
        assert ('execute', t2) == reporter.log.pop(0)
        assert ('success', t2) == reporter.log.pop(0)
        assert ('runtime_error',) == reporter.log.pop(0)
        assert not reporter.log


# run tests in both single-process runner and multi-process runner
RUNNERS = [runner.Runner, runner.MThreadRunner]
# TODO: test should be added and skipped!
if runner.MRunner.available():
    RUNNERS.append(runner.MRunner)

@pytest.fixture(params=RUNNERS)
def RunnerClass(request):
    return request.param


# functions used on actions, defined here to make sure they are picklable
def ok():
    return "ok"
def ok2():
    return "different"
def my_action():
    import sys
    sys.stdout.write('out here')
    sys.stderr.write('err here')
    return {'bb': 5}
def use_args(arg1):
    print(arg1)
def make_args():
    return {'myarg': 1}
def action_add_filedep(task, extra_dep):
    task.file_dep.add(extra_dep)


class TestRunner_run_tasks(object):

    def test_teardown(self, reporter, RunnerClass, dep_manager):
        t1 = Task('t1', [], teardown=[ok])
        t2 = Task('t2', [])
        my_runner = RunnerClass(dep_manager, reporter)
        assert [] == my_runner.teardown_list
        my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        my_runner.finish()
        assert ('teardown', t1) == reporter.log[-1]

    # testing whole process/API
    def test_success(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(my_print, ["out a"])])
        t2 = Task("t2", [(my_print, ["out a"])])
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert runner.SUCCESS == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0), reporter.log
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('success', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('execute', t2) == reporter.log.pop(0)
        assert ('success', t2) == reporter.log.pop(0)

    # test result, value, out, err are saved into task
    def test_result(self, reporter, RunnerClass, dep_manager):
        task = Task("taskY", [my_action])
        my_runner = RunnerClass(dep_manager, reporter)
        assert None == task.result
        assert {} == task.values
        assert [None] == [a.out for a in task.actions]
        assert [None] == [a.err for a in task.actions]
        my_runner.run_tasks(TaskDispatcher({'taskY':task}, [], ['taskY']))
        assert runner.SUCCESS == my_runner.finish()
        assert {'bb': 5} == task.result
        assert {'bb': 5} == task.values
        assert ['out here'] == [a.out for a in task.actions]
        assert ['err here'] == [a.err for a in task.actions]

    # whenever a task fails, remaining tasks are not executed
    def test_failureOutput(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [_fail])
        t2 = Task("t2", [_fail])
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert runner.FAILURE == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        # second task is not executed
        assert 0 == len(reporter.log)

    def test_error(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [_error])
        t2 = Task("t2", [_error])
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert runner.ERROR == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        # second task is not executed
        assert 0 == len(reporter.log)

    # when successful, dependencies are updated
    def test_updateDependencies(self, reporter, RunnerClass, depfile_name):
        depPath = os.path.join(os.path.dirname(__file__), "data", "dependency1")
        ff = open(depPath, "a")
        ff.write("xxx")
        ff.close()
        dependencies = [depPath]

        filePath = os.path.join(os.path.dirname(__file__), "data", "target")
        ff = open(filePath, "a")
        ff.write("xxx")
        ff.close()
        targets = [filePath]

        t1 = Task("t1", [my_print], dependencies, targets)
        dep_manager = Dependency(DbmDB, depfile_name)
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1}, [], ['t1']))
        assert runner.SUCCESS == my_runner.finish()
        d = Dependency(DbmDB, depfile_name)
        assert d._get("t1", os.path.abspath(depPath))

    def test_continue(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(_fail,)])
        t2 = Task("t2", [(_error,)])
        t3 = Task("t3", [(ok,)])
        my_runner = RunnerClass(dep_manager, reporter, continue_=True)
        disp = TaskDispatcher({'t1':t1, 't2':t2, 't3':t3}, [], ['t1', 't2', 't3'])
        my_runner.run_tasks(disp)
        assert runner.ERROR == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('execute', t2) == reporter.log.pop(0)
        assert ('fail', t2) == reporter.log.pop(0)
        assert ('start', t3) == reporter.log.pop(0)
        assert ('execute', t3) == reporter.log.pop(0)
        assert ('success', t3) == reporter.log.pop(0)
        assert 0 == len(reporter.log)

    def test_continue_dont_execute_parent_of_failed_task(
            self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(_error,)])
        t2 = Task("t2", [(ok,)], task_dep=['t1'])
        t3 = Task("t3", [(ok,)])
        my_runner = RunnerClass(dep_manager, reporter, continue_=True)
        disp = TaskDispatcher({'t1':t1, 't2':t2, 't3':t3}, [], ['t1', 't2', 't3'])
        my_runner.run_tasks(disp)
        assert runner.ERROR == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('fail', t2) == reporter.log.pop(0)
        assert ('start', t3) == reporter.log.pop(0)
        assert ('execute', t3) == reporter.log.pop(0)
        assert ('success', t3) == reporter.log.pop(0)
        assert 0 == len(reporter.log)

    def test_continue_dep_error(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(ok,)], file_dep=['i_dont_exist'])
        t2 = Task("t2", [(ok,)], task_dep=['t1'])
        my_runner = RunnerClass(dep_manager, reporter, continue_=True)
        disp = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        my_runner.run_tasks(disp)
        assert runner.ERROR == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('fail', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('fail', t2) == reporter.log.pop(0)
        assert 0 == len(reporter.log)

    def test_continue_ignored_dep(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(ok,)], )
        t2 = Task("t2", [(ok,)], task_dep=['t1'])
        my_runner = RunnerClass(dep_manager, reporter, continue_=True)
        my_runner.dep_manager.ignore(t1)
        disp = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        my_runner.run_tasks(disp)
        assert runner.SUCCESS == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('ignore', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('ignore', t2) == reporter.log.pop(0)
        assert 0 == len(reporter.log)

    def test_getargs(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [(use_args,)], getargs=dict(arg1=('t2','myarg')))
        t2 = Task("t2", [(make_args,)])
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert runner.SUCCESS == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('execute', t2) == reporter.log.pop(0)
        assert ('success', t2) == reporter.log.pop(0)
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('success', t1) == reporter.log.pop(0)
        assert 0 == len(reporter.log)

    def testActionModifiesFiledep(self, reporter, RunnerClass, dep_manager):
        extra_dep = os.path.join(os.path.dirname(__file__), 'sample_md5.txt')
        t1 = Task("t1", [(my_print, ["out a"]),
                         (action_add_filedep, (), {'extra_dep': extra_dep}),
                         ])
        my_runner = RunnerClass(dep_manager, reporter)
        my_runner.run_tasks(TaskDispatcher({'t1':t1}, [], ['t1']))
        assert runner.SUCCESS == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0), reporter.log
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('success', t1) == reporter.log.pop(0)
        assert t1.file_dep == set([extra_dep])

    # runner should not interfere with SystemExit
    def testSystemExitRaises(self, reporter, RunnerClass, dep_manager):
        t1 = Task("t1", [_exit])
        my_runner = RunnerClass(dep_manager, reporter)
        disp = TaskDispatcher({'t1':t1}, [], ['t1'])
        pytest.raises(SystemExit, my_runner.run_tasks, disp)
        my_runner.finish()


@pytest.mark.skipif('not runner.MRunner.available()')
class TestMReporter(object):
    class MyRunner(object):
        def __init__(self):
            self.result_q = Queue()

    def testReporterMethod(self, reporter):
        fake_runner = self.MyRunner()
        mp_reporter = runner.MReporter(fake_runner, reporter)
        my_task = Task("task x", [])
        mp_reporter.add_success(my_task)
        # note: limit is 2 seconds because of http://bugs.python.org/issue17707
        got = fake_runner.result_q.get(True, 2)
        assert {'name': "task x", "reporter": 'add_success'} == got

    def testNonReporterMethod(self, reporter):
        fake_runner = self.MyRunner()
        mp_reporter = runner.MReporter(fake_runner, reporter)
        assert hasattr(mp_reporter, 'add_success')
        assert not hasattr(mp_reporter, 'no_existent_method')


class TestJobTask(object):

    def test_closure_is_picklable(self):
        # can pickle because we use cloudpickle
        def non_top_function():
            return 4
        t1 = Task('t1', [non_top_function])
        t1p = runner.JobTask(t1).task_pickle
        t2 = pickle.loads(t1p)
        assert 4 == t2.actions[0].py_callable()

    @pytest.mark.xfail('PLAT_IMPL == "PyPy"')  # pypy can handle it :)
    def test_not_picklable_raises_InvalidTask(self):
        # create a large enough recursive obj so pickle fails
        d1 = {}
        last = d1
        for x in range(400):
            dn = {'p': last}
            last = dn
        d1['p'] = last
        def non_top_function(): pass
        t1 = Task('t1', [non_top_function, (d1,)])
        pytest.raises(InvalidTask, runner.JobTask, t1)


# multiprocessing on Windows requires the whole object to be picklable
def test_MRunner_pickable(dep_manager):
    t1 = Task('t1', [])
    import sys
    reporter = ConsoleReporter(sys.stdout, {})
    run = runner.MRunner(dep_manager, reporter)
    run._run_tasks_init(TaskDispatcher({'t1':t1}, [], ['t1']))
    # assert nothing is raised
    pickle.dumps(run)


@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_get_next_job(object):

    # simple normal case
    def test_run_task(self, reporter, dep_manager):
        t1 = Task('t1', [])
        t2 = Task('t2', [])
        run = runner.MRunner(dep_manager, reporter)
        run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert t1.name == run.get_next_job(None).name
        assert t2.name == run.get_next_job(None).name
        assert None == run.get_next_job(None)

    def test_stop_running(self, reporter, dep_manager):
        t1 = Task('t1', [])
        t2 = Task('t2', [])
        run = runner.MRunner(dep_manager, reporter)
        run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
        assert t1.name == run.get_next_job(None).name
        run._stop_running = True
        assert None == run.get_next_job(None)

    def test_waiting(self, reporter, dep_manager):
        t1 = Task('t1', [])
        t2 = Task('t2', [], setup=('t1',))
        run = runner.MRunner(dep_manager, reporter)
        dispatcher = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t2'])
        run._run_tasks_init(dispatcher)

        # first start task 1
        j1 = run.get_next_job(None)
        assert t1.name == j1.name
        # hold until t1 is done
        assert isinstance(run.get_next_job(None), runner.JobHold)
        assert isinstance(run.get_next_job(None), runner.JobHold)

        n1 = dispatcher.nodes[j1.name]
        n1.run_status = 'done'
        j2 = run.get_next_job(n1)
        assert t2.name == j2.name
        assert None == run.get_next_job(dispatcher.nodes[j2.name])

    def test_waiting_controller(self, reporter, dep_manager):
        t1 = Task('t1', [])
        t2 = Task('t2', [], calc_dep=('t1',))
        run = runner.MRunner(dep_manager, reporter)
        run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))

        # first task ok
        assert t1.name == run.get_next_job(None).name
        # hold until t1 finishes
        assert 0 == run.free_proc
        assert isinstance(run.get_next_job(None), runner.JobHold)
        assert 1 == run.free_proc

    def test_delayed_loaded(self, reporter, dep_manager):
        def create():
            return {'basename':'t1', 'actions': None}
        t1 = Task('t1', [], loader=DelayedLoader(create, executed='t2'))
        t2 = Task('t2', [])
        run = runner.MRunner(dep_manager, reporter)
        dispatcher = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        run._run_tasks_init(dispatcher)
        assert t2.name == run.get_next_job(None).name
        assert runner.JobHold.type == run.get_next_job(None).type
        # after t2 is done t1 can be dispatched
        n2 = dispatcher.nodes[t2.name]
        n2.run_status = 'done'
        j1 = run.get_next_job(n2)
        assert t1.name == j1.name
        # the job for t1 contains the whole task since sub-processes
        # don't have it
        assert j1.type == runner.JobTask.type


@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_start_process(object):

    # 2 processes, 3 tasks
    def test_all_processes(self, reporter, monkeypatch, dep_manager):
        mock_process = Mock()
        monkeypatch.setattr(runner.MRunner, 'Child', mock_process)
        t1 = Task('t1', [])
        t2 = Task('t2', [])
        td = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        run = runner.MRunner(dep_manager, reporter, num_process=2)
        run._run_tasks_init(td)
        result_q = Queue()
        task_q = Queue()
        proc_list = run._run_start_processes(task_q, result_q)
        run.finish()
        assert 2 == len(proc_list)
        assert t1.name == task_q.get().name
        assert t2.name == task_q.get().name

    # 2 processes, 1 task
    def test_less_processes(self, reporter, monkeypatch, dep_manager):
        mock_process = Mock()
        monkeypatch.setattr(runner.MRunner, 'Child', mock_process)
        t1 = Task('t1', [])
        td = TaskDispatcher({'t1':t1}, [], ['t1'])
        run = runner.MRunner(dep_manager, reporter, num_process=2)
        run._run_tasks_init(td)
        result_q = Queue()
        task_q = Queue()
        proc_list = run._run_start_processes(task_q, result_q)
        run.finish()
        assert 1 == len(proc_list)
        assert t1.name == task_q.get().name

    # 2 processes, 2 tasks (but only one task can be started)
    def test_waiting_process(self, reporter, monkeypatch, dep_manager):
        mock_process = Mock()
        monkeypatch.setattr(runner.MRunner, 'Child', mock_process)
        t1 = Task('t1', [])
        t2 = Task('t2', [], task_dep=['t1'])
        td = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        run = runner.MRunner(dep_manager, reporter, num_process=2)
        run._run_tasks_init(td)
        result_q = Queue()
        task_q = Queue()
        proc_list = run._run_start_processes(task_q, result_q)
        run.finish()
        assert 2 == len(proc_list)
        assert t1.name == task_q.get().name
        assert isinstance(task_q.get(), runner.JobHold)


def non_pickable_creator():
    return {'basename': 't2', 'actions': [lambda: True]}

class TestMRunner_parallel_run_tasks(object):

    @pytest.mark.skipif('not runner.MRunner.available()')
    def test_task_cloudpicklabe_multiprocess(self, reporter, dep_manager):
        t1 = Task("t1", [(my_print, ["out a"])])
        t2 = Task("t2", None,
                  loader=DelayedLoader(non_pickable_creator, executed='t1'))
        my_runner = runner.MRunner(dep_manager, reporter)
        dispatcher = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        my_runner.run_tasks(dispatcher)
        assert runner.SUCCESS == my_runner.finish()

    def test_task_not_picklabe_thread(self, reporter, dep_manager):
        t1 = Task("t1", [(my_print, ["out a"])])
        t2 = Task("t2", None,
                  loader=DelayedLoader(non_pickable_creator, executed='t1'))
        my_runner = runner.MThreadRunner(dep_manager, reporter)
        dispatcher = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
        # threaded code has no problems with closures
        my_runner.run_tasks(dispatcher)
        assert runner.SUCCESS == my_runner.finish()
        assert ('start', t1) == reporter.log.pop(0), reporter.log
        assert ('execute', t1) == reporter.log.pop(0)
        assert ('success', t1) == reporter.log.pop(0)
        assert ('start', t2) == reporter.log.pop(0)
        assert ('execute', t2) == reporter.log.pop(0)
        assert ('success', t2) == reporter.log.pop(0)


@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_execute_task(object):

    def test_hold(self, reporter, dep_manager):
        run = runner.MRunner(dep_manager, reporter)
        task_q = Queue()
        task_q.put(runner.JobHold())  # to test
        task_q.put(None)  # to terminate function
        result_q = Queue()
        run.execute_task_subprocess(task_q, result_q, reporter.__class__)
        run.finish()
        # nothing was done
        assert result_q.empty()

    def test_full_task(self, reporter, dep_manager):
        # test execute_task_subprocess can receive a full Task object
        run = runner.MRunner(dep_manager, reporter)
        t1 = Task('t1', [simple_result])
        task_q = Queue()
        task_q.put(runner.JobTask(t1))  # to test
        task_q.put(None)  # to terminate function
        result_q = Queue()
        run.execute_task_subprocess(task_q, result_q, reporter.__class__)
        run.finish()
        # check result
        assert result_q.get() == {'name': 't1', 'reporter': 'execute_task'}
        assert result_q.get()['task']['result'] == 'my-result'
        assert result_q.empty()


def test_MThreadRunner_available():
    assert runner.MThreadRunner.available() == True


# ======================================================================
# doit-0.30.3/tests/test_task.py
# ======================================================================

import os, shutil
import tempfile
from io import StringIO
from pathlib import Path, PurePath

import pytest

from doit.exceptions import TaskError
from doit.exceptions import CatchedException
from doit import action
from doit import task

# path to test folder
TEST_PATH = os.path.dirname(__file__)
PROGRAM = "python %s/sample_process.py" % TEST_PATH


class TestTaskCheckInput(object):

    def testOkType(self):
        task.Task.check_attr('xxx', 'attr', [], ([int, list], []))

    def testOkValue(self):
        task.Task.check_attr('xxx', 'attr', None, ([list], [None]))

    def testFailType(self):
        pytest.raises(task.InvalidTask, task.Task.check_attr,
                      'xxx', 'attr', int, ([list], [False]))

    def testFailValue(self):
        pytest.raises(task.InvalidTask, task.Task.check_attr,
                      'xxx', 'attr', True, ([list], [False]))


class TestTaskCompare(object):

    def test_equal(self):
        # only task name is used to compare for equality
        t1 = task.Task("foo", None)
        t2 = task.Task("bar", None)
        t3 = task.Task("foo", None)
        assert t1 != t2
        assert t1 == t3

    def test_lt(self):
        # task name is used to compare/sort tasks
        t1 = task.Task("foo", None)
        t2 = task.Task("bar", None)
        t3 = task.Task("gee", None)
        assert t1 > t2
        sorted_names = sorted(t.name for t in (t1, t2, t3))
        assert sorted_names == ['bar', 'foo', 'gee']


class TestTaskInit(object):

    def test_groupTask(self):
        # group tasks have no action
        t = task.Task("taskX", None)
        assert t.actions == []

    def test_dependencySequenceIsValid(self):
        task.Task("Task X", ["taskcmd"], file_dep=["123","456"])

    # dependency must be a sequence or bool.
    # give proper error message when anything else is used.
    def test_dependencyNotSequence(self):
        filePath = "data/dependency1"
        pytest.raises(task.InvalidTask, task.Task,
                      "Task X", ["taskcmd"], file_dep=filePath)

    def test_options(self):
        # when task is created, options contain the default values
        p1 = {'name':'p1', 'default':'p1-default'}
        p2 = {'name':'p2', 'default':'', 'short':'m'}
        t = task.Task("MyName", None, params=[p1, p2], pos_arg='pos')
        t.execute()
        assert 'p1-default' == t.options['p1']
        assert '' == t.options['p2']
        assert 'pos' == t.pos_arg
        assert None == t.pos_arg_val  # always uninitialized

    def test_setup(self):
        t = task.Task("task5", ['action'], setup=["task2"])
        assert ["task2"] == t.setup_tasks

    def test_forbid_equal_sign_on_name(self):
        pytest.raises(task.InvalidTask, task.Task, "a=1", ["taskcmd"])


class TestTaskValueSavers(object):
    def test_execute_value_savers(self):
        t = task.Task("Task X", ["taskcmd"])
        t.value_savers.append(lambda: {'v1':1})
        t.save_extra_values()
        assert 1 == t.values['v1']


class TestTaskUpToDate(object):

    def test_FalseRunalways(self):
        t = task.Task("Task X", ["taskcmd"], uptodate=[False])
        assert t.uptodate == [(False, None, None)]

    def test_NoneIgnored(self):
        t = task.Task("Task X", ["taskcmd"], uptodate=[None])
        assert t.uptodate == [(None, None, None)]

    def test_callable_function(self):
        def custom_check(): return True
        t = task.Task("Task X", ["taskcmd"], uptodate=[custom_check])
        assert t.uptodate[0] == (custom_check, [], {})

    def test_callable_instance_method(self):
        class Base(object):
            def check(self): return True
        base = Base()
        t = task.Task("Task X", ["taskcmd"], uptodate=[base.check])
        assert t.uptodate[0] == (base.check, [], {})

    def test_tuple(self):
        def custom_check(pos_arg, xxx=None): return True
        t = task.Task("Task X", ["taskcmd"],
                      uptodate=[(custom_check, [123], {'xxx':'yyy'})])
        assert t.uptodate[0] == (custom_check, [123], {'xxx':'yyy'})

    def test_str(self):
        t = task.Task("Task X", ["taskcmd"], uptodate=['my-cmd xxx'])
        assert t.uptodate[0] == ('my-cmd xxx', [], {})

    def test_object_with_configure(self):
        class Check(object):
            def __call__(self): return True
            def configure_task(self, task):
                task.task_dep.append('y1')
        check = Check()
        t = task.Task("Task X", ["taskcmd"], uptodate=[check])
        assert (check, [], {}) == t.uptodate[0]
        assert ['y1'] == t.task_dep

    def test_invalid(self):
        pytest.raises(task.InvalidTask, task.Task,
                      "Task X", ["taskcmd"], uptodate=[{'x':'y'}])


class TestTaskExpandFileDep(object):

    def test_dependencyStringIsFile(self):
        my_task = task.Task("Task X", ["taskcmd"], file_dep=["123","456"])
        assert set(["123","456"]) == my_task.file_dep

    def test_file_dep_path(self):
        my_task = task.Task("Task X", ["taskcmd"],
                            file_dep=["123", Path("456"), PurePath("789")])
        assert {"123", "456", "789"} == my_task.file_dep

    def test_file_dep_str(self):
        pytest.raises(task.InvalidTask, task.Task,
                      "Task X", ["taskcmd"], file_dep=[['aaaa']])

    def test_file_dep_unicode(self):
        unicode_name = "中文"
        my_task = task.Task("Task X", ["taskcmd"], file_dep=[unicode_name])
        assert unicode_name in my_task.file_dep


class TestTaskDeps(object):

    def test_task_dep(self):
        my_task = task.Task("Task X", ["taskcmd"], task_dep=["123","4*56"])
        assert ["123"] == my_task.task_dep
        assert ["4*56"] == my_task.wild_dep

    def test_calc_dep(self):
        my_task = task.Task("Task X", ["taskcmd"], calc_dep=["123"])
        assert set(["123"]) == my_task.calc_dep

    def test_update_deps(self):
        my_task = task.Task("Task X", ["taskcmd"], file_dep=["fileX"],
                            calc_dep=["calcX"], uptodate=[None])
        my_task.update_deps({'file_dep': ['fileY'],
                             'task_dep': ['taskY'],
                             'calc_dep': ['calcX', 'calcY'],
                             'uptodate': [True],
                             'to_be_ignored': 'asdf',
                             })
        assert set(['fileX', 'fileY']) == my_task.file_dep
        assert ['taskY'] == my_task.task_dep
        assert set(['calcX', 'calcY']) == my_task.calc_dep
        assert [(None, None, None), (True, None, None)] == my_task.uptodate


class TestTaskTargets(object):

    def test_targets_can_be_path(self):
        my_task = task.Task("Task X", ["taskcmd"],
                            targets=["123", Path("456"), PurePath("789")])
        assert ["123", "456", "789"] == my_task.targets

    def test_targets_should_be_string_or_path(self):
        assert pytest.raises(task.InvalidTask, task.Task, "Task X",
                             ["taskcmd"], targets=["123", Path("456"), 789])


class TestTask_Loader(object):
    def test_delayed_after_execution(self):
        # `executed` creates an implicit task_dep
        delayed = task.DelayedLoader(lambda: None, executed='foo')
        t1 = task.Task('bar', None, loader=delayed)
        assert t1.task_dep == ['foo']


class TestTask_Getargs(object):

    def test_ok(self):
        getargs = {'x': ('t1','x'), 'y': ('t2','z')}
        t = task.Task('t3', None, getargs=getargs)
        assert len(t.uptodate) == 2
        assert ['t1', 't2'] == sorted([t.uptodate[0][0].dep_name,
                                       t.uptodate[1][0].dep_name])

    def test_invalid_desc(self):
        getargs = {'x': 't1'}
        assert pytest.raises(task.InvalidTask, task.Task,
                             't3', None, getargs=getargs)

    def test_invalid_desc_tuple(self):
        getargs = {'x': ('t1',)}
        assert pytest.raises(task.InvalidTask, task.Task,
                             't3', None, getargs=getargs)


class TestTaskTitle(object):

    def test_title(self):
        t = task.Task("MyName", ["MyAction"])
        assert "MyName" == t.title()

    def test_custom_title(self):
        t = task.Task("MyName", ["MyAction"],
                      title=(lambda x: "X%sX" % x.name))
        assert "X%sX" % str(t.name) == t.title(), t.title()


class TestTaskRepr(object):
    def test_repr(self):
        t = task.Task("taskX", None, ('t1','t2'))
        assert "<Task: taskX>" == repr(t), repr(t)


class TestTaskActions(object):

    def test_success(self):
        t = task.Task("taskX", [PROGRAM])
        t.execute()

    def test_result(self):
        # task.result is the value of last action
        t = task.Task('t1', ["%s hi_list hi1" % PROGRAM,
                             "%s hi_list hi2" % PROGRAM])
        t.execute()
        assert "hi_listhi2" == t.result

    def test_values(self):
        def return_dict(d): return d
        # task.result is the value of last action
        t = task.Task('t1', [(return_dict, [{'x':5}]),
                             (return_dict, [{'y':10}]),])
        t.execute()
        assert {'x':5, 'y':10} == t.values

    def test_failure(self):
        t = task.Task("taskX", ["%s 1 2 3" % PROGRAM])
        got = t.execute()
        assert isinstance(got, TaskError)

    # make sure all cmds are being executed.
    def test_many(self):
        t = task.Task("taskX", ["%s hi_stdout hi2" % PROGRAM,
                                "%s hi_list hi6" % PROGRAM])
        t.execute()
        got = "".join([a.out for a in t.actions])
        assert "hi_stdouthi_list" == got, repr(got)

    def test_fail_first(self):
        t = task.Task("taskX", ["%s 1 2 3" % PROGRAM, PROGRAM])
        got = t.execute()
        assert isinstance(got, TaskError)

    def test_fail_second(self):
        t = task.Task("taskX", ["%s 1 2" % PROGRAM, "%s 1 2 3" % PROGRAM])
        got = t.execute()
        assert isinstance(got, TaskError)

    # python and commands mixed on same task
    def test_mixed(self):
        def my_print(msg):
            print(msg, end='')
        t = task.Task("taskX", ["%s hi_stdout hi2" % PROGRAM,
                                (my_print, ['_PY_']),
                                "%s hi_list hi6" % PROGRAM])
        t.execute()
        got = "".join([a.out for a in t.actions])
        assert "hi_stdout_PY_hi_list" == got, repr(got)


class TestTaskTeardown(object):

    def test_ok(self):
        got = []
        def put(x):
            got.append(x)
        t = task.Task('t1', [], teardown=[(put, [1]), (put, [2])])
        t.execute()
        assert None == t.execute_teardown()
        assert [1, 2] == got

    def test_fail(self):
        def my_raise():
            raise Exception('hoho')
        t = task.Task('t1', [], teardown=[(my_raise,)])
        t.execute()
        got = t.execute_teardown()
        assert isinstance(got, CatchedException)


class TestTaskClean(object):

    @pytest.fixture
    def tmpdir(self, request):
        tmpdir = {}
        tmpdir['dir'] = tempfile.mkdtemp(prefix='doit-')
        files = [os.path.join(tmpdir['dir'], fname)
                 for fname in ['a.txt', 'b.txt']]
        tmpdir['files'] = files
        # create empty files
        for filename in tmpdir['files']:
            open(filename, 'a').close()
        def remove_tmpdir():
            if os.path.exists(tmpdir['dir']):
                shutil.rmtree(tmpdir['dir'])
        request.addfinalizer(remove_tmpdir)
        return tmpdir

    def test_clean_nothing(self, tmpdir):
        t = task.Task("xxx", None)
        assert False == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), False)
        for filename in tmpdir['files']:
            assert os.path.exists(filename)

    def test_clean_targets(self, tmpdir):
        t = task.Task("xxx", None, targets=tmpdir['files'], clean=True)
        assert True == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), False)
        for filename in tmpdir['files']:
            assert not os.path.exists(filename), filename

    def test_clean_non_existent_targets(self):
        t = task.Task('xxx', None, targets=["i_dont_exist"], clean=True)
        t.clean(StringIO(), False)
        # nothing is raised

    def test_clean_empty_dirs(self, tmpdir):
        # remove empty directories listed in targets
        targets = tmpdir['files'] + [tmpdir['dir']]
        t = task.Task("xxx", None, targets=targets, clean=True)
        assert True == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), False)
        for filename in tmpdir['files']:
            assert not os.path.exists(filename)
        assert not os.path.exists(tmpdir['dir'])

    def test_keep_non_empty_dirs(self, tmpdir):
        # keep non-empty directories listed in targets
        targets = [tmpdir['files'][0], tmpdir['dir']]
        t = task.Task("xxx", None, targets=targets, clean=True)
        assert True == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), False)
        for filename in tmpdir['files']:
            expected = not filename in targets
            assert expected == os.path.exists(filename)
        assert os.path.exists(tmpdir['dir'])

    def test_clean_actions(self, tmpdir):
        # a clean action can be anything, it can even not clean anything!
        c_path = tmpdir['files'][0]
        def say_hello():
            fh = open(c_path, 'a')
            fh.write("hello!!!")
            fh.close()
        t = task.Task("xxx", None, targets=tmpdir['files'],
                      clean=[(say_hello,)])
        assert False == t._remove_targets
        assert 1 == len(t.clean_actions)
        t.clean(StringIO(), False)
        for filename in tmpdir['files']:
            assert os.path.exists(filename)
        fh = open(c_path, 'r')
        got = fh.read()
        fh.close()
        assert "hello!!!" == got

    def test_clean_action_error(self, capsys):
        def fail_clean():
            5/0
        t = task.Task("xxx", None, clean=[(fail_clean,)])
        assert 1 == len(t.clean_actions)
        t.clean(StringIO(), dryrun=False)
        err = capsys.readouterr()[1]
        assert "PythonAction Error" in err

    def test_clean_action_kwargs(self):
        def fail_clean(dryrun):
            print('hello %s' % dryrun)
        t = task.Task("xxx", None, clean=[(fail_clean,)])
        assert 1 == len(t.clean_actions)
        out = StringIO()
        t.clean(out, dryrun=False)
        assert "hello False" in out.getvalue()

    def test_dryrun_file(self, tmpdir):
        t = task.Task("xxx", None, targets=tmpdir['files'], clean=True)
        assert True == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), True)
        # files are NOT removed
        for filename in tmpdir['files']:
            assert os.path.exists(filename), filename

    def test_dryrun_dir(self, tmpdir):
        targets = tmpdir['files'] + [tmpdir['dir']]
        for filename in tmpdir['files']:
            os.remove(filename)
        t = task.Task("xxx", None, targets=targets, clean=True)
        assert True == t._remove_targets
        assert 0 == len(t.clean_actions)
        t.clean(StringIO(), True)
        assert os.path.exists(tmpdir['dir'])

    def test_dryrun_actions(self, tmpdir):
        # a clean action can be anything, it can even not clean anything!
        self.executed = False
        def say_hello():
            self.executed = True
        t = task.Task("xxx", None, targets=tmpdir['files'],
                      clean=[(say_hello,)])
        assert False == t._remove_targets
        assert 1 == len(t.clean_actions)
        t.clean(StringIO(), True)
        assert not self.executed


class TestTaskDoc(object):

    def test_no_doc(self):
        t = task.Task("name", ["action"])
        assert '' == t.doc

    def test_single_line(self):
        t = task.Task("name", ["action"], doc=" i am doc")
        assert "i am doc" == t.doc

    def test_multiple_lines(self):
        t = task.Task("name", ["action"], doc="i am doc \n with many lines\n")
        assert "i am doc" == t.doc

    def test_start_with_empty_lines(self):
        t = task.Task("name", ["action"], doc="\n\n i am doc \n")
        assert "i am doc" == t.doc

    def test_just_new_line(self):
        t = task.Task("name", ["action"], doc=" \n \n\n")
        assert "" == t.doc


class TestTaskPickle(object):

    def test_geststate(self):
        t = task.Task("my_name", ["action"])
        pd = t.__getstate__()
        assert None == pd['uptodate']
        assert None == pd['_action_instances']

    def test_safedict(self):
        t = task.Task("my_name", ["action"])
        pd = t.pickle_safe_dict()
        assert 'uptodate' not in pd
        assert '_action_instances' not in pd
        assert 'value_savers' not in pd
        assert 'clean_actions' not in pd


class TestTaskUpdateFromPickle(object):
    def test_change_value(self):
        t = task.Task("my_name", ["action"])
        assert {} == t.values
        class FakePickle():
            def __init__(self):
                self.values = [1, 2, 3]
        t.update_from_pickle(FakePickle().__dict__)
        assert [1, 2, 3] == t.values
        assert 'my_name' == t.name


class TestDictToTask(object):

    def testDictOkMinimum(self):
        dict_ = {'name':'simple', 'actions':['xpto 14']}
        assert isinstance(task.dict_to_task(dict_), task.Task)

    def testDictFieldTypo(self):
        dict_ = {'name':'z', 'actions':['xpto 14'], 'typo_here':['xxx']}
        pytest.raises(action.InvalidTask, task.dict_to_task, dict_)

    def testDictMissingFieldAction(self):
        pytest.raises(action.InvalidTask, task.dict_to_task,
                      {'name':'xpto 14'})


class TestResultDep(object):

    def test_single(self, depfile):
        dep_manager = depfile
        tasks = {'t1': task.Task("t1", None, uptodate=[task.result_dep('t2')]),
                 't2': task.Task("t2", None),
                 }
        # _config_task was executed and t2 added as task_dep
        assert ['t2'] == tasks['t1'].task_dep

        # first t2 result
        tasks['t2'].result = 'yes'
        dep_manager.save_success(tasks['t2'])
        assert 'run' == dep_manager.get_status(tasks['t1'], tasks).status
        # first time
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'up-to-date' == dep_manager.get_status(tasks['t1'], tasks).status

        # t2 result changed
        tasks['t2'].result = '222'
        dep_manager.save_success(tasks['t2'])
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'run' == dep_manager.get_status(tasks['t1'], tasks).status
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'up-to-date' == dep_manager.get_status(tasks['t1'], tasks).status

    def test_group(self, depfile):
        dep_manager = depfile
        tasks = {'t1': task.Task("t1", None, uptodate=[task.result_dep('t2')]),
                 't2': task.Task("t2", None, task_dep=['t2:a', 't2:b'],
                                 has_subtask=True),
                 't2:a': task.Task("t2:a", None),
                 't2:b': task.Task("t2:b", None),
                 }
        # _config_task was executed and t2 added as task_dep
        assert ['t2'] == tasks['t1'].task_dep

        # first t2 result
        tasks['t2:a'].result = 'yes1'
        dep_manager.save_success(tasks['t2:a'])
        tasks['t2:b'].result = 'yes2'
        dep_manager.save_success(tasks['t2:b'])
        assert 'run' == dep_manager.get_status(tasks['t1'], tasks).status
        # first time
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'up-to-date' == dep_manager.get_status(tasks['t1'], tasks).status

        # t2 result changed
        tasks['t2:a'].result = '222'
        dep_manager.save_success(tasks['t2:a'])
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'run' == dep_manager.get_status(tasks['t1'], tasks).status
        tasks['t1'].save_extra_values()
        dep_manager.save_success(tasks['t1'])
        assert 'up-to-date' == dep_manager.get_status(tasks['t1'], tasks).status
doit-0.30.3/tests/test_tools.py

import os
import datetime
import operator

import pytest

from doit import exceptions
from doit import tools
from doit import task


class TestCreateFolder(object):
    def test_create_folder(self):
        def rm_dir():
            if os.path.exists(DIR_DEP):
                os.removedirs(DIR_DEP)

        DIR_DEP = os.path.join(os.path.dirname(__file__), "parent/child/")
        rm_dir()
        tools.create_folder(DIR_DEP)
        assert os.path.exists(DIR_DEP)
        rm_dir()

    def test_error_if_path_is_a_file(self):
        def rm_file(path):
            if os.path.exists(path):
                os.remove(path)

        path = os.path.join(os.path.dirname(__file__), "test_create_folder")
        with open(path, 'w') as fp:
            fp.write('testing')
        pytest.raises(OSError, tools.create_folder, path)
        rm_file(path)


class TestTitleWithActions(object):
    def test_actions(self):
        t = task.Task("MyName", ["MyAction"], title=tools.title_with_actions)
        assert "MyName => Cmd: MyAction" == t.title()

    def test_group(self):
        t = task.Task("MyName", None, file_dep=['file_foo'],
                      task_dep=['t1', 't2'], title=tools.title_with_actions)
        assert "MyName => Group: t1, t2" == t.title()


class TestRunOnce(object):
    def test_run(self):
        t = task.Task("TaskX", None, uptodate=[tools.run_once])
        assert False == tools.run_once(t, t.values)
        t.save_extra_values()
        assert True == tools.run_once(t, t.values)


class TestConfigChanged(object):
    def test_invalid_type(self):
        class NotValid(object):
            pass
        uptodate = tools.config_changed(NotValid())
        pytest.raises(Exception, uptodate, None, None)

    def test_string(self):
        ua = tools.config_changed('a')
        ub = tools.config_changed('b')
        t1 = task.Task("TaskX", None, uptodate=[ua])
        assert False == ua(t1, t1.values)
        assert False == ub(t1, t1.values)
        t1.save_extra_values()
        assert True == ua(t1, t1.values)
        assert False == ub(t1, t1.values)

    def test_unicode(self):
        ua = tools.config_changed({'x': "中文"})
        ub = tools.config_changed('b')
        t1 = task.Task("TaskX", None, uptodate=[ua])
        assert False == ua(t1, t1.values)
        assert False == ub(t1, t1.values)
        t1.save_extra_values()
        assert True == ua(t1, t1.values)
        assert False == ub(t1, t1.values)

    def test_dict(self):
        ua = tools.config_changed({'x': 'a', 'y': 1})
        ub = tools.config_changed({'x': 'b', 'y': 1})
        t1 = task.Task("TaskX", None, uptodate=[ua])
        assert False == ua(t1, t1.values)
        assert False == ub(t1, t1.values)
        t1.save_extra_values()
        assert True == ua(t1, t1.values)
        assert False == ub(t1, t1.values)


class TestTimeout(object):
    def test_invalid(self):
        pytest.raises(Exception, tools.timeout, "abc")

    def test_int(self, monkeypatch):
        monkeypatch.setattr(tools.time_module, 'time', lambda: 100)
        uptodate = tools.timeout(5)
        t = task.Task("TaskX", None, uptodate=[uptodate])

        assert False == uptodate(t, t.values)
        t.save_extra_values()
        assert 100 == t.values['success-time']

        monkeypatch.setattr(tools.time_module, 'time', lambda: 103)
        assert True == uptodate(t, t.values)

        monkeypatch.setattr(tools.time_module, 'time', lambda: 106)
        assert False == uptodate(t, t.values)

    def test_timedelta(self, monkeypatch):
        monkeypatch.setattr(tools.time_module, 'time', lambda: 10)
        limit = datetime.timedelta(minutes=2)
        uptodate = tools.timeout(limit)
        t = task.Task("TaskX", None, uptodate=[uptodate])

        assert False == uptodate(t, t.values)
        t.save_extra_values()
        assert 10 == t.values['success-time']

        monkeypatch.setattr(tools.time_module, 'time', lambda: 100)
        assert True == uptodate(t, t.values)

        monkeypatch.setattr(tools.time_module, 'time', lambda: 200)
        assert False == uptodate(t, t.values)

    def test_timedelta_big(self, monkeypatch):
        monkeypatch.setattr(tools.time_module, 'time', lambda: 10)
        limit = datetime.timedelta(days=2, minutes=5)
        uptodate = tools.timeout(limit)
        t = task.Task("TaskX", None, uptodate=[uptodate])

        assert False == uptodate(t, t.values)
        t.save_extra_values()
        assert 10 == t.values['success-time']

        monkeypatch.setattr(tools.time_module, 'time', lambda: 3600 * 30)
        assert True == uptodate(t, t.values)

        monkeypatch.setattr(tools.time_module, 'time', lambda: 3600 * 49)
        assert False == uptodate(t, t.values)


@pytest.fixture
def checked_file(request):
    fname = 'mytmpfile'
    file_ = open(fname, 'a')
    file_.close()
    def remove():
        os.remove(fname)
    request.addfinalizer(remove)
    return fname


class TestCheckTimestampUnchanged(object):
    def test_time_selection(self):
        check = tools.check_timestamp_unchanged('check_atime', 'atime')
        assert 'st_atime' == check._timeattr

        check = tools.check_timestamp_unchanged('check_ctime', 'ctime')
        assert 'st_ctime' == check._timeattr

        check = tools.check_timestamp_unchanged('check_mtime', 'mtime')
        assert 'st_mtime' == check._timeattr

        pytest.raises(
            ValueError,
            tools.check_timestamp_unchanged, 'check_invalid_time', 'foo')

    def test_file_missing(self):
        check = tools.check_timestamp_unchanged('no_such_file')
        t = task.Task("TaskX", None, uptodate=[check])
        # fake values saved from previous run
        task_values = {check._key: 1}  # needs any value different from None
        pytest.raises(OSError, check, t, task_values)

    def test_op_ge(self, monkeypatch, checked_file):
        check = tools.check_timestamp_unchanged(checked_file, cmp_op=operator.ge)
        t = task.Task("TaskX", None, uptodate=[check])

        # no stored value/first run
        assert False == check(t, t.values)

        # value just stored is equal to itself
        t.save_extra_values()
        assert True == check(t, t.values)

        # stored timestamp less than current, up to date
        future_time = list(t.values.values())[0] + 100
        monkeypatch.setattr(check, '_get_time', lambda: future_time)
        assert False == check(t, t.values)

    def test_op_bad_custom(self, monkeypatch, checked_file):
        # handling misbehaving custom operators
        def bad_op(prev_time, current_time):
            raise Exception('oops')

        check = tools.check_timestamp_unchanged(checked_file, cmp_op=bad_op)
        t = task.Task("TaskX", None, uptodate=[check])
        # fake values saved from previous run
        task_values = {check._key: 1}  # needs any value different from None
        pytest.raises(Exception, check, t, task_values)

    def test_multiple_checks(self):
        # handling multiple checks on one file (should save values in such way
        # they don't override each other)
        check_a = tools.check_timestamp_unchanged('check_multi', 'atime')
        check_m = tools.check_timestamp_unchanged('check_multi', 'mtime')
        assert check_a._key != check_m._key


class TestLongRunning(object):
    def test_success(self):
        TEST_PATH = os.path.dirname(__file__)
        PROGRAM = "python %s/sample_process.py" % TEST_PATH
        my_action = tools.LongRunning(PROGRAM + " please fail")
        got = my_action.execute()
        assert got is None

    def test_ignore_keyboard_interrupt(self, monkeypatch):
        my_action = tools.LongRunning('')
        class FakeRaiseInterruptProcess(object):
            def __init__(self, *args, **kwargs):
                pass
            def wait(self):
                raise KeyboardInterrupt()
        monkeypatch.setattr(tools.subprocess, 'Popen',
                            FakeRaiseInterruptProcess)
        got = my_action.execute()
        assert got is None


class TestInteractive(object):
    def test_fail(self):
        TEST_PATH = os.path.dirname(__file__)
        PROGRAM = "python %s/sample_process.py" % TEST_PATH
        my_action = tools.Interactive(PROGRAM + " please fail")
        got = my_action.execute()
        assert isinstance(got, exceptions.TaskFailed)

    def test_success(self):
        TEST_PATH = os.path.dirname(__file__)
        PROGRAM = "python %s/sample_process.py" % TEST_PATH
        my_action = tools.Interactive(PROGRAM + " ok")
        got = my_action.execute()
        assert got is None


class TestPythonInteractiveAction(object):
    def test_success(self):
        def hello():
            print('hello')
        my_action = tools.PythonInteractiveAction(hello)
        got = my_action.execute()
        assert got is None

    def test_ignore_keyboard_interrupt(self, monkeypatch):
        def raise_x():
            raise Exception('x')
        my_action = tools.PythonInteractiveAction(raise_x)
        got = my_action.execute()
        assert isinstance(got, exceptions.TaskError)

    def test_returned_dict_saved_result_values(self):
        def val():
            return {'x': 3}
        my_action = tools.PythonInteractiveAction(val)
        got = my_action.execute()
        assert got is None
        assert my_action.result == {'x': 3}
        assert my_action.values == {'x': 3}

    def test_returned_string_saved_result(self):
        def val():
            return 'hello'
        my_action = tools.PythonInteractiveAction(val)
        got = my_action.execute()
        assert got is None
        assert my_action.result == 'hello'

doit-0.30.3/zsh_completion_doit

#compdef doit

_doit() {
    local -a commands tasks

    # format is 'completion:description'
    commands=(
        'auto: automatically execute tasks when a dependency changes'
        'clean: clean action / remove targets'
        'dumpdb: dump dependency DB'
        'forget: clear successful run status from internal DB'
        'help: show help'
        'ignore: ignore task (skip) on subsequent runs'
        'info: show info about a task'
        'list: list tasks from dodo file'
        'reset-dep: recompute and save the state of file dependencies without executing actions'
        'run: run tasks'
        'strace: use strace to list file_deps and targets'
        'tabcompletion: generate script for tab-completion'
    )

    # split output by lines to create an array
    tasks=("${(f)$(doit list --template '{name}: {doc}')}")

    # complete command or task name
    if (( CURRENT == 2 )); then
        _arguments -A : '::cmd:(($commands))' '::task:(($tasks))'
        return
    fi

    # remove program name from $words and decrement CURRENT
    local curcontext context state state_desc line
    _arguments -C '*:: :->'

    # complete sub-command or task options
    local -a _command_args
    case "$words[1]" in
    (auto)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-v|--verbosity)'{-v,--verbosity}'[0 capture (do not print) stdout/stderr from task. 1 capture stdout only. 2 do not capture anything (print everything immediately). [default: 1\]]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (clean)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-c|--clean-dep)'{-c,--clean-dep}'[clean task dependencies too]' \
        '(-a|--clean-all)'{-a,--clean-all}'[clean all task]' \
        '(-n|--dry-run)'{-n,--dry-run}'[print actions without really executing them]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (dumpdb)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        ''
    )
    ;;
    (forget)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-s|--follow-sub)'{-s,--follow-sub}'[forget task dependencies too]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (help)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '*::task:(($tasks))'
        '::cmd:(($commands))'
        ''
    )
    ;;
    (ignore)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (info)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (list)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '--all[list include all sub-tasks from dodo file]' \
        '(-q|--quiet)'{-q,--quiet}'[print just task name (less verbose than default)]' \
        '(-s|--status)'{-s,--status}'[print task status (R)un, (U)p-to-date, (I)gnored]' \
        '(-p|--private)'{-p,--private}'[print private tasks (start with '_')]' \
        '--deps[print list of dependencies (file dependencies only)]' \
        '--template[display entries with template]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (reset-dep)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (run)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-a|--always-execute)'{-a,--always-execute}'[always execute tasks even if up-to-date [default: %(default)s\]]' \
        '(-c|--continue)'{-c,--continue}'[continue executing tasks even after a failure [default: %(default)s\]]' \
        '(-v|--verbosity)'{-v,--verbosity}'[0 capture (do not print) stdout/stderr from task. 1 capture stdout only. 2 do not capture anything (print everything immediately). [default: 1\]]' \
        '(-r|--reporter)'{-r,--reporter}'[Choose output reporter. [default: %(default)s\]]' \
        '(-o|--output-file)'{-o,--output-file}'[write output into file [default: stdout\]]' \
        '(-n|--process)'{-n,--process}'[number of subprocesses [default: %(default)s\]]' \
        '(-P|--parallel-type)'{-P,--parallel-type}'[Tasks can be executed in parallel in different ways:
'process': uses python multiprocessing module
'thread': uses threads
[default: %(default)s\]
]' \
        '--pdb[get into PDB (python debugger) post-mortem in case of unhandled exception]' \
        '(-s|--single)'{-s,--single}'[Execute only specified tasks ignoring their task_dep [default: %(default)s\]]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (strace)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-a|--all)'{-a,--all}'[display all files (not only from within CWD path)]' \
        '--keep[save strace command output into strace.txt]' \
        '*::task:(($tasks))'
        ''
    )
    ;;
    (tabcompletion)
    _command_args=(
        '--db-file[file used to save successful runs [default: %(default)s\]]' \
        '--backend[Select dependency file backend. [default: %(default)s\]]' \
        '--check_file_uptodate[Choose how to check if files have been modified.
Available options [default: %(default)s\]:
'md5': use the md5sum
'timestamp': use the timestamp
]' \
        '(-f|--file)'{-f,--file}'[load task from dodo FILE [default: %(default)s\]]' \
        '(-d|--dir)'{-d,--dir}'[set path to be used as cwd directory (file paths on dodo file are relative to dodo.py location).]' \
        '(-k|--seek-file)'{-k,--seek-file}'[seek dodo file on parent folders [default: %(default)s\]]' \
        '(-s|--shell)'{-s,--shell}'[Completion code for SHELL. [default: %(default)s\]]' \
        '--hardcode-tasks[Hardcode tasks from current task list.]' \
        ''
    )
    ;;
    # default completes task names
    (*)
    _command_args='*::task:(($tasks))'
    ;;
    esac

    # -A no options will be completed after the first non-option argument
    _arguments -A : $_command_args
    return 0
}

_doit