taskflow-0.1.3/0000775000175300017540000000000012275003604014501 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/setup.py0000664000175300017540000000141512275003514016214 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
taskflow-0.1.3/README.md0000664000175300017540000000252012275003514015757 0ustar jenkinsjenkins00000000000000TaskFlow
========

A library for doing [jobs, tasks, flows] in a highly available (HA) manner
using different backends, intended to be used with OpenStack projects.

* More information at http://wiki.openstack.org/wiki/TaskFlow

Join us
-------

- http://launchpad.net/taskflow

Testing and requirements
------------------------

### Requirements

Because TaskFlow has many optional (pluggable) parts like persistence
backends and engines, we decided to split our requirements into several
parts:

- things that are absolutely required by TaskFlow (you can't use TaskFlow
  without them) go in `requirements.txt`;
- things that are required by some optional part of TaskFlow (you can use
  TaskFlow without them) go in `optional-requirements.txt`; if you want to
  use the feature in question, add those requirements to your project or
  environment;
- as usual, things that are required only for running tests go in
  `test-requirements.txt`.

### Tox.ini

Our tox.ini describes several test environments that allow testing TaskFlow
with different python versions and sets of requirements installed.

To generate tox.ini, first install
[toxgen](https://pypi.python.org/pypi/toxgen/) and then run the `toxgen.py`
script with the `tox-tmpl.ini` template as its input to produce the final
`tox.ini` file.

*For example:*

    $ toxgen.py -i tox-tmpl.ini -o tox.ini
taskflow-0.1.3/.coveragerc0000664000175300017540000000020412275003514016616 0ustar jenkinsjenkins00000000000000[run]
branch = True
source = taskflow
omit = taskflow/tests/*,taskflow/openstack/*,taskflow/test.py

[report]
ignore-errors = True
taskflow-0.1.3/optional-requirements.txt0000664000175300017540000000110112275003514021601 0ustar jenkinsjenkins00000000000000# This file lists dependencies that are used by different
# pluggable (optional) parts of TaskFlow, like engines
# or persistence backends. They are not strictly required
# by TaskFlow (you can use TaskFlow without them), and
# so they don't go to requirements.txt.
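#
# One assumed way to pull these in alongside the core requirements
# (adjust to the optional features you actually use):
#
#   pip install -r requirements.txt
#   pip install -r optional-requirements.txt
#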
# Database (sqlalchemy) persistence:
SQLAlchemy>=0.7,<=0.9.99
alembic>=0.4.1

# Database (sqlalchemy) persistence with MySQL:
pyMySQL
MySQL-python

# Database (sqlalchemy) persistence with PostgreSQL:
psycopg2

# ZooKeeper backends
kazoo>=1.3.1

# Eventlet may be used with the parallel engine
eventlet>=0.13.0
taskflow-0.1.3/taskflow.egg-info/0000775000175300017540000000000012275003604020025 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow.egg-info/SOURCES.txt0000664000175300017540000001314412275003604021714 0ustar jenkinsjenkins00000000000000.coveragerc
.mailmap
.testr.conf
AUTHORS
CONTRIBUTING.rst
ChangeLog
LICENSE
MANIFEST.in
README.md
openstack-common.conf
optional-requirements.txt
pylintrc
requirements.txt
run_tests.sh
setup.cfg
setup.py
test-requirements.txt
tox-tmpl.ini
tox.ini
doc/Makefile
doc/conf.py
doc/index.rst
doc/taskflow.engines.action_engine.rst
doc/taskflow.engines.rst
doc/taskflow.jobs.rst
doc/taskflow.listeners.rst
doc/taskflow.patterns.rst
doc/taskflow.persistence.backends.rst
doc/taskflow.persistence.backends.sqlalchemy.rst
doc/taskflow.persistence.rst
doc/taskflow.rst
doc/taskflow.utils.rst
taskflow/__init__.py
taskflow/atom.py
taskflow/exceptions.py
taskflow/flow.py
taskflow/states.py
taskflow/storage.py
taskflow/task.py
taskflow/test.py
taskflow/version.py
taskflow.egg-info/PKG-INFO
taskflow.egg-info/SOURCES.txt
taskflow.egg-info/dependency_links.txt
taskflow.egg-info/entry_points.txt
taskflow.egg-info/not-zip-safe
taskflow.egg-info/requires.txt
taskflow.egg-info/top_level.txt
taskflow/engines/__init__.py
taskflow/engines/base.py
taskflow/engines/helpers.py
taskflow/engines/action_engine/__init__.py
taskflow/engines/action_engine/engine.py
taskflow/engines/action_engine/executor.py
taskflow/engines/action_engine/graph_action.py
taskflow/engines/action_engine/graph_analyzer.py
taskflow/engines/action_engine/task_action.py
taskflow/examples/build_a_car.py
taskflow/examples/buildsystem.py
taskflow/examples/calculate_in_parallel.py
taskflow/examples/calculate_linear.py
taskflow/examples/create_parallel_volume.py
taskflow/examples/example_utils.py
taskflow/examples/fake_billing.py
taskflow/examples/graph_flow.py
taskflow/examples/persistence_example.py
taskflow/examples/resume_from_backend.out.txt
taskflow/examples/resume_from_backend.py
taskflow/examples/resume_many_flows.out.txt
taskflow/examples/resume_many_flows.py
taskflow/examples/resume_vm_boot.py
taskflow/examples/resume_volume_create.py
taskflow/examples/reverting_linear.out.txt
taskflow/examples/reverting_linear.py
taskflow/examples/simple_linear.out.txt
taskflow/examples/simple_linear.py
taskflow/examples/simple_linear_listening.out.txt
taskflow/examples/simple_linear_listening.py
taskflow/examples/wrapped_exception.py
taskflow/examples/resume_many_flows/my_flows.py
taskflow/examples/resume_many_flows/resume_all.py
taskflow/examples/resume_many_flows/run_flow.py
taskflow/jobs/__init__.py
taskflow/jobs/job.py
taskflow/jobs/jobboard.py
taskflow/listeners/__init__.py
taskflow/listeners/base.py
taskflow/listeners/logging.py
taskflow/listeners/printing.py
taskflow/listeners/timing.py
taskflow/openstack/__init__.py
taskflow/openstack/common/__init__.py
taskflow/openstack/common/excutils.py
taskflow/openstack/common/gettextutils.py
taskflow/openstack/common/importutils.py
taskflow/openstack/common/jsonutils.py
taskflow/openstack/common/timeutils.py
taskflow/openstack/common/uuidutils.py
taskflow/openstack/common/py3kcompat/__init__.py
taskflow/openstack/common/py3kcompat/urlutils.py
taskflow/patterns/__init__.py
taskflow/patterns/graph_flow.py taskflow/patterns/linear_flow.py taskflow/patterns/unordered_flow.py taskflow/persistence/__init__.py taskflow/persistence/logbook.py taskflow/persistence/backends/__init__.py taskflow/persistence/backends/base.py taskflow/persistence/backends/impl_dir.py taskflow/persistence/backends/impl_memory.py taskflow/persistence/backends/impl_sqlalchemy.py taskflow/persistence/backends/impl_zookeeper.py taskflow/persistence/backends/sqlalchemy/__init__.py taskflow/persistence/backends/sqlalchemy/migration.py taskflow/persistence/backends/sqlalchemy/models.py taskflow/persistence/backends/sqlalchemy/alembic/README taskflow/persistence/backends/sqlalchemy/alembic/alembic.ini taskflow/persistence/backends/sqlalchemy/alembic/env.py taskflow/persistence/backends/sqlalchemy/alembic/script.py.mako taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_exception_an.py taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logbook_deta.py taskflow/persistence/backends/sqlalchemy/alembic/versions/README taskflow/tests/__init__.py taskflow/tests/test_examples.py taskflow/tests/utils.py taskflow/tests/unit/__init__.py taskflow/tests/unit/test_action_engine.py taskflow/tests/unit/test_arguments_passing.py taskflow/tests/unit/test_check_transition.py taskflow/tests/unit/test_duration.py taskflow/tests/unit/test_engine_helpers.py taskflow/tests/unit/test_flattening.py taskflow/tests/unit/test_flow_dependencies.py taskflow/tests/unit/test_functor_task.py taskflow/tests/unit/test_graph_flow.py taskflow/tests/unit/test_green_executor.py taskflow/tests/unit/test_progress.py taskflow/tests/unit/test_storage.py taskflow/tests/unit/test_suspend_flow.py taskflow/tests/unit/test_task.py taskflow/tests/unit/test_unordered_flow.py taskflow/tests/unit/test_utils.py taskflow/tests/unit/test_utils_async_utils.py taskflow/tests/unit/test_utils_binary.py taskflow/tests/unit/test_utils_failure.py taskflow/tests/unit/test_utils_lock_utils.py taskflow/tests/unit/persistence/__init__.py taskflow/tests/unit/persistence/base.py taskflow/tests/unit/persistence/test_dir_persistence.py taskflow/tests/unit/persistence/test_memory_persistence.py taskflow/tests/unit/persistence/test_sql_persistence.py taskflow/tests/unit/persistence/test_zake_persistence.py taskflow/tests/unit/persistence/test_zk_persistence.py taskflow/utils/__init__.py taskflow/utils/async_utils.py taskflow/utils/eventlet_utils.py taskflow/utils/flow_utils.py taskflow/utils/graph_utils.py taskflow/utils/kazoo_utils.py taskflow/utils/lock_utils.py taskflow/utils/misc.py taskflow/utils/persistence_utils.py taskflow/utils/reflection.py taskflow/utils/threading_utils.py tools/state_graph.pytaskflow-0.1.3/taskflow.egg-info/PKG-INFO0000664000175300017540000000517212275003604021127 0ustar jenkinsjenkins00000000000000Metadata-Version: 1.1 Name: taskflow Version: 0.1.3 Summary: Taskflow structured state management library. Home-page: https://launchpad.net/taskflow Author: Taskflow Developers Author-email: taskflow-dev@lists.launchpad.net License: UNKNOWN Description: TaskFlow ======== A library to do [jobs, tasks, flows] in a HA manner using different backends to be used with OpenStack projects. 
* More information at http://wiki.openstack.org/wiki/TaskFlow Join us ------- - http://launchpad.net/taskflow Testing and requirements ------------------------ ### Requirements Because TaskFlow has many optional (pluggable) parts like persistence backends and engines, we decided to split our requirements into two parts: - things that are absolutely required by TaskFlow (you can't use TaskFlow without them) are put to `requirements.txt`; - things that are required by some optional part of TaskFlow (you can use TaskFlow without them) are put to `optional-requirements.txt`; if you want to use the feature in question, you should add that requirements to your project or environment; - as usual, things that required only for running tests are put to `test-requirements.txt`. ### Tox.ini Our tox.ini describes several test environments that allow to test TaskFlow with different python versions and sets of requirements installed. To generate tox.ini, use the `toxgen.py` script by first installing [toxgen](https://pypi.python.org/pypi/toxgen/) and then provide that script as input the `tox-tmpl.ini` file to generate the final `tox.ini` file. *For example:* $ toxgen.py -i tox-tmpl.ini -o tox.ini Keywords: reliable recoverable execution tasks flows workflows jobs persistence states asynchronous parallel threads Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 taskflow-0.1.3/taskflow.egg-info/not-zip-safe0000664000175300017540000000000112275003602022251 0ustar jenkinsjenkins00000000000000 taskflow-0.1.3/taskflow.egg-info/dependency_links.txt0000664000175300017540000000000112275003604024073 0ustar jenkinsjenkins00000000000000 taskflow-0.1.3/taskflow.egg-info/top_level.txt0000664000175300017540000000001112275003604022547 0ustar jenkinsjenkins00000000000000taskflow taskflow-0.1.3/taskflow.egg-info/requires.txt0000664000175300017540000000016112275003604022423 0ustar jenkinsjenkins00000000000000pbr>=0.5.21,<1.0 anyjson>=0.3.3 iso8601>=0.1.8 six>=1.4.1 networkx>=1.8 Babel>=1.3 stevedore>=0.12 futures>=2.1.3taskflow-0.1.3/taskflow.egg-info/entry_points.txt0000664000175300017540000000133712275003604023327 0ustar jenkinsjenkins00000000000000[taskflow.persistence] sqlite = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend zookeeper = taskflow.persistence.backends.impl_zookeeper:ZkBackend postgresql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend memory = taskflow.persistence.backends.impl_memory:MemoryBackend file = taskflow.persistence.backends.impl_dir:DirBackend mysql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend dir = taskflow.persistence.backends.impl_dir:DirBackend [taskflow.engines] default = taskflow.engines.action_engine.engine:SingleThreadedActionEngine serial = taskflow.engines.action_engine.engine:SingleThreadedActionEngine parallel = taskflow.engines.action_engine.engine:MultiThreadedActionEngine taskflow-0.1.3/openstack-common.conf0000664000175300017540000000036012275003514020624 0ustar 
jenkinsjenkins00000000000000[DEFAULT] # The list of modules to copy from oslo-incubator.git module=excutils module=importutils module=jsonutils module=py3kcompat module=timeutils module=uuidutils # The base module to hold the copy of openstack.common base=taskflow taskflow-0.1.3/test-requirements.txt0000664000175300017540000000020212275003514020734 0ustar jenkinsjenkins00000000000000hacking>=0.8.0,<0.9 discover coverage>=3.6 mock>=1.0 python-subunit>=0.0.18 testrepository>=0.0.17 testtools>=0.9.34 zake>=0.0.13 taskflow-0.1.3/tox.ini0000664000175300017540000001231412275003514016015 0ustar jenkinsjenkins00000000000000# DO NOT EDIT THIS FILE - it is machine generated from tox-tmpl.ini [tox] minversion = 1.6 skipsdist = True envlist = cover, pep8, py26, py26-sa7-mysql, py26-sa7-mysql-ev, py26-sa7-pymysql, py26-sa7-pymysql-ev, py26-sa8-mysql, py26-sa8-mysql-ev, py26-sa8-pymysql, py26-sa8-pymysql-ev, py26-sa9-mysql, py26-sa9-mysql-ev, py26-sa9-pymysql, py26-sa9-pymysql-ev, py27, py27-sa7-mysql, py27-sa7-mysql-ev, py27-sa7-pymysql, py27-sa7-pymysql-ev, py27-sa8-mysql, py27-sa8-mysql-ev, py27-sa8-pymysql, py27-sa8-pymysql-ev, py27-sa9-mysql, py27-sa9-mysql-ev, py27-sa9-pymysql, py27-sa9-pymysql-ev, py33, py33-sa7-pymysql, py33-sa8-pymysql, py33-sa9-pymysql, pylint [testenv] usedevelop = True install_command = pip install {opts} {packages} setenv = VIRTUAL_ENV={envdir} LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=C deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt alembic>=0.4.1 psycopg2 kazoo>=1.3.1 commands = python setup.py testr --slowest --testr-args='{posargs}' [tox:jenkins] downloadcache = ~/cache/pip [testenv:pep8] commands = flake8 {posargs} [testenv:pylint] setenv = VIRTUAL_ENV={envdir} deps = -r{toxinidir}/requirements.txt pylint==0.26.0 commands = pylint [testenv:cover] basepython = python2.7 deps = {[testenv:py27]deps} commands = python setup.py testr --coverage --testr-args='{posargs}' [testenv:venv] commands = {posargs} [flake8] builtins = _ exclude = .venv,.tox,dist,doc,./taskflow/openstack/common,*egg,.git,build,tools [testenv:py26] basepython = python2.6 deps = {[testenv:py26-sa7-mysql-ev]deps} [testenv:py27] basepython = python2.7 deps = -r{toxinidir}/requirements.txt -r{toxinidir}/optional-requirements.txt -r{toxinidir}/test-requirements.txt [testenv:py33] basepython = python3.3 deps = {[testenv:py33-sa9-pymysql]deps} [testenv:py26-sa7-mysql-ev] deps = {[testenv]deps} SQLAlchemy<=0.7.99 MySQL-python eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa7-mysql] deps = {[testenv]deps} SQLAlchemy<=0.7.99 MySQL-python basepython = python2.6 [testenv:py26-sa7-pymysql-ev] deps = {[testenv]deps} SQLAlchemy<=0.7.99 pyMySQL eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa7-pymysql] deps = {[testenv]deps} SQLAlchemy<=0.7.99 pyMySQL basepython = python2.6 [testenv:py26-sa8-mysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 MySQL-python eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa8-mysql] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 MySQL-python basepython = python2.6 [testenv:py26-sa8-pymysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 pyMySQL eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa8-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 pyMySQL basepython = python2.6 [testenv:py26-sa9-mysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 MySQL-python eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa9-mysql] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 MySQL-python basepython = python2.6 
[testenv:py26-sa9-pymysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 pyMySQL eventlet>=0.13.0 basepython = python2.6 [testenv:py26-sa9-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 pyMySQL basepython = python2.6 [testenv:py27-sa7-mysql-ev] deps = {[testenv]deps} SQLAlchemy<=0.7.99 MySQL-python eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa7-mysql] deps = {[testenv]deps} SQLAlchemy<=0.7.99 MySQL-python basepython = python2.7 [testenv:py27-sa7-pymysql-ev] deps = {[testenv]deps} SQLAlchemy<=0.7.99 pyMySQL eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa7-pymysql] deps = {[testenv]deps} SQLAlchemy<=0.7.99 pyMySQL basepython = python2.7 [testenv:py27-sa8-mysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 MySQL-python eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa8-mysql] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 MySQL-python basepython = python2.7 [testenv:py27-sa8-pymysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 pyMySQL eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa8-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 pyMySQL basepython = python2.7 [testenv:py27-sa9-mysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 MySQL-python eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa9-mysql] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 MySQL-python basepython = python2.7 [testenv:py27-sa9-pymysql-ev] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 pyMySQL eventlet>=0.13.0 basepython = python2.7 [testenv:py27-sa9-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 pyMySQL basepython = python2.7 [testenv:py33-sa7-pymysql] deps = {[testenv]deps} SQLAlchemy<=0.7.99 pyMySQL basepython = python3.3 [testenv:py33-sa8-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.8,<=0.8.99 pyMySQL basepython = python3.3 [testenv:py33-sa9-pymysql] deps = {[testenv]deps} SQLAlchemy>=0.9,<=0.9.99 pyMySQL basepython = python3.3 taskflow-0.1.3/.testr.conf0000664000175300017540000000052112275003514016565 0ustar jenkinsjenkins00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \ ${PYTHON:-python} -m subunit.run discover -t ./ ./taskflow/tests $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list taskflow-0.1.3/PKG-INFO0000664000175300017540000000517212275003604015603 0ustar jenkinsjenkins00000000000000Metadata-Version: 1.1 Name: taskflow Version: 0.1.3 Summary: Taskflow structured state management library. Home-page: https://launchpad.net/taskflow Author: Taskflow Developers Author-email: taskflow-dev@lists.launchpad.net License: UNKNOWN Description: TaskFlow ======== A library to do [jobs, tasks, flows] in a HA manner using different backends to be used with OpenStack projects. 
* More information at http://wiki.openstack.org/wiki/TaskFlow Join us ------- - http://launchpad.net/taskflow Testing and requirements ------------------------ ### Requirements Because TaskFlow has many optional (pluggable) parts like persistence backends and engines, we decided to split our requirements into two parts: - things that are absolutely required by TaskFlow (you can't use TaskFlow without them) are put to `requirements.txt`; - things that are required by some optional part of TaskFlow (you can use TaskFlow without them) are put to `optional-requirements.txt`; if you want to use the feature in question, you should add that requirements to your project or environment; - as usual, things that required only for running tests are put to `test-requirements.txt`. ### Tox.ini Our tox.ini describes several test environments that allow to test TaskFlow with different python versions and sets of requirements installed. To generate tox.ini, use the `toxgen.py` script by first installing [toxgen](https://pypi.python.org/pypi/toxgen/) and then provide that script as input the `tox-tmpl.ini` file to generate the final `tox.ini` file. *For example:* $ toxgen.py -i tox-tmpl.ini -o tox.ini Keywords: reliable recoverable execution tasks flows workflows jobs persistence states asynchronous parallel threads Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 taskflow-0.1.3/setup.cfg0000664000175300017540000000334112275003604016323 0ustar jenkinsjenkins00000000000000[metadata] name = taskflow summary = Taskflow structured state management library. 
description-file = README.md author = Taskflow Developers author-email = taskflow-dev@lists.launchpad.net home-page = https://launchpad.net/taskflow keywords = reliable recoverable execution tasks flows workflows jobs persistence states asynchronous parallel threads classifier = Development Status :: 4 - Beta Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: Developers License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.6 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.3 [global] setup-hooks = pbr.hooks.setup_hook [files] packages = taskflow [entry_points] taskflow.persistence = dir = taskflow.persistence.backends.impl_dir:DirBackend file = taskflow.persistence.backends.impl_dir:DirBackend memory = taskflow.persistence.backends.impl_memory:MemoryBackend mysql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend postgresql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend sqlite = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend zookeeper = taskflow.persistence.backends.impl_zookeeper:ZkBackend taskflow.engines = default = taskflow.engines.action_engine.engine:SingleThreadedActionEngine serial = taskflow.engines.action_engine.engine:SingleThreadedActionEngine parallel = taskflow.engines.action_engine.engine:MultiThreadedActionEngine [nosetests] cover-erase = true verbosity = 2 [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 taskflow-0.1.3/requirements.txt0000664000175300017540000000047512275003514017773 0ustar jenkinsjenkins00000000000000# Packages needed for using this library. pbr>=0.5.21,<1.0 anyjson>=0.3.3 iso8601>=0.1.8 # Python 2->3 compatibility library. six>=1.4.1 # Very nice graph library networkx>=1.8 Babel>=1.3 # Used for backend storage engine loading. stevedore>=0.12 # Backport for concurrent.futures which exists in 3.2+ futures>=2.1.3 taskflow-0.1.3/.mailmap0000664000175300017540000000117112275003514016122 0ustar jenkinsjenkins00000000000000Anastasia Karpinska Angus Salkeld Changbin Liu Changbin Liu Ivan A. Melnikov Jessica Lucci Jessica Lucci Joshua Harlow Joshua Harlow Kevin Chen Kevin Chen Kevin Chen taskflow-0.1.3/run_tests.sh0000775000175300017540000000370412275003514017072 0ustar jenkinsjenkins00000000000000#!/bin/bash function usage { echo "Usage: $0 [OPTION]..." echo "Run Taskflow's test suite(s)" echo "" echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added." echo " -p, --pep8 Just run pep8" echo " -P, --no-pep8 Don't run static code checks" echo " -v, --verbose Increase verbosity of reporting output" echo " -h, --help Print this usage message" echo "" exit } function process_option { case "$1" in -h|--help) usage;; -p|--pep8) let just_pep8=1;; -P|--no-pep8) let no_pep8=1;; -f|--force) let force=1;; -v|--verbose) let verbose=1;; *) pos_args="$pos_args $1" esac } verbose=0 force=0 pos_args="" just_pep8=0 no_pep8=0 tox_args="" tox="" for arg in "$@"; do process_option $arg done py=`which python` if [ -z "$py" ]; then echo "Python is required to use $0" echo "Please install it via your distributions package management system." 
exit 1 fi py_envs=`python -c 'import sys; print("py%s%s" % (sys.version_info[0:2]))'` py_envs=${PY_ENVS:-$py_envs} function run_tests { local tox_cmd="${tox} ${tox_args} -e $py_envs ${pos_args}" echo "Running tests for environments $py_envs via $tox_cmd" bash -c "$tox_cmd" } function run_flake8 { local tox_cmd="${tox} ${tox_args} -e pep8 ${pos_args}" echo "Running flake8 via $tox_cmd" bash -c "$tox_cmd" } if [ $force -eq 1 ]; then tox_args="$tox_args -r" fi if [ $verbose -eq 1 ]; then tox_args="$tox_args -v" fi tox=`which tox` if [ -z "$tox" ]; then echo "Tox is required to use $0" echo "Please install it via \`pip\` or via your distributions" \ "package management system." echo "Visit http://tox.readthedocs.org/ for additional installation" \ "instructions." exit 1 fi if [ $just_pep8 -eq 1 ]; then run_flake8 exit fi run_tests || exit if [ $no_pep8 -eq 0 ]; then run_flake8 fi taskflow-0.1.3/tools/0000775000175300017540000000000012275003604015641 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/tools/state_graph.py0000664000175300017540000000466012275003514020522 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python import os import sys top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) import optparse import subprocess import tempfile import networkx as nx from taskflow import states from taskflow.utils import graph_utils as gu def mini_exec(cmd, ok_codes=(0,)): stdout = subprocess.PIPE stderr = subprocess.PIPE proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr, stdin=None) (stdout, stderr) = proc.communicate() rc = proc.returncode if rc not in ok_codes: raise RuntimeError("Could not run %s [%s]\nStderr: %s" % (cmd, rc, stderr)) return (stdout, stderr) def make_svg(graph, output_filename, output_format): # NOTE(harlowja): requires pydot! gdot = gu.export_graph_to_dot(graph) if output_format == 'dot': output = gdot elif output_format in ('svg', 'svgz', 'png'): with tempfile.NamedTemporaryFile(suffix=".dot") as fh: fh.write(gdot) fh.flush() cmd = ['dot', '-T%s' % output_format, fh.name] output, _stderr = mini_exec(cmd) else: raise ValueError('Unknown format: %s' % output_filename) with open(output_filename, "wb") as fh: fh.write(output) def main(): parser = optparse.OptionParser() parser.add_option("-f", "--file", dest="filename", help="write svg to FILE", metavar="FILE") parser.add_option("-t", "--tasks", dest="tasks", action='store_true', help="use task state transitions", default=False) parser.add_option("-T", "--format", dest="format", help="output in given format", default='svg') (options, args) = parser.parse_args() if options.filename is None: options.filename = 'states.%s' % options.format g = nx.DiGraph(name="State transitions") if not options.tasks: source = states._ALLOWED_FLOW_TRANSITIONS else: source = states._ALLOWED_TASK_TRANSITIONS for (u, v) in source: if not g.has_node(u): g.add_node(u) if not g.has_node(v): g.add_node(v) g.add_edge(u, v) make_svg(g, options.filename, options.format) print("Created %s at '%s'" % (options.format, options.filename)) if __name__ == '__main__': main() taskflow-0.1.3/AUTHORS0000664000175300017540000000022412275003604015547 0ustar jenkinsjenkins00000000000000 Alexander Gorodnev Anastasia Karpinska Ivan A. 
Melnikov taskflow-0.1.3/CONTRIBUTING.rst0000664000175300017540000000103612275003514017142 0ustar jenkinsjenkins00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in the "If you're a developer, start here" section of this page: http://wiki.openstack.org/HowToContribute Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: http://wiki.openstack.org/GerritWorkflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/taskflow taskflow-0.1.3/MANIFEST.in0000664000175300017540000000013712275003514016240 0ustar jenkinsjenkins00000000000000include AUTHORS include ChangeLog exclude .gitignore exclude .gitreview global-exclude *.pyc taskflow-0.1.3/taskflow/0000775000175300017540000000000012275003604016333 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/atom.py0000664000175300017540000001347112275003514017653 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import six from taskflow.utils import misc from taskflow.utils import reflection LOG = logging.getLogger(__name__) def _save_as_to_mapping(save_as): """Convert save_as to mapping name => index. Result should follow storage convention for mappings. """ # TODO(harlowja): we should probably document this behavior & convention # outside of code so that its more easily understandable, since what an # atom returns is pretty crucial for other later operations. if save_as is None: return {} if isinstance(save_as, six.string_types): # NOTE(harlowja): this means that your atom will only return one item # instead of a dictionary-like object or a indexable object (like a # list or tuple). return {save_as: None} elif isinstance(save_as, (tuple, list)): # NOTE(harlowja): this means that your atom will return a indexable # object, like a list or tuple and the results can be mapped by index # to that tuple/list that is returned for others to use. return dict((key, num) for num, key in enumerate(save_as)) elif isinstance(save_as, set): # NOTE(harlowja): in the case where a set is given we will not be # able to determine the numeric ordering in a reliable way (since it is # a unordered set) so the only way for us to easily map the result of # the atom will be via the key itself. return dict((key, key) for key in save_as) raise TypeError('Task provides parameter ' 'should be str, set or tuple/list, not %r' % save_as) def _build_rebind_dict(args, rebind_args): """Build a argument remapping/rebinding dictionary. This dictionary allows an atom to declare that it will take a needed requirement bound to a given name with another name instead (mapping the new name onto the required name). 
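
    For example, a minimal illustration (hypothetical argument names; the
    callable takes ``context`` but the flow stores that value under
    ``admin_context``)::

        _build_rebind_dict(('context',), ('admin_context',))
        # => {'context': 'admin_context'}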
""" if rebind_args is None: return {} elif isinstance(rebind_args, (list, tuple)): rebind = dict(zip(args, rebind_args)) if len(args) < len(rebind_args): rebind.update((a, a) for a in rebind_args[len(args):]) return rebind elif isinstance(rebind_args, dict): return rebind_args else: raise TypeError('Invalid rebind value: %s' % rebind_args) def _build_arg_mapping(task_name, reqs, rebind_args, function, do_infer): """Given a function, its requirements and a rebind mapping this helper function will build the correct argument mapping for the given function as well as verify that the final argument mapping does not have missing or extra arguments (where applicable). """ task_args = reflection.get_callable_args(function, required_only=True) result = {} if reqs: result.update((a, a) for a in reqs) if do_infer: result.update((a, a) for a in task_args) result.update(_build_rebind_dict(task_args, rebind_args)) if not reflection.accepts_kwargs(function): all_args = reflection.get_callable_args(function, required_only=False) extra_args = set(result) - set(all_args) if extra_args: extra_args_str = ', '.join(sorted(extra_args)) raise ValueError('Extra arguments given to task %s: %s' % (task_name, extra_args_str)) # NOTE(imelnikov): don't use set to preserve order in error message missing_args = [arg for arg in task_args if arg not in result] if missing_args: raise ValueError('Missing arguments for task %s: %s' % (task_name, ' ,'.join(missing_args))) return result class Atom(object): """An abstract flow atom that causes a flow to progress (in some manner). An atom is a named object that operates with input flow data to perform some action that furthers the overall flows progress. It usually also produces some of its own named output as a result of this process. """ def __init__(self, name=None, provides=None): self._name = name # An *immutable* output 'resource' name dict this atom # produces that other atoms may depend on this atom providing. # # Format is output index:arg_name self.save_as = _save_as_to_mapping(provides) # This identifies the version of the atom to be ran which # can be useful in resuming older versions of atoms. Standard # major, minor version semantics apply. self.version = (1, 0) def _build_arg_mapping(self, executor, requires=None, rebind=None, auto_extract=True): self.rebind = _build_arg_mapping(self.name, requires, rebind, executor, auto_extract) @property def name(self): return self._name def __str__(self): return "%s==%s" % (self.name, misc.get_version_string(self)) @property def provides(self): """Any outputs this atom produces.""" return set(self.save_as) @property def requires(self): """Any inputs this atom requires to execute.""" return set(self.rebind.values()) taskflow-0.1.3/taskflow/examples/0000775000175300017540000000000012275003604020151 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/examples/reverting_linear.out.txt0000664000175300017540000000020412275003514025053 0ustar jenkinsjenkins00000000000000Calling jim 555. Calling joe 444. Calling 444 and apologizing. Calling 555 and apologizing. Flow failed: Suzzie not home right now. taskflow-0.1.3/taskflow/examples/create_parallel_volume.py0000664000175300017540000001055212275003514025234 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import logging
import os
import random
import sys
import time

logging.basicConfig(level=logging.ERROR)

top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                       os.pardir,
                                       os.pardir))
sys.path.insert(0, top_dir)

from taskflow import engines
from taskflow.listeners import printing
from taskflow.patterns import unordered_flow as uf
from taskflow import task
from taskflow.utils import reflection

# INTRO: This example shows how unordered_flow can be used to create a large
# number of fake volumes in parallel (or serially, depending on a constant
# that can be easily changed).


@contextlib.contextmanager
def show_time(name):
    start = time.time()
    yield
    end = time.time()
    print(" -- %s took %0.3f seconds" % (name, end - start))


# This affects how many volumes to create and how much time to *simulate*
# passing for that volume to be created.
MAX_CREATE_TIME = 3
VOLUME_COUNT = 5

# This will be used to determine if all the volumes are created in parallel
# or whether the volumes are created serially (in an undefined order, since
# an unordered flow is used). Note that there is a disconnect between
# ordering and the concept of parallelism (since unordered items can still
# be run in a serial ordering). A typical use-case for offering both is to
# allow for debugging using a serial approach, while when running at a
# larger scale one would likely want to use the parallel approach.
#
# If you switch this flag from serial to parallel you can see the overall
# time difference that this causes.
SERIAL = False
if SERIAL:
    engine_conf = {
        'engine': 'serial',
    }
else:
    engine_conf = {
        'engine': 'parallel',
    }


class VolumeCreator(task.Task):
    def __init__(self, volume_id):
        # Note here that the volume name is composed of the name of the class
        # along with the volume id that is being created; since the name of a
        # task uniquely identifies that task in storage, it is important that
        # the name be relevant and identifiable if the task is recreated for
        # subsequent resumption (if applicable).
        #
        # UUIDs are *not* used as they can not be tied back to a previous
        # task's state on resumption (since they are unique and will vary for
        # each task that is created). A name based off the volume id that is
        # to be created is more easily tied back to the original task so that
        # the volume create can be resumed/reverted, and is much easier to
        # use for audit and tracking purposes.
        base_name = reflection.get_callable_name(self)
        super(VolumeCreator, self).__init__(name="%s-%s" % (base_name,
                                                            volume_id))
        self._volume_id = volume_id

    def execute(self):
        print("Making volume %s" % (self._volume_id))
        time.sleep(random.random() * MAX_CREATE_TIME)
        print("Finished making volume %s" % (self._volume_id))


# Assume there is no ordering dependency between volumes.
flow = uf.Flow("volume-maker")
for i in range(0, VOLUME_COUNT):
    flow.add(VolumeCreator(volume_id="vol-%s" % (i)))


# Show how much time the overall engine loading and running takes.
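#
# As a rough, hypothetical expectation (not a measurement): with the serial
# engine the total time approaches the sum of the random create times (up to
# VOLUME_COUNT * MAX_CREATE_TIME seconds), while the parallel engine
# approaches the largest single create time (up to MAX_CREATE_TIME seconds).
#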
with show_time(name=flow.name.title()):
    eng = engines.load(flow, engine_conf=engine_conf)
    # This context manager automatically adds (and automatically removes) a
    # helpful set of state transition notification printing helper utilities
    # that show you exactly what transitions the engine is going through
    # while running the various volume create tasks.
    with printing.PrintingListener(eng):
        eng.run()
taskflow-0.1.3/taskflow/examples/resume_many_flows.out.txt0000664000175300017540000000140612275003514025257 0ustar jenkinsjenkins00000000000000Run flow:
Running flow example 18995b55-aaad-49fa-938f-006ac21ea4c7
executing first==1.0
executing boom==1.0
> this time not exiting
executing second==1.0

Run flow, something happens:
Running flow example f8f62ea6-1c9b-4e81-9ff9-1acaa299a648
executing first==1.0
executing boom==1.0
> Critical error: boom = exit please

Run flow, something happens again:
Running flow example 16f11c15-4d8a-4552-b422-399565c873c4
executing first==1.0
executing boom==1.0
> Critical error: boom = exit please

Resuming all failed flows
Resuming flow example f8f62ea6-1c9b-4e81-9ff9-1acaa299a648
executing boom==1.0
> this time not exiting
executing second==1.0
Resuming flow example 16f11c15-4d8a-4552-b422-399565c873c4
executing boom==1.0
> this time not exiting
executing second==1.0
taskflow-0.1.3/taskflow/examples/resume_many_flows.py0000664000175300017540000000624412275003514024267 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import subprocess
import sys
import tempfile

self_dir = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, self_dir)

import example_utils  # noqa

# INTRO: In this example we create a common persistence database (sqlite
# based) and then run a set of processes which themselves use this
# persistence database. Those processes 'crash' (in a simulated way) by
# exiting with a system error exception. After this occurs a few times we
# then activate a script which doesn't 'crash'; it resumes all of the
# engines' flows that did not complete and runs them to completion (instead
# of crashing).
#
# This shows how a set of tasks can be finished even after repeatedly being
# crashed (*crash resistance*, if you will), due to the engine concept as
# well as the persistence layer, which keeps track of the states a flow
# transitions through and persists the intermediary inputs and outputs and
# overall flow state.
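#
# A condensed sketch of the cycle this example drives (the helper scripts
# live in the resume_many_flows/ directory next to this file):
#
#   run_flow.py                -> flow runs to completion
#   BOOM=... run_flow.py       -> flow 'crashes' partway; state is persisted
#   BOOM=... run_flow.py       -> another partially-complete flow accumulates
#   resume_all.py              -> finds unfinished flows and finishes them
#
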
def _exec(cmd, add_env=None):
    env = None
    if add_env:
        env = os.environ.copy()
        env.update(add_env)

    proc = subprocess.Popen(cmd, env=env, stdin=None,
                            stdout=subprocess.PIPE,
                            stderr=sys.stderr)

    stdout, stderr = proc.communicate()
    rc = proc.returncode
    if rc != 0:
        raise RuntimeError("Could not run %s [%s]" % (cmd, rc))
    print(stdout.decode())


def _path_to(name):
    return os.path.abspath(os.path.join(os.path.dirname(__file__),
                                        'resume_many_flows', name))


def main():
    backend_uri = None
    tmp_path = None
    try:
        if example_utils.SQLALCHEMY_AVAILABLE:
            tmp_path = tempfile.mktemp(prefix='tf-resume-example')
            backend_uri = "sqlite:///%s" % (tmp_path)
        else:
            tmp_path = tempfile.mkdtemp(prefix='tf-resume-example')
            backend_uri = 'file:///%s' % (tmp_path)

        def run_example(name, add_env=None):
            _exec([sys.executable, _path_to(name), backend_uri], add_env)

        print('Run flow:')
        run_example('run_flow.py')

        print('\nRun flow, something happens:')
        run_example('run_flow.py', {'BOOM': 'exit please'})

        print('\nRun flow, something happens again:')
        run_example('run_flow.py', {'BOOM': 'exit please'})

        print('\nResuming all failed flows')
        run_example('resume_all.py')
    finally:
        if tmp_path:
            example_utils.rm_path(tmp_path)


if __name__ == '__main__':
    main()
taskflow-0.1.3/taskflow/examples/resume_volume_create.py0000664000175300017540000001335512275003514024744 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import hashlib
import logging
import os
import random
import sys
import time

logging.basicConfig(level=logging.ERROR)

self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                       os.pardir,
                                       os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)

from taskflow.patterns import graph_flow as gf
from taskflow.patterns import linear_flow as lf

from taskflow import engines
from taskflow import task
from taskflow.utils import persistence_utils as p_utils

import example_utils  # noqa

# INTRO: This example shows how a hierarchy of flows can be used to create a
# pseudo-volume in a reliable & resumable manner using taskflow + a miniature
# version of what cinder does while creating a volume (very miniature).


@contextlib.contextmanager
def slow_down(how_long=0.5):
    try:
        yield how_long
    finally:
        print("** Ctrl-c me please!!! **")
        time.sleep(how_long)


def find_flow_detail(backend, book_id, flow_id):
    # NOTE(harlowja): this is used to attempt to find a given logbook with
    # a given id and a given flow detail inside that logbook; we need this
    # reference so that we can resume the correct flow (as a logbook tracks
    # flows and a flow detail tracks an individual flow).
    #
    # Without a reference to the logbook and the flow details in that
    # logbook we will not know exactly what we should resume, and that would
    # mean we can't resume what we don't know.
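    #
    # The 'tracking id' this example prints later is just these two uuids
    # joined with '+' (see the sys.argv handling below); resuming splits
    # that string back apart and looks both objects up here.
    #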
with contextlib.closing(backend.get_connection()) as conn: lb = conn.get_logbook(book_id) return lb.find(flow_id) class PrintText(task.Task): def __init__(self, print_what, no_slow=False): content_hash = hashlib.md5(print_what.encode('utf-8')).hexdigest()[0:8] super(PrintText, self).__init__(name="Print: %s" % (content_hash)) self._text = print_what self._no_slow = no_slow def execute(self): if self._no_slow: print("-" * (len(self._text))) print(self._text) print("-" * (len(self._text))) else: with slow_down(): print("-" * (len(self._text))) print(self._text) print("-" * (len(self._text))) class CreateSpecForVolumes(task.Task): def execute(self): volumes = [] for i in range(0, random.randint(1, 10)): volumes.append({ 'type': 'disk', 'location': "/dev/vda%s" % (i + 1), }) return volumes class PrepareVolumes(task.Task): def execute(self, volume_specs): for v in volume_specs: with slow_down(): print("Dusting off your hard drive %s" % (v)) with slow_down(): print("Taking a well deserved break.") print("Your drive %s has been certified." % (v)) # Setup the set of things to do (mini-cinder). flow = lf.Flow("root").add( PrintText("Starting volume create", no_slow=True), gf.Flow('maker').add( CreateSpecForVolumes("volume_specs", provides='volume_specs'), PrintText("I need a nap, it took me a while to build those specs."), PrepareVolumes(), ), PrintText("Finished volume create", no_slow=True)) # Setup the persistence & resumption layer. with example_utils.get_backend() as backend: try: book_id, flow_id = sys.argv[2].split("+", 1) except (IndexError, ValueError): book_id = None flow_id = None if not all([book_id, flow_id]): # If no 'tracking id' (think a fedex or ups tracking id) is provided # then we create one by creating a logbook (where flow details are # stored) and creating a flow detail (where flow and task state is # stored). The combination of these 2 objects unique ids (uuids) allows # the users of taskflow to reassociate the workflows that were # potentially running (and which may have partially completed) back # with taskflow so that those workflows can be resumed (or reverted) # after a process/thread/engine has failed in someway. logbook = p_utils.temporary_log_book(backend) flow_detail = p_utils.create_flow_detail(flow, logbook, backend) print("!! Your tracking id is: '%s+%s'" % (logbook.uuid, flow_detail.uuid)) print("!! Please submit this on later runs for tracking purposes") else: flow_detail = find_flow_detail(backend, book_id, flow_id) # Load and run. engine_conf = { 'engine': 'serial', } engine = engines.load(flow, flow_detail=flow_detail, backend=backend, engine_conf=engine_conf) engine.run() # How to use. # # 1. $ python me.py "sqlite:////tmp/cinder.db" # 2. ctrl-c before this finishes # 3. Find the tracking id (search for 'Your tracking id is') # 4. $ python me.py "sqlite:////tmp/cinder.db" "$tracking_id" # 5. Profit! taskflow-0.1.3/taskflow/examples/example_utils.py0000664000175300017540000000601412275003514023377 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import logging import os import shutil import sys import tempfile from taskflow import exceptions from taskflow.openstack.common.py3kcompat import urlutils from taskflow.persistence import backends LOG = logging.getLogger(__name__) try: import sqlalchemy as _sa # noqa SQLALCHEMY_AVAILABLE = True except ImportError: SQLALCHEMY_AVAILABLE = False def rm_path(persist_path): if not os.path.exists(persist_path): return if os.path.isdir(persist_path): rm_func = shutil.rmtree elif os.path.isfile(persist_path): rm_func = os.unlink else: raise ValueError("Unknown how to `rm` path: %s" % (persist_path)) try: rm_func(persist_path) except (IOError, OSError): pass def _make_conf(backend_uri): parsed_url = urlutils.urlparse(backend_uri) backend_type = parsed_url.scheme.lower() if not backend_type: raise ValueError("Unknown backend type for uri: %s" % (backend_type)) if backend_type in ('file', 'dir'): conf = { 'path': parsed_url.path, 'connection': backend_uri, } else: conf = { 'connection': backend_uri, } return conf @contextlib.contextmanager def get_backend(backend_uri=None): tmp_dir = None if not backend_uri: if len(sys.argv) > 1: backend_uri = str(sys.argv[1]) if not backend_uri: tmp_dir = tempfile.mkdtemp() backend_uri = "file:///%s" % tmp_dir try: backend = backends.fetch(_make_conf(backend_uri)) except exceptions.NotFound as e: # Fallback to one that will work if the provided backend is not found. if not tmp_dir: tmp_dir = tempfile.mkdtemp() backend_uri = "file:///%s" % tmp_dir LOG.exception("Falling back to file backend using temporary" " directory located at: %s", tmp_dir) backend = backends.fetch(_make_conf(backend_uri)) else: raise e try: # Ensure schema upgraded before we continue working. with contextlib.closing(backend.get_connection()) as conn: conn.upgrade() yield backend finally: # Make sure to cleanup the temporary path if one was created for us. if tmp_dir: rm_path(tmp_dir) taskflow-0.1.3/taskflow/examples/reverting_linear.py0000664000175300017540000000771112275003514024070 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example we create three tasks, each of which ~calls~ a given # number (provided as a function input), one of those tasks fails calling a # given number (the suzzie calling); this causes the workflow to enter the # reverting process, which activates the revert methods of the previous two # phone ~calls~. # # This simulated calling makes it appear like all three calls occur or all # three don't occur (transaction-like capabilities). No persistence layer is # used here so reverting and executing will not handle process failure. # # This example shows a basic usage of the taskflow structures without involving # the complexity of persistence. Using the structures that taskflow provides # via tasks and flows makes it possible for you to easily at a later time # hook in a persistence layer (and then gain the functionality that offers) # when you decide the complexity of adding that layer in is 'worth it' for your # applications usage pattern (which some applications may not need). class CallJim(task.Task): def execute(self, jim_number, *args, **kwargs): print("Calling jim %s." % jim_number) def revert(self, jim_number, *args, **kwargs): print("Calling %s and apologizing." % jim_number) class CallJoe(task.Task): def execute(self, joe_number, *args, **kwargs): print("Calling joe %s." % joe_number) def revert(self, joe_number, *args, **kwargs): print("Calling %s and apologizing." % joe_number) class CallSuzzie(task.Task): def execute(self, suzzie_number, *args, **kwargs): raise IOError("Suzzie not home right now.") # Create your flow and associated tasks (the work to be done). flow = lf.Flow('simple-linear').add( CallJim(), CallJoe(), CallSuzzie() ) try: # Now run that flow using the provided initial data (store below). taskflow.engines.run(flow, store=dict(joe_number=444, jim_number=555, suzzie_number=666)) except Exception as e: # NOTE(harlowja): This exception will be the exception that came out of the # 'CallSuzzie' task instead of a different exception, this is useful since # typically surrounding code wants to handle the original exception and not # a wrapped or altered one. # # *WARNING* If this flow was multi-threaded and multiple active tasks threw # exceptions then the above exception would be wrapped into a combined # exception (the object has methods to iterate over the contained # exceptions). See: exceptions.py and the class 'WrappedFailure' to look at # how to deal with multiple tasks failing while running. # # You will also note that this is not a problem in this case since no # parallelism is involved; this is ensured by the usage of a linear flow, # which runs serially as well as the default engine type which is 'serial'. 
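    #
    # A comment-level sketch (an assumption based on the WrappedFailure
    # iteration support mentioned above) of what handling the parallel,
    # multi-failure case could look like:
    #
    #   from taskflow import exceptions as exc
    #   try:
    #       taskflow.engines.run(flow, store=...)
    #   except exc.WrappedFailure as wrapped_failure:
    #       for failure in wrapped_failure:
    #           print("A task failed: %s" % failure)
    #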
print("Flow failed: %s" % e) taskflow-0.1.3/taskflow/examples/resume_from_backend.out.txt0000664000175300017540000000124612275003514025515 0ustar jenkinsjenkins00000000000000----------------------------------- At the beginning, there is no state ----------------------------------- Flow 'resume from backend example' state: None ------- Running ------- executing first==1.0 ------------- After running ------------- Flow 'resume from backend example' state: SUSPENDED boom==1.0: SUCCESS, result=None first==1.0: SUCCESS, result=ok second==1.0: PENDING, result=None -------------------------- Resuming and running again -------------------------- executing second==1.0 ---------- At the end ---------- Flow 'resume from backend example' state: SUCCESS boom==1.0: SUCCESS, result=None first==1.0: SUCCESS, result=ok second==1.0: SUCCESS, result=ok taskflow-0.1.3/taskflow/examples/build_a_car.py0000664000175300017540000001413312275003514022751 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: This examples shows how a graph_flow and linear_flow can be used # together to execute non-dependent tasks by going through the steps required # to build a simplistic car (an assembly line if you will). It also shows # how raw functions can be wrapped into a task object instead of being forced # to use the more heavy task base class. This is useful in scenarios where # pre-existing code has functions that you easily want to plug-in to taskflow, # without requiring a large amount of code changes. def build_frame(): return 'steel' def build_engine(): return 'honda' def build_doors(): return '2' def build_wheels(): return '4' def install_engine(frame, engine): return True def install_doors(frame, windows_installed, doors): return True def install_windows(frame, doors): return True def install_wheels(frame, engine, engine_installed, wheels): return True def trash(**kwargs): print_wrapped("Throwing away pieces of car!") def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) def startup(**kwargs): # If you want to see the rollback function being activated try uncommenting # the following line. # # raise ValueError("Car not verified") return True def verify(spec, **kwargs): # If the car is not what we ordered throw away the car (trigger reversion). 
for key, value in kwargs.items(): if spec[key] != value: raise Exception("Car doesn't match spec!") return True # These two functions connect into the state transition notification emission # points that the engine outputs; they can be used to log state transitions # that are occurring, or to suspend the engine (or perform other useful # activities). def flow_watch(state, details): print('Flow => %s' % state) def task_watch(state, details): print('Task %s => %s' % (details.get('task_name'), state)) flow = lf.Flow("make-auto").add( task.FunctorTask(startup, revert=trash, provides='ran'), gf.Flow("install-parts").add( task.FunctorTask(build_frame, provides='frame'), task.FunctorTask(build_engine, provides='engine'), task.FunctorTask(build_doors, provides='doors'), task.FunctorTask(build_wheels, provides='wheels'), # These *_installed outputs allow for other tasks to depend on certain # actions being performed (aka the components were installed). Another # way to do this is to link() the tasks manually instead of creating # an 'artificial' data dependency that accomplishes the same goal as # the manual linking would. task.FunctorTask(install_engine, provides='engine_installed'), task.FunctorTask(install_doors, provides='doors_installed'), task.FunctorTask(install_windows, provides='windows_installed'), task.FunctorTask(install_wheels, provides='wheels_installed')), task.FunctorTask(verify, requires=['frame', 'engine', 'doors', 'wheels', 'engine_installed', 'doors_installed', 'windows_installed', 'wheels_installed'])) # This dictionary will be provided to the tasks as a specification for what # the tasks should produce; in this example this specification will influence # what those tasks do and what output they create. Different tasks depend on # different information from this specification, all of which will be provided # automatically by the engine. spec = { "frame": 'steel', "engine": 'honda', "doors": '2', "wheels": '4', # These are used to compare the result product; a car without the pieces # installed is not a car after all. "engine_installed": True, "doors_installed": True, "windows_installed": True, "wheels_installed": True, } engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) # This registers all (*) state transitions to trigger a call to the flow_watch # function for flow state transitions, and registers the same all (*) state # transitions for task state transitions. engine.notifier.register('*', flow_watch) engine.task_notifier.register('*', task_watch) print_wrapped("Building a car") engine.run() # Alter the specification and ensure that the reverting logic gets triggered, # since the build_doors function will still build a car with 2 doors only (not # the 5 we now ask for); this will cause the verification task to mark the car # that is produced as not matching the desired spec. spec['doors'] = 5 engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) engine.notifier.register('*', flow_watch) engine.task_notifier.register('*', task_watch) print_wrapped("Building a wrong car that doesn't match specification") try: engine.run() except Exception as e: print_wrapped("Flow failed: %s" % e) taskflow-0.1.3/taskflow/examples/resume_vm_boot.py0000664000175300017540000002253112275003514023553 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import hashlib import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.openstack.common import uuidutils from taskflow import engines from taskflow import exceptions as exc from taskflow import task from taskflow.utils import eventlet_utils as e_utils from taskflow.utils import persistence_utils as p_utils import example_utils # noqa # INTRO: This example shows how a hierarchy of flows can be used to create a # vm in a reliable & resumable manner using taskflow + a miniature version of # what nova does while booting a vm. @contextlib.contextmanager def slow_down(how_long=0.5): try: yield how_long finally: if len(sys.argv) > 1: # Only bother to do this if user input was provided. print("** Ctrl-c me please!!! **") time.sleep(how_long) def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) class PrintText(task.Task): """Just inserts some text printouts in a workflow.""" def __init__(self, print_what, no_slow=False): content_hash = hashlib.md5(print_what.encode('utf-8')).hexdigest()[0:8] super(PrintText, self).__init__(name="Print: %s" % (content_hash)) self._text = print_what self._no_slow = no_slow def execute(self): if self._no_slow: print_wrapped(self._text) else: with slow_down(): print_wrapped(self._text) class DefineVMSpec(task.Task): """Defines the specification of the vm to-be.""" def __init__(self, name): super(DefineVMSpec, self).__init__(provides='vm_spec', name=name) def execute(self): return { 'type': 'kvm', 'disks': 2, 'vcpu': 1, 'ips': 1, 'volumes': 3, } class LocateImages(task.Task): """Locates where the vm images are.""" def __init__(self, name): super(LocateImages, self).__init__(provides='image_locations', name=name) def execute(self, vm_spec): image_locations = {} for i in range(0, vm_spec['disks']): url = "http://www.yahoo.com/images/%s" % (i) image_locations[url] = "/tmp/%s.img" % (i) return image_locations class DownloadImages(task.Task): """Downloads all the vm images.""" def __init__(self, name): super(DownloadImages, self).__init__(provides='download_paths', name=name) def execute(self, image_locations): for src, loc in image_locations.items(): with slow_down(1): print("Downloading from %s => %s" % (src, loc)) return sorted(image_locations.values()) class CreateNetworkTpl(task.Task): """Generates the network settings file to be placed in the images.""" SYSCONFIG_CONTENTS = """DEVICE=eth%s BOOTPROTO=static IPADDR=%s ONBOOT=yes""" def __init__(self, name): super(CreateNetworkTpl, self).__init__(provides='network_settings', name=name) def execute(self, ips): settings = [] for i, ip in enumerate(ips): settings.append(self.SYSCONFIG_CONTENTS % (i, ip))
return settings class AllocateIP(task.Task): """Allocates the ips for the given vm.""" def __init__(self, name): super(AllocateIP, self).__init__(provides='ips', name=name) def execute(self, vm_spec): ips = [] for i in range(0, vm_spec.get('ips', 0)): ips.append("192.168.0.%s" % (random.randint(1, 254))) return ips class WriteNetworkSettings(task.Task): """Writes all the network settings into the downloaded images.""" def execute(self, download_paths, network_settings): for j, path in enumerate(download_paths): with slow_down(1): print("Mounting %s to /tmp/%s" % (path, j)) for i, setting in enumerate(network_settings): filename = ("/tmp/etc/sysconfig/network-scripts/" "ifcfg-eth%s" % (i)) with slow_down(1): print("Writing to %s" % (filename)) print(setting) class BootVM(task.Task): """Fires off the vm boot operation.""" def execute(self, vm_spec): print("Starting vm!") with slow_down(1): print("Created: %s" % (vm_spec)) class AllocateVolumes(task.Task): """Allocates the volumes for the vm.""" def execute(self, vm_spec): volumes = [] for i in range(0, vm_spec['volumes']): with slow_down(1): volumes.append("/dev/vda%s" % (i + 1)) print("Allocated volume %s" % volumes[-1]) return volumes class FormatVolumes(task.Task): """Formats the volumes for the vm.""" def execute(self, volumes): for v in volumes: print("Formatting volume %s" % v) with slow_down(1): pass print("Formatted volume %s" % v) def create_flow(): # Setup the set of things to do (mini-nova). flow = lf.Flow("root").add( PrintText("Starting vm creation.", no_slow=True), lf.Flow('vm-maker').add( # First create a specification for the final vm to-be. DefineVMSpec("define_spec"), # This does all the image stuff. gf.Flow("img-maker").add( LocateImages("locate_images"), DownloadImages("download_images"), ), # This does all the network stuff. gf.Flow("net-maker").add( AllocateIP("get_my_ips"), CreateNetworkTpl("fetch_net_settings"), WriteNetworkSettings("write_net_settings"), ), # This does all the volume stuff. gf.Flow("volume-maker").add( AllocateVolumes("allocate_my_volumes", provides='volumes'), FormatVolumes("volume_formatter"), ), # Finally boot it all. BootVM("boot-it"), ), # Ya it worked! PrintText("Finished vm create.", no_slow=True), PrintText("Instance is running!", no_slow=True)) return flow print_wrapped("Initializing") # Setup the persistence & resumption layer. with example_utils.get_backend() as backend: try: book_id, flow_id = sys.argv[2].split("+", 1) if not uuidutils.is_uuid_like(book_id): book_id = None if not uuidutils.is_uuid_like(flow_id): flow_id = None except (IndexError, ValueError): book_id = None flow_id = None # Set up how we want our engine to run, serial, parallel... engine_conf = { 'engine': 'parallel', } if e_utils.EVENTLET_AVAILABLE: engine_conf['executor'] = e_utils.GreenExecutor(5) # Create/fetch a logbook that will track the workflows work. book = None flow_detail = None if all([book_id, flow_id]): with contextlib.closing(backend.get_connection()) as conn: try: book = conn.get_logbook(book_id) flow_detail = book.find(flow_id) except exc.NotFound: pass if book is None and flow_detail is None: book = p_utils.temporary_log_book(backend) engine = engines.load_from_factory(create_flow, backend=backend, book=book, engine_conf=engine_conf) print("!! Your tracking id is: '%s+%s'" % (book.uuid, engine.storage.flow_uuid)) print("!! Please submit this on later runs for tracking purposes") else: # Attempt to load from a previously partially completed flow. 
engine = engines.load_from_detail(flow_detail, backend=backend, engine_conf=engine_conf) # Make me my vm please! print_wrapped('Running') engine.run() # How to use. # # 1. $ python me.py "sqlite:////tmp/nova.db" # 2. ctrl-c before this finishes # 3. Find the tracking id (search for 'Your tracking id is') # 4. $ python me.py "sqlite:////tmp/nova.db" "$tracking_id" # 5. Watch it pick up where it left off. # 6. Profit! taskflow-0.1.3/taskflow/examples/calculate_linear.py0000664000175300017540000001102112275003514024005 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example linear_flow is used to group four tasks to calculate # a value. A single task type (Adder) is added twice, showing how this can be # done, with each addition taking in differently bound values. In the first # case it uses the default parameters ('x' and 'y') and in the second case the # arguments are bound to the ('z', 'd') keys from the engine's storage # mechanism. # # A multiplier task uses a binding that another task also provides, but this # example explicitly shows that the 'z' parameter is bound to the 'a' key. # This shows that if a task depends on a key named the same as a key provided # by another task, the name can be remapped to take the desired key from a # different origin. # This task provides some values as a result of execution; this can be # useful when you want to provide values from a static set to other tasks that # depend on those values existing before those tasks can run. # # This method is *deprecated* in favor of a simpler mechanism that just # provides those values when the engine runs, by prepopulating the storage # backend before your tasks are run (which accomplishes a similar goal in a # more uniform manner). class Provider(task.Task): def __init__(self, name, *args, **kwargs): super(Provider, self).__init__(name=name, **kwargs) self._provide = args def execute(self): return self._provide # This task adds two input variables and returns the result. # # Note that since this task does not have a revert() function (since addition # is a stateless operation) there are no side-effects that this function needs # to undo if some later operation fails. class Adder(task.Task): def execute(self, x, y): return x + y # This task multiplies an input variable by a multiplier and returns the # result. # # Note that since this task does not have a revert() function (since # multiplication is a stateless operation) there are no side-effects that # this function needs to undo if some later operation fails. (For contrast, a # hedged sketch of a task that does carry side-effects, and therefore needs a # revert(), follows below.)
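# A minimal hedged sketch (the 'WriteEntry' name and its prints are # hypothetical, not part of this example) of a task whose execute() has # side-effects; its revert() receives the execute() result via the 'result' # keyword argument (as MakeDBEntry.revert() does in fake_billing.py) and # undoes the work when some later task fails: # # class WriteEntry(task.Task): # def execute(self, value): # print("Writing %s." % value) # return value # # def revert(self, value, result, **kwargs): # # Undo the side-effect that execute() performed. # print("Erasing %s." % result)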
class Multiplier(task.Task): def __init__(self, name, multiplier, provides=None, rebind=None): super(Multiplier, self).__init__(name=name, provides=provides, rebind=rebind) self._multiplier = multiplier def execute(self, z): return z * self._multiplier # Note here that the ordering is established so that the correct sequence # of operations occurs, where the adding and multiplying is done according # to the expected and typical mathematical model. A graph_flow could also be # used here to automatically ensure the correct ordering. flow = lf.Flow('root').add( # Provide the initial values for other tasks to depend on. # # x = 2, y = 3, d = 5 Provider("provide-adder", 2, 3, 5, provides=('x', 'y', 'd')), # z = x+y = 5 Adder("add-1", provides='z'), # a = z+d = 10 Adder("add-2", provides='a', rebind=['z', 'd']), # Calculate 'r = a*3 = 30' # # Note here that the 'z' argument of the execute() function will not be # bound to the 'z' variable provided from the above 'provider' object but # instead the 'z' argument will be taken from the 'a' variable provided # by the second add-2 listed above. Multiplier("multi", 3, provides='r', rebind={'z': 'a'}) ) # The result here will be all results (from all tasks), which are stored in an # in-memory storage location that backs this engine, since it is not configured # with persistent storage. results = taskflow.engines.run(flow) print(results) taskflow-0.1.3/taskflow/examples/simple_linear_listening.py0000664000175300017540000000754112275003514025431 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example we create two tasks (this time as functions instead # of task subclasses as in the simple_linear.py example), each of which ~calls~ # a given ~phone~ number (provided as a function input) in a linear fashion # (one after the other). # # For a workflow which is serial this shows an extremely simple way # of structuring your tasks (the code that does the work) into a linear # sequence (the flow) and then passing the work off to an engine, with some # initial data, to be run in a reliable manner. # # This example shows basic usage of the taskflow structures without involving # the complexity of persistence. Using the structures that taskflow provides # via tasks and flows makes it possible for you to easily hook in a persistence # layer at a later time (and then gain the functionality that it offers) once # you decide the complexity of adding that layer is 'worth it' for your # application's usage pattern (some applications may not need it).
# # It **also** adds on to the simple_linear.py example by adding a set of # callback functions which the engine will call when a flow state transition # or task state transition occurs. These types of functions are useful for # updating task or flow progress, for debugging, for sending notifications to # external systems, or for other yet-unknown future usages that you may create! def call_jim(context): print("Calling jim.") print("Context = %s" % (sorted(context.items(), key=lambda x: x[0]))) def call_joe(context): print("Calling joe.") print("Context = %s" % (sorted(context.items(), key=lambda x: x[0]))) def flow_watch(state, details): print('Flow => %s' % state) def task_watch(state, details): print('Task %s => %s' % (details.get('task_name'), state)) # Wrap your functions into a task type that knows how to treat your functions # as tasks. There was previous work done to just allow a function to be # directly passed, but in python 3.0 there is no easy way to capture an # instance method, so this wrapping approach, which can attach to instance # methods (if that's desired), was decided upon instead. flow = lf.Flow("Call-them") flow.add(task.FunctorTask(execute=call_jim)) flow.add(task.FunctorTask(execute=call_joe)) # Now load (but do not run) the flow using the provided initial data. engine = taskflow.engines.load(flow, store={ 'context': { "joe_number": 444, "jim_number": 555, } }) # This is where we attach our callback functions to the two different # notification objects that an engine exposes. The usage of a '*' (kleene star) # here means that we want to be notified on all state changes; if you want to # restrict to a specific state change, just register that instead. engine.notifier.register('*', flow_watch) engine.task_notifier.register('*', task_watch) # And now run! engine.run() taskflow-0.1.3/taskflow/examples/persistence_example.py0000664000175300017540000001016712275003514024567 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys import tempfile import traceback logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow.persistence import logbook from taskflow import task from taskflow.utils import persistence_utils as p_utils import example_utils # noqa # INTRO: In this example we create two tasks, one that will say hi and one # that will say bye, with an optional capability to raise an error while # executing. During execution, if a later task fails, the reverting that will # occur in the hi task will undo this (in a ~funny~ way).
# # To also show the effect of task persistence we create a temporary database # that will track the state transitions of this hi + bye workflow; this # persistence allows you to examine what is stored (using a sqlite client) # as well as to see what happens during reversion and what happens to # the database in both of these modes (failing or not failing). def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) class HiTask(task.Task): def execute(self): print("Hi!") def revert(self, **kwargs): print("Whooops, said hi too early, take that back!") class ByeTask(task.Task): def __init__(self, blowup): super(ByeTask, self).__init__() self._blowup = blowup def execute(self): if self._blowup: raise Exception("Fail!") print("Bye!") # This generates your flow structure (at this stage nothing is run). def make_flow(blowup=False): flow = lf.Flow("hello-world") flow.add(HiTask(), ByeTask(blowup)) return flow # Persist the flow and task state here. If the file/dir already exists then # don't blow up; if it does not exist then do blow up. This allows a user to # see both of the modes and what is stored in each case. if example_utils.SQLALCHEMY_AVAILABLE: persist_path = os.path.join(tempfile.gettempdir(), "persisting.db") backend_uri = "sqlite:///%s" % (persist_path) else: persist_path = os.path.join(tempfile.gettempdir(), "persisting") backend_uri = "file:///%s" % (persist_path) if os.path.exists(persist_path): blowup = False else: blowup = True with example_utils.get_backend(backend_uri) as backend: # Now we can run. engine_config = { 'backend': backend, 'engine_conf': 'serial', 'book': logbook.LogBook("my-test"), } # Make a flow that will blow up if the file didn't previously exist; if it # did exist, assume we won't blow up (and therefore this shows the undo # and redo that a flow will go through). flow = make_flow(blowup=blowup) print_wrapped("Running") try: eng = engines.load(flow, **engine_config) eng.run() if not blowup: example_utils.rm_path(persist_path) except Exception: # NOTE(harlowja): don't exit with a non-zero status code so that we can # print the book contents; avoiding exiting also makes the unit tests # (which also run these examples) pass. traceback.print_exc(file=sys.stdout) print_wrapped("Book contents") print(p_utils.pformat(engine_config['book'])) taskflow-0.1.3/taskflow/examples/buildsystem.py0000664000175300017540000000730012275003514023067 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow import task # In this example we demonstrate the use of TargetedFlow to make an # oversimplified build system.
It pretends to compile all sources to object files and # link them into an executable. It can also build docs, but this can be # "switched off" via the targeted flow's special power -- the ability to # ignore all tasks not needed by its target. class CompileTask(task.Task): """Pretends to take a source and make an object file.""" default_provides = 'object_filename' def execute(self, source_filename): object_filename = '%s.o' % os.path.splitext(source_filename)[0] print('Compiling %s into %s' % (source_filename, object_filename)) return object_filename class LinkTask(task.Task): """Pretends to link an executable from several object files.""" default_provides = 'executable' def __init__(self, executable_path, *args, **kwargs): super(LinkTask, self).__init__(*args, **kwargs) self._executable_path = executable_path def execute(self, **kwargs): object_filenames = list(kwargs.values()) print('Linking executable %s from files %s' % (self._executable_path, ', '.join(object_filenames))) return self._executable_path class BuildDocsTask(task.Task): """Pretends to build docs from sources.""" default_provides = 'docs' def execute(self, **kwargs): for source_filename in kwargs.values(): print("Building docs for %s" % source_filename) return 'docs' def make_flow_and_store(source_files, executable_only=False): flow = gf.TargetedFlow('build flow') object_targets = [] store = {} for source in source_files: source_stored = '%s-source' % source object_stored = '%s-object' % source store[source_stored] = source object_targets.append(object_stored) flow.add(CompileTask(name='compile-%s' % source, rebind={'source_filename': source_stored}, provides=object_stored)) flow.add(BuildDocsTask(requires=list(store.keys()))) # Try this to see the executable_only switch broken: object_targets.append('docs') link_task = LinkTask('build/executable', requires=object_targets) flow.add(link_task) if executable_only: flow.set_target(link_task) return flow, store SOURCE_FILES = ['first.c', 'second.cpp', 'main.cpp'] print('Running all tasks:') flow, store = make_flow_and_store(SOURCE_FILES) taskflow.engines.run(flow, store=store) print('\nBuilding executable, no docs:') flow, store = make_flow_and_store(SOURCE_FILES, executable_only=True) taskflow.engines.run(flow, store=store) taskflow-0.1.3/taskflow/examples/simple_linear_listening.out.txt0000664000175300017540000000045412275003514026422 0ustar jenkinsjenkins00000000000000Flow => RUNNING Task __main__.call_jim => RUNNING Calling jim. Context = [('jim_number', 555), ('joe_number', 444)] Task __main__.call_jim => SUCCESS Task __main__.call_joe => RUNNING Calling joe. Context = [('jim_number', 555), ('joe_number', 444)] Task __main__.call_joe => SUCCESS Flow => SUCCESS taskflow-0.1.3/taskflow/examples/simple_linear.out.txt0000664000175300017540000000004212275003514024337 0ustar jenkinsjenkins00000000000000Calling jim 555. Calling joe 444. taskflow-0.1.3/taskflow/examples/simple_linear.py0000664000175300017540000000474212275003514023355 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example we create two tasks, each of which ~calls~ a given # ~phone~ number (provided as a function input) in a linear fashion (one after # the other). For a workflow which is serial this shows an extremely simple way # of structuring your tasks (the code that does the work) into a linear # sequence (the flow) and then passing the work off to an engine, with some # initial data, to be run in a reliable manner. # # This example shows basic usage of the taskflow structures without involving # the complexity of persistence. Using the structures that taskflow provides # via tasks and flows makes it possible for you to easily hook in a persistence # layer at a later time (and then gain the functionality that it offers) once # you decide the complexity of adding that layer is 'worth it' for your # application's usage pattern (some applications may not need it). class CallJim(task.Task): def execute(self, jim_number, *args, **kwargs): print("Calling jim %s." % jim_number) class CallJoe(task.Task): def execute(self, joe_number, *args, **kwargs): print("Calling joe %s." % joe_number) # Create your flow and associated tasks (the work to be done). flow = lf.Flow('simple-linear').add( CallJim(), CallJoe() ) # Now run that flow using the provided initial data (store below). taskflow.engines.run(flow, store=dict(joe_number=444, jim_number=555)) taskflow-0.1.3/taskflow/examples/wrapped_exception.py0000664000175300017540000001204412275003514024244 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import logging import os import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow import exceptions from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.utils import misc # INTRO: In this example we create two tasks which can trigger exceptions # based on various inputs, to show how to analyze the thrown exceptions to # determine which types were thrown and to handle the different types in # different ways.
# # This is especially important if a set of tasks run in parallel and each of # those tasks may fail while running. This creates a scenario where multiple # exceptions have been thrown and those exceptions need to be handled in a # unified manner. Since an engine does not currently know how to resolve # those exceptions (someday it could) the code using that engine and activating # the flows and tasks using that engine will currently have to deal with # catching those exceptions (and silencing them if this is desired). # # NOTE(harlowja): The engine *will* trigger rollback even under multiple # exceptions being thrown, but at the end of that rollback the engine will # rethrow these exceptions to the code that called the run() method; allowing # that code to do further cleanups (if desired). def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) @contextlib.contextmanager def wrap_all_failures(): """Convert any exceptions to WrappedFailure. When you expect several failures, it may be convenient to wrap any exception with WrappedFailure in order to unify error handling. """ try: yield except Exception: raise exceptions.WrappedFailure([misc.Failure()]) class FirstException(Exception): """Exception that first task raises.""" class SecondException(Exception): """Exception that second task raises.""" class FirstTask(task.Task): def execute(self, sleep1, raise1): time.sleep(sleep1) if not isinstance(raise1, bool): raise TypeError('Bad raise1 value: %r' % raise1) if raise1: raise FirstException('First task failed') class SecondTask(task.Task): def execute(self, sleep2, raise2): time.sleep(sleep2) if not isinstance(raise2, bool): raise TypeError('Bad raise2 value: %r' % raise2) if raise2: raise SecondException('Second task failed') def run(**store): # Creates a flow, each task in the flow will examine the kwargs passed in # here and based on those kwargs it will behave in a different manner # while executing; this allows for the calling code (see below) to show # different usages of the failure catching and handling mechanism. flow = uf.Flow('flow').add( FirstTask(), SecondTask() ) try: with wrap_all_failures(): taskflow.engines.run(flow, store=store, engine_conf='parallel') except exceptions.WrappedFailure as ex: unknown_failures = [] for failure in ex: if failure.check(FirstException): print("Got FirstException: %s" % failure.exception_str) elif failure.check(SecondException): print("Got SecondException: %s" % failure.exception_str) else: print("Unknown failure: %s" % failure) unknown_failures.append(failure) misc.Failure.reraise_if_any(unknown_failures) print_wrapped("Raise and catch first exception only") run(sleep1=0.0, raise1=True, sleep2=0.0, raise2=False) # NOTE(imelnikov): in general, sleeping does not guarantee that we'll have both # task running before one of them fails, but with current implementation this # works most of times, which is enough for our purposes here (as an example). print_wrapped("Raise and catch both exceptions") run(sleep1=1.0, raise1=True, sleep2=1.0, raise2=True) print_wrapped("Handle one exception, and re-raise another") try: run(sleep1=1.0, raise1=True, sleep2=1.0, raise2='boom') except TypeError as ex: print("As expected, TypeError is here: %s" % ex) else: assert False, "TypeError expected" taskflow-0.1.3/taskflow/examples/resume_from_backend.py0000664000175300017540000001072512275003514024522 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. 
All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.utils import persistence_utils as p_utils import example_utils # noqa # INTRO: In this example linear_flow is used to group three tasks, one of which # will suspend the future work the engine may do. This suspended engine is then # discarded and the workflow is reloaded from the persisted data and then # resumed from where it was suspended. This allows you to see how to start an # engine, have a task stop the engine from doing future work (if a # multi-threaded engine is being used, the currently active work is not # preempted) and then resume the work later. ### UTILITY FUNCTIONS ######################################### def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) def print_task_states(flowdetail, msg): print_wrapped(msg) print("Flow '%s' state: %s" % (flowdetail.name, flowdetail.state)) # Sort by these so that our test validation doesn't get confused by the # order that the items in the flow detail happen to be in. items = sorted((td.name, td.version, td.state, td.results) for td in flowdetail) for item in items: print(" %s==%s: %s, result=%s" % item) def find_flow_detail(backend, lb_id, fd_id): conn = backend.get_connection() lb = conn.get_logbook(lb_id) return lb.find(fd_id) ### CREATE FLOW ############################################### class InterruptTask(task.Task): def execute(self): # DO NOT TRY THIS AT HOME engine.suspend() class TestTask(task.Task): def execute(self): print('executing %s' % self) return 'ok' def flow_factory(): return lf.Flow('resume from backend example').add( TestTask(name='first'), InterruptTask(name='boom'), TestTask(name='second')) ### INITIALIZE PERSISTENCE #################################### with example_utils.get_backend() as backend: logbook = p_utils.temporary_log_book(backend) ### CREATE AND RUN THE FLOW: FIRST ATTEMPT #################### flow = flow_factory() flowdetail = p_utils.create_flow_detail(flow, logbook, backend) engine = taskflow.engines.load(flow, flow_detail=flowdetail, backend=backend) print_task_states(flowdetail, "At the beginning, there is no state") print_wrapped("Running") engine.run() print_task_states(flowdetail, "After running") ### RE-CREATE, RESUME, RUN #################################### print_wrapped("Resuming and running again") # NOTE(harlowja): reload the flow detail from the backend; this will allow us # to resume the flow from its suspended state, but first we need to search # for the right flow detail in the correct logbook where things are # stored.
# # We could avoid re-loading the engine and just do engine.run() again, but # this example shows how another process may unsuspend a given flow and # start it again for situations where this is useful to do (say the process # running the above flow crashes). flow2 = flow_factory() flowdetail2 = find_flow_detail(backend, logbook.uuid, flowdetail.uuid) engine2 = taskflow.engines.load(flow2, flow_detail=flowdetail2, backend=backend) engine2.run() print_task_states(flowdetail2, "At the end") taskflow-0.1.3/taskflow/examples/calculate_in_parallel.py0000664000175300017540000000755312275003514025034 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import task # INTRO: This example shows how linear_flow and unordered_flow can be used # together to execute calculations in parallel and then use the # result in the next task. An Adder task is used for all calculations # and argument bindings are used to set the correct parameters for each task. # This task provides some values as a result of execution; this can be # useful when you want to provide values from a static set to other tasks that # depend on those values existing before those tasks can run. # # This method is *deprecated* in favor of a simpler mechanism that just # provides those values when the engine runs, by prepopulating the storage # backend before your tasks are run (which accomplishes a similar goal in a # more uniform manner). class Provider(task.Task): def __init__(self, name, *args, **kwargs): super(Provider, self).__init__(name=name, **kwargs) self._provide = args def execute(self): return self._provide # This task adds two input variables and returns the result of that addition. # # Note that since this task does not have a revert() function (since addition # is a stateless operation) there are no side-effects that this function needs # to undo if some later operation fails. class Adder(task.Task): def execute(self, x, y): return x + y flow = lf.Flow('root').add( # Provide the initial values for other tasks to depend on. # # x1 = 2, y1 = 3, x2 = 5, y2 = 8 Provider("provide-adder", 2, 3, 5, 8, provides=('x1', 'y1', 'x2', 'y2')), # Note here that we define the flow that contains the 2 adders to be an # unordered flow since the order in which these execute does not matter; # another way to solve this would be to use a graph_flow pattern, which # can also run in parallel (since they have no ordering dependencies).
uf.Flow('adders').add( # Calculate 'z1 = x1+y1 = 5' # # Rebind here means that the execute() function x argument will be # satisfied from a previous output named 'x1', and the y argument # of execute() will be populated from the previous output named 'y1'. # # The output (the result of adding) will be mapped into a variable named # 'z1' which can then be referred to and depended on by other tasks. Adder(name="add", provides='z1', rebind=['x1', 'y1']), # z2 = x2+y2 = 13 Adder(name="add-2", provides='z2', rebind=['x2', 'y2']), ), # r = z1+z2 = 18 Adder(name="sum-1", provides='r', rebind=['z1', 'z2'])) # The result here will be all results (from all tasks), which are stored in an # in-memory storage location that backs this engine, since it is not configured # with persistent storage. result = taskflow.engines.run(flow, engine_conf='parallel') print(result) taskflow-0.1.3/taskflow/examples/fake_billing.py0000664000175300017540000001520312275003514023132 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import logging import os import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow.openstack.common import uuidutils from taskflow import engines from taskflow.listeners import printing from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.utils import misc # INTRO: This example walks through a miniature workflow which simulates # the reception of an API request, creation of a database entry, driver # activation (which invokes a 'fake' webservice) and final completion. # # This example also shows how a function/object (in this case the url sending) # that occurs during driver activation can update the progress of a task # without being aware of the internals of how to do this, by associating a # callback that the url sending can update as the sending progresses from 0.0% # complete to 100% complete. class DB(object): def query(self, sql): print("Querying with: %s" % (sql)) class UrlCaller(object): def __init__(self): self._send_time = 0.5 self._chunks = 25 def send(self, url, data, status_cb=None): sleep_time = float(self._send_time) / self._chunks for i in range(0, len(data)): time.sleep(sleep_time) # As we send the data, each chunk we 'fake' send will advance the # sending progress that much further toward 100%. if status_cb: status_cb(float(i) / len(data)) # Since engines save the output of tasks to an optional persistent storage # backend, resources have to be dealt with in a slightly different manner, # since resources are transient and can not be persisted (or serialized).
For tasks # that require access to a set of resources it is a common pattern to provide # a object (in this case this object) on construction of those tasks via the # task constructor. class ResourceFetcher(object): def __init__(self): self._db_handle = None self._url_handle = None @property def db_handle(self): if self._db_handle is None: self._db_handle = DB() return self._db_handle @property def url_handle(self): if self._url_handle is None: self._url_handle = UrlCaller() return self._url_handle class ExtractInputRequest(task.Task): def __init__(self, resources): super(ExtractInputRequest, self).__init__(provides="parsed_request") self._resources = resources def execute(self, request): return { 'user': request.user, 'user_id': misc.as_int(request.id), 'request_id': uuidutils.generate_uuid(), } class MakeDBEntry(task.Task): def __init__(self, resources): super(MakeDBEntry, self).__init__() self._resources = resources def execute(self, parsed_request): db_handle = self._resources.db_handle db_handle.query("INSERT %s INTO mydb" % (parsed_request)) def revert(self, result, parsed_request): db_handle = self._resources.db_handle db_handle.query("DELETE %s FROM mydb IF EXISTS" % (parsed_request)) class ActivateDriver(task.Task): def __init__(self, resources): super(ActivateDriver, self).__init__(provides='sent_to') self._resources = resources self._url = "http://blahblah.com" def execute(self, parsed_request): print("Sending billing data to %s" % (self._url)) url_sender = self._resources.url_handle # Note that here we attach our update_progress function (which is a # function that the engine also 'binds' to) to the progress function # that the url sending helper class uses. This allows the task progress # to be tied to the url sending progress, which is very useful for # downstream systems to be aware of what a task is doing at any time. url_sender.send(self._url, json.dumps(parsed_request), status_cb=self.update_progress) return self._url def update_progress(self, progress, **kwargs): # Override the parent method to also print out the status. super(ActivateDriver, self).update_progress(progress, **kwargs) print("%s is %0.2f%% done" % (self.name, progress * 100)) class DeclareSuccess(task.Task): def execute(self, sent_to): print("Done!") print("All data processed and sent to %s" % (sent_to)) # Resources (db handles and similar) of course can't be persisted so we need # to make sure that we pass this resource fetcher to the tasks constructor so # that the tasks have access to any needed resources (the resources are # lazily loaded so that they are only created when they are used). resources = ResourceFetcher() flow = lf.Flow("initialize-me") # 1. First we extract the api request into a usable format. # 2. Then we go ahead and make a database entry for our request. flow.add(ExtractInputRequest(resources), MakeDBEntry(resources)) # 3. Then we activate our payment method and finally declare success. sub_flow = gf.Flow("after-initialize") sub_flow.add(ActivateDriver(resources), DeclareSuccess()) flow.add(sub_flow) # Initially populate the storage with the following request object, # prepopulating this allows the tasks that dependent on the 'request' variable # to start processing (in this case this is the ExtractInputRequest task). 
store = { 'request': misc.AttrDict(user="bob", id="1.35"), } eng = engines.load(flow, engine_conf='serial', store=store) # This context manager automatically adds (and automatically removes) a # helpful set of state transition notification printing helper utilities # that show you exactly what transitions the engine is going through # while running the various billing related tasks. with printing.PrintingListener(eng): eng.run() taskflow-0.1.3/taskflow/examples/resume_many_flows/0000775000175300017540000000000012275003604023707 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/examples/resume_many_flows/resume_all.py0000664000175300017540000000335712275003514026421 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath( os.path.join(self_dir, os.pardir, os.pardir, os.pardir)) example_dir = os.path.abspath(os.path.join(self_dir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, example_dir) import taskflow.engines from taskflow import states import example_utils # noqa FINISHED_STATES = (states.SUCCESS, states.FAILURE, states.REVERTED) def resume(flowdetail, backend): print('Resuming flow %s %s' % (flowdetail.name, flowdetail.uuid)) engine = taskflow.engines.load_from_detail(flow_detail=flowdetail, backend=backend) engine.run() def main(): with example_utils.get_backend() as backend: logbooks = list(backend.get_connection().get_logbooks()) for lb in logbooks: for fd in lb: if fd.state not in FINISHED_STATES: resume(fd, backend) if __name__ == '__main__': main() taskflow-0.1.3/taskflow/examples/resume_many_flows/my_flows.py0000664000175300017540000000246512275003514026127 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os from taskflow.patterns import linear_flow as lf from taskflow import task class UnfortunateTask(task.Task): def execute(self): print('executing %s' % self) boom = os.environ.get('BOOM') if boom: print('> Critical error: boom = %s' % boom) raise SystemExit() else: print('> this time not exiting') class TestTask(task.Task): def execute(self): print('executing %s' % self) def flow_factory(): return lf.Flow('example').add( TestTask(name='first'), UnfortunateTask(name='boom'), TestTask(name='second')) taskflow-0.1.3/taskflow/examples/resume_many_flows/run_flow.py0000664000175300017540000000270612275003514026121 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath( os.path.join(self_dir, os.pardir, os.pardir, os.pardir)) example_dir = os.path.abspath(os.path.join(self_dir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) sys.path.insert(0, example_dir) import taskflow.engines import example_utils # noqa import my_flows # noqa with example_utils.get_backend() as backend: engine = taskflow.engines.load_from_factory(my_flows.flow_factory, backend=backend) print('Running flow %s %s' % (engine.storage.flow_name, engine.storage.flow_uuid)) engine.run() taskflow-0.1.3/taskflow/examples/graph_flow.py0000664000175300017540000000554012275003514022657 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task # In this example there are complex dependencies between tasks that are used to # perform a simple set of linear equations. # # As you will see below the tasks just define what they require as input # and produce as output (named values). Then the user doesn't care about # ordering the TASKS (in this case the tasks calculate pieces of the overall # equation). 
# # As you will notice, graph_flow resolves dependencies automatically using the # tasks' requirements and provided values, and no ordering dependency has to be # manually created. # # Also notice that flows of any type can be nested into a graph_flow; subflow # dependencies will be resolved too!! Pretty cool, right? class Adder(task.Task): def execute(self, x, y): return x + y flow = gf.Flow('root').add( lf.Flow('nested_linear').add( # x2 = y3+y4 = 12 Adder("add2", provides='x2', rebind=['y3', 'y4']), # x1 = y1+y2 = 4 Adder("add1", provides='x1', rebind=['y1', 'y2']) ), # x5 = x1+x3 = 20 Adder("add5", provides='x5', rebind=['x1', 'x3']), # x3 = x1+x2 = 16 Adder("add3", provides='x3', rebind=['x1', 'x2']), # x4 = x2+y5 = 21 Adder("add4", provides='x4', rebind=['x2', 'y5']), # x6 = x5+x4 = 41 Adder("add6", provides='x6', rebind=['x5', 'x4']), # x7 = x6+x6 = 82 Adder("add7", provides='x7', rebind=['x6', 'x6'])) # Provide the initial variable inputs using a storage dictionary. store = { "y1": 1, "y2": 3, "y3": 5, "y4": 7, "y5": 9, } result = taskflow.engines.run( flow, engine_conf='serial', store=store) print("Single threaded engine result %s" % result) result = taskflow.engines.run( flow, engine_conf='parallel', store=store) print("Multi threaded engine result %s" % result) taskflow-0.1.3/taskflow/listeners/0000775000175300017540000000000012275003604020343 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/listeners/base.py0000664000175300017540000001216612275003514021635 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import abc import logging import six from taskflow.openstack.common import excutils from taskflow import states from taskflow.utils import misc LOG = logging.getLogger(__name__) # NOTE(harlowja): only in these states will results be usable; all other # states do not produce results. FINISH_STATES = (states.FAILURE, states.SUCCESS) class ListenerBase(object): """A simple listener that can be attached to an engine and derived from to do some action on various flow and task state transitions. It also provides useful context manager access, registering with a given engine automatically when a context is entered and unregistering when it is exited.
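For example, a hedged usage sketch ('MyListener' being a hypothetical subclass of this class, with the engine assumed to have been constructed elsewhere, mirroring how the printing listener is used in fake_billing.py): with MyListener(engine): engine.run()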
""" def __init__(self, engine, task_listen_for=(misc.TransitionNotifier.ANY,), flow_listen_for=(misc.TransitionNotifier.ANY,)): if not task_listen_for: task_listen_for = [] if not flow_listen_for: flow_listen_for = [] self._listen_for = { 'task': list(task_listen_for), 'flow': list(flow_listen_for), } self._engine = engine self._registered = False def _flow_receiver(self, state, details): pass def _task_receiver(self, state, details): pass def deregister(self): if not self._registered: return def _deregister(watch_states, notifier, cb): for s in watch_states: notifier.deregister(s, cb) _deregister(self._listen_for['task'], self._engine.task_notifier, self._task_receiver) _deregister(self._listen_for['flow'], self._engine.notifier, self._flow_receiver) self._registered = False def register(self): if self._registered: return def _register(watch_states, notifier, cb): registered = [] try: for s in watch_states: if not notifier.is_registered(s, cb): notifier.register(s, cb) registered.append((s, cb)) except ValueError: with excutils.save_and_reraise_exception(): for (s, cb) in registered: notifier.deregister(s, cb) _register(self._listen_for['task'], self._engine.task_notifier, self._task_receiver) _register(self._listen_for['flow'], self._engine.notifier, self._flow_receiver) self._registered = True def __enter__(self): self.register() def __exit__(self, type, value, tb): try: self.deregister() except Exception: # Don't let deregistering throw exceptions LOG.exception("Failed deregistering listeners from engine %s", self._engine) @six.add_metaclass(abc.ABCMeta) class LoggingBase(ListenerBase): """This provides a simple listener that can be attached to an engine which can be derived from to log the received actions to some logging backend. It provides a useful context manager access to be able to register and unregister with a given engine automatically when a context is entered and when it is exited. """ @abc.abstractmethod def _log(self, message, *args, **kwargs): raise NotImplementedError() def _flow_receiver(self, state, details): self._log("%s has moved flow '%s' (%s) into state '%s'", self._engine, details['flow_name'], details['flow_uuid'], state) def _task_receiver(self, state, details): if state in FINISH_STATES: result = details.get('result') exc_info = None was_failure = False if isinstance(result, misc.Failure): if result.exc_info: exc_info = tuple(result.exc_info) was_failure = True self._log("%s has moved task '%s' (%s) into state '%s'" " with result '%s' (failure=%s)", self._engine, details['task_name'], details['task_uuid'], state, result, was_failure, exc_info=exc_info) else: self._log("%s has moved task '%s' (%s) into state '%s'", self._engine, details['task_name'], details['task_uuid'], state) taskflow-0.1.3/taskflow/listeners/__init__.py0000664000175300017540000000127512275003514022461 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
taskflow-0.1.3/taskflow/listeners/timing.py0000664000175300017540000000464512275003514022213 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import logging from taskflow import exceptions as excp from taskflow.listeners import base from taskflow import states from taskflow.utils import misc STARTING_STATES = (states.RUNNING, states.REVERTING) FINISHED_STATES = base.FINISH_STATES + (states.REVERTED,) WATCH_STATES = frozenset(FINISHED_STATES + STARTING_STATES + (states.PENDING,)) LOG = logging.getLogger(__name__) class TimingListener(base.ListenerBase): def __init__(self, engine): super(TimingListener, self).__init__(engine, task_listen_for=WATCH_STATES, flow_listen_for=[]) self._timers = {} def deregister(self): super(TimingListener, self).deregister() self._timers.clear() def _record_ending(self, timer, task_name): meta_update = { 'duration': float(timer.elapsed()), } try: # Don't let storage failures throw exceptions in a listener method. self._engine.storage.update_task_metadata(task_name, meta_update) except excp.StorageError: LOG.exception("Failed to store duration update %s for task %s", meta_update, task_name) def _task_receiver(self, state, details): task_name = details['task_name'] if state == states.PENDING: self._timers.pop(task_name, None) elif state in STARTING_STATES: self._timers[task_name] = misc.StopWatch() self._timers[task_name].start() elif state in FINISHED_STATES: if task_name in self._timers: self._record_ending(self._timers[task_name], task_name) taskflow-0.1.3/taskflow/listeners/logging.py0000664000175300017540000000344412275003514022350 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import logging from taskflow.listeners import base from taskflow.utils import misc LOG = logging.getLogger(__name__) class LoggingListener(base.LoggingBase): """Listens for task and flow notifications and writes those notifications to a provided logging backend (if none is provided then this module's logger is used instead) using a configurable logging level (logging.DEBUG if not provided).
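For example (an illustrative sketch; construction of ``engine`` is elided)::

    with LoggingListener(engine, level=logging.INFO):
        engine.run()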
""" def __init__(self, engine, task_listen_for=(misc.TransitionNotifier.ANY,), flow_listen_for=(misc.TransitionNotifier.ANY,), log=None, level=logging.DEBUG): super(LoggingListener, self).__init__(engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for) self._logger = log if not self._logger: self._logger = LOG self._level = level def _log(self, message, *args, **kwargs): self._logger.log(self._level, message, *args, **kwargs) taskflow-0.1.3/taskflow/listeners/printing.py0000664000175300017540000000335712275003514022557 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import print_function import sys import traceback from taskflow.listeners import base from taskflow.utils import misc class PrintingListener(base.LoggingBase): """Writes the task and flow notifications messages to stdout or stderr.""" def __init__(self, engine, task_listen_for=(misc.TransitionNotifier.ANY,), flow_listen_for=(misc.TransitionNotifier.ANY,), stderr=False): super(PrintingListener, self).__init__(engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for) if stderr: self._file = sys.stderr else: self._file = sys.stdout def _log(self, message, *args, **kwargs): print(message % args, file=self._file) exc_info = kwargs.get('exc_info') if exc_info is not None: traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], file=self._file) taskflow-0.1.3/taskflow/exceptions.py0000664000175300017540000001100712275003514021065 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import six class TaskFlowException(Exception): """Base class for exceptions emitted from this library.""" pass class ConnectionFailure(TaskFlowException): """Raised when some type of connection can not be opened or is lost.""" pass class Duplicate(TaskFlowException): """Raised when a duplicate entry is found.""" pass class StorageError(TaskFlowException): """Raised when a logbook can not be read/saved/deleted.""" def __init__(self, message, cause=None): super(StorageError, self).__init__(message) self.cause = cause class NotFound(TaskFlowException): """Raised when some entry in some object doesn't exist.""" pass class AlreadyExists(TaskFlowException): """Raised when some entry in some object already exists.""" pass class InvalidState(TaskFlowException): """Raised when a task/job/workflow is in an invalid state when an operation attempts to apply to said task/job/workflow. """ pass class InvariantViolation(TaskFlowException): """Raised when a flow invariant violation is attempted.""" pass class UnclaimableJob(TaskFlowException): """Raised when a job can not be claimed.""" pass class JobNotFound(TaskFlowException): """Raised when a job entry can not be found.""" pass class MissingDependencies(InvariantViolation): """Raised when an entity has dependencies that can not be satisfied.""" message = ("%(who)s requires %(requirements)s but no other entity produces" " said requirements") def __init__(self, who, requirements): message = self.message % {'who': who, 'requirements': requirements} super(MissingDependencies, self).__init__(message) self.missing_requirements = requirements class DependencyFailure(TaskFlowException): """Raised when a flow can't resolve a dependency.""" pass class EmptyFlow(TaskFlowException): """Raised when a flow doesn't contain any tasks.""" pass class WrappedFailure(TaskFlowException): """Wraps one or several failures. When an exception cannot be re-raised (for example, because the value and traceback are lost in serialization) or there are several exceptions, we wrap the corresponding Failure objects into this exception class. """ def __init__(self, causes): self._causes = [] for cause in causes: if cause.check(type(self)) and cause.exception: # NOTE(imelnikov): flatten wrapped failures. self._causes.extend(cause.exception) else: self._causes.append(cause) def __iter__(self): """Iterate over failures that caused the exception.""" return iter(self._causes) def __len__(self): """Return number of wrapped failures.""" return len(self._causes) def check(self, *exc_classes): """Check if any of exc_classes caused (part of) the failure. Arguments of this method can be exception types or type names (strings). If any of the wrapped failures were caused by an exception of a given type, the corresponding argument is returned. Else, None is returned.
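For example (an illustrative sketch; ``engine`` is assumed to exist)::

    try:
        engine.run()
    except WrappedFailure as wf:
        if wf.check(IOError, 'ValueError') is not None:
            pass  # at least one cause was an IOError or a ValueError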
""" if not exc_classes: return None for cause in self: result = cause.check(*exc_classes) if result is not None: return result return None def __str__(self): causes = [exception_message(cause) for cause in self._causes] return 'WrappedFailure: %s' % causes def exception_message(exc): """Return the string representation of exception.""" # NOTE(imelnikov): Dealing with non-ascii data in python is difficult: # https://bugs.launchpad.net/taskflow/+bug/1275895 # https://bugs.launchpad.net/taskflow/+bug/1276053 try: return six.text_type(exc) except UnicodeError: return str(exc) taskflow-0.1.3/taskflow/flow.py0000664000175300017540000000512612275003514017660 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from taskflow.utils import reflection @six.add_metaclass(abc.ABCMeta) class Flow(object): """The base abstract class of all flow implementations. A flow is a structure that defines relationships between tasks. You can add tasks and other flows (as subflows) to the flow, and the flow provides a way to implicitly or explicitly define how they are interdependent. Exact structure of the relationships is defined by concrete implementation, while this class defines common interface and adds human-readable (not necessary unique) name. NOTE(harlowja): if a flow is placed in another flow as a subflow, a desired way to compose flows together, then it is valid and permissible that during execution the subflow & parent flow may be flattened into a new flow. Since a flow is just a 'structuring' concept this is typically a behavior that should not be worried about (as it is not visible to the user), but it is worth mentioning here. Flows are expected to provide the following methods/properties: - add - __len__ - requires - provides """ def __init__(self, name): self._name = str(name) @property def name(self): """A non-unique name for this flow (human readable).""" return self._name @abc.abstractmethod def __len__(self): """Returns how many items are in this flow.""" def __str__(self): lines = ["%s: %s" % (reflection.get_class_name(self), self.name)] lines.append("%s" % (len(self))) return "; ".join(lines) @abc.abstractmethod def add(self, *items): """Adds a given item/items to this flow.""" @abc.abstractproperty def requires(self): """Browse argument requirement names this flow requires to run.""" @abc.abstractproperty def provides(self): """Browse argument names provided by the flow.""" taskflow-0.1.3/taskflow/utils/0000775000175300017540000000000012275003604017473 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/utils/lock_utils.py0000664000175300017540000003022312275003514022215 0ustar jenkinsjenkins00000000000000# vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright 2011 OpenStack Foundation. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a modified version of what was in oslo-incubator lockutils.py from # commit 5039a610355e5265fb9fbd1f4023e8160750f32e but this one does not depend # on oslo.cfg or the very large oslo-incubator oslo logging module (which also # pulls in oslo.cfg) and is reduced to only what taskflow currently wants to # use from that code. import collections import contextlib import errno import logging import os import threading import time from taskflow.utils import misc from taskflow.utils import threading_utils as tu LOG = logging.getLogger(__name__) def locked(*args, **kwargs): """A decorator that looks for a given attribute (typically a lock or a list of locks) and before executing the decorated function uses the given lock or list of locks as a context manager, automatically releasing on exit. """ def decorator(f): attr_name = kwargs.get('lock', '_lock') @misc.wraps(f) def wrapper(*args, **kwargs): lock = getattr(args[0], attr_name) if isinstance(lock, (tuple, list)): lock = MultiLock(locks=list(lock)) with lock: return f(*args, **kwargs) return wrapper # This is needed to handle when the decorator has args or the decorator # doesn't have args, python is rather weird here... if kwargs or not args: return decorator else: if len(args) == 1: return decorator(args[0]) else: return decorator class ReaderWriterLock(object): """A reader/writer lock. This lock allows for simultaneous readers to exist but only one writer to exist for use-cases where it is useful to have such types of locks. Currently a reader can not escalate its read lock to a write lock and a writer can not acquire a read lock while it owns or is waiting on the write lock. In the future these restrictions may be relaxed. """ WRITER = 'w' READER = 'r' def __init__(self): self._writer = None self._pending_writers = collections.deque() self._readers = collections.deque() self._cond = threading.Condition() @property def pending_writers(self): self._cond.acquire() try: return len(self._pending_writers) finally: self._cond.release() def is_writer(self, check_pending=True): """Returns if the caller is the active writer or a pending writer.""" self._cond.acquire() try: me = tu.get_ident() if self._writer is not None and self._writer == me: return True if check_pending: return me in self._pending_writers else: return False finally: self._cond.release() @property def owner(self): """Returns whether the lock is locked by a writer or reader.""" self._cond.acquire() try: if self._writer is not None: return self.WRITER if self._readers: return self.READER return None finally: self._cond.release() def is_reader(self): """Returns if the caller is one of the readers.""" self._cond.acquire() try: return tu.get_ident() in self._readers finally: self._cond.release() @contextlib.contextmanager def read_lock(self): """Grants a read lock. Will wait until no active or pending writers. Raises a RuntimeError if an active or pending writer tries to acquire a read lock. 
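For example (an illustrative sketch)::

    lock = ReaderWriterLock()
    with lock.read_lock():
        pass  # shared section; many readers may be here at once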
""" me = tu.get_ident() if self.is_writer(): raise RuntimeError("Writer %s can not acquire a read lock" " while holding/waiting for the write lock" % me) self._cond.acquire() try: while True: # No active writer; we are good to become a reader. if self._writer is None: self._readers.append(me) break # An active writer; guess we have to wait. self._cond.wait() finally: self._cond.release() try: yield self finally: # I am no longer a reader, remove *one* occurrence of myself. # If the current thread acquired two read locks, then it will # still have to remove that other read lock; this allows for # basic reentrancy to be possible. self._cond.acquire() try: self._readers.remove(me) self._cond.notify_all() finally: self._cond.release() @contextlib.contextmanager def write_lock(self): """Grants a write lock. Will wait until no active readers. Blocks readers after acquiring. Raises a RuntimeError if an active reader attempts to acquire a lock. """ me = tu.get_ident() if self.is_reader(): raise RuntimeError("Reader %s to writer privilege" " escalation not allowed" % me) if self.is_writer(check_pending=False): # Already the writer; this allows for basic reentrancy. yield self else: self._cond.acquire() try: self._pending_writers.append(me) while True: # No readers, and no active writer, am I next?? if len(self._readers) == 0 and self._writer is None: if self._pending_writers[0] == me: self._writer = self._pending_writers.popleft() break self._cond.wait() finally: self._cond.release() try: yield self finally: self._cond.acquire() try: self._writer = None self._cond.notify_all() finally: self._cond.release() class DummyReaderWriterLock(object): """A dummy reader/writer lock that doesn't lock anything but provides same functions as a normal reader/writer lock class. """ @contextlib.contextmanager def write_lock(self): yield self @contextlib.contextmanager def read_lock(self): yield self @property def owner(self): return None def is_reader(self): return False def is_writer(self): return False class MultiLock(object): """A class which can attempt to obtain many locks at once and release said locks when exiting. Useful as a context manager around many locks (instead of having to nest said individual context managers). """ def __init__(self, locks): assert len(locks) > 0, "Zero locks requested" self._locks = locks self._locked = [False] * len(locks) def __enter__(self): self.acquire() def acquire(self): def is_locked(lock): # NOTE(harlowja): reentrant locks (rlock) don't have this # attribute, but normal non-reentrant locks do, how odd... if hasattr(lock, 'locked'): return lock.locked() return False for i in range(0, len(self._locked)): if self._locked[i] or is_locked(self._locks[i]): raise threading.ThreadError("Lock %s not previously released" % (i + 1)) self._locked[i] = False for (i, lock) in enumerate(self._locks): self._locked[i] = lock.acquire() def __exit__(self, type, value, traceback): self.release() def release(self): for (i, locked) in enumerate(self._locked): try: if locked: self._locks[i].release() self._locked[i] = False except threading.ThreadError: LOG.exception("Unable to release lock %s", i + 1) class _InterProcessLock(object): """Lock implementation which allows multiple locks, working around issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does not require any cleanup. Since the lock is always held on a file descriptor rather than outside of the process, the lock gets dropped automatically if the process crashes, even if __exit__ is not executed. 
There are no guarantees regarding usage by multiple green threads in a single process here. This lock works only between processes. Note these locks are released when the descriptor is closed, so it's not safe to close the file descriptor while another green thread holds the lock. Just opening and closing the lock file can break synchronisation, so lock files must be accessed only using this abstraction. """ def __init__(self, name): self.lockfile = None self.fname = name def acquire(self): basedir = os.path.dirname(self.fname) if not os.path.exists(basedir): misc.ensure_tree(basedir) LOG.info('Created lock path: %s', basedir) self.lockfile = open(self.fname, 'w') while True: try: # Using non-blocking locks since green threads are not # patched to deal with blocking locking calls. # Also upon reading the MSDN docs for locking(), it seems # to have a laughable 10 attempts "blocking" mechanism. self.trylock() LOG.debug('Got file lock "%s"', self.fname) return True except IOError as e: if e.errno in (errno.EACCES, errno.EAGAIN): # external locks synchronise things like iptables # updates - give it some time to prevent busy spinning time.sleep(0.01) else: raise threading.ThreadError("Unable to acquire lock on" " `%(filename)s` due to" " %(exception)s" % { 'filename': self.fname, 'exception': e, }) def __enter__(self): self.acquire() return self def release(self): try: self.unlock() self.lockfile.close() # This is fixed in: https://review.openstack.org/70506 LOG.debug('Released file lock "%s"', self.fname) except IOError: LOG.exception("Could not release the acquired lock `%s`", self.fname) def __exit__(self, exc_type, exc_val, exc_tb): self.release() def trylock(self): raise NotImplementedError() def unlock(self): raise NotImplementedError() class _WindowsLock(_InterProcessLock): def trylock(self): msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1) def unlock(self): msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1) class _PosixLock(_InterProcessLock): def trylock(self): fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB) def unlock(self): fcntl.lockf(self.lockfile, fcntl.LOCK_UN) if os.name == 'nt': import msvcrt InterProcessLock = _WindowsLock else: import fcntl InterProcessLock = _PosixLock taskflow-0.1.3/taskflow/utils/async_utils.py0000664000175300017540000000452112275003514022404 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
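# A hypothetical usage sketch of wait_for_any() (defined below); the executor
# and the submitted callables are assumptions for illustration only:
#
#     from concurrent import futures
#
#     with futures.ThreadPoolExecutor(2) as executor:
#         fs = [executor.submit(some_callable) for _ in range(2)]
#         done, not_done = wait_for_any(fs)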
import threading from concurrent import futures from taskflow.utils import eventlet_utils as eu class _Waiter(object): """Provides the event that wait_for_any() blocks on.""" def __init__(self, is_green): if is_green: assert eu.EVENTLET_AVAILABLE, ('eventlet is needed to use this' ' feature') self.event = eu.green_threading.Event() else: self.event = threading.Event() def add_result(self, future): self.event.set() def add_exception(self, future): self.event.set() def add_cancelled(self, future): self.event.set() def _done_futures(fs): return set(f for f in fs if f._state in [futures._base.CANCELLED_AND_NOTIFIED, futures._base.FINISHED]) def wait_for_any(fs, timeout=None): """Wait for one of the futures to complete. Works correctly with both green and non-green futures. Returns the pair (done, not_done). """ with futures._base._AcquireFutures(fs): done = _done_futures(fs) if done: return done, set(fs) - done is_green = any(isinstance(f, eu.GreenFuture) for f in fs) waiter = _Waiter(is_green) for f in fs: f._waiters.append(waiter) waiter.event.wait(timeout) for f in fs: f._waiters.remove(waiter) with futures._base._AcquireFutures(fs): done = _done_futures(fs) return done, set(fs) - done def make_completed_future(result): """Make a future completed with the given result.""" future = futures.Future() future.set_result(result) return future taskflow-0.1.3/taskflow/utils/__init__.py0000664000175300017540000000127512275003514021611 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/utils/persistence_utils.py0000664000175300017540000002615112275003514023616 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import copy import logging import six from taskflow.openstack.common import timeutils from taskflow.openstack.common import uuidutils from taskflow.persistence import logbook from taskflow.utils import misc LOG = logging.getLogger(__name__) def temporary_log_book(backend=None): """Creates a temporary logbook for temporary usage in the given backend. Mainly useful for tests and other use cases where a temporary logbook is needed for a short period of time.
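For example (an illustrative sketch; ``backend`` is assumed to be a persistence backend, or None for a purely in-memory book)::

    book = temporary_log_book(backend)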
""" book = logbook.LogBook('tmp') if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) return book def temporary_flow_detail(backend=None): """Creates a temporary flow detail and logbook for temporary usage in the given backend. Mainly useful for tests and other use cases where a temporary flow detail is needed for a short-period of time. """ flow_id = uuidutils.generate_uuid() book = temporary_log_book(backend) book.add(logbook.FlowDetail(name='tmp-flow-detail', uuid=flow_id)) if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) # Return the one from the saved logbook instead of the local one so # that the freshest version is given back. return book, book.find(flow_id) def create_flow_detail(flow, book=None, backend=None, meta=None): """Creates a flow detail for the given flow and adds it to the provided logbook (if provided) and then uses the given backend (if provided) to save the logbook then returns the created flow detail. """ flow_id = uuidutils.generate_uuid() flow_name = getattr(flow, 'name', None) if flow_name is None: LOG.warn("No name provided for flow %s (id %s)" % (flow, flow_id)) flow_name = flow_id flow_detail = logbook.FlowDetail(name=flow_name, uuid=flow_id) if meta is not None: if flow_detail.meta is None: flow_detail.meta = {} flow_detail.meta.update(meta) if backend is not None and book is None: LOG.warn("No logbook provided for flow %s, creating one.", flow) book = temporary_log_book(backend) if book is not None: book.add(flow_detail) if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) # Return the one from the saved logbook instead of the local one so # that the freshest version is given back. return book.find(flow_id) else: return flow_detail def _copy_function(deep_copy): if deep_copy: return copy.deepcopy else: return lambda x: x def task_details_merge(td_e, td_new, deep_copy=False): """Merges an existing task details with a new task details object. The new task details fields, if they differ will replace the existing objects fields (except name, version, uuid which can not be replaced). If 'deep_copy' is True, fields are copied deeply (by value) if possible. """ if td_e is td_new: return td_e copy_fn = _copy_function(deep_copy) if td_e.state != td_new.state: # NOTE(imelnikov): states are just strings, no need to copy. td_e.state = td_new.state if td_e.results != td_new.results: td_e.results = copy_fn(td_new.results) if td_e.failure != td_new.failure: # NOTE(imelnikov): we can't just deep copy Failures, as they # contain tracebacks, which are not copyable. if deep_copy: td_e.failure = td_new.failure.copy() else: td_e.failure = td_new.failure if td_e.meta != td_new.meta: td_e.meta = copy_fn(td_new.meta) if td_e.version != td_new.version: td_e.version = copy_fn(td_new.version) return td_e def flow_details_merge(fd_e, fd_new, deep_copy=False): """Merges an existing flow details with a new flow details object. The new flow details fields, if they differ will replace the existing objects fields (except name and uuid which can not be replaced). If 'deep_copy' is True, fields are copied deeply (by value) if possible. """ if fd_e is fd_new: return fd_e copy_fn = _copy_function(deep_copy) if fd_e.meta != fd_new.meta: fd_e.meta = copy_fn(fd_new.meta) if fd_e.state != fd_new.state: # NOTE(imelnikov): states are just strings, no need to copy. 
fd_e.state = fd_new.state return fd_e def logbook_merge(lb_e, lb_new, deep_copy=False): """Merges an existing logbook with a new logbook object. The new logbook fields, if they differ will replace the existing objects fields (except name and uuid which can not be replaced). If 'deep_copy' is True, fields are copied deeply (by value) if possible. """ if lb_e is lb_new: return lb_e copy_fn = _copy_function(deep_copy) if lb_e.meta != lb_new.meta: lb_e.meta = copy_fn(lb_new.meta) return lb_e def failure_to_dict(failure): """Convert misc.Failure object to JSON-serializable dict.""" if not failure: return None if not isinstance(failure, misc.Failure): raise TypeError('Failure object expected, but got %r' % failure) return { 'exception_str': failure.exception_str, 'traceback_str': failure.traceback_str, 'exc_type_names': list(failure), 'version': 1 } def failure_from_dict(data): """Restore misc.Failure object from dict. The dict should be similar to what failure_to_dict() function produces. """ if not data: return None version = data.pop('version', None) if version != 1: raise ValueError('Invalid version of saved Failure object: %r' % version) return misc.Failure(**data) def _format_meta(metadata, indent): """Format the common metadata dictionary in the same manner.""" if not metadata: return [] lines = [ '%s- metadata:' % (" " * indent), ] for (k, v) in metadata.items(): # Progress for now is a special snowflake and will be formatted # in percent format. if k == 'progress' and isinstance(v, misc.NUMERIC_TYPES): v = "%0.2f%%" % (v * 100.0) lines.append("%s+ %s = %s" % (" " * (indent + 2), k, v)) return lines def _format_shared(obj, indent): """Format the common shared attributes in the same manner.""" if obj is None: return [] lines = [] for attr_name in ("uuid", "state"): if not hasattr(obj, attr_name): continue lines.append("%s- %s = %s" % (" " * indent, attr_name, getattr(obj, attr_name))) return lines def pformat_task_detail(task_detail, indent=0): """Pretty formats a task detail.""" lines = ["%sTask: '%s'" % (" " * (indent), task_detail.name)] lines.extend(_format_shared(task_detail, indent=indent + 1)) lines.append("%s- version = %s" % (" " * (indent + 1), misc.get_version_string(task_detail))) lines.append("%s- results = %s" % (" " * (indent + 1), task_detail.results)) lines.append("%s- failure = %s" % (" " * (indent + 1), bool(task_detail.failure))) lines.extend(_format_meta(task_detail.meta, indent=indent + 1)) return "\n".join(lines) def pformat_flow_detail(flow_detail, indent=0): """Pretty formats a flow detail.""" lines = ["%sFlow: '%s'" % (" " * indent, flow_detail.name)] lines.extend(_format_shared(flow_detail, indent=indent + 1)) lines.extend(_format_meta(flow_detail.meta, indent=indent + 1)) for task_detail in flow_detail: lines.append(pformat_task_detail(task_detail, indent=indent + 1)) return "\n".join(lines) def pformat(book, indent=0): """Pretty formats a logbook.""" lines = ["%sLogbook: '%s'" % (" " * indent, book.name)] lines.extend(_format_shared(book, indent=indent + 1)) lines.extend(_format_meta(book.meta, indent=indent + 1)) if book.created_at is not None: lines.append("%s- created_at = %s" % (" " * (indent + 1), timeutils.isotime(book.created_at))) if book.updated_at is not None: lines.append("%s- updated_at = %s" % (" " * (indent + 1), timeutils.isotime(book.updated_at))) for flow_detail in book: lines.append(pformat_flow_detail(flow_detail, indent=indent + 1)) return "\n".join(lines) def _str_2_datetime(text): """Converts an iso8601 string/text into a datetime object 
(or none).""" if text is None: return None if not isinstance(text, six.string_types): raise ValueError("Can only convert strings into a datetime object and" " not %r" % (text)) if not len(text): return None return timeutils.parse_isotime(text) def format_task_detail(td): return { 'failure': failure_to_dict(td.failure), 'meta': td.meta, 'name': td.name, 'results': td.results, 'state': td.state, 'version': td.version, } def unformat_task_detail(uuid, td_data): td = logbook.TaskDetail(name=td_data['name'], uuid=uuid) td.state = td_data.get('state') td.results = td_data.get('results') td.failure = failure_from_dict(td_data.get('failure')) td.meta = td_data.get('meta') td.version = td_data.get('version') return td def format_flow_detail(fd): return { 'name': fd.name, 'meta': fd.meta, 'state': fd.state, } def unformat_flow_detail(uuid, fd_data): fd = logbook.FlowDetail(name=fd_data['name'], uuid=uuid) fd.state = fd_data.get('state') fd.meta = fd_data.get('meta') return fd def format_logbook(lb, created_at=None): lb_data = { 'name': lb.name, 'meta': lb.meta, } if created_at: lb_data['created_at'] = timeutils.isotime(at=created_at) lb_data['updated_at'] = timeutils.isotime() else: lb_data['created_at'] = timeutils.isotime() lb_data['updated_at'] = None return lb_data def unformat_logbook(uuid, lb_data): lb = logbook.LogBook(name=lb_data['name'], uuid=uuid, updated_at=_str_2_datetime(lb_data['updated_at']), created_at=_str_2_datetime(lb_data['created_at'])) lb.meta = lb_data.get('meta') return lb taskflow-0.1.3/taskflow/utils/kazoo_utils.py0000664000175300017540000000373312275003514022416 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from kazoo import client import six def _parse_hosts(hosts): if isinstance(hosts, six.string_types): return hosts.strip() if isinstance(hosts, (dict)): host_ports = [] for (k, v) in six.iteritems(hosts): host_ports.append("%s:%s" % (k, v)) hosts = host_ports if isinstance(hosts, (list, set, tuple)): return ",".join([str(h) for h in hosts]) return hosts def make_client(conf): """Creates a kazoo client given a configuration dictionary.""" client_kwargs = { 'read_only': bool(conf.get('read_only')), 'randomize_hosts': bool(conf.get('randomize_hosts')), } hosts = _parse_hosts(conf.get("hosts", "localhost:2181")) if not hosts or not isinstance(hosts, six.string_types): raise TypeError("Invalid hosts format, expected " "non-empty string/list, not %s" % type(hosts)) client_kwargs['hosts'] = hosts if 'timeout' in conf: client_kwargs['timeout'] = float(conf['timeout']) # Kazoo supports various handlers, gevent, threading, eventlet... # allow the user of this client object to optionally specify one to be # used. 
if 'handler' in conf: client_kwargs['handler'] = conf['handler'] return client.KazooClient(**client_kwargs) taskflow-0.1.3/taskflow/utils/graph_utils.py0000664000175300017540000000707712275003514022401 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import networkx as nx import six def get_edge_attrs(graph, u, v): """Gets the dictionary of edge attributes between u->v (or none).""" if not graph.has_edge(u, v): return None return dict(graph.adj[u][v]) def merge_graphs(graphs, allow_overlaps=False): if not graphs: return None graph = graphs[0] for g in graphs[1:]: # This should ensure that the nodes to be merged do not already exist # in the graph that is to be merged into. This could be problematic if # there are duplicates. if not allow_overlaps: # Attempt to induce a subgraph using the to-be-merged graph's nodes # and see if any graph results. overlaps = graph.subgraph(g.nodes_iter()) if len(overlaps): raise ValueError("Can not merge graph %s into %s since there " "are %s overlapping nodes" % (g, graph, len(overlaps))) # Keep the target graph's name. name = graph.name graph = nx.algorithms.compose(graph, g) graph.name = name return graph def get_no_successors(graph): """Returns an iterator for all nodes with no successors.""" for n in graph.nodes_iter(): if not len(graph.successors(n)): yield n def get_no_predecessors(graph): """Returns an iterator for all nodes with no predecessors.""" for n in graph.nodes_iter(): if not len(graph.predecessors(n)): yield n def pformat(graph): """Pretty formats your graph into a string representation that includes details about your graph, including: name, type, frozenness, node count, nodes, edge count, edges, graph density and graph cycles (if any). """ lines = [] lines.append("Name: %s" % graph.name) lines.append("Type: %s" % type(graph).__name__) lines.append("Frozen: %s" % nx.is_frozen(graph)) lines.append("Nodes: %s" % graph.number_of_nodes()) for n in graph.nodes_iter(): lines.append(" - %s" % n) lines.append("Edges: %s" % graph.number_of_edges()) for (u, v, e_data) in graph.edges_iter(data=True): if e_data: lines.append(" %s -> %s (%s)" % (u, v, e_data)) else: lines.append(" %s -> %s" % (u, v)) lines.append("Density: %0.3f" % nx.density(graph)) cycles = list(nx.cycles.recursive_simple_cycles(graph)) lines.append("Cycles: %s" % len(cycles)) for cycle in cycles: buf = six.StringIO() buf.write(str(cycle[0])) for i in range(1, len(cycle)): buf.write(" --> %s" % (cycle[i])) buf.write(" --> %s" % (cycle[0])) lines.append(" %s" % buf.getvalue()) return "\n".join(lines) def export_graph_to_dot(graph): """Exports the graph to a dot format (requires pydot library).""" return nx.to_pydot(graph).to_string() taskflow-0.1.3/taskflow/utils/misc.py0000664000175300017540000004623012275003514021005 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc.
All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import errno import functools import keyword import logging import os import string import sys import time import traceback import six from taskflow import exceptions as exc from taskflow.openstack.common import jsonutils from taskflow.utils import reflection LOG = logging.getLogger(__name__) NUMERIC_TYPES = six.integer_types + (float,) def binary_encode(text, encoding='utf-8'): """Converts a string of into a binary type using given encoding. Does nothing if text not unicode string. """ if isinstance(text, six.binary_type): return text elif isinstance(text, six.text_type): return text.encode(encoding) else: raise TypeError("Expected binary or string type") def binary_decode(data, encoding='utf-8'): """Converts a binary type into a text type using given encoding. Does nothing if data is already unicode string. """ if isinstance(data, six.binary_type): return data.decode(encoding) elif isinstance(data, six.text_type): return data else: raise TypeError("Expected binary or string type") def decode_json(raw_data, root_types=(dict,)): """Parse raw data to get JSON object. Decodes a JSON from a given raw data binary and checks that the root type of that decoded object is in the allowed set of types (by default a JSON object/dict should be the root type). """ try: data = jsonutils.loads(binary_decode(raw_data)) except UnicodeDecodeError as e: raise ValueError("Expected UTF-8 decodable data: %s" % e) except ValueError as e: raise ValueError("Expected JSON decodable data: %s" % e) if root_types and not isinstance(data, tuple(root_types)): ok_types = ", ".join(str(t) for t in root_types) raise ValueError("Expected (%s) root types not: %s" % (ok_types, type(data))) return data def wallclock(): # NOTE(harlowja): made into a function so that this can be easily mocked # out if we want to alter time related functionality (for testing # purposes). return time.time() def wraps(fn): """This will not be needed in python 3.2 or greater which already has this built-in to its functools.wraps method. """ def wrapper(f): f = functools.wraps(fn)(f) f.__wrapped__ = getattr(fn, '__wrapped__', fn) return f return wrapper def get_version_string(obj): """Gets a object's version as a string. Returns string representation of object's version taken from its 'version' attribute, or None if object does not have such attribute or its version is None. 
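For example (an illustrative sketch)::

    class Versioned(object):
        version = (1, 2)

    get_version_string(Versioned())  # -> '1.2'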
""" obj_version = getattr(obj, 'version', None) if isinstance(obj_version, (list, tuple)): obj_version = '.'.join(str(item) for item in obj_version) if obj_version is not None and not isinstance(obj_version, six.string_types): obj_version = str(obj_version) return obj_version def item_from(container, index, name=None): """Attempts to fetch a index/key from a given container.""" if index is None: return container try: return container[index] except (IndexError, KeyError, ValueError, TypeError): # NOTE(harlowja): Perhaps the container is a dictionary-like object # and that key does not exist (key error), or the container is a # tuple/list and a non-numeric key is being requested (index error), # or there was no container and an attempt to index into none/other # unsubscriptable type is being requested (type error). if name is None: name = index raise exc.NotFound("Unable to find %r in container %s" % (name, container)) def get_duplicate_keys(iterable, key=None): if key is not None: iterable = six.moves.map(key, iterable) keys = set() duplicates = set() for item in iterable: if item in keys: duplicates.add(item) keys.add(item) return duplicates # NOTE(imelnikov): we should not use str.isalpha or str.isdigit # as they are locale-dependant _ASCII_WORD_SYMBOLS = frozenset(string.ascii_letters + string.digits + '_') def is_valid_attribute_name(name, allow_self=False, allow_hidden=False): """Validates that a string name is a valid/invalid python attribute name. """ return all(( isinstance(name, six.string_types), len(name) > 0, (allow_self or not name.lower().startswith('self')), (allow_hidden or not name.lower().startswith('_')), # NOTE(imelnikov): keywords should be forbidden. not keyword.iskeyword(name), # See: http://docs.python.org/release/2.5.2/ref/grammar.txt not (name[0] in string.digits), all(symbol in _ASCII_WORD_SYMBOLS for symbol in name) )) class AttrDict(dict): """Helper utility dict sub-class to create a class that can be accessed by attribute name from a dictionary that contains a set of keys and values. """ NO_ATTRS = tuple(reflection.get_member_names(dict)) @classmethod def _is_valid_attribute_name(cls, name): if not is_valid_attribute_name(name): return False # Make the name just be a simple string in latin-1 encoding in python3. if name in cls.NO_ATTRS: return False return True def __init__(self, **kwargs): for (k, v) in kwargs.items(): if not self._is_valid_attribute_name(k): raise AttributeError("Invalid attribute name: '%s'" % (k)) self[k] = v def __getattr__(self, name): if not self._is_valid_attribute_name(name): raise AttributeError("Invalid attribute name: '%s'" % (name)) try: return self[name] except KeyError: raise AttributeError("No attributed named: '%s'" % (name)) def __setattr__(self, name, value): if not self._is_valid_attribute_name(name): raise AttributeError("Invalid attribute name: '%s'" % (name)) self[name] = value class ExponentialBackoff(object): """An iterable object that will yield back an exponential delay sequence provided an exponent and a number of items to yield. This object may be iterated over multiple times (yielding the same sequence each time). 
""" def __init__(self, count, exponent=2, max_backoff=3600): self.count = max(0, int(count)) self.exponent = exponent self.max_backoff = max(0, int(max_backoff)) def __iter__(self): if self.count <= 0: raise StopIteration() for i in six.moves.range(0, self.count): yield min(self.exponent ** i, self.max_backoff) def __str__(self): return "ExponentialBackoff: %s" % ([str(v) for v in self]) def as_bool(val): """Converts an arbitary value into a boolean.""" if isinstance(val, bool): return val if isinstance(val, six.string_types): if val.lower() in ('f', 'false', '0', 'n', 'no'): return False if val.lower() in ('t', 'true', '1', 'y', 'yes'): return True return bool(val) def as_int(obj, quiet=False): """Converts an arbitrary value into a integer.""" # Try "2" -> 2 try: return int(obj) except (ValueError, TypeError): pass # Try "2.5" -> 2 try: return int(float(obj)) except (ValueError, TypeError): pass # Eck, not sure what this is then. if not quiet: raise TypeError("Can not translate %s to an integer." % (obj)) return obj # Taken from oslo-incubator file-utils but since that module pulls in a large # amount of other files it does not seem so useful to include that full # module just for this function. def ensure_tree(path): """Create a directory (and any ancestor directories required). :param path: Directory to create """ try: os.makedirs(path) except OSError as exc: if exc.errno == errno.EEXIST: if not os.path.isdir(path): raise else: raise class StopWatch(object): """A simple timer/stopwatch helper class. Inspired by: apache-commons-lang java stopwatch. Not thread-safe. """ _STARTED = 'STARTED' _STOPPED = 'STOPPED' def __init__(self, duration=None): self._duration = duration self._started_at = None self._stopped_at = None self._state = None def start(self): if self._state == self._STARTED: return self self._started_at = wallclock() self._stopped_at = None self._state = self._STARTED return self def elapsed(self): if self._state == self._STOPPED: return float(self._stopped_at - self._started_at) elif self._state == self._STARTED: return float(wallclock() - self._started_at) else: raise RuntimeError("Can not get the elapsed time of an invalid" " stopwatch") def __enter__(self): self.start() return self def __exit__(self, type, value, traceback): try: self.stop() except RuntimeError: pass # NOTE(harlowja): don't silence the exception. return False def expired(self): if self._duration is None: return False if self.elapsed() > self._duration: return True return False def resume(self): if self._state == self._STOPPED: self._state = self._STARTED return self else: raise RuntimeError("Can not resume a stopwatch that has not been" " stopped") def stop(self): if self._state == self._STOPPED: return self if self._state != self._STARTED: raise RuntimeError("Can not stop a stopwatch that has not been" " started") self._stopped_at = wallclock() self._state = self._STOPPED return self class TransitionNotifier(object): """A utility helper class that can be used to subscribe to notifications of events occurring as well as allow a entity to post said notifications to subscribers. 
""" RESERVED_KEYS = ('details',) ANY = '*' def __init__(self): self._listeners = collections.defaultdict(list) def __len__(self): """Returns how many callbacks are registered.""" count = 0 for (_s, callbacks) in six.iteritems(self._listeners): count += len(callbacks) return count def is_registered(self, state, callback): listeners = list(self._listeners.get(state, [])) for (cb, _args, _kwargs) in listeners: if reflection.is_same_callback(cb, callback): return True return False def reset(self): self._listeners.clear() def notify(self, state, details): listeners = list(self._listeners.get(self.ANY, [])) for i in self._listeners[state]: if i not in listeners: listeners.append(i) if not listeners: return for (callback, args, kwargs) in listeners: if args is None: args = [] if kwargs is None: kwargs = {} kwargs['details'] = details try: callback(state, *args, **kwargs) except Exception: LOG.exception(("Failure calling callback %s to notify about" " state transition %s"), callback, state) def register(self, state, callback, args=None, kwargs=None): assert six.callable(callback), "Callback must be callable" if self.is_registered(state, callback): raise ValueError("Callback %s already registered" % (callback)) if kwargs: for k in self.RESERVED_KEYS: if k in kwargs: raise KeyError(("Reserved key '%s' not allowed in " "kwargs") % k) kwargs = copy.copy(kwargs) if args: args = copy.copy(args) self._listeners[state].append((callback, args, kwargs)) def deregister(self, state, callback): if state not in self._listeners: return for i, (cb, args, kwargs) in enumerate(self._listeners[state]): if reflection.is_same_callback(cb, callback): self._listeners[state].pop(i) break def copy_exc_info(exc_info): """Make copy of exception info tuple, as deep as possible.""" if exc_info is None: return None exc_type, exc_value, tb = exc_info # NOTE(imelnikov): there is no need to copy type, and # we can't copy traceback. return (exc_type, copy.deepcopy(exc_value), tb) def are_equal_exc_info_tuples(ei1, ei2): if ei1 == ei2: return True if ei1 is None or ei2 is None: return False # if both are None, we returned True above # NOTE(imelnikov): we can't compare exceptions with '==' # because we want exc_info be equal to it's copy made with # copy_exc_info above. if ei1[0] is not ei2[0]: return False if not all((type(ei1[1]) == type(ei2[1]), exc.exception_message(ei1[1]) == exc.exception_message(ei2[1]), repr(ei1[1]) == repr(ei2[1]))): return False if ei1[2] == ei2[2]: return True tb1 = traceback.format_tb(ei1[2]) tb2 = traceback.format_tb(ei2[2]) return tb1 == tb2 class Failure(object): """Object that represents failure. Failure objects encapsulate exception information so that it can be re-used later to re-raise or inspect. 
""" def __init__(self, exc_info=None, **kwargs): if not kwargs: if exc_info is None: exc_info = sys.exc_info() self._exc_info = exc_info self._exc_type_names = list( reflection.get_all_class_names(exc_info[0], up_to=Exception)) if not self._exc_type_names: raise TypeError('Invalid exception type: %r' % exc_info[0]) self._exception_str = exc.exception_message(self._exc_info[1]) self._traceback_str = ''.join( traceback.format_tb(self._exc_info[2])) else: self._exc_info = exc_info # may be None self._exception_str = kwargs.pop('exception_str') self._exc_type_names = kwargs.pop('exc_type_names', []) self._traceback_str = kwargs.pop('traceback_str', None) if kwargs: raise TypeError( 'Failure.__init__ got unexpected keyword argument(s): %s' % ', '.join(six.iterkeys(kwargs))) @classmethod def from_exception(cls, exception): return cls((type(exception), exception, None)) def _matches(self, other): if self is other: return True return (self._exc_type_names == other._exc_type_names and self.exception_str == other.exception_str and self.traceback_str == other.traceback_str) def matches(self, other): if not isinstance(other, Failure): return False if self.exc_info is None or other.exc_info is None: return self._matches(other) else: return self == other def __eq__(self, other): if not isinstance(other, Failure): return NotImplemented return (self._matches(other) and are_equal_exc_info_tuples(self.exc_info, other.exc_info)) def __ne__(self, other): return not (self == other) # NOTE(imelnikov): obj.__hash__() should return same values for equal # objects, so we should redefine __hash__. Failure equality semantics # is a bit complicated, so for now we just mark Failure objects as # unhashable. See python docs on object.__hash__ for more info: # http://docs.python.org/2/reference/datamodel.html#object.__hash__ __hash__ = None @property def exception(self): """Exception value, or None if exception value is not present. Exception value may be lost during serialization. """ if self._exc_info: return self._exc_info[1] else: return None @property def exception_str(self): """String representation of exception.""" return self._exception_str @property def exc_info(self): """Exception info tuple or None.""" return self._exc_info @property def traceback_str(self): """Exception traceback as string.""" return self._traceback_str @staticmethod def reraise_if_any(failures): """Re-raise exceptions if argument is not empty. If argument is empty list, this method returns None. If argument is list with single Failure object in it, this failure is reraised. Else, WrappedFailure exception is raised with failures list as causes. """ failures = list(failures) if len(failures) == 1: failures[0].reraise() elif len(failures) > 1: raise exc.WrappedFailure(failures) def reraise(self): """Re-raise captured exception.""" if self._exc_info: six.reraise(*self._exc_info) else: raise exc.WrappedFailure([self]) def check(self, *exc_classes): """Check if any of exc_classes caused the failure. Arguments of this method can be exception types or type names (stings). If captured exception is instance of exception of given type, the corresponding argument is returned. Else, None is returned. 
""" for cls in exc_classes: if isinstance(cls, type): err = reflection.get_class_name(cls) else: err = cls if err in self._exc_type_names: return cls return None def __str__(self): return 'Failure: %s: %s' % (self._exc_type_names[0], self._exception_str) def __iter__(self): """Iterate over exception type names.""" for et in self._exc_type_names: yield et def copy(self): return Failure(exc_info=copy_exc_info(self.exc_info), exception_str=self.exception_str, traceback_str=self.traceback_str, exc_type_names=self._exc_type_names[:]) taskflow-0.1.3/taskflow/utils/reflection.py0000664000175300017540000001210412275003514022175 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import inspect import types import six def get_member_names(obj, exclude_hidden=True): """Get all the member names for a object.""" names = [] for (name, _value) in inspect.getmembers(obj): if exclude_hidden and name.startswith("_"): continue names.append(name) return sorted(names) def get_class_name(obj): """Get class name for object. If object is a type, fully qualified name of the type is returned. Else, fully qualified name of the type of the object is returned. For builtin types, just name is returned. """ if not isinstance(obj, six.class_types): obj = type(obj) if obj.__module__ in ('builtins', '__builtin__', 'exceptions'): return obj.__name__ return '.'.join((obj.__module__, obj.__name__)) def get_all_class_names(obj, up_to=object): """Get class names of object parent classes. Iterate over all class names object is instance or subclass of, in order of method resolution (mro). If up_to parameter is provided, only name of classes that are sublcasses to that class are returned. """ if not isinstance(obj, six.class_types): obj = type(obj) for cls in obj.mro(): if issubclass(cls, up_to): yield get_class_name(cls) def get_callable_name(function): """Generate a name from callable. Tries to do the best to guess fully qualified callable name. """ method_self = get_method_self(function) if method_self is not None: # this is bound method if isinstance(method_self, six.class_types): # this is bound class method im_class = method_self else: im_class = type(method_self) parts = (im_class.__module__, im_class.__name__, function.__name__) elif inspect.isfunction(function) or inspect.ismethod(function): parts = (function.__module__, function.__name__) else: im_class = type(function) if im_class is type: im_class = function parts = (im_class.__module__, im_class.__name__) return '.'.join(parts) def get_method_self(method): if not inspect.ismethod(method): return None try: return six.get_method_self(method) except AttributeError: return None def is_same_callback(callback1, callback2, strict=True): """Returns if the two callbacks are the same.""" if callback1 is callback2: # This happens when plain methods are given (or static/non-bound # methods). 
return True if callback1 == callback2: if not strict: return True # Two bound methods are equal if the functions themselves are equal # and the objects they are bound to are equal. This means that a bound # method could be the same bound method on another object if the # objects have __eq__ methods that return true (when in fact it is a # different bound method). try: self1 = six.get_method_self(callback1) self2 = six.get_method_self(callback2) return self1 is self2 except AttributeError: pass return False def is_bound_method(method): """Returns whether the given method is bound to an object.""" return bool(get_method_self(method)) def _get_arg_spec(function): if isinstance(function, type): bound = True function = function.__init__ elif isinstance(function, (types.FunctionType, types.MethodType)): bound = is_bound_method(function) function = getattr(function, '__wrapped__', function) else: function = function.__call__ bound = is_bound_method(function) return inspect.getargspec(function), bound def get_callable_args(function, required_only=False): """Get the names of a callable's arguments. Special arguments (like *args and **kwargs) are not included in the output. If required_only is True, optional arguments (with default values) are not included in the output. """ argspec, bound = _get_arg_spec(function) f_args = argspec.args if required_only and argspec.defaults: f_args = f_args[:-len(argspec.defaults)] if bound: f_args = f_args[1:] return f_args def accepts_kwargs(function): """Returns True if the function accepts kwargs.""" argspec, _bound = _get_arg_spec(function) return bool(argspec.keywords) taskflow-0.1.3/taskflow/utils/threading_utils.py0000664000175300017540000000237412275003514023240 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import multiprocessing import six if six.PY2: from thread import get_ident # noqa else: # In python 3+ the get_ident call moved from threading import get_ident # noqa def get_optimal_thread_count(): """Try to guess the optimal thread count for the current system.""" try: return multiprocessing.cpu_count() + 1 except NotImplementedError: # NOTE(harlowja): this apparently may raise, so in this case we will # just set up two threads since it's hard to know what else we # should do in this situation. return 2 taskflow-0.1.3/taskflow/utils/eventlet_utils.py0000664000175300017540000001101012275003514023104 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import threading from concurrent import futures try: from eventlet.green import threading as green_threading from eventlet import greenpool from eventlet import patcher from eventlet import queue EVENTLET_AVAILABLE = True except ImportError: EVENTLET_AVAILABLE = False from taskflow.utils import lock_utils LOG = logging.getLogger(__name__) # NOTE(harlowja): this object signals to threads that they should stop # working and rest in peace. _TOMBSTONE = object() class _WorkItem(object): def __init__(self, future, fn, args, kwargs): self.future = future self.fn = fn self.args = args self.kwargs = kwargs def run(self): if not self.future.set_running_or_notify_cancel(): return try: result = self.fn(*self.args, **self.kwargs) except BaseException as e: self.future.set_exception(e) else: self.future.set_result(result) class _Worker(object): def __init__(self, executor, work_queue, worker_id): self.executor = executor self.work_queue = work_queue self.worker_id = worker_id def __call__(self): try: while True: work = self.work_queue.get(block=True) if work is _TOMBSTONE: # NOTE(harlowja): give notice to other workers (this is # basically a chain of tombstone calls that will cause all # the workers on the queue to eventually shut-down). self.work_queue.put(_TOMBSTONE) break else: work.run() except BaseException: LOG.critical("Exception in worker %s of '%s'", self.worker_id, self.executor, exc_info=True) class GreenFuture(futures.Future): def __init__(self): super(GreenFuture, self).__init__() # NOTE(harlowja): replace the built-in condition with a greenthread # compatible one so that when getting the result of this future the # functions will correctly yield to eventlet. If this is not done then # waiting on the future never actually causes the greenthreads to run # and thus you wait for infinity. if not patcher.is_monkey_patched('threading'): self._condition = green_threading.Condition() class GreenExecutor(futures.Executor): """A greenthread backed executor.""" def __init__(self, max_workers=1000): assert EVENTLET_AVAILABLE, 'eventlet is needed to use GreenExecutor' assert int(max_workers) > 0, 'Max workers must be greater than zero' self._max_workers = int(max_workers) self._pool = greenpool.GreenPool(self._max_workers) self._work_queue = queue.LightQueue() self._shutdown_lock = threading.RLock() self._shutdown = False @lock_utils.locked(lock='_shutdown_lock') def submit(self, fn, *args, **kwargs): if self._shutdown: raise RuntimeError('cannot schedule new futures after shutdown') f = GreenFuture() w = _WorkItem(f, fn, args, kwargs) self._work_queue.put(w) # Spin up any new workers (since they are spun up on demand and # not at executor initialization). self._spin_up() return f def _spin_up(self): cur_am = (self._pool.running() + self._pool.waiting()) if cur_am < self._max_workers and cur_am < self._work_queue.qsize(): # Spin up a new worker to do the work as we are behind. 
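            # (cur_am is the pool's count of running plus waiting
            # greenthreads; the cur_am + 1 passed below also serves as a
            # simple 1-based worker id that is only used for logging.)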
worker = _Worker(self, self._work_queue, cur_am + 1) self._pool.spawn(worker) def shutdown(self, wait=True): with self._shutdown_lock: self._shutdown = True self._work_queue.put(_TOMBSTONE) if wait: self._pool.waitall() taskflow-0.1.3/taskflow/utils/flow_utils.py0000664000175300017540000001774012275003514022245 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import threading import networkx as nx from taskflow import exceptions from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.utils import graph_utils as gu from taskflow.utils import lock_utils as lu from taskflow.utils import misc LOG = logging.getLogger(__name__) # Mark each edge added during flattening with a 'flatten' attribute, which is # useful for doing later analysis of the edges (to determine why the edges # were created). FLATTEN_EDGE_DATA = { 'flatten': True, } class Flattener(object): def __init__(self, root, freeze=True): self._root = root self._graph = None self._history = set() self._freeze = bool(freeze) self._lock = threading.Lock() self._edge_data = FLATTEN_EDGE_DATA.copy() def _add_new_edges(self, graph, nodes_from, nodes_to, edge_attrs=None): """Adds new edges from nodes to other nodes in the specified graph, with the given edge attributes (defaulting to the class-provided edge_data if None), if the edge does not already exist. """ if edge_attrs is None: edge_attrs = self._edge_data else: edge_attrs = edge_attrs.copy() edge_attrs.update(self._edge_data) for u in nodes_from: for v in nodes_to: if not graph.has_edge(u, v): # NOTE(harlowja): give each edge its own attr copy so that # later modifications of one edge don't affect the others. graph.add_edge(u, v, attr_dict=edge_attrs.copy()) def _flatten(self, item): functor = self._find_flattener(item) if not functor: raise TypeError("Unknown type requested to flatten: %s (%s)" % (item, type(item))) self._pre_item_flatten(item) graph = functor(item) self._post_item_flatten(item, graph) return graph def _find_flattener(self, item): """Locates the flattening function to use to flatten the given item.""" if isinstance(item, lf.Flow): return self._flatten_linear elif isinstance(item, uf.Flow): return self._flatten_unordered elif isinstance(item, gf.Flow): return self._flatten_graph elif isinstance(item, task.BaseTask): return self._flatten_task else: return None def _flatten_linear(self, flow): """Flattens a linear flow.""" graph = nx.DiGraph(name=flow.name) previous_nodes = [] for item in flow: subgraph = self._flatten(item) graph = gu.merge_graphs([graph, subgraph]) # Find nodes that have no predecessors and give them the previous # nodes as predecessors so that the linearity ordering is # maintained.
Find the ones with no successors and use this list # to connect the next subgraph (if any). self._add_new_edges(graph, previous_nodes, list(gu.get_no_predecessors(subgraph))) # There should always be a node without successors; otherwise we # would have a cycle A -> B -> A situation, which should not be possible. previous_nodes = list(gu.get_no_successors(subgraph)) return graph def _flatten_unordered(self, flow): """Flattens an unordered flow.""" graph = nx.DiGraph(name=flow.name) for item in flow: # NOTE(harlowja): we do *not* connect the graphs together; this # keeps each item (translated to a subgraph) disconnected # from the others, which will result in unordered execution while # running. graph = gu.merge_graphs([graph, self._flatten(item)]) return graph def _flatten_task(self, task): """Flattens an individual task.""" graph = nx.DiGraph(name=task.name) graph.add_node(task) return graph def _flatten_graph(self, flow): """Flattens a graph flow.""" graph = nx.DiGraph(name=flow.name) # Flatten all nodes into a single subgraph per node. subgraph_map = {} for item in flow: subgraph = self._flatten(item) subgraph_map[item] = subgraph graph = gu.merge_graphs([graph, subgraph]) # Reconnect all node edges to their corresponding subgraphs. for (u, v) in flow.graph.edges_iter(): # Retain and update the original edge attributes. u_v_attrs = gu.get_edge_attrs(flow.graph, u, v) # Connect the ones with no predecessors in v to the ones with no # successors in u (thus maintaining the edge dependency). self._add_new_edges(graph, list(gu.get_no_successors(subgraph_map[u])), list(gu.get_no_predecessors(subgraph_map[v])), edge_attrs=u_v_attrs) return graph def _pre_item_flatten(self, item): """Called before an item is flattened; any pre-flattening actions.""" if id(item) in self._history: raise ValueError("Already flattened item: %s (%s), recursive" " flattening not supported" % (item, id(item))) LOG.debug("Starting to flatten '%s'", item) self._history.add(id(item)) def _post_item_flatten(self, item, graph): """Called after an item is flattened; any post-flattening actions.""" LOG.debug("Finished flattening '%s'", item) # NOTE(harlowja): this one can be expensive to calculate (especially # the cycle detection), so only do it if we know debugging is enabled # and not in all cases. if LOG.isEnabledFor(logging.DEBUG): LOG.debug("Translated '%s' into a graph:", item) for line in gu.pformat(graph).splitlines(): # Indent it so that it's slightly offset from the above line.
LOG.debug(" %s", line) def _pre_flatten(self): """Called before the flattening of the item starts.""" self._history.clear() def _post_flatten(self, graph): """Called after the flattening of the item finishes successfully.""" dup_names = misc.get_duplicate_keys(graph.nodes_iter(), key=lambda node: node.name) if dup_names: dup_names = ', '.join(sorted(dup_names)) raise exceptions.InvariantViolation("Tasks with duplicate names " "found: %s" % (dup_names)) self._history.clear() @lu.locked def flatten(self): """Flattens a item (a task or flow) into a single execution graph.""" if self._graph is not None: return self._graph self._pre_flatten() graph = self._flatten(self._root) self._post_flatten(graph) if self._freeze: self._graph = nx.freeze(graph) else: self._graph = graph return self._graph def flatten(item, freeze=True): """Flattens a item (a task or flow) into a single execution graph.""" return Flattener(item, freeze=freeze).flatten() taskflow-0.1.3/taskflow/tests/0000775000175300017540000000000012275003604017475 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/tests/unit/0000775000175300017540000000000012275003604020454 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/tests/unit/test_suspend_flow.py0000664000175300017540000001677612275003514024616 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools import taskflow.engines from taskflow import exceptions as exc from taskflow.listeners import base as lbase from taskflow.patterns import linear_flow as lf from taskflow import states from taskflow import test from taskflow.tests import utils from taskflow.utils import eventlet_utils as eu class SuspendingListener(lbase.ListenerBase): def __init__(self, engine, task_name, task_state): super(SuspendingListener, self).__init__( engine, task_listen_for=(task_state,)) self._task_name = task_name def _task_receiver(self, state, details): if details['task_name'] == self._task_name: self._engine.suspend() class SuspendFlowTest(utils.EngineTestBase): def test_suspend_one_task(self): flow = utils.SaveOrderTask('a') engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS): engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) self.assertEqual(self.values, ['a']) engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) self.assertEqual(self.values, ['a']) def test_suspend_linear_flow(self): flow = lf.Flow('linear').add( utils.SaveOrderTask('a'), utils.SaveOrderTask('b'), utils.SaveOrderTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS): engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) self.assertEqual(self.values, ['a', 'b']) engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) self.assertEqual(self.values, ['a', 'b', 'c']) def test_suspend_linear_flow_on_revert(self): flow = lf.Flow('linear').add( utils.SaveOrderTask('a'), utils.SaveOrderTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED): engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) self.assertEqual( self.values, ['a', 'b', 'c reverted(Failure: RuntimeError: Woot!)', 'b reverted(5)']) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) self.assertEqual( self.values, ['a', 'b', 'c reverted(Failure: RuntimeError: Woot!)', 'b reverted(5)', 'a reverted(5)']) def test_suspend_and_resume_linear_flow_on_revert(self): flow = lf.Flow('linear').add( utils.SaveOrderTask('a'), utils.SaveOrderTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED): engine.run() # pretend we are resuming engine2 = self._make_engine(flow, engine.storage._flowdetail) self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) self.assertEqual( self.values, ['a', 'b', 'c reverted(Failure: RuntimeError: Woot!)', 'b reverted(5)', 'a reverted(5)']) def test_suspend_and_revert_even_if_task_is_gone(self): flow = lf.Flow('linear').add( utils.SaveOrderTask('a'), utils.SaveOrderTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED): engine.run() expected_values = ['a', 'b', 'c reverted(Failure: RuntimeError: Woot!)', 'b reverted(5)'] self.assertEqual(self.values, expected_values) # pretend we are resuming, but task 'c' gone when flow got updated flow2 = lf.Flow('linear').add( utils.SaveOrderTask('a'), utils.SaveOrderTask('b') ) engine2 = self._make_engine(flow2, engine.storage._flowdetail) 
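        # Even though task 'c' is gone from the updated flow, its stored
        # failure is still re-raised and the remaining tasks are reverted.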
self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) expected_values.append('a reverted(5)') self.assertEqual(self.values, expected_values) def test_storage_is_rechecked(self): flow = lf.Flow('linear').add( utils.SaveOrderTask('b', requires=['foo']), utils.SaveOrderTask('c') ) engine = self._make_engine(flow) engine.storage.inject({'foo': 'bar'}) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS): engine.run() self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) # uninject everything: engine.storage.save(engine.storage.injector_name, {}, states.SUCCESS) self.assertRaises(exc.MissingDependencies, engine.run) class SingleThreadedEngineTest(SuspendFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf='serial', backend=self.backend) class MultiThreadedEngineTest(SuspendFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): engine_conf = dict(engine='parallel', executor=executor) return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf=engine_conf, backend=self.backend) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(SuspendFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = eu.GreenExecutor() engine_conf = dict(engine='parallel', executor=executor) return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf=engine_conf, backend=self.backend) taskflow-0.1.3/taskflow/tests/unit/test_check_transition.py0000664000175300017540000001141012275003514025411 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
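"""Tests for flow and task state transition checking.

An illustrative sketch (not part of the original module) of the API under
test::

    from taskflow import states

    states.check_flow_transition(states.SUCCESS, states.RUNNING)  # -> True
    states.check_flow_transition(states.SUCCESS, states.SUCCESS)  # -> False (ignored)
    states.check_flow_transition(states.FAILURE, states.SUCCESS)  # raises InvalidState
"""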
from taskflow import exceptions as exc from taskflow import states from taskflow import test class TransitionTest(test.TestCase): def assertTransitionAllowed(self, from_state, to_state): self.assertTrue(self.check_transition(from_state, to_state)) def assertTransitionIgnored(self, from_state, to_state): self.assertFalse(self.check_transition(from_state, to_state)) def assertTransitionForbidden(self, from_state, to_state): self.assertRaisesRegexp(exc.InvalidState, self.transition_exc_regexp, self.check_transition, from_state, to_state) def assertTransitions(self, from_state, allowed=None, ignored=None, forbidden=None): for a in allowed or []: self.assertTransitionAllowed(from_state, a) for i in ignored or []: self.assertTransitionIgnored(from_state, i) for f in forbidden or []: self.assertTransitionForbidden(from_state, f) class CheckFlowTransitionTest(TransitionTest): def setUp(self): super(CheckFlowTransitionTest, self).setUp() self.check_transition = states.check_flow_transition self.transition_exc_regexp = '^Flow transition.*not allowed' def test_to_same_state(self): self.assertTransitionIgnored(states.SUCCESS, states.SUCCESS) def test_rerunning_allowed(self): self.assertTransitionAllowed(states.SUCCESS, states.RUNNING) def test_no_resuming_from_pending(self): self.assertTransitionIgnored(states.PENDING, states.RESUMING) def test_resuming_from_running(self): self.assertTransitionAllowed(states.RUNNING, states.RESUMING) def test_bad_transition_raises(self): self.assertTransitionForbidden(states.FAILURE, states.SUCCESS) class CheckTaskTransitionTest(TransitionTest): def setUp(self): super(CheckTaskTransitionTest, self).setUp() self.check_transition = states.check_task_transition self.transition_exc_regexp = '^Task transition.*not allowed' def test_from_pending_state(self): self.assertTransitions(from_state=states.PENDING, allowed=(states.RUNNING,), ignored=(states.PENDING, states.REVERTING), forbidden=(states.SUCCESS, states.FAILURE, states.REVERTED)) def test_from_running_state(self): self.assertTransitions(from_state=states.RUNNING, allowed=(states.RUNNING, states.SUCCESS, states.FAILURE, states.REVERTING), forbidden=(states.PENDING, states.REVERTED)) def test_from_success_state(self): self.assertTransitions(from_state=states.SUCCESS, allowed=(states.REVERTING,), ignored=(states.RUNNING, states.SUCCESS), forbidden=(states.PENDING, states.FAILURE, states.REVERTED)) def test_from_failure_state(self): self.assertTransitions(from_state=states.FAILURE, allowed=(states.REVERTING,), ignored=(states.FAILURE,), forbidden=(states.PENDING, states.RUNNING, states.SUCCESS, states.REVERTED)) def test_from_reverting_state(self): self.assertTransitions(from_state=states.REVERTING, allowed=(states.RUNNING, states.FAILURE, states.REVERTING, states.REVERTED), forbidden=(states.PENDING, states.SUCCESS)) def test_from_reverted_state(self): self.assertTransitions(from_state=states.REVERTED, allowed=(states.PENDING,), ignored=(states.REVERTING, states.REVERTED), forbidden=(states.RUNNING, states.SUCCESS, states.FAILURE)) taskflow-0.1.3/taskflow/tests/unit/test_unordered_flow.py0000664000175300017540000000441212275003514025104 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import taskflow.engines from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow import test from taskflow.tests import utils class UnorderedFlowTest(test.TestCase): def _make_engine(self, flow): return taskflow.engines.load(flow, store={'context': {}}) def test_result_access(self): class DoApply(task.Task): default_provides = ('a', 'b') def execute(self): return [1, 2] wf = uf.Flow("the-test-action") wf.add(DoApply()) e = self._make_engine(wf) e.run() data = e.storage.fetch_all() self.assertIn('a', data) self.assertIn('b', data) self.assertEqual(2, data['b']) self.assertEqual(1, data['a']) def test_reverting_flow(self): wf = uf.Flow("the-test-action") wf.add(utils.make_reverting_task('1')) wf.add(utils.make_reverting_task('2', blowup=True)) e = self._make_engine(wf) self.assertRaisesRegexp(RuntimeError, '^I blew up', e.run) def test_functor_flow(self): class DoApply1(task.Task): default_provides = ('a', 'b', 'c') def execute(self, context): context['1'] = True return ['a', 'b', 'c'] class DoApply2(task.Task): def execute(self, context): context['2'] = True wf = uf.Flow("the-test-action") wf.add(DoApply1()) wf.add(DoApply2()) e = self._make_engine(wf) e.run() self.assertEqual(2, len(e.storage.fetch('context'))) taskflow-0.1.3/taskflow/tests/unit/test_utils_binary.py0000664000175300017540000000635012275003514024575 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
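"""Tests for the binary/text helpers in taskflow.utils.misc.

An illustrative sketch (not part of the original module) of the helpers
under test::

    from taskflow.utils import misc

    misc.binary_encode(u'hello')       # -> b'hello' (utf-8 by default)
    misc.binary_decode(b'hello')       # -> u'hello'
    misc.decode_json(b'{"foo": 1}')    # -> {'foo': 1}
"""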
import six from taskflow import test from taskflow.utils import misc def _bytes(data): if six.PY3: return data.encode(encoding='utf-8') else: return data class BinaryEncodeTest(test.TestCase): def _check(self, data, expected_result): result = misc.binary_encode(data) self.assertIsInstance(result, six.binary_type) self.assertEqual(result, expected_result) def test_simple_binary(self): data = _bytes('hello') self._check(data, data) def test_unicode_binary(self): data = _bytes('привет') self._check(data, data) def test_simple_text(self): self._check(u'hello', _bytes('hello')) def test_unicode_text(self): self._check(u'привет', _bytes('привет')) def test_unicode_other_encoding(self): result = misc.binary_encode(u'mañana', 'latin-1') self.assertIsInstance(result, six.binary_type) self.assertEqual(result, u'mañana'.encode('latin-1')) class BinaryDecodeTest(test.TestCase): def _check(self, data, expected_result): result = misc.binary_decode(data) self.assertIsInstance(result, six.text_type) self.assertEqual(result, expected_result) def test_simple_text(self): data = u'hello' self._check(data, data) def test_unicode_text(self): data = u'привет' self._check(data, data) def test_simple_binary(self): self._check(_bytes('hello'), u'hello') def test_unicode_binary(self): self._check(_bytes('привет'), u'привет') def test_unicode_other_encoding(self): data = u'mañana'.encode('latin-1') result = misc.binary_decode(data, 'latin-1') self.assertIsInstance(result, six.text_type) self.assertEqual(result, u'mañana') class DecodeJsonTest(test.TestCase): def test_it_works(self): self.assertEqual(misc.decode_json(_bytes('{"foo": 1}')), {"foo": 1}) def test_it_works_with_unicode(self): data = _bytes('{"foo": "фуу"}') self.assertEqual(misc.decode_json(data), {"foo": u'фуу'}) def test_handles_invalid_unicode(self): self.assertRaises(ValueError, misc.decode_json, six.b('{"\xf1": 1}')) def test_handles_bad_json(self): self.assertRaises(ValueError, misc.decode_json, _bytes('{"foo":')) def test_handles_wrong_types(self): self.assertRaises(ValueError, misc.decode_json, _bytes('42')) taskflow-0.1.3/taskflow/tests/unit/test_storage.py0000664000175300017540000004355312275003514023543 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
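"""Tests for the storage layer.

An illustrative sketch (not part of the original module) of the core
calls exercised below::

    s = storage.SingleThreadedStorage(flow_detail=fd, backend=backend)
    s.ensure_task('my task')    # task starts out in states.PENDING
    s.save('my task', 5)        # result saved, state becomes states.SUCCESS
    s.get('my task')            # -> 5
"""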
import contextlib import threading import mock from taskflow import exceptions from taskflow.openstack.common import uuidutils from taskflow.persistence.backends import impl_memory from taskflow.persistence import logbook from taskflow import states from taskflow import storage from taskflow import test from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils class StorageTest(test.TestCase): def setUp(self): super(StorageTest, self).setUp() self.backend = impl_memory.MemoryBackend(conf={}) self.thread_count = 50 def _run_many_threads(self, threads): for t in threads: t.start() for t in threads: t.join() def _get_storage(self, threaded=False): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) if threaded: return storage.MultiThreadedStorage(backend=self.backend, flow_detail=flow_detail) else: return storage.SingleThreadedStorage(backend=self.backend, flow_detail=flow_detail) def tearDown(self): super(StorageTest, self).tearDown() with contextlib.closing(self.backend) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() def test_non_saving_storage(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = storage.SingleThreadedStorage(flow_detail=flow_detail) s.ensure_task('my_task') self.assertTrue( uuidutils.is_uuid_like(s.get_task_uuid('my_task'))) def test_flow_name_and_uuid(self): fd = logbook.FlowDetail(name='test-fd', uuid='aaaa') s = storage.SingleThreadedStorage(flow_detail=fd) self.assertEqual(s.flow_name, 'test-fd') self.assertEqual(s.flow_uuid, 'aaaa') def test_ensure_task(self): s = self._get_storage() s.ensure_task('my task') self.assertEqual(s.get_task_state('my task'), states.PENDING) self.assertTrue( uuidutils.is_uuid_like(s.get_task_uuid('my task'))) def test_get_tasks_states(self): s = self._get_storage() s.ensure_task('my task') s.ensure_task('my task2') s.save('my task', 'foo') expected = { 'my task': states.SUCCESS, 'my task2': states.PENDING, } self.assertEqual(s.get_tasks_states(['my task', 'my task2']), expected) def test_ensure_task_fd(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = storage.SingleThreadedStorage(backend=self.backend, flow_detail=flow_detail) s.ensure_task('my task', '3.11') td = flow_detail.find(s.get_task_uuid('my task')) self.assertIsNotNone(td) self.assertEqual(td.name, 'my task') self.assertEqual(td.version, '3.11') self.assertEqual(td.state, states.PENDING) def test_get_without_save(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) td = logbook.TaskDetail(name='my_task', uuid='42') flow_detail.add(td) s = storage.SingleThreadedStorage(backend=self.backend, flow_detail=flow_detail) self.assertEqual('42', s.get_task_uuid('my_task')) def test_ensure_existing_task(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) td = logbook.TaskDetail(name='my_task', uuid='42') flow_detail.add(td) s = storage.SingleThreadedStorage(backend=self.backend, flow_detail=flow_detail) s.ensure_task('my_task') self.assertEqual('42', s.get_task_uuid('my_task')) def test_save_and_get(self): s = self._get_storage() s.ensure_task('my task') s.save('my task', 5) self.assertEqual(s.get('my task'), 5) self.assertEqual(s.fetch_all(), {}) self.assertEqual(s.get_task_state('my task'), states.SUCCESS) def test_save_and_get_other_state(self): s = self._get_storage() s.ensure_task('my task') s.save('my task', 5, states.FAILURE) self.assertEqual(s.get('my task'), 5) self.assertEqual(s.get_task_state('my task'), states.FAILURE) def 
test_save_and_get_failure(self): fail = misc.Failure(exc_info=(RuntimeError, RuntimeError(), None)) s = self._get_storage() s.ensure_task('my task') s.save('my task', fail, states.FAILURE) self.assertEqual(s.get('my task'), fail) self.assertEqual(s.get_task_state('my task'), states.FAILURE) self.assertIs(s.has_failures(), True) self.assertEqual(s.get_failures(), {'my task': fail}) def test_get_failure_from_reverted_task(self): fail = misc.Failure(exc_info=(RuntimeError, RuntimeError(), None)) s = self._get_storage() s.ensure_task('my task') s.save('my task', fail, states.FAILURE) s.set_task_state('my task', states.REVERTING) self.assertEqual(s.get('my task'), fail) s.set_task_state('my task', states.REVERTED) self.assertEqual(s.get('my task'), fail) def test_get_failure_after_reload(self): fail = misc.Failure(exc_info=(RuntimeError, RuntimeError(), None)) s = self._get_storage() s.ensure_task('my task') s.save('my task', fail, states.FAILURE) s2 = storage.SingleThreadedStorage(backend=self.backend, flow_detail=s._flowdetail) self.assertIs(s2.has_failures(), True) self.assertEqual(s2.get_failures(), {'my task': fail}) self.assertEqual(s2.get('my task'), fail) self.assertEqual(s2.get_task_state('my task'), states.FAILURE) def test_get_non_existing_var(self): s = self._get_storage() s.ensure_task('my task') self.assertRaises(exceptions.NotFound, s.get, 'my task') def test_reset(self): s = self._get_storage() s.ensure_task('my task') s.save('my task', 5) s.reset('my task') self.assertEqual(s.get_task_state('my task'), states.PENDING) self.assertRaises(exceptions.NotFound, s.get, 'my task') def test_reset_unknown_task(self): s = self._get_storage() s.ensure_task('my task') self.assertEqual(s.reset('my task'), None) def test_reset_tasks(self): s = self._get_storage() s.ensure_task('my task') s.save('my task', 5) s.ensure_task('my other task') s.save('my other task', 7) s.reset_tasks() self.assertEqual(s.get_task_state('my task'), states.PENDING) self.assertRaises(exceptions.NotFound, s.get, 'my task') self.assertEqual(s.get_task_state('my other task'), states.PENDING) self.assertRaises(exceptions.NotFound, s.get, 'my other task') def test_reset_tasks_does_not_breaks_inject(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) # NOTE(imelnikov): injecting is implemented as special task # so resetting tasks may break it if implemented incorrectly. 
s.reset_tasks() self.assertEqual(s.fetch('spam'), 'eggs') self.assertEqual(s.fetch_all(), { 'foo': 'bar', 'spam': 'eggs', }) def test_fetch_by_name(self): s = self._get_storage() name = 'my result' s.ensure_task('my task', '1.0', {name: None}) s.save('my task', 5) self.assertEqual(s.fetch(name), 5) self.assertEqual(s.fetch_all(), {name: 5}) def test_fetch_unknown_name(self): s = self._get_storage() self.assertRaisesRegexp(exceptions.NotFound, "^Name 'xxx' is not mapped", s.fetch, 'xxx') def test_default_task_progress(self): s = self._get_storage() s.ensure_task('my task') self.assertEqual(s.get_task_progress('my task'), 0.0) self.assertEqual(s.get_task_progress_details('my task'), None) def test_task_progress(self): s = self._get_storage() s.ensure_task('my task') s.set_task_progress('my task', 0.5, {'test_data': 11}) self.assertEqual(s.get_task_progress('my task'), 0.5) self.assertEqual(s.get_task_progress_details('my task'), { 'at_progress': 0.5, 'details': {'test_data': 11} }) s.set_task_progress('my task', 0.7, {'test_data': 17}) self.assertEqual(s.get_task_progress('my task'), 0.7) self.assertEqual(s.get_task_progress_details('my task'), { 'at_progress': 0.7, 'details': {'test_data': 17} }) s.set_task_progress('my task', 0.99) self.assertEqual(s.get_task_progress('my task'), 0.99) self.assertEqual(s.get_task_progress_details('my task'), { 'at_progress': 0.7, 'details': {'test_data': 17} }) def test_task_progress_erase(self): s = self._get_storage() s.ensure_task('my task') s.set_task_progress('my task', 0.8, {}) self.assertEqual(s.get_task_progress('my task'), 0.8) self.assertEqual(s.get_task_progress_details('my task'), None) def test_fetch_result_not_ready(self): s = self._get_storage() name = 'my result' s.ensure_task('my task', result_mapping={name: None}) self.assertRaises(exceptions.NotFound, s.get, name) self.assertEqual(s.fetch_all(), {}) def test_save_multiple_results(self): s = self._get_storage() result_mapping = {'foo': 0, 'bar': 1, 'whole': None} s.ensure_task('my task', result_mapping=result_mapping) s.save('my task', ('spam', 'eggs')) self.assertEqual(s.fetch_all(), { 'foo': 'spam', 'bar': 'eggs', 'whole': ('spam', 'eggs') }) def test_mapping_none(self): s = self._get_storage() s.ensure_task('my task') s.save('my task', 5) self.assertEqual(s.fetch_all(), {}) def test_inject(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual(s.fetch('spam'), 'eggs') self.assertEqual(s.fetch_all(), { 'foo': 'bar', 'spam': 'eggs', }) def test_inject_twice(self): s = self._get_storage() s.inject({'foo': 'bar'}) self.assertEqual(s.fetch_all(), {'foo': 'bar'}) s.inject({'spam': 'eggs'}) self.assertEqual(s.fetch_all(), { 'foo': 'bar', 'spam': 'eggs', }) def test_inject_resumed(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) # verify it's there self.assertEqual(s.fetch_all(), { 'foo': 'bar', 'spam': 'eggs', }) # imagine we are resuming, so we need to make new # storage from same flow details s2 = storage.SingleThreadedStorage(s._flowdetail, backend=self.backend) # injected data should still be there: self.assertEqual(s2.fetch_all(), { 'foo': 'bar', 'spam': 'eggs', }) def test_many_thread_ensure_same_task(self): s = self._get_storage(threaded=True) def ensure_my_task(): s.ensure_task('my_task', result_mapping={}) threads = [] for i in range(0, self.thread_count): threads.append(threading.Thread(target=ensure_my_task)) self._run_many_threads(threads) # Only one task should have been made, no more. 
self.assertEqual(1, len(s._flowdetail)) def test_many_thread_one_reset(self): s = self._get_storage(threaded=True) s.ensure_task('a') s.set_task_state('a', states.SUSPENDED) s.ensure_task('b') s.set_task_state('b', states.SUSPENDED) results = [] result_lock = threading.Lock() def reset_all(): r = s.reset_tasks() with result_lock: results.append(r) threads = [] for i in range(0, self.thread_count): threads.append(threading.Thread(target=reset_all)) self._run_many_threads(threads) # Only one thread should have actually performed the reset (the # others should have found nothing left to reset). results = [r for r in results if len(r)] self.assertEqual(1, len(results)) self.assertEqual(['a', 'b'], sorted([a[0] for a in results[0]])) def test_many_thread_inject(self): s = self._get_storage(threaded=True) def inject_values(values): s.inject(values) threads = [] for i in range(0, self.thread_count): values = { str(i): str(i), } threads.append(threading.Thread(target=inject_values, args=[values])) self._run_many_threads(threads) self.assertEqual(self.thread_count, len(s.fetch_all())) self.assertEqual(1, len(s._flowdetail)) def test_fetch_mapped_args(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual(s.fetch_mapped_args({'viking': 'spam'}), {'viking': 'eggs'}) def test_fetch_not_found_args(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertRaises(exceptions.NotFound, s.fetch_mapped_args, {'viking': 'helmet'}) def test_set_and_get_task_state(self): s = self._get_storage() state = states.PENDING s.ensure_task('my task') s.set_task_state('my task', state) self.assertEqual(s.get_task_state('my task'), state) def test_get_state_of_unknown_task(self): s = self._get_storage() self.assertRaisesRegexp(exceptions.NotFound, '^Unknown', s.get_task_state, 'my task') def test_task_by_name(self): s = self._get_storage() s.ensure_task('my task') self.assertTrue( uuidutils.is_uuid_like(s.get_task_uuid('my task'))) def test_unknown_task_by_name(self): s = self._get_storage() self.assertRaisesRegexp(exceptions.NotFound, '^Unknown task name:', s.get_task_uuid, '42') def test_initial_flow_state(self): s = self._get_storage() self.assertEqual(s.get_flow_state(), states.PENDING) def test_get_flow_state(self): _lb, fd = p_utils.temporary_flow_detail(backend=self.backend) fd.state = states.FAILURE with contextlib.closing(self.backend.get_connection()) as conn: fd.update(conn.update_flow_details(fd)) s = storage.SingleThreadedStorage(flow_detail=fd, backend=self.backend) self.assertEqual(s.get_flow_state(), states.FAILURE) def test_set_and_get_flow_state(self): s = self._get_storage() s.set_flow_state(states.SUCCESS) self.assertEqual(s.get_flow_state(), states.SUCCESS) @mock.patch.object(storage.LOG, 'warning') def test_result_is_checked(self, mocked_warning): s = self._get_storage() s.ensure_task('my task', result_mapping={'result': 'key'}) s.save('my task', {}) mocked_warning.assert_called_once_with( mock.ANY, 'my task', 'key', 'result') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'result') @mock.patch.object(storage.LOG, 'warning') def test_empty_result_is_checked(self, mocked_warning): s = self._get_storage() s.ensure_task('my task', result_mapping={'a': 0}) s.save('my task', ()) mocked_warning.assert_called_once_with( mock.ANY, 'my task', 0, 'a') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'a') @mock.patch.object(storage.LOG, 'warning') def test_short_result_is_checked(self, mocked_warning): s = self._get_storage() s.ensure_task('my
task', result_mapping={'a': 0, 'b': 1}) s.save('my task', ['result']) mocked_warning.assert_called_once_with( mock.ANY, 'my task', 1, 'b') self.assertEqual(s.fetch('a'), 'result') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'b') @mock.patch.object(storage.LOG, 'warning') def test_multiple_providers_are_checked(self, mocked_warning): s = self._get_storage() s.ensure_task('my task', result_mapping={'result': 'key'}) self.assertEqual(mocked_warning.mock_calls, []) s.ensure_task('my other task', result_mapping={'result': 'key'}) mocked_warning.assert_called_once_with( mock.ANY, 'result') @mock.patch.object(storage.LOG, 'warning') def test_multiple_providers_with_inject_are_checked(self, mocked_warning): s = self._get_storage() s.inject({'result': 'DONE'}) self.assertEqual(mocked_warning.mock_calls, []) s.ensure_task('my other task', result_mapping={'result': 'key'}) mocked_warning.assert_called_once_with(mock.ANY, 'result') taskflow-0.1.3/taskflow/tests/unit/test_utils_failure.py0000664000175300017540000002365012275003514024742 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from taskflow import exceptions from taskflow import test from taskflow.tests import utils as test_utils from taskflow.utils import misc def _captured_failure(msg): try: raise RuntimeError(msg) except Exception: return misc.Failure() class GeneralFailureObjTestsMixin(object): def test_captures_message(self): self.assertEqual(self.fail_obj.exception_str, 'Woot!') def test_str(self): self.assertEqual(str(self.fail_obj), 'Failure: RuntimeError: Woot!') def test_exception_types(self): self.assertEqual(list(self.fail_obj), test_utils.RUNTIME_ERROR_CLASSES[:-2]) def test_check_str(self): val = 'Exception' self.assertEqual(self.fail_obj.check(val), val) def test_check_str_not_there(self): val = 'ValueError' self.assertEqual(self.fail_obj.check(val), None) def test_check_type(self): self.assertIs(self.fail_obj.check(RuntimeError), RuntimeError) def test_check_type_not_there(self): self.assertIs(self.fail_obj.check(ValueError), None) class CaptureFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(CaptureFailureTestCase, self).setUp() self.fail_obj = _captured_failure('Woot!') def test_captures_value(self): self.assertIsInstance(self.fail_obj.exception, RuntimeError) def test_captures_exc_info(self): exc_info = self.fail_obj.exc_info self.assertEqual(len(exc_info), 3) self.assertEqual(exc_info[0], RuntimeError) self.assertIs(exc_info[1], self.fail_obj.exception) def test_reraises(self): self.assertRaisesRegexp(RuntimeError, '^Woot!$', self.fail_obj.reraise) class ReCreatedFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(ReCreatedFailureTestCase, self).setUp() fail_obj = _captured_failure('Woot!') self.fail_obj = misc.Failure(exception_str=fail_obj.exception_str, 
traceback_str=fail_obj.traceback_str, exc_type_names=list(fail_obj)) def test_value_lost(self): self.assertIs(self.fail_obj.exception, None) def test_no_exc_info(self): self.assertIs(self.fail_obj.exc_info, None) def test_reraises(self): exc = self.assertRaises(exceptions.WrappedFailure, self.fail_obj.reraise) self.assertIs(exc.check(RuntimeError), RuntimeError) class FromExceptionTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(FromExceptionTestCase, self).setUp() self.fail_obj = misc.Failure.from_exception(RuntimeError('Woot!')) class FailureObjectTestCase(test.TestCase): def test_dont_catch_base_exception(self): try: raise SystemExit() except BaseException: self.assertRaises(TypeError, misc.Failure) def test_unknown_argument(self): exc = self.assertRaises(TypeError, misc.Failure, exception_str='Woot!', traceback_str=None, exc_type_names=['Exception'], hi='hi there') expected = "Failure.__init__ got unexpected keyword argument(s): hi" self.assertEqual(str(exc), expected) def test_empty_does_not_reraise(self): self.assertIs(misc.Failure.reraise_if_any([]), None) def test_reraises_one(self): fls = [_captured_failure('Woot!')] self.assertRaisesRegexp(RuntimeError, '^Woot!$', misc.Failure.reraise_if_any, fls) def test_reraises_several(self): fls = [ _captured_failure('Woot!'), _captured_failure('Oh, not again!') ] exc = self.assertRaises(exceptions.WrappedFailure, misc.Failure.reraise_if_any, fls) self.assertEqual(list(exc), fls) def test_failure_copy(self): fail_obj = _captured_failure('Woot!') copied = fail_obj.copy() self.assertIsNot(fail_obj, copied) self.assertEqual(fail_obj, copied) self.assertTrue(fail_obj.matches(copied)) def test_failure_copy_recaptured(self): captured = _captured_failure('Woot!') fail_obj = misc.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) copied = fail_obj.copy() self.assertIsNot(fail_obj, copied) self.assertEqual(fail_obj, copied) self.assertFalse(fail_obj != copied) self.assertTrue(fail_obj.matches(copied)) def test_recaptured_not_eq(self): captured = _captured_failure('Woot!') fail_obj = misc.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) self.assertFalse(fail_obj == captured) self.assertTrue(fail_obj != captured) self.assertTrue(fail_obj.matches(captured)) def test_two_captured_eq(self): captured = _captured_failure('Woot!') captured2 = _captured_failure('Woot!') self.assertEqual(captured, captured2) def test_two_recaptured_neq(self): captured = _captured_failure('Woot!') fail_obj = misc.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) new_exc_str = captured.exception_str.replace('Woot', 'w00t') fail_obj2 = misc.Failure(exception_str=new_exc_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) self.assertNotEqual(fail_obj, fail_obj2) self.assertFalse(fail_obj2.matches(fail_obj)) def test_compares_to_none(self): captured = _captured_failure('Woot!') self.assertNotEqual(captured, None) self.assertFalse(captured.matches(None)) class WrappedFailureTestCase(test.TestCase): def test_simple_iter(self): fail_obj = _captured_failure('Woot!') wf = exceptions.WrappedFailure([fail_obj]) self.assertEqual(len(wf), 1) self.assertEqual(list(wf), [fail_obj]) def test_simple_check(self): fail_obj = _captured_failure('Woot!') wf = exceptions.WrappedFailure([fail_obj]) self.assertEqual(wf.check(RuntimeError), RuntimeError) 
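        # A type that none of the wrapped failures contain yields None.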
self.assertEqual(wf.check(ValueError), None) def test_two_failures(self): fls = [ _captured_failure('Woot!'), _captured_failure('Oh, not again!') ] wf = exceptions.WrappedFailure(fls) self.assertEqual(len(wf), 2) self.assertEqual(list(wf), fls) def test_flattening(self): f1 = _captured_failure('Wrap me') f2 = _captured_failure('Wrap me, too') f3 = _captured_failure('Woot!') try: raise exceptions.WrappedFailure([f1, f2]) except Exception: fail_obj = misc.Failure() wf = exceptions.WrappedFailure([fail_obj, f3]) self.assertEqual(list(wf), [f1, f2, f3]) class NonAsciiExceptionsTestCase(test.TestCase): def test_exception_with_non_ascii_str(self): bad_string = chr(200) fail = misc.Failure.from_exception(ValueError(bad_string)) self.assertEqual(fail.exception_str, bad_string) self.assertEqual(str(fail), 'Failure: ValueError: %s' % bad_string) def test_exception_non_ascii_unicode(self): hi_ru = u'привет' fail = misc.Failure.from_exception(ValueError(hi_ru)) self.assertEqual(fail.exception_str, hi_ru) self.assertIsInstance(fail.exception_str, six.text_type) self.assertEqual(six.text_type(fail), u'Failure: ValueError: %s' % hi_ru) def test_wrapped_failure_non_ascii_unicode(self): hi_cn = u'嗨' fail = ValueError(hi_cn) self.assertEqual(hi_cn, exceptions.exception_message(fail)) fail = misc.Failure.from_exception(fail) wrapped_fail = exceptions.WrappedFailure([fail]) if six.PY2: # Python 2.x will unicode escape it, while python 3.3+ will not, # so we sadly have to differentiate between these two... expected_result = (u"WrappedFailure: " "[u'Failure: ValueError: %s']" % (hi_cn.encode("unicode-escape"))) else: expected_result = (u"WrappedFailure: " "['Failure: ValueError: %s']" % (hi_cn)) self.assertEqual(expected_result, six.text_type(wrapped_fail)) def test_failure_equality_with_non_ascii_str(self): bad_string = chr(200) fail = misc.Failure.from_exception(ValueError(bad_string)) copied = fail.copy() self.assertEqual(fail, copied) def test_failure_equality_non_ascii_unicode(self): hi_ru = u'привет' fail = misc.Failure.from_exception(ValueError(hi_ru)) copied = fail.copy() self.assertEqual(fail, copied) taskflow-0.1.3/taskflow/tests/unit/test_progress.py0000664000175300017540000001172212275003514023734 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
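"""Tests for task progress reporting.

An illustrative sketch (not part of the original module): a task reports
progress from execute() and callbacks bound to the 'update_progress'
event are notified; 0.0 and 1.0 are fired automatically around
execution::

    class HalfwayTask(task.Task):
        def execute(self):
            self.update_progress(0.5)
"""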
import contextlib from taskflow import task from taskflow import test import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow.persistence.backends import impl_memory from taskflow.utils import persistence_utils as p_utils class ProgressTask(task.Task): def __init__(self, name, segments): super(ProgressTask, self).__init__(name=name) self._segments = segments def execute(self): if self._segments <= 0: return for i in range(1, self._segments): progress = float(i) / self._segments self.update_progress(progress) class ProgressTaskWithDetails(task.Task): def execute(self): self.update_progress(0.5, test='test data', foo='bar') class TestProgress(test.TestCase): def _make_engine(self, flow, flow_detail=None, backend=None): e = taskflow.engines.load(flow, flow_detail=flow_detail, backend=backend) e.compile() return e def tearDown(self): super(TestProgress, self).tearDown() with contextlib.closing(impl_memory.MemoryBackend({})) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() def test_sanity_progress(self): fired_events = [] def notify_me(task, event_data, progress): fired_events.append(progress) ev_count = 5 t = ProgressTask("test", ev_count) t.bind('update_progress', notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) e.run() self.assertEqual(ev_count + 1, len(fired_events)) self.assertEqual(1.0, fired_events[-1]) self.assertEqual(0.0, fired_events[0]) def test_no_segments_progress(self): fired_events = [] def notify_me(task, event_data, progress): fired_events.append(progress) t = ProgressTask("test", 0) t.bind('update_progress', notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) e.run() # 0.0 and 1.0 should be automatically fired self.assertEqual(2, len(fired_events)) self.assertEqual(1.0, fired_events[-1]) self.assertEqual(0.0, fired_events[0]) def test_storage_progress(self): with contextlib.closing(impl_memory.MemoryBackend({})) as be: flo = lf.Flow("test") flo.add(ProgressTask("test", 3)) b, fd = p_utils.temporary_flow_detail(be) e = self._make_engine(flo, flow_detail=fd, backend=be) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) task_uuid = e.storage.get_task_uuid("test") td = fd.find(task_uuid) self.assertEqual(1.0, td.meta['progress']) self.assertFalse(td.meta['progress_details']) def test_storage_progress_detail(self): flo = ProgressTaskWithDetails("test") e = self._make_engine(flo) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) end_details = e.storage.get_task_progress_details("test") self.assertEqual(end_details.get('at_progress'), 0.5) self.assertEqual(end_details.get('details'), { 'test': 'test data', 'foo': 'bar' }) def test_dual_storage_progress(self): fired_events = [] def notify_me(task, event_data, progress): fired_events.append(progress) with contextlib.closing(impl_memory.MemoryBackend({})) as be: t = ProgressTask("test", 5) t.bind('update_progress', notify_me) flo = lf.Flow("test") flo.add(t) b, fd = p_utils.temporary_flow_detail(be) e = self._make_engine(flo, flow_detail=fd, backend=be) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) task_uuid = e.storage.get_task_uuid("test") td = fd.find(task_uuid) self.assertEqual(1.0, td.meta['progress']) self.assertFalse(td.meta['progress_details']) self.assertEqual(6, len(fired_events)) taskflow-0.1.3/taskflow/tests/unit/test_utils.py0000664000175300017540000003507612275003514023240 0ustar 
jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import functools
import sys
import time

from taskflow import states
from taskflow import test
from taskflow.tests import utils as test_utils
from taskflow.utils import lock_utils
from taskflow.utils import misc
from taskflow.utils import reflection


def mere_function(a, b):
    pass


def function_with_defs(a, b, optional=None):
    pass


def function_with_kwargs(a, b, **kwargs):
    pass


class Class(object):

    def method(self, c, d):
        pass

    @staticmethod
    def static_method(e, f):
        pass

    @classmethod
    def class_method(cls, g, h):
        pass


class CallableClass(object):
    def __call__(self, i, j):
        pass


class ClassWithInit(object):
    def __init__(self, k, l):
        pass


class CallbackEqualityTest(test.TestCase):
    def test_different_simple_callbacks(self):

        def a():
            pass

        def b():
            pass

        self.assertFalse(reflection.is_same_callback(a, b))

    def test_static_instance_callbacks(self):

        class A(object):

            @staticmethod
            def b(a, b, c):
                pass

        a = A()
        b = A()
        self.assertTrue(reflection.is_same_callback(a.b, b.b))

    def test_different_instance_callbacks(self):

        class A(object):
            def b(self):
                pass

            def __eq__(self, other):
                return True

        b = A()
        c = A()
        self.assertFalse(reflection.is_same_callback(b.b, c.b))
        self.assertTrue(reflection.is_same_callback(b.b, c.b, strict=False))


class GetCallableNameTest(test.TestCase):

    def test_mere_function(self):
        name = reflection.get_callable_name(mere_function)
        self.assertEqual(name, '.'.join((__name__, 'mere_function')))

    def test_method(self):
        name = reflection.get_callable_name(Class.method)
        self.assertEqual(name, '.'.join((__name__, 'method')))

    def test_instance_method(self):
        name = reflection.get_callable_name(Class().method)
        self.assertEqual(name, '.'.join((__name__, 'Class', 'method')))

    def test_static_method(self):
        # NOTE(imelnikov): static methods are just functions; the class
        # name is not recorded anywhere in them.
name = reflection.get_callable_name(Class.static_method) self.assertEqual(name, '.'.join((__name__, 'static_method'))) def test_class_method(self): name = reflection.get_callable_name(Class.class_method) self.assertEqual(name, '.'.join((__name__, 'Class', 'class_method'))) def test_constructor(self): name = reflection.get_callable_name(Class) self.assertEqual(name, '.'.join((__name__, 'Class'))) def test_callable_class(self): name = reflection.get_callable_name(CallableClass()) self.assertEqual(name, '.'.join((__name__, 'CallableClass'))) def test_callable_class_call(self): name = reflection.get_callable_name(CallableClass().__call__) self.assertEqual(name, '.'.join((__name__, 'CallableClass', '__call__'))) class NotifierTest(test.TestCase): def test_notify_called(self): call_collector = [] def call_me(state, details): call_collector.append((state, details)) notifier = misc.TransitionNotifier() notifier.register(misc.TransitionNotifier.ANY, call_me) notifier.notify(states.SUCCESS, {}) notifier.notify(states.SUCCESS, {}) self.assertEqual(2, len(call_collector)) self.assertEqual(1, len(notifier)) def test_notify_register_deregister(self): def call_me(state, details): pass class A(object): def call_me_too(self, state, details): pass notifier = misc.TransitionNotifier() notifier.register(misc.TransitionNotifier.ANY, call_me) a = A() notifier.register(misc.TransitionNotifier.ANY, a.call_me_too) self.assertEqual(2, len(notifier)) notifier.deregister(misc.TransitionNotifier.ANY, call_me) notifier.deregister(misc.TransitionNotifier.ANY, a.call_me_too) self.assertEqual(0, len(notifier)) def test_notify_reset(self): def call_me(state, details): pass notifier = misc.TransitionNotifier() notifier.register(misc.TransitionNotifier.ANY, call_me) self.assertEqual(1, len(notifier)) notifier.reset() self.assertEqual(0, len(notifier)) def test_bad_notify(self): def call_me(state, details): pass notifier = misc.TransitionNotifier() self.assertRaises(KeyError, notifier.register, misc.TransitionNotifier.ANY, call_me, kwargs={'details': 5}) def test_selective_notify(self): call_counts = collections.defaultdict(list) def call_me_on(registered_state, state, details): call_counts[registered_state].append((state, details)) notifier = misc.TransitionNotifier() notifier.register(states.SUCCESS, functools.partial(call_me_on, states.SUCCESS)) notifier.register(misc.TransitionNotifier.ANY, functools.partial(call_me_on, misc.TransitionNotifier.ANY)) self.assertEqual(2, len(notifier)) notifier.notify(states.SUCCESS, {}) self.assertEqual(1, len(call_counts[misc.TransitionNotifier.ANY])) self.assertEqual(1, len(call_counts[states.SUCCESS])) notifier.notify(states.FAILURE, {}) self.assertEqual(2, len(call_counts[misc.TransitionNotifier.ANY])) self.assertEqual(1, len(call_counts[states.SUCCESS])) self.assertEqual(2, len(call_counts)) class GetCallableArgsTest(test.TestCase): def test_mere_function(self): result = reflection.get_callable_args(mere_function) self.assertEqual(['a', 'b'], result) def test_function_with_defaults(self): result = reflection.get_callable_args(function_with_defs) self.assertEqual(['a', 'b', 'optional'], result) def test_required_only(self): result = reflection.get_callable_args(function_with_defs, required_only=True) self.assertEqual(['a', 'b'], result) def test_method(self): result = reflection.get_callable_args(Class.method) self.assertEqual(['self', 'c', 'd'], result) def test_instance_method(self): result = reflection.get_callable_args(Class().method) self.assertEqual(['c', 'd'], result) def 
test_class_method(self):
        result = reflection.get_callable_args(Class.class_method)
        self.assertEqual(['g', 'h'], result)

    def test_class_constructor(self):
        result = reflection.get_callable_args(ClassWithInit)
        self.assertEqual(['k', 'l'], result)

    def test_class_with_call(self):
        result = reflection.get_callable_args(CallableClass())
        self.assertEqual(['i', 'j'], result)

    def test_decorators_work(self):

        @lock_utils.locked
        def locked_fun(x, y):
            pass

        result = reflection.get_callable_args(locked_fun)
        self.assertEqual(['x', 'y'], result)


class AcceptsKwargsTest(test.TestCase):

    def test_no_kwargs(self):
        self.assertEqual(
            reflection.accepts_kwargs(mere_function), False)

    def test_with_kwargs(self):
        self.assertEqual(
            reflection.accepts_kwargs(function_with_kwargs), True)


class GetClassNameTest(test.TestCase):

    def test_std_exception(self):
        name = reflection.get_class_name(RuntimeError)
        self.assertEqual(name, 'RuntimeError')

    def test_global_class(self):
        name = reflection.get_class_name(misc.Failure)
        self.assertEqual(name, 'taskflow.utils.misc.Failure')

    def test_class(self):
        name = reflection.get_class_name(Class)
        self.assertEqual(name, '.'.join((__name__, 'Class')))

    def test_instance(self):
        name = reflection.get_class_name(Class())
        self.assertEqual(name, '.'.join((__name__, 'Class')))

    def test_int(self):
        name = reflection.get_class_name(42)
        self.assertEqual(name, 'int')


class GetAllClassNamesTest(test.TestCase):

    def test_std_class(self):
        names = list(reflection.get_all_class_names(RuntimeError))
        self.assertEqual(names, test_utils.RUNTIME_ERROR_CLASSES)

    def test_std_class_up_to(self):
        names = list(reflection.get_all_class_names(RuntimeError,
                                                    up_to=Exception))
        self.assertEqual(names, test_utils.RUNTIME_ERROR_CLASSES[:-2])


class AttrDictTest(test.TestCase):
    def test_ok_create(self):
        attrs = {
            'a': 1,
            'b': 2,
        }
        obj = misc.AttrDict(**attrs)
        self.assertEqual(obj.a, 1)
        self.assertEqual(obj.b, 2)

    def test_private_create(self):
        attrs = {
            '_a': 1,
        }
        self.assertRaises(AttributeError, misc.AttrDict, **attrs)

    def test_invalid_create(self):
        attrs = {
            # Python attributes can't start with a number.
            '123_abc': 1,
        }
        self.assertRaises(AttributeError, misc.AttrDict, **attrs)

    def test_no_overwrite(self):
        attrs = {
            # Can't overwrite an attribute that already exists on a dict
            # (like the built-in 'update' method).
'update': 1, } self.assertRaises(AttributeError, misc.AttrDict, **attrs) def test_back_todict(self): attrs = { 'a': 1, } obj = misc.AttrDict(**attrs) self.assertEqual(obj.a, 1) self.assertEqual(attrs, dict(obj)) def test_runtime_invalid_set(self): def bad_assign(obj): obj._123 = 'b' attrs = { 'a': 1, } obj = misc.AttrDict(**attrs) self.assertEqual(obj.a, 1) self.assertRaises(AttributeError, bad_assign, obj) def test_bypass_get(self): attrs = { 'a': 1, } obj = misc.AttrDict(**attrs) self.assertEqual(1, obj['a']) def test_bypass_set_no_get(self): def bad_assign(obj): obj._b = 'e' attrs = { 'a': 1, } obj = misc.AttrDict(**attrs) self.assertEqual(1, obj['a']) obj['_b'] = 'c' self.assertRaises(AttributeError, bad_assign, obj) self.assertEqual('c', obj['_b']) class IsValidAttributeNameTestCase(test.TestCase): def test_a_is_ok(self): self.assertTrue(misc.is_valid_attribute_name('a')) def test_name_can_be_longer(self): self.assertTrue(misc.is_valid_attribute_name('foobarbaz')) def test_name_can_have_digits(self): self.assertTrue(misc.is_valid_attribute_name('fo12')) def test_name_cannot_start_with_digit(self): self.assertFalse(misc.is_valid_attribute_name('1z')) def test_hidden_names_are_forbidden(self): self.assertFalse(misc.is_valid_attribute_name('_z')) def test_hidden_names_can_be_allowed(self): self.assertTrue( misc.is_valid_attribute_name('_z', allow_hidden=True)) def test_self_is_forbidden(self): self.assertFalse(misc.is_valid_attribute_name('self')) def test_self_can_be_allowed(self): self.assertTrue( misc.is_valid_attribute_name('self', allow_self=True)) def test_no_unicode_please(self): self.assertFalse(misc.is_valid_attribute_name('mañana')) class StopWatchUtilsTest(test.TestCase): def test_no_states(self): watch = misc.StopWatch() self.assertRaises(RuntimeError, watch.stop) self.assertRaises(RuntimeError, watch.resume) def test_expiry(self): watch = misc.StopWatch(0.1) watch.start() time.sleep(0.2) self.assertTrue(watch.expired()) def test_no_expiry(self): watch = misc.StopWatch(0.1) watch.start() self.assertFalse(watch.expired()) def test_elapsed(self): watch = misc.StopWatch() watch.start() time.sleep(0.2) # NOTE(harlowja): Allow for a slight variation by using 0.19. 
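        # (This argument order assumes taskflow's test.TestCase provides
        # matcher-based assertGreater/assertGreaterEqual helpers that check
        # the *second* argument against the first, i.e. this asserts
        # watch.elapsed() >= 0.19; with stdlib unittest ordering the
        # arguments would read reversed.)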
        self.assertGreaterEqual(0.19, watch.elapsed())

    def test_pause_resume(self):
        watch = misc.StopWatch()
        watch.start()
        time.sleep(0.05)
        watch.stop()
        elapsed = watch.elapsed()
        time.sleep(0.05)
        self.assertAlmostEqual(elapsed, watch.elapsed())
        watch.resume()
        self.assertNotEqual(elapsed, watch.elapsed())

    def test_context_manager(self):
        with misc.StopWatch() as watch:
            time.sleep(0.05)
        self.assertGreater(0.01, watch.elapsed())


class ExcInfoUtilsTest(test.TestCase):
    def _make_ex_info(self):
        try:
            raise RuntimeError('Woot!')
        except Exception:
            return sys.exc_info()

    def test_copy_none(self):
        result = misc.copy_exc_info(None)
        self.assertIsNone(result)

    def test_copy_exc_info(self):
        exc_info = self._make_ex_info()
        result = misc.copy_exc_info(exc_info)
        self.assertIsNot(result, exc_info)
        self.assertIs(result[0], RuntimeError)
        self.assertIsNot(result[1], exc_info[1])
        self.assertIs(result[2], exc_info[2])

    def test_none_equals(self):
        self.assertTrue(misc.are_equal_exc_info_tuples(None, None))

    def test_none_ne_tuple(self):
        exc_info = self._make_ex_info()
        self.assertFalse(misc.are_equal_exc_info_tuples(None, exc_info))

    def test_tuple_ne_none(self):
        exc_info = self._make_ex_info()
        self.assertFalse(misc.are_equal_exc_info_tuples(exc_info, None))

    def test_tuple_equals_itself(self):
        exc_info = self._make_ex_info()
        self.assertTrue(misc.are_equal_exc_info_tuples(exc_info, exc_info))

    def test_tuple_equals_copy(self):
        exc_info = self._make_ex_info()
        copied = misc.copy_exc_info(exc_info)
        self.assertTrue(misc.are_equal_exc_info_tuples(exc_info, copied))
taskflow-0.1.3/taskflow/tests/unit/persistence/0000775000175300017540000000000012275003604023000 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/tests/unit/persistence/test_zk_persistence.py0000664000175300017540000000477712275003514027450 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2014 AT&T Labs All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
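# These tests probe for a live ZooKeeper server using the TEST_CONFIG
# below (localhost:2181 with a short timeout) and are skipped entirely
# when no server of at least MIN_ZK_VERSION is reachable, so they are
# safe to run in environments without ZooKeeper installed.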
import contextlib import testtools from taskflow.openstack.common import uuidutils from taskflow.persistence.backends import impl_zookeeper from taskflow import test from taskflow.tests.unit.persistence import base from taskflow.utils import kazoo_utils TEST_CONFIG = { 'timeout': 1.0, 'hosts': ["localhost:2181"], } TEST_PATH_TPL = '/taskflow/persistence-test/%s' def _zookeeper_available(): client = kazoo_utils.make_client(TEST_CONFIG) try: client.start() zk_ver = client.server_version() if zk_ver >= impl_zookeeper.MIN_ZK_VERSION: return True else: return False except Exception: return False finally: try: client.stop() client.close() except Exception: pass @testtools.skipIf(not _zookeeper_available(), 'zookeeper is not available') class ZkPersistenceTest(test.TestCase, base.PersistenceTestMixin): def _get_connection(self): return self.backend.get_connection() def _clear_all(self): with contextlib.closing(self._get_connection()) as conn: conn.clear_all() def setUp(self): super(ZkPersistenceTest, self).setUp() conf = TEST_CONFIG.copy() # Create a unique path just for this test (so that we don't overwrite # what other tests are doing). conf['path'] = TEST_PATH_TPL % (uuidutils.generate_uuid()) try: self.backend = impl_zookeeper.ZkBackend(conf) self.addCleanup(self.backend.close) except Exception as e: self.skipTest("Failed creating backend created from configuration" " %s due to %s" % (conf, e)) with contextlib.closing(self._get_connection()) as conn: conn.upgrade() self.addCleanup(self._clear_all) taskflow-0.1.3/taskflow/tests/unit/persistence/base.py0000664000175300017540000002267412275003514024277 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from taskflow import exceptions as exc from taskflow.openstack.common import uuidutils from taskflow.persistence import logbook from taskflow.utils import misc class PersistenceTestMixin(object): def _get_connection(self): raise NotImplementedError() def test_logbook_save_retrieve(self): lb_id = uuidutils.generate_uuid() lb_meta = {'1': 2} lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) lb.meta = lb_meta # Should not already exist with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.get_logbook, lb_id) conn.save_logbook(lb) # Make sure we can reload it (and all of its attributes are what # we expect them to be). 
        with contextlib.closing(self._get_connection()) as conn:
            lb = conn.get_logbook(lb_id)
            self.assertEqual(lb_name, lb.name)
            self.assertEqual(0, len(lb))
            self.assertEqual(lb_meta, lb.meta)
            self.assertIsNone(lb.updated_at)
            self.assertIsNotNone(lb.created_at)

    def test_flow_detail_save(self):
        lb_id = uuidutils.generate_uuid()
        lb_name = 'lb-%s' % (lb_id)
        lb = logbook.LogBook(name=lb_name, uuid=lb_id)
        fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
        lb.add(fd)
        # Ensure we can't save it since its owning logbook hasn't been
        # saved (flow details cannot exist on their own without a
        # connection to a logbook).
        with contextlib.closing(self._get_connection()) as conn:
            self.assertRaises(exc.NotFound, conn.get_logbook, lb_id)
            self.assertRaises(exc.NotFound, conn.update_flow_details, fd)
        # Ok now we should be able to save both.
        with contextlib.closing(self._get_connection()) as conn:
            conn.save_logbook(lb)
            conn.update_flow_details(fd)

    def test_flow_detail_meta_update(self):
        lb_id = uuidutils.generate_uuid()
        lb_name = 'lb-%s' % (lb_id)
        lb = logbook.LogBook(name=lb_name, uuid=lb_id)
        fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
        fd.meta = {'test': 42}
        lb.add(fd)
        with contextlib.closing(self._get_connection()) as conn:
            conn.save_logbook(lb)
            conn.update_flow_details(fd)
        fd.meta['test'] = 43
        with contextlib.closing(self._get_connection()) as conn:
            conn.update_flow_details(fd)
        with contextlib.closing(self._get_connection()) as conn:
            lb2 = conn.get_logbook(lb_id)
        fd2 = lb2.find(fd.uuid)
        self.assertEqual(fd2.meta.get('test'), 43)

    def test_task_detail_save(self):
        lb_id = uuidutils.generate_uuid()
        lb_name = 'lb-%s' % (lb_id)
        lb = logbook.LogBook(name=lb_name, uuid=lb_id)
        fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
        lb.add(fd)
        td = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid())
        fd.add(td)
        # Ensure we can't save it since its owning logbook hasn't been
        # saved (flow details/task details cannot exist on their own
        # without their parent existing).
        with contextlib.closing(self._get_connection()) as conn:
            self.assertRaises(exc.NotFound, conn.update_flow_details, fd)
            self.assertRaises(exc.NotFound, conn.update_task_details, td)
        # Ok now we should be able to save them.
with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_task_details(td) def test_task_detail_meta_update(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) td.meta = {'test': 42} fd.add(td) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_task_details(td) td.meta['test'] = 43 with contextlib.closing(self._get_connection()) as conn: conn.update_task_details(td) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) self.assertEqual(td2.meta.get('test'), 43) def test_task_detail_with_failure(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) try: raise RuntimeError('Woot!') except Exception: td.failure = misc.Failure() fd.add(td) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_task_details(td) # Read failure back with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) failure = td2.failure self.assertEqual(failure.exception_str, 'Woot!') self.assertIs(failure.check(RuntimeError), RuntimeError) self.assertEqual(failure.traceback_str, td.failure.traceback_str) def test_logbook_merge_flow_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) lb2 = logbook.LogBook(name=lb_name, uuid=lb_id) fd2 = logbook.FlowDetail('test2', uuid=uuidutils.generate_uuid()) lb2.add(fd2) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb2) with contextlib.closing(self._get_connection()) as conn: lb3 = conn.get_logbook(lb_id) self.assertEqual(2, len(lb3)) def test_logbook_add_flow_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertEqual(1, len(lb2)) self.assertEqual(1, len(lb)) self.assertEqual(fd.name, lb2.find(fd.uuid).name) def test_logbook_add_task_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid()) td = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) td.version = '4.2' fd.add(td) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertEqual(1, len(lb2)) tasks = 0 for fd in lb: tasks += len(fd) self.assertEqual(1, tasks) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) 
fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) self.assertIsNot(td2, None) self.assertEqual(td2.name, 'detail-1') self.assertEqual(td2.version, '4.2') def test_logbook_delete(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = logbook.LogBook(name=lb_name, uuid=lb_id) with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.destroy_logbook, lb_id) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertIsNotNone(lb2) with contextlib.closing(self._get_connection()) as conn: conn.destroy_logbook(lb_id) self.assertRaises(exc.NotFound, conn.destroy_logbook, lb_id) taskflow-0.1.3/taskflow/tests/unit/persistence/test_zake_persistence.py0000664000175300017540000000312712275003514027752 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2014 AT&T Labs All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from zake import fake_client from taskflow.persistence import backends from taskflow.persistence.backends import impl_zookeeper from taskflow import test from taskflow.tests.unit.persistence import base class ZakePersistenceTest(test.TestCase, base.PersistenceTestMixin): def _get_connection(self): return self._backend.get_connection() def setUp(self): super(ZakePersistenceTest, self).setUp() conf = { "path": "/taskflow", } client = fake_client.FakeClient() client.start() self._backend = impl_zookeeper.ZkBackend(conf, client=client) conn = self._backend.get_connection() conn.upgrade() def test_zk_persistence_entry_point(self): conf = {'connection': 'zookeeper:'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_zookeeper.ZkBackend) taskflow-0.1.3/taskflow/tests/unit/persistence/test_dir_persistence.py0000664000175300017540000000406312275003514027576 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
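# The directory backend stores logbook/flow/task details as files under a
# configurable path; these tests point it at a throwaway temporary
# directory. Fetching it through the entry-point mechanism looks like this
# (sketch, with a hypothetical path):
#
#     conf = {'connection': 'dir:', 'path': '/tmp/taskflow-test'}
#     backend = backends.fetch(conf)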
import contextlib import os import shutil import tempfile from taskflow.persistence import backends from taskflow.persistence.backends import impl_dir from taskflow import test from taskflow.tests.unit.persistence import base class DirPersistenceTest(test.TestCase, base.PersistenceTestMixin): def _get_connection(self): conf = { 'path': self.path, } return impl_dir.DirBackend(conf).get_connection() def setUp(self): super(DirPersistenceTest, self).setUp() self.path = tempfile.mkdtemp() conn = self._get_connection() conn.upgrade() def tearDown(self): super(DirPersistenceTest, self).tearDown() conn = self._get_connection() conn.clear_all() if self.path and os.path.isdir(self.path): shutil.rmtree(self.path) self.path = None def test_dir_persistence_entry_point(self): conf = { 'connection': 'dir:', 'path': self.path } backend = backends.fetch(conf) self.assertIsInstance(backend, impl_dir.DirBackend) backend.close() def test_file_persistence_entry_point(self): conf = { 'connection': 'file:', 'path': self.path } with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_dir.DirBackend) taskflow-0.1.3/taskflow/tests/unit/persistence/test_sql_persistence.py0000664000175300017540000002652312275003514027624 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import os import tempfile import threading import testtools # NOTE(harlowja): by default this will test against sqlite using a temporary # sqlite file (this is done instead of in-memory to ensure thread safety, # in-memory sqlite is not thread safe). # # There are also "opportunistic" tests for both mysql and postgresql in here, # which allows testing against all 3 databases (sqlite, mysql, postgres) in # a properly configured unit test environment. For the opportunistic testing # you need to set up a db named 'openstack_citest' with user 'openstack_citest' # and password 'openstack_citest' on localhost. USER = "openstack_citest" PASSWD = "openstack_citest" DATABASE = "openstack_citest" try: from taskflow.persistence.backends import impl_sqlalchemy import sqlalchemy as sa SQLALCHEMY_AVAILABLE = True except Exception: SQLALCHEMY_AVAILABLE = False # Testing will try to run against these two mysql library variants. MYSQL_VARIANTS = ('mysqldb', 'pymysql') from taskflow.persistence import backends from taskflow import test from taskflow.tests.unit.persistence import base from taskflow.utils import lock_utils def _get_connect_string(backend, user, passwd, database=None, variant=None): """Try to get a connection with a very specific set of values, if we get these then we'll run the tests, otherwise they are skipped. 
""" if backend == "postgres": if not variant: variant = 'psycopg2' backend = "postgresql+%s" % (variant) elif backend == "mysql": if not variant: variant = 'mysqldb' backend = "mysql+%s" % (variant) else: raise Exception("Unrecognized backend: '%s'" % backend) if not database: database = '' return "%s://%s:%s@localhost/%s" % (backend, user, passwd, database) def _mysql_exists(): if not SQLALCHEMY_AVAILABLE: return False for variant in MYSQL_VARIANTS: engine = None try: db_uri = _get_connect_string('mysql', USER, PASSWD, variant=variant) engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()): return True except Exception: pass finally: if engine is not None: try: engine.dispose() except Exception: pass return False def _postgres_exists(): if not SQLALCHEMY_AVAILABLE: return False engine = None try: db_uri = _get_connect_string('postgres', USER, PASSWD, 'template1') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()): return True except Exception: return False finally: if engine is not None: try: engine.dispose() except Exception: pass @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') class SqlitePersistenceTest(test.TestCase, base.PersistenceTestMixin): """Inherits from the base test and sets up a sqlite temporary db.""" def _get_connection(self): conf = { 'connection': self.db_uri, } return impl_sqlalchemy.SQLAlchemyBackend(conf).get_connection() def setUp(self): super(SqlitePersistenceTest, self).setUp() self.db_location = tempfile.mktemp(suffix='.db') self.db_uri = "sqlite:///%s" % (self.db_location) # Ensure upgraded to the right schema with contextlib.closing(self._get_connection()) as conn: conn.upgrade() def tearDown(self): super(SqlitePersistenceTest, self).tearDown() if self.db_location and os.path.isfile(self.db_location): os.unlink(self.db_location) self.db_location = None class BackendPersistenceTestMixin(base.PersistenceTestMixin): """Specifies a backend type and does required setup and teardown.""" LOCK_NAME = None def _get_connection(self): return self.backend.get_connection() def _reset_database(self): """Resets the database, and returns the uri to that database. Called *only* after locking succeeds. """ raise NotImplementedError() def setUp(self): super(BackendPersistenceTestMixin, self).setUp() self.backend = None self.big_lock.acquire() self.addCleanup(self.big_lock.release) try: conf = { 'connection': self._reset_database(), } except Exception as e: self.skipTest("Failed to reset your database;" " testing being skipped due to: %s" % (e)) try: self.backend = impl_sqlalchemy.SQLAlchemyBackend(conf) self.addCleanup(self.backend.close) with contextlib.closing(self._get_connection()) as conn: conn.upgrade() except Exception as e: self.skipTest("Failed to setup your database;" " testing being skipped due to: %s" % (e)) @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') @testtools.skipIf(not _mysql_exists(), 'mysql is not available') class MysqlPersistenceTest(BackendPersistenceTestMixin, test.TestCase): LOCK_NAME = 'mysql_persistence_test' def __init__(self, *args, **kwargs): test.TestCase.__init__(self, *args, **kwargs) # We need to make sure that each test goes through a set of locks # to ensure that multiple tests are not modifying the database, # dropping it, creating it at the same time. To accomplish this we use # a lock that ensures multiple parallel processes can't run at the # same time as well as a in-process lock to ensure that multiple # threads can't run at the same time. 
lock_path = os.path.join(tempfile.gettempdir(), 'taskflow-%s.lock' % (self.LOCK_NAME)) locks = [ lock_utils.InterProcessLock(lock_path), threading.RLock(), ] self.big_lock = lock_utils.MultiLock(locks) def _reset_database(self): working_variant = None for variant in MYSQL_VARIANTS: engine = None try: db_uri = _get_connect_string('mysql', USER, PASSWD, variant=variant) engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) conn.execute("CREATE DATABASE %s" % DATABASE) working_variant = variant except Exception: pass finally: if engine is not None: try: engine.dispose() except Exception: pass if working_variant: break if not working_variant: variants = ", ".join(MYSQL_VARIANTS) self.skipTest("Failed to find a mysql variant" " (tried %s) that works; mysql testing" " being skipped" % (variants)) else: return _get_connect_string('mysql', USER, PASSWD, database=DATABASE, variant=working_variant) @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') @testtools.skipIf(not _postgres_exists(), 'postgres is not available') class PostgresPersistenceTest(BackendPersistenceTestMixin, test.TestCase): LOCK_NAME = 'postgres_persistence_test' def __init__(self, *args, **kwargs): test.TestCase.__init__(self, *args, **kwargs) # We need to make sure that each test goes through a set of locks # to ensure that multiple tests are not modifying the database, # dropping it, creating it at the same time. To accomplish this we use # a lock that ensures multiple parallel processes can't run at the # same time as well as a in-process lock to ensure that multiple # threads can't run at the same time. lock_path = os.path.join(tempfile.gettempdir(), 'taskflow-%s.lock' % (self.LOCK_NAME)) locks = [ lock_utils.InterProcessLock(lock_path), threading.RLock(), ] self.big_lock = lock_utils.MultiLock(locks) def _reset_database(self): engine = None try: # Postgres can't operate on the database its connected to, thats # why we connect to the default template database 'template1' and # then drop and create the desired database. 
db_uri = _get_connect_string('postgres', USER, PASSWD, database='template1') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: conn.connection.set_isolation_level(0) conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) conn.connection.set_isolation_level(1) with contextlib.closing(engine.connect()) as conn: conn.connection.set_isolation_level(0) conn.execute("CREATE DATABASE %s" % DATABASE) conn.connection.set_isolation_level(1) finally: if engine is not None: try: engine.dispose() except Exception: pass return _get_connect_string('postgres', USER, PASSWD, database=DATABASE) @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') class SQLBackendFetchingTest(test.TestCase): def test_sqlite_persistence_entry_point(self): conf = {'connection': 'sqlite:///'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) @testtools.skipIf(not _postgres_exists(), 'postgres is not available') def test_mysql_persistence_entry_point(self): uri = "mysql://%s:%s@localhost/%s" % (USER, PASSWD, DATABASE) conf = {'connection': uri} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) @testtools.skipIf(not _mysql_exists(), 'mysql is not available') def test_postgres_persistence_entry_point(self): uri = "postgresql://%s:%s@localhost/%s" % (USER, PASSWD, DATABASE) conf = {'connection': uri} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) taskflow-0.1.3/taskflow/tests/unit/persistence/test_memory_persistence.py0000664000175300017540000000303512275003514030326 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from taskflow.persistence import backends from taskflow.persistence.backends import impl_memory from taskflow import test from taskflow.tests.unit.persistence import base class MemoryPersistenceTest(test.TestCase, base.PersistenceTestMixin): def setUp(self): super(MemoryPersistenceTest, self).setUp() self._backend = impl_memory.MemoryBackend({}) def _get_connection(self): return self._backend.get_connection() def tearDown(self): conn = self._get_connection() conn.clear_all() self._backend = None super(MemoryPersistenceTest, self).tearDown() def test_memory_persistence_entry_point(self): conf = {'connection': 'memory:'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_memory.MemoryBackend) taskflow-0.1.3/taskflow/tests/unit/persistence/__init__.py0000664000175300017540000000127512275003514025116 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/tests/unit/__init__.py0000664000175300017540000000127512275003514022572 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/tests/unit/test_green_executor.py0000664000175300017540000000662112275003514025110 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
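# GreenExecutor mirrors the concurrent.futures executor interface but runs
# callables on eventlet greenthreads; the whole test class is skipped when
# eventlet is not importable. Typical usage, as exercised below
# (some_callable stands in for any callable):
#
#     with eu.GreenExecutor(2) as executor:
#         future = executor.submit(some_callable)
#         result = future.result()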
import collections
import functools

import testtools

from taskflow import test
from taskflow.utils import eventlet_utils as eu


@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available')
class GreenExecutorTest(test.TestCase):
    def make_funcs(self, called, amount):

        def store_call(name):
            called[name] += 1

        for i in range(0, amount):
            yield functools.partial(store_call, name=int(i))

    def test_func_calls(self):
        called = collections.defaultdict(int)

        with eu.GreenExecutor(2) as e:
            for f in self.make_funcs(called, 2):
                e.submit(f)

        self.assertEqual(1, called[0])
        self.assertEqual(1, called[1])

    def test_no_construction(self):
        self.assertRaises(AssertionError, eu.GreenExecutor, 0)
        self.assertRaises(AssertionError, eu.GreenExecutor, -1)
        self.assertRaises(AssertionError, eu.GreenExecutor, "-1")

    def test_result_callback(self):
        called = collections.defaultdict(int)

        def call_back(future):
            called[future] += 1

        funcs = list(self.make_funcs(called, 1))
        with eu.GreenExecutor(2) as e:
            f = e.submit(funcs[0])
            f.add_done_callback(call_back)

        self.assertEqual(2, len(called))

    def test_exception_transfer(self):

        def blowup():
            raise IOError("Broke!")

        with eu.GreenExecutor(2) as e:
            f = e.submit(blowup)

        self.assertRaises(IOError, f.result)

    def test_result_transfer(self):

        def return_given(given):
            return given

        create_am = 50
        with eu.GreenExecutor(2) as e:
            fs = []
            for i in range(0, create_am):
                fs.append(e.submit(functools.partial(return_given, i)))

        self.assertEqual(create_am, len(fs))
        for i in range(0, create_am):
            result = fs[i].result()
            self.assertEqual(i, result)

    def test_func_cancellation(self):
        called = collections.defaultdict(int)

        fs = []
        with eu.GreenExecutor(2) as e:
            for func in self.make_funcs(called, 2):
                fs.append(e.submit(func))
            # Greenthreads don't start executing until we wait for them to;
            # since nothing here does IO, this will work out correctly.
            #
            # If something here did a blocking call, then eventlet could
            # swap one of the executor's threads in, but nothing in this
            # test does.
            for f in fs:
                self.assertFalse(f.running())
                f.cancel()

        self.assertEqual(0, len(called))
        for f in fs:
            self.assertTrue(f.cancelled())
            self.assertTrue(f.done())
taskflow-0.1.3/taskflow/tests/unit/test_flow_dependencies.py0000664000175300017540000003220312275003514025542 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
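# These tests check how flows compute their symbol dependencies: a task's
# 'requires' set comes from its execute() arguments (optionally remapped
# with rebind), its 'provides' set names its results, and containing flows
# aggregate both, raising DependencyFailure or InvariantViolation when
# symbols conflict. For example (a sketch using this module's helpers):
#
#     flow = lf.Flow('lf').add(
#         utils.TaskOneReturn('task1', provides='x'),
#         utils.TaskOneArg('task2'))  # consumes 'x' produced by task1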
from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import exceptions from taskflow import test from taskflow.tests import utils class FlowDependenciesTest(test.TestCase): def test_task_without_dependencies(self): flow = utils.TaskNoRequiresNoReturns() self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set()) def test_task_requires_default_values(self): flow = utils.TaskMultiArg() self.assertEqual(flow.requires, set(['x', 'y', 'z'])) self.assertEqual(flow.provides, set()) def test_task_requires_rebinded_mapped(self): flow = utils.TaskMultiArg(rebind={'x': 'a', 'y': 'b', 'z': 'c'}) self.assertEqual(flow.requires, set(['a', 'b', 'c'])) self.assertEqual(flow.provides, set()) def test_task_requires_additional_values(self): flow = utils.TaskMultiArg(requires=['a', 'b']) self.assertEqual(flow.requires, set(['a', 'b', 'x', 'y', 'z'])) self.assertEqual(flow.provides, set()) def test_task_provides_values(self): flow = utils.TaskMultiReturn(provides=['a', 'b', 'c']) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['a', 'b', 'c'])) def test_task_provides_and_requires_values(self): flow = utils.TaskMultiArgMultiReturn(provides=['a', 'b', 'c']) self.assertEqual(flow.requires, set(['x', 'y', 'z'])) self.assertEqual(flow.provides, set(['a', 'b', 'c'])) def test_linear_flow_without_dependencies(self): flow = lf.Flow('lf').add( utils.TaskNoRequiresNoReturns('task1'), utils.TaskNoRequiresNoReturns('task2')) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set()) def test_linear_flow_requires_values(self): flow = lf.Flow('lf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z'])) self.assertEqual(flow.provides, set()) def test_linear_flow_requires_rebind_values(self): flow = lf.Flow('lf').add( utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z', 'q'])) self.assertEqual(flow.provides, set()) def test_linear_flow_provides_values(self): flow = lf.Flow('lf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['x', 'a', 'b', 'c'])) def test_linear_flow_provides_out_of_order(self): flow = lf.Flow('lf') self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskOneArg('task2'), utils.TaskOneReturn('task1', provides='x')) def test_linear_flow_provides_required_values(self): flow = lf.Flow('lf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['x'])) def test_linear_flow_multi_provides_and_requires_values(self): flow = lf.Flow('lf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['x', 'y', 'q']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(flow.requires, set(['a', 'b', 'c', 'z'])) self.assertEqual(flow.provides, set(['x', 'y', 'q', 'i', 'j', 'k'])) def test_linear_flow_self_requires(self): flow = lf.Flow('lf') self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskNoRequiresNoReturns(rebind=['x'], provides='x')) def test_linear_flow_provides_same_values(self): flow = lf.Flow('lf').add(utils.TaskOneReturn(provides='x')) self.assertRaises(exceptions.DependencyFailure, flow.add, 
utils.TaskOneReturn(provides='x')) def test_linear_flow_provides_same_values_one_add(self): flow = lf.Flow('lf') self.assertRaises(exceptions.DependencyFailure, flow.add, utils.TaskOneReturn(provides='x'), utils.TaskOneReturn(provides='x')) def test_unordered_flow_without_dependencies(self): flow = uf.Flow('uf').add( utils.TaskNoRequiresNoReturns('task1'), utils.TaskNoRequiresNoReturns('task2')) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set()) def test_unordered_flow_self_requires(self): flow = uf.Flow('uf') self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskNoRequiresNoReturns(rebind=['x'], provides='x')) def test_unordered_flow_requires_values(self): flow = uf.Flow('uf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z'])) self.assertEqual(flow.provides, set()) def test_unordered_flow_requires_rebind_values(self): flow = uf.Flow('uf').add( utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z', 'q'])) self.assertEqual(flow.provides, set()) def test_unordered_flow_provides_values(self): flow = uf.Flow('uf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['x', 'a', 'b', 'c'])) def test_unordered_flow_provides_required_values(self): flow = uf.Flow('uf') self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) def test_unordered_flow_requires_provided_value_other_call(self): flow = uf.Flow('uf') flow.add(utils.TaskOneReturn('task1', provides='x')) self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskOneArg('task2')) def test_unordered_flow_provides_required_value_other_call(self): flow = uf.Flow('uf') flow.add(utils.TaskOneArg('task2')) self.assertRaises(exceptions.InvariantViolation, flow.add, utils.TaskOneReturn('task1', provides='x')) def test_unordered_flow_multi_provides_and_requires_values(self): flow = uf.Flow('uf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['d', 'e', 'f']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(flow.requires, set(['a', 'b', 'c', 'x', 'y', 'z'])) self.assertEqual(flow.provides, set(['d', 'e', 'f', 'i', 'j', 'k'])) def test_unordered_flow_provides_same_values(self): flow = uf.Flow('uf').add(utils.TaskOneReturn(provides='x')) self.assertRaises(exceptions.DependencyFailure, flow.add, utils.TaskOneReturn(provides='x')) def test_unordered_flow_provides_same_values_one_add(self): flow = uf.Flow('uf') self.assertRaises(exceptions.DependencyFailure, flow.add, utils.TaskOneReturn(provides='x'), utils.TaskOneReturn(provides='x')) def test_nested_flows_requirements(self): flow = uf.Flow('uf').add( lf.Flow('lf').add( utils.TaskOneArgOneReturn('task1', rebind=['a'], provides=['x']), utils.TaskOneArgOneReturn('task2', provides=['y'])), uf.Flow('uf').add( utils.TaskOneArgOneReturn('task3', rebind=['b'], provides=['z']), utils.TaskOneArgOneReturn('task4', rebind=['c'], provides=['q']))) self.assertEqual(flow.requires, set(['a', 'b', 'c'])) self.assertEqual(flow.provides, set(['x', 'y', 'z', 'q'])) def test_nested_flows_provides_same_values(self): flow = lf.Flow('lf').add( uf.Flow('uf').add(utils.TaskOneReturn(provides='x'))) self.assertRaises(exceptions.DependencyFailure, flow.add, 
gf.Flow('gf').add(utils.TaskOneReturn(provides='x'))) def test_graph_flow_without_dependencies(self): flow = gf.Flow('gf').add( utils.TaskNoRequiresNoReturns('task1'), utils.TaskNoRequiresNoReturns('task2')) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set()) def test_graph_flow_self_requires(self): flow = gf.Flow('g-1-req-error') self.assertRaisesRegexp(exceptions.DependencyFailure, '^No path', flow.add, utils.TaskOneArgOneReturn(requires=['a'], provides='a')) def test_graph_flow_requires_values(self): flow = gf.Flow('gf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z'])) self.assertEqual(flow.provides, set()) def test_graph_flow_requires_rebind_values(self): flow = gf.Flow('gf').add( utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(flow.requires, set(['x', 'y', 'z', 'q'])) self.assertEqual(flow.provides, set()) def test_graph_flow_provides_values(self): flow = gf.Flow('gf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['x', 'a', 'b', 'c'])) def test_graph_flow_provides_required_values(self): flow = gf.Flow('gf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) self.assertEqual(flow.requires, set()) self.assertEqual(flow.provides, set(['x'])) def test_graph_flow_provides_provided_value_other_call(self): flow = gf.Flow('gf') flow.add(utils.TaskOneReturn('task1', provides='x')) self.assertRaises(exceptions.DependencyFailure, flow.add, utils.TaskOneReturn('task2', provides='x')) def test_graph_flow_multi_provides_and_requires_values(self): flow = gf.Flow('gf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['d', 'e', 'f']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(flow.requires, set(['a', 'b', 'c', 'x', 'y', 'z'])) self.assertEqual(flow.provides, set(['d', 'e', 'f', 'i', 'j', 'k'])) def test_graph_cyclic_dependency(self): flow = gf.Flow('g-3-cyclic') self.assertRaisesRegexp(exceptions.DependencyFailure, '^No path', flow.add, utils.TaskOneArgOneReturn(provides='a', requires=['b']), utils.TaskOneArgOneReturn(provides='b', requires=['c']), utils.TaskOneArgOneReturn(provides='c', requires=['a'])) taskflow-0.1.3/taskflow/tests/unit/test_engine_helpers.py0000664000175300017540000001072112275003514025055 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
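# Engine helpers under test: flow_from_detail() re-imports a flow from the
# factory name/args recorded in a flow detail's metadata, while
# load_from_factory() runs a (re-importable) factory and records that
# metadata so the flow can later be reconstructed, e.g.:
#
#     engine = taskflow.engines.load_from_factory(
#         my_flow_factory, factory_kwargs={'task_name': 'test1'})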
import mock

from taskflow import test
from taskflow.tests import utils as test_utils
from taskflow.utils import persistence_utils as p_utils

import taskflow.engines


class FlowFromDetailTestCase(test.TestCase):
    def test_no_meta(self):
        _lb, flow_detail = p_utils.temporary_flow_detail()
        self.assertIs(flow_detail.meta, None)
        self.assertRaisesRegexp(ValueError,
                                '^Cannot .* no factory information saved.$',
                                taskflow.engines.flow_from_detail,
                                flow_detail)

    def test_no_factory_in_meta(self):
        _lb, flow_detail = p_utils.temporary_flow_detail()
        flow_detail.meta = {}
        self.assertRaisesRegexp(ValueError,
                                '^Cannot .* no factory information saved.$',
                                taskflow.engines.flow_from_detail,
                                flow_detail)

    def test_no_importable_function(self):
        _lb, flow_detail = p_utils.temporary_flow_detail()
        flow_detail.meta = dict(factory=dict(
            name='you can not import me, i contain spaces'
        ))
        self.assertRaisesRegexp(ImportError,
                                '^Could not import factory',
                                taskflow.engines.flow_from_detail,
                                flow_detail)

    def test_no_arg_factory(self):
        name = 'some.test.factory'
        _lb, flow_detail = p_utils.temporary_flow_detail()
        flow_detail.meta = dict(factory=dict(name=name))
        with mock.patch('taskflow.openstack.common.importutils.import_class',
                        return_value=lambda: 'RESULT') as mock_import:
            result = taskflow.engines.flow_from_detail(flow_detail)
            mock_import.assert_called_once_with(name)
        self.assertEqual(result, 'RESULT')

    def test_factory_with_arg(self):
        name = 'some.test.factory'
        _lb, flow_detail = p_utils.temporary_flow_detail()
        flow_detail.meta = dict(factory=dict(name=name, args=['foo']))
        with mock.patch('taskflow.openstack.common.importutils.import_class',
                        return_value=lambda x: 'RESULT %s' % x) as mock_import:
            result = taskflow.engines.flow_from_detail(flow_detail)
            mock_import.assert_called_once_with(name)
        self.assertEqual(result, 'RESULT foo')


def my_flow_factory(task_name):
    return test_utils.DummyTask(name=task_name)


class LoadFromFactoryTestCase(test.TestCase):

    def test_non_reimportable(self):
        def factory():
            pass

        self.assertRaisesRegexp(ValueError,
                                'Flow factory .* is not reimportable',
                                taskflow.engines.load_from_factory,
                                factory)

    def test_it_works(self):
        engine = taskflow.engines.load_from_factory(
            my_flow_factory, factory_kwargs={'task_name': 'test1'})
        self.assertIsInstance(engine._flow, test_utils.DummyTask)

        fd = engine.storage._flowdetail
        self.assertEqual(fd.name, 'test1')
        self.assertEqual(fd.meta.get('factory'), {
            'name': '%s.my_flow_factory' % __name__,
            'args': [],
            'kwargs': {'task_name': 'test1'},
        })

    def test_it_works_by_name(self):
        factory_name = '%s.my_flow_factory' % __name__
        engine = taskflow.engines.load_from_factory(
            factory_name, factory_kwargs={'task_name': 'test1'})
        self.assertIsInstance(engine._flow, test_utils.DummyTask)

        fd = engine.storage._flowdetail
        self.assertEqual(fd.name, 'test1')
        self.assertEqual(fd.meta.get('factory'), {
            'name': factory_name,
            'args': [],
            'kwargs': {'task_name': 'test1'},
        })
taskflow-0.1.3/taskflow/tests/unit/test_duration.py0000664000175300017540000000425712275003514023710 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import time from taskflow import task from taskflow import test import taskflow.engines from taskflow.listeners import timing from taskflow.patterns import linear_flow as lf from taskflow.persistence.backends import impl_memory from taskflow.utils import persistence_utils as p_utils class SleepyTask(task.Task): def __init__(self, name, sleep_for=0.0): super(SleepyTask, self).__init__(name=name) self._sleep_for = float(sleep_for) def execute(self): if self._sleep_for <= 0: return else: time.sleep(self._sleep_for) class TestDuration(test.TestCase): def make_engine(self, flow, flow_detail, backend): e = taskflow.engines.load(flow, flow_detail=flow_detail, backend=backend) e.compile() return e def test_duration(self): with contextlib.closing(impl_memory.MemoryBackend({})) as be: flo = lf.Flow("test") flo.add(SleepyTask("test-1", sleep_for=0.1)) (lb, fd) = p_utils.temporary_flow_detail(be) e = self.make_engine(flo, fd, be) with timing.TimingListener(e): e.run() t_uuid = e.storage.get_task_uuid("test-1") td = fd.find(t_uuid) self.assertIsNotNone(td) self.assertIsNotNone(td.meta) self.assertIn('duration', td.meta) self.assertGreaterEqual(0.1, td.meta['duration']) taskflow-0.1.3/taskflow/tests/unit/test_functor_task.py0000664000175300017540000000366512275003514024601 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
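# FunctorTask wraps a plain callable (plus an optional revert callable)
# into a task; its name defaults to the module-qualified function name
# unless one is given, e.g.:
#
#     t = base.FunctorTask(add)                  # name: '<module>.add'
#     t = base.FunctorTask(add, name='my task')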
import taskflow.engines from taskflow.patterns import linear_flow from taskflow import task as base from taskflow import test def add(a, b): return a + b class BunchOfFunctions(object): def __init__(self, values): self.values = values def run_one(self, *args, **kwargs): self.values.append('one') def revert_one(self, *args, **kwargs): self.values.append('revert one') def run_fail(self, *args, **kwargs): self.values.append('fail') raise RuntimeError('Woot!') class FunctorTaskTest(test.TestCase): def test_simple(self): task = base.FunctorTask(add) self.assertEqual(task.name, __name__ + '.add') def test_other_name(self): task = base.FunctorTask(add, name='my task') self.assertEqual(task.name, 'my task') def test_it_runs(self): values = [] bof = BunchOfFunctions(values) t = base.FunctorTask flow = linear_flow.Flow('test') flow.add( t(bof.run_one, revert=bof.revert_one), t(bof.run_fail) ) self.assertRaisesRegexp(RuntimeError, '^Woot', taskflow.engines.run, flow) self.assertEqual(values, ['one', 'fail', 'revert one']) taskflow-0.1.3/taskflow/tests/unit/test_utils_lock_utils.py0000664000175300017540000002235512275003514025464 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
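# The tests below hammer lock_utils.ReaderWriterLock from many threads. As
# a minimal sketch of the semantics being verified (illustrative only, not
# used by the tests): multiple readers may hold the lock at once, writers
# are exclusive, and the context managers release the lock even on error.
def _reader_writer_sketch():
    from taskflow.utils import lock_utils

    lock = lock_utils.ReaderWriterLock()
    with lock.read_lock():
        # Other threads could also be reading here; this one is a reader.
        assert lock.is_reader()
    with lock.write_lock():
        # Only a single thread at a time can be inside a write lock.
        assert lock.is_writer()
    # Once both blocks exit, nobody owns the lock anymore.
    assert not lock.owner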
import collections import threading import time from concurrent import futures from taskflow import test from taskflow.utils import lock_utils def _find_overlaps(times, start, end): overlaps = 0 for (s, e) in times: if s >= start and e <= end: overlaps += 1 return overlaps def _spawn_variation(readers, writers, max_workers=None): start_stops = collections.deque() lock = lock_utils.ReaderWriterLock() def read_func(): with lock.read_lock(): start_stops.append(('r', time.time(), time.time())) def write_func(): with lock.write_lock(): start_stops.append(('w', time.time(), time.time())) if max_workers is None: max_workers = max(0, readers) + max(0, writers) if max_workers > 0: with futures.ThreadPoolExecutor(max_workers=max_workers) as e: for i in range(0, readers): e.submit(read_func) for i in range(0, writers): e.submit(write_func) writer_times = [] reader_times = [] for (t, start, stop) in list(start_stops): if t == 'w': writer_times.append((start, stop)) else: reader_times.append((start, stop)) return (writer_times, reader_times) class ReadWriteLockTest(test.TestCase): def test_writer_abort(self): lock = lock_utils.ReaderWriterLock() self.assertFalse(lock.owner) def blow_up(): with lock.write_lock(): self.assertEqual(lock.WRITER, lock.owner) raise RuntimeError("Broken") self.assertRaises(RuntimeError, blow_up) self.assertFalse(lock.owner) def test_reader_abort(self): lock = lock_utils.ReaderWriterLock() self.assertFalse(lock.owner) def blow_up(): with lock.read_lock(): self.assertEqual(lock.READER, lock.owner) raise RuntimeError("Broken") self.assertRaises(RuntimeError, blow_up) self.assertFalse(lock.owner) def test_double_reader_abort(self): lock = lock_utils.ReaderWriterLock() activated = collections.deque() def double_bad_reader(): with lock.read_lock(): with lock.read_lock(): raise RuntimeError("Broken") def happy_writer(): with lock.write_lock(): activated.append(lock.owner) with futures.ThreadPoolExecutor(max_workers=20) as e: for i in range(0, 20): if i % 2 == 0: e.submit(double_bad_reader) else: e.submit(happy_writer) self.assertEqual(10, len([a for a in activated if a == 'w'])) def test_double_reader_writer(self): lock = lock_utils.ReaderWriterLock() activated = collections.deque() active = threading.Event() def double_reader(): with lock.read_lock(): active.set() while lock.pending_writers == 0: time.sleep(0.001) with lock.read_lock(): activated.append(lock.owner) def happy_writer(): with lock.write_lock(): activated.append(lock.owner) reader = threading.Thread(target=double_reader) reader.start() active.wait() writer = threading.Thread(target=happy_writer) writer.start() reader.join() writer.join() self.assertEqual(2, len(activated)) self.assertEqual(['r', 'w'], list(activated)) def test_reader_chaotic(self): lock = lock_utils.ReaderWriterLock() activated = collections.deque() def chaotic_reader(blow_up): with lock.read_lock(): if blow_up: raise RuntimeError("Broken") else: activated.append(lock.owner) def happy_writer(): with lock.write_lock(): activated.append(lock.owner) with futures.ThreadPoolExecutor(max_workers=20) as e: for i in range(0, 20): if i % 2 == 0: e.submit(chaotic_reader, blow_up=bool(i % 4 == 0)) else: e.submit(happy_writer) writers = [a for a in activated if a == 'w'] readers = [a for a in activated if a == 'r'] self.assertEqual(10, len(writers)) self.assertEqual(5, len(readers)) def test_writer_chaotic(self): lock = lock_utils.ReaderWriterLock() activated = collections.deque() def chaotic_writer(blow_up): with lock.write_lock(): if blow_up: raise 
RuntimeError("Broken") else: activated.append(lock.owner) def happy_reader(): with lock.read_lock(): activated.append(lock.owner) with futures.ThreadPoolExecutor(max_workers=20) as e: for i in range(0, 20): if i % 2 == 0: e.submit(chaotic_writer, blow_up=bool(i % 4 == 0)) else: e.submit(happy_reader) writers = [a for a in activated if a == 'w'] readers = [a for a in activated if a == 'r'] self.assertEqual(5, len(writers)) self.assertEqual(10, len(readers)) def test_single_reader_writer(self): results = [] lock = lock_utils.ReaderWriterLock() with lock.read_lock(): self.assertTrue(lock.is_reader()) self.assertEqual(0, len(results)) with lock.write_lock(): results.append(1) self.assertTrue(lock.is_writer()) with lock.read_lock(): self.assertTrue(lock.is_reader()) self.assertEqual(1, len(results)) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_writer()) def test_reader_to_writer(self): lock = lock_utils.ReaderWriterLock() def writer_func(): with lock.write_lock(): pass with lock.read_lock(): self.assertRaises(RuntimeError, writer_func) self.assertFalse(lock.is_writer()) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_writer()) def test_writer_to_reader(self): lock = lock_utils.ReaderWriterLock() def reader_func(): with lock.read_lock(): pass with lock.write_lock(): self.assertRaises(RuntimeError, reader_func) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_writer()) def test_double_writer(self): lock = lock_utils.ReaderWriterLock() with lock.write_lock(): self.assertFalse(lock.is_reader()) self.assertTrue(lock.is_writer()) with lock.write_lock(): self.assertTrue(lock.is_writer()) self.assertTrue(lock.is_writer()) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_writer()) def test_double_reader(self): lock = lock_utils.ReaderWriterLock() with lock.read_lock(): self.assertTrue(lock.is_reader()) self.assertFalse(lock.is_writer()) with lock.read_lock(): self.assertTrue(lock.is_reader()) self.assertTrue(lock.is_reader()) self.assertFalse(lock.is_reader()) self.assertFalse(lock.is_writer()) def test_multi_reader_multi_writer(self): writer_times, reader_times = _spawn_variation(10, 10) self.assertEqual(10, len(writer_times)) self.assertEqual(10, len(reader_times)) for (start, stop) in writer_times: self.assertEqual(0, _find_overlaps(reader_times, start, stop)) self.assertEqual(1, _find_overlaps(writer_times, start, stop)) for (start, stop) in reader_times: self.assertEqual(0, _find_overlaps(writer_times, start, stop)) def test_multi_reader_single_writer(self): writer_times, reader_times = _spawn_variation(9, 1) self.assertEqual(1, len(writer_times)) self.assertEqual(9, len(reader_times)) start, stop = writer_times[0] self.assertEqual(0, _find_overlaps(reader_times, start, stop)) def test_multi_writer(self): writer_times, reader_times = _spawn_variation(0, 10) self.assertEqual(10, len(writer_times)) self.assertEqual(0, len(reader_times)) for (start, stop) in writer_times: self.assertEqual(1, _find_overlaps(writer_times, start, stop)) taskflow-0.1.3/taskflow/tests/unit/test_flattening.py0000664000175300017540000001362212275003514024224 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import string import networkx as nx from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import test from taskflow.tests import utils as t_utils from taskflow.utils import flow_utils as f_utils from taskflow.utils import graph_utils as g_utils def _make_many(amount): assert amount <= len(string.ascii_lowercase), 'Not enough letters' tasks = [] for i in range(0, amount): tasks.append(t_utils.DummyTask(name=string.ascii_lowercase[i])) return tasks class FlattenTest(test.TestCase): def test_linear_flatten(self): a, b, c, d = _make_many(4) flo = lf.Flow("test") flo.add(a, b, c) sflo = lf.Flow("sub-test") sflo.add(d) flo.add(sflo) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) order = nx.topological_sort(g) self.assertEqual([a, b, c, d], order) self.assertTrue(g.has_edge(c, d)) self.assertEqual([d], list(g_utils.get_no_successors(g))) self.assertEqual([a], list(g_utils.get_no_predecessors(g))) def test_invalid_flatten(self): a, b, c = _make_many(3) flo = lf.Flow("test") flo.add(a, b, c) flo.add(flo) self.assertRaises(ValueError, f_utils.flatten, flo) def test_unordered_flatten(self): a, b, c, d = _make_many(4) flo = uf.Flow("test") flo.add(a, b, c, d) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) self.assertEqual(0, g.number_of_edges()) self.assertEqual(set([a, b, c, d]), set(g_utils.get_no_successors(g))) self.assertEqual(set([a, b, c, d]), set(g_utils.get_no_predecessors(g))) def test_linear_nested_flatten(self): a, b, c, d = _make_many(4) flo = lf.Flow("test") flo.add(a, b) flo2 = uf.Flow("test2") flo2.add(c, d) flo.add(flo2) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) lb = g.subgraph([a, b]) self.assertTrue(lb.has_edge(a, b)) self.assertFalse(lb.has_edge(b, a)) ub = g.subgraph([c, d]) self.assertEqual(0, ub.number_of_edges()) # This ensures that c and d do not start executing until after b. 
self.assertTrue(g.has_edge(b, c)) self.assertTrue(g.has_edge(b, d)) def test_unordered_nested_flatten(self): a, b, c, d = _make_many(4) flo = uf.Flow("test") flo.add(a, b) flo2 = lf.Flow("test2") flo2.add(c, d) flo.add(flo2) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) for n in [a, b]: self.assertFalse(g.has_edge(n, c)) self.assertFalse(g.has_edge(n, d)) self.assertTrue(g.has_edge(c, d)) self.assertFalse(g.has_edge(d, c)) ub = g.subgraph([a, b]) self.assertEqual(0, ub.number_of_edges()) lb = g.subgraph([c, d]) self.assertEqual(1, lb.number_of_edges()) def test_graph_flatten(self): a, b, c, d = _make_many(4) flo = gf.Flow("test") flo.add(a, b, c, d) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) self.assertEqual(0, g.number_of_edges()) def test_graph_flatten_nested(self): a, b, c, d, e, f, g = _make_many(7) flo = gf.Flow("test") flo.add(a, b, c, d) flo2 = lf.Flow('test2') flo2.add(e, f, g) flo.add(flo2) g = f_utils.flatten(flo) self.assertEqual(7, len(g)) self.assertEqual(2, g.number_of_edges()) def test_graph_flatten_nested_graph(self): a, b, c, d, e, f, g = _make_many(7) flo = gf.Flow("test") flo.add(a, b, c, d) flo2 = gf.Flow('test2') flo2.add(e, f, g) flo.add(flo2) g = f_utils.flatten(flo) self.assertEqual(7, len(g)) self.assertEqual(0, g.number_of_edges()) def test_graph_flatten_links(self): a, b, c, d = _make_many(4) flo = gf.Flow("test") flo.add(a, b, c, d) flo.link(a, b) flo.link(b, c) flo.link(c, d) g = f_utils.flatten(flo) self.assertEqual(4, len(g)) self.assertEqual(3, g.number_of_edges()) self.assertEqual(set([a]), set(g_utils.get_no_predecessors(g))) self.assertEqual(set([d]), set(g_utils.get_no_successors(g))) def test_flatten_checks_for_dups(self): flo = gf.Flow("test").add( t_utils.DummyTask(name="a"), t_utils.DummyTask(name="a") ) self.assertRaisesRegexp(exc.InvariantViolation, '^Tasks with duplicate names', f_utils.flatten, flo) def test_flatten_checks_for_dups_globally(self): flo = gf.Flow("test").add( gf.Flow("int1").add(t_utils.DummyTask(name="a")), gf.Flow("int2").add(t_utils.DummyTask(name="a"))) self.assertRaisesRegexp(exc.InvariantViolation, '^Tasks with duplicate names', f_utils.flatten, flo) taskflow-0.1.3/taskflow/tests/unit/test_action_engine.py0000664000175300017540000005330412275003514024674 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
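# The tests below drive flows through taskflow engines. As a small sketch
# of the basic lifecycle they exercise (illustrative; `EchoTask` and
# `_engine_lifecycle_sketch` are not part of this module): build a flow,
# load it into an engine, optionally register for state notifications,
# then run it.
def _engine_lifecycle_sketch():
    import taskflow.engines
    from taskflow.patterns import linear_flow
    from taskflow import task

    class EchoTask(task.Task):
        def execute(self):
            return self.name

    flow = linear_flow.Flow('engine-demo').add(EchoTask('step-1'),
                                               EchoTask('step-2'))
    engine = taskflow.engines.load(flow, engine_conf='serial')
    # Callbacks receive the new state plus keyword details; '*' subscribes
    # to all transitions (flow-level here, task-level via task_notifier).
    engine.notifier.register('*', lambda state, **kwargs: None)
    engine.task_notifier.register('*', lambda state, **kwargs: None)
    engine.run()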
import contextlib import networkx import testtools from concurrent import futures from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf import taskflow.engines from taskflow.engines.action_engine import engine as eng from taskflow import exceptions as exc from taskflow.persistence import logbook from taskflow import states from taskflow import task from taskflow import test from taskflow.tests import utils from taskflow.utils import eventlet_utils as eu from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils class EngineTaskTest(utils.EngineTestBase): def test_run_task_as_flow(self): flow = utils.SaveOrderTask(name='task1') engine = self._make_engine(flow) engine.run() self.assertEqual(self.values, ['task1']) @staticmethod def _callback(state, values, details): name = details.get('task_name', '') values.append('%s %s' % (name, state)) @staticmethod def _flow_callback(state, values, details): values.append('flow %s' % state) def test_run_task_with_notifications(self): flow = utils.SaveOrderTask(name='task1') engine = self._make_engine(flow) engine.notifier.register('*', self._flow_callback, kwargs={'values': self.values}) engine.task_notifier.register('*', self._callback, kwargs={'values': self.values}) engine.run() self.assertEqual(self.values, ['flow RUNNING', 'task1 RUNNING', 'task1', 'task1 SUCCESS', 'flow SUCCESS']) def test_failing_task_with_notifications(self): flow = utils.FailingTask('fail') engine = self._make_engine(flow) engine.notifier.register('*', self._flow_callback, kwargs={'values': self.values}) engine.task_notifier.register('*', self._callback, kwargs={'values': self.values}) expected = ['flow RUNNING', 'fail RUNNING', 'fail FAILURE', 'flow FAILURE', 'flow REVERTING', 'fail REVERTING', 'fail reverted(Failure: RuntimeError: Woot!)', 'fail REVERTED', 'flow REVERTED'] self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(self.values, expected) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) now_expected = expected + ['fail PENDING', 'flow PENDING'] + expected self.assertEqual(self.values, now_expected) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) def test_invalid_flow_raises(self): def compile_bad(value): engine = self._make_engine(value) engine.compile() value = 'i am string, not task/flow, sorry' err = self.assertRaises(TypeError, compile_bad, value) self.assertIn(value, str(err)) def test_invalid_flow_raises_from_run(self): def run_bad(value): engine = self._make_engine(value) engine.run() value = 'i am string, not task/flow, sorry' err = self.assertRaises(TypeError, run_bad, value) self.assertIn(value, str(err)) def test_nasty_failing_task_exception_reraised(self): flow = utils.NastyFailingTask() engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) class EngineLinearFlowTest(utils.EngineTestBase): def test_run_empty_flow(self): flow = lf.Flow('flow-1') engine = self._make_engine(flow) self.assertRaises(exc.EmptyFlow, engine.run) def test_sequential_flow_one_task(self): flow = lf.Flow('flow-1').add( utils.SaveOrderTask(name='task1') ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1']) def test_sequential_flow_two_tasks(self): flow = lf.Flow('flow-2').add( utils.SaveOrderTask(name='task1'), utils.SaveOrderTask(name='task2') ) self._make_engine(flow).run() 
self.assertEqual(self.values, ['task1', 'task2']) self.assertEqual(len(flow), 2) def test_revert_removes_data(self): flow = lf.Flow('revert-removes').add( utils.TaskOneReturn(provides='one'), utils.TaskMultiReturn(provides=('a', 'b', 'c')), utils.FailingTask(name='fail') ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(engine.storage.fetch_all(), {}) def test_sequential_flow_nested_blocks(self): flow = lf.Flow('nested-1').add( utils.SaveOrderTask('task1'), lf.Flow('inner-1').add( utils.SaveOrderTask('task2') ) ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1', 'task2']) def test_revert_exception_is_reraised(self): flow = lf.Flow('revert-1').add( utils.NastyTask(), utils.FailingTask(name='fail') ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) def test_revert_not_run_task_is_not_reverted(self): flow = lf.Flow('revert-not-run').add( utils.FailingTask('fail'), utils.NeverRunningTask(), ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual( self.values, ['fail reverted(Failure: RuntimeError: Woot!)']) def test_correctly_reverts_children(self): flow = lf.Flow('root-1').add( utils.SaveOrderTask('task1'), lf.Flow('child-1').add( utils.SaveOrderTask('task2'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual( self.values, ['task1', 'task2', 'fail reverted(Failure: RuntimeError: Woot!)', 'task2 reverted(5)', 'task1 reverted(5)']) def test_flow_failures_are_passed_to_revert(self): class CheckingTask(task.Task): def execute(m_self): return 'RESULT' def revert(m_self, result, flow_failures): self.assertEqual(result, 'RESULT') self.assertEqual(list(flow_failures.keys()), ['fail1']) fail = flow_failures['fail1'] self.assertIsInstance(fail, misc.Failure) self.assertEqual(str(fail), 'Failure: RuntimeError: Woot!') flow = lf.Flow('test').add( CheckingTask(), utils.FailingTask('fail1') ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) class EngineParallelFlowTest(utils.EngineTestBase): def test_run_empty_flow(self): flow = uf.Flow('p-1') engine = self._make_engine(flow) self.assertRaises(exc.EmptyFlow, engine.run) def test_parallel_flow_one_task(self): flow = uf.Flow('p-1').add( utils.SaveOrderTask(name='task1') ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1']) def test_parallel_flow_two_tasks(self): flow = uf.Flow('p-2').add( utils.SaveOrderTask(name='task1'), utils.SaveOrderTask(name='task2') ) self._make_engine(flow).run() result = set(self.values) self.assertEqual(result, set(['task1', 'task2'])) self.assertEqual(len(flow), 2) def test_parallel_revert(self): flow = uf.Flow('p-r-3').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.FailingTask(name='fail'), utils.TaskNoRequiresNoReturns(name='task2') ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertIn('fail reverted(Failure: RuntimeError: Woot!)', self.values) def test_parallel_revert_exception_is_reraised(self): # NOTE(imelnikov): if we put NastyTask and FailingTask # into the same unordered flow, it is not guaranteed # that NastyTask execution would be attempted before # FailingTask fails. 
flow = lf.Flow('p-r-r-l').add( uf.Flow('p-r-r').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.NastyTask() ), utils.FailingTask() ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) def test_sequential_flow_two_tasks_with_resumption(self): flow = lf.Flow('lf-2-r').add( utils.SaveOrderTask(name='task1', provides='x1'), utils.SaveOrderTask(name='task2', provides='x2') ) # Create FlowDetail as if we already run task1 _lb, fd = p_utils.temporary_flow_detail(self.backend) td = logbook.TaskDetail(name='task1', uuid='42') td.state = states.SUCCESS td.results = 17 fd.add(td) with contextlib.closing(self.backend.get_connection()) as conn: fd.update(conn.update_flow_details(fd)) td.update(conn.update_task_details(td)) engine = self._make_engine(flow, fd) engine.run() self.assertEqual(self.values, ['task2']) self.assertEqual(engine.storage.fetch_all(), {'x1': 17, 'x2': 5}) class EngineLinearAndUnorderedExceptionsTest(utils.EngineTestBase): def test_revert_ok_for_unordered_in_linear(self): flow = lf.Flow('p-root').add( utils.SaveOrderTask(name='task1'), utils.SaveOrderTask(name='task2'), uf.Flow('p-inner').add( utils.SaveOrderTask(name='task3'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) # NOTE(imelnikov): we don't know if task 3 was run, but if it was, # it should have been reverted in correct order. possible_values_no_task3 = [ 'task1', 'task2', 'fail reverted(Failure: RuntimeError: Woot!)', 'task2 reverted(5)', 'task1 reverted(5)' ] self.assertIsSuperAndSubsequence(self.values, possible_values_no_task3) if 'task3' in self.values: possible_values_task3 = [ 'task1', 'task2', 'task3', 'task3 reverted(5)', 'task2 reverted(5)', 'task1 reverted(5)' ] self.assertIsSuperAndSubsequence(self.values, possible_values_task3) def test_revert_raises_for_unordered_in_linear(self): flow = lf.Flow('p-root').add( utils.SaveOrderTask(name='task1'), utils.SaveOrderTask(name='task2'), uf.Flow('p-inner').add( utils.SaveOrderTask(name='task3'), utils.NastyFailingTask() ) ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) # NOTE(imelnikov): we don't know if task 3 was run, but if it was, # it should have been reverted in correct order. possible_values = ['task1', 'task2', 'task3', 'task3 reverted(5)'] self.assertIsSuperAndSubsequence(possible_values, self.values) possible_values_no_task3 = ['task1', 'task2'] self.assertIsSuperAndSubsequence(self.values, possible_values_no_task3) def test_revert_ok_for_linear_in_unordered(self): flow = uf.Flow('p-root').add( utils.SaveOrderTask(name='task1'), lf.Flow('p-inner').add( utils.SaveOrderTask(name='task2'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertIn('fail reverted(Failure: RuntimeError: Woot!)', self.values) # NOTE(imelnikov): if task1 was run, it should have been reverted. 
if 'task1' in self.values: task1_story = ['task1', 'task1 reverted(5)'] self.assertIsSuperAndSubsequence(self.values, task1_story) # NOTE(imelnikov): task2 should have been run and reverted task2_story = ['task2', 'task2 reverted(5)'] self.assertIsSuperAndSubsequence(self.values, task2_story) def test_revert_raises_for_linear_in_unordered(self): flow = uf.Flow('p-root').add( utils.SaveOrderTask(name='task1'), lf.Flow('p-inner').add( utils.SaveOrderTask(name='task2'), utils.NastyFailingTask() ) ) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) self.assertNotIn('task2 reverted(5)', self.values) class EngineGraphFlowTest(utils.EngineTestBase): def test_run_empty_flow(self): flow = gf.Flow('g-1') engine = self._make_engine(flow) self.assertRaises(exc.EmptyFlow, engine.run) def test_run_nested_empty_flows(self): flow = gf.Flow('g-1').add(lf.Flow('l-1'), gf.Flow('g-2')) engine = self._make_engine(flow) self.assertRaises(exc.EmptyFlow, engine.run) def test_graph_flow_one_task(self): flow = gf.Flow('g-1').add( utils.SaveOrderTask(name='task1') ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1']) def test_graph_flow_two_independent_tasks(self): flow = gf.Flow('g-2').add( utils.SaveOrderTask(name='task1'), utils.SaveOrderTask(name='task2') ) self._make_engine(flow).run() self.assertEqual(set(self.values), set(['task1', 'task2'])) self.assertEqual(len(flow), 2) def test_graph_flow_two_tasks(self): flow = gf.Flow('g-1-1').add( utils.SaveOrderTask(name='task2', requires=['a']), utils.SaveOrderTask(name='task1', provides='a') ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1', 'task2']) def test_graph_flow_four_tasks_added_separately(self): flow = (gf.Flow('g-4') .add(utils.SaveOrderTask(name='task4', provides='d', requires=['c'])) .add(utils.SaveOrderTask(name='task2', provides='b', requires=['a'])) .add(utils.SaveOrderTask(name='task3', provides='c', requires=['b'])) .add(utils.SaveOrderTask(name='task1', provides='a')) ) self._make_engine(flow).run() self.assertEqual(self.values, ['task1', 'task2', 'task3', 'task4']) def test_graph_flow_four_tasks_revert(self): flow = gf.Flow('g-4-failing').add( utils.SaveOrderTask(name='task4', provides='d', requires=['c']), utils.SaveOrderTask(name='task2', provides='b', requires=['a']), utils.FailingTask(name='task3', provides='c', requires=['b']), utils.SaveOrderTask(name='task1', provides='a')) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual( self.values, ['task1', 'task2', 'task3 reverted(Failure: RuntimeError: Woot!)', 'task2 reverted(5)', 'task1 reverted(5)']) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) def test_graph_flow_four_tasks_revert_failure(self): flow = gf.Flow('g-3-nasty').add( utils.NastyTask(name='task2', provides='b', requires=['a']), utils.FailingTask(name='task3', requires=['b']), utils.SaveOrderTask(name='task1', provides='a')) engine = self._make_engine(flow) self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) self.assertEqual(engine.storage.get_flow_state(), states.FAILURE) def test_graph_flow_with_multireturn_and_multiargs_tasks(self): flow = gf.Flow('g-3-multi').add( utils.TaskMultiArgOneReturn(name='task1', rebind=['a', 'b', 'y'], provides='z'), utils.TaskMultiReturn(name='task2', provides=['a', 'b', 'c']), utils.TaskMultiArgOneReturn(name='task3', rebind=['c', 'b', 'x'], provides='y')) engine = self._make_engine(flow) engine.storage.inject({'x': 30}) 
engine.run() self.assertEqual(engine.storage.fetch_all(), { 'a': 1, 'b': 3, 'c': 5, 'x': 30, 'y': 38, 'z': 42 }) def test_task_graph_property(self): flow = gf.Flow('test').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.TaskNoRequiresNoReturns(name='task2')) engine = self._make_engine(flow) graph = engine.execution_graph self.assertIsInstance(graph, networkx.DiGraph) def test_task_graph_property_for_one_task(self): flow = utils.TaskNoRequiresNoReturns(name='task1') engine = self._make_engine(flow) graph = engine.execution_graph self.assertIsInstance(graph, networkx.DiGraph) class SingleThreadedEngineTest(EngineTaskTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineGraphFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf='serial', backend=self.backend) def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.SingleThreadedActionEngine) def test_singlethreaded_is_the_default(self): engine = taskflow.engines.load(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.SingleThreadedActionEngine) class MultiThreadedEngineTest(EngineTaskTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineGraphFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): engine_conf = dict(engine='parallel', executor=executor) return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf=engine_conf, backend=self.backend) def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.MultiThreadedActionEngine) self.assertIs(engine._executor, None) def test_using_common_executor(self): flow = utils.TaskNoRequiresNoReturns(name='task1') executor = futures.ThreadPoolExecutor(2) try: e1 = self._make_engine(flow, executor=executor) e2 = self._make_engine(flow, executor=executor) self.assertIs(e1._executor, e2._executor) finally: executor.shutdown(wait=True) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(EngineTaskTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineGraphFlowTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = eu.GreenExecutor() engine_conf = dict(engine='parallel', executor=executor) return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf=engine_conf, backend=self.backend) taskflow-0.1.3/taskflow/tests/unit/test_arguments_passing.py0000664000175300017540000001235212275003514025621 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
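# The tests below cover how values move between tasks through the engine's
# storage. A condensed sketch of the mechanics (names illustrative, not
# part of this module): `provides` names a task's result, `rebind` remaps
# stored names onto execute() parameters, and a `store` given at load time
# seeds the initial values.
def _arguments_sketch():
    import taskflow.engines
    from taskflow import task

    class AddTask(task.Task):
        # Requires 'x' and 'y', extracted from execute()'s signature.
        def execute(self, x, y):
            return x + y

    # rebind={'y': 'offset'} feeds the stored value named 'offset' into
    # the 'y' parameter; the result is saved under the name 'total'.
    flow = AddTask(provides='total', rebind={'y': 'offset'})
    engine = taskflow.engines.load(flow, store={'x': 1, 'offset': 2})
    engine.run()
    # fetch_all() is expected to now include 'total': 3.
    return engine.storage.fetch_all()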
import taskflow.engines from taskflow import exceptions as exc from taskflow import test from taskflow.tests import utils class ArgumentsPassingTest(utils.EngineTestBase): def test_save_as(self): flow = utils.TaskOneReturn(name='task1', provides='first_data') engine = self._make_engine(flow) engine.run() self.assertEqual(engine.storage.fetch_all(), {'first_data': 1}) def test_save_all_in_one(self): flow = utils.TaskMultiReturn(provides='all_data') engine = self._make_engine(flow) engine.run() self.assertEqual(engine.storage.fetch_all(), {'all_data': (1, 3, 5)}) def test_save_several_values(self): flow = utils.TaskMultiReturn(provides=('badger', 'mushroom', 'snake')) engine = self._make_engine(flow) engine.run() self.assertEqual(engine.storage.fetch_all(), { 'badger': 1, 'mushroom': 3, 'snake': 5 }) def test_save_dict(self): flow = utils.TaskMultiDictk(provides=set(['badger', 'mushroom', 'snake'])) engine = self._make_engine(flow) engine.run() self.assertEqual(engine.storage.fetch_all(), { 'badger': 0, 'mushroom': 1, 'snake': 2, }) def test_bad_save_as_value(self): self.assertRaises(TypeError, utils.TaskOneReturn, name='task1', provides=object()) def test_arguments_passing(self): flow = utils.TaskMultiArgOneReturn(provides='result') engine = self._make_engine(flow) engine.storage.inject({'x': 1, 'y': 4, 'z': 9, 'a': 17}) engine.run() self.assertEqual(engine.storage.fetch_all(), { 'x': 1, 'y': 4, 'z': 9, 'a': 17, 'result': 14, }) def test_arguments_missing(self): flow = utils.TaskMultiArg() engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'b': 4, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_partial_arguments_mapping(self): flow = utils.TaskMultiArgOneReturn(provides='result', rebind={'x': 'a'}) engine = self._make_engine(flow) engine.storage.inject({'x': 1, 'y': 4, 'z': 9, 'a': 17}) engine.run() self.assertEqual(engine.storage.fetch_all(), { 'x': 1, 'y': 4, 'z': 9, 'a': 17, 'result': 30, }) def test_all_arguments_mapping(self): flow = utils.TaskMultiArgOneReturn(provides='result', rebind=['a', 'b', 'c']) engine = self._make_engine(flow) engine.storage.inject({ 'a': 1, 'b': 2, 'c': 3, 'x': 4, 'y': 5, 'z': 6 }) engine.run() self.assertEqual(engine.storage.fetch_all(), { 'a': 1, 'b': 2, 'c': 3, 'x': 4, 'y': 5, 'z': 6, 'result': 6, }) def test_invalid_argument_name_map(self): flow = utils.TaskMultiArg(rebind={'z': 'b'}) engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'y': 4, 'c': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_invalid_argument_name_list(self): flow = utils.TaskMultiArg(rebind=['a', 'z', 'b']) engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'b': 4, 'c': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_bad_rebind_args_value(self): self.assertRaises(TypeError, utils.TaskOneArg, rebind=object()) class SingleThreadedEngineTest(ArgumentsPassingTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf='serial', backend=self.backend) class MultiThreadedEngineTest(ArgumentsPassingTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): engine_conf = dict(engine='parallel', executor=executor) return taskflow.engines.load(flow, flow_detail=flow_detail, engine_conf=engine_conf, backend=self.backend) taskflow-0.1.3/taskflow/tests/unit/test_task.py0000664000175300017540000002225012275003514023030 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 
-*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from taskflow import task from taskflow import test from taskflow.utils import reflection class MyTask(task.Task): def execute(self, context, spam, eggs): pass class KwargsTask(task.Task): def execute(self, spam, **kwargs): pass class DefaultArgTask(task.Task): def execute(self, spam, eggs=()): pass class DefaultProvidesTask(task.Task): default_provides = 'def' def execute(self): return None class ProgressTask(task.Task): def execute(self, values, **kwargs): for value in values: self.update_progress(value) class TaskTest(test.TestCase): def test_passed_name(self): my_task = MyTask(name='my name') self.assertEqual(my_task.name, 'my name') def test_generated_name(self): my_task = MyTask() self.assertEqual(my_task.name, '%s.%s' % (__name__, 'MyTask')) def test_no_provides(self): my_task = MyTask() self.assertEqual(my_task.save_as, {}) def test_provides(self): my_task = MyTask(provides='food') self.assertEqual(my_task.save_as, {'food': None}) def test_multi_provides(self): my_task = MyTask(provides=('food', 'water')) self.assertEqual(my_task.save_as, {'food': 0, 'water': 1}) def test_unpack(self): my_task = MyTask(provides=('food',)) self.assertEqual(my_task.save_as, {'food': 0}) def test_bad_provides(self): self.assertRaisesRegexp(TypeError, '^Task provides', MyTask, provides=object()) def test_requires_by_default(self): my_task = MyTask() self.assertEqual(my_task.rebind, { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' }) def test_requires_amended(self): my_task = MyTask(requires=('spam', 'eggs')) self.assertEqual(my_task.rebind, { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' }) def test_requires_explicit(self): my_task = MyTask(auto_extract=False, requires=('spam', 'eggs', 'context')) self.assertEqual(my_task.rebind, { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' }) def test_requires_explicit_not_enough(self): self.assertRaisesRegexp(ValueError, '^Missing arguments', MyTask, auto_extract=False, requires=('spam', 'eggs')) def test_requires_ignores_optional(self): my_task = DefaultArgTask() self.assertEqual(my_task.requires, set(['spam'])) def test_requires_allows_optional(self): my_task = DefaultArgTask(requires=('spam', 'eggs')) self.assertEqual(my_task.requires, set(['spam', 'eggs'])) def test_rebind_all_args(self): my_task = MyTask(rebind={'spam': 'a', 'eggs': 'b', 'context': 'c'}) self.assertEqual(my_task.rebind, { 'spam': 'a', 'eggs': 'b', 'context': 'c' }) def test_rebind_partial(self): my_task = MyTask(rebind={'spam': 'a', 'eggs': 'b'}) self.assertEqual(my_task.rebind, { 'spam': 'a', 'eggs': 'b', 'context': 'context' }) def test_rebind_unknown(self): self.assertRaisesRegexp(ValueError, '^Extra arguments', MyTask, rebind={'foo': 'bar'}) def test_rebind_unknown_kwargs(self): task = KwargsTask(rebind={'foo': 'bar'}) self.assertEqual(task.rebind, { 'foo': 'bar', 'spam': 'spam' }) def test_rebind_list_all(self): my_task = 
MyTask(rebind=('a', 'b', 'c')) self.assertEqual(my_task.rebind, { 'context': 'a', 'spam': 'b', 'eggs': 'c' }) def test_rebind_list_partial(self): my_task = MyTask(rebind=('a', 'b')) self.assertEqual(my_task.rebind, { 'context': 'a', 'spam': 'b', 'eggs': 'eggs' }) def test_rebind_list_more(self): self.assertRaisesRegexp(ValueError, '^Extra arguments', MyTask, rebind=('a', 'b', 'c', 'd')) def test_rebind_list_more_kwargs(self): task = KwargsTask(rebind=('a', 'b', 'c')) self.assertEqual(task.rebind, { 'spam': 'a', 'b': 'b', 'c': 'c' }) def test_rebind_list_bad_value(self): self.assertRaisesRegexp(TypeError, '^Invalid rebind value:', MyTask, rebind=object()) def test_default_provides(self): task = DefaultProvidesTask() self.assertEqual(task.provides, set(['def'])) self.assertEqual(task.save_as, {'def': None}) def test_default_provides_can_be_overridden(self): task = DefaultProvidesTask(provides=('spam', 'eggs')) self.assertEqual(task.provides, set(['spam', 'eggs'])) self.assertEqual(task.save_as, {'spam': 0, 'eggs': 1}) def test_update_progress_within_bounds(self): values = [0.0, 0.5, 1.0] result = [] def progress_callback(task, event_data, progress): result.append(progress) task = ProgressTask() with task.autobind('update_progress', progress_callback): task.execute(values) self.assertEqual(result, values) @mock.patch.object(task.LOG, 'warn') def test_update_progress_lower_bound(self, mocked_warn): result = [] def progress_callback(task, event_data, progress): result.append(progress) task = ProgressTask() with task.autobind('update_progress', progress_callback): task.execute([-1.0, -0.5, 0.0]) self.assertEqual(result, [0.0, 0.0, 0.0]) self.assertEqual(mocked_warn.call_count, 2) @mock.patch.object(task.LOG, 'warn') def test_update_progress_upper_bound(self, mocked_warn): result = [] def progress_callback(task, event_data, progress): result.append(progress) task = ProgressTask() with task.autobind('update_progress', progress_callback): task.execute([1.0, 1.5, 2.0]) self.assertEqual(result, [1.0, 1.0, 1.0]) self.assertEqual(mocked_warn.call_count, 2) @mock.patch.object(task.LOG, 'exception') def test_update_progress_handler_failure(self, mocked_exception): def progress_callback(*args, **kwargs): raise Exception('Woot!') task = ProgressTask() with task.autobind('update_progress', progress_callback): task.execute([0.5]) mocked_exception.assert_called_once_with( mock.ANY, reflection.get_callable_name(progress_callback), 'update_progress') @mock.patch.object(task.LOG, 'exception') def test_autobind_non_existent_event(self, mocked_exception): event = 'test-event' handler = lambda: None task = MyTask() with task.autobind(event, handler): self.assertEqual(len(task._events_listeners), 0) mocked_exception.assert_called_once_with( mock.ANY, handler, event, task) def test_autobind_handler_is_none(self): task = MyTask() with task.autobind('update_progress', None): self.assertEqual(len(task._events_listeners), 0) def test_unbind_any_handler(self): task = MyTask() self.assertEqual(len(task._events_listeners), 0) task.bind('update_progress', lambda: None) self.assertEqual(len(task._events_listeners), 1) self.assertTrue(task.unbind('update_progress')) self.assertEqual(len(task._events_listeners), 0) def test_unbind_any_handler_empty_listeners(self): task = MyTask() self.assertEqual(len(task._events_listeners), 0) self.assertFalse(task.unbind('update_progress')) self.assertEqual(len(task._events_listeners), 0) def test_unbind_non_existent_listener(self): handler1 = lambda: None handler2 = lambda: None task = 
MyTask() task.bind('update_progress', handler1) self.assertEqual(len(task._events_listeners), 1) self.assertFalse(task.unbind('update_progress', handler2)) self.assertEqual(len(task._events_listeners), 1) class FunctorTaskTest(test.TestCase): def test_creation_with_version(self): version = (2, 0) f_task = task.FunctorTask(lambda: None, version=version) self.assertEqual(f_task.version, version) taskflow-0.1.3/taskflow/tests/unit/test_utils_async_utils.py0000664000175300017540000000637412275003514025654 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from concurrent import futures from taskflow import test from taskflow.utils import async_utils as au from taskflow.utils import eventlet_utils as eu class WaitForAnyTestsMixin(object): timeout = 0.001 def test_waits_and_finishes(self): def foo(): pass with self.executor_cls(2) as e: fs = [e.submit(foo), e.submit(foo)] # this test assumes that our foo will end within 10 seconds done, not_done = au.wait_for_any(fs, 10) self.assertIn(len(done), (1, 2)) self.assertTrue(any(f in done for f in fs)) def test_not_done_futures(self): fs = [futures.Future(), futures.Future()] done, not_done = au.wait_for_any(fs, self.timeout) self.assertEqual(len(done), 0) self.assertEqual(len(not_done), 2) def test_mixed_futures(self): f1 = futures.Future() f2 = futures.Future() f2.set_result(1) done, not_done = au.wait_for_any([f1, f2], self.timeout) self.assertEqual(len(done), 1) self.assertEqual(len(not_done), 1) self.assertIs(not_done.pop(), f1) self.assertIs(done.pop(), f2) class WaiterTestsMixin(object): def test_add_result(self): waiter = au._Waiter(self.is_green) self.assertFalse(waiter.event.is_set()) waiter.add_result(futures.Future()) self.assertTrue(waiter.event.is_set()) def test_add_exception(self): waiter = au._Waiter(self.is_green) self.assertFalse(waiter.event.is_set()) waiter.add_exception(futures.Future()) self.assertTrue(waiter.event.is_set()) def test_add_cancelled(self): waiter = au._Waiter(self.is_green) self.assertFalse(waiter.event.is_set()) waiter.add_cancelled(futures.Future()) self.assertTrue(waiter.event.is_set()) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class AsyncUtilsEventletTest(test.TestCase, WaitForAnyTestsMixin, WaiterTestsMixin): executor_cls = eu.GreenExecutor is_green = True class AsyncUtilsThreadedTest(test.TestCase, WaitForAnyTestsMixin, WaiterTestsMixin): executor_cls = futures.ThreadPoolExecutor is_green = False class MakeCompletedFutureTest(test.TestCase): def test_make_completed_future(self): result = object() future = au.make_completed_future(result) self.assertTrue(future.done()) self.assertIs(future.result(), result) taskflow-0.1.3/taskflow/tests/unit/test_graph_flow.py0000664000175300017540000002122112275003514024213 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 
2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import taskflow.engines from taskflow.patterns import graph_flow as gw from taskflow.utils import flow_utils as fu from taskflow.utils import graph_utils as gu from taskflow import test from taskflow.tests import utils class GraphFlowTest(test.TestCase): def _make_engine(self, flow): return taskflow.engines.load(flow, store={}) def _capture_states(self): # TODO(harlowja): move function to shared helper. capture_where = collections.defaultdict(list) def do_capture(state, details): task_uuid = details.get('task_uuid') if not task_uuid: return capture_where[task_uuid].append(state) return (do_capture, capture_where) def test_ordering(self): wf = gw.Flow("the-test-action") test_1 = utils.ProvidesRequiresTask('test-1', requires=[], provides=set(['a', 'b'])) test_2 = utils.ProvidesRequiresTask('test-2', provides=['c'], requires=['a', 'b']) test_3 = utils.ProvidesRequiresTask('test-3', provides=[], requires=['c']) wf.add(test_1, test_2, test_3) self.assertTrue(wf.graph.has_edge(test_1, test_2)) self.assertTrue(wf.graph.has_edge(test_2, test_3)) self.assertEqual(3, len(wf.graph)) self.assertEqual([test_1], list(gu.get_no_predecessors(wf.graph))) self.assertEqual([test_3], list(gu.get_no_successors(wf.graph))) def test_basic_edge_reasons(self): wf = gw.Flow("the-test-action") test_1 = utils.ProvidesRequiresTask('test-1', requires=[], provides=set(['a', 'b'])) test_2 = utils.ProvidesRequiresTask('test-2', provides=['c'], requires=['a', 'b']) wf.add(test_1, test_2) self.assertTrue(wf.graph.has_edge(test_1, test_2)) edge_attrs = gu.get_edge_attrs(wf.graph, test_1, test_2) self.assertTrue(len(edge_attrs) > 0) self.assertIn('reasons', edge_attrs) self.assertEqual(set(['a', 'b']), edge_attrs['reasons']) # 2 -> 1 should not be linked, and therefore have no attrs no_edge_attrs = gu.get_edge_attrs(wf.graph, test_2, test_1) self.assertFalse(no_edge_attrs) def test_linked_edge_reasons(self): wf = gw.Flow("the-test-action") test_1 = utils.ProvidesRequiresTask('test-1', requires=[], provides=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=[], requires=[]) wf.add(test_1, test_2) self.assertFalse(wf.graph.has_edge(test_1, test_2)) wf.link(test_1, test_2) self.assertTrue(wf.graph.has_edge(test_1, test_2)) edge_attrs = gu.get_edge_attrs(wf.graph, test_1, test_2) self.assertTrue(len(edge_attrs) > 0) self.assertTrue(edge_attrs.get('manual')) def test_flatten_attribute(self): wf = gw.Flow("the-test-action") test_1 = utils.ProvidesRequiresTask('test-1', requires=[], provides=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=[], requires=[]) wf.add(test_1, test_2) wf.link(test_1, test_2) g = fu.flatten(wf) self.assertEqual(2, len(g)) edge_attrs = gu.get_edge_attrs(g, test_1, test_2) self.assertTrue(edge_attrs.get('manual')) self.assertTrue(edge_attrs.get('flatten')) class TargetedGraphFlowTest(test.TestCase): def test_targeted_flow(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', 
provides=['a'], requires=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=['b'], requires=['a']) test_3 = utils.ProvidesRequiresTask('test-3', provides=[], requires=['b']) test_4 = utils.ProvidesRequiresTask('test-4', provides=[], requires=['b']) wf.add(test_1, test_2, test_3, test_4) wf.set_target(test_3) g = fu.flatten(wf) self.assertEqual(3, len(g)) self.assertFalse(g.has_node(test_4)) self.assertFalse('c' in wf.provides) def test_targeted_flow_reset(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=['a'], requires=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=['b'], requires=['a']) test_3 = utils.ProvidesRequiresTask('test-3', provides=[], requires=['b']) test_4 = utils.ProvidesRequiresTask('test-4', provides=['c'], requires=['b']) wf.add(test_1, test_2, test_3, test_4) wf.set_target(test_3) wf.reset_target() g = fu.flatten(wf) self.assertEqual(4, len(g)) self.assertTrue(g.has_node(test_4)) def test_targeted_flow_bad_target(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=['a'], requires=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=['b'], requires=['a']) wf.add(test_1) self.assertRaisesRegexp(ValueError, '^Item .* not found', wf.set_target, test_2) def test_targeted_flow_one_node(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=['a'], requires=[]) wf.add(test_1) wf.set_target(test_1) g = fu.flatten(wf) self.assertEqual(1, len(g)) self.assertTrue(g.has_node(test_1)) def test_recache_on_add(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=[], requires=['a']) wf.add(test_1) wf.set_target(test_1) self.assertEqual(1, len(wf.graph)) test_2 = utils.ProvidesRequiresTask('test-2', provides=['a'], requires=[]) wf.add(test_2) self.assertEqual(2, len(wf.graph)) def test_recache_on_add_no_deps(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=[], requires=[]) wf.add(test_1) wf.set_target(test_1) self.assertEqual(1, len(wf.graph)) test_2 = utils.ProvidesRequiresTask('test-2', provides=[], requires=[]) wf.add(test_2) self.assertEqual(1, len(wf.graph)) def test_recache_on_link(self): wf = gw.TargetedFlow("test") test_1 = utils.ProvidesRequiresTask('test-1', provides=[], requires=[]) test_2 = utils.ProvidesRequiresTask('test-2', provides=[], requires=[]) wf.add(test_1, test_2) wf.set_target(test_1) self.assertEqual(1, len(wf.graph)) wf.link(test_2, test_1) self.assertEqual(2, len(wf.graph)) self.assertEqual([(test_2, test_1)], list(wf.graph.edges())) taskflow-0.1.3/taskflow/tests/__init__.py0000664000175300017540000000127512275003514021613 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
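# A condensed sketch (task names illustrative) of the dependency inference
# the graph-flow tests above verify: graph_flow.Flow links tasks when one
# task's `provides` matches another's `requires`, and TargetedFlow prunes
# every task not needed to reach the chosen target.
def _targeted_flow_sketch():
    from taskflow.patterns import graph_flow
    from taskflow.tests import utils

    wf = graph_flow.TargetedFlow('targeted-demo')
    make_a = utils.ProvidesRequiresTask('make-a', provides=['a'], requires=[])
    use_a = utils.ProvidesRequiresTask('use-a', provides=[], requires=['a'])
    extra = utils.ProvidesRequiresTask('extra', provides=[], requires=['a'])
    wf.add(make_a, use_a, extra)
    wf.set_target(use_a)
    # Only make-a and use-a should remain in the flattened graph now.
    return wf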
taskflow-0.1.3/taskflow/tests/test_examples.py0000664000175300017540000000752012275003514022730 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Run examples as unit tests. This module executes examples as unit tests, thus ensuring they at least can be executed with the current taskflow. For examples with deterministic output, the output can be put into a file with the same name and a '.out.txt' extension; it will then be checked that the output did not change. When this module is used as the main module, output for all examples is generated. Please note that this will break tests, as the output of most examples is nondeterministic. """ import os import re import subprocess import sys import taskflow.test ROOT_DIR = os.path.abspath( os.path.dirname( os.path.dirname( os.path.dirname(__file__)))) def root_path(*args): return os.path.join(ROOT_DIR, *args) def run_example(name): path = root_path('taskflow', 'examples', '%s.py' % name) obj = subprocess.Popen([sys.executable, path], stdout=subprocess.PIPE, stderr=subprocess.PIPE) output = obj.communicate() stdout = output[0].decode() stderr = output[1].decode() rc = obj.wait() if rc != 0: raise RuntimeError('Example %s failed, return code=%s\n' '<<<begin captured stdout>>>\n%s' '<<<end captured stdout>>>\n' '<<<begin captured stderr>>>\n%s' '<<<end captured stderr>>>' % (name, rc, stdout, stderr)) return stdout def expected_output_path(name): return root_path('taskflow', 'examples', '%s.out.txt' % name) def list_examples(): examples_dir = root_path('taskflow', 'examples') for filename in os.listdir(examples_dir): name, ext = os.path.splitext(filename) if ext == ".py" and 'utils' not in name.lower(): yield name class ExamplesTestCase(taskflow.test.TestCase): maxDiff = None # sky's the limit uuid_re = re.compile('XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX' .replace('X', '[0-9a-f]')) @classmethod def update(cls): def add_test_method(name, method_name): def test_example(self): self._check_example(name) test_example.__name__ = method_name setattr(cls, method_name, test_example) for name in list_examples(): add_test_method(name, 'test_%s' % name) def _check_example(self, name): output = run_example(name) eop = expected_output_path(name) if os.path.isfile(eop): with open(eop) as f: expected_output = f.read() # NOTE(imelnikov): on each run new uuid is generated, so we just # replace them with some constant string output = self.uuid_re.sub('<SOME UUID>', output) expected_output = self.uuid_re.sub('<SOME UUID>', expected_output) self.assertEqual(output, expected_output) ExamplesTestCase.update() def make_output_files(): """Generate output files for all examples.""" for name in list_examples(): output = run_example(name) with open(expected_output_path(name), 'w') as f: f.write(output) if __name__ == '__main__': make_output_files() taskflow-0.1.3/taskflow/tests/utils.py0000664000175300017540000001247412275003514021217 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc.
All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import six from taskflow.persistence.backends import impl_memory from taskflow import task ARGS_KEY = '__args__' KWARGS_KEY = '__kwargs__' ORDER_KEY = '__order__' def make_reverting_task(token, blowup=False): def do_revert(context, *args, **kwargs): context[token] = 'reverted' if blowup: def blow_up(context, *args, **kwargs): raise RuntimeError("I blew up") return task.FunctorTask(blow_up, name='blowup_%s' % token) else: def do_apply(context, *args, **kwargs): context[token] = 'passed' return task.FunctorTask(do_apply, revert=do_revert, name='do_apply_%s' % token) class DummyTask(task.Task): def execute(self, context, *args, **kwargs): pass if six.PY3: RUNTIME_ERROR_CLASSES = ['RuntimeError', 'Exception', 'BaseException', 'object'] else: RUNTIME_ERROR_CLASSES = ['RuntimeError', 'StandardError', 'Exception', 'BaseException', 'object'] class ProvidesRequiresTask(task.Task): def __init__(self, name, provides, requires, return_tuple=True): super(ProvidesRequiresTask, self).__init__(name=name, provides=provides, requires=requires) self.return_tuple = isinstance(provides, (tuple, list)) def execute(self, *args, **kwargs): if self.return_tuple: return tuple(range(len(self.provides))) else: return dict((k, k) for k in self.provides) class SaveOrderTask(task.Task): def __init__(self, name=None, *args, **kwargs): super(SaveOrderTask, self).__init__(name=name, *args, **kwargs) self.values = EngineTestBase.values def execute(self, **kwargs): self.update_progress(0.0) self.values.append(self.name) self.update_progress(1.0) return 5 def revert(self, **kwargs): self.update_progress(0) self.values.append(self.name + ' reverted(%s)' % kwargs.get('result')) self.update_progress(1.0) class FailingTask(SaveOrderTask): def execute(self, **kwargs): self.update_progress(0) self.update_progress(0.99) raise RuntimeError('Woot!') class NastyTask(task.Task): def execute(self, **kwargs): pass def revert(self, **kwargs): raise RuntimeError('Gotcha!') class NastyFailingTask(NastyTask): def execute(self, **kwargs): raise RuntimeError('Woot!') class TaskNoRequiresNoReturns(task.Task): def execute(self, **kwargs): pass def revert(self, **kwargs): pass class TaskOneArg(task.Task): def execute(self, x, **kwargs): pass def revert(self, x, **kwargs): pass class TaskMultiArg(task.Task): def execute(self, x, y, z, **kwargs): pass def revert(self, x, y, z, **kwargs): pass class TaskOneReturn(task.Task): def execute(self, **kwargs): return 1 def revert(self, **kwargs): pass class TaskMultiReturn(task.Task): def execute(self, **kwargs): return 1, 3, 5 def revert(self, **kwargs): pass class TaskOneArgOneReturn(task.Task): def execute(self, x, **kwargs): return 1 def revert(self, x, **kwargs): pass class TaskMultiArgOneReturn(task.Task): def execute(self, x, y, z, **kwargs): return x + y + z def revert(self, x, y, z, **kwargs): pass class TaskMultiArgMultiReturn(task.Task): def execute(self, x, y, z, **kwargs): return 1, 3, 5 def revert(self, x, y, z, **kwargs): 
pass class TaskMultiDict(task.Task): def execute(self): output = {} for i, k in enumerate(sorted(self.provides)): output[k] = i return output class NeverRunningTask(task.Task): def execute(self, **kwargs): assert False, 'This method should not be called' def revert(self, **kwargs): assert False, 'This method should not be called' class EngineTestBase(object): values = None def setUp(self): super(EngineTestBase, self).setUp() EngineTestBase.values = [] self.backend = impl_memory.MemoryBackend(conf={}) def tearDown(self): EngineTestBase.values = None with contextlib.closing(self.backend) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() super(EngineTestBase, self).tearDown() def _make_engine(self, flow, flow_detail=None): raise NotImplementedError() taskflow-0.1.3/taskflow/persistence/0000775000175300017540000000000012275003604020657 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/persistence/logbook.py0000664000175300017540000001357212275003514022675 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import six from taskflow.openstack.common import uuidutils LOG = logging.getLogger(__name__) class LogBook(object): """This class contains a dict of flow detail entries for a given *job* so that the job can track what 'work' has been completed for resumption/reverting and miscellaneous tracking purposes. The data contained within this class need *not* be backed by the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save occurs via some backend connection. """ def __init__(self, name, uuid=None, updated_at=None, created_at=None): if uuid: self._uuid = uuid else: self._uuid = uuidutils.generate_uuid() self._name = name self._flowdetails_by_id = {} self._updated_at = updated_at self._created_at = created_at self.meta = None @property def created_at(self): return self._created_at @property def updated_at(self): return self._updated_at def add(self, fd): """Adds a new entry to the underlying logbook. Does not *guarantee* that the details will be immediately saved. """ self._flowdetails_by_id[fd.uuid] = fd def find(self, flow_uuid): return self._flowdetails_by_id.get(flow_uuid, None) @property def uuid(self): return self._uuid @property def name(self): return self._name def __iter__(self): for fd in six.itervalues(self._flowdetails_by_id): yield fd def __len__(self): return len(self._flowdetails_by_id) class FlowDetail(object): """This class contains a dict of task detail entries for a given flow along with any metadata associated with that flow. The data contained within this class need *not* be backed by the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save/update occurs via some backend connection.
""" def __init__(self, name, uuid): self._uuid = uuid self._name = name self._taskdetails_by_id = {} self.state = None # Any other metadata to include about this flow while storing. For # example timing information could be stored here, other misc. flow # related items (edge connections)... self.meta = None def update(self, fd): """Updates the objects state to be the same as the given one.""" if fd is self: return self._taskdetails_by_id = dict(fd._taskdetails_by_id) self.state = fd.state self.meta = fd.meta def add(self, td): self._taskdetails_by_id[td.uuid] = td def find(self, td_uuid): return self._taskdetails_by_id.get(td_uuid) @property def uuid(self): return self._uuid @property def name(self): return self._name def __iter__(self): for td in six.itervalues(self._taskdetails_by_id): yield td def __len__(self): return len(self._taskdetails_by_id) class TaskDetail(object): """This class contains an entry that contains the persistence of a task after or before (or during) it is running including any results it may have produced, any state that it may be in (failed for example), any exception that occurred when running and any associated stacktrace that may have occurring during that exception being thrown and any other metadata that should be stored along-side the details about this task. The data contained within this class need *not* backed by the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save/update occurs via some backend connection. """ def __init__(self, name, uuid): self._uuid = uuid self._name = name # TODO(harlowja): decide if these should be passed in and therefore # immutable or let them be assigned? # # The state the task was last in. self.state = None # The results it may have produced (useful for reverting). self.results = None # An Failure object that holds exception the task may have thrown # (or part of it), useful for knowing what failed. self.failure = None # Any other metadata to include about this task while storing. For # example timing information could be stored here, other misc. task # related items. self.meta = None # The version of the task this task details was associated with which # is quite useful for determining what versions of tasks this detail # information can be associated with. self.version = None def update(self, td): """Updates the objects state to be the same as the given one.""" if td is self: return self.state = td.state self.meta = td.meta self.failure = td.failure self.results = td.results self.version = td.version @property def uuid(self): return self._uuid @property def name(self): return self._name taskflow-0.1.3/taskflow/persistence/__init__.py0000664000175300017540000000131012275003514022763 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
taskflow-0.1.3/taskflow/persistence/backends/0000775000175300017540000000000012275003604022431 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/0000775000175300017540000000000012275003604024573 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/migration.py0000664000175300017540000000311112275003514027132 0ustar jenkinsjenkins00000000000000# vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" import os from alembic import config as a_config from alembic import environment as a_env from alembic import script as a_script def _alembic_config(): path = os.path.join(os.path.dirname(__file__), 'alembic', 'alembic.ini') return a_config.Config(path) def db_sync(connection, revision='head'): script = a_script.ScriptDirectory.from_config(_alembic_config()) def upgrade(rev, context): return script._upgrade_revs(revision, rev) config = _alembic_config() with a_env.EnvironmentContext(config, script, fn=upgrade, as_sql=False, starting_rev=None, destination_rev=revision, tag=None) as context: context.configure(connection=connection) context.run_migrations() taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/0000775000175300017540000000000012275003604026167 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/script.py.mako0000664000175300017540000000063412275003514030776 0ustar jenkinsjenkins00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. 
revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/README0000664000175300017540000000063312275003514027051 0ustar jenkinsjenkins00000000000000Please see https://alembic.readthedocs.org/en/latest/index.html for general documentation To create alembic migrations you need to have alembic installed and available in PATH: # pip install alembic $ cd ./taskflow/persistence/backends/sqlalchemy/alembic $ alembic revision -m "migration_description" See Operation Reference https://alembic.readthedocs.org/en/latest/ops.html#ops for a short list of commands taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/0000775000175300017540000000000012275003604030037 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/README0000664000175300017540000000004612275003514030717 0ustar jenkinsjenkins00000000000000Directory for alembic migration files ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logbook_deta.pytaskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logboo0000664000175300017540000001150012275003514034646 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """initial_logbook_details_tables Revision ID: 1cea328f0f65 Revises: None Create Date: 2013-08-23 11:41:49.207087 """ # revision identifiers, used by Alembic. revision = '1cea328f0f65' down_revision = None import logging from alembic import op import sqlalchemy as sa LOG = logging.getLogger(__name__) def _get_indexes(): # Ensure all uuids are indexed since they are what is typically looked # up and fetched, so attempt to ensure that that is done quickly. 
indexes = [ { 'name': 'logbook_uuid_idx', 'table_name': 'logbooks', 'columns': ['uuid'], }, { 'name': 'flowdetails_uuid_idx', 'table_name': 'flowdetails', 'columns': ['uuid'], }, { 'name': 'taskdetails_uuid_idx', 'table_name': 'taskdetails', 'columns': ['uuid'], }, ] return indexes def _get_foreign_keys(): f_keys = [ # Flow details uuid -> logbook parent uuid { 'name': 'flowdetails_ibfk_1', 'source': 'flowdetails', 'referent': 'logbooks', 'local_cols': ['parent_uuid'], 'remote_cols': ['uuid'], 'ondelete': 'CASCADE', }, # Task details uuid -> flow details parent uuid { 'name': 'taskdetails_ibfk_1', 'source': 'taskdetails', 'referent': 'flowdetails', 'local_cols': ['parent_uuid'], 'remote_cols': ['uuid'], 'ondelete': 'CASCADE', }, ] return f_keys def upgrade(): op.create_table('logbooks', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('meta', sa.Text(), nullable=True), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('uuid', sa.String(length=64), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table('flowdetails', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('parent_uuid', sa.String(length=64)), sa.Column('meta', sa.Text(), nullable=True), sa.Column('state', sa.String(length=255), nullable=True), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('uuid', sa.String(length=64), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table('taskdetails', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('parent_uuid', sa.String(length=64)), sa.Column('meta', sa.Text(), nullable=True), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('results', sa.Text(), nullable=True), sa.Column('version', sa.String(length=64), nullable=True), sa.Column('stacktrace', sa.Text(), nullable=True), sa.Column('exception', sa.Text(), nullable=True), sa.Column('state', sa.String(length=255), nullable=True), sa.Column('uuid', sa.String(length=64), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') try: for fkey_descriptor in _get_foreign_keys(): op.create_foreign_key(**fkey_descriptor) except NotImplementedError as e: LOG.warn("Foreign keys are not supported: %s", e) try: for index_descriptor in _get_indexes(): op.create_index(**index_descriptor) except NotImplementedError as e: LOG.warn("Indexes are not supported: %s", e) def downgrade(): for table in ['logbooks', 'flowdetails', 'taskdetails']: op.drop_table(table) ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_exception_an.pytaskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_except0000664000175300017540000000271212275003514034510 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Replace exception and stacktrace with failure column Revision ID: 1c783c0c2875 Revises: 1cea328f0f65 Create Date: 2013-09-26 12:33:30.970122 """ # revision identifiers, used by Alembic. revision = '1c783c0c2875' down_revision = '1cea328f0f65' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('taskdetails', sa.Column('failure', sa.Text(), nullable=True)) op.drop_column('taskdetails', 'exception') op.drop_column('taskdetails', 'stacktrace') def downgrade(): op.drop_column('taskdetails', 'failure') op.add_column('taskdetails', sa.Column('stacktrace', sa.Text(), nullable=True)) op.add_column('taskdetails', sa.Column('exception', sa.Text(), nullable=True)) taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/env.py0000664000175300017540000000470612275003514027340 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import with_statement from alembic import context from sqlalchemy import engine_from_config, pool # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel # target_metadata = mymodel.Base.metadata target_metadata = None # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ url = config.get_main_option("sqlalchemy.url") context.configure(url=url) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = engine_from_config(config.get_section(config.config_ini_section), prefix='sqlalchemy.', poolclass=pool.NullPool) connection = engine.connect() context.configure(connection=connection, target_metadata=target_metadata) try: with context.begin_transaction(): context.run_migrations() finally: connection.close() if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/alembic/alembic.ini0000664000175300017540000000064412275003514030270 0ustar jenkinsjenkins00000000000000# A generic, single database configuration. 
[alembic] # path to migration scripts script_location = %(here)s # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # This is set inside of migration script # sqlalchemy.url = driver://user:pass@localhost/dbname taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/models.py0000664000175300017540000000740112275003514026432 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column, String, DateTime from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import ForeignKey from sqlalchemy.orm import backref from sqlalchemy.orm import relationship from sqlalchemy import types as types from taskflow.openstack.common import jsonutils from taskflow.openstack.common import timeutils from taskflow.openstack.common import uuidutils from taskflow.utils import persistence_utils BASE = declarative_base() # TODO(harlowja): remove when oslo.db exists class TimestampMixin(object): created_at = Column(DateTime, default=timeutils.utcnow) updated_at = Column(DateTime, onupdate=timeutils.utcnow) class Json(types.TypeDecorator): impl = types.Text def process_bind_param(self, value, dialect): return jsonutils.dumps(value) def process_result_value(self, value, dialect): return jsonutils.loads(value) class Failure(types.TypeDecorator): """Put misc.Failure object into database column. We convert Failure object to dict, serialize that dict into JSON and save it. None is stored as NULL. The conversion is lossy since we cannot save exc_info. 
""" impl = types.Text def process_bind_param(self, value, dialect): if value is None: return None return jsonutils.dumps(persistence_utils.failure_to_dict(value)) def process_result_value(self, value, dialect): if value is None: return None return persistence_utils.failure_from_dict(jsonutils.loads(value)) class ModelBase(TimestampMixin): """Base model for all taskflow objects.""" uuid = Column(String, default=uuidutils.generate_uuid, primary_key=True, nullable=False, unique=True) name = Column(String, nullable=True) meta = Column(Json, nullable=True) class LogBook(BASE, ModelBase): """Represents a logbook for a set of flows.""" __tablename__ = 'logbooks' # Relationships flowdetails = relationship("FlowDetail", single_parent=True, backref=backref("logbooks", cascade="save-update, delete, " "merge")) class FlowDetail(BASE, ModelBase): __tablename__ = 'flowdetails' # Member variables state = Column(String) # Relationships parent_uuid = Column(String, ForeignKey('logbooks.uuid')) taskdetails = relationship("TaskDetail", single_parent=True, backref=backref("flowdetails", cascade="save-update, delete, " "merge")) class TaskDetail(BASE, ModelBase): __tablename__ = 'taskdetails' # Member variables state = Column(String) results = Column(Json) failure = Column(Failure) version = Column(String) # Relationships parent_uuid = Column(String, ForeignKey('flowdetails.uuid')) taskflow-0.1.3/taskflow/persistence/backends/sqlalchemy/__init__.py0000664000175300017540000000137412275003514026711 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/persistence/backends/base.py0000664000175300017540000000664112275003514023724 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import abc import six @six.add_metaclass(abc.ABCMeta) class Backend(object): """Base class for persistence backends.""" def __init__(self, conf): if not conf: conf = {} if not isinstance(conf, dict): raise TypeError("Configuration dictionary expected, not: %s" % type(conf)) self._conf = conf @abc.abstractmethod def get_connection(self): """Return a Connection instance based on the configuration settings.""" pass @abc.abstractmethod def close(self): """Closes any resources this backend has open.""" pass @six.add_metaclass(abc.ABCMeta) class Connection(object): """Base class for backend connections.""" @abc.abstractproperty def backend(self): """Returns the backend this connection is associated with.""" pass @abc.abstractmethod def close(self): """Closes any resources this connection has open.""" pass @abc.abstractmethod def upgrade(self): """Migrate the persistence backend to the most recent version.""" pass @abc.abstractmethod def clear_all(self): """Clear all entries from this backend.""" pass @abc.abstractmethod def validate(self): """Validates that a backend is still ok to be used (the semantics of this vary depending on the backend). On failure a backend-specific exception is raised that will indicate why the failure occurred. """ pass @abc.abstractmethod def update_task_details(self, task_detail): """Updates a given task details and returns the updated version. NOTE(harlowja): the details to be updated must already have been created by saving a flow detail with the given task detail inside of it. """ pass @abc.abstractmethod def update_flow_details(self, flow_detail): """Updates a given flow details and returns the updated version. NOTE(harlowja): the details to be updated must already have been created by saving a logbook with the given flow detail inside of it. """ pass @abc.abstractmethod def save_logbook(self, book): """Saves a logbook, and all its contained information.""" pass @abc.abstractmethod def destroy_logbook(self, book_uuid): """Deletes/destroys a logbook matching the given uuid.""" pass @abc.abstractmethod def get_logbook(self, book_uuid): """Fetches a logbook object matching the given uuid.""" pass @abc.abstractmethod def get_logbooks(self): """Return an iterable of logbook objects.""" pass taskflow-0.1.3/taskflow/persistence/backends/__init__.py0000664000175300017540000000277012275003514024550 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging from stevedore import driver from taskflow import exceptions as exc from taskflow.openstack.common.py3kcompat import urlutils # NOTE(harlowja): this is the entrypoint namespace, not the module namespace.
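# For example, a driver is typically registered against this namespace via a
# setup.cfg entrypoint (an illustrative sketch, not quoted from this
# package's setup.cfg):
#
#   [entry_points]
#   taskflow.persistence =
#       memory = taskflow.persistence.backends.impl_memory:MemoryBackend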
BACKEND_NAMESPACE = 'taskflow.persistence' LOG = logging.getLogger(__name__) def fetch(conf, namespace=BACKEND_NAMESPACE): backend_name = urlutils.urlparse(conf['connection']).scheme LOG.debug('Looking for %r backend driver in %r', backend_name, namespace) try: mgr = driver.DriverManager(namespace, backend_name, invoke_on_load=True, invoke_kwds={'conf': conf}) return mgr.driver except RuntimeError as e: raise exc.NotFound("Could not find backend %s: %s" % (backend_name, e)) taskflow-0.1.3/taskflow/persistence/backends/impl_sqlalchemy.py0000664000175300017540000005111412275003514026170 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of a SQLAlchemy storage backend.""" from __future__ import absolute_import import contextlib import copy import logging import time import sqlalchemy as sa from sqlalchemy import exc as sa_exc from sqlalchemy import orm as sa_orm from sqlalchemy import pool as sa_pool from taskflow import exceptions as exc from taskflow.persistence.backends import base from taskflow.persistence.backends.sqlalchemy import migration from taskflow.persistence.backends.sqlalchemy import models from taskflow.persistence import logbook from taskflow.utils import eventlet_utils from taskflow.utils import misc from taskflow.utils import persistence_utils LOG = logging.getLogger(__name__) # NOTE(harlowja): This is all very similar to what oslo-incubator uses but is # not based on using oslo.cfg and its global configuration (which should not be # used in libraries such as taskflow). # # TODO(harlowja): once oslo.db appears we should be able to use that instead # since it's not supposed to have any usage of oslo.cfg in it when it # materializes as a library. 
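# Example (illustrative sketch): obtaining this backend through the fetch()
# helper shown in taskflow.persistence.backends, which resolves the scheme of
# the 'connection' url to a driver (the 'sqlite' scheme mapping to this class
# is assumed to be wired up through the package entrypoints):
#
#   from taskflow.persistence import backends
#
#   backend = backends.fetch({'connection': 'sqlite://'})
#   with contextlib.closing(backend.get_connection()) as conn:
#       conn.upgrade()  # creates the logbook/flowdetail/taskdetail tables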
# See: http://dev.mysql.com/doc/refman/5.0/en/error-messages-client.html MY_SQL_CONN_ERRORS = ( # Lost connection to MySQL server at '%s', system error: %d '2006', # Can't connect to MySQL server on '%s' (%d) '2003', # Can't connect to local MySQL server through socket '%s' (%d) '2002', ) MY_SQL_GONE_WAY_AWAY_ERRORS = ( # Lost connection to MySQL server at '%s', system error: %d '2006', # Lost connection to MySQL server during query '2013', # Commands out of sync; you can't run this command now '2014', # Can't open shared memory; no answer from server (%lu) '2045', # Lost connection to MySQL server at '%s', system error: %d '2055', ) # See: http://www.postgresql.org/docs/9.1/static/errcodes-appendix.html POSTGRES_CONN_ERRORS = ( # connection_exception '08000', # connection_does_not_exist '08003', # connection_failure '08006', # sqlclient_unable_to_establish_sqlconnection '08001', # sqlserver_rejected_establishment_of_sqlconnection '08004', # Just couldn't connect (postgres errors are pretty weird) 'could not connect to server', ) POSTGRES_GONE_WAY_AWAY_ERRORS = ( # Server terminated while in progress (postgres errors are pretty weird). 'server closed the connection unexpectedly', 'terminating connection due to administrator command', ) # These connection urls mean sqlite is being used as an in-memory DB. SQLITE_IN_MEMORY = ('sqlite://', 'sqlite:///', 'sqlite:///:memory:') def _in_any(reason, err_haystack): """Checks if any elements of the haystack are in the given reason.""" for err in err_haystack: if reason.find(str(err)) != -1: return True return False def _is_db_connection_error(reason): return _in_any(reason, list(MY_SQL_CONN_ERRORS + POSTGRES_CONN_ERRORS)) def _thread_yield(dbapi_con, con_record): """Ensure other greenthreads get a chance to be executed. If we use eventlet.monkey_patch(), eventlet.greenthread.sleep(0) will execute instead of time.sleep(0). Force a context switch. With common database backends (eg MySQLdb and sqlite), there is no implicit yield caused by network I/O since they are implemented by C libraries that eventlet cannot monkey patch. """ time.sleep(0) def _set_mode_traditional(dbapi_con, con_record, connection_proxy): """Set engine mode to 'traditional'. Required to prevent silent truncates at insert or update operations under MySQL. By default MySQL truncates an inserted string if it is longer than the declared field, emitting only a warning, which is fraught with data corruption risks. """ dbapi_con.cursor().execute("SET SESSION sql_mode = TRADITIONAL;") def _ping_listener(dbapi_conn, connection_rec, connection_proxy): """Ensures that MySQL connections checked out of the pool are alive. Modified + borrowed from: http://bit.ly/14BYaW6.
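    On checkout a cheap 'select 1' is issued; when the server reports a
    "gone away" style error (see the error code tables above) a
    DisconnectionError is raised so that SQLAlchemy retires the dead
    connection instead of handing it to the caller.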
""" try: dbapi_conn.cursor().execute('select 1') except dbapi_conn.OperationalError as ex: if _in_any(str(ex.args[0]), MY_SQL_GONE_WAY_AWAY_ERRORS): LOG.warn('Got mysql server has gone away: %s', ex) raise sa_exc.DisconnectionError("Database server went away") elif _in_any(str(ex.args[0]), POSTGRES_GONE_WAY_AWAY_ERRORS): LOG.warn('Got postgres server has gone away: %s', ex) raise sa_exc.DisconnectionError("Database server went away") else: raise class SQLAlchemyBackend(base.Backend): def __init__(self, conf, engine=None): super(SQLAlchemyBackend, self).__init__(conf) if engine is not None: self._engine = engine self._owns_engine = False else: self._engine = None self._owns_engine = True self._session_maker = None self._validated = False def _create_engine(self): # NOTE(harlowja): copy the internal one so that we don't modify it via # all the popping that will happen below. conf = copy.deepcopy(self._conf) engine_args = { 'echo': misc.as_bool(conf.pop('echo', False)), 'convert_unicode': misc.as_bool(conf.pop('convert_unicode', True)), 'pool_recycle': 3600, } if 'idle_timeout' in conf: idle_timeout = misc.as_int(conf.pop('idle_timeout')) engine_args['pool_recycle'] = idle_timeout sql_connection = conf.pop('connection') e_url = sa.engine.url.make_url(sql_connection) if 'sqlite' in e_url.drivername: engine_args["poolclass"] = sa_pool.NullPool # Adjustments for in-memory sqlite usage. if sql_connection.lower().strip() in SQLITE_IN_MEMORY: engine_args["poolclass"] = sa_pool.StaticPool engine_args["connect_args"] = {'check_same_thread': False} else: for (k, lookup_key) in [('pool_size', 'max_pool_size'), ('max_overflow', 'max_overflow'), ('pool_timeout', 'pool_timeout')]: if lookup_key in conf: engine_args[k] = misc.as_int(conf.pop(lookup_key)) # If the configuration dict specifies any additional engine args # or engine arg overrides make sure we merge them in. engine_args.update(conf.pop('engine_args', {})) engine = sa.create_engine(sql_connection, **engine_args) checkin_yield = conf.pop('checkin_yield', eventlet_utils.EVENTLET_AVAILABLE) if misc.as_bool(checkin_yield): sa.event.listen(engine, 'checkin', _thread_yield) if 'mysql' in e_url.drivername: if misc.as_bool(conf.pop('checkout_ping', True)): sa.event.listen(engine, 'checkout', _ping_listener) if misc.as_bool(conf.pop('mysql_traditional_mode', True)): sa.event.listen(engine, 'checkout', _set_mode_traditional) return engine @property def engine(self): if self._engine is None: self._engine = self._create_engine() return self._engine def _get_session_maker(self): if self._session_maker is None: self._session_maker = sa_orm.sessionmaker(bind=self.engine, autocommit=True) return self._session_maker def get_connection(self): conn = Connection(self, self._get_session_maker()) if not self._validated: try: max_retries = misc.as_int(self._conf.get('max_retries', None)) except TypeError: max_retries = 0 conn.validate(max_retries=max_retries) self._validated = True return conn def close(self): if self._session_maker is not None: self._session_maker.close_all() self._session_maker = None if self._engine is not None and self._owns_engine: # NOTE(harlowja): Only dispose of the engine and clear it from # our local state if we actually own the engine in the first # place. If the user passed in their own engine we should not # be disposing it on their behalf (and we shouldn't be clearing # our local engine either, since then we would just recreate a # new engine if the engine property is accessed). 
self._engine.dispose() self._engine = None self._validated = False class Connection(base.Connection): def __init__(self, backend, session_maker): self._backend = backend self._session_maker = session_maker self._engine = backend.engine @property def backend(self): return self._backend def validate(self, max_retries=0): def test_connect(failures): try: # See if we can make a connection happen. # # NOTE(harlowja): note that even though we are connecting # once it does not mean that we will be able to connect in # the future, so this is more of a sanity test and is not # complete connection insurance. with contextlib.closing(self._engine.connect()): pass except sa_exc.OperationalError as ex: if _is_db_connection_error(str(ex.args[0])): failures.append(misc.Failure()) return False return True failures = [] if test_connect(failures): return # Sorry it didn't work out... if max_retries <= 0: failures[-1].reraise() # Go through the exponential backoff loop and see if we can connect # after a given number of backoffs (with a backoff sleeping period # between each attempt)... attempts_left = max_retries for sleepy_secs in misc.ExponentialBackoff(max_retries): LOG.warn("SQL connection failed due to '%s', %s attempts left.", failures[-1].exc, attempts_left) LOG.info("Attempting to test the connection again in %s seconds.", sleepy_secs) time.sleep(sleepy_secs) if test_connect(failures): return attempts_left -= 1 # Sorry it didn't work out... failures[-1].reraise() def _run_in_session(self, functor, *args, **kwargs): """Runs a function in a session and makes sure that sqlalchemy exceptions aren't emitted from that session's actions (as that would expose the underlying backend's exception model). """ try: session = self._make_session() with session.begin(): return functor(session, *args, **kwargs) except sa_exc.SQLAlchemyError as e: LOG.exception('Failed running database session') raise exc.StorageError("Failed running database session: %s" % e, e) def _make_session(self): try: return self._session_maker() except sa_exc.SQLAlchemyError as e: LOG.exception('Failed creating database session') raise exc.StorageError("Failed creating database session: %s" % e, e) def upgrade(self): try: with contextlib.closing(self._engine.connect()) as conn: # NOTE(imelnikov): Alembic does not support SQLite, # and we don't recommend using SQLite in production # deployments, so migrations are rarely needed # for SQLite. So we don't bother working around # SQLite limitations, and create the database from the # models when it is in use. if 'sqlite' in self._engine.url.drivername: models.BASE.metadata.create_all(conn) else: migration.db_sync(conn) except sa_exc.SQLAlchemyError as e: LOG.exception('Failed upgrading database version') raise exc.StorageError("Failed upgrading database version: %s" % e, e) def _clear_all(self, session): # NOTE(harlowja): due to how we have our relationship setup and # cascading deletes are enabled, this will cause all associated # task details and flow details to automatically be purged. try: return session.query(models.LogBook).delete() except sa_exc.DBAPIError as e: LOG.exception('Failed clearing all entries') raise exc.StorageError("Failed clearing all entries: %s" % e, e) def clear_all(self): return self._run_in_session(self._clear_all) def _update_task_details(self, session, td): # Must already exist since task details have a strong connection to # flow details, and task details cannot be saved on their own since # they *must* have a connection to an existing flow detail.
td_m = _task_details_get_model(td.uuid, session=session) td_m = _taskdetails_merge(td_m, td) td_m = session.merge(td_m) return _convert_td_to_external(td_m) def update_task_details(self, task_detail): return self._run_in_session(self._update_task_details, td=task_detail) def _update_flow_details(self, session, fd): # Must already exist since flow details have a strong connection to # a logbook, and flow details cannot be saved on their own since they # *must* have a connection to an existing logbook. fd_m = _flow_details_get_model(fd.uuid, session=session) fd_m = _flowdetails_merge(fd_m, fd) fd_m = session.merge(fd_m) return _convert_fd_to_external(fd_m) def update_flow_details(self, flow_detail): return self._run_in_session(self._update_flow_details, fd=flow_detail) def _destroy_logbook(self, session, lb_id): try: lb = _logbook_get_model(lb_id, session=session) session.delete(lb) except sa_exc.DBAPIError as e: LOG.exception('Failed destroying logbook') raise exc.StorageError("Failed destroying" " logbook %s: %s" % (lb_id, e), e) def destroy_logbook(self, book_uuid): return self._run_in_session(self._destroy_logbook, lb_id=book_uuid) def _save_logbook(self, session, lb): try: lb_m = _logbook_get_model(lb.uuid, session=session) # NOTE(harlowja): Merge them (note that this doesn't provide # 100% correct update semantics due to how databases have # MVCC). This is where a stored procedure or a better backing # store would handle this better by allowing this merge logic # to exist in the database itself. lb_m = _logbook_merge(lb_m, lb) except exc.NotFound: lb_m = _convert_lb_to_internal(lb) try: lb_m = session.merge(lb_m) return _convert_lb_to_external(lb_m) except sa_exc.DBAPIError as e: LOG.exception('Failed saving logbook') raise exc.StorageError("Failed saving logbook %s: %s" % (lb.uuid, e), e) def save_logbook(self, book): return self._run_in_session(self._save_logbook, lb=book) def get_logbook(self, book_uuid): session = self._make_session() try: lb = _logbook_get_model(book_uuid, session=session) return _convert_lb_to_external(lb) except sa_exc.DBAPIError as e: LOG.exception('Failed getting logbook') raise exc.StorageError("Failed getting logbook %s: %s" % (book_uuid, e), e) def get_logbooks(self): session = self._make_session() try: raw_books = session.query(models.LogBook).all() books = [_convert_lb_to_external(lb) for lb in raw_books] except sa_exc.DBAPIError as e: LOG.exception('Failed getting logbooks') raise exc.StorageError("Failed getting logbooks: %s" % e, e) for lb in books: yield lb def close(self): pass ### # Internal <-> external model + merging + other helper functions. ### def _convert_fd_to_external(fd): fd_c = logbook.FlowDetail(fd.name, uuid=fd.uuid) fd_c.meta = fd.meta fd_c.state = fd.state for td in fd.taskdetails: fd_c.add(_convert_td_to_external(td)) return fd_c def _convert_fd_to_internal(fd, parent_uuid): fd_m = models.FlowDetail(name=fd.name, uuid=fd.uuid, parent_uuid=parent_uuid, meta=fd.meta, state=fd.state) fd_m.taskdetails = [] for td in fd: fd_m.taskdetails.append(_convert_td_to_internal(td, fd_m.uuid)) return fd_m def _convert_td_to_internal(td, parent_uuid): return models.TaskDetail(name=td.name, uuid=td.uuid, state=td.state, results=td.results, failure=td.failure, meta=td.meta, version=td.version, parent_uuid=parent_uuid) def _convert_td_to_external(td): # Convert from sqlalchemy model -> external model; this allows us # to change the internal sqlalchemy model easily by forcing a defined # interface (that isn't the sqlalchemy model itself).
td_c = logbook.TaskDetail(td.name, uuid=td.uuid) td_c.state = td.state td_c.results = td.results td_c.failure = td.failure td_c.meta = td.meta td_c.version = td.version return td_c def _convert_lb_to_external(lb_m): """Don't expose the internal sqlalchemy ORM model to the external api.""" lb_c = logbook.LogBook(lb_m.name, lb_m.uuid, updated_at=lb_m.updated_at, created_at=lb_m.created_at) lb_c.meta = lb_m.meta for fd_m in lb_m.flowdetails: lb_c.add(_convert_fd_to_external(fd_m)) return lb_c def _convert_lb_to_internal(lb_c): """Don't expose the external model to the sqlalchemy ORM model.""" lb_m = models.LogBook(uuid=lb_c.uuid, meta=lb_c.meta, name=lb_c.name) lb_m.flowdetails = [] for fd_c in lb_c: lb_m.flowdetails.append(_convert_fd_to_internal(fd_c, lb_c.uuid)) return lb_m def _logbook_get_model(lb_id, session): entry = session.query(models.LogBook).filter_by(uuid=lb_id).first() if entry is None: raise exc.NotFound("No logbook found with id: %s" % lb_id) return entry def _flow_details_get_model(f_id, session): entry = session.query(models.FlowDetail).filter_by(uuid=f_id).first() if entry is None: raise exc.NotFound("No flow details found with id: %s" % f_id) return entry def _task_details_get_model(t_id, session): entry = session.query(models.TaskDetail).filter_by(uuid=t_id).first() if entry is None: raise exc.NotFound("No task details found with id: %s" % t_id) return entry def _logbook_merge(lb_m, lb): lb_m = persistence_utils.logbook_merge(lb_m, lb) for fd in lb: existing_fd = False for fd_m in lb_m.flowdetails: if fd_m.uuid == fd.uuid: existing_fd = True fd_m = _flowdetails_merge(fd_m, fd) if not existing_fd: lb_m.flowdetails.append(_convert_fd_to_internal(fd, lb_m.uuid)) return lb_m def _flowdetails_merge(fd_m, fd): fd_m = persistence_utils.flow_details_merge(fd_m, fd) for td in fd: existing_td = False for td_m in fd_m.taskdetails: if td_m.uuid == td.uuid: existing_td = True td_m = _taskdetails_merge(td_m, td) break if not existing_td: td_m = _convert_td_to_internal(td, fd_m.uuid) fd_m.taskdetails.append(td_m) return fd_m def _taskdetails_merge(td_m, td): return persistence_utils.task_details_merge(td_m, td) taskflow-0.1.3/taskflow/persistence/backends/impl_zookeeper.py0000664000175300017540000003776212275003514026046 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2014 AT&T Labs All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
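# Example (illustrative sketch): wiring up the zookeeper backend defined
# below directly, using the conf format documented on ZkBackend. This
# assumes a reachable zookeeper ensemble (>= 3.4.0, since transactions are
# required):
#
#   import contextlib
#
#   conf = {"hosts": "localhost:2181", "path": "/taskflow"}
#   backend = ZkBackend(conf)
#   with contextlib.closing(backend.get_connection()) as conn:
#       conn.upgrade()  # pre-creates the books/flow_details/task_details paths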
import contextlib import logging from kazoo import exceptions as k_exc from kazoo.protocol import paths from taskflow import exceptions as exc from taskflow.openstack.common import jsonutils from taskflow.persistence.backends import base from taskflow.persistence import logbook from taskflow.utils import kazoo_utils as k_utils from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils LOG = logging.getLogger(__name__) # Transaction support was added in 3.4.0 MIN_ZK_VERSION = (3, 4, 0) class ZkBackend(base.Backend): """ZooKeeper as backend storage implementation Example conf (use Kazoo): conf = { "hosts": "192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181", "path": "/taskflow", } """ def __init__(self, conf, client=None): super(ZkBackend, self).__init__(conf) path = str(conf.get("path", "/taskflow")) if not path: raise ValueError("Empty zookeeper path is disallowed") if not paths.isabs(path): raise ValueError("Zookeeper path must be absolute") self._path = path if client is not None: self._client = client self._owned = False else: self._client = k_utils.make_client(conf) self._owned = True self._validated = False @property def path(self): return self._path def get_connection(self): conn = ZkConnection(self, self._client) if not self._validated: conn.validate() self._validated = True return conn def close(self): self._validated = False if not self._owned: return try: self._client.stop() except (k_exc.KazooException, k_exc.ZookeeperError) as e: raise exc.StorageError("Unable to stop client: %s" % e) try: self._client.close() except TypeError: # NOTE(harlowja): https://github.com/python-zk/kazoo/issues/167 pass except (k_exc.KazooException, k_exc.ZookeeperError) as e: raise exc.StorageError("Unable to close client: %s" % e) class ZkConnection(base.Connection): def __init__(self, backend, client): self._backend = backend self._client = client self._book_path = paths.join(self._backend.path, "books") self._flow_path = paths.join(self._backend.path, "flow_details") self._task_path = paths.join(self._backend.path, "task_details") with self._exc_wrapper(): # NOOP if already started. self._client.start() def validate(self): with self._exc_wrapper(): zk_ver = self._client.server_version() if tuple(zk_ver) < MIN_ZK_VERSION: given_zk_ver = ".".join([str(a) for a in zk_ver]) desired_zk_ver = ".".join([str(a) for a in MIN_ZK_VERSION]) raise exc.StorageError("Incompatible zookeeper version" " %s detected, zookeeper >= %s required" % (given_zk_ver, desired_zk_ver)) @property def backend(self): return self._backend @property def book_path(self): return self._book_path @property def flow_path(self): return self._flow_path @property def task_path(self): return self._task_path def close(self): pass def upgrade(self): """Creates the initial paths (if they already don't exist).""" with self._exc_wrapper(): for path in (self.book_path, self.flow_path, self.task_path): self._client.ensure_path(path) @contextlib.contextmanager def _exc_wrapper(self): """Exception wrapper which wraps kazoo exceptions and groups them to taskflow exceptions. 
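        As implemented below: handler timeouts and expired sessions are
        translated to exc.ConnectionFailure, NoNodeError to exc.NotFound,
        NodeExistsError to exc.AlreadyExists, and any other kazoo/zookeeper
        error to exc.StorageError.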
""" try: yield except self._client.handler.timeout_exception as e: raise exc.ConnectionFailure("Storage backend timeout: %s" % e) except k_exc.SessionExpiredError as e: raise exc.ConnectionFailure("Storage backend session" " has expired: %s" % e) except k_exc.NoNodeError as e: raise exc.NotFound("Storage backend node not found: %s" % e) except k_exc.NodeExistsError as e: raise exc.AlreadyExists("Storage backend duplicate node: %s" % e) except (k_exc.KazooException, k_exc.ZookeeperError) as e: raise exc.StorageError("Storage backend internal error: %s" % e) def update_task_details(self, td): """Update a task_detail transactionally.""" with self._exc_wrapper(): with self._client.transaction() as txn: return self._update_task_details(td, txn) def _update_task_details(self, td, txn, create_missing=False): # Determine whether the desired data exists or not. td_path = paths.join(self.task_path, td.uuid) try: td_data, _zstat = self._client.get(td_path) except k_exc.NoNodeError: # Not-existent: create or raise exception. if create_missing: txn.create(td_path) e_td = logbook.TaskDetail(name=td.name, uuid=td.uuid) else: raise exc.NotFound("No task details found with id: %s" % td.uuid) else: # Existent: read it out. e_td = p_utils.unformat_task_detail(td.uuid, misc.decode_json(td_data)) # Update and write it back e_td = p_utils.task_details_merge(e_td, td) td_data = p_utils.format_task_detail(e_td) txn.set_data(td_path, misc.binary_encode(jsonutils.dumps(td_data))) return e_td def get_task_details(self, td_uuid): """Read a taskdetail. *Read-only*, so no need of zk transaction. """ with self._exc_wrapper(): return self._get_task_details(td_uuid) def _get_task_details(self, td_uuid): td_path = paths.join(self.task_path, td_uuid) try: td_data, _zstat = self._client.get(td_path) except k_exc.NoNodeError: raise exc.NotFound("No task details found with id: %s" % td_uuid) else: return p_utils.unformat_task_detail(td_uuid, misc.decode_json(td_data)) def update_flow_details(self, fd): """Update a flowdetail transactionally.""" with self._exc_wrapper(): with self._client.transaction() as txn: return self._update_flow_details(fd, txn) def _update_flow_details(self, fd, txn, create_missing=False): # Determine whether the desired data exists or not fd_path = paths.join(self.flow_path, fd.uuid) try: fd_data, _zstat = self._client.get(fd_path) except k_exc.NoNodeError: # Not-existent: create or raise exception if create_missing: txn.create(fd_path) e_fd = logbook.FlowDetail(name=fd.name, uuid=fd.uuid) else: raise exc.NotFound("No flow details found with id: %s" % fd.uuid) else: # Existent: read it out e_fd = p_utils.unformat_flow_detail(fd.uuid, misc.decode_json(fd_data)) # Update and write it back e_fd = p_utils.flow_details_merge(e_fd, fd) fd_data = p_utils.format_flow_detail(e_fd) txn.set_data(fd_path, misc.binary_encode(jsonutils.dumps(fd_data))) for td in fd: td_path = paths.join(fd_path, td.uuid) # NOTE(harlowja): create an entry in the flow detail path # for the provided task detail so that a reference exists # from the flow detail to its task details. if not self._client.exists(td_path): txn.create(td_path) e_fd.add(self._update_task_details(td, txn, create_missing=True)) return e_fd def get_flow_details(self, fd_uuid): """Read a flowdetail. *Read-only*, so no need of zk transaction. 
""" with self._exc_wrapper(): return self._get_flow_details(fd_uuid) def _get_flow_details(self, fd_uuid): fd_path = paths.join(self.flow_path, fd_uuid) try: fd_data, _zstat = self._client.get(fd_path) except k_exc.NoNodeError: raise exc.NotFound("No flow details found with id: %s" % fd_uuid) fd = p_utils.unformat_flow_detail(fd_uuid, misc.decode_json(fd_data)) for td_uuid in self._client.get_children(fd_path): fd.add(self._get_task_details(td_uuid)) return fd def save_logbook(self, lb): """Save (update) a log_book transactionally.""" def _create_logbook(lb_path, txn): lb_data = p_utils.format_logbook(lb, created_at=None) txn.create(lb_path, misc.binary_encode(jsonutils.dumps(lb_data))) for fd in lb: # NOTE(harlowja): create an entry in the logbook path # for the provided flow detail so that a reference exists # from the logbook to its flow details. txn.create(paths.join(lb_path, fd.uuid)) fd_path = paths.join(self.flow_path, fd.uuid) fd_data = jsonutils.dumps(p_utils.format_flow_detail(fd)) txn.create(fd_path, misc.binary_encode(fd_data)) for td in fd: # NOTE(harlowja): create an entry in the flow detail path # for the provided task detail so that a reference exists # from the flow detail to its task details. txn.create(paths.join(fd_path, td.uuid)) td_path = paths.join(self.task_path, td.uuid) td_data = jsonutils.dumps(p_utils.format_task_detail(td)) txn.create(td_path, misc.binary_encode(td_data)) return lb def _update_logbook(lb_path, lb_data, txn): e_lb = p_utils.unformat_logbook(lb.uuid, misc.decode_json(lb_data)) e_lb = p_utils.logbook_merge(e_lb, lb) lb_data = p_utils.format_logbook(e_lb, created_at=lb.created_at) txn.set_data(lb_path, misc.binary_encode(jsonutils.dumps(lb_data))) for fd in lb: fd_path = paths.join(lb_path, fd.uuid) if not self._client.exists(fd_path): # NOTE(harlowja): create an entry in the logbook path # for the provided flow detail so that a reference exists # from the logbook to its flow details. txn.create(fd_path) e_fd = self._update_flow_details(fd, txn, create_missing=True) e_lb.add(e_fd) return e_lb with self._exc_wrapper(): with self._client.transaction() as txn: # Determine whether the desired data exists or not. lb_path = paths.join(self.book_path, lb.uuid) try: lb_data, _zstat = self._client.get(lb_path) except k_exc.NoNodeError: # Create a new logbook since it doesn't exist. e_lb = _create_logbook(lb_path, txn) else: # Otherwise update the existing logbook instead. e_lb = _update_logbook(lb_path, lb_data, txn) # Finally return (updated) logbook. return e_lb def _get_logbook(self, lb_uuid): lb_path = paths.join(self.book_path, lb_uuid) try: lb_data, _zstat = self._client.get(lb_path) except k_exc.NoNodeError: raise exc.NotFound("No logbook found with id: %s" % lb_uuid) else: lb = p_utils.unformat_logbook(lb_uuid, misc.decode_json(lb_data)) for fd_uuid in self._client.get_children(lb_path): lb.add(self._get_flow_details(fd_uuid)) return lb def get_logbook(self, lb_uuid): """Read a logbook. *Read-only*, so no need of zk transaction. """ with self._exc_wrapper(): return self._get_logbook(lb_uuid) def get_logbooks(self): """Read all logbooks. *Read-only*, so no need of zk transaction. 
""" with self._exc_wrapper(): for lb_uuid in self._client.get_children(self.book_path): yield self._get_logbook(lb_uuid) def destroy_logbook(self, lb_uuid): """Detroy (delete) a log_book transactionally.""" def _destroy_task_details(td_uuid, txn): td_path = paths.join(self.task_path, td_uuid) if not self._client.exists(td_path): raise exc.NotFound("No task details found with id: %s" % td_uuid) txn.delete(td_path) def _destroy_flow_details(fd_uuid, txn): fd_path = paths.join(self.flow_path, fd_uuid) if not self._client.exists(fd_path): raise exc.NotFound("No flow details found with id: %s" % fd_uuid) for td_uuid in self._client.get_children(fd_path): _destroy_task_details(td_uuid, txn) txn.delete(paths.join(fd_path, td_uuid)) txn.delete(fd_path) def _destroy_logbook(lb_uuid, txn): lb_path = paths.join(self.book_path, lb_uuid) if not self._client.exists(lb_path): raise exc.NotFound("No logbook found with id: %s" % lb_uuid) for fd_uuid in self._client.get_children(lb_path): _destroy_flow_details(fd_uuid, txn) txn.delete(paths.join(lb_path, fd_uuid)) txn.delete(lb_path) with self._exc_wrapper(): with self._client.transaction() as txn: _destroy_logbook(lb_uuid, txn) def clear_all(self, delete_dirs=True): """Delete all data transactioanlly.""" with self._exc_wrapper(): with self._client.transaction() as txn: # Delete all data under logbook path. for lb_uuid in self._client.get_children(self.book_path): lb_path = paths.join(self.book_path, lb_uuid) for fd_uuid in self._client.get_children(lb_path): txn.delete(paths.join(lb_path, fd_uuid)) txn.delete(lb_path) # Delete all data under flowdetail path. for fd_uuid in self._client.get_children(self.flow_path): fd_path = paths.join(self.flow_path, fd_uuid) for td_uuid in self._client.get_children(fd_path): txn.delete(paths.join(fd_path, td_uuid)) txn.delete(fd_path) # Delete all data under taskdetail path. for td_uuid in self._client.get_children(self.task_path): td_path = paths.join(self.task_path, td_uuid) txn.delete(td_path) # Delete containing directories. if delete_dirs: txn.delete(self.book_path) txn.delete(self.task_path) txn.delete(self.flow_path) taskflow-0.1.3/taskflow/persistence/backends/impl_memory.py0000664000175300017540000001434712275003514025345 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Implementation of in-memory backend.""" import logging import threading from taskflow import exceptions as exc from taskflow.openstack.common import timeutils from taskflow.persistence.backends import base from taskflow.persistence import logbook from taskflow.utils import lock_utils from taskflow.utils import persistence_utils as p_utils LOG = logging.getLogger(__name__) class MemoryBackend(base.Backend): def __init__(self, conf): super(MemoryBackend, self).__init__(conf) self._log_books = {} self._flow_details = {} self._task_details = {} self._save_lock = threading.RLock() self._read_lock = threading.RLock() self._read_save_order = (self._read_lock, self._save_lock) @property def log_books(self): return self._log_books @property def flow_details(self): return self._flow_details @property def task_details(self): return self._task_details @property def read_locks(self): return (self._read_lock,) @property def save_locks(self): return self._read_save_order def get_connection(self): return Connection(self) def close(self): pass class Connection(base.Connection): def __init__(self, backend): self._read_locks = backend.read_locks self._save_locks = backend.save_locks self._backend = backend def upgrade(self): pass def validate(self): pass @property def backend(self): return self._backend def close(self): pass @lock_utils.locked(lock="_save_locks") def clear_all(self): count = 0 for uuid in list(self.backend.log_books.keys()): self.destroy_logbook(uuid) count += 1 return count @lock_utils.locked(lock="_save_locks") def destroy_logbook(self, book_uuid): try: # Do the same cascading delete that the sql layer does. lb = self.backend.log_books.pop(book_uuid) for fd in lb: self.backend.flow_details.pop(fd.uuid, None) for td in fd: self.backend.task_details.pop(td.uuid, None) except KeyError: raise exc.NotFound("No logbook found with id: %s" % book_uuid) @lock_utils.locked(lock="_save_locks") def update_task_details(self, task_detail): try: e_td = self.backend.task_details[task_detail.uuid] except KeyError: raise exc.NotFound("No task details found with id: %s" % task_detail.uuid) return p_utils.task_details_merge(e_td, task_detail, deep_copy=True) def _save_flowdetail_tasks(self, e_fd, flow_detail): for task_detail in flow_detail: e_td = e_fd.find(task_detail.uuid) if e_td is None: e_td = logbook.TaskDetail(name=task_detail.name, uuid=task_detail.uuid) e_fd.add(e_td) if task_detail.uuid not in self.backend.task_details: self.backend.task_details[task_detail.uuid] = e_td p_utils.task_details_merge(e_td, task_detail, deep_copy=True) @lock_utils.locked(lock="_save_locks") def update_flow_details(self, flow_detail): try: e_fd = self.backend.flow_details[flow_detail.uuid] except KeyError: raise exc.NotFound("No flow details found with id: %s" % flow_detail.uuid) p_utils.flow_details_merge(e_fd, flow_detail, deep_copy=True) self._save_flowdetail_tasks(e_fd, flow_detail) return e_fd @lock_utils.locked(lock="_save_locks") def save_logbook(self, book): # Get a existing logbook model (or create it if it isn't there). try: e_lb = self.backend.log_books[book.uuid] except KeyError: e_lb = logbook.LogBook(book.name, book.uuid, updated_at=book.updated_at, created_at=timeutils.utcnow()) self.backend.log_books[e_lb.uuid] = e_lb else: # TODO(harlowja): figure out a better way to set this property # without actually setting a 'private' property. e_lb._updated_at = timeutils.utcnow() p_utils.logbook_merge(e_lb, book, deep_copy=True) # Add anything in to the new logbook that isn't already in the existing # logbook. 
        for flow_detail in book:
            try:
                e_fd = self.backend.flow_details[flow_detail.uuid]
            except KeyError:
                e_fd = logbook.FlowDetail(name=flow_detail.name,
                                          uuid=flow_detail.uuid)
                e_lb.add(flow_detail)
                self.backend.flow_details[flow_detail.uuid] = e_fd
            p_utils.flow_details_merge(e_fd, flow_detail, deep_copy=True)
            self._save_flowdetail_tasks(e_fd, flow_detail)
        return e_lb

    @lock_utils.locked(lock='_read_locks')
    def get_logbook(self, book_uuid):
        try:
            return self.backend.log_books[book_uuid]
        except KeyError:
            raise exc.NotFound("No logbook found with id: %s" % book_uuid)

    @lock_utils.locked(lock='_read_locks')
    def _get_logbooks(self):
        return list(self.backend.log_books.values())

    def get_logbooks(self):
        # NOTE(harlowja): don't hold the lock while iterating.
        for lb in self._get_logbooks():
            yield lb
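

# NOTE: Illustrative usage sketch appended for documentation purposes only
# (not part of the original module); passing an empty conf dict to
# MemoryBackend is an assumption.
def _example_usage():  # pragma: no cover
    import contextlib

    backend = MemoryBackend({})
    with contextlib.closing(backend.get_connection()) as conn:
        lb = logbook.LogBook("example-book")
        conn.save_logbook(lb)
        assert conn.get_logbook(lb.uuid) is not None
        conn.clear_all()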
taskflow-0.1.3/taskflow/persistence/backends/impl_dir.py0000664000175300017540000003674712275003514024613 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#    Copyright (C) 2013 Rackspace Hosting All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import errno
import logging
import os
import shutil
import threading
import weakref

import six

from taskflow import exceptions as exc
from taskflow.openstack.common import jsonutils
from taskflow.persistence.backends import base
from taskflow.utils import lock_utils
from taskflow.utils import misc
from taskflow.utils import persistence_utils as p_utils

LOG = logging.getLogger(__name__)

# The lock storage is not thread safe to set items in, so this lock is used to
# protect that access.
_LOCK_STORAGE_MUTATE = threading.RLock()

# Currently in use paths -> in-process locks are maintained here.
#
# NOTE(harlowja): Values in this dictionary will be automatically released
# once the objects referencing those objects have been garbage collected.
_LOCK_STORAGE = weakref.WeakValueDictionary()


class DirBackend(base.Backend):
    """A backend that writes logbooks, flow details, and task details to a
    provided directory. This backend does *not* provide transactional
    semantics although it does guarantee that there will be no race
    conditions when writing/reading by using file level locking and
    in-process locking.

    NOTE(harlowja): this is more of an example/testing backend and likely
    should *not* be used in production, since this backend lacks
    transactional semantics.
    """

    def __init__(self, conf):
        super(DirBackend, self).__init__(conf)
        self._path = os.path.abspath(conf['path'])
        self._lock_path = os.path.join(self._path, 'locks')
        self._file_cache = {}
        # Ensure that multiple threads are not accessing the same storage at
        # the same time, the file lock mechanism doesn't protect against this
        # so we must do in-process locking as well.
        with _LOCK_STORAGE_MUTATE:
            self._lock = _LOCK_STORAGE.setdefault(self._path,
                                                  threading.RLock())

    @property
    def lock_path(self):
        return self._lock_path

    @property
    def base_path(self):
        return self._path

    def get_connection(self):
        return Connection(self)

    def close(self):
        pass


class Connection(base.Connection):
    def __init__(self, backend):
        self._backend = backend
        self._file_cache = self._backend._file_cache
        self._flow_path = os.path.join(self._backend.base_path, 'flows')
        self._task_path = os.path.join(self._backend.base_path, 'tasks')
        self._book_path = os.path.join(self._backend.base_path, 'books')
        # Share the backend's lock so that all threads using the given
        # backend are restricted in writing, since the per-process lock we
        # are using to restrict the multi-process access does not work
        # inside a process.
        self._lock = backend._lock

    def validate(self):
        # Verify key paths exist.
        paths = [
            self._backend.base_path,
            self._backend.lock_path,
            self._flow_path,
            self._task_path,
            self._book_path,
        ]
        for p in paths:
            if not os.path.isdir(p):
                raise RuntimeError("Missing required directory: %s" % (p))

    def _read_from(self, filename):
        # This is very similar to the oslo-incubator fileutils module, but
        # tweaked to not depend on a global cache, as well as tweaked to not
        # pull-in the oslo logging module (which is a huge pile of code).
        mtime = os.path.getmtime(filename)
        cache_info = self._file_cache.setdefault(filename, {})
        if not cache_info or mtime > cache_info.get('mtime', 0):
            with open(filename, 'rb') as fp:
                cache_info['data'] = fp.read().decode('utf-8')
                cache_info['mtime'] = mtime
        return cache_info['data']

    def _write_to(self, filename, contents):
        if isinstance(contents, six.text_type):
            contents = contents.encode('utf-8')
        with open(filename, 'wb') as fp:
            fp.write(contents)
        self._file_cache.pop(filename, None)

    def _run_with_process_lock(self, lock_name, functor, *args, **kwargs):
        lock_path = os.path.join(self.backend.lock_path, lock_name)
        with lock_utils.InterProcessLock(lock_path):
            try:
                return functor(*args, **kwargs)
            except exc.TaskFlowException:
                raise
            except Exception as e:
                LOG.exception("Failed running locking file based session")
                # NOTE(harlowja): trap all other errors as storage errors.
                raise exc.StorageError("Failed running locking file based "
                                       "session: %s" % e, e)

    def _get_logbooks(self):
        lb_uuids = []
        try:
            lb_uuids = [d for d in os.listdir(self._book_path)
                        if os.path.isdir(os.path.join(self._book_path, d))]
        except EnvironmentError as e:
            if e.errno != errno.ENOENT:
                raise
        for lb_uuid in lb_uuids:
            try:
                yield self._get_logbook(lb_uuid)
            except exc.NotFound:
                pass

    def get_logbooks(self):
        # NOTE(harlowja): don't hold the lock while iterating.
        with self._lock:
            books = list(self._get_logbooks())
        for b in books:
            yield b

    @property
    def backend(self):
        return self._backend

    def close(self):
        pass

    def _save_task_details(self, task_detail, ignore_missing):
        # See if we have an existing task detail to merge with.
        e_td = None
        try:
            e_td = self._get_task_details(task_detail.uuid, lock=False)
        except EnvironmentError:
            if not ignore_missing:
                raise exc.NotFound("No task details found with id: %s"
                                   % task_detail.uuid)
        if e_td is not None:
            task_detail = p_utils.task_details_merge(e_td, task_detail)
        td_path = os.path.join(self._task_path, task_detail.uuid)
        td_data = p_utils.format_task_detail(task_detail)
        self._write_to(td_path, jsonutils.dumps(td_data))
        return task_detail

    @lock_utils.locked
    def update_task_details(self, task_detail):
        return self._run_with_process_lock("task",
                                           self._save_task_details,
                                           task_detail,
                                           ignore_missing=False)

    def _get_task_details(self, uuid, lock=True):
        def _get():
            td_path = os.path.join(self._task_path, uuid)
            td_data = jsonutils.loads(self._read_from(td_path))
            return p_utils.unformat_task_detail(uuid, td_data)

        if lock:
            return self._run_with_process_lock('task', _get)
        else:
            return _get()

    def _get_flow_details(self, uuid, lock=True):
        def _get():
            fd_path = os.path.join(self._flow_path, uuid)
            meta_path = os.path.join(fd_path, 'metadata')
            meta = jsonutils.loads(self._read_from(meta_path))
            fd = p_utils.unformat_flow_detail(uuid, meta)
            td_to_load = []
            td_path = os.path.join(fd_path, 'tasks')
            try:
                td_to_load = [f for f in os.listdir(td_path)
                              if os.path.islink(os.path.join(td_path, f))]
            except EnvironmentError as e:
                if e.errno != errno.ENOENT:
                    raise
            for t_uuid in td_to_load:
                fd.add(self._get_task_details(t_uuid))
            return fd

        if lock:
            return self._run_with_process_lock('flow', _get)
        else:
            return _get()

    def _save_tasks_and_link(self, task_details, local_task_path):
        for task_detail in task_details:
            self._save_task_details(task_detail, ignore_missing=True)
            src_td_path = os.path.join(self._task_path, task_detail.uuid)
            target_td_path = os.path.join(local_task_path, task_detail.uuid)
            try:
                os.symlink(src_td_path, target_td_path)
            except EnvironmentError as e:
                if e.errno != errno.EEXIST:
                    raise

    def _save_flow_details(self, flow_detail, ignore_missing):
        # See if we have an existing flow detail to merge with.
        e_fd = None
        try:
            e_fd = self._get_flow_details(flow_detail.uuid, lock=False)
        except EnvironmentError:
            if not ignore_missing:
                raise exc.NotFound("No flow details found with id: %s"
                                   % flow_detail.uuid)
        if e_fd is not None:
            e_fd = p_utils.flow_details_merge(e_fd, flow_detail)
            for td in flow_detail:
                if e_fd.find(td.uuid) is None:
                    e_fd.add(td)
            flow_detail = e_fd
        flow_path = os.path.join(self._flow_path, flow_detail.uuid)
        misc.ensure_tree(flow_path)
        self._write_to(
            os.path.join(flow_path, 'metadata'),
            jsonutils.dumps(p_utils.format_flow_detail(flow_detail)))
        if len(flow_detail):
            task_path = os.path.join(flow_path, 'tasks')
            misc.ensure_tree(task_path)
            self._run_with_process_lock('task',
                                        self._save_tasks_and_link,
                                        list(flow_detail), task_path)
        return flow_detail

    @lock_utils.locked
    def update_flow_details(self, flow_detail):
        return self._run_with_process_lock("flow",
                                           self._save_flow_details,
                                           flow_detail,
                                           ignore_missing=False)

    def _save_flows_and_link(self, flow_details, local_flow_path):
        for flow_detail in flow_details:
            self._save_flow_details(flow_detail, ignore_missing=True)
            src_fd_path = os.path.join(self._flow_path, flow_detail.uuid)
            target_fd_path = os.path.join(local_flow_path, flow_detail.uuid)
            try:
                os.symlink(src_fd_path, target_fd_path)
            except EnvironmentError as e:
                if e.errno != errno.EEXIST:
                    raise

    def _save_logbook(self, book):
        # See if we have an existing logbook to merge with.
        e_lb = None
        try:
            e_lb = self._get_logbook(book.uuid)
        except exc.NotFound:
            pass
        if e_lb is not None:
            e_lb = p_utils.logbook_merge(e_lb, book)
            for fd in book:
                if e_lb.find(fd.uuid) is None:
                    e_lb.add(fd)
            book = e_lb
        book_path = os.path.join(self._book_path, book.uuid)
        misc.ensure_tree(book_path)
        created_at = None
        if e_lb is not None:
            created_at = e_lb.created_at
        self._write_to(os.path.join(book_path, 'metadata'),
                       jsonutils.dumps(
                           p_utils.format_logbook(book,
                                                  created_at=created_at)))
        if len(book):
            flow_path = os.path.join(book_path, 'flows')
            misc.ensure_tree(flow_path)
            self._run_with_process_lock('flow',
                                        self._save_flows_and_link,
                                        list(book), flow_path)
        return book

    @lock_utils.locked
    def save_logbook(self, book):
        return self._run_with_process_lock("book",
                                           self._save_logbook, book)

    @lock_utils.locked
    def upgrade(self):
        def _step_create():
            for d in (self._book_path, self._flow_path, self._task_path):
                misc.ensure_tree(d)

        misc.ensure_tree(self._backend.base_path)
        misc.ensure_tree(self._backend.lock_path)
        self._run_with_process_lock("init", _step_create)

    @lock_utils.locked
    def clear_all(self):
        def _step_clear():
            for d in (self._book_path, self._flow_path, self._task_path):
                if os.path.isdir(d):
                    shutil.rmtree(d)

        def _step_task():
            self._run_with_process_lock("task", _step_clear)

        def _step_flow():
            self._run_with_process_lock("flow", _step_task)

        def _step_book():
            self._run_with_process_lock("book", _step_flow)

        # Acquire all locks by going through this little hierarchy.
        self._run_with_process_lock("init", _step_book)

    @lock_utils.locked
    def destroy_logbook(self, book_uuid):
        def _destroy_tasks(task_details):
            for task_detail in task_details:
                try:
                    shutil.rmtree(os.path.join(self._task_path,
                                               task_detail.uuid))
                except EnvironmentError as e:
                    if e.errno != errno.ENOENT:
                        raise

        def _destroy_flows(flow_details):
            for flow_detail in flow_details:
                self._run_with_process_lock("task", _destroy_tasks,
                                            list(flow_detail))
                try:
                    shutil.rmtree(os.path.join(self._flow_path,
                                               flow_detail.uuid))
                except EnvironmentError as e:
                    if e.errno != errno.ENOENT:
                        raise

        def _destroy_book():
            book = self._get_logbook(book_uuid)
            self._run_with_process_lock("flow", _destroy_flows,
                                        list(book))
            try:
                shutil.rmtree(os.path.join(self._book_path, book.uuid))
            except EnvironmentError as e:
                if e.errno != errno.ENOENT:
                    raise

        # Acquire all locks by going through this little hierarchy.
        self._run_with_process_lock("book", _destroy_book)

    def _get_logbook(self, book_uuid):
        book_path = os.path.join(self._book_path, book_uuid)
        meta_path = os.path.join(book_path, 'metadata')
        try:
            meta = jsonutils.loads(self._read_from(meta_path))
        except EnvironmentError as e:
            if e.errno == errno.ENOENT:
                raise exc.NotFound("No logbook found with id: %s"
                                   % book_uuid)
            else:
                raise
        lb = p_utils.unformat_logbook(book_uuid, meta)
        fd_path = os.path.join(book_path, 'flows')
        fd_uuids = []
        try:
            fd_uuids = [f for f in os.listdir(fd_path)
                        if os.path.islink(os.path.join(fd_path, f))]
        except EnvironmentError as e:
            if e.errno != errno.ENOENT:
                raise
        for fd_uuid in fd_uuids:
            lb.add(self._get_flow_details(fd_uuid))
        return lb

    @lock_utils.locked
    def get_logbook(self, book_uuid):
        return self._run_with_process_lock("book",
                                           self._get_logbook, book_uuid)
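

# NOTE: Illustrative usage sketch appended for documentation purposes only
# (not part of the original module); the temporary-directory setup is an
# assumption made for demonstration.
def _example_usage():  # pragma: no cover
    import contextlib
    import tempfile

    from taskflow.persistence import logbook

    backend = DirBackend({'path': tempfile.mkdtemp()})
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()  # creates the books/flows/tasks/locks directories
        lb = logbook.LogBook("example-book")
        conn.save_logbook(lb)
        print(conn.get_logbook(lb.uuid))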
taskflow-0.1.3/taskflow/__init__.py0000664000175300017540000000127512275003514020451 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
taskflow-0.1.3/taskflow/storage.py0000664000175300017540000004104612275003514020356 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import abc
import contextlib
import logging

import six

from taskflow import exceptions
from taskflow.openstack.common import uuidutils
from taskflow.persistence import logbook
from taskflow import states
from taskflow.utils import lock_utils
from taskflow.utils import misc
from taskflow.utils import reflection

LOG = logging.getLogger(__name__)

STATES_WITH_RESULTS = (states.SUCCESS, states.REVERTING, states.FAILURE)


@six.add_metaclass(abc.ABCMeta)
class Storage(object):
    """Interface between engines and logbook.

    This class provides a simple interface to save tasks of a given flow and
    associated activity and results to a persistence layer (logbook,
    task_details, flow_details) for use by engines, making it easier to
    interact with the underlying storage & backend mechanism.
    """

    injector_name = '_TaskFlow_INJECTOR'

    def __init__(self, flow_detail, backend=None):
        self._result_mappings = {}
        self._reverse_mapping = {}
        self._backend = backend
        self._flowdetail = flow_detail
        self._lock = self._lock_cls()

        # NOTE(imelnikov): failure serialization loses information, so we
        # cache failures here, in a task name -> misc.Failure mapping.
        self._failures = {}
        for td in self._flowdetail:
            if td.failure is not None:
                self._failures[td.name] = td.failure

        self._task_name_to_uuid = dict((td.name, td.uuid)
                                       for td in self._flowdetail)
        try:
            injector_td = self._taskdetail_by_name(self.injector_name)
        except exceptions.NotFound:
            pass
        else:
            names = six.iterkeys(injector_td.results)
            self._set_result_mapping(injector_td.name,
                                     dict((name, name) for name in names))

    @abc.abstractproperty
    def _lock_cls(self):
        """Lock class used to generate reader/writer locks for protecting
        read/write access to the underlying storage backend and internally
        mutating operations.
        """

    def _with_connection(self, functor, *args, **kwargs):
        # NOTE(harlowja): Activate the given function with a backend
        # connection, if a backend is provided in the first place, otherwise
        # don't call the function.
        if self._backend is None:
            LOG.debug("No backend provided, not calling functor '%s'",
                      reflection.get_callable_name(functor))
            return
        with contextlib.closing(self._backend.get_connection()) as conn:
            functor(conn, *args, **kwargs)

    def ensure_task(self, task_name, task_version=None, result_mapping=None):
        """Ensure that there is a taskdetail that corresponds to the task.

        If the task does not exist, adds a record for it. The added task
        will be in the PENDING state. Sets the result mapping for the task
        from the result_mapping argument.

        Returns the uuid for the task details corresponding to the task
        with the given name.
        """
        with self._lock.write_lock():
            try:
                task_id = self._task_name_to_uuid[task_name]
            except KeyError:
                task_id = uuidutils.generate_uuid()
                self._add_task(task_id, task_name, task_version)
            self._set_result_mapping(task_name, result_mapping)
        return task_id

    def _add_task(self, uuid, task_name, task_version=None):
        """Add the task to storage.

        The task becomes known to storage by that name and uuid; its state
        is set to PENDING.
        """

        def save_both(conn, td):
            """Saves the flow and the task detail with the same connection."""
            self._save_flow_detail(conn)
            self._save_task_detail(conn, td)

        # TODO(imelnikov): check that task with same uuid or
        # task name does not exist.
        td = logbook.TaskDetail(name=task_name, uuid=uuid)
        td.state = states.PENDING
        td.version = task_version
        self._flowdetail.add(td)
        self._with_connection(save_both, td)
        self._task_name_to_uuid[task_name] = uuid

    @property
    def flow_name(self):
        # This never changes (so no read locking needed).
        return self._flowdetail.name

    @property
    def flow_uuid(self):
        # This never changes (so no read locking needed).
        return self._flowdetail.uuid

    def _save_flow_detail(self, conn):
        # NOTE(harlowja): we need to update our contained flow detail if
        # the result of the update actually added more (aka another process
        # added item to the flow detail).
        self._flowdetail.update(conn.update_flow_details(self._flowdetail))

    def _taskdetail_by_name(self, task_name):
        try:
            return self._flowdetail.find(self._task_name_to_uuid[task_name])
        except KeyError:
            raise exceptions.NotFound("Unknown task name: %s" % task_name)

    def _save_task_detail(self, conn, task_detail):
        # NOTE(harlowja): we need to update our contained task detail if
        # the result of the update actually added more (aka another process
        # is also modifying the task detail).
        task_detail.update(conn.update_task_details(task_detail))

    def get_task_uuid(self, task_name):
        """Get task uuid by given name."""
        with self._lock.read_lock():
            td = self._taskdetail_by_name(task_name)
            return td.uuid

    def set_task_state(self, task_name, state):
        """Set task state."""
        with self._lock.write_lock():
            td = self._taskdetail_by_name(task_name)
            td.state = state
            self._with_connection(self._save_task_detail, td)

    def get_task_state(self, task_name):
        """Get state of task with given name."""
        with self._lock.read_lock():
            td = self._taskdetail_by_name(task_name)
            return td.state

    def get_tasks_states(self, task_names):
        """Gets all task states."""
        with self._lock.read_lock():
            return dict((name, self.get_task_state(name))
                        for name in task_names)

    def update_task_metadata(self, task_name, update_with):
        """Updates a task's metadata."""
        if not update_with:
            return
        with self._lock.write_lock():
            td = self._taskdetail_by_name(task_name)
            if not td.meta:
                td.meta = {}
            td.meta.update(update_with)
            self._with_connection(self._save_task_detail, td)

    def set_task_progress(self, task_name, progress, details=None):
        """Set task progress.
        :param task_name: task name
        :param progress: task progress
        :param details: task specific progress information
        """
        metadata_update = {
            'progress': progress,
        }
        if details is not None:
            # NOTE(imelnikov): as we can update progress without
            # updating details (e.g. automatically from engine)
            # we save progress value with details, too.
            if details:
                metadata_update['progress_details'] = {
                    'at_progress': progress,
                    'details': details,
                }
            else:
                metadata_update['progress_details'] = None
        self.update_task_metadata(task_name, metadata_update)

    def get_task_progress(self, task_name):
        """Get progress of task with given name.

        :param task_name: task name
        :returns: current task progress value
        """
        with self._lock.read_lock():
            td = self._taskdetail_by_name(task_name)
            if not td.meta:
                return 0.0
            return td.meta.get('progress', 0.0)

    def get_task_progress_details(self, task_name):
        """Get progress details of task with given name.

        :param task_name: task name
        :returns: None if progress_details not defined, else progress_details
                  dict
        """
        with self._lock.read_lock():
            td = self._taskdetail_by_name(task_name)
            if not td.meta:
                return None
            return td.meta.get('progress_details')

    def _check_all_results_provided(self, task_name, data):
        """Warn if the task did not provide some of its results.

        This may happen if the task returns a shorter tuple or list, or a
        dict without all of the needed keys. It may also happen if the task
        returns a result of the wrong type.
        """
        result_mapping = self._result_mappings.get(task_name)
        if not result_mapping:
            return
        for name, index in six.iteritems(result_mapping):
            try:
                misc.item_from(data, index, name=name)
            except exceptions.NotFound:
                LOG.warning("Task %s did not supply result "
                            "with index %r (name %s)", task_name, index, name)

    def save(self, task_name, data, state=states.SUCCESS):
        """Put result for task with the given name into storage."""
        with self._lock.write_lock():
            td = self._taskdetail_by_name(task_name)
            td.state = state
            if state == states.FAILURE and isinstance(data, misc.Failure):
                td.results = None
                td.failure = data
                self._failures[td.name] = data
            else:
                td.results = data
                td.failure = None
                self._check_all_results_provided(td.name, data)
            self._with_connection(self._save_task_detail, td)

    def get(self, task_name):
        """Get result for task with name 'task_name' from storage."""
        with self._lock.read_lock():
            td = self._taskdetail_by_name(task_name)
            if td.failure is not None:
                cached = self._failures.get(task_name)
                if td.failure.matches(cached):
                    return cached
                return td.failure
            if td.state not in STATES_WITH_RESULTS:
                raise exceptions.NotFound("Result for task %s is not known"
                                          % task_name)
            return td.results

    def get_failures(self):
        """Get list of failures that happened with this flow.

        No order guaranteed.
        """
        with self._lock.read_lock():
            return self._failures.copy()

    def has_failures(self):
        """Returns True if there are failed tasks in the storage."""
        with self._lock.read_lock():
            return bool(self._failures)

    def _reset_task(self, td, state):
        if td.name == self.injector_name:
            return False
        if td.state == state:
            return False
        td.results = None
        td.failure = None
        td.state = state
        self._failures.pop(td.name, None)
        return True

    def reset(self, task_name, state=states.PENDING):
        """Remove result for task with the given name from storage."""
        with self._lock.write_lock():
            td = self._taskdetail_by_name(task_name)
            if self._reset_task(td, state):
                self._with_connection(self._save_task_detail, td)

    def reset_tasks(self):
        """Reset all tasks to PENDING state, removing results.

        Returns list of (name, uuid) tuples for all tasks that were reset.
""" reset_results = [] def do_reset_all(connection): for td in self._flowdetail: if self._reset_task(td, states.PENDING): self._save_task_detail(connection, td) reset_results.append((td.name, td.uuid)) with self._lock.write_lock(): self._with_connection(do_reset_all) return reset_results def inject(self, pairs): """Add values into storage. This method should be used to put flow parameters (requirements that are not satisfied by any task in the flow) into storage. """ with self._lock.write_lock(): try: td = self._taskdetail_by_name(self.injector_name) except exceptions.NotFound: self._add_task(uuidutils.generate_uuid(), self.injector_name) td = self._taskdetail_by_name(self.injector_name) td.results = dict(pairs) td.state = states.SUCCESS else: td.results.update(pairs) self._with_connection(self._save_task_detail, td) names = six.iterkeys(td.results) self._set_result_mapping(self.injector_name, dict((name, name) for name in names)) def _set_result_mapping(self, task_name, mapping): """Set mapping for naming task results. The result saved with given name would be accessible by names defined in mapping. Mapping is a dict name => index. If index is None, the whole result will have this name; else, only part of it, result[index]. """ if not mapping: return self._result_mappings[task_name] = mapping for name, index in six.iteritems(mapping): entries = self._reverse_mapping.setdefault(name, []) # NOTE(imelnikov): We support setting same result mapping for # the same task twice (e.g when we are injecting 'a' and then # injecting 'a' again), so we should not log warning below in # that case and we should have only one item for each pair # (task_name, index) in entries. It should be put to the end of # entries list because order matters on fetching. try: entries.remove((task_name, index)) except ValueError: pass entries.append((task_name, index)) if len(entries) > 1: LOG.warning("Multiple provider mappings being created for %r", name) def fetch(self, name): """Fetch named task result.""" with self._lock.read_lock(): try: indexes = self._reverse_mapping[name] except KeyError: raise exceptions.NotFound("Name %r is not mapped" % name) # Return the first one that is found. for (task_name, index) in reversed(indexes): try: result = self.get(task_name) return misc.item_from(result, index, name=name) except exceptions.NotFound: pass raise exceptions.NotFound("Unable to find result %r" % name) def fetch_all(self): """Fetch all named task results known so far. Should be used for debugging and testing purposes mostly. 
""" with self._lock.read_lock(): results = {} for name in self._reverse_mapping: try: results[name] = self.fetch(name) except exceptions.NotFound: pass return results def fetch_mapped_args(self, args_mapping): """Fetch arguments for the task using arguments mapping.""" with self._lock.read_lock(): return dict((key, self.fetch(name)) for key, name in six.iteritems(args_mapping)) def set_flow_state(self, state): """Set flow details state and save it.""" with self._lock.write_lock(): self._flowdetail.state = state self._with_connection(self._save_flow_detail) def get_flow_state(self): """Get state from flow details.""" with self._lock.read_lock(): state = self._flowdetail.state if state is None: state = states.PENDING return state class MultiThreadedStorage(Storage): """Storage that uses locks to protect against concurrent access.""" _lock_cls = lock_utils.ReaderWriterLock class SingleThreadedStorage(Storage): """Storage that uses dummy locks when you really don't need locks.""" _lock_cls = lock_utils.DummyReaderWriterLock taskflow-0.1.3/taskflow/engines/0000775000175300017540000000000012275003604017763 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/engines/base.py0000664000175300017540000000451612275003514021255 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from taskflow.utils import misc @six.add_metaclass(abc.ABCMeta) class EngineBase(object): """Base for all engines implementations.""" def __init__(self, flow, flow_detail, backend, conf): self._flow = flow self._flow_detail = flow_detail self._backend = backend if not conf: self._conf = {} else: self._conf = dict(conf) self._storage = None self.notifier = misc.TransitionNotifier() self.task_notifier = misc.TransitionNotifier() @property def storage(self): """The storage unit for this flow.""" if self._storage is None: self._storage = self._storage_cls(self._flow_detail, self._backend) return self._storage @abc.abstractproperty def _storage_cls(self): """Storage class that will be used to generate storage objects.""" @abc.abstractmethod def compile(self): """Compiles the contained flow into a structure which the engine can use to run or if this can not be done then an exception is thrown indicating why this compilation could not be achieved. """ @abc.abstractmethod def run(self): """Runs the flow in the engine to completion (or die trying).""" @abc.abstractmethod def suspend(self): """Attempts to suspend the engine. If the engine is currently running tasks then this will attempt to suspend future work from being started (currently active tasks can not currently be preempted) and move the engine into a suspend state which can then later be resumed from. 
""" taskflow-0.1.3/taskflow/engines/helpers.py0000664000175300017540000002467512275003514022015 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import six import stevedore.driver from taskflow.openstack.common import importutils from taskflow.persistence import backends as p_backends from taskflow.utils import persistence_utils as p_utils from taskflow.utils import reflection # NOTE(imelnikov): this is the entrypoint namespace, not the module namespace. ENGINES_NAMESPACE = 'taskflow.engines' def _fetch_validate_factory(flow_factory): if isinstance(flow_factory, six.string_types): factory_fun = importutils.import_class(flow_factory) factory_name = flow_factory else: factory_fun = flow_factory factory_name = reflection.get_callable_name(flow_factory) try: reimported = importutils.import_class(factory_name) assert reimported == factory_fun except (ImportError, AssertionError): raise ValueError('Flow factory %r is not reimportable by name %s' % (factory_fun, factory_name)) return (factory_name, factory_fun) def load(flow, store=None, flow_detail=None, book=None, engine_conf=None, backend=None, namespace=ENGINES_NAMESPACE): """Load flow into engine. This function creates and prepares engine to run the flow. All that is left is to run the engine with 'run()' method. Which engine to load is specified in 'engine_conf' parameter. It can be a string that names engine type or a dictionary which holds engine type (with 'engine' key) and additional engine-specific configuration (for example, executor for multithreaded engine). Which storage backend to use is defined by backend parameter. It can be backend itself, or a dictionary that is passed to taskflow.persistence.backends.fetch to obtain backend. :param flow: flow to load :param store: dict -- data to put to storage to satisfy flow requirements :param flow_detail: FlowDetail that holds the state of the flow (if one is not provided then one will be created for you in the provided backend) :param book: LogBook to create flow detail in if flow_detail is None :param engine_conf: engine type and configuration configuration :param backend: storage backend to use or configuration :param namespace: driver namespace for stevedore (default is fine if you don't know what is it) :returns: engine """ if engine_conf is None: engine_conf = {'engine': 'default'} # NOTE(imelnikov): this allows simpler syntax. 
    if isinstance(engine_conf, six.string_types):
        engine_conf = {'engine': engine_conf}

    if isinstance(backend, dict):
        backend = p_backends.fetch(backend)

    if flow_detail is None:
        flow_detail = p_utils.create_flow_detail(flow, book=book,
                                                 backend=backend)

    mgr = stevedore.driver.DriverManager(
        namespace, engine_conf['engine'],
        invoke_on_load=True,
        invoke_kwds={
            'conf': engine_conf.copy(),
            'flow': flow,
            'flow_detail': flow_detail,
            'backend': backend
        })
    engine = mgr.driver
    if store:
        engine.storage.inject(store)
    return engine


def run(flow, store=None, flow_detail=None, book=None,
        engine_conf=None, backend=None, namespace=ENGINES_NAMESPACE):
    """Run the flow.

    This function loads the flow into an engine (with the 'load' function)
    and runs the engine.

    Which engine to load is specified in the 'engine_conf' parameter. It
    can be a string that names the engine type or a dictionary which holds
    the engine type (with an 'engine' key) and additional engine-specific
    configuration (for example, the executor for a multithreaded engine).

    Which storage backend to use is defined by the backend parameter. It
    can be a backend itself, or a dictionary that is passed to
    taskflow.persistence.backends.fetch to obtain the backend.

    :param flow: flow to run
    :param store: dict -- data to put to storage to satisfy flow requirements
    :param flow_detail: FlowDetail that holds the state of the flow (if one is
        not provided then one will be created for you in the provided backend)
    :param book: LogBook to create flow detail in if flow_detail is None
    :param engine_conf: engine type and configuration
    :param backend: storage backend to use or configuration
    :param namespace: driver namespace for stevedore (the default is fine
       if you don't know what it is)
    :returns: dictionary of all named task results (see Storage.fetch_all)
    """
    engine = load(flow, store=store, flow_detail=flow_detail,
                  book=book, engine_conf=engine_conf, backend=backend,
                  namespace=namespace)
    engine.run()
    return engine.storage.fetch_all()
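

# NOTE: Illustrative usage sketch added for documentation purposes only (not
# part of the original module): running a tiny one-task flow through run()
# above; taskflow.task.Task and the linear flow pattern are the public
# taskflow classes, used here under that assumption.
def _example_run():  # pragma: no cover
    from taskflow.patterns import linear_flow
    from taskflow import task

    class Doubler(task.Task):
        default_provides = 'doubled'

        def execute(self, x):
            return 2 * x

    flow = linear_flow.Flow('double-it').add(Doubler())
    results = run(flow, store={'x': 21})
    print(results)  # expected (assumed): {'x': 21, 'doubled': 42}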
def save_factory_details(flow_detail,
                         flow_factory, factory_args, factory_kwargs,
                         backend=None):
    """Saves the given factory's reimportable name, args, and kwargs into
    the flow detail.

    This function saves the factory name, arguments, and keyword arguments
    into the given flow details object and if a backend is provided it will
    also ensure that the backend saves the flow details after being updated.

    :param flow_detail: FlowDetail that holds state of the flow to load
    :param flow_factory: function or string: function that creates the flow
    :param factory_args: list or tuple of factory positional arguments
    :param factory_kwargs: dict of factory keyword arguments
    :param backend: storage backend to use or configuration
    """
    if not factory_args:
        factory_args = []
    if not factory_kwargs:
        factory_kwargs = {}
    factory_name, _factory_fun = _fetch_validate_factory(flow_factory)
    factory_data = {
        'factory': {
            'name': factory_name,
            'args': factory_args,
            'kwargs': factory_kwargs,
        },
    }
    if not flow_detail.meta:
        flow_detail.meta = factory_data
    else:
        flow_detail.meta.update(factory_data)
    if backend is not None:
        if isinstance(backend, dict):
            backend = p_backends.fetch(backend)
        with contextlib.closing(backend.get_connection()) as conn:
            conn.update_flow_details(flow_detail)


def load_from_factory(flow_factory, factory_args=None, factory_kwargs=None,
                      store=None, book=None, engine_conf=None, backend=None,
                      namespace=ENGINES_NAMESPACE):
    """Loads a flow from a factory function into an engine.

    Gets the flow factory function (or the name of it) and creates a flow
    with it. Then, the flow is loaded into an engine with load(), and the
    factory function's fully qualified name is saved to the flow metadata
    so that the flow can later be resumed.

    :param flow_factory: function or string: function that creates the flow
    :param factory_args: list or tuple of factory positional arguments
    :param factory_kwargs: dict of factory keyword arguments
    :param store: dict -- data to put to storage to satisfy flow requirements
    :param book: LogBook to create flow detail in
    :param engine_conf: engine type and configuration
    :param backend: storage backend to use or configuration
    :param namespace: driver namespace for stevedore (the default is fine
       if you don't know what it is)
    :returns: engine
    """
    _factory_name, factory_fun = _fetch_validate_factory(flow_factory)
    if not factory_args:
        factory_args = []
    if not factory_kwargs:
        factory_kwargs = {}
    flow = factory_fun(*factory_args, **factory_kwargs)
    if isinstance(backend, dict):
        backend = p_backends.fetch(backend)
    flow_detail = p_utils.create_flow_detail(flow, book=book, backend=backend)
    save_factory_details(flow_detail,
                         flow_factory, factory_args, factory_kwargs,
                         backend=backend)
    return load(flow=flow, store=store, flow_detail=flow_detail, book=book,
                engine_conf=engine_conf, backend=backend,
                namespace=namespace)


def flow_from_detail(flow_detail):
    """Recreate flow previously loaded with load_from_factory.

    Gets the flow factory name from metadata and calls it to recreate
    the flow.

    :param flow_detail: FlowDetail that holds state of the flow to load
    """
    try:
        factory_data = flow_detail.meta['factory']
    except (KeyError, AttributeError, TypeError):
        raise ValueError('Cannot reconstruct flow %s %s: '
                         'no factory information saved.'
                         % (flow_detail.name, flow_detail.uuid))

    try:
        factory_fun = importutils.import_class(factory_data['name'])
    except (KeyError, ImportError):
        raise ImportError('Could not import factory for flow %s %s'
                          % (flow_detail.name, flow_detail.uuid))

    args = factory_data.get('args', ())
    kwargs = factory_data.get('kwargs', {})
    return factory_fun(*args, **kwargs)


def load_from_detail(flow_detail, store=None, engine_conf=None, backend=None,
                     namespace=ENGINES_NAMESPACE):
    """Reload flow previously loaded with load_from_factory.

    Gets the flow factory name from metadata, calls it to recreate the flow,
    and loads the flow into an engine with load().

    :param flow_detail: FlowDetail that holds state of the flow to load
    :param store: dict -- data to put to storage to satisfy flow requirements
    :param engine_conf: engine type and configuration
    :param backend: storage backend to use or configuration
    :param namespace: driver namespace for stevedore (the default is fine
       if you don't know what it is)
    :returns: engine
    """
    flow = flow_from_detail(flow_detail)
    return load(flow, flow_detail=flow_detail,
                store=store, engine_conf=engine_conf, backend=backend,
                namespace=namespace)
taskflow-0.1.3/taskflow/engines/__init__.py0000664000175300017540000000214712275003514022100 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

# promote helpers to this module namespace
from taskflow.engines.helpers import flow_from_detail  # noqa
from taskflow.engines.helpers import load  # noqa
from taskflow.engines.helpers import load_from_detail  # noqa
from taskflow.engines.helpers import load_from_factory  # noqa
from taskflow.engines.helpers import run  # noqa
from taskflow.engines.helpers import save_factory_details  # noqa
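

# NOTE: Illustrative usage sketch added for documentation purposes only (not
# part of the original module); the dotted factory path 'my_flows.make_flow'
# is hypothetical.
def _example_load_from_factory():  # pragma: no cover
    # The factory must be reimportable by its qualified name (a module-level
    # function, not a lambda); a dotted-path string is also accepted.
    engine = load_from_factory('my_flows.make_flow', store={'x': 1})
    engine.run()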
LOG.exception("Failed setting task progress for %s to %0.3f", task, progress) def schedule_execution(self, task): if not self._change_state(task, states.RUNNING, progress=0.0): return kwargs = self._storage.fetch_mapped_args(task.rebind) return self._task_executor.execute_task(task, kwargs, self._on_update_progress) def complete_execution(self, task, result): if isinstance(result, misc.Failure): self._change_state(task, states.FAILURE, result=result) else: self._change_state(task, states.SUCCESS, result=result, progress=1.0) def schedule_reversion(self, task): if not self._change_state(task, states.REVERTING, progress=0.0): return kwargs = self._storage.fetch_mapped_args(task.rebind) task_result = self._storage.get(task.name) failures = self._storage.get_failures() future = self._task_executor.revert_task(task, kwargs, task_result, failures, self._on_update_progress) return future def complete_reversion(self, task, rev_result): if isinstance(rev_result, misc.Failure): self._change_state(task, states.FAILURE) else: self._change_state(task, states.REVERTED, progress=1.0) def wait_for_any(self, fs, timeout): return self._task_executor.wait_for_any(fs, timeout) taskflow-0.1.3/taskflow/engines/action_engine/graph_analyzer.py0000664000175300017540000000620512275003514026150 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from taskflow import states as st class GraphAnalyzer(object): """Analyzes a execution graph to get the next nodes for execution or reversion by utilizing the graphs nodes and edge relations and comparing the node state against the states stored in storage. """ def __init__(self, graph, storage): self._graph = graph self._storage = storage @property def execution_graph(self): return self._graph def browse_nodes_for_execute(self, node=None): """Browse next nodes to execute for given node if specified and for whole graph otherwise. """ if node: nodes = self._graph.successors(node) else: nodes = self._graph.nodes_iter() available_nodes = [] for node in nodes: if self._is_ready_for_execute(node): available_nodes.append(node) return available_nodes def browse_nodes_for_revert(self, node=None): """Browse next nodes to revert for given node if specified and for whole graph otherwise. 
""" if node: nodes = self._graph.predecessors(node) else: nodes = self._graph.nodes_iter() available_nodes = [] for node in nodes: if self._is_ready_for_revert(node): available_nodes.append(node) return available_nodes def _is_ready_for_execute(self, task): """Checks if task is ready to be executed.""" state = self._storage.get_task_state(task.name) if not st.check_task_transition(state, st.RUNNING): return False task_names = [] for prev_task in self._graph.predecessors(task): task_names.append(prev_task.name) task_states = self._storage.get_tasks_states(task_names) return all(state == st.SUCCESS for state in six.itervalues(task_states)) def _is_ready_for_revert(self, task): """Checks if task is ready to be reverted.""" state = self._storage.get_task_state(task.name) if not st.check_task_transition(state, st.REVERTING): return False task_names = [] for prev_task in self._graph.successors(task): task_names.append(prev_task.name) task_states = self._storage.get_tasks_states(task_names) return all(state in (st.PENDING, st.REVERTED) for state in six.itervalues(task_states)) taskflow-0.1.3/taskflow/engines/action_engine/executor.py0000664000175300017540000001071012275003514024774 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc from concurrent import futures import six from taskflow.utils import async_utils from taskflow.utils import misc from taskflow.utils import threading_utils # Execution and reversion events. EXECUTED = 'executed' REVERTED = 'reverted' def _execute_task(task, arguments, progress_callback): with task.autobind('update_progress', progress_callback): try: result = task.execute(**arguments) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. result = misc.Failure() return (task, EXECUTED, result) def _revert_task(task, arguments, result, failures, progress_callback): kwargs = arguments.copy() kwargs['result'] = result kwargs['flow_failures'] = failures with task.autobind('update_progress', progress_callback): try: result = task.revert(**kwargs) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. result = misc.Failure() return (task, REVERTED, result) @six.add_metaclass(abc.ABCMeta) class TaskExecutorBase(object): """Executes and reverts tasks. This class takes task and its arguments and executes or reverts it. It encapsulates knowledge on how task should be executed or reverted: right now, on separate thread, on another machine, etc. 
""" @abc.abstractmethod def execute_task(self, task, arguments, progress_callback=None): """Schedules task execution.""" @abc.abstractmethod def revert_task(self, task, arguments, result, failures, progress_callback=None): """Schedules task reversion.""" @abc.abstractmethod def wait_for_any(self, fs, timeout=None): """Wait for futures returned by this executor to complete.""" def start(self): """Prepare to execute tasks.""" pass def stop(self): """Finalize task executor.""" pass class SerialTaskExecutor(TaskExecutorBase): """Execute task one after another.""" def execute_task(self, task, arguments, progress_callback=None): return async_utils.make_completed_future( _execute_task(task, arguments, progress_callback)) def revert_task(self, task, arguments, result, failures, progress_callback=None): return async_utils.make_completed_future( _revert_task(task, arguments, result, failures, progress_callback)) def wait_for_any(self, fs, timeout=None): # NOTE(imelnikov): this executor returns only done futures. return fs, [] class ParallelTaskExecutor(TaskExecutorBase): """Executes tasks in parallel. Submits tasks to executor which should provide interface similar to concurrent.Futures.Executor. """ def __init__(self, executor=None): self._executor = executor self._own_executor = executor is None def execute_task(self, task, arguments, progress_callback=None): return self._executor.submit( _execute_task, task, arguments, progress_callback) def revert_task(self, task, arguments, result, failures, progress_callback=None): return self._executor.submit( _revert_task, task, arguments, result, failures, progress_callback) def wait_for_any(self, fs, timeout=None): return async_utils.wait_for_any(fs, timeout) def start(self): if self._own_executor: thread_count = threading_utils.get_optimal_thread_count() self._executor = futures.ThreadPoolExecutor(thread_count) def stop(self): if self._own_executor: self._executor.shutdown(wait=True) self._executor = None taskflow-0.1.3/taskflow/engines/action_engine/__init__.py0000664000175300017540000000127512275003514024703 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/engines/action_engine/graph_action.py0000664000175300017540000000732412275003514025603 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
taskflow-0.1.3/taskflow/engines/action_engine/__init__.py0000664000175300017540000000127512275003514024703 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
taskflow-0.1.3/taskflow/engines/action_engine/graph_action.py0000664000175300017540000000732412275003514025603 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from taskflow import states as st
from taskflow.utils import misc

_WAITING_TIMEOUT = 60  # in seconds


class FutureGraphAction(object):
    """Graph action built around futures returned by the task action.

    This graph action schedules all the tasks it can for execution and then
    waits on the returned futures. If the task executor is able to execute
    tasks in parallel, this enables parallel flow runs and reversion.
    """

    def __init__(self, analyzer, storage, task_action):
        self._analyzer = analyzer
        self._storage = storage
        self._task_action = task_action

    def is_running(self):
        return self._storage.get_flow_state() == st.RUNNING

    def is_reverting(self):
        return self._storage.get_flow_state() == st.REVERTING

    def execute(self):
        was_suspended = self._run(
            self.is_running,
            self._task_action.schedule_execution,
            self._task_action.complete_execution,
            self._analyzer.browse_nodes_for_execute)
        return st.SUSPENDED if was_suspended else st.SUCCESS

    def revert(self):
        was_suspended = self._run(
            self.is_reverting,
            self._task_action.schedule_reversion,
            self._task_action.complete_reversion,
            self._analyzer.browse_nodes_for_revert)
        return st.SUSPENDED if was_suspended else st.REVERTED

    def _run(self, running, schedule_node, complete_node, get_next_nodes):
        def schedule(nodes, not_done):
            for node in nodes:
                future = schedule_node(node)
                if future is not None:
                    not_done.append(future)
                else:
                    schedule(get_next_nodes(node), not_done)

        failures = []
        not_done = []
        schedule(get_next_nodes(), not_done)
        was_suspended = False
        while not_done:
            # NOTE(imelnikov): if timeout occurs before any of futures
            # completes, done list will be empty and we'll just go
            # for next iteration.
            done, not_done = self._task_action.wait_for_any(
                not_done, _WAITING_TIMEOUT)

            not_done = list(not_done)
            next_nodes = []
            for future in done:
                # NOTE(harlowja): event will be used in the future for smart
                # reversion (ignoring it for now).
                node, _event, result = future.result()
                complete_node(node, result)
                if isinstance(result, misc.Failure):
                    failures.append(result)
                else:
                    next_nodes.extend(get_next_nodes(node))

            if next_nodes:
                if running() and not failures:
                    schedule(next_nodes, not_done)
                else:
                    # NOTE(imelnikov): engine stopped while there were
                    # still some tasks to do, so we either failed
                    # or were suspended.
                    was_suspended = True

        misc.Failure.reraise_if_any(failures)
        return was_suspended
taskflow-0.1.3/taskflow/engines/action_engine/engine.py0000664000175300017540000001772212275003514024415 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

#    Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
taskflow-0.1.3/taskflow/engines/action_engine/engine.py0000664000175300017540000001772212275003514024415 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import threading

from taskflow.engines.action_engine import executor
from taskflow.engines.action_engine import graph_action
from taskflow.engines.action_engine import graph_analyzer
from taskflow.engines.action_engine import task_action
from taskflow.engines import base

from taskflow import exceptions as exc
from taskflow.openstack.common import excutils
from taskflow import states
from taskflow import storage as t_storage

from taskflow.utils import flow_utils
from taskflow.utils import lock_utils
from taskflow.utils import misc
from taskflow.utils import reflection


class ActionEngine(base.EngineBase):
    """Generic action-based engine.

    This engine flattens the flow (and any subflows) into an execution graph
    which contains the full runtime definition to be executed and then uses
    this graph in combination with the action classes & storage to attempt
    to run your flow (and any subflows & contained tasks) to completion.

    During this process it is permissible and valid to have a task or
    multiple tasks in the execution graph fail, which will cause the process
    of reversion to commence. See the valid states in the states module to
    learn more about what other states the tasks & flow being run can go
    through.
    """
    _graph_action_cls = graph_action.FutureGraphAction
    _graph_analyzer_cls = graph_analyzer.GraphAnalyzer
    _task_action_cls = task_action.TaskAction
    _task_executor_cls = executor.SerialTaskExecutor

    def __init__(self, flow, flow_detail, backend, conf):
        super(ActionEngine, self).__init__(flow, flow_detail, backend, conf)
        self._analyzer = None
        self._root = None
        self._compiled = False
        self._lock = threading.RLock()
        self._state_lock = threading.RLock()
        self._task_executor = None
        self._task_action = None

    def _revert(self, current_failure=None):
        self._change_state(states.REVERTING)
        try:
            state = self._root.revert()
        except Exception:
            with excutils.save_and_reraise_exception():
                self._change_state(states.FAILURE)

        self._change_state(state)
        if state == states.SUSPENDED:
            return
        failures = self.storage.get_failures()
        misc.Failure.reraise_if_any(failures.values())
        if current_failure:
            current_failure.reraise()

    def __str__(self):
        return "%s: %s" % (reflection.get_class_name(self), id(self))

    def suspend(self):
        if not self._compiled:
            raise exc.InvariantViolation("Cannot suspend an engine"
                                         " which has not been compiled")
        self._change_state(states.SUSPENDING)

    @property
    def execution_graph(self):
        self.compile()
        return self._analyzer.execution_graph

    @lock_utils.locked
    def run(self):
        """Runs the flow in the engine to completion."""
        if self.storage.get_flow_state() == states.REVERTED:
            self._reset()
        self.compile()
        external_provides = set(self.storage.fetch_all().keys())
        missing = self._flow.requires - external_provides
        if missing:
            raise exc.MissingDependencies(self._flow, sorted(missing))

        self._task_executor.start()
        try:
            if self.storage.has_failures():
                self._revert()
            else:
                self._run()
        finally:
            self._task_executor.stop()

    def _run(self):
        self._change_state(states.RUNNING)
        try:
            state = self._root.execute()
        except Exception:
            self._change_state(states.FAILURE)
            self._revert(misc.Failure())
        else:
            self._change_state(state)

    @lock_utils.locked(lock='_state_lock')
    def _change_state(self, state):
        old_state = self.storage.get_flow_state()
        if not states.check_flow_transition(old_state, state):
            return
        self.storage.set_flow_state(state)
        try:
            flow_uuid = self._flow.uuid
        except AttributeError:
            # NOTE(harlowja): if the flow was just a single task, then it will
            # not itself have a uuid, but the constructed flow_detail will.
            if self._flow_detail is not None:
                flow_uuid = self._flow_detail.uuid
            else:
                flow_uuid = None
        details = dict(engine=self,
                       flow_name=self._flow.name,
                       flow_uuid=flow_uuid,
                       old_state=old_state)
        self.notifier.notify(state, details)

    def _reset(self):
        for name, uuid in self.storage.reset_tasks():
            details = dict(engine=self,
                           task_name=name,
                           task_uuid=uuid,
                           result=None)
            self.task_notifier.notify(states.PENDING, details)
        self._change_state(states.PENDING)

    def _ensure_storage_for(self, task_graph):
        # NOTE(harlowja): signal to the tasks that exist that we are about to
        # resume, if they have a previous state, they will now transition to
        # a resuming state (and then to suspended).
        self._change_state(states.RESUMING)  # does nothing in PENDING state
        for task in task_graph.nodes_iter():
            task_version = misc.get_version_string(task)
            self.storage.ensure_task(task.name, task_version, task.save_as)
        self._change_state(states.SUSPENDED)  # does nothing in PENDING state

    @lock_utils.locked
    def compile(self):
        if self._compiled:
            return
        task_graph = flow_utils.flatten(self._flow)
        if task_graph.number_of_nodes() == 0:
            raise exc.EmptyFlow("Flow %s is empty." % self._flow.name)
        self._analyzer = self._graph_analyzer_cls(task_graph,
                                                  self.storage)
        if self._task_executor is None:
            self._task_executor = self._task_executor_cls()
        if self._task_action is None:
            self._task_action = self._task_action_cls(self.storage,
                                                      self._task_executor,
                                                      self.task_notifier)
        self._root = self._graph_action_cls(self._analyzer,
                                            self.storage,
                                            self._task_action)
        # NOTE(harlowja): Perform initial state manipulation and setup.
        #
        # TODO(harlowja): This doesn't seem like it should be in a compilation
        # function since compilation seems like it should not modify any
        # external state.
        self._ensure_storage_for(task_graph)
        self._compiled = True


class SingleThreadedActionEngine(ActionEngine):
    """Engine that runs tasks in a serial manner."""
    _storage_cls = t_storage.SingleThreadedStorage


class MultiThreadedActionEngine(ActionEngine):
    """Engine that runs tasks in a parallel manner."""
    _storage_cls = t_storage.MultiThreadedStorage

    def _task_executor_cls(self):
        return executor.ParallelTaskExecutor(self._executor)

    def __init__(self, flow, flow_detail, backend, conf):
        super(MultiThreadedActionEngine, self).__init__(
            flow, flow_detail, backend, conf)
        self._executor = conf.get('executor', None)
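# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the distribution): loading and running one
# of the engines above. This assumes the ``run`` helper exposed by the
# taskflow.engines helpers module in this tree; ``HelloTask`` is a
# hypothetical task written only for this example.

import taskflow.engines
from taskflow.patterns import linear_flow
from taskflow import task


class HelloTask(task.Task):
    def execute(self):
        print('hello world')


if __name__ == '__main__':
    flow = linear_flow.Flow('greeting').add(HelloTask())
    # Compiles the flow into an execution graph and runs it to completion;
    # the engine configuration selects between the single and multi
    # threaded engines defined above.
    taskflow.engines.run(flow)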
taskflow-0.1.3/taskflow/patterns/0000775000175300017540000000000012275003604020173 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/patterns/linear_flow.py0000664000175300017540000000623612275003514023055 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from taskflow import exceptions
from taskflow import flow


class Flow(flow.Flow):
    """Linear Flow pattern.

    A linear (potentially nested) flow of *tasks/flows* that can be
    applied in order as one unit and rolled back as one unit using
    the reverse order that the *tasks/flows* have been applied in.

    NOTE(imelnikov): Tasks/flows contained in this linear flow must not
    depend on outputs (provided names/values) of tasks/flows that follow
    them.
    """

    def __init__(self, name):
        super(Flow, self).__init__(name)
        self._children = []

    def add(self, *items):
        """Adds a given task/tasks/flow/flows to this flow."""
        if not items:
            return self

        # NOTE(imelnikov): we add items to the end of the flow, so they
        # should not provide anything previous items of the flow require.
        requires = self.requires
        provides = self.provides
        for item in items:
            requires |= item.requires
            out_of_order = requires & item.provides
            if out_of_order:
                raise exceptions.InvariantViolation(
                    "%(item)s provides %(oo)s that are required "
                    "by previous item(s) of linear flow %(flow)s"
                    % dict(item=item.name, flow=self.name,
                           oo=sorted(out_of_order)))
            same_provides = provides & item.provides
            if same_provides:
                raise exceptions.DependencyFailure(
                    "%(item)s provides %(value)s but is already being"
                    " provided by %(flow)s and duplicate producers"
                    " are disallowed"
                    % dict(item=item.name, flow=self.name,
                           value=sorted(same_provides)))
            provides |= item.provides

        self._children.extend(items)
        return self

    def __len__(self):
        return len(self._children)

    def __iter__(self):
        for child in self._children:
            yield child

    def __getitem__(self, index):
        return self._children[index]

    @property
    def provides(self):
        provides = set()
        for subflow in self._children:
            provides.update(subflow.provides)
        return provides

    @property
    def requires(self):
        requires = set()
        provides = set()
        for subflow in self._children:
            requires.update(subflow.requires - provides)
            provides.update(subflow.provides)
        return requires
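# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the distribution): ordering in a linear
# flow. ``CreateVolume`` and ``AttachVolume`` are hypothetical tasks; the
# 'volume' symbol provided by the first is consumed by the second, which is
# only legal because the producer was added first.

from taskflow.patterns import linear_flow
from taskflow import task


class CreateVolume(task.Task):
    def execute(self):
        return {'id': 42}  # becomes the 'volume' value (see provides below)


class AttachVolume(task.Task):
    def execute(self, volume):
        print('attaching volume %s' % volume['id'])


flow = linear_flow.Flow('create-then-attach')
flow.add(CreateVolume(provides='volume'), AttachVolume())
# Reversing the order would raise InvariantViolation, since AttachVolume
# would then require a value provided by an item that follows it.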
taskflow-0.1.3/taskflow/patterns/unordered_flow.py0000664000175300017540000000675312275003514023576 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from taskflow import exceptions
from taskflow import flow


class Flow(flow.Flow):
    """Unordered Flow pattern.

    An unordered (potentially nested) flow of *tasks/flows* that can be
    executed in any order as one unit and rolled back as one unit.

    NOTE(harlowja): Since the flow is unordered there can *not* be any
    dependency between task/flow inputs (requirements) and task/flow
    outputs (provided names/values).
    """

    def __init__(self, name):
        super(Flow, self).__init__(name)
        # NOTE(imelnikov): An unordered flow is unordered, so we use a
        # set instead of a list to store children, so that people using
        # it don't depend on the ordering.
        self._children = set()

    def add(self, *items):
        """Adds a given task/tasks/flow/flows to this flow."""
        if not items:
            return self
        # NOTE(harlowja): check that items to be added are actually
        # independent.
        provides = self.provides
        old_requires = self.requires
        for item in items:
            item_provides = item.provides
            bad_provs = item_provides & old_requires
            if bad_provs:
                raise exceptions.InvariantViolation(
                    "%(item)s provides %(oo)s that are required "
                    "by other item(s) of unordered flow %(flow)s"
                    % dict(item=item.name, flow=self.name,
                           oo=sorted(bad_provs)))
            same_provides = provides & item.provides
            if same_provides:
                raise exceptions.DependencyFailure(
                    "%(item)s provides %(value)s but is already being"
                    " provided by %(flow)s and duplicate producers"
                    " are disallowed"
                    % dict(item=item.name, flow=self.name,
                           value=sorted(same_provides)))
            provides |= item.provides
        for item in items:
            bad_reqs = provides & item.requires
            if bad_reqs:
                raise exceptions.InvariantViolation(
                    "%(item)s requires %(oo)s that are provided "
                    "by other item(s) of unordered flow %(flow)s"
                    % dict(item=item.name, flow=self.name,
                           oo=sorted(bad_reqs)))
        self._children.update(items)
        return self

    @property
    def provides(self):
        provides = set()
        for subflow in self:
            provides.update(subflow.provides)
        return provides

    @property
    def requires(self):
        requires = set()
        for subflow in self:
            requires.update(subflow.requires)
        return requires

    def __len__(self):
        return len(self._children)

    def __iter__(self):
        for child in self._children:
            yield child
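# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the distribution): an unordered flow fans
# out mutually independent work. ``Ping`` is a hypothetical task; adding two
# tasks where one requires what the other provides would raise
# InvariantViolation, per the checks above.

from taskflow.patterns import unordered_flow
from taskflow import task


class Ping(task.Task):
    def execute(self):
        print('ping from %s' % self.name)


flow = unordered_flow.Flow('fan-out')
flow.add(Ping(name='a'), Ping(name='b'), Ping(name='c'))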
""" def __init__(self, name): super(Flow, self).__init__(name) self._graph = nx.freeze(nx.DiGraph()) def _validate(self, graph=None): if graph is None: graph = self._graph # Ensure that there is a valid topological ordering. if not nx.is_directed_acyclic_graph(graph): raise exc.DependencyFailure("No path through the items in the" " graph produces an ordering that" " will allow for correct dependency" " resolution") def link(self, u, v): """Link existing node u as a runtime dependency of existing node v.""" if not self._graph.has_node(u): raise ValueError('Item %s not found to link from' % (u)) if not self._graph.has_node(v): raise ValueError('Item %s not found to link to' % (v)) self._swap(self._link(u, v, manual=True)) return self def _link(self, u, v, graph=None, reason=None, manual=False): mutable_graph = True if graph is None: graph = self._graph mutable_graph = False # NOTE(harlowja): Add an edge to a temporary copy and only if that # copy is valid then do we swap with the underlying graph. attrs = graph_utils.get_edge_attrs(graph, u, v) if not attrs: attrs = {} if manual: attrs['manual'] = True if reason is not None: if 'reasons' not in attrs: attrs['reasons'] = set() attrs['reasons'].add(reason) if not mutable_graph: graph = nx.DiGraph(graph) graph.add_edge(u, v, **attrs) return graph def _swap(self, replacement_graph): """Validates the replacement graph and then swaps the underlying graph with a frozen version of the replacement graph (this maintains the invariant that the underlying graph is immutable). """ self._validate(replacement_graph) self._graph = nx.freeze(replacement_graph) def add(self, *items): """Adds a given task/tasks/flow/flows to this flow.""" items = [i for i in items if not self._graph.has_node(i)] if not items: return self requirements = collections.defaultdict(list) provided = {} def update_requirements(node): for value in node.requires: requirements[value].append(node) for node in self: update_requirements(node) for value in node.provides: provided[value] = node # NOTE(harlowja): Add items and edges to a temporary copy of the # underlying graph and only if that is successful added to do we then # swap with the underlying graph. tmp_graph = nx.DiGraph(self._graph) for item in items: tmp_graph.add_node(item) update_requirements(item) for value in item.provides: if value in provided: raise exc.DependencyFailure( "%(item)s provides %(value)s but is already being" " provided by %(flow)s and duplicate producers" " are disallowed" % dict(item=item.name, flow=provided[value].name, value=value)) provided[value] = item for value in item.requires: if value in provided: self._link(provided[value], item, graph=tmp_graph, reason=value) for value in item.provides: if value in requirements: for node in requirements[value]: self._link(item, node, graph=tmp_graph, reason=value) self._swap(tmp_graph) return self def __len__(self): return self.graph.number_of_nodes() def __iter__(self): for n in self.graph.nodes_iter(): yield n @property def provides(self): provides = set() for subflow in self: provides.update(subflow.provides) return provides @property def requires(self): requires = set() for subflow in self: requires.update(subflow.requires) return requires - self.provides @property def graph(self): return self._graph class TargetedFlow(Flow): """Graph flow with a target. Adds possibility to execute a flow up to certain graph node (task or subflow). 
""" def __init__(self, *args, **kwargs): super(TargetedFlow, self).__init__(*args, **kwargs) self._subgraph = None self._target = None def set_target(self, target_item): """Set target for the flow. Any items (tasks or subflows) not needed for the target item will not be executed. """ if not self._graph.has_node(target_item): raise ValueError('Item %s not found' % target_item) self._target = target_item self._subgraph = None def reset_target(self): """Reset target for the flow. All items of the flow will be executed. """ self._target = None self._subgraph = None def add(self, *items): """Adds a given task/tasks/flow/flows to this flow.""" super(TargetedFlow, self).add(*items) # reset cached subgraph, in case it was affected self._subgraph = None return self def link(self, u, v): """Link existing node u as a runtime dependency of existing node v.""" super(TargetedFlow, self).link(u, v) # reset cached subgraph, in case it was affected self._subgraph = None return self @property def graph(self): if self._subgraph is not None: return self._subgraph if self._target is None: return self._graph nodes = [self._target] nodes.extend(dst for _src, dst in traversal.dfs_edges(self._graph.reverse(), self._target)) self._subgraph = nx.freeze(self._graph.subgraph(nodes)) return self._subgraph taskflow-0.1.3/taskflow/test.py0000664000175300017540000000645312275003514017674 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from testtools import compat from testtools import matchers from testtools import testcase class GreaterThanEqual(object): """Matches if the item is geq than the matchers reference object.""" def __init__(self, source): self.source = source def match(self, other): if other >= self.source: return None return matchers.Mismatch("%s was not >= %s" % (other, self.source)) class TestCase(testcase.TestCase): """Test case base class for all taskflow unit tests.""" def assertRaisesRegexp(self, exc_class, pattern, callable_obj, *args, **kwargs): # TODO(harlowja): submit a pull/review request to testtools to add # this method to there codebase instead of having it exist in ours # since it really doesn't belong here. 
class ReRaiseOtherTypes(object): def match(self, matchee): if not issubclass(matchee[0], exc_class): compat.reraise(*matchee) class CaptureMatchee(object): def match(self, matchee): self.matchee = matchee[1] capture = CaptureMatchee() matcher = matchers.Raises(matchers.MatchesAll(ReRaiseOtherTypes(), matchers.MatchesException(exc_class, pattern), capture)) our_callable = testcase.Nullary(callable_obj, *args, **kwargs) self.assertThat(our_callable, matcher) return capture.matchee def assertGreater(self, first, second): matcher = matchers.GreaterThan(first) self.assertThat(second, matcher) def assertGreaterEqual(self, first, second): matcher = GreaterThanEqual(first) self.assertThat(second, matcher) def assertRegexpMatches(self, text, pattern): matcher = matchers.MatchesRegex(pattern) self.assertThat(text, matcher) def assertIsSuperAndSubsequence(self, super_seq, sub_seq, msg=None): super_seq = list(super_seq) sub_seq = list(sub_seq) current_tail = super_seq for sub_elem in sub_seq: try: super_index = current_tail.index(sub_elem) except ValueError: # element not found if msg is None: msg = ("%r is not subsequence of %r: " "element %r not found in tail %r" % (sub_seq, super_seq, sub_elem, current_tail)) self.fail(msg) else: current_tail = current_tail[super_index + 1:] taskflow-0.1.3/taskflow/version.py0000664000175300017540000000220212275003514020366 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from pbr import version as pbr_version TASK_VENDOR = "OpenStack Foundation" TASK_PRODUCT = "OpenStack TaskFlow" TASK_PACKAGE = None # OS distro package version suffix version_info = pbr_version.VersionInfo('taskflow') def version_string(): return version_info.version_string() def version_string_with_package(): if TASK_PACKAGE is None: return version_string() else: return "%s-%s" % (version_string(), TASK_PACKAGE) taskflow-0.1.3/taskflow/openstack/0000775000175300017540000000000012275003604020322 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/openstack/common/0000775000175300017540000000000012275003604021612 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/openstack/common/importutils.py0000664000175300017540000000421512275003514024561 0ustar jenkinsjenkins00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Import related utilities and helper functions. 
""" import sys import traceback def import_class(import_str): """Returns a class from a string including module and class.""" mod_str, _sep, class_str = import_str.rpartition('.') try: __import__(mod_str) return getattr(sys.modules[mod_str], class_str) except (ValueError, AttributeError): raise ImportError('Class %s cannot be found (%s)' % (class_str, traceback.format_exception(*sys.exc_info()))) def import_object(import_str, *args, **kwargs): """Import a class and return an instance of it.""" return import_class(import_str)(*args, **kwargs) def import_object_ns(name_space, import_str, *args, **kwargs): """Tries to import object from default namespace. Imports a class and return an instance of it, first by trying to find the class in a default namespace, then failing back to a full path if not found in the default namespace. """ import_value = "%s.%s" % (name_space, import_str) try: return import_class(import_value)(*args, **kwargs) except ImportError: return import_class(import_str)(*args, **kwargs) def import_module(import_str): """Import a module.""" __import__(import_str) return sys.modules[import_str] def try_import(import_str, default=None): """Try to import a module and if it fails return default.""" try: return import_module(import_str) except ImportError: return default taskflow-0.1.3/taskflow/openstack/common/gettextutils.py0000664000175300017540000004217312275003514024740 0ustar jenkinsjenkins00000000000000# Copyright 2012 Red Hat, Inc. # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ gettext for openstack-common modules. Usual usage in an openstack.common module: from taskflow.openstack.common.gettextutils import _ """ import copy import gettext import locale from logging import handlers import os import re from babel import localedata import six _localedir = os.environ.get('taskflow'.upper() + '_LOCALEDIR') _t = gettext.translation('taskflow', localedir=_localedir, fallback=True) _AVAILABLE_LANGUAGES = {} USE_LAZY = False def enable_lazy(): """Convenience function for configuring _() to use lazy gettext Call this at the start of execution to enable the gettextutils._ function to use lazy gettext functionality. This is useful if your project is importing _ directly instead of using the gettextutils.install() way of importing the _ function. """ global USE_LAZY USE_LAZY = True def _(msg): if USE_LAZY: return Message(msg, domain='taskflow') else: if six.PY3: return _t.gettext(msg) return _t.ugettext(msg) def install(domain, lazy=False): """Install a _() function using the given translation domain. Given a translation domain, install a _() function using gettext's install() function. The main difference from gettext.install() is that we allow overriding the default localedir (e.g. /usr/share/locale) using a translation-domain-specific environment variable (e.g. NOVA_LOCALEDIR). :param domain: the translation domain :param lazy: indicates whether or not to install the lazy _() function. 
The lazy _() introduces a way to do deferred translation of messages by installing a _ that builds Message objects, instead of strings, which can then be lazily translated into any available locale. """ if lazy: # NOTE(mrodden): Lazy gettext functionality. # # The following introduces a deferred way to do translations on # messages in OpenStack. We override the standard _() function # and % (format string) operation to build Message objects that can # later be translated when we have more information. def _lazy_gettext(msg): """Create and return a Message object. Lazy gettext function for a given domain, it is a factory method for a project/module to get a lazy gettext function for its own translation domain (i.e. nova, glance, cinder, etc.) Message encapsulates a string so that we can translate it later when needed. """ return Message(msg, domain=domain) from six import moves moves.builtins.__dict__['_'] = _lazy_gettext else: localedir = '%s_LOCALEDIR' % domain.upper() if six.PY3: gettext.install(domain, localedir=os.environ.get(localedir)) else: gettext.install(domain, localedir=os.environ.get(localedir), unicode=True) class Message(six.text_type): """A Message object is a unicode object that can be translated. Translation of Message is done explicitly using the translate() method. For all non-translation intents and purposes, a Message is simply unicode, and can be treated as such. """ def __new__(cls, msgid, msgtext=None, params=None, domain='taskflow', *args): """Create a new Message object. In order for translation to work gettext requires a message ID, this msgid will be used as the base unicode text. It is also possible for the msgid and the base unicode text to be different by passing the msgtext parameter. """ # If the base msgtext is not given, we use the default translation # of the msgid (which is in English) just in case the system locale is # not English, so that the base text will be in that locale by default. if not msgtext: msgtext = Message._translate_msgid(msgid, domain) # We want to initialize the parent unicode with the actual object that # would have been plain unicode if 'Message' was not enabled. msg = super(Message, cls).__new__(cls, msgtext) msg.msgid = msgid msg.domain = domain msg.params = params return msg def translate(self, desired_locale=None): """Translate this message to the desired locale. :param desired_locale: The desired locale to translate the message to, if no locale is provided the message will be translated to the system's default locale. :returns: the translated message in unicode """ translated_message = Message._translate_msgid(self.msgid, self.domain, desired_locale) if self.params is None: # No need for more translation return translated_message # This Message object may have been formatted with one or more # Message objects as substitution arguments, given either as a single # argument, part of a tuple, or as one or more values in a dictionary. 
# When translating this Message we need to translate those Messages too translated_params = _translate_args(self.params, desired_locale) translated_message = translated_message % translated_params return translated_message @staticmethod def _translate_msgid(msgid, domain, desired_locale=None): if not desired_locale: system_locale = locale.getdefaultlocale() # If the system locale is not available to the runtime use English if not system_locale[0]: desired_locale = 'en_US' else: desired_locale = system_locale[0] locale_dir = os.environ.get(domain.upper() + '_LOCALEDIR') lang = gettext.translation(domain, localedir=locale_dir, languages=[desired_locale], fallback=True) if six.PY3: translator = lang.gettext else: translator = lang.ugettext translated_message = translator(msgid) return translated_message def __mod__(self, other): # When we mod a Message we want the actual operation to be performed # by the parent class (i.e. unicode()), the only thing we do here is # save the original msgid and the parameters in case of a translation params = self._sanitize_mod_params(other) unicode_mod = super(Message, self).__mod__(params) modded = Message(self.msgid, msgtext=unicode_mod, params=params, domain=self.domain) return modded def _sanitize_mod_params(self, other): """Sanitize the object being modded with this Message. - Add support for modding 'None' so translation supports it - Trim the modded object, which can be a large dictionary, to only those keys that would actually be used in a translation - Snapshot the object being modded, in case the message is translated, it will be used as it was when the Message was created """ if other is None: params = (other,) elif isinstance(other, dict): params = self._trim_dictionary_parameters(other) else: params = self._copy_param(other) return params def _trim_dictionary_parameters(self, dict_param): """Return a dict that only has matching entries in the msgid.""" # NOTE(luisg): Here we trim down the dictionary passed as parameters # to avoid carrying a lot of unnecessary weight around in the message # object, for example if someone passes in Message() % locals() but # only some params are used, and additionally we prevent errors for # non-deepcopyable objects by unicoding() them. # Look for %(param) keys in msgid; # Skip %% and deal with the case where % is first character on the line keys = re.findall('(?:[^%]|^)?%\((\w*)\)[a-z]', self.msgid) # If we don't find any %(param) keys but have a %s if not keys and re.findall('(?:[^%]|^)%[a-z]', self.msgid): # Apparently the full dictionary is the parameter params = self._copy_param(dict_param) else: params = {} # Save our existing parameters as defaults to protect # ourselves from losing values if we are called through an # (erroneous) chain that builds a valid Message with # arguments, and then does something like "msg % kwds" # where kwds is an empty dictionary. 
src = {} if isinstance(self.params, dict): src.update(self.params) src.update(dict_param) for key in keys: params[key] = self._copy_param(src[key]) return params def _copy_param(self, param): try: return copy.deepcopy(param) except TypeError: # Fallback to casting to unicode this will handle the # python code-like objects that can't be deep-copied return six.text_type(param) def __add__(self, other): msg = _('Message objects do not support addition.') raise TypeError(msg) def __radd__(self, other): return self.__add__(other) def __str__(self): # NOTE(luisg): Logging in python 2.6 tries to str() log records, # and it expects specifically a UnicodeError in order to proceed. msg = _('Message objects do not support str() because they may ' 'contain non-ascii characters. ' 'Please use unicode() or translate() instead.') raise UnicodeError(msg) def get_available_languages(domain): """Lists the available languages for the given translation domain. :param domain: the domain to get languages for """ if domain in _AVAILABLE_LANGUAGES: return copy.copy(_AVAILABLE_LANGUAGES[domain]) localedir = '%s_LOCALEDIR' % domain.upper() find = lambda x: gettext.find(domain, localedir=os.environ.get(localedir), languages=[x]) # NOTE(mrodden): en_US should always be available (and first in case # order matters) since our in-line message strings are en_US language_list = ['en_US'] # NOTE(luisg): Babel <1.0 used a function called list(), which was # renamed to locale_identifiers() in >=1.0, the requirements master list # requires >=0.9.6, uncapped, so defensively work with both. We can remove # this check when the master list updates to >=1.0, and update all projects list_identifiers = (getattr(localedata, 'list', None) or getattr(localedata, 'locale_identifiers')) locale_identifiers = list_identifiers() for i in locale_identifiers: if find(i) is not None: language_list.append(i) # NOTE(luisg): Babel>=1.0,<1.3 has a bug where some OpenStack supported # locales (e.g. 'zh_CN', and 'zh_TW') aren't supported even though they # are perfectly legitimate locales: # https://github.com/mitsuhiko/babel/issues/37 # In Babel 1.3 they fixed the bug and they support these locales, but # they are still not explicitly "listed" by locale_identifiers(). # That is why we add the locales here explicitly if necessary so that # they are listed as supported. aliases = {'zh': 'zh_CN', 'zh_Hant_HK': 'zh_HK', 'zh_Hant': 'zh_TW', 'fil': 'tl_PH'} for (locale, alias) in six.iteritems(aliases): if locale in language_list and alias not in language_list: language_list.append(alias) _AVAILABLE_LANGUAGES[domain] = language_list return copy.copy(language_list) def translate(obj, desired_locale=None): """Gets the translated unicode representation of the given object. If the object is not translatable it is returned as-is. If the locale is None the object is translated to the system locale. 
:param obj: the object to translate :param desired_locale: the locale to translate the message to, if None the default system locale will be used :returns: the translated object in unicode, or the original object if it could not be translated """ message = obj if not isinstance(message, Message): # If the object to translate is not already translatable, # let's first get its unicode representation message = six.text_type(obj) if isinstance(message, Message): # Even after unicoding() we still need to check if we are # running with translatable unicode before translating return message.translate(desired_locale) return obj def _translate_args(args, desired_locale=None): """Translates all the translatable elements of the given arguments object. This method is used for translating the translatable values in method arguments which include values of tuples or dictionaries. If the object is not a tuple or a dictionary the object itself is translated if it is translatable. If the locale is None the object is translated to the system locale. :param args: the args to translate :param desired_locale: the locale to translate the args to, if None the default system locale will be used :returns: a new args object with the translated contents of the original """ if isinstance(args, tuple): return tuple(translate(v, desired_locale) for v in args) if isinstance(args, dict): translated_dict = {} for (k, v) in six.iteritems(args): translated_v = translate(v, desired_locale) translated_dict[k] = translated_v return translated_dict return translate(args, desired_locale) class TranslationHandler(handlers.MemoryHandler): """Handler that translates records before logging them. The TranslationHandler takes a locale and a target logging.Handler object to forward LogRecord objects to after translating them. This handler depends on Message objects being logged, instead of regular strings. The handler can be configured declaratively in the logging.conf as follows: [handlers] keys = translatedlog, translator [handler_translatedlog] class = handlers.WatchedFileHandler args = ('/var/log/api-localized.log',) formatter = context [handler_translator] class = openstack.common.log.TranslationHandler target = translatedlog args = ('zh_CN',) If the specified locale is not available in the system, the handler will log in the default locale. """ def __init__(self, locale=None, target=None): """Initialize a TranslationHandler :param locale: locale to use for translating messages :param target: logging.Handler object to forward LogRecord objects to after translation """ # NOTE(luisg): In order to allow this handler to be a wrapper for # other handlers, such as a FileHandler, and still be able to # configure it using logging.conf, this handler has to extend # MemoryHandler because only the MemoryHandlers' logging.conf # parsing is implemented such that it accepts a target handler. 
handlers.MemoryHandler.__init__(self, capacity=0, target=target) self.locale = locale def setFormatter(self, fmt): self.target.setFormatter(fmt) def emit(self, record): # We save the message from the original record to restore it # after translation, so other handlers are not affected by this original_msg = record.msg original_args = record.args try: self._translate_and_log_record(record) finally: record.msg = original_msg record.args = original_args def _translate_and_log_record(self, record): record.msg = translate(record.msg, self.locale) # In addition to translating the message, we also need to translate # arguments that were passed to the log method that were not part # of the main message e.g., log.info(_('Some message %s'), this_one)) record.args = _translate_args(record.args, self.locale) self.target.emit(record) taskflow-0.1.3/taskflow/openstack/common/__init__.py0000664000175300017540000000000012275003514023711 0ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/openstack/common/py3kcompat/0000775000175300017540000000000012275003604023704 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/openstack/common/py3kcompat/urlutils.py0000664000175300017540000000356012275003514026145 0ustar jenkinsjenkins00000000000000# # Copyright 2013 Canonical Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Python2/Python3 compatibility layer for OpenStack """ import six if six.PY3: # python3 import urllib.error import urllib.parse import urllib.request urlencode = urllib.parse.urlencode urljoin = urllib.parse.urljoin quote = urllib.parse.quote quote_plus = urllib.parse.quote_plus parse_qsl = urllib.parse.parse_qsl unquote = urllib.parse.unquote unquote_plus = urllib.parse.unquote_plus urlparse = urllib.parse.urlparse urlsplit = urllib.parse.urlsplit urlunsplit = urllib.parse.urlunsplit SplitResult = urllib.parse.SplitResult urlopen = urllib.request.urlopen URLError = urllib.error.URLError pathname2url = urllib.request.pathname2url else: # python2 import urllib import urllib2 import urlparse urlencode = urllib.urlencode quote = urllib.quote quote_plus = urllib.quote_plus unquote = urllib.unquote unquote_plus = urllib.unquote_plus parse = urlparse parse_qsl = parse.parse_qsl urljoin = parse.urljoin urlparse = parse.urlparse urlsplit = parse.urlsplit urlunsplit = parse.urlunsplit SplitResult = parse.SplitResult urlopen = urllib2.urlopen URLError = urllib2.URLError pathname2url = urllib.pathname2url taskflow-0.1.3/taskflow/openstack/common/py3kcompat/__init__.py0000664000175300017540000000000012275003514026003 0ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/openstack/common/timeutils.py0000664000175300017540000001424112275003514024205 0ustar jenkinsjenkins00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Time related utilities and helper functions. """ import calendar import datetime import time import iso8601 import six # ISO 8601 extended time format with microseconds _ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f' _ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S' PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND def isotime(at=None, subsecond=False): """Stringify time in ISO 8601 format.""" if not at: at = utcnow() st = at.strftime(_ISO8601_TIME_FORMAT if not subsecond else _ISO8601_TIME_FORMAT_SUBSECOND) tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC' st += ('Z' if tz == 'UTC' else tz) return st def parse_isotime(timestr): """Parse time from ISO 8601 format.""" try: return iso8601.parse_date(timestr) except iso8601.ParseError as e: raise ValueError(six.text_type(e)) except TypeError as e: raise ValueError(six.text_type(e)) def strtime(at=None, fmt=PERFECT_TIME_FORMAT): """Returns formatted utcnow.""" if not at: at = utcnow() return at.strftime(fmt) def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT): """Turn a formatted time back into a datetime.""" return datetime.datetime.strptime(timestr, fmt) def normalize_time(timestamp): """Normalize time in arbitrary timezone to UTC naive object.""" offset = timestamp.utcoffset() if offset is None: return timestamp return timestamp.replace(tzinfo=None) - offset def is_older_than(before, seconds): """Return True if before is older than seconds.""" if isinstance(before, six.string_types): before = parse_strtime(before).replace(tzinfo=None) else: before = before.replace(tzinfo=None) return utcnow() - before > datetime.timedelta(seconds=seconds) def is_newer_than(after, seconds): """Return True if after is newer than seconds.""" if isinstance(after, six.string_types): after = parse_strtime(after).replace(tzinfo=None) else: after = after.replace(tzinfo=None) return after - utcnow() > datetime.timedelta(seconds=seconds) def utcnow_ts(): """Timestamp version of our utcnow function.""" if utcnow.override_time is None: # NOTE(kgriffs): This is several times faster # than going through calendar.timegm(...) return int(time.time()) return calendar.timegm(utcnow().timetuple()) def utcnow(): """Overridable version of utils.utcnow.""" if utcnow.override_time: try: return utcnow.override_time.pop(0) except AttributeError: return utcnow.override_time return datetime.datetime.utcnow() def iso8601_from_timestamp(timestamp): """Returns a iso8601 formatted date from timestamp.""" return isotime(datetime.datetime.utcfromtimestamp(timestamp)) utcnow.override_time = None def set_time_override(override_time=None): """Overrides utils.utcnow. Make it return a constant time or a list thereof, one at a time. :param override_time: datetime instance or list thereof. If not given, defaults to the current UTC time. 
""" utcnow.override_time = override_time or datetime.datetime.utcnow() def advance_time_delta(timedelta): """Advance overridden time using a datetime.timedelta.""" assert(not utcnow.override_time is None) try: for dt in utcnow.override_time: dt += timedelta except TypeError: utcnow.override_time += timedelta def advance_time_seconds(seconds): """Advance overridden time by seconds.""" advance_time_delta(datetime.timedelta(0, seconds)) def clear_time_override(): """Remove the overridden time.""" utcnow.override_time = None def marshall_now(now=None): """Make an rpc-safe datetime with microseconds. Note: tzinfo is stripped, but not required for relative times. """ if not now: now = utcnow() return dict(day=now.day, month=now.month, year=now.year, hour=now.hour, minute=now.minute, second=now.second, microsecond=now.microsecond) def unmarshall_time(tyme): """Unmarshall a datetime dict.""" return datetime.datetime(day=tyme['day'], month=tyme['month'], year=tyme['year'], hour=tyme['hour'], minute=tyme['minute'], second=tyme['second'], microsecond=tyme['microsecond']) def delta_seconds(before, after): """Return the difference between two timing objects. Compute the difference in seconds between two date, time, or datetime objects (as a float, to microsecond resolution). """ delta = after - before return total_seconds(delta) def total_seconds(delta): """Return the total seconds of datetime.timedelta object. Compute total seconds of datetime.timedelta, datetime.timedelta doesn't have method total_seconds in Python2.6, calculate it manually. """ try: return delta.total_seconds() except AttributeError: return ((delta.days * 24 * 3600) + delta.seconds + float(delta.microseconds) / (10 ** 6)) def is_soon(dt, window): """Determines if time is going to happen in the next window seconds. :param dt: the time :param window: minimum seconds to remain to consider the time not soon :return: True if expiration is within the given duration """ soon = (utcnow() + datetime.timedelta(seconds=window)) return normalize_time(dt) <= soon taskflow-0.1.3/taskflow/openstack/common/jsonutils.py0000664000175300017540000001507512275003514024226 0ustar jenkinsjenkins00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ''' JSON related utilities. This module provides a few things: 1) A handy function for getting an object down to something that can be JSON serialized. See to_primitive(). 2) Wrappers around loads() and dumps(). The dumps() wrapper will automatically use to_primitive() for you if needed. 3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson is available. 
''' import datetime import functools import inspect import itertools import json try: import xmlrpclib except ImportError: # NOTE(jaypipes): xmlrpclib was renamed to xmlrpc.client in Python3 # however the function and object call signatures # remained the same. This whole try/except block should # be removed and replaced with a call to six.moves once # six 1.4.2 is released. See http://bit.ly/1bqrVzu import xmlrpc.client as xmlrpclib import six from taskflow.openstack.common import gettextutils from taskflow.openstack.common import importutils from taskflow.openstack.common import timeutils netaddr = importutils.try_import("netaddr") _nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod, inspect.isfunction, inspect.isgeneratorfunction, inspect.isgenerator, inspect.istraceback, inspect.isframe, inspect.iscode, inspect.isbuiltin, inspect.isroutine, inspect.isabstract] _simple_types = (six.string_types + six.integer_types + (type(None), bool, float)) def to_primitive(value, convert_instances=False, convert_datetime=True, level=0, max_depth=3): """Convert a complex object into primitives. Handy for JSON serialization. We can optionally handle instances, but since this is a recursive function, we could have cyclical data structures. To handle cyclical data structures we could track the actual objects visited in a set, but not all objects are hashable. Instead we just track the depth of the object inspections and don't go too deep. Therefore, convert_instances=True is lossy ... be aware. """ # handle obvious types first - order of basic types determined by running # full tests on nova project, resulting in the following counts: # 572754 # 460353 # 379632 # 274610 # 199918 # 114200 # 51817 # 26164 # 6491 # 283 # 19 if isinstance(value, _simple_types): return value if isinstance(value, datetime.datetime): if convert_datetime: return timeutils.strtime(value) else: return value # value of itertools.count doesn't get caught by nasty_type_tests # and results in infinite loop when list(value) is called. if type(value) == itertools.count: return six.text_type(value) # FIXME(vish): Workaround for LP bug 852095. Without this workaround, # tests that raise an exception in a mocked method that # has a @wrap_exception with a notifier will fail. If # we up the dependency to 0.5.4 (when it is released) we # can remove this workaround. if getattr(value, '__module__', None) == 'mox': return 'mock' if level > max_depth: return '?' # The try block may not be necessary after the class check above, # but just in case ... try: recursive = functools.partial(to_primitive, convert_instances=convert_instances, convert_datetime=convert_datetime, level=level, max_depth=max_depth) if isinstance(value, dict): return dict((k, recursive(v)) for k, v in six.iteritems(value)) elif isinstance(value, (list, tuple)): return [recursive(lv) for lv in value] # It's not clear why xmlrpclib created their own DateTime type, but # for our purposes, make it a datetime type which is explicitly # handled if isinstance(value, xmlrpclib.DateTime): value = datetime.datetime(*tuple(value.timetuple())[:6]) if convert_datetime and isinstance(value, datetime.datetime): return timeutils.strtime(value) elif isinstance(value, gettextutils.Message): return value.data elif hasattr(value, 'iteritems'): return recursive(dict(value.iteritems()), level=level + 1) elif hasattr(value, '__iter__'): return recursive(list(value)) elif convert_instances and hasattr(value, '__dict__'): # Likely an instance of something. Watch for cycles. 
# Ignore class member vars. return recursive(value.__dict__, level=level + 1) elif netaddr and isinstance(value, netaddr.IPAddress): return six.text_type(value) else: if any(test(value) for test in _nasty_type_tests): return six.text_type(value) return value except TypeError: # Class objects are tricky since they may define something like # __iter__ defined but it isn't callable as list(). return six.text_type(value) def dumps(value, default=to_primitive, **kwargs): return json.dumps(value, default=default, **kwargs) def loads(s): return json.loads(s) def load(s): return json.load(s) try: import anyjson except ImportError: pass else: anyjson._modules.append((__name__, 'dumps', TypeError, 'loads', ValueError, 'load')) anyjson.force_implementation(__name__) taskflow-0.1.3/taskflow/openstack/common/excutils.py0000664000175300017540000000717312275003514024034 0ustar jenkinsjenkins00000000000000# Copyright 2011 OpenStack Foundation. # Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Exception related utilities. """ import logging import sys import time import traceback import six from taskflow.openstack.common.gettextutils import _ class save_and_reraise_exception(object): """Save current exception, run some code and then re-raise. In some cases the exception context can be cleared, resulting in None being attempted to be re-raised after an exception handler is run. This can happen when eventlet switches greenthreads or when running an exception handler, code raises and catches an exception. In both cases the exception context will be cleared. To work around this, we save the exception state, run handler code, and then re-raise the original exception. If another exception occurs, the saved exception is logged and the new exception is re-raised. In some cases the caller may not want to re-raise the exception, and for those circumstances this context provides a reraise flag that can be used to suppress the exception. 
For example:: except Exception: with save_and_reraise_exception() as ctxt: decide_if_need_reraise() if not should_be_reraised: ctxt.reraise = False """ def __init__(self): self.reraise = True def __enter__(self): self.type_, self.value, self.tb, = sys.exc_info() return self def __exit__(self, exc_type, exc_val, exc_tb): if exc_type is not None: logging.error(_('Original exception being dropped: %s'), traceback.format_exception(self.type_, self.value, self.tb)) return False if self.reraise: six.reraise(self.type_, self.value, self.tb) def forever_retry_uncaught_exceptions(infunc): def inner_func(*args, **kwargs): last_log_time = 0 last_exc_message = None exc_count = 0 while True: try: return infunc(*args, **kwargs) except Exception as exc: this_exc_message = six.u(str(exc)) if this_exc_message == last_exc_message: exc_count += 1 else: exc_count = 1 # Do not log any more frequently than once a minute unless # the exception message changes cur_time = int(time.time()) if (cur_time - last_log_time > 60 or this_exc_message != last_exc_message): logging.exception( _('Unexpected exception occurred %d time(s)... ' 'retrying.') % exc_count) last_log_time = cur_time last_exc_message = this_exc_message exc_count = 0 # This should be a very rare event. In case it isn't, do # a sleep. time.sleep(1) return inner_func taskflow-0.1.3/taskflow/openstack/common/uuidutils.py0000664000175300017540000000204512275003514024214 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 Intel Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ UUID related utilities and helper functions. """ import uuid def generate_uuid(): return str(uuid.uuid4()) def is_uuid_like(val): """Returns validation of a value as a UUID. For our purposes, a UUID is a canonical form string: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa """ try: return str(uuid.UUID(val)) == val except (TypeError, ValueError, AttributeError): return False taskflow-0.1.3/taskflow/openstack/__init__.py0000664000175300017540000000000012275003514022421 0ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/states.py0000664000175300017540000001504212275003514020212 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import exceptions as exc # Job states. CLAIMED = 'CLAIMED' FAILURE = 'FAILURE' PENDING = 'PENDING' RUNNING = 'RUNNING' SUCCESS = 'SUCCESS' UNCLAIMED = 'UNCLAIMED' # Flow states. 
FAILURE = FAILURE PENDING = PENDING REVERTING = 'REVERTING' REVERTED = 'REVERTED' RUNNING = RUNNING SUCCESS = SUCCESS SUSPENDING = 'SUSPENDING' SUSPENDED = 'SUSPENDED' RESUMING = 'RESUMING' # Task states. FAILURE = FAILURE PENDING = PENDING REVERTED = REVERTED REVERTING = REVERTING SUCCESS = SUCCESS # TODO(harlowja): use when we can timeout tasks?? TIMED_OUT = 'TIMED_OUT' ## Flow state transitions # https://wiki.openstack.org/wiki/TaskFlow/States_of_Task_and_Flow#Flow_States _ALLOWED_FLOW_TRANSITIONS = frozenset(( (PENDING, RUNNING), # run it! (RUNNING, SUCCESS), # all tasks finished successfully (RUNNING, FAILURE), # some task(s) failed (RUNNING, SUSPENDING), # engine.suspend was called (RUNNING, RESUMING), # resuming from a previous running (SUCCESS, RUNNING), # see note below (FAILURE, RUNNING), # see note below (FAILURE, REVERTING), # flow failed, do cleanup now (REVERTING, REVERTED), # revert done (REVERTING, FAILURE), # revert failed (REVERTING, SUSPENDING), # engine.suspend was called (REVERTING, RESUMING), # resuming from a previous reverting (REVERTED, PENDING), # try again (SUSPENDING, SUSPENDED), # suspend finished (SUSPENDING, SUCCESS), # all tasks finished while we were waiting (SUSPENDING, FAILURE), # some tasks failed while we were waiting (SUSPENDING, REVERTED), # all tasks were reverted while we were waiting (SUSPENDING, RESUMING), # resuming from a previous suspending (SUSPENDED, RUNNING), # restart from suspended (SUSPENDED, REVERTING), # revert from suspended (RESUMING, SUSPENDED), # after flow resumed, it is suspended )) # NOTE(imelnikov): SUCCESS->RUNNING and FAILURE->RUNNING transitions are # useful when the flow or the flowdetails backing it were altered after the # flow was finished; then, client code may want to run through the flow again # to ensure all tasks from the updated flow had a chance to run. # NOTE(imelnikov): The engine cannot transition a flow from SUSPENDING to # SUSPENDED while some tasks from the flow are still running and some results # from them have not yet been retrieved and saved properly, so while a flow is # in the SUSPENDING state it may wait for some of the tasks to stop. Then, the # flow can go to the SUSPENDED, SUCCESS, FAILURE or REVERTED state depending # on the actual state of the tasks -- e.g. if all tasks were finished # successfully while we were waiting, the flow can be transitioned from # SUSPENDING to SUCCESS state. _IGNORED_FLOW_TRANSITIONS = frozenset( (a, b) for a in (PENDING, FAILURE, SUCCESS, SUSPENDED, REVERTED) for b in (SUSPENDING, SUSPENDED, RESUMING) if a != b ) def check_flow_transition(old_state, new_state): """Check that a flow can transition from old_state to new_state. If the transition can be performed, it returns True. If the transition should be ignored, it returns False. If the transition is not valid, it raises an InvalidState exception. """ if old_state == new_state: return False pair = (old_state, new_state) if pair in _ALLOWED_FLOW_TRANSITIONS: return True if pair in _IGNORED_FLOW_TRANSITIONS: return False raise exc.InvalidState("Flow transition from %s to %s is not allowed" % pair) ## Task state transitions # https://wiki.openstack.org/wiki/TaskFlow/States_of_Task_and_Flow#Task_States _ALLOWED_TASK_TRANSITIONS = frozenset(( (PENDING, RUNNING), # run it! 
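# (the normal completion/failure transitions come first; the restart # special cases are explained in the NOTE(harlowja) comments below)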
(RUNNING, SUCCESS), # the task finished successfully (RUNNING, FAILURE), # the task failed (FAILURE, REVERTING), # task failed, do cleanup now (SUCCESS, REVERTING), # some other task failed, do cleanup now (REVERTING, REVERTED), # revert done (REVERTING, FAILURE), # revert failed (REVERTED, PENDING), # try again # NOTE(harlowja): allow the tasks to restart if in the same state # as they were in before, as a task may be 'killed' while in one of the # below states and it is permissible to let the task re-enter that # same state to try to finish. (REVERTING, REVERTING), (RUNNING, RUNNING), # NOTE(harlowja): the task was 'killed' while in one of the starting/ending # states and it is permissible to let the task start running or # reverting again (if it really wants to). (REVERTING, RUNNING), (RUNNING, REVERTING), )) _IGNORED_TASK_TRANSITIONS = [ (SUCCESS, RUNNING), # already finished (PENDING, REVERTING), # never ran in the first place (REVERTED, REVERTING), # the task already reverted ] # NOTE(harlowja): ignore transitions to the same state (in these cases). # # NOTE(harlowja): the above ALLOWED_TASK_TRANSITIONS does allow # transitions to certain equivalent states (but only for a few special # cases). _IGNORED_TASK_TRANSITIONS.extend( (a, a) for a in (PENDING, FAILURE, SUCCESS, REVERTED) ) _IGNORED_TASK_TRANSITIONS = frozenset(_IGNORED_TASK_TRANSITIONS) def check_task_transition(old_state, new_state): """Check that a task can transition from old_state to new_state. If the transition can be performed, it returns True. If the transition should be ignored, it returns False. If the transition is not valid, it raises an InvalidState exception. """ pair = (old_state, new_state) if pair in _ALLOWED_TASK_TRANSITIONS: return True if pair in _IGNORED_TASK_TRANSITIONS: return False # TODO(harlowja): Should we check/allow for 3rd party states to be # triggered during RUNNING by having a concept of a sub-state that we also # verify against?? raise exc.InvalidState("Task transition from %s to %s is not allowed" % pair) taskflow-0.1.3/taskflow/task.py0000664000175300017540000002022512275003514017650 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import collections import contextlib import logging import six from taskflow import atom from taskflow.utils import reflection LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class BaseTask(atom.Atom): """An abstraction that defines a potential piece of work that can be applied and can be reverted to undo the work as a single task. """ TASK_EVENTS = ('update_progress', ) def __init__(self, name, provides=None): if name is None: name = reflection.get_class_name(self) super(BaseTask, self).__init__(name, provides) # Map of events => lists of callbacks to invoke on task events. 
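# (a defaultdict(list) lets bind() append a new handler without first # checking whether the event key exists; _trigger() then just iterates # whatever handlers are currently registered for the event)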
self._events_listeners = collections.defaultdict(list) @abc.abstractmethod def execute(self, *args, **kwargs): """Activate a given task which will perform some operation and return. This method can be used to perform an action on a given set of input requirements (passed in via *args and **kwargs) to accomplish some type of operation. This operation may provide some named outputs/results as a result of it executing for later reverting (or for other tasks to depend on). NOTE(harlowja): the result (if any) that is returned should be persistable so that it can be passed back into this task if reverting is triggered (especially in the case where reverting happens in a different python process or on a remote machine) and so that the result can be transmitted to other tasks (which may be local or remote). """ def revert(self, *args, **kwargs): """Revert this task using the result that the execute function provided as well as any failure information which caused the reversion to be triggered in the first place. NOTE(harlowja): The **kwargs which are passed into the execute() method will also be passed into this method. The **kwargs key 'result' will contain the execute() functions result (if any) and the **kwargs key 'flow_failures' will contain the failure information. """ def update_progress(self, progress, **kwargs): """Update task progress and notify all registered listeners. :param progress: task progress float value between 0 and 1 :param kwargs: task specific progress information """ if progress > 1.0: LOG.warn("Progress must be <= 1.0, clamping to upper bound") progress = 1.0 if progress < 0.0: LOG.warn("Progress must be >= 0.0, clamping to lower bound") progress = 0.0 self._trigger('update_progress', progress, **kwargs) def _trigger(self, event, *args, **kwargs): """Execute all handlers for the given event type.""" for (handler, event_data) in self._events_listeners.get(event, []): try: handler(self, event_data, *args, **kwargs) except Exception: LOG.exception("Failed calling `%s` on event '%s'", reflection.get_callable_name(handler), event) @contextlib.contextmanager def autobind(self, event_name, handler_func, **kwargs): """Binds a given function to the task for a given event name and then unbinds that event name and associated function automatically on exit. """ bound = False if handler_func is not None: try: self.bind(event_name, handler_func, **kwargs) bound = True except ValueError: LOG.exception("Failed binding functor `%s` as a receiver of" " event '%s' notifications emitted from task %s", handler_func, event_name, self) try: yield self finally: if bound: self.unbind(event_name, handler_func) def bind(self, event, handler, **kwargs): """Attach a handler to an event for the task. :param event: event type :param handler: callback to execute each time event is triggered :param kwargs: optional named parameters that will be passed to the event handler :raises ValueError: if invalid event type passed """ if event not in self.TASK_EVENTS: raise ValueError("Unknown task event '%s', can only bind" " to events %s" % (event, self.TASK_EVENTS)) assert six.callable(handler), "Handler must be callable" self._events_listeners[event].append((handler, kwargs)) def unbind(self, event, handler=None): """Remove a previously-attached event handler from the task. If handler function not passed, then unbind all event handlers for the provided event. If multiple of the same handlers are bound, then the first match is removed (and only the first match). 
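For example, a handler attached with ``task.bind('update_progress', on_progress)`` can later be detached with ``task.unbind('update_progress', on_progress)``, where ``on_progress`` is any previously bound callback (the name here is illustrative).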
:param event: event type :param handler: handler previously bound :rtype: boolean :return: whether anything was removed """ removed_any = False if not handler: removed_any = self._events_listeners.pop(event, removed_any) else: event_listeners = self._events_listeners.get(event, []) for i, (handler2, _event_data) in enumerate(event_listeners): if reflection.is_same_callback(handler, handler2): event_listeners.pop(i) removed_any = True break return bool(removed_any) class Task(BaseTask): """Base class for user-defined tasks. Adds the following features to Task: - auto-generates a name from the type of self - adds all execute argument names to task requirements - items provided by the task may be specified via the 'default_provides' class attribute or property """ default_provides = None def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None): """Initialize task instance.""" if provides is None: provides = self.default_provides super(Task, self).__init__(name, provides=provides) self._build_arg_mapping(self.execute, requires, rebind, auto_extract) class FunctorTask(BaseTask): """Adaptor to make a task from a callable. Take any callable and make a task from it. """ def __init__(self, execute, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert=None, version=None): assert six.callable(execute), ("Function to use for executing must be" " callable") if revert: assert six.callable(revert), ("Function to use for reverting must" " be callable") if name is None: name = reflection.get_callable_name(execute) super(FunctorTask, self).__init__(name, provides=provides) self._execute = execute self._revert = revert if version is not None: self.version = version self._build_arg_mapping(execute, requires, rebind, auto_extract) def execute(self, *args, **kwargs): return self._execute(*args, **kwargs) def revert(self, *args, **kwargs): if self._revert: return self._revert(*args, **kwargs) else: return None taskflow-0.1.3/taskflow/jobs/0000775000175300017540000000000012275003604017270 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/taskflow/jobs/__init__.py0000664000175300017540000000130312275003514021376 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. taskflow-0.1.3/taskflow/jobs/job.py0000664000175300017540000000520112275003514020412 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import threading from taskflow.openstack.common import uuidutils from taskflow.utils import lock_utils class Job(object): """A job is a higher level abstraction over a set of flows as well as the *ownership* of those flows; it is the highest piece of work that can be owned by an entity performing those flows. Only one entity will be operating on the flows contained in a job at a given time (for the foreseeable future). It is the object that should be transferred to another entity on failure so that the contained flows' ownership can be transferred to the secondary entity for resumption/continuation/reverting. """ def __init__(self, name, uuid=None): if uuid: self._uuid = uuid else: self._uuid = uuidutils.generate_uuid() self._name = name self._lock = threading.RLock() self._flows = [] self.owner = None self.state = None self.book = None @lock_utils.locked def add(self, *flows): self._flows.extend(flows) @lock_utils.locked def remove(self, flow): j = -1 for i, f in enumerate(self._flows): if f.uuid == flow.uuid: j = i break if j == -1: raise ValueError("Could not find %r to remove" % (flow,)) self._flows.pop(j) def __contains__(self, flow): for f in self: if f.uuid == flow.uuid: return True return False @property def uuid(self): """The uuid of this job.""" return self._uuid @property def name(self): """The non-uniquely identifying name of this job.""" return self._name def __iter__(self): # Don't iterate while holding the lock. with self._lock: flows = list(self._flows) for f in flows: yield f taskflow-0.1.3/taskflow/jobs/jobboard.py0000664000175300017540000000634612275003514021435 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six @six.add_metaclass(abc.ABCMeta) class JobBoard(object): """A jobboard is an abstract representation of a place where jobs can be posted, reposted, claimed and transferred. There can be multiple implementations of this job board, depending on the desired semantics and capabilities of the underlying jobboard implementation. """ def __init__(self, name): self._name = name @property def name(self): """The non-uniquely identifying name of this jobboard.""" return self._name @abc.abstractmethod def consume(self, job): """Permanently (and atomically) removes a job from the jobboard, signaling that this job has been completed by the entity assigned to that job. Only the entity that has claimed that job is able to consume a job. 
A job that has been consumed cannot be reclaimed or reposted by another entity (job postings are immutable). Any entity consuming an unclaimed job (or a job they do not own) will cause an exception. """ @abc.abstractmethod def post(self, job): """Atomically posts a given job to the jobboard, allowing others to attempt to claim that job (and subsequently work on that job). Once a job has been posted it can only be removed by consuming that job (after that job is claimed). Any entity can post or propose jobs to the jobboard (in the future this may be restricted). """ @abc.abstractmethod def claim(self, job, who): """Atomically attempts to claim the given job for the entity and either succeeds or fails at claiming by throwing corresponding exceptions. If a job is claimed it is expected that the entity that claims that job will at some time in the future work on that job's flows and either fail at completing them (resulting in a reposting) or consume that job from the jobboard (signaling its completion). """ @abc.abstractmethod def repost(self, job): """Atomically reposts the given job on the jobboard, allowing that job to be reclaimed by others. This would typically occur if the entity that has claimed the job has failed or is unable to complete the job or jobs it has claimed. Only the entity that has claimed that job can repost a job. Any entity reposting an unclaimed job (or a job they do not own) will cause an exception. """ taskflow-0.1.3/doc/0000775000175300017540000000000012275003604015246 5ustar jenkinsjenkins00000000000000taskflow-0.1.3/doc/taskflow.persistence.rst0000664000175300017540000000071412275003514022157 0ustar jenkinsjenkins00000000000000taskflow.persistence package ============================ Subpackages ----------- .. toctree:: taskflow.persistence.backends Submodules ---------- taskflow.persistence.logbook module ----------------------------------- .. automodule:: taskflow.persistence.logbook :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.persistence :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/conf.py0000664000175300017540000002370712275003514016556 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # # taskflow documentation build configuration file, created by # sphinx-quickstart on Mon Nov 25 17:55:12 2013. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.viewcode', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. 
#source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'taskflow' copyright = u'2013, Alex' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '' # The full version, including alpha/beta/rc tags. release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. 
#html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'taskflowdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'taskflow.tex', u'taskflow Documentation', u'Alex', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'taskflow', u'taskflow Documentation', [u'Alex'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'taskflow', u'taskflow Documentation', u'Alex', 'taskflow', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # -- Options for Epub output ---------------------------------------------- # Bibliographic Dublin Core info. epub_title = u'taskflow' epub_author = u'Alex' epub_publisher = u'Alex' epub_copyright = u'2013, Alex' # The basename for the epub file. It defaults to the project name. 
#epub_basename = u'taskflow' # The HTML theme for the epub output. Since the default themes are not optimized # for small screen space, using the same theme for HTML and epub output is # usually not wise. This defaults to 'epub', a theme designed to save visual # space. #epub_theme = 'epub' # The language of the text. It defaults to the language option # or en if the language is not set. #epub_language = '' # The scheme of the identifier. Typical schemes are ISBN or URL. #epub_scheme = '' # The unique identifier of the text. This can be an ISBN number # or the project homepage. #epub_identifier = '' # A unique identification for the text. #epub_uid = '' # A tuple containing the cover image and cover page html template filenames. #epub_cover = () # A sequence of (type, uri, title) tuples for the guide element of content.opf. #epub_guide = () # HTML files that should be inserted before the pages created by sphinx. # The format is a list of tuples containing the path and title. #epub_pre_files = [] # HTML files that should be inserted after the pages created by sphinx. # The format is a list of tuples containing the path and title. #epub_post_files = [] # A list of files that should not be packed into the epub file. #epub_exclude_files = [] # The depth of the table of contents in toc.ncx. #epub_tocdepth = 3 # Allow duplicate toc entries. #epub_tocdup = True # Choose between 'default' and 'includehidden'. #epub_tocscope = 'default' # Fix unsupported image types using the PIL. #epub_fix_images = False # Scale large images. #epub_max_image_width = 0 # How to display URL addresses: 'footnote', 'no', or 'inline'. #epub_show_urls = 'inline' # If false, no index is generated. #epub_use_index = True taskflow-0.1.3/doc/taskflow.patterns.rst0000664000175300017540000000135012275003514021470 0ustar jenkinsjenkins00000000000000taskflow.patterns package ========================= Submodules ---------- taskflow.patterns.graph_flow module ----------------------------------- .. automodule:: taskflow.patterns.graph_flow :members: :undoc-members: :show-inheritance: taskflow.patterns.linear_flow module ------------------------------------ .. automodule:: taskflow.patterns.linear_flow :members: :undoc-members: :show-inheritance: taskflow.patterns.unordered_flow module --------------------------------------- .. automodule:: taskflow.patterns.unordered_flow :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.patterns :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.engines.action_engine.rst0000664000175300017540000000211412275003514024060 0ustar jenkinsjenkins00000000000000taskflow.engines.action_engine package ====================================== Submodules ---------- taskflow.engines.action_engine.base_action module ------------------------------------------------- .. automodule:: taskflow.engines.action_engine.base_action :members: :undoc-members: :show-inheritance: taskflow.engines.action_engine.engine module -------------------------------------------- .. automodule:: taskflow.engines.action_engine.engine :members: :undoc-members: :show-inheritance: taskflow.engines.action_engine.graph_action module -------------------------------------------------- .. automodule:: taskflow.engines.action_engine.graph_action :members: :undoc-members: :show-inheritance: taskflow.engines.action_engine.task_action module ------------------------------------------------- .. 
automodule:: taskflow.engines.action_engine.task_action :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.engines.action_engine :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.engines.rst0000664000175300017540000000112012275003514021253 0ustar jenkinsjenkins00000000000000taskflow.engines package ======================== Subpackages ----------- .. toctree:: taskflow.engines.action_engine Submodules ---------- taskflow.engines.base module ---------------------------- .. automodule:: taskflow.engines.base :members: :undoc-members: :show-inheritance: taskflow.engines.helpers module ------------------------------- .. automodule:: taskflow.engines.helpers :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.engines :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/index.rst0000664000175300017540000000043112275003514017105 0ustar jenkinsjenkins00000000000000Taskflow ======== Taskflow is a Python library for OpenStack that helps make task execution easy, consistent, and reliable. Contents ======== .. toctree:: :maxdepth: 2 taskflow Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` taskflow-0.1.3/doc/taskflow.persistence.backends.sqlalchemy.rst0000664000175300017540000000135112275003514026067 0ustar jenkinsjenkins00000000000000taskflow.persistence.backends.sqlalchemy package ================================================ Submodules ---------- taskflow.persistence.backends.sqlalchemy.migration module --------------------------------------------------------- .. automodule:: taskflow.persistence.backends.sqlalchemy.migration :members: :undoc-members: :show-inheritance: taskflow.persistence.backends.sqlalchemy.models module ------------------------------------------------------ .. automodule:: taskflow.persistence.backends.sqlalchemy.models :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.persistence.backends.sqlalchemy :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.persistence.backends.rst0000664000175300017540000000221412275003514023725 0ustar jenkinsjenkins00000000000000taskflow.persistence.backends package ===================================== Subpackages ----------- .. toctree:: taskflow.persistence.backends.sqlalchemy Submodules ---------- taskflow.persistence.backends.base module ----------------------------------------- .. automodule:: taskflow.persistence.backends.base :members: :undoc-members: :show-inheritance: taskflow.persistence.backends.impl_dir module --------------------------------------------- .. automodule:: taskflow.persistence.backends.impl_dir :members: :undoc-members: :show-inheritance: taskflow.persistence.backends.impl_memory module ------------------------------------------------ .. automodule:: taskflow.persistence.backends.impl_memory :members: :undoc-members: :show-inheritance: taskflow.persistence.backends.impl_sqlalchemy module ---------------------------------------------------- .. automodule:: taskflow.persistence.backends.impl_sqlalchemy :members: :undoc-members: :show-inheritance: Module contents --------------- .. 
automodule:: taskflow.persistence.backends :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.utils.rst0000664000175300017540000000303112275003514020766 0ustar jenkinsjenkins00000000000000taskflow.utils package ====================== Submodules ---------- taskflow.utils.eventlet_utils module ------------------------------------ .. automodule:: taskflow.utils.eventlet_utils :members: :undoc-members: :show-inheritance: taskflow.utils.flow_utils module -------------------------------- .. automodule:: taskflow.utils.flow_utils :members: :undoc-members: :show-inheritance: taskflow.utils.graph_utils module --------------------------------- .. automodule:: taskflow.utils.graph_utils :members: :undoc-members: :show-inheritance: taskflow.utils.lock_utils module -------------------------------- .. automodule:: taskflow.utils.lock_utils :members: :undoc-members: :show-inheritance: taskflow.utils.misc module -------------------------- .. automodule:: taskflow.utils.misc :members: :undoc-members: :show-inheritance: taskflow.utils.persistence_utils module --------------------------------------- .. automodule:: taskflow.utils.persistence_utils :members: :undoc-members: :show-inheritance: taskflow.utils.reflection module -------------------------------- .. automodule:: taskflow.utils.reflection :members: :undoc-members: :show-inheritance: taskflow.utils.threading_utils module ------------------------------------- .. automodule:: taskflow.utils.threading_utils :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.utils :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.listeners.rst0000664000175300017540000000130412275003514021637 0ustar jenkinsjenkins00000000000000taskflow.listeners package ========================== Submodules ---------- taskflow.listeners.base module ------------------------------ .. automodule:: taskflow.listeners.base :members: :undoc-members: :show-inheritance: taskflow.listeners.logging module --------------------------------- .. automodule:: taskflow.listeners.logging :members: :undoc-members: :show-inheritance: taskflow.listeners.printing module ---------------------------------- .. automodule:: taskflow.listeners.printing :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.listeners :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.jobs.rst0000664000175300017540000000075212275003514020572 0ustar jenkinsjenkins00000000000000taskflow.jobs package ===================== Submodules ---------- taskflow.jobs.job module ------------------------ .. automodule:: taskflow.jobs.job :members: :undoc-members: :show-inheritance: taskflow.jobs.jobboard module ----------------------------- .. automodule:: taskflow.jobs.jobboard :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow.jobs :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/taskflow.rst0000664000175300017540000000222212275003514017630 0ustar jenkinsjenkins00000000000000taskflow package ================ Subpackages ----------- .. toctree:: taskflow.engines taskflow.jobs taskflow.listeners taskflow.patterns taskflow.persistence taskflow.utils Submodules ---------- taskflow.exceptions module -------------------------- .. automodule:: taskflow.exceptions :members: :undoc-members: :show-inheritance: taskflow.flow module -------------------- .. 
automodule:: taskflow.flow :members: :undoc-members: :show-inheritance: taskflow.states module ---------------------- .. automodule:: taskflow.states :members: :undoc-members: :show-inheritance: taskflow.storage module ----------------------- .. automodule:: taskflow.storage :members: :undoc-members: :show-inheritance: taskflow.task module -------------------- .. automodule:: taskflow.task :members: :undoc-members: :show-inheritance: taskflow.version module ----------------------- .. automodule:: taskflow.version :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: taskflow :members: :undoc-members: :show-inheritance: taskflow-0.1.3/doc/Makefile0000664000175300017540000001516212275003514016713 0ustar jenkinsjenkins00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. 
The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/taskflow.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/taskflow.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/taskflow" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/taskflow" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. 
The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." taskflow-0.1.3/tox-tmpl.ini0000664000175300017540000000430512275003514016770 0ustar jenkinsjenkins00000000000000# NOTE(harlowja): this is a template, not a fully-generated tox.ini, use toxgen # to translate this into a fully specified tox.ini file before using. Changes # made to tox.ini will only be reflected if ran through the toxgen generator. [tox] minversion = 1.6 skipsdist = True [testenv] usedevelop = True install_command = pip install {opts} {packages} setenv = VIRTUAL_ENV={envdir} LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=C deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt alembic>=0.4.1 psycopg2 kazoo>=1.3.1 commands = python setup.py testr --slowest --testr-args='{posargs}' [tox:jenkins] downloadcache = ~/cache/pip [testenv:pep8] commands = flake8 {posargs} [testenv:pylint] setenv = VIRTUAL_ENV={envdir} deps = -r{toxinidir}/requirements.txt pylint==0.26.0 commands = pylint [testenv:cover] basepython = python2.7 deps = {[testenv:py27]deps} commands = python setup.py testr --coverage --testr-args='{posargs}' [testenv:venv] commands = {posargs} [flake8] builtins = _ exclude = .venv,.tox,dist,doc,./taskflow/openstack/common,*egg,.git,build,tools # NOTE(imelnikov): pyXY envs are considered to be default, so they must have # richest set of test requirements [testenv:py26] basepython = python2.6 deps = {[testenv:py26-sa7-mysql-ev]deps} [testenv:py27] basepython = python2.7 deps = -r{toxinidir}/requirements.txt -r{toxinidir}/optional-requirements.txt -r{toxinidir}/test-requirements.txt [testenv:py33] basepython = python3.3 deps = {[testenv:py33-sa9-pymysql]deps} [axes] python = py26,py27,py33 sqlalchemy = sa7,sa8,sa9 mysql = mysql,pymysql eventlet = ev,* [axis:python:py26] basepython = python2.6 deps = {[testenv]deps} [axis:python:py27] basepython = python2.7 deps = {[testenv]deps} [axis:python:py33] basepython = python3.3 deps = {[testenv]deps} [axis:eventlet:ev] deps = eventlet>=0.13.0 constraints = !python:py33 [axis:sqlalchemy:sa7] deps = SQLAlchemy<=0.7.99 [axis:sqlalchemy:sa8] deps = SQLAlchemy>=0.8,<=0.8.99 [axis:sqlalchemy:sa9] deps = SQLAlchemy>=0.9,<=0.9.99 [axis:mysql:mysql] deps = MySQL-python constraints = !python:py33 [axis:mysql:pymysql] deps = pyMySQL taskflow-0.1.3/LICENSE0000664000175300017540000002363712275003514015521 0ustar jenkinsjenkins00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. taskflow-0.1.3/pylintrc0000664000175300017540000000143012275003514016266 0ustar jenkinsjenkins00000000000000[MESSAGES CONTROL] # Disable the message(s) with the given id(s). disable=C0111,I0011,R0201,R0922,W0142,W0511,W0613,W0622,W0703 [BASIC] # Variable names can be 1 to 31 characters long, with lowercase and underscores variable-rgx=[a-z_][a-z0-9_]{0,30}$ # Argument names can be 2 to 31 characters long, with lowercase and underscores argument-rgx=[a-z_][a-z0-9_]{1,30}$ # Method names should be at least 3 characters long # and be lowercased with underscores method-rgx=[a-z_][a-z0-9_]{2,50}$ # Don't require docstrings on tests. no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$ [DESIGN] max-args=10 max-attributes=20 max-branchs=30 max-public-methods=100 max-statements=60 min-public-methods=0 [REPORTS] output-format=parseable include-ids=yes [VARIABLES] additional-builtins=_ taskflow-0.1.3/ChangeLog0000664000175300017540000004563012275003604016263 0ustar jenkinsjenkins00000000000000CHANGES ======= 0.1.3 ----- * Add validate() base method * Fix deadlock on waiting for pending_writers to be empty * Rename self._zk to self._client * Use listener instead of AutoSuspendTask in test_suspend_flow * Use test utils in test_suspend_flow * Use reader/writer locks in storage * Allow the usage of a passed in sqlalchemy engine * Be really careful with non-ascii data in exceptions/failures * Run zookeeper tests if localhost has a compat. 
taskflow-0.1.3/ChangeLog0000664000175300017540000004563012275003604016263 0ustar jenkinsjenkins00000000000000CHANGES
=======

0.1.3
-----

* Add validate() base method
* Fix deadlock on waiting for pending_writers to be empty
* Rename self._zk to self._client
* Use listener instead of AutoSuspendTask in test_suspend_flow
* Use test utils in test_suspend_flow
* Use reader/writer locks in storage
* Allow the usage of a passed in sqlalchemy engine
* Be really careful with non-ascii data in exceptions/failures
* Run zookeeper tests if localhost has a compat. zookeeper server
* Add optional-requirements.txt
* Move kazoo to testenv requirements
* Unpin testtools version and bump subunit to >=0.0.18
* Remove use of str() in utils.misc.Failure
* Be more resilient around import/detection/setup errors
* Some zookeeper persistence improvements/adjustments
* Add a validate method to dir and memory backends
* Update oslo copy to oslo commit 39e1c5c5f39204
* Update oslo.lock from incubator commit 3c125e66d183
* Refactor task/flow flattening
* Engine tests refactoring
* Tests: don't pass 'values' to task constructor
* Test fetching backends via entry points
* Pin testtools to 0.9.34 in test requirements
* Ensure we register the new zookeeper backend as an entrypoint
* Implement ZooKeeper as persistence storage backend
* Use addCleanup instead of tearDown in test_sql_persistence
* Retain the same api for all helpers
* Update execute/revert comments
* Added more unit tests for Task and FunctorTask
* Doc strings and comments clean-up
* List examples function doesn't accept arguments
* Tests: Persistence test mixin fix
* Test using mysql + postgres if available
* Clean-up and improve async-utils tests
* Use already defined PENDING variable
* Add utilities for working with binary data
* Cleanup engine base class
* Engine cleanups
* Update atom comments
* Put full set of requirements to py26, py27 and py33 envs
* Add base class Atom for all flow units
* Add more requirements to cover tox environment
* Put SQLAlchemy requirements on single line
* Proper exception raised from check_task_transition
* Fix function name typo in persistence utils
* Use the same way of assert isinstance in all tests
* Minor cleanup in test_examples
* Add possibility to create Failure from exception
* Exceptions cleanup
* Alter is_locked() helper comment
* Add setup.cfg keywords to describe taskflow
* Use the released toxgen tool instead of our copy

0.1.2
-----

* Move autobinding to task base class
* Assert functor task revert/execute are callable
* Use the six callback checker
* Add envs for different sqlalchemy versions
* Refactor task handler binding
* Move six to the right location
* Use constants for the execution event strings
* Added htmlcov folder to .gitignore
* Reduce visibility of task_action
* Change internal data store of LogBook from list to dict
* Misc minor fixes to taskflow/examples
* Add connection_proxy param
* Ignore doc build files
* Fix spelling errors
* Switch to just using tox
* Enable H202 warning for flake8
* Check tasks should not provide same values
* Allow max_backoff and use count instead of attempts
* Skip invariant checking and adding when nothing provided
* Avoid not_done naming conflict
* Add stronger checking of backend configuration
* Raise type error instead of silencing it
* Move the container fetcher function to utils
* Explicitly list the valid transitions to RESUMING state
* Name the graph property the same as in engine
* Bind outside of the try block
* Graph action refactoring
* Add make_completed_future to async_utils
* Update oslo-incubator copy to oslo-incubator commit 8b2b0b743
* Ensure that mysql traditional mode is enabled
* Move async utils to own file
* Update requirements from openstack/requirements
* Code cleanup for eventlet_utils.wait_for_any
* Refactor engine internals
* Add wait_for_any method to eventlet utils
* Introduce TaskExecutor
* Run some engine tests with eventlet if it's available
* Do not create TaskAction for each task
* Storage: use names instead of uuids in interface
* Add tests for metadata updates
* Fix sqlalchemy 0.8 issues
* Fix minor python3 incompatibility
* Speed up FlowDetail.find
* Fix misspellings
* Raise exception when trying to run empty flow
* Use update_task_metadata in set_task_progress
* Capture task duration
* Fix another instance of callback comparison
* Don't forget to return self
* Fixes how instance methods are not deregistered
* Targeted graph flow pattern
* All classes should explicitly inherit object class
* Initial commit of sphinx related files
* Improve is_valid_attribute_name utility function
* Coverage calculation improvements
* Fix up python 3.3 incompatibilities

0.1.1
-----

* Pass flow failures to task's revert method
* Storage: add methods to get all flow failures
* Pbr requirement went missing
* Update code to comply with hacking 0.8.0
* Don't reset tasks to PENDING state while reverting
* Let pbr determine version automatically
* Be more careful when passing result to revert()

0.1
---

* Support for optional task arguments
* Do not erase task progress details
* Storage: restore injected data on resumption
* Inherit the greenpool default size
* Add debug logging showing what is flattened
* Remove incorrect comment
* Unit tests refactoring
* Use py3kcompat.urlutils from oslo instead of six.urllib_parse
* Update oslo and bring py3kcompat in
* Support several output formats in state_graph tool
* Remove task_action state checks
* Wrapped exception doc/intro comment updates
* Doc/intro updates for simple_linear_listening
* Add docs/intro to simple_linear example
* Update intro/comments for reverting_linear example
* Add docs explaining what/how resume_volume_create works
* A few resuming from backend comment adjustments
* Add an introduction to explain resume_many example
* Increase persistence example comments
* Boost graph flow example comments
* Also allow "_" to be valid identifier
* Remove uuid from taskflow.flow.Flow
* A few additional example boot_vm comments + tweaks
* Add a resuming booting vm example
* Add task state verification
* Beef up storage comments
* Removed unused utilities
* Helpers to save flow factory in metadata
* Storage: add flow name and uuid properties
* Create logbook if not provided for create_flow_details
* Prepare for 0.1 release
* Comment additions for exponential backoff
* Beef up the action engine comments
* Pattern comment additions/adjustments
* Add more comments to flow/task
* Save with the same connection
* Add a persistence util logbook formatting function
* Rename get_graph() -> execution_graph
* Continue adding docs to examples
* Add more comments that explain example & usage
* Add more comments that explain example & usage
* Add more comments that explain example & usage
* Add more comments that explain example & usage
* Fix several python3 incompatibilities
* Python3 compatibility for utils.reflection
* No module name for builtin type and exception names
* Fix python3 compatibility issues in examples
* Fix print statements for python 2/3
* Add a mini-cinder volume create with resumption
* Update oslo copy and bring over versionutils
* Move toward python 3/2 compatible metaclass
* Add a secondary booting vm example
* Resumption from backend for action engine
* A few wording/spelling adjustments
* Create a green executor & green future
* Add a simple mini-billing stack example
* Add an example which uses a sqlite persistence layer
* Add state to dot->svg tool
* Add a set of useful listeners
* Remove decorators and move to utils
* Add reasons as to why the edges were created
* Fix entrypoints being updated/created by update.py
* Validate each flow state change
* Update state sequence for failed flows
* Flow utils and adding comments
* Bump requirements to the latest
* Add an inspect sanity check and note about bound methods
* Some small exception cleanups
* Check for duplicate task names on flattening
* Correctly save task versions
* Allow access by index
* Fix importing of module files
* Wrapping and serializing failures
* Simpler API to load flows into engines
* Avoid setting object variables
* A few adjustments to the progress code
* Cleanup unused states
* Remove d2to dependency
* Warn if multiple providers found
* Memory persistence backend improvements
* Create database from models for SQLite
* Don't allow mutating operations on the underlying graph
* Add graph density
* Suspend single and multi threaded engines
* Remove old tests for nonexistent flow types
* Boot fake vm example fixed
* Export graph to dot util
* Remove unused utility classes
* Remove black list of graph flow
* Task decorator was removed and examples updated
* Remove weakref usage
* Add basic sanity tests for unordered flow
* Clean up job/jobboard code
* Add a directory/filesystem based persistence layer
* Remove the older (not used) resumption mechanism
* Reintegrate parallel action
* Add a flow flattening util
* Allow specifying default provides at task definition
* Graph flow, sequential graph action
* Task progress
* Verify provides and requires
* Remap the emails of the committers
* Use executors instead of pools
* Fix linked exception forming
* Remove threaded and distributed flows
* Add check that task provides all results it should
* Use six string types instead of basestring
* Remove usage of oslo.db and oslo.config
* Move toward using a backend+connection model
* Add provides and requires properties to Flow
* Fixed crash when running the engine
* Remove the common config since it's not needed
* Allow the lock decorator to take a list
* Allow provides to be a set and results to be a dictionary
* Allow engines to be copied + blacklist broken flows
* Add link to why we have to make this factory due to late binding
* Use the lock decorator and close/join the thread pool
* Engine, task, linear_flow unification
* Combine multiple exceptions into a linked one
* Converted some examples to use patterns/engines
* MultiThreaded engine and parallel action
* State management for engines
* Action engine: save task results
* Initial implementation of action-based engine
* Further updates to update.py
* Split utils module
* Rename Task.__call__ to Task.execute
* Reader/writer no longer used
* Rename "revert_with" => "revert" and "execute_with" to "execute"
* Notify on task reversion
* Have runner keep the exception
* Use distutil version classes
* Add features to task.Task
* Add get_required_callable_args utility function
* Add get_callable_name utility function
* Require uuid + move functor_task to task.py
* Check examples when running tests
* Use the same root test class
* LazyPluggable is no longer used
* Add a locally running threaded flow
* Change namings in functor_task and add docs to its __init__
* Rework the persistence layer
* Do not have the runner modify the uuid
* Refactor decorators
* Nicer way to make task out of any callable
* Use oslo's sqlalchemy layer
* File movements
* Added Backend API Database Implementation
* Added Memory Persistence API and Generic Datatypes
* Resync the latest oslo code
* Remove openstack.common.exception usage
* Forgot to move this one to the right folder
* Add a new simple calculator example
* Quiet the provider linking
* Deep-copy not always possible
* Add an example which simulates booting a vm
* Add a more complicated graph example
* Move examples under the source tree
* Adjust a bunch of hacking violations
* Fix typos in test_linear_flow.py and simple_linear_listening.py
* Fix minor code style
* Fix two minor bugs in docs/examples
* Show file modifications and fix dirpath based on config file
* Add a way to use taskflow until library stabilized
* Provide the length of the flows
* Parents should be frozen after creation
* Allow graph dependencies to be manually provided
* Add helper reset internals function
* Move to using pbr
* Unify creation/usage of uuids
* Use the runner interface as the best task lookup
* Ensure we document and complete correct removal
* Pass runners instead of task objects/uuids
* Move how resuming is done to be disconnected from jobs/flows
* Clear out before connecting
* Make connection/validation of tasks be after they are added
* Add helper to do notification
* Store results by add() uuid instead of in array format
* Integrate better locking and a runner helper class
* Cleaning up various components
* Move some of the ordered flow helper classes to utils
* Allow instance methods to be wrapped and unwrapped correctly
* Add a start of a few simple examples
* Update readme to point to links
* Fix most of the hacking rules
* Fix all flake8 E* and F* errors
* Fix the current flake8 errors
* Don't keep the state/version in the task name
* Dinky change to trigger jenkins so I can cleanup
* Add the task to the accumulator before running
* Add .settings and .venv into .gitignore
* Fix tests for python 2.6
* Add the ability to soft_reset a workflow
* Add a .gitreview file so that git-review works
* Ensure we have an exception and capture the exc_info
* Update how graph results are fetched when they are optional
* Allow for optional task requirements
* We were not notifying when errors occurred so fix that
* Bring over the nova get_wrapped_function helper and use it
* Allow for passing in the metadata when creating a task detail entry
* Update how the version task functor attribute is found
* Remove more tab incidents
* Removed test noise and formatted for pep8
* Continue work on decorator usage
* Ensure we pickup the packages
* Fixed pep8 formatting... Finally
* Add flow disassociation and adjust the associate path
* Add a setup.cfg and populate it with a default set of nosetests options
* Fix spacing
* Add a better task name algorithm
* Add a major/minor version
* Add a get many attr/s and join helper functions
* Reduce test noise
* Fix a few unit tests due to changes
* Ensure we handle functor names and resetting correctly
* Remove safe_attr
* Modifying db tests
* Removing .pyc
* Fixing .py in .gitignore
* Update db api test
* DB api test cases and revisions
* Allow for turning off auto-extract and add a test
* Use a function to filter args and add comments
* Use update instead of overwrite
* Move decorators to new file and update to use better wraps()
* Continue work with decorator usage
* Update with adding a provides and requires decorator for standalone function usage
* Instead of apply use __call__
* Add comment to why we accumulate before notifying task listeners
* Use a default sqlite backing using a taskflow file
* Add a basic rollback accumulator test
* Use rollback accumulator and remove requires()/provides() from being functions
* Allow (or disallow) multiple providers of items
* Clean the lines in a separate function
* Resync with oslo-incubator
* Remove uuid since we are now using uuidutils
* Remove error code not found in strict version of pylint
* Include more dev testing packages + matching versions
* Update dependencies for new db/distributed backends
* Move some of the functions to use their openstack/common counterparts
* More import fixups
* Patch up the imports
* Fix syntax error
* Rename cause -> exception and make exception optional
* Allow any of the previous tasks to satisfy requirements
* Ensure we change the self and parents states correctly
* Always have a name provided
* Cleaning up files/extraneous files/fixing relations
* More pylint cleanups
* Make more tests for linear and shuffle test utils to common file
* Only do differences on set objects
* Ensure we fetch the appropriate inputs for the running task
* Have the linear workflow verify the task's inputs
* Specify that task provides/requires must be an immutable set
* Clean Up for DB changes
* db api defined
* Fleshing out sqlalchemy api
* Almost done with sqlalchemy api
* Fix state check
* Fix flow exception wording
* Ensure job is pending before we associate and run
* More pylint cleanups
* Ensure we associate with parent flows as well
* Add a nice run() method to the job class that will run a flow
* Massive pylint cleanup
* deleting .swp files
* deleting .swp files
* cleaning for initial pull request
* Add a few more graph ordering test cases
* Update automatic naming and arg checks
* Update order calls and connect call
* Move flow failure to flow file and correctly catch ordering failure
* Just kidding - really fixing relations this time
* Fixing table relations
* Allow job id to be passed in
* Check who is being connected to and ensure > 0 connectors
* Move the await function to utils
* Graph tests and adjustments related to
* Add graph flow tests
* Fix name changes missed
* Enable extraction of what a functor requires from its args
* Called flow now, not workflow
* Second pass at models
* More tests
* Simplify existence checks
* More pythonic functions and workflow -> flow renaming
* Added more utils, added model for workflow
* Spelling errors and stuff
* adding parentheses to read method
* Implemented basic sqlalchemy session class
* Setting up Configs and SQLAlchemy/DB backend
* Fix the import
* Use a different logger method if tolerant vs not tolerant
* More function comments
* Add a bunch of linear workflow tests
* Allow resuming stage to be interrupted
* Fix the missing context variable
* Moving over celery/distributed workflows
* Update description wording
* Pep fix
* Instead of using notify member functions, just use functors
* More wording fixes
* Add the ability to alter the task failure reconciliation
* Correctly run the tasks after partial resumption
* Another wording fix
* Spelling fix
* Allow the functor task to take a name and provide it a default
* Updated functor task comments
* Move some of the useful helpers and functions to other files
* Add the ability to associate a workflow with a job
* Move the useful functor wrapping task from test to wrappers file
* Add a thread posting/claiming example and rework tests to use it
* After adding reposting/unclaiming reflect those changes here
* Add a nicer string name that shows what the class name is
* Adjust some of the states jobs and workflows could be in
* Add a more useful name that shows this is a task
* Remove impl of erasing which doesn't do much and allow for job reposting
* Various reworkings
* Rename logbook contents
* Get a memory test example working
* Add a pylintrc file to be used with pylint
* Rework the logbook to be chapter/page based
* Move ordered workflow to its own file
* Increase the number of comments
* Start adding in a more generic DAG based workflow
* Remove dict_provider dependency
* Rework due to code comments
* Begin adding testing functionality
* Fill in the majority of the memory job
* Rework how we should be using lists instead of ordereddicts for optimal usage
* Add a context manager to the useful read/writer lock
* Ensure that the task has a name
* Add a running state which can be used to know when a workflow is running
* Rename the date created field
* Add some search functionality and adjust the await() function params
* Remove and add a few new exceptions
* Shrink down the exposed methods
* Remove the promise object for now
* Add RESUMING
* Fix spelling
* Continue on getting ready for the memory impl. to be useful
* On python <= 2.6 we need to import ordereddict
* Remove a few other references to nova
* Add in openstack common and remove patch references
* Move simplification over
* Continue moving here
* Update README.md
* Update readme
* Move the code over for now
* Initial commit