taskflow-4.6.4/.coveragerc

[run]
branch = True
source = taskflow
omit = taskflow/tests/*,taskflow/openstack/*,taskflow/test.py

[report]
ignore_errors = True

taskflow-4.6.4/.mailmap

Anastasia Karpinska Angus Salkeld Changbin Liu Changbin Liu Ivan A. Melnikov Jessica Lucci Jessica Lucci Joshua Harlow Joshua Harlow Kevin Chen Kevin Chen Kevin Chen

taskflow-4.6.4/.pre-commit-config.yaml

# We from the Oslo project decided to pin repos based on the
# commit hash instead of the version tag to prevent arbitrary
# code from running on developers' machines. To update to a
# newer version, run `pre-commit autoupdate` and then replace
# the newer versions with their commit hash.
default_language_version:
  python: python3

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: 9136088a246768144165fcc3ecc3d31bb686920a  # v3.3.0
    hooks:
      - id: trailing-whitespace
      # Replaces or checks mixed line ending
      - id: mixed-line-ending
        args: ['--fix', 'lf']
        exclude: '.*\.(svg)$'
      # Forbid files which have a UTF-8 byte-order marker
      - id: check-byte-order-marker
      # Checks that non-binary executables have a proper shebang
      - id: check-executables-have-shebangs
      # Check for files that contain merge conflict strings.
      - id: check-merge-conflict
      # Check for debugger imports and py37+ breakpoint()
      # calls in python source
      - id: debug-statements
      - id: check-yaml
        files: .*\.(yaml|yml)$
  - repo: local
    hooks:
      - id: flake8
        name: flake8
        additional_dependencies:
          - hacking>=3.0.1,<3.1.0
        language: python
        entry: flake8
        files: '^.*\.py$'
        exclude: '^(doc|releasenotes|tools)/.*$'

taskflow-4.6.4/.stestr.conf

[DEFAULT]
test_path=./taskflow/tests/unit
top_dir=.
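Taken together, .coveragerc and .stestr.conf aim test discovery at taskflow/tests/unit and exclude the taskflow.test helper module from coverage measurement. As a rough sketch of the kind of test module that layout expects — assuming the taskflow.test.TestCase helper and the public task/linear_flow/engines APIs, and not a file shipped in this archive — a minimal unit test might look like:

# Hypothetical example only: a minimal test module of the sort that would
# live under taskflow/tests/unit/ (the tree .stestr.conf points at). It
# assumes the taskflow.test.TestCase helper (excluded from coverage by
# .coveragerc) and the public task/linear_flow/engines APIs.
from taskflow import engines
from taskflow import task
from taskflow import test
from taskflow.patterns import linear_flow


class EchoTask(task.Task):
    # The value returned by execute() is stored under this symbol name.
    default_provides = 'greeting'

    def execute(self):
        return 'hello'


class EchoFlowTest(test.TestCase):
    def test_flow_runs_and_provides_result(self):
        flow = linear_flow.Flow('echo-flow').add(EchoTask())
        engine = engines.load(flow)
        engine.run()
        self.assertEqual('hello', engine.storage.fetch('greeting'))

Under the .stestr.conf settings above, stestr would pick such a module up automatically because it lives beneath ./taskflow/tests/unit.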
taskflow-4.6.4/.zuul.yaml

- project:
    templates:
      - check-requirements
      - lib-forward-testing-python3
      - openstack-cover-jobs
      - openstack-python3-wallaby-jobs
      - periodic-stable-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3

taskflow-4.6.4/AUTHORS

Adam Harwell Alexander Gorodnev Anastasia Karpinska Anastasia Karpinska Andreas Jaeger Angus Salkeld Ann Kamyshnikova Ann Taraday Atsushi SAKAI Balaji Narayanan Ben Nemec Brian Jarrett ChangBo Guo(gcb) Changbin Liu Christian Berendt Chuck Short Corey Bryant Cyril Roelandt Dan Krause Daniel Bengtsson Davanum Srinivas Dirk Mueller Doug Hellmann Doug Hellmann Elod Illes Eric Harney Flavio Percoco Fredrik Bergroth Gevorg Davoian Ghanshyam Mann Greg Hill Gregory Thiemonge Ha Manh Dong Hervé Beraud Ihar Hrachyshka Ivan A. Melnikov Ivan A. Melnikov Ivan Kolodyazhny Ivan Melnikov James Page Jay S. Bryant Jeremy Stanley Jessica Lucci Ji-Wei Joe Gordon Joshua Harlow Joshua Harlow Joshua Harlow Kevin Chen Luong Anh Tuan Manish Godara Matthew Thode Michael Johnson Michal Arbet Min Pae Monty Taylor Olga Kopylova Ondřej Nový OpenStack Release Bot Pablo Iranzo Gómez Pavlo Shchelokovskyy Rafael Rivero Rick van de Loo Sahid Orentino Ferdjaoui Sascha Peilicke Sean McGinnis Sriram Madapusi Vasudevan Stanislav Kudriashev Stanislav Kudriashev Stephen Finucane Suneel Bomminayuni Takashi Kajinami Theodoros Tsioutsias Thomas Bechtold Thomas Bechtold Timofey Durakov Tony Breeds Victor Rodionov Vilobh Meshram Vu Cong Tuan XiaojueGuan YAMAMOTO Takashi Zhao Lei Zhihai Song baiwenteng chenghuiyu gecong1973 gengchc2 haobing1 howardlee ji-xuepeng jiansong leizhang lin-hua-cheng liuqing liuwei luke.li maaoyu melissaml qinchunhua rahulram skudriashev sunjia ting.wang tonytan4ever venkatamahesh wangqi weiweigu wu.shiming xhzhf xuanyandong yangxurong zhang.lei zhangzs

taskflow-4.6.4/CONTRIBUTING.rst

If you would like to contribute to the development of OpenStack, you must
follow the steps documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Once those steps have been completed, changes to OpenStack should be submitted
for review via the Gerrit tool, following the workflow documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/taskflow

The mailing list is (prefix subjects with "[Oslo][TaskFlow]"):

   https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

Questions and discussions take place in #openstack-oslo on irc.OFTC.net.
taskflow-4.6.4/ChangeLog

CHANGES
=======

4.6.4
-----

* Handle invalid redis entries in RedisJobBoard
* Fix minor typo in ActionEngine exception message
* Use LOG.warning instead of deprecated LOG.warn

4.6.3
-----

4.6.2
-----

* Replace deprecated import of ABCs from collections
* Use custom JSONType columns

4.6.1
-----

* Updating for OFTC IRC network
* Fix flowdetails meta size
* Use unittest.mock instead of mock
* setup.cfg: Replace dashes with underscores
* Move flake8 as a pre-commit local target
* Remove lower-constraints remnants

4.6.0
-----

* Fix deprecated Alembic function args
* Dropping lower constraints testing
* Use TOX\_CONSTRAINTS\_FILE
* Use py3 as the default runtime for tox
* Add Python3 wallaby unit tests
* Update master for stable/victoria
* ignore reno generated artifacts
* Adding pre-commit

4.5.0
-----

* [goal] Migrate testing to ubuntu focal

4.4.0
-----

* Avoid endless loop on StorageFailure
* Add sentinel redis support
* Switch from unittest2 compat methods to Python 3.x methods

4.3.1
-----

* Make test-setup.sh compatible with mysql8

4.3.0
-----

* Stop to use the \_\_future\_\_ module

4.2.0
-----

* Switch to newer openstackdocstheme and reno versions
* Cap jsonschema 3.2.0 as the minimal version
* Import modules, not classes
* Bump default tox env from py37 to py38
* Add py38 package metadata
* Add release notes links to doc index
* Drop use of deprecated collections classes
* Add Python3 victoria unit tests
* Update master for stable/ussuri

4.1.0
-----

* Zookeeper backend SSL support

4.0.0
-----

* [ussuri][goal] Drop python 2.7 support and testing

3.8.0
-----

* Switch to Ussuri jobs
* Update TaskFlow for networkx 2.x
* Update master for stable/train
* Fix python3.8 hmac compatibility

3.7.1
-----

* Use mysql LONGTEXT for atomdetails results
* Add Python 3 Train unit tests
* Add local bindep.txt
* Remove unused tools/tox\_install.sh

3.7.0
-----

* update git.openstack.org to opendev
* Dropping the py35 testing
* Remove debtcollector requirement
* Update Sphinx requirement

3.6.0
-----

* Remove unsused tools/tox\_install.sh
* Handle collections.abc deprecations
* Uncap jsonschema
* OpenDev Migration Patch
* Update master for stable/stein
* add python 3.7 unit test job

3.4.0
-----

* Move test requirements out of runtime requirements
* Change openstack-dev to openstack-discuss

3.3.1
-----

* Update doc/conf.py to avoid warnings with sphinx 1.8
* Use templates for cover and lower-constraints
* Remove the duplicated word
* Fix a symbol error
* Create KazooClient with taskflow logger
* add lib-forward-testing-python3 test job
* add python 3.6 unit test job
* add proper pydot3 dependency
* import zuul job settings from project-config
* Switch to use stestr for unit test
* Add pydot test dependency
* Remove PyPI downloads
* Update reno for stable/rocky
* Update various links in docs

3.2.0
-----

* Remove unused link target
* Fix code to support networkx > 1.0
* add release notes to README.rst
* replace http with https
* Update links in README
* fix tox python3 overrides
* Drop py34 target in tox.ini
* Uncap networkx
* give pep8 and docs environments all of the dependencies they need
* Trivial: update pypi url to new url
* Fix doc build
* Trivial: Update pypi url to new url
* stop using tox\_install.sh
* only run doc8 as part of pep8 test job
* standardize indentation in tox.ini
* set default python
to python3 * don't let tox\_install.sh error if there is nothing to do * Updated from global requirements * add lower-constraints job * Updated from global requirements * Fix invalid json unit test * Update reno for stable/queens * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements 3.1.0 ----- * Updated from global requirements * Add doc8 to pep8 environment * Use doc/requirements.txt 3.0.1 ----- 3.0.0 ----- * Remove setting of version/release from releasenotes * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove class StopWatch from timing 2.17.0 ------ 2.16.0 ------ * Updated from global requirements * Updated from global requirements * Update "indentify" to "identify" in comments 2.15.0 ------ * Updated from global requirements * Remove method blather in log adapter * Remove kwarg timeout in executor conductor * Updated from global requirements * Avoid log warning when closing is underway (on purpose) * Update reno for stable/pike * Updated from global requirements 2.14.0 ------ * Updated from global requirements * Update URLs in documents according to document migration * Updated from global requirements * Fix process based executor task proxying-back events * turn on warning-is-error in doc build * switch from oslosphinx to openstackdocstheme * rearrange existing documentation into the new standard layout * Updated from global requirements 2.13.0 ------ * Updated from global requirements * Fix html\_last\_updated\_fmt for Python3 * Replace assertRaisesRegexp with assertRaisesRegex 2.12.0 ------ * Updated from global requirements * Updated from global requirements * do not allow redis job reclaim by same owner 2.11.0 ------ * Fix py35 test failure * Stop using oslotest.mockpatch * Updated from global requirements * python3.0 has deprecated LOG.warn 2.10.0 ------ * Updated from global requirements * Updated from global requirements * Updated from global requirements * Prepare for using standard python tests * Use https instead of http for git.openstack.org * Updated from global requirements * Update reno for stable/ocata * Protect storage better against external concurrent access 2.9.0 ----- * Remove dep on monotonic * Rename engine analyzer to be named selector * Update author and author-email * Updated from global requirements * Updated from global requirements * Add Constraints support * Show team and repo badges on README 2.8.0 ----- * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Updated from global requirements * Remove vim header from source files * Fix release notes gate job failure * Updated from global requirements * Use assertIs(Not)None to check for None * Fix typo in tox.ini * Fix broken link * Replace retrying with tenacity * Updated from global requirements * Add reno for release notes management * Updated from global requirements 2.7.0 ----- * Changed the home-page link * Using assertIsNone() instead of assertIs(None, ..) 
* Updated from global requirements * Fix a typo in documentation * Fix typo: remove redundant 'that' * Updated from global requirements * Fix a typo in logging.py * Use method ensure\_tree from oslo.utils * Make failure formatter configurable for DynamicLoggingListener * Updated from global requirements * Some classes not define \_\_ne\_\_() built-in function 2.6.0 ----- 2.5.0 ----- * Updated from global requirements * Add logging around metadata, ignore tallying + history 2.4.0 ----- * Updated from global requirements * Start to add a location for contributed useful tasks/flows/more * Change dependency to use flavors * Updated from global requirements * Remove white space between print and () * Updated from global requirements * Add Python 3.5 classifier and venv * Replace assertEqual(None, \*) with assertIsNone in tests 2.3.0 ----- * Updated from global requirements * remove unused LOG * Fixes: typo error in comments * Updated from global requirements * Fix some misspellings in the function name and descriptions * Updated from global requirements 2.2.0 ----- * Don't use deprecated method timeutils.isotime * Add tests to verify kwargs behavior on revert validation * Make tests less dependent on transient state 2.1.0 ----- * Updated from global requirements * Ensure the fetching jobs does not fetch anything when in bad state * Updated from global requirements * Use the full 'get\_execute\_failures' vs the shortname * Split revert/execute missing args messages * Updated from global requirements * Instead of a multiprocessing queue use sockets via asyncore * Add a simple sanity test for pydot outputting 2.0.0 ----- * Updated from global requirements * Fix documentation related to missing BaseTask class * Remove deprecated things for 2.0 release * Always used the library packaged mock 1.32.0 ------ * Attempt to cancel active futures when suspending is underway * Allow for specifying green threaded to parallel engine * Make conductor.stop stop the running engine gracefully 1.31.0 ------ * Updated from global requirements * Don't set html\_last\_updated\_fmt without git * Updated from global requirements * Add the ability to skip resolving from activating * Fix export\_to\_dot for networkx package changes * Ensure upgrade for sqlalchemy is protected by a lock * Add periodic jobboard refreshing (incase of sync issues) * Fallback if git is absent * Allow for revert to have a different argument list from execute 1.30.0 ------ * Updated from global requirements * Use a automaton machine for WBE request state machine * Sqlalchemy-utils double entry (already in test-requirements.txt) 1.29.0 ------ * Updated from global requirements * Refactor Atom/BaseTask/Task/Retry class hierarchy * Add missing direct dependency for sqlalchemy-utils 1.28.0 ------ * Add WBE worker expiry * Some WBE protocol/executor cleanups * Remove need for separate notify thread * Updated from global requirements * Don't bother scanning for workers if no new messages arrived * Updated from global requirements * Updated from global requirements * Updated from global requirements * Allow cachedproperty to avoid locking * Spice up WBE banner and add simple worker \_\_main\_\_ entrypoint 1.27.0 ------ * Updated from global requirements * Fix for WBE sporadic timeout of tasks * Add some basic/initial engine statistics * Handle cases where exc\_args can't be serialized as JSON in the WBE * Enable OS\_LOG\_CAPTURE so that logs can be seen (on error) * Retrieve the store from flowdetails as well, if it exists * Disable oslotest LOG 
capturing * Updated from global requirements * Updated from global requirements * Use helper function for post-atom-completion work * Ensure that the engine finishes up even under sent-in failures * 99 bottles example trace logging was not being output * Add useful/helpful comment to retry scheduler * Updated from global requirements * Updated from global requirements * Replace clear zookeeper python with clear zookeeper bash * Remove stray LOG.blather 1.26.0 ------ * Some additional engine logging * Replace deprecated library function os.popen() with subprocess * Add comment as to why we continue when tallying edge decider nay voters * Add rundimentary and/or non-optimized job priorities * Allow for alterations in decider 'area of influence' * Fix wrong usage of iter\_utils.unique\_seen * Updated from global requirements * Updated from global requirements * Updated from global requirements * Use the retrying lib. to do basic sqlalchemy engine validation * For taskflow patterns don't show taskflow.patterns prefix * Rename '\_emit' -> '\_try\_emit' since it is best-effort (not ensured) * Cache atom name -> actions and provide accessor function * Quote/standardize atom name output * Use shared util helper for driver name + config extraction * Fix currently broken and inactive mysql tests * Trap and expose exception any 'args' * Revert "Remove failure version number" * Move all internal blather usage/calls to trace usage/calls * Start rename of BLATHER -> TRACE * Add ability of job poster/job iterator to wait for jobs to complete * Updated from global requirements * Use 'match\_type' utility function instead of staticmethod * Remove failure version number * Translate kazoo exceptions into job equivalents if register\_entity fails * Change name of misc.ensure\_dict to misc.safe\_copy\_dict * Avoid recreating notify details for each dispatch iteration * fix doc change caused by the change of tooz * Deprecated tox -downloadcache option removed * Updated from global requirements * Add some useful commentary on rebinding processes * Use small helper routine to fetch atom metadata entries * Remove 'MANIFEST.in' * Pass through run timeout in engine run() * Change engine 'self.\_check' into a decorator 1.25.0 ------ * Move validation of compiled unit out of compiler * Allow provided flow to be empty * Move engine options extraction to \_\_init\_\_ methods * Updated from global requirements * Updated from global requirements * Convert executor proxied engine options into their correct type * Enable conversion of the tree nodes into a digraph * Use the misc.ensure\_dict helper in conductor engine options saving * Add optional 'defer\_reverts' behavior * Add public property from storage to flowdetail.meta * Adding notification points for job completion * Remove python 2.6 and cleanup tox.ini * Correctly apply deciders across flow boundaries * Move 'convert\_to\_timeout' to timing type as a helper function * Use conductor entity class constant instead of raw string * Add a executor backed conductor and have existing impl. 
use it * Add flow durations to DurationListener * Update docstrings on entity type * Move 'fill\_iter' to 'iter\_utils.fill' 1.24.0 ------ * Updated from global requirements * Updated from global requirements * Register conductor information on jobboard * Add atom priority ability * Add validation of base exception type(s) in failure type * Fix order of assertEqual for unit.test\_\* * Fix order of assertEqual for unit.worker\_based * Fix order of assertEqual for unit.persistence * Fix order of assertEqual for unit.patterns * Fix order of assertEqual for unit.jobs * Fix order of assertEqual for unit.action\_engine 1.23.0 ------ * Updated from global requirements * feat: add max\_dispatches arg to conductor's run * Ensure node 'remove' and 'disassociate' can not be called when frozen * Add in-memory backend delete() in recursive/non-recursive modes * Use batch 'get\_atoms\_states' where we can * Use automaton's converters/pydot * Make more of the WBE logging and '\_\_repr\_\_' message more useful * Fix bad sphinx module reference * Relabel internal engine 'event' -> 'outcome' * No need for Oslo Incubator Sync * Use the node built-in 'dfs\_iter' instead of recursion 1.22.0 ------ * Simplify flow action engine compilation * Fix 'dependened upon' spelling error * docs - Set pbr warnerrors option for doc build * Rename 'history' -> 'Release notes' * Remove dummy/placeholder 'ChangeLog' as its not needed * Remove ./taskflow/openstack/common as it no longer exists * Remove quotes from subshell call in bash script * Refactor common parts of 'get\_maybe\_ready\_for' methods * Fix the sphinx build path in .gitignore file * Change ignore-errors to ignore\_errors * Use graphs as the underlying structure of patterns * Updated from global requirements * Fix '\_cache\_get' multiple keyword argument name overlap * Use the sqlalchemy-utils json type instead of our own 1.21.0 ------ * Updated from global requirements * Fix how the dir persistence backend was not listing logbooks * Explain that jobs arch. 
diagram is only for zookeeper 1.20.0 ------ * Updated from global requirements * iter\_nodes method added to flows * Updated from global requirements * Use 'iter\_utils.count' to determine how many unfinished nodes left * Fix flow states link * Avoid running this example if zookeeper is not found * Updated from global requirements * Have the storage class provide a 'change\_flow\_state' method 1.19.0 ------ * Updated from global requirements * Updated from global requirements * Add nicely made task structural diagram * Updated from global requirements * Remove some temporary variables not needed * Only remove all 'next\_nodes' that were done * Fix busted stevedore doc(s) link * Extend and improve failure logging * Improve docstrings in graph flow to denote exceptions raised * Enable testr OS\_DEBUG to be TRACE(blather) by default * Updated from global requirements * Show intermediary compilation(s) when BLATHER is enabled 1.18.0 ------ * Give the GC more of a break with regard to cycles * Base class for deciders * Remove extra runner layer and just use use machine in engine * Updated from global requirements * .gitignore update * Avoid adding 1 to a failure (if it gets triggered) * Replace the tree 'pformat()' recursion with non-recursive variant * Fix seven typos and one readability on taskflow documentation 1.17.0 ------ * Bump futurist and remove waiting code in taskflow * Use the action engine '\_check' helper method * Modify listeners to handle the results now possible from revert() * Remove no longer used '\_was\_failure' static method * Remove legacy py2.6 backwards logging compat. code * Updated from global requirements * Fix lack of space between functions * Create and use a serial retry executor * Just link to the worker engine docs instead of including a TOC inline * Link to run() method in engines doc * Add ability to reset an engine via a \`reset\` method 1.16.0 ------ * Updated from global requirements * Use 'addCleanup' instead of 'tearDown' in engine(s) test * Update 'make\_client' kazoo docs and link to them * Remove \*\*most\*\* usage of taskflow.utils in examples * Move doc8 to being a normal test requirement in test-requirements.txt * Updated from global requirements * Found another removal\_version=? that should be removal\_version=2.0 * Add deprecated module(s) for prior FSM/table code-base * Replace internal fsm + table with automaton library * Remove direct usage of timeutils overrides and use fixture 1.15.0 ------ * Provide a deprecated alias for the now removed stop watch class * Update all removal\_version from being ? 
to being 2.0 * Add deprecated and only alias modules for the moved types * Unify the zookeeper/redis jobboard iterators * Updated from global requirements * Run the '99\_bottles.py' demo at a fast rate when activated * Use io.open vs raw open * Retain atom 'revert' result (or failure) * Update the version on the old/deprecated logbook module * Add docs for u, v, decider on graph flow link method * Fix mock calls * Remove setup.cfg 'requires-python' incorrect entry * Compile lists of retry/task atoms at runtime compile time * Integrate futurist (and \*\*remove\*\* taskflow originating code) * Allow the 99\_bottles.py demo to run in BLATHER mode * Make currently implemented jobs use @functools.total\_ordering * Add more useful \`\_\_str\_\_\` to redis job * Show job posted and goodbye in 99\_bottles.py example * Rename logbook module -> models module * Notify on the individual engine steps 1.14.0 ------ * Expose strategies so doc generation can easily pick them up * Denote mail subject should be '[Oslo][TaskFlow]' * Add support for conditional execution * Use encodeutils for exception -> string function * Updated from global requirements * Build-out + test a redis backed jobboard 0.13.0 ------ * Just make the compiler object at \_\_init\_\_ time * Remove kazoo hack/fix for issue no longer needed * Add history.rst that uses generated 'ChangeLog' file * Add docstrings on runtime objects methods and link to them in docs 0.12.0 ------ * Updated from global requirements * Update states comment to refer to task section * Updated from global requirements * Remove 2.6 classifier + 2.6 compatibility code * Remove reference to 'requirements-pyN.txt' files * Add smarter/better/faster impl. of \`ensure\_atoms\` * Add bulk \`ensure\_atoms\` method to storage * Make it possible to see the queries executed (in BLATHER mode) * Add doc warning to engine components * Perform a few optimizations to decrease persistence interactions * Handle conductor ctrl-c more appropriately * Cache the individual atom schedulers at compile time * Split-off the additional retry states from the task states * Use the \`excutils.raise\_with\_cause\` after doing our type check * Updated from global requirements * Use monotonic lib. 
to avoid finding monotonic time function * Document more of the retry subclasses special keyword arguments 0.11.0 ------ * Address concurrent mutation of sqlalchemy backend * Add indestructible 99 bottles of beer example * Use alembic upgrade function/command directly * Updated from global requirements * Remove usage of deprecated 'task\_notifier' property in build\_car example * Add \`simple\_linear\_listening\` example to generated docs * Handy access to INFO level * Switch badges from 'pypip.in' to 'shields.io' * Adding a revert\_all option to retry controllers * A few jobboard documentation tweaks * Use sphinx deprecated docstring markup * Use a class constant for the default path based backend path * Updated from global requirements * Remove example not tested * Make the default file encoding a class constant with a docstring * Use a lru cache to limit the size of the internal file cache * Updated from global requirements * Use hash path lookup vs path finding * Remove all 'lock\_utils' now that fasteners provides equivalents * Add a new \`ls\_r\` method * Updated from global requirements * Refactor machine builder + runner into single unit * Replace lock\_utils lock(s) with fasteners package * Updated from global requirements * Use shared '\_check' function to check engine stages * Remove a couple more useless 'pass' keywords found * Add a test that checks for task result visibility * Remove testing using persistence sqlalchemy backend with 'mysqldb' * Remove customized pyX.Y tox requirements * Updated from global requirements * Allow same deps for requires and provides in task * Remove 'pass' usage not needed * Only show state transitions to logging when in BLATHER mode * Fix updated\_at column of sqlalchemy tables * Remove script already nuked from oslo-incubator * Ensure path\_based abstract base class is included in docs * Beef up docs on the logbook/flow detail/atom details models * Remove custom py26/py27 tox venvs no longer used * Executors come in via options config, not keyword arguments * Use newer versions of futures that adds exception tracebacks * Ensure empty paths raise a value error * Remove listener stack and replace with exit stack * Expose action engine no reraising states constants * Chain a few more exception raises that were previously missed * Expose in memory backend split staticmethod * Updated from global requirements * Remove tox py33 environment no longer used * Avoid creating temporary removal lists 0.10.1 ------ * Avoid trying to copy tasks results when cloning/copying * Avoid re-normalizing paths when following links * Add a profiling context manager that can be easily enabled * Updated from global requirements 0.10.0 ------ * Remove validation of state on state read property access * Make the default path a constant and tweak class docstring * Avoid duplicating exception message * Add speed-test tools script * Speed up memory backend via a path -> node reverse mapping * Updated from global requirements * Fix a typo in taskflow docs * Small refactoring of 'merge\_uri' utility function * Fix post coverage job option not recognized * Refactor/reduce shared 'ensure(task/retry)' code * Move implementations into there own sub-sections * Remove run\_cross\_tests.sh * Move zookeeper jobboard constants to class level * Retain chain of missing dependencies * Expose fake filesystem 'join' and 'normpath' * Add + use diagram explaining retry controller area of influence * Add openclipart.org conductor image to conductor docs * Use oslo\_utils eventletutils to warn 
about eventlet patching * Test more engine types in argument passing unit test * Add a conductor running example * Replace more instance(s) of exception chaining with helper * Avoid attribute error by checking executor for being non-none 0.9.0 ----- * Validate correct exception subclass in 'raise\_with\_cause' * Remove link to kazoo eventlet handler * Add states generating venv and use pydot2 * Add strict job state transition checking * Uncap library requirements for liberty * Have reset state handlers go through a shared list * Add job states in docs + states in python * Expose r/o listener callback + details filter callback * Expose listener notification type + docs * Ensure listener args are always a tuple/immutable * Include the 'dump\_memory\_backend' example in the docs * Make resolution/retry strategies more clear and better * Rename notifier 'listeners' to 'topics' * Mention link to states doc in notify state transitions * Ensure we don't get stuck in formatting loops * Add note about thread safety of fake filesystem * Have the notification/listener docs match other sections * Put semantics preservation section into note block * Note that the traditional mode also avoids this truncation issue * Avoid going into causes of non-taskflow exceptions * Use the ability to chain exceptions correctly * Add a example showing how to share an executor * Shrink the bookshelf description * Remove link about implementing job garbage binning * Make the storage layer more resilent to failures * Put the examples/misc/considerations under a new section * Add a suspension engine section 0.8.1 ----- * Switch back to maxdepth 2 * Allow ls() to list recursively (using breadth-first) * Make an attempt at having taskflow exceptions print causes better * fix renamed class to call super correctly * Turn 'check\_who' into a decorator * Use 'node' terminology instead of 'item' terminology * Remove 11635 bug reference * Allow providing a node stringify function to tree pformat * Add in memory filesystem clearing * Just unify having a single requirements.txt file * Fix a couple of spelling and grammar errors * Add memory backend get() support * Make the graph '\_unsatisfied\_requires' be a staticmethod * Add more comments to fake in-memory filesystem * Add a set of tests to the in memory fake filesystem 0.8.0 ----- * Adding test to improve CaptureListener coverage * Prefer posixpath to os.path * By default use a in memory backend (when none is provided) * Allow using shallow copy instead of deep copy * Move to the newer debtcollector provided functions * Move to using the oslo.utils stop watch * Updated from global requirements * Ensure thread-safety of persistence dir backend * Ensure we are really setup before being connected * Ensure docstring on storage properties * Expose the storage backend being used * Use iteration instead of list(s) when extracting scopes * Use binary/encode decode helper routines in dir backend * Rename memory backend filesystem -> fake filesystem * Just let the future executors handle the max workers * Always return scope walker instances from \`fetch\_scopes\_for\` * Give the GC a break * Use the class name instead of the TYPE property in \_\_str\_\_ * Just use the class name instead of TYPE constant * Ensure we have a 'coverage-package-name' * Attempt to extract traceback from exception * Use compatible map and update map/reduce task docs * Update engine docs with new validation stage * Ensure we register & deregister conductor listeners * Rename attribute '\_graph' to 
'\_execution\_graph' * Add a log statement pre-validation that dumps graph info * Have this example exit non-zero if incorrect results * Use a collections.namedtuple for the request work unit * Some small wbe engine doc tweaks * Add newline to avoid sphinx warning * Allow passing 'many\_handler' to fetch\_all function * Ensure event time listener is in listeners docs * Add a in-memory backend dumping example * Added a map and a reduce task * Restructure the in-memory node usage * Switch to non-namespaced module imports * Allow the storage unit to use the right scoping strategy * Just use the local conf variable * Put underscore in-front of alchemist helper * lazy loading for logbooks and flowdetails * Allow backend connection config (via fetch) to be a string * Add + use failure json schema validation * Use ordered[set/dict] to retain ordering * Allow injected atom args to be persisted * add \_listeners\_from\_job method to Conductor base * update uses of TimingListener to DurationListener * Added EventTimeListner to record when events occur * added update\_flow\_metadata method to Storage class * Retain nested causes where/when we can * Denote issue 17911 has been merged/accepted * Persistence backend refactor * Remove support for 3.3 * Writers can now claim a read lock in ReaderWriterLock * Add another probabilistic rw-lock test * Add + use read/write lock decorators * Add no double writers thread test * Use condition context manager instead of acquire/release * Remove condition acquiring for read-only ops * Set a no-op functor when none is provided * Ensure needed locks is used when reading/setting intention * Specialize checking for overlaps * Use links instead of raw block quotes * Rename the timing listeners to duration listeners * Add a bookshelf developer section * Ensure the thread bundle stops in last to first order * Add warning about transient arguments and worker-based-engines * Ensure ordered set is pickleable * Add node removal/disassociate functions * Add a fully functional orderedset * Make the worker banner template part of the worker class * Use compilation helper objects * Allow node finding to not do a deep search * Add a frozen checking decorator * Tweak functor used to find flatteners/storage routines * Add specific scoping documentation * add jobboard trash method * Provide more contextual information about invalid periodics * Fix lookup scoping multi-match ordering * Stick to one space after a period * Refactor parts of the periodic worker * Use oslo.utils encodeutils for encode/decode functions * Bring over pretty\_tox.sh from nova/heat/others * Tweak some of the types thread safety docstrings * Add pypi link badges * Switch the note about process pool executor to warning * Chain exceptions correctly on py3.x * Updated from global requirements * Remove WBE experimental documentation note * Use the enum library for the retry strategy enumerations * Use debtcollector library to replace internal utility * add get\_flow\_details and get\_atom\_details to all backends * Tweaks to atom documentation * Update Flow::\_\_str\_\_ * Add todo note for kombu pull request * Move 'provides' and 'name' to instance attributes * Allow loading conductors via entrypoints 0.7.1 ----- * Revert "Add retries to fetching the zookeeper server version" * Allow turning off the version check * adding check for str/unicode type in requires * Make the dispatcher handler be an actual type * Add retries to fetching the zookeeper server version * Remove duplicate 'the' and link to worker engine 
section * Remove delayed decorator and replace with nicer method * Fix log statement * Make the atom class an abstract class * Improve multilock class and its associated unit test * Mark conductor 'stop' method deprecation kwarg with versions * Move to hacking 0.10 * catch NotFound errors when consuming or abandoning * Use the new table length constants * Improve upon/adjust/move around new optional example * Clarify documentation related to inputs * Docstrings should document parameters return values * Let the multi-lock convert the provided value to a tuple * Map optional arguments as well as required arguments * Add a BFS tree iterator * DFS in right order when not starting at the provided node * Rework the sqlalchemy backend * Modify stop and add wait on conductor to prevent lockups * Default to using a thread-safe storage unit * Add warning to sqlalchemy backend size limit docs * Updated from global requirements * Use a thread-identifier that can't easily be recycled * Use a notifier instead of a direct property assignment * Tweak the WBE diagram (and present it as an svg) * Remove duplicate code * Improved diagram for Taskflow * Bump up the env\_builder.sh to 2.7.9 * Add a capturing listener (for test or other usage) * Add + use a staticmethod to fetch the immediate callables * Just directly access the callback attributes * Use class constants during pformatting a tree node 0.7.0 ----- * Abstract out the worker finding from the WBE engine * Add and use a nicer kombu message formatter * Remove duplicated 'do' in types documentation * Use the class defined constant instead of raw strings * Use kombu socket.timeout alias instead of socket.timeout * Stopwatch usage cleanup/tweak * Add note about publicly consumable types * Add docstring to wbe proxy to denote not for public use * Use monotonic time when/if available * Updated from global requirements * Link WBE docs together better (especially around arguments) * Emit a warning when no routing keys provided on publish() * Center SVG state diagrams * Use importutils.try\_import for optional eventlet imports * Shrink the WBE request transition SVG image size * Add a thread bundle helper utility + tests * Make all/most usage of type errors follow a similar pattern * Leave use-cases out of WBE developer documentation * Allow just specifying 'workers' for WBE entrypoint * Add comments to runner state machine reaction functions * Fix coverage environment * Use explicit WBE worker object arguments (instead of kwargs) * WBE documentation tweaks/adjustments * Add a WBE request state diagram + explanation * Tidy up the WBE cache (now WBE types) module * Fix leftover/remaining 'oslo.utils' usage * Show the failure discarded (and the future intention) * Use a class provided logger before falling back to module * Use explicit WBE object arguments (instead of kwargs) * Fix persistence doc inheritance hierarchy * The gathered runtime is for failures/not failures * add clarification re parallel engine * Increase robustness of WBE producer/consumers * Move implementation(s) to there own sections * Move the jobboard/job bases to a jobboard/base module * Have the serial task executor shutdown/restart its executor * Mirror the task executor methods in the retry action * Add back a 'eventlet\_utils' helper utility module * Use constants for runner state machine event names * Remove 'SaveOrderTask' and test state in class variables * Provide the stopwatch elapsed method a maximum * Fix unused and conflicting variables * Switch to using 'oslo\_serialization' 
vs 'oslo.serialization' * Switch to using 'oslo\_utils' vs 'oslo.utils' * Add executor statistics * Use oslo.utils reflection for class name * Add split time capturing to the stop watch * Use platform neutral line separator(s) * Create and use a multiprocessing sync manager subclass * Use a single sender * Updated from global requirements * Include the 'old\_state' in all currently provided listeners * Update the README.rst with accurate requirements * Include docstrings for parallel engine types/strings supported * The taskflow logger module does not provide a logging adapter * Send in the prior atom state on notification of a state change * Pass a string as executor in the example instead of an executor * Updated from global requirements * Fix for job consumption example using wrong object 0.6.1 ----- * Remove need to inherit/adjust netutils split * Allow specifying the engine 'executor' as a string * Disallowing starting the executor when worker running * Use a single shared queue for an executors lifecycle * Avoid creating a temporary list(s) for tree type * Update statement around stopwatch thread safety * Register with 'ANY' in the cloned process * Add edge labels for engine states * Remove less than useful action\_engine \_\_str\_\_ * Ensure manager started/shutdown/joined and reset * Return the same namedtuple that the future module returns * Add a simplistic hello world example * Get event/notification sending working correctly * Move the engine scoping test to its engines test folder * Get the basics of a process executor working * Move the persistence base to the parent directory * Correctly trigger 'on\_exit' of starting/initial state 0.6.0 ----- * Add an example which shows how to send events out from tasks * Move over to using oslo.utils [reflection, uuidutils] * Rework the in-memory backend * Updated from global requirements * Add a basic map/reduce example to show how this can be done * Add a parallel table mutation example * Add a 'can\_be\_registered' method that checks before notifying * Base task executor should provide 'wait\_for\_any' * Replace autobind with a notifier module helper function * Cleanup some doc warnings/bad/broken links * Use the notifier type in the task class/module directly * Use a tiny clamp helper to clamp the 'on\_progress' value * Retain the existence of a 'EngineBase' until 0.7 or later * Remove the base postfix from the internal task executor * Remove usage of listener base postfix * Add a moved\_inheritable\_class deprecation helper * Avoid holding the lock while scanning for existing jobs * Remove the base postfix for engine abstract base class * Avoid popping while another entity is iterating * Updated from global requirements * Use explict 'attr\_dict' when adding provider->consumer edge * Properly handle and skip empty intermediary flows * Ensure message gets processed correctly * Just assign a empty collection instead of copy/clear * Remove rtype from task clone() doc * Add and use a new simple helper logging module * Have the sphinx copyright date be dynamic * Add appropriate links into README.rst * Use condition variables using 'with' * Use an appropriate \`\`extract\_traceback\`\` limit * Allow all deprecation helpers to take a stacklevel * Correctly identify stack level in \`\`\_extract\_engine\`\` * Stop returning atoms from execute/revert methods * Have tasks be able to provide copy() methods * Allow stopwatches to be restarted * Ensure that failures can be pickled * Rework pieces of the task callback capability * Just use 4 spaces 
for classifier indents * Move atom action handlers to there own subfolder/submodule * Workflow documentation is now in infra-manual * Ensure frozen attribute is set in fsm clones/copies * Fix split on "+" for connection strings that specify dialects * Update listeners to ensure they correctly handle all atoms * Allow for the notifier to provide a 'details\_filter' * Be explicit about publish keyword arguments * Some package additions and adjustments to the env\_builder.sh * Cache immutable visible scopes in the runtime component * Raise value errors instead of asserts * Add a claims listener that connects job claims to engines * Split the scheduler into sub-schedulers * Use a module level constant to provide the DEFAULT\_LISTEN\_FOR * Move the \_pformat() method to be a classmethod * Add link to issue 17911 * Avoid deepcopying exception values * Include documentation of the utility modules * Use a metaclass to dynamically add testcases to example runner * Remove default setting of 'mysql\_traditional\_mode' * Move scheduler and completer classes to there own modules * Ensure that the zookeeper backend creates missing atoms * Use the deprecation utility module instead of warnings * Tweaks to setup.cfg * Add a jobboard high level architecture diagram * Mark 'task\_notifier' as renamed to 'atom\_notifier' * Revert wrapt usage until further notice * Updated from global requirements * Add a history retry object, makes retry histories easier to use * Format failures via a static method * When creating daemon threads use the bundled threading\_utils * Ensure failure types contain only immutable items * Mark 'task\_notifier' as renamed to 'atom\_notifier' * Use wrapt to provide the deprecated class proxy * Updated from global requirements * Updated from global requirements * Updated from global requirements * Reduce the worker-engine joint testing time * Link bug in requirements so people understand why pbr is listed * Updated from global requirements * Use standard threading locks in the cache types * Handle the case where '\_exc\_type\_names' is empty * Add pbr to installation requirements * Updated from global requirements * Remove direct usage of the deprecated failure location * Fix the example 'default\_provides' * Use constants for retry automatically provided kwargs * Remove direct usage of the deprecated notifier location * Remove attrdict and just use existing types * Use the mock that finds a working implementation * Add a futures type that can unify our future functionality * Bump the deprecation version number * Use and verify event and latch wait() return using timeouts * Deprecate \`engine\_conf\` and prefer \`engine\` instead * Use constants for link metadata keys * Bump up the sqlalchemy version for py26 * Hoist the notifier to its own module * Move failure to its own type specific module * Use constants for revert automatically provided kwargs * Improve some of the task docstrings * We can now use PyMySQL in py3.x tests * Updated from global requirements * Add the database schema to the sqlalchemy docs * Change messaging from handler connection timeouts -> operation timeouts * Switch to a custom NotImplementedError derivative * Allow the worker banner to be written to an arbitrary location * Update engine class names to better reflect there usage 0.5.0 ----- * Avoid usage of six.moves in local functions * Refactor parts of the job lock/job condition zookeeper usage * Make it so that the import works for older versions of kombu * Rework the state documentation * Updated from 
global requirements * Add a more dynamic/useful logging listener * Use timeutils functions instead of misc.wallclock * Expose only \`ensure\_atom\` from storage * Adjust docs+venv tox environments requirements/dependencies * Increase robustness of WBE message and request processing * Adjust the WBE log levels * Use the features that the oslotest mock base class provides * Use oslotest to provide our base test case class * Jobboard example that show jobs + workers + producers * Adjust on\_job\_posting to not hold the lock while investigating * Bring in a newer optional eventlet * Move some of the custom requirements out of tox.ini * Document more function/class/method params * Stop using intersphinx * Expand toctree to three levels * Documentation cleanups and tweaks * Fix multilock concurrency when shared by > 1 threads * Increase/adjust the logging of the WBE response/send activities * Color some of the states depending on there meaning * Switch to using oslo.utils and oslo.serialization * Typos "searchs" * Update the requirements-py2.txt file * Remove no longer needed r/w lock interface base class * Updated from global requirements * Better handle the tree freeze method * Ensure state machine can be frozen * Link a few of the classes to implemented features/bugs in python * Add a timing listener that also prints the results * Remove useless \_\_exit\_\_ return * Example which shows how to move values from one task to another * Mention issue with more than one thread and reduce workers * Add a mandelbrot parallel calculation WBE example * Add existing types to generated documentation * Remove the dependency on prettytable * Work toward Python 3.4 support and testing * Add a state machine copy() method * Update the state graph builder to use state machine type * Add a docs virtualenv * Reduce unused tox environments 0.4.0 ----- * Add a couple of scope shadowing test cases * Relax the graph flow symbol constraints * Relax the unordered flow symbol constraints * Relax the linear flow symbol constraints * Revamp the symbol lookup mechanism * Be smarter about required flow symbols * Update oslo-incubator to 32e7f0b56f52742754 * Translate the engine runner into a well defined state-machine * Raise a runtime error when mixed green/non-green futures * Ensure the cachedproperty creation/setting is thread-safe * warn against sorting requirements * Updated from global requirements * Update transitioning function name to be more understandable * Move parts of action engine tests to a subdirectory * Tweak engine iteration 'close-up shop' runtime path * Use explicit WBE request state transitions * Reject WBE messages if they can't be put in an ack state * Make version.py handle pbr not being installed * Cleanup WBE example to be simpler to understand * Use \_\_qualname\_\_ where appropriate * Updated from global requirements * Updated from global requirements * Make the WBE worker banner information more meaningful * Have the dispatch\_job function return a future * Expand documention on failures and wrapped failures types * Allow worker count to be specified when no executor provided * Remove sphinx examples emphasize-lines * Split requirements into py2 and py3 files * Update oslo-incubator to 037dee004c3e2239 * Remove db locks and use random db names for tests * Allow WBE request transition timeout to be dynamic * Avoid naming time type module the same as a builtin * LOG which requeue filter callback failed * Add a pformat() failure method and use it in the conductor * add pre/post execute/retry 
callbacks to tasks * Use checked\_commit() around consume() and abandon() * Use a check + create transaction when claiming a job * Improve WBE testing coverage * Add basic WBE validation sanity tests * WBE request message validation * WBE response message validation * WBE notification message validation * Allow handlers to provide validation callables * Use a common message dispatcher * Use checked commit when committing kazoo transactions * Enable hacking checks H305 and H307 in tox.ini template * Fixes unsorted dicts and sets in doctests * README.rst: Avoid using non-ascii character * Updated from global requirements * Add a sample script that can be used to build a test environment * Enabled hacking checks H305 and H307 * Bump hacking to version 0.9.2 * Allow a jobs posted book to be none by default * Cleanup some of the example code & docs * Make greenexecutor not keep greenthreads active * Add the arch/big picture omnigraffle diagram * Remove pbr as a runtime dependency * Use the \`state\_graph.py\` for all states diagrams * Make the examples documentation more relevant * Raise NotImplementedError instead of NotImplemented * Move the stopwatch tests to test\_types * Remove need to do special exception catching in parse\_uri * Update oslo incubator code to commit 0b02fc0f36814968 * Fix the section name in CONTRIBUTING.rst * Add a conductor considerations section * Make the expiring cache a top level cache type * Use \`flow\_uuid\` and \`flow\_name\` from storage * Fix traces left in zookeeper * Clarify locked decorator is for instance methods * Extract the state changes from the ensure storage method * Create a top level time type * Simplify identity transition handling for tasks and retries * Remove check\_doc.py and use doc8 * Remove functions created for pre-six 1.7.0 * Add a tree type * Make intentions a tuple (to denote immutability) * Updated from global requirements * Add example for pseudo-scoping * Fix E265 hacking warnings * Fix doc which should state fetch() usage * Adjust sphinx requirement * Upgrade hacking version and fix some of the issues * Denote that other projects can use this library * Remove misc.as\_bool as oslo provides an equivalent * Update zake to requirements version 0.3.21 ------ * Rename additional to general/higher-level * Sync our version of the interprocess lock * Increase usefulness of the retry component compile errors * Switch to a restructuredtext README file * Create a considerations section * Include the function name on internal errors * Add in default transaction isolation levels * Allow the mysql mode to be more than just TRADITIONAL * Make the runner a runtime provided property * Rename inject\_task\_args to inject\_atom\_args * Rename the graph analyzer to analyzer * Provide the compilation object instead of just a part of it * Ensure cachedproperty descriptor picks up docstrings 0.3 --- * Warn about internal helper/utility usage * Rename to atom from task * Invert the conductor stop() returned result * Move flattening to the action engine compiler * Increase the level of usefulness of the dispatching logging * Avoid forcing engine\_conf to a dict * Allow for two ways to find a flow detail in a job for a conductor * Add docs related to the new conductor feature * Add docstring describing the inject instance variable * Finish factoring apart the graph\_action module * Update sphinx pin from global requirements * Fix docstring list format * Allow indent text to be passed in * Factor out the on\_failure to a mixin type * Use a name property 
setter instead of a set\_name method * Adds a single threaded flow conductor * add the ability to inject arguments into tasks at task creation * Synced jsonutils from oslo-incubator * Remove wording issue (track does not make sense here) * Fix case of taskflow in docs * Put the job external wiki link in a note section * Rework atom documentation * Add doc link to examples * Rework the overview of the notification mechanism * Standardize on the same capitalization pattern * Regenerate engine-state sequence diagram * Add source of engine-state sequence diagram * Add kwarg check\_pending argument to fake lock * Add a example which uses the run\_iter function in a for loop * Fix error string interpolation * Rename t\_storage to atom\_storage * Create and use a new compilation module * Add engine state diagram * Add tests for the misc.cachedproperty descriptor * Complete the cachedproperty descriptor protocol * Don't create fake LogBook when we can not fetch one * Use futures wait() when possible * Use /taskflow/flush-test in the flush function * Add a reset nodes function * Default the impl\_memory conf to none * Fix spelling mistake * Add a helper tool which clears zookeeper test dirs * Add a zookeeper jobboard integration test * Cleanup zookeeper integration testing * Use a more stable flush method * Remove the \_clear method and do not reset the job\_watcher * Allow command and connection retry configuration * Check documentation for simple style requirements * Add an example which uses the run iteration functionality * Implement run iterations * Put provides and requires code to basic Flow * Allow the watcher to re-register if the session is lost * Add a new wait() method that waits for jobs to arrive * Add a cachedproperty descriptor * Add an example for the job board feature * Engine \_cls postfix is not correct * Pass executor via kwargs instead of config * Allow the WBE to use a preexisting executor * Tweaks to object hiearchy diagrams * Adjust doc linking * Medium-level docs on engines * Add docs for the worker based engine (WBE) * Updated from global requirements * Move from generator to iterator for iterjobs * Add a jobboard fetching context manager * Wrap the failure to load in the not found exception * Update jobboard docs * Synced jsonutils from oslo-incubator * Remove persistence wiki page link * Load engines with defined args and provided kwargs * Integrate urlparse for configuration augmentation * Fix "occured" -> "occurred" * Documentation tune-ups * Fix spelling error * Add a resumption strategy doc * Docs and cleanups for test\_examples runner * Skip loading (and failing to load) lock files * Add a persistence backend fetching context manager * Add a example that activates a future when a result is ready * Fix documentation spelling errors * Add a job consideration doc * Add last\_modified & created\_on attributes to jobs * Allow jobboard event notification * Use sequencing when posting jobs * Add a directed graph type (new types module) * Add persistence docs + adjustments * Updated from global requirements * Stings -> Strings * Be better at failure tolerance * Ensure example abandons job when it fails * Add docs for jobs and jobboards * Get persistence backend via kwargs instead of conf * Allow fetching jobboard implementations * Reuse already defined variable * More keywords & classifier topics * Allow transient values to be stored in storage * Doc adjustments * Move the daemon thread helper function * Create a periodic worker helper class * Fix not found being raised 
when iterating * Allow for only iterating over the most 'fresh' jobs * Updated from global requirements * Update oslo-incubator to 46f2b697b6aacc67 * import run\_cross\_tests.sh from incubator * Exception in worker queue thread * Avoid holding the state lock while notifying 0.2 --- * Allow atoms to save their own state/result * Use correct exception in the timing listener * Add a engine preparation stage * Decrease extraneous logging * Handle retry last\_results/last\_failure better * Improve documentation for engines * Worker executor adjustments * Revert "Move taskflow.utils.misc.Failure to its own module" * Move taskflow.utils.misc.Failure to its own module * Leave the execution\_graph as none until compiled * Move state link to developer docs * Raise error if atom asked to schedule with unknown intention * Removed unused TIMED\_OUT state * Rework documentation of notifications * Test retry fails on revert * Exception when scheduling task with invalid state * Fix race in worker-based executor result processing * Set logbook/flowdetail/atomdetail meta to empty dict * Move 'inputs and outputs' to developers docs * tests: Discover absence of zookeeper faster * Fix spelling mistake * Should be greater or equal to zero and not greater than * Persistence cleanup part one * Run worker-based engine tests faster * SQLAlchemy requirements put in order * Add timeout to WaitForOneFromTask * Use same code to reset flow and parts of it * Optimize dependency links in flattening * Adjust the exception hierachy * docs: Links to methods on arguments and results page * Add \_\_repr\_\_ method to Atom * Flattening improvements * tests: Fix WaitForOneFromTask constructor parameter introspection * Rework graph flow unit tests * Rewrite assertion for same elements in sequences * Unit tests for unordered flow * Linear flow: mark links and rework unit tests * Drop indexing operator from linear flow * Drop obsolete test\_unordered\_flow * Iteration over links in flow interface * Add a timeout object that can be interrupted * Avoid shutting down of a passed executor * Add more tests for resumption with retry * Improve logging for proxy publish * Small documentation fix * Improve proxy publish method * Add Retry to developers documentation * Move flow states to developers documentation * Remove extraneous vim configuration comments * Make schedule a proper method of GraphAction * Simplify graph analyzer interface * Test storage with memory and sqlite backends * Fix few minor spelling errors * Fix executor requests publishing bug * Flow smart revert with retry controller * Add atom intentions for tasks and retries * [WBE] Collect information from workers * Add tox environment for pypy * docs: Add inheritance diagram to exceptions documentation * Adjust logging levels and usage to follow standards * Introduce message types for WBE protocol * Add retry action to execute retries * Extend logbook and storage to work with retry * Add retry to execution graph * Add retry to Flow patterns * Add base class for Retry * Update request \`expired\` property docsting * docs: Add page describing atom arguments and results * docs: Improve BaseTask method docstrings * Remove extra quote symbol * docs: Relative links improvements * docs: Ingore 'taskflow.' 
prefix when sorting module index * Update comment + six.text\_type instead of str for name * Avoid calling callbacks while holding locks * Rename remote task to request * Rework proxy publish functionality * Updated from global requirements * Use message.requeue instead of message.reject * Lock test tweaks * Move endpoint subclass finding to reflection util * Correct LOG.warning in persistence utils * Introduce remote tasks cache for worker-executor * Worker-based engine clean-ups * A few worker-engine cleanups * Add a delay before releasing the lock * Allow connection string to be just backend name * Get rid of openstack.common.py3kcompat * Clean-up several comments in reflection.py * Fix try\_clean not getting the job\_path * Updated from global requirements * Rename uuid to topic * Fixups for threads\_count usage and logging * Use the stop watch utility instead of custom timing * Unify usage of storage error exception type * Add zookeeper job/jobboard impl * Updated from global requirements * Removed copyright from empty files * Remove extraneous vim configuration comments * Use six.text\_type() instead of str() in sqlalchemy backend * Fix dummy lock missing pending\_writers method * Move some common/to be shared kazoo utils to kazoo\_utils * Switch to using the type checking decode\_json * Fix few spelling and grammar errors * Fixed spelling error * Run action-engine tests with worker-based engine * Message-oriented worker-based flow with kombu * Check atom doesn't provide and return same values * Fix command for pylint tox env * Remove locale overrides form tox template * Reduce test and optional requirements to global requirements * Rework sphinx documentation * Remove extraneous vim configuration comments * Sync with global requirements * Instead of doing set diffing just partition when state checking * Add ZooKeeper backend to examples * Storage protects lower level backend against thread safety * Remove tox locale overrides * Update .gitreview after repo rename * Small storage tests clean-up * Support building wheels (PEP-427) 0.1.3 ----- * Add validate() base method * Fix deadlock on waiting for pending\_writers to be empty * Rename self.\_zk to self.\_client * Use listener instead of AutoSuspendTask in test\_suspend\_flow * Use test utils in test\_suspend\_flow * Use reader/writer locks in storage * Allow the usage of a passed in sqlalchemy engine * Be really careful with non-ascii data in exceptions/failures * Run zookeeper tests if localhost has a compat. 
zookeeper server * Add optional-requirements.txt * Move kazoo to testenv requirements * Unpin testtools version and bump subunit to >=0.0.18 * Remove use of str() in utils.misc.Failure * Be more resilent around import/detection/setup errors * Some zookeeper persistence improvements/adjustments * Add a validate method to dir and memory backends * Update oslo copy to oslo commit 39e1c5c5f39204 * Update oslo.lock from incubator commit 3c125e66d183 * Refactor task/flow flattening * Engine tests refactoring * Tests: don't pass 'values' to task constructor * Test fetching backends via entry points * Pin testtools to 0.9.34 in test requirements * Ensure we register the new zookeeper backend as an entrypoint * Implement ZooKeeper as persistence storage backend * Use addCleanup instead of tearDown in test\_sql\_persistence * Retain the same api for all helpers * Update execute/revert comments * Added more unit tests for Task and FunctorTask * Doc strings and comments clean-up * List examples function doesn't accept arguments * Tests: Persistence test mixin fix * Test using mysql + postgres if available * Clean-up and improve async-utils tests * Use already defined PENDING variable * Add utilities for working with binary data * Cleanup engine base class * Engine cleanups * Update atom comments * Put full set of requirements to py26, py27 and py33 envs * Add base class Atom for all flow units * Add more requirements to cover tox environment * Put SQLAlchemy requirements on single line * Proper exception raised from check\_task\_transition * Fix function name typo in persistence utils * Use the same way of assert isinstance in all tests * Minor cleanup in test\_examples * Add possibility to create Failure from exception * Exceptions cleanup * Alter is\_locked() helper comment * Add a setup.cfg keywords to describe taskflow * Use the released toxgen tool instead of our copy 0.1.2 ----- * Move autobinding to task base class * Assert functor task revert/execute are callable * Use the six callback checker * Add envs for different sqlalchemy versions * Refactor task handler binding * Move six to the right location * Use constants for the execution event strings * Added htmlcov folder to .gitignore * Reduce visibility of task\_action * Change internal data store of LogBook from list to dict * Misc minor fixes to taskflow/examples * Add connection\_proxy param * Ignore doc build files * Fix spelling errors * Switch to just using tox * Enable H202 warning for flake8 * Check tasks should not provide same values * Allow max\_backoff and use count instead of attempts * Skip invariant checking and adding when nothing provided * Avoid not\_done naming conflict * Add stronger checking of backend configuration * Raise type error instead of silencing it * Move the container fetcher function to utils * Explicitly list the valid transitions to RESUMING state * Name the graph property the same as in engine * Bind outside of the try block * Graph action refactoring * Add make\_completed\_future to async\_utils * Update oslo-incubator copy to oslo-incubator commit 8b2b0b743 * Ensure that mysql traditional mode is enabled * Move async utils to own file * Update requirements from opentack/requirements * Code cleanup for eventlet\_utils.wait\_fo\_any * Refactor engine internals * Add wait\_for\_any method to eventlet utils * Introduce TaskExecutor * Run some engine tests with eventlet if it's available * Do not create TaskAction for each task * Storage: use names instead of uuids in interface * Add tests for metadata updates 
* Fix sqlalchemy 0.8 issues * Fix minor python3 incompatibility * Speed up FlowDetail.find * Fix misspellings * Raise exception when trying to run empty flow * Use update\_task\_metadata in set\_task\_progress * Capture task duration * Fix another instance of callback comparison * Don't forget to return self * Fixes how instances methods are not deregistered * Targeted graph flow pattern * All classes should explicitly inherit object class * Initial commit of sphinx related files * Improve is\_valid\_attribute\_name utility function * Coverage calculation improvements * Fix up python 3.3 incompatabilities 0.1.1 ----- * Pass flow failures to task's revert method * Storage: add methods to get all flow failures * Pbr requirement went missing * Update code to comply with hacking 0.8.0 * Don't reset tasks to PENDING state while reverting * Let pbr determine version automatically * Be more careful when passing result to revert() 0.1 --- * Support for optional task arguments * Do not erase task progress details * Storage: restore injected data on resumption * Inherit the greenpool default size * Add debug logging showing what is flattened * Remove incorrect comment * Unit tests refactoring * Use py3kcompat.urlutils from oslo instead of six.urllib\_parse * Update oslo and bring py3kcompat in * Support several output formats in state\_graph tool * Remove task\_action state checks * Wrapped exception doc/intro comment updates * Doc/intro updates for simple\_linear\_listening * Add docs/intro to simple\_linear example * Update intro/comments for reverting\_linear example * Add docs explaining what/how resume\_volume\_create works * A few resuming from backend comment adjustments * Add an introduction to explain resume\_many example * Increase persistence example comments * Boost graph flow example comments * Also allow "\_" to be valid identifier * Remove uuid from taskflow.flow.Flow * A few additional example boot\_vm comments + tweaks * Add a resuming booting vm example * Add task state verification * Beef up storage comments * Removed unused utilities * Helpers to save flow factory in metadata * Storage: add flow name and uuid properties * Create logbook if not provided for create\_flow\_details * Prepare for 0.1 release * Comment additions for exponential backoff * Beef up the action engine comments * Pattern comment additions/adjustments * Add more comments to flow/task * Save with the same connection * Add a persistence util logbook formatting function * Rename get\_graph() -> execution\_graph * Continue adding docs to examples * Add more comments that explain example & usage * Add more comments that explain example & usage * Add more comments that explain example & usage * Add more comments that explain example & usage * Fix several python3 incompatibilities * Python3 compatibility for utils.reflection * No module name for builtin type and exception names * Fix python3 compatibility issues in examples * Fix print statements for python 2/3 * Add a mini-cinder volume create with resumption * Update oslo copy and bring over versionutils * Move toward python 3/2 compatible metaclass * Add a secondary booting vm example * Resumption from backend for action engine * A few wording/spelling adjustments * Create a green executor & green future * Add a simple mini-billing stack example * Add a example which uses a sqlite persistence layer * Add state to dot->svg tool * Add a set of useful listeners * Remove decorators and move to utils * Add reasons as to why the edges were created * Fix entrypoints 
being updated/created by update.py * Validate each flow state change * Update state sequence for failed flows * Flow utils and adding comments * Bump requirements to the latest * Add a inspect sanity check and note about bound methods * Some small exception cleanups * Check for duplicate task names on flattening * Correctly save task versions * Allow access by index * Fix importing of module files * Wrapping and serializing failures * Simpler API to load flows into engines * Avoid setting object variables * A few adjustments to the progress code * Cleanup unused states * Remove d2to dependency * Warn if multiple providers found * Memory persistence backend improvements * Create database from models for SQLite * Don't allow mutating operations on the underlying graph * Add graph density * Suspend single and multi threaded engines * Remove old tests for unexisted flow types * Boot fake vm example fixed * Export graph to dot util * Remove unused utility classes * Remove black list of graph flow * Task decorator was removed and examples updated * Remove weakref usage * Add basic sanity tests for unordered flow * Clean up job/jobboard code * Add a directory/filesystem based persistence layer * Remove the older (not used) resumption mechanism * Reintegrate parallel action * Add a flow flattening util * Allow to specify default provides at task definition * Graph flow, sequential graph action * Task progress * Verify provides and requires * Remap the emails of the committers * Use executors instead of pools * Fix linked exception forming * Remove threaded and distributed flows * Add check that task provides all results it should * Use six string types instead of basestring * Remove usage of oslo.db and oslo.config * Move toward using a backend+connection model * Add provides and requires properties to Flow * Fixed crash when running the engine * Remove the common config since its not needed * Allow the lock decorator to take a list * Allow provides to be a set and results to be a dictionary * Allow engines to be copied + blacklist broken flows * Add link to why we have to make this factory due to late binding * Use the lock decorator and close/join the thread pool * Engine, task, linear\_flow unification * Combine multiple exceptions into a linked one * Converted some examples to use patterns/engines * MultiThreaded engine and parallel action * State management for engines * Action engine: save task results * Initial implementation of action-based engine * Further updates to update.py * Split utils module * Rename Task.\_\_call\_\_ to Task.execute * Reader/writer no longer used * Rename "revert\_with" => "revert" and "execute\_with" to "execute" * Notify on task reversion * Have runner keep the exception * Use distutil version classes * Add features to task.Task * Add get\_required\_callable\_args utility function * Add get\_callable\_name utility function * Require uuid + move functor\_task to task.py * Check examples when running tests * Use the same root test class * LazyPluggable is no longer used * Add a locally running threaded flow * Change namings in functor\_task and add docs to its \_\_init\_\_ * Rework the persistence layer * Do not have the runner modify the uuid * Refactor decorators * Nicer way to make task out of any callable * Use oslo's sqlalchemy layer * File movements * Added Backend API Database Implementation * Added Memory Persistence API and Generic Datatypes * Resync the latest oslo code * Remove openstack.common.exception usage * Forgot to move this one to the right 
folder * Add a new simple calculator example * Quiet the provider linking * Deep-copy not always possible * Add a example which simulates booting a vm * Add a more complicated graph example * Move examples under the source tree * Adjust a bunch of hacking violations * Fix typos in test\_linear\_flow.py and simple\_linear\_listening.py * Fix minor code style * Fix two minor bugs in docs/examples * Show file modifications and fix dirpath based on config file * Add a way to use taskflow until library stabilized * Provide the length of the flows * Parents should be frozen after creation * Allow graph dependencies to be manually provided * Add helper reset internals function * Move to using pbr * Unify creation/usage of uuids * Use the runner interface as the best task lookup * Ensure we document and complete correct removal * Pass runners instead of task objects/uuids * Move how resuming is done to be disconnected from jobs/flows * Clear out before connecting * Make connection/validation of tasks be after they are added * Add helper to do notification * Store results by add() uuid instead of in array format * Integrate better locking and a runner helper class * Cleaning up various components * Move some of the ordered flow helper classes to utils * Allow instance methods to be wrapped and unwrapped correctly * Add a start of a few simple examples * Update readme to point to links * Fix most of the hacking rules * Fix all flake8 E\* and F\* errors * Fix the current flake8 errors * Don't keep the state/version in the task name * Dinky change to trigger jenkins so I can cleanup * Add the task to the accumulator before running * Add .settings and .venv into .gitignore * Fix tests for python 2.6 * Add the ability to soft\_reset a workflow * Add a .gitreview file so that git-review works * Ensure we have an exception and capture the exc\_info * Update how graph results are fetched when they are optional * Allow for optional task requirements * We were not notifying when errors occured so fix that * Bring over the nova get\_wrapped\_function helper and use it * Allow for passing in the metadata when creating a task detail entry * Update how the version task functor attribute is found * Remove more tabs incidents * Removed test noise and formatted for pep8 * Continue work on decorator usage * Ensure we pickup the packages * Fixed pep8 formatting... 
Finally * Add flow disassociation and adjust the assocate path * Add a setup.cfg and populate it with a default set of nosetests options * Fix spacing * Add a better task name algorithm * Add a major/minor version * Add a get many attr/s and join helper functions * Reduce test noise * Fix a few unit tests due to changes * Ensure we handle functor names and resetting correctly * Remove safe\_attr * Modifying db tests * Removing .pyc * Fixing .py in .gitignore * Update db api test * DB api test cases and revisions * Allow for turning off auto-extract and add a test * Use a function to filter args and add comments * Use update instead of overwrite * Move decorators to new file and update to use better wraps() * Continue work with decorator usage * Update with adding a provides and requires decorator for standalone function usage * Instead of apply use \_\_call\_\_ * Add comment to why we accumulate before notifying task listeners * Use a default sqlite backing using a taskflow file * Add a basic rollback accumlator test * Use rollback accumulator and remove requires()/provides() from being functions * Allow (or disallow) multiple providers of items * Clean the lines in a seperate function * Resync with oslo-incubator * Remove uuid since we are now using uuidutils * Remove error code not found in strict version of pylint * Include more dev testing packages + matching versions * Update dependencies for new db/distributed backends * Move some of the functions to use there openstack/common counterparts * More import fixups * Patch up the imports * Fix syntax error * Rename cause -> exception and make exception optional * Allow any of the previous tasks to satisfy requirements * Ensure we change the self and parents states correctly * Always have a name provided * Cleaning up files/extraneous files/fixing relations * More pylint cleanups * Make more tests for linear and shuffle test utils to common file * Only do differences on set objects * Ensure we fetch the appropriate inputs for the running task * Have the linear workflow verify the tasks inputs * Specify that task provides/requires must be an immutable set * Clean Up for DB changes * db api defined * Fleshing out sqlalchemy api * Almost done with sqlalchemy api * Fix state check * Fix flow exception wording * Ensure job is pending before we associate and run * More pylint cleanups * Ensure we associate with parent flows as well * Add a nice run() method to the job class that will run a flow * Massive pylint cleanup * deleting .swp files * deleting .swp files * cleaning for initial pull request * Add a few more graph ordering test cases * Update automatic naming and arg checks * Update order calls and connect call * Move flow failure to flow file and correctly catch ordering failure * Just kidding - really fixing relations this time * Fixing table relations * Allow job id to be passed in * Check who is being connected to and ensure > 0 connectors * Move the await function to utils * Graph tests and adjustments releated to * Add graph flow tests * Fix name changes missed * Enable extraction of what a functor requires from its args * Called flow now, not workflow * Second pass at models * More tests * Simplify existence checks * More pythonic functions and workflow -> flow renaming * Added more utils, added model for workflow * Spelling errors and stuff * adding parentheses to read method * Implemented basic sqlalchemy session class * Setting up Configs and SQLAlchemy/DB backend * Fix the import * Use a different logger method if tolerant vs 
not tolerant * More function comments * Add a bunch of linear workflow tests * Allow resuming stage to be interrupted * Fix the missing context variable * Moving over celery/distributed workflows * Update description wording * Pep fix * Instead of using notify member functions, just use functors * More wording fixes * Add the ability to alter the task failure reconcilation * Correctly run the tasks after partial resumption * Another wording fix * Spelling fix * Allow the functor task to take a name and provide it a default * Updated functor task comments * Move some of the useful helpers and functions to other files * Add the ability to associate a workflow with a job * Move the useful functor wrapping task from test to wrappers file * Add a thread posting/claiming example and rework tests to use it * After adding reposting/unclaiming reflect those changes here * Add a nicer string name that shows what the class name is * Adjust some of the states jobs and workflows could be in * Add a more useful name that shows this is a task * Remove impl of erasing which doesn't do much and allow for job reposting * Various reworkings * Rename logbook contents * Get a memory test example working * Add a pylintrc file to be used with pylint * Rework the logbook to be chapter/page based * Move ordered workflow to its own file * Increase the number of comments * Start adding in a more generic DAG based workflow * Remove dict\_provider dependency * Rework due to code comments * Begin adding testing functionality * Fill in the majority of the memory job * Rework how we should be using lists instead of ordereddicts for optimal usage * Add a context manager to the useful read/writer lock * Ensure that the task has a name * Add a running state which can be used to know when a workflow is running * Rename the date created field * Add some search functionality and adjust the await() function params * Remove and add a few new exceptions * Shrink down the exposed methods * Remove the promise object for now * Add RESUMING * Fix spelling * Continue on getting ready for the memory impl. to be useful * On python <= 2.6 we need to import ordereddict * Remove a few other references to nova * Add in openstack common and remove patch references * Move simplification over * Continue moving here * Update README.md * Update readme * Move the code over for now * Initial commit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/LICENSE0000664000175000017500000002363700000000000014422 0ustar00zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6560426 taskflow-4.6.4/PKG-INFO0000664000175000017500000001024600000000000014502 0ustar00zuulzuul00000000000000Metadata-Version: 2.1 Name: taskflow Version: 4.6.4 Summary: Taskflow structured state management library. Home-page: https://docs.openstack.org/taskflow/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/taskflow.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on TaskFlow ======== .. image:: https://img.shields.io/pypi/v/taskflow.svg :target: https://pypi.org/project/taskflow/ :alt: Latest Version A library to do [jobs, tasks, flows] in a highly available, easy to understand and declarative manner (and more!) to be used with OpenStack and other projects. * Free software: Apache license * Documentation: https://docs.openstack.org/taskflow/latest/ * Source: https://opendev.org/openstack/taskflow * Bugs: https://bugs.launchpad.net/taskflow/ * Release notes: https://docs.openstack.org/releasenotes/taskflow/ Join us ------- - https://launchpad.net/taskflow Testing and requirements ------------------------ Requirements ~~~~~~~~~~~~ Because this project has many optional (pluggable) parts like persistence backends and engines, we decided to split our requirements into two parts: - things that are absolutely required (you can't use the project without them) are put into ``requirements.txt``. The requirements that are required by some optional part of this project (you can use the project without them) are put into our ``test-requirements.txt`` file (so that we can still test the optional functionality works as expected). 
If you want to use the feature in question (`eventlet`_ or the worker based engine that uses `kombu`_ or the `sqlalchemy`_ persistence backend or jobboards which have an implementation built using `kazoo`_ ...), you should add that requirement(s) to your project or environment. Tox.ini ~~~~~~~ Our ``tox.ini`` file describes several test environments that allow to test TaskFlow with different python versions and sets of requirements installed. Please refer to the `tox`_ documentation to understand how to make these test environments work for you. Developer documentation ----------------------- We also have sphinx documentation in ``docs/source``. *To build it, run:* :: $ python setup.py build_sphinx .. _kazoo: https://kazoo.readthedocs.io/en/latest/ .. _sqlalchemy: https://www.sqlalchemy.org/ .. _kombu: https://kombu.readthedocs.io/en/latest/ .. _eventlet: http://eventlet.net/ .. _tox: https://tox.testrun.org/ Keywords: reliable,tasks,execution,parallel,dataflow,workflows,distributed Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: OpenStack Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: System :: Distributed Computing Requires-Python: >=3.6 Provides-Extra: database Provides-Extra: eventlet Provides-Extra: redis Provides-Extra: test Provides-Extra: workers Provides-Extra: zookeeper ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/README.rst0000664000175000017500000000451400000000000015075 0ustar00zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/taskflow.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on TaskFlow ======== .. image:: https://img.shields.io/pypi/v/taskflow.svg :target: https://pypi.org/project/taskflow/ :alt: Latest Version A library to do [jobs, tasks, flows] in a highly available, easy to understand and declarative manner (and more!) to be used with OpenStack and other projects. * Free software: Apache license * Documentation: https://docs.openstack.org/taskflow/latest/ * Source: https://opendev.org/openstack/taskflow * Bugs: https://bugs.launchpad.net/taskflow/ * Release notes: https://docs.openstack.org/releasenotes/taskflow/ Join us ------- - https://launchpad.net/taskflow Testing and requirements ------------------------ Requirements ~~~~~~~~~~~~ Because this project has many optional (pluggable) parts like persistence backends and engines, we decided to split our requirements into two parts: - things that are absolutely required (you can't use the project without them) are put into ``requirements.txt``. 
The requirements that are required by some optional part of this project (you can use the project without them) are put into our ``test-requirements.txt`` file (so that we can still test the optional functionality works as expected). If you want to use the feature in question (`eventlet`_ or the worker based engine that uses `kombu`_ or the `sqlalchemy`_ persistence backend or jobboards which have an implementation built using `kazoo`_ ...), you should add that requirement(s) to your project or environment. Tox.ini ~~~~~~~ Our ``tox.ini`` file describes several test environments that allow to test TaskFlow with different python versions and sets of requirements installed. Please refer to the `tox`_ documentation to understand how to make these test environments work for you. Developer documentation ----------------------- We also have sphinx documentation in ``docs/source``. *To build it, run:* :: $ python setup.py build_sphinx .. _kazoo: https://kazoo.readthedocs.io/en/latest/ .. _sqlalchemy: https://www.sqlalchemy.org/ .. _kombu: https://kombu.readthedocs.io/en/latest/ .. _eventlet: http://eventlet.net/ .. _tox: https://tox.testrun.org/
taskflow-4.6.4/bindep.txt: # This is a cross-platform list tracking distribution packages needed for install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. graphviz [!platform:gentoo] media-gfx/graphviz [platform:gentoo] libpq-dev [platform:dpkg] mysql-client [platform:dpkg] mysql-server [platform:dpkg] postgresql postgresql-client [platform:dpkg]
taskflow-4.6.4/doc/ [directory]
taskflow-4.6.4/doc/diagrams/ [directory]
taskflow-4.6.4/doc/diagrams/area_of_influence.graffle.tgz [binary OmniGraffle diagram archive; contents omitted]
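The README and PKG-INFO sections above describe TaskFlow as a declarative library for composing tasks into flows, with optional extras (``database``, ``eventlet``, ``redis``, ``workers``, ``zookeeper``) needed only for particular persistence backends, jobboards and engines. As a rough orientation only (a minimal sketch based on the public API pointed to by the documentation links above, not a file shipped in this archive), a trivial flow assembled and run with the default serial engine might look like this::

    # Minimal sketch, assuming the documented public API (taskflow.task.Task,
    # taskflow.patterns.linear_flow.Flow, taskflow.engines.run); the task and
    # flow names here are illustrative placeholders, not part of the archive.
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class CalculateSum(task.Task):
        # The returned value is stored under the name given by default_provides
        # and becomes available to later tasks that require it.
        default_provides = 'total'

        def execute(self, x, y):
            return x + y


    class PrintSum(task.Task):
        def execute(self, total):
            print("sum = %s" % total)


    # Compose a linear (sequential) flow and run it with the default engine;
    # 'store' seeds the initial inputs that the first task requires.
    flow = linear_flow.Flow('demo').add(CalculateSum(), PrintSum())
    engines.run(flow, store={'x': 2, 'y': 3})

Only the base requirements listed in ``requirements.txt`` should be needed to run a flow like this; the extras come into play when a persistence backend, jobboard, or worker-based engine that depends on them is selected.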
taskflow-4.6.4/doc/diagrams/core.graffle.tgz [binary OmniGraffle diagram archive; contents omitted]
M9NØ>4ؘxE枦ji/oM%yHe9p| S.i)9@`vrn\NK O/4c?14W0N2π&#<脣y##1g*sFbE )@PsB8ƠF3c;g;شj+xa (_6Ï ;"Xݺ_pu9kmW$Õt.P{2Ugw3f,Yq\̺\_{[Z*s#M| ptdV3K(x+Fe;`4yqdFAӌ!]g"?s_a 4¯<) _B$r뾚5RuS9kƘʅ6a+l<|9V9㢓F9kvwB[x~"!]au/m3ŵO n r8/h(2e4>v{orbZQ\mH` oB,IzLDeÏq.8dPj $RZ\nC0iNr3 F6.gSN\`;;vg,~^c_YZᗀRoKXnٳǭt)D,V7bgǹNш+krM-EI @g& 1F8xfq}8 "wOBV1ki$H {V:ݠb kvrߖ˛< W3I٢-9Bt&ӛ_p/ LfL.YAMDVd1 1Yr /kO{zШ+" ĪGpeWN3{(!#zǓJS>AlfmsHTc>5tWɯHOS>G5 ;I ]q1 PD۴տ{=tqSqRJ޲Qs64.c¬l>c6˦'G|߭Y =LKTäm%Pz/.$l=!+n <%3ϻ-kً/~IO,^E'/hH Lbje`/݈NѰ4sz+s=zIϟe:U}* -`?$Tt hWxtM]0v^mslǯ s!gPYJF>oppK6-Uj*̀F6okKծY{vq^(rf"0{uū&ץԺs|\ӭ9c]Ythx>@ʞ.7x>z\NK'5NerLYba_*YшH fdR-ҕv0 륪ډ3MDu #2!t77>,~D'm4ZPl( ܿ ٹ^lC'%=c9hFf}s=*&P 773_—*F|WC,Y=XdnZRP~f4Y \,ҍ0=>FuOVpJb-Ee#j۷d _ I*iiReI&uj)Q#leʇ ̮B!PA""!1?їuzz#0]5"v;C 84hF+U? t]PAU@eFba!{d{*"h)BֈݭA¦fTVkfɗu^??%Sd?wX-:nyc˦$g]Ab+eby@[3np]/B "ɃFgW'R髻 O{5m׵d?  _Mpz 5pΥ0ojK:QْQe3Rh/'|hV0o%Gwhe}15ЏDoyOonV04a~yu Ѽ} 7hd8BNvz %=["zFbhg; 0LxCs_^/)$wwd@(*yBayԊcٳ2!.Xjoۄc)/ChGgT`$ ^f86]1,v0-~1/,?(1d8ݮ'$GJ)7ؔCn@WE Y84br*&K 1+H?{1G׳* 2ł-?oE:c81}F#vʗF ERwUҗݥb+|2׻0ϽJn;ƗX=w/DH&NhovacވvC&߃C \^MqJ)- mE$]ߝl/} W1Go5Hn ..t)1_W[~0J{ea[X(M#=䇌?pbqP爴$Y6@J= Nczk-@_FbwME~7Y$v0$bsE "ǟ$%I;OSH/3{PX w=gs.u,Ph7 +Lx_b[ǖSHZVc68LsgŽt$(9 &I=4B MG9z+ sX2_)53'|U+;*F$G:t̍-rSj+w9L<-v t=GM f5HWQ-*ˠ^]~V/ܕa$A 1m!f'Q0jX,(wrCWzvi[v鲯,oِÂ:v&\jٔnU^,U8f}>:k|#l%w,4C`_{X ,oZ͞EO2c8k=:(7}?ٟڣ S6VheA9{nm0Sz޺NӺ5>PYm1d5"s r郯+|C7f̓~`"~y ѡXBL GΪ.Z‰/BNyM$xt裩ǎ_!WZ(u;:eXfnJVBdƀYΈܰ4ʹ2#wЈt%7=d#MzCţt&+Y[$D,9Ew 'bW>1;e5IogcfaѠg14_#5 p0YҤ~nXn; |-̃,.U`10D|>Rc2O&QR e:tJ(}-N&.SK@{tE)gV) B zA-X7##yx}X7j\h ˚*o\-Qg6(cSD%0:M:Dch{/qsΠ|Ν֌d6_[l  A_IVpPybc"WF$mֈS[Y+gYE_+^} MXR Y3ٟyy'(!h7= h*.8|;xA9Uav'1҅5﮵}չ̜0xe/JyB~[냋dG+89A8?Ωl_'IEA9IĢ4w8̮|QWLK$@3sꭅ܍Q`A,# E% 6ðe[=YT&{9ADqirgNϹJn[1nY`pvQuign&G\]Dr (SجQH{pd%b+dTz[RwHv%o<ܒnh,ipl3ew ^5*yb6Uf;/u7:By38[d {0nX`as6V$?#I'[ '8䅃D%ԗqr ArLs,osN,\Es}&E+J)W0b*S`ZպgTV"qά ]rg4%@-y[Id{Xhp ]:ݛqd80W>(^O*lO(e j;pM<6g-Iz#('{O .=X>@*Ʃmݹ _w^L[R*k*Q|\`Fʪf$:"=8|sv169W#rX cٔXP/OY,UԎG D,]r.yMp G@K;MѮ/ >7 QJ#F 3눗}`~a+1iifګ iALEА$=qhBٌ7˔Q$|0sɥ6j3l] ߩ'NT %:e L;>RoO"Wfy~›#L;0sIf3KS^gW'ɢ 6:Ԝ_ S8G ֟QN$t7U7xG)*i'.x'EmoQWJĽpCk'SV!_֧_6RC-3jqf\bLRD6)T8+(rAbb6v,nYKXt9pčڵ,CbJ!Bȏ,8a،wtnI^)Q$Czgӟ[/$?:3Se6 yuJg-X.h`b"˾L!J_(hp@]ʀG-U{A@JYeO(~ÓMF v$BM^ЇYtq؛_&|!s01VlqD4b}zZ'vD E+'6P@R=UeP][yr~čqf`D-9[uEUM؉e4LRwU~3:K+&-A3_Z㧄|.B,^}<Ҕ臨abp0F~zu34%T_qV֎ǀ=QՀ{IɦyO(~u֘D*?)y[S2ǪBnB+qrj`Uɿ4 sif}jȿ0iX$pT Mb WWX>?Veۓ=`qe6o>ñYlxz:6 -(qP<jYzdhǓPiGOFyL^,M6ERמYlFK\b+l\5NXsU\razpr8YN <;"8,rvFNThĵz Z X 7ਫ.X.ʊZ#610J'}eҚ\KYGeA('&k;@zQ^ <]ptG~,vǽL;Jd^(uq ~(sU^ vf&1M`>}TpBUA]MҚ9W[qf(vVY^kgǎI[@|G325^:7$iw{Bmϟ_P,@l<!"QAvb$bwBӝ0w%6Uyqme|q&VLFM\pT#i~nI[ALD72+ ֹ@ p8C 0;v-G캥5⾟`Po>2z^h1i&xN*ls::5SOQp8:9"*~D>) S{#?ASI\a0$?r \EI\EUq30 }"E+z)_jm]cٚ2EX:M>zyzx:s9Ӳ?Uѽj: Om]N嫉nTlFĶ6l0j15i:&=.A7c]ٵ6kQr3͵rʑVX$z:~p-]W]VȆH%I:!uGy$d5x)m Iq$U nW+A*'?`. C3D=Y EU!6ZLIq "W*H=f= MhmetQ .P4Bp_}Nb'폚{o<HܷKQ#K=8:ARqvN~jb{f`WA }olVyRWՊ?^ Uv'ҳ'9Jbr)[ .8&/?3ptXsd.ѭC}[Tl>ZQC_ªN=u|V *3 7St&ze%eMJ}Gg$!H(>XByvCg+힮#cR:*;T~or)lug5>=,R <$ĝ0!CW"^Q'Қ_G+eQJ+qn,|v@+w,`lo[rU ;o Z5ɽ}KQ?ـ!!!{P_k߯SAֶTk;do⟗<|u:m]PC"g/9xH7! *8VFwh[#i-,P,+k*FmdR'& Mjp^<9siH@Zk}K|y8^ݢ{boTᶙ5+.oԂWXi]oz[vh[c/6q, ]`#=c *Ȯl`ޖ3*W ̵[_T@P>Jol@Nh܀gm1oNJ^wXvSw`tlM;N9A1&R1 M[R:7#*NM~m?ƩWr[(IlvSȢ{t?>2,`+(ul{վ`[^uό`A}?v?+s c܆F }mP&3㚯^?WgZne͒y_}qRVW; ~A?iax<uj}$!~dn}ƈ5}&_;? k/HA}a?wEP jjMʠt%<)o9L)1cU3sxXq2hQ u!xQc=G." 
p Sj;0ھ7h R~˭]H0wYcJUe4`cRNprD<:Qxo:݉^ݻwRdJ̗keroQ<W/J%8?kR!JاqqƿB3]/!Ӟàm;ɧۙ@Ny7Krn y 3Q ̨*nI'FL]+pGe}7E`;ǝ/y2ϷVmu5QaC;G"O}DԘF M$~a.L߭oc*}Z*)6bZD7[-wڋөѴ wdɜenN]4> q:<{l-xfHhߎ'p*7uDkJ/R F"vѤJ!=_bgfO0@!\p4z}}.>TRrAӒh@ELPT)?ҿvN(hS[6$x\]cn.Sy,N=sF~(쁴pM W6>BVlҞrnQEۨ7a 9&u&]dwJ͵6;ʝxج絤Vj5^.{ WqS;6; "jknad:soo^?\NJٳOhqes|X5˵(ҭȰYrS"JPc{xb$U> Bޛ[TGrjtz PѩrټxA%-`ZH2(ڜ)qA`}0.,0&KȔp{};t8Yn/Pٷd& 9)NӮ/0Η*^ھp7 m6s/ t߉W TE7Q#ሯ de'p;R'ژ.ub_qwutrrQo:ʭvnsb.n$V`Y.qڛ;@"^7J Q o#3JJ8"=9yQkQ4.mn !+kE:ضtbsŶmv̎b۶m۶s9{ּQsVPռGiw{~BUPb &:Xov )R+ja:_3?>z3'K e7XX>8&}±$`sM mM9 {R=?k\rK"^?gP\ՃXHFa(sZEAoAҡ2]oT ˳^?_ZVb~_ 紺iey :Jr@?uRZ3V*6OoEj +ҢPkcXI:rH[R)"ةxaN9bU}zHuEɸifnj%OǮ?. q~>+DP̰NK?/a)͠R/  ؕ1u ;|)LW[*\u690"Q) ܅B 1~u_ܒ{ƚ/a-=Rŗ0H&rxRIby'WqԳ^b/,nȶyaꇿEO4/MM1q>ٝ-n\`Ǧ i;kGQleJo9 뙯C \8VVw"- լOK_d;)kq ŃܫT ` f1>|",Փ?#&x`l:gC|6?+;dd.Og7I( IT)͹<]k[FY&RŨbs [J,WLj4^➤H 'щϥX&ˇ!`gO@ EcUᆏ'~i=%z C?;g/ܛ̳Sư{ڿU.*hZ.xb W n.ѧFk0N>@b_臹uyMؗ~Eh!VwQĀfEES ;}FkErsMHq~Xgl){y)Zs?K4պ8ׯoLh-]1wzWv  KUO~L)SDQm2wV}V;nŚ\ fj,56\HNFB=(CPm߽[).J _6kF[J2FNsz%5 jds[*aϼд# P-e5>\I# c]u<~<*6Vu5oRK\Aܼ޶הjPt6kʩwt׃ʝ:nH?l<`midc"јΚcv'yK{kWJgݏ&&c[Agl,rZ'IimaR:hNjٸD D˹=1-'(m4̧Ƨ+4H(S䇃R6cFIz$Hc%._!L[Zs|ͳE׬Uŀ]w 'ȯ&!?"G՛ةT :tax*؂lі[zt~xg2Zc||L&əF~)Q/Z fyM: -'TWDXgK!V xzgelίii*adm]ec׉S%C<D85)9}J98os{ ]#jb@_Qϵ]۴4aix^VgFR$lt>)Ÿx1۩e:e`.U[Y>[ Q1kvo/Ey2aQX{#9AAA^l&'(6Q #,6/E&L>|"Q&wC4u옢v8lc-ŊC+/h? fyq_zTꄂs>OpAs_l/&"Ѓ(ZSkK4c<:n@5R RϱY!,}ƂAGXLq+?H::5q'm_C>pcBgg[ ix[E%?R[4~Oa X1Pgf')A_?اwW-{؜rH*4cJmUi,g[%_3i6艴#^ ;vC{o "R;頶{Un؄&VK'œe(S R, LҵX:[ t8d\^Nf@fL)FI^.NPvSܧԇckAgAD2>իx> $ v٪Y'ɓΣXW ꬌ[EHz̀I}^ ʾ׵G4Okap7/:gUV sR!ŖHnTgCu 2EZ[p@ە##,٩~r.]eZd927Y긽B (΢0&ЍO@+keM0Z7Ie 4ř3JBbN#0_0dŸl=}E۳ :u%SffI72/3ZF %) ,[1'#biRb͞Zbդ2d70dX] 1p3bL0q"|F.RG?m[; v~qyxM;h4'v50 _V7x`386]$˚zglcQGt]NB]yM\'7'4^ٚ{*/#d 唧urTd}ioXՔڠVK-M <&6MʻOXjO*e&(wf |{7wڵbڬD uj!>qc$0J(Ee];e%J"c-.eϳse'x'a&j?D˟RSRx 1 c[٘X B%`Nums R 'i. Q Fi;x8 ]3hogM4VSU]85\d5^-~;~dΰ<͐Cm_igY_ꦟGs#}2L/\]dX^$~ sTyL\@twg1CxdY1"XN$ԋ>>_f$rҍvNk)  zː8005y5`dbOrñu2_Aa:EJ&MD˲)n5?XA̖׊B䈸Uvt5 !ortJ̖AlȑlL+\@mF i茩>l-~e Rg2mEm8;a]©+?~*K0Њ@FN`L~oȉb2}$ LV eԲsaVqYƏ Aib}!o/Jcl[ea!{+pJBi0zrsUy|K") F1c'FRaÃg9/}BGKhUdӭ;jͦljaL\Y "C)CiIedD@m%F`9RQ(O,NF:I_X3(?49 LilW2&KpsI7(Jx~<йY2#\Q 0Ei3nc!`ԳQ=INwl;omrr<5!;4.p嫘B:nG6)]%5gX56 ٽ..Bc҆-VG:;w5SZ .Vq&]rhg#Go݇?;~`<Sz(N[e$WWIUα*"Ì.8.~B6X>ˌ X Զ:?C/dk!L[$%-VvI温L7s NcWp>A!8h9(=V5N qI|`dR@I;Bvm]i")ipu'uYtb:]ڳsr٦]rHj6nNJ<|\apH}WO. ݛЎtHD9 k^i4՜+45ge}" k2~m(cH0/S$npg xB"*R4PzȱNN+HFI~J.{DF `KaYP\A7$"%'{9bDǞ3 ;]&b8}baY%Zl_akwr0KEӂuxԑ&Y m&\bAAQ;k0紪EDq# =zKұl+&U| 0;J4ʁPז 7 #"9}D}B;zW٥Ō¸ k!z".(pٖ%F)#n쳣r|ć * Zc. K*uyMُ<59E }(YS B#v_ J-M+/[G9kEDDM]I 6k? !,/测2sIu0Lܐ+N2EW5S61b)㮱T>H &h2zsc{whΟ\c|؜b5J⯯-vUUDSTx_o)bC"N4JwtRo#%zK1r3?t2DRL?+RJ= ̣!]-蟚7m߱ 6i<"-Y `э~ D._`;FD\zu,]:rgn|!\?])_݄3NS7F*>@͂~,4Ui$EXU8 e0Od[iY;ɋ~{`7 xvb1M`XM%*s*>%bDN`Bbu!{r5.gaVq~@ֶhD`{-ǹ+*tv[#ߧ J P}ǗZI穪/:s0hNV.Ueްu;pߒ7>o,B@K=߄ќH΁o$P0׃"БN^VI&vũc/_)JBN"p[I_D4M;yߓJ~Q%[5FH9xzGӋZ&}o?gৗt+.V;ɍf<ŨJ$Na- jb3~)T \_p|z](e[F5_aOL-%d2Uр$UCt*h\9 p$荷˓f_v\1RpW_43 @+!,2 QGy :),?`I:;*O1is[ @ x%3cLF4!30^ [~@d/ ko5LQkx\yʢe%P7|hR9v H5PxS!h_pK(eթH4DVɶa[XsHsfl9*5zsv% VN+T䇼1r ?!d|wIg֛_K#]XC^gUJq*! 
ZtjϔaH[tJ풣3UuNIu K&h3EQ%pPє7N\F -.]+:ˈ 1`W6E<'aV$侁ͼ-9Pym6=o6ҕ' t+wo *#?nHҪەv/sծc&Q22RG71ji!OFم}C'mg|ǝ`k~ |K:|2Gz w&1/C >޺)dq4{C AԵ1 )(.zfLZ{_%Q8NqI^Uu D$~}ȚsY=hnx\Y-!jZdU>u*Ū"mm+1غ d / 20w֌ΒmЍ fi%{RBX=L{R+ lFSvgz`M;Om'ҦsS"stx4+ag?٦>8f1h}ъ5:Gn:vܬ|t:9_#k.q>ǜ$WURz$Cjm%3y_w5.kgA: bIۢl2 =-d6 "RjgDUŝrj&`O75줌{l@#0 Z^JX[+醎rڧAqٚVK-e'zPޙpKWJI2jIWQVLhl;}e&@x56M0#s̭V1@bEQ.o2{>0~wޣr''1HNҧ?Sp}b]aA(`S4# 4~* ǽD!t 1~' {m |2g9:3e_pO3?T4c\KC`cY ^?ٿR5!vzη9 E1u Mj"G+I_kA _!Q$ G!&6N ȷ` DU'_wZߒ1H\ N%K(Zc.UҲ.aY-*K 6`$UEm×i\Œ_ Vz6pDyԪgz܄xyjC0-fQϵ2D9!|ar.ٟjץ ?˓1mڰ̊c~T) =fh4U;AW|[#>BSf9ޖZa}22"!(B\1Uܳ ~790Kj'v bԅ\cZ:Oq[3JS/@SBE/Te^{KUh3\DljSm.ZBۥV|`E(-fGK+ 8:j456/w@ MwrH#z)]KِE 9G-.h?'BWk'Km.%.m.{C !G_c_]fxc? DY pSexHL) 7]&mf {vT"p+ E$C$D_oR8 ~h^]΀mOF7fl''W:mOURFܡI o7f(?)ȑha2r>TQS>9Bl[@oe:ie迸5 0"G@ţjk0ZTŶFGs? A:j$j4 TN}pDiRZTk|r J;:lWKG z]蒾"3VCޱ+L4G=WUٜ9z]{5<ֆS}4);q ٵj0s҇2,7U4;!YhgA\Ͻu {M]‡k;d$e l* 9o,rOo,P"E ƴ2l PQ~ s,Ay cPD˃R$n&\>D$CfԷVCƞփXU5FTY/ Gs7eZuk6CK5j,UFZ8V}P#no^b쌩Sl#BOut 1SMm"6ϤaCí dx[hec}6@zVy7ۇ`ZiuEbR$NxWCBi7nqUPT*+3D:ye А0yA@hP@GnƯFKX(>㷔pPkbSЫxVC3bE{Fp^`\ Sϼ <04lO! 35?R4]r+V?mq`2~k',4`&]L 'y>hLE2lD,+*V~ S-LD):k?nst)m%a#=j,EN)$.?3:s{ `_\WNL%7YGd@wu1%Y3o엻zc;xvνbݒ}t=UGJqؒߛ0mtt 쬇X2cut).0jMnG5vh=̑Y_q^ .rR/b92\AĶ}MJ=T2ۿe!"oRLߴ7F+Q]+3*\oY2]e{MUO@h,k%:i|.ya"7#+M[ݢ]= #}1a6[,:ԏ/Ku݅ B!5!'1k6*QmbYU#]AU Jُڬ}7G0.I=l^ C_ $AQNVwl#uKWv8eo,m4nY#;Kꂜ$tl Eiom}cR5*1x`I+Z H [z8> la3?8,'69'Å}?f0d6>n)swpg#mygo$4Y]I65< s)DљgDBJ$H7Zn7:ҡ V" R#?󐏚Gj\0b@co)#D ;?i,|ɺu`(J{K_9L30%1\70 Buଉ4rx׶YÅ׭={)[stYks{UN6P](i^UӄJ<Цw?{*Cߍ.Vm>LKѢZ Q1BDl!ˢHq[mڶ.0%رʸC{a H)-rdlPAG L:%Bv|O.}H@ ^/`c|X׼?vqyH xA(ϺSF>Eo# OB̍HÙgW5F#{.52HSx]'1= g~M_7k:d'AP||PlT@Ug?oj%7b>Z$4pt?lZNe1a !.`<)SxvxC6+LIBWJ `ɔ a asZ'cQѥ9H1MV!)uŒM-yħSZJ=f$&)YEEReَbG4;Z[W2ySzOB9F@"QiˑV";(v(tH y%|c3(ܤnp+xsF>45Xfoc3h]Qr_@&sPF]ٞ˸W&Zi6˓Ύ̻<B(E nAw  0]Ӵ/}V! SzO_e6$0 8\TVdܐxxDN]~_@+}2Ǘ)bd2,tz_r\8\t Qwo:JIHG7|mNd9H VVtDjxm$yIwvm*CEX>.T"EKl篋|C|KG 67.+ z)m:#FUamDBer]\6 D}MmxҹZ^/;|<[BC_!\ipk팯γ."s[ՈTŊQ{ Iw=r pW@O_>6 I==u<ށU5^H/h4Y-r jlX/?`wʙ<|t7 ,;Ė{i4Bps Ʊ-p53 l7+eEmQw1Qf v0ԗɤo #ih\D#QnD4MyD|[!. {lDk?sHR `mϮZD&qUz$jpNԪ.T]5 z~> XV)N|갛~5: 9CizL~',˅kP}SNQǢ';yy1edwvWՔ[ mT|Nּc7&3nq y@{dExo |)3ѮwBYB_B_svm99WE{N+k&C5/ߥD!g3g7u,g(%)XWESTHu50yߛƞoO͑:k>%rT"iu\WV_ . ]It `=ˏVrCpIl.q?66W{PK= Wtav7ZF R ŭ4D(94ȉؼ\1ӓ#w%5Ip3K>gL.q7'fI o׻:k;tK"!0jCbEAY=2Zq)(^ؼg`ʳPr7(֟#]$o #{@pL#1>hԗIǪŹ, 6=f(]a/,c|Ql Y5&F|?FqtO5T(,~LCcp~q;=rǏZW8 ׅev`%6lJ]*h Wl6OPVfkl%jtR&]9:jO֨8wD/?$iG7V[NXlÃPq⺽xK)t}Aڹٳzdi#,Z_Չ]fnIf,SXc˴媩P}% Uãs> (HeXѤi!Du.|h(ZO;h s{~)sTP{[ju&UJ,YzV4 } 6K/<.ُw,L!n=w-MwSB?. )J'ZGJKŘ-^,T{k$lJ]v5^9S}Q4;XIve6 ^eYfW6D(V8=wS%(bլzx]ΊPfcT^rSC} 2f87jN3{,)4oCN=&72: iJv(uX_GYT"Y7PoY[fjVҴļ0p(31_X|dwlOns.Naugm[:f6coE LڈD?%cCwְ)NbQ}nq3NbXJ.LcupfHZKׄf٘3B#D-/Ex5٨l0:;r⭡ů*ncKpwq8^vCиUgfa<ܦ\ɱk-SΏpcͰڳJ|..1hsm]"peϹ iGfLI>hr4?>͌}zPfv,CM#rNil_\|@wz3.q*ƥ-Û C`04`Jm.gEhJUq}2 t$:>\.iS. 
& 1s_+'vL惕$˪6D;F .11T)El YwVAܟzGUs@1%̔.Nk:,tq^MXϙ([inVzKHh_4L,7bZc_A{Bfg:Ѐ߬J/+L-3R>QZtШa&:𜞸]#jgcYoL_`r->A6uj(I=&Ҭٺz4}qpW+>:OC8 .81K6l왼cvQ̘]$ _5v~0@V${~]vAI@ `Bc2&axP~[|ڔxz)rh}|2_QqjdSniM-Z-QEX8Wt3fvj:M_'˘U9h"DS%f>ҧn|Bn jEWSҏH{~=Bpڵ߳*̡K+Z: 1CE+' h1p@DoyD vy Hz啾2F)A6Yp[) vDvRy\V[B{s\KCnby]vȕ g7Kb0 ?xWVf#cb @n08{X߯]^_,këaC]U;që|+r`>B{-K"B$'^en 5o%Y^Y3=KȲ#}V}vJ>Ƀ2K>l;l<ʬ\CFtHoE .g}Q^cc |td眲 xVt>JzSIY|=*i 04ܓ F"cJдDp2Xß*Šbeym_Uwu^`trԪXj%,Y E'M@ؖp3pi0cEyEOe!jIlU9^l͍!B.ۤg#9%^?Dk>⻠DdfbVLnG; 2gJOW2T poZ&*Pd+iQ']%.EW~QqC0:tO s0g4"Bd0ŢLri?C2*HXRvsJVq93-'}1YϞqs]BxyĉO 0і)7N:KV_Dr7Qԙ%=\-Gbdo6瀅$ڸZ֫ 8f0yV6Y˜ ֫r+i흤ePLxҽYȣPz=\,X&qDi+AC3}UIJ Dd wN#!c7܅أu<ƭ1,FO4S߬Y+ Ցk!Ow H@ kw]V[5ȏYʹt=tED!{6 P0$ x_ϖL`GX6jǦҐHémD)|٨}a9T7y_ ?h㳡7?nmaD P-?epU!NRD 1<gG%oIs?qW`C&$i|_ R&!BP_ ?_;CҠ.?#g!-\s[W06#im`yI'KF(t/-\Pa!>͠܌b0@,*zk~Q XCN@rW:cAed]_ &M;@(p'[v@$h:JXԥ ?./lUn QvV`B[eڊ^oGh췂Sf_rǥz0w6,Z{E9!iZJWmx/?{)uY}ӃQ2fBQ} HR@?p7{[ھb#\΁>QwAJ;y@AtJID>k@ӂ~0f*1H $O.Zsi1+0+!ݫԲ^1򹄌!YE:0-wq.ISofŷxފ)t^$U]v]6\dU=+T =s+r̀VM*p!b^W*Wr r "m-֋w4#L$n~V#bhQE71bhF9*n9j~QgAlҩ?H!$Z_alTiuf+POSSPeZ(zYϜM7A6@l+|ctU#;t3tyQuS~kѷivUf6Gۯk%bo@ >Iv#0g^֏ϸcevkL3zA\Ë% ^F H=o_n$s9п *xu#Lt^~=>鋻N>%23h Xuu/fh8d^bnA ց*Ρu#5w^ԟLoe^ꟴٖ7xǍ1Fh1qpY‚P>?j7CY>cqcj6dL)/!4.#0IfeJScJ-#mwK&>=") #&򬌲I΋1^8D [C Jci- 9н '/LF_LF =dy)7q .~pJ}K,u$,h)60X1xNO|'v}trd}_\:eR]cwi} 3D/ˉ+Jy?=:?*Baw2sB3頠;nX/D^VwҲq],lߩeyNݥɥEn8)1Pr{pŢ&/2h.4)1Aʽ~Ay-dWCM6~ܯD! YBR <5C(0$0 6?̸A102?hxY6ۥGm_!B2bnX0KS1T#4+o 3_b&RHY @gk'$`X,= &#>I]Ѯ~nN'4|aSUDNdNLä3nN'bxTz>Ւx[k&j:R7j0p0A9)v$qVҌ\7 a33g õ Z),Ae tO̹1%7^KI<%0?/)>rR !SW?=Bb"oLG 8 &/}0_gq#կƱ3o{1~ZM|)T29.SԧR!#Ӏ6\4gtdƩIE@ IZA$LiɁˏGAʼnl德N:x9 qdNP+F* >Nc.mb-Ԥ ߚ)@\&n.JDS{n~}ݰZ1LdySe I `l3 [#uwnJ;Yn%vH21JßZd$UE=`,uHD/X8 zYv di+LY^.)ky%r{˜00Ai;l.ki!BMK%Af c7=bIu*bzFص?1^1%1-7iQ:6׳ԙzː؏Tx$\>o3d"WZ0mO[.w)\X"D}mtH>;Njbcq&yԫ5\Ew)t>_jcэM x2?֜C/u~Etz Ti"٤e~+Tq,ܝis)8;Ęe&,ĥ[D}[Łzs-CNފqiȒujeQ*8MxQ [0sC5xiIE/7(jH$GI.@%:2:^Q3,Ti\2ӹ U?zjC@X뀥JףXS0Vٳtmb-D9x|UޤEf簲^'=Zp{ݼ5kԤs13?4 wR/fdb7!W2i%nL6<8?/v$( ?~.%c1E]A;c\ǎ"bJZ8?u̕ ~&D<$ujhش|"LpZ ٿ5}C!:f;LAWI~vI%C6xdΤp Y'MÈh|^X('LO ҃j*+ѿZ _*@U"n2[ݞ.ܡع/%a/n>]R ጷ%GG KϓT`.[b ?x `y+Z[C R/(wc{/b}Z/ 44)q!={C֮.M10zc 0SfV=]G e^ n,SQHLO<\G$5P4xߔ%GNMw*v6:B. Y=kM>d<82Ț(QJ$݆C-vYEJ6UZ-&Q]Iiz0i(JyEX&$=`kbA9g]>xrJ^hSin NnѸ=z(%9Ŵ^&u TOd]%YGZ?qZf<"ڟ*G8#`WWŝ$1ڴljzKAkLM^Uioݗ t N9u&=HQ]Z?w(hp\XX=⟐{0ozDz'#؊O3VŹj:]w]%;w=L#2CFn\bM!x~h7uc}sJɈ\LVB+hx & #Y]DIUBw(zG{ܚyV1±r#T%Z\^W}QaX bԨWڥlF y/3Zupmp[ӐnY.s$Jv@dpRe"f?UZ ZQK{Ϲ+Ǔ_"%s8})܌Â먇0E%3}5lŔO~]T5ﷴ_N#>GFpYǎzT߬ێȦ>2R~ğ80_JiP0ҍVO C%Xgp`D*,Kow>dxĢ9ej'4ztZՎ[$eǍUўpiWSz(3 l|C iVQ34V*O=a`XqX0cr,@!ըީ~9\ C.!8][LZXubDyEbNqG+fJo ͇mzЖ>BH^' [NY$a"Y+oggQnnEmï٪ f]UBجK{_w)tC᥼n~]&}* eAT!]4|kb&ZDN=Ά#TN F:_nQ?#n 4ԃ̔ YE?ZUW+i_hn #YkaSO|]]GVeqr2 ry@V0Usr)&_5CBB$<-&>z9 >JQJ^M9Lx&I[LÝ-\P4<a!/|t(GZmهGGed[^N4vv[H<|/OX0%f3:/Yxd@1H-8,3%~9;|6#Az٦C15k1uԝ-Lx6N=q 7I i_(H`cUM!XV06'KUX\/h]،|)ClenrfiDE v|~Wz­N@$=oѹ\lcDZ ~vGWmˮfNZ}rٴ.)rwg\OHs]mNf1ϬNw4jNJ.9aFԞ : UI@#ʑ5b6Z'z [$T >:ĻSڥvT(QD4HbR5uP`M&;ZN4!G#bIMOfe33$3|g5|yӫbԉ4]G;0[gX !afCSȅIԒDSBZ|O~T wCH{l1 qשvE!1M܇hrjE2 _Qdiw?h@,{wM͖Dcd1dԡs~^ח8~+, t.`]UX{N< Htգ^҅|-pJ4uk_[8c{G7t[嗂Fl3Ce7rnvšv:,cIO@̪7݄+5;Hw=vA+>@af{mD~;nkN} bqs]r9~9 dyu4oNuAvz/|WA~;eoȈ &$sbvT#"H,N4pj?0i +32D:O^6_I~ubWcOI~;#Ql(AϢMeL}>NdUY@ƨ] NǎR=9z"e)`?1}c9 1 Xԩ7k4y`Eyvۄ_`Q8_xh\fף,]ѭ&;aletD~Y)IB%;fuhπBdalv;KK=[ d H@.e)p0TMB3I#;?s^"L>d @+nMoن;6 ?~HnlQv+5}$AX\6 Ư;q;2 0PZq֔o FȎ?K43aщ;.ٸ+#;TALZ!0bYI,u %>ޠNgi&&fG**bvDݍmR^o_["!~u 5^ E;,.<ݾ,ۑYFI 1$x1 ]y dnVCz DHioW{X;-@+ ?7x.\9}f* ` .CUb 'Iiq P8E@= "ћ@YXHMƝ9lٺW9N=П&))2fZphIdo !v D-IֈA!%0||!.)̮aOݙ@Q;5S[Z}J쾨fgå&cR`PDSuGd3 BNrOeUq:mh<Ӯ>i\*EkIf!X|;AԷǍu`Q? 
#t&YcE=$eT)Rn>Qd _g_/=ꄝM{UDM {H{ 4yNPuM da:,> rtވzQ AЌRfR؁u C^cb>Vr;mK 0eT=[۾}p)AIw?kKMNö'ŹTy87u6R2D oA(;"6ċmMS>CA@u.c*!vdZdxQfqp/Ҧ)+K:uT'4Z|+ǽRjuq2cckñsϔzX׍ש72G 8Mf.Nnӵ =G>lX8Gj L6N6Y-z-'4nKi SbcLK3$!~2!cND$ ޙSbAM"zN jep2w};C/7"TE,cnku?$u#ЛW`=uj-Q1eB'49OYņ'V4 e_R};z̔~C1 TRYtdd[H?Z=L2:Yy`5ҍ}F?>/([8:޼3Lg -q_DML`` P%KUDpJ'Tj,F0!ͮ8JȚB&b592Eۊx#z7)lfa4\ݼU RwsE~c֤RՉaZE-Ao}g@Y, Nڡz?(\rb? Fc% D5AaJ#b }sPI^.SII-lJ$s}o Om >3R#1Kon9_U,,F"xi#j,,XŧPCۡYP lh=~DcS4[Nf,_ AnINknw#jQrF j(/|ʆXAzkndL@rMiGzպ "7`<tQj3x$kMyѝh0d!k}4~B7-@7sVg!t|mfҁ%眩pmswXc#dAŠ_ƥ(vfuѨFe \QT@Δi{cZ்J 5[D3IGϦzڛqb)ԪĖ.cT \]yiAG{ީh77ޜasR-:VII$E*Րx*Y?6O>LN8:] 9WRH7N$YpHY+]Nctܺc8ji+LbٲrwGՎ?!]}_5V;w(&L \Y'^uCZyp6V&ÿV+!t,q88ZcDp^'!un56Y_} RUD]l&k NlH`_'сY* 4 q cExrcR5m-t]/(9Y7 ~G>XU&WSQ/9gFJvr>1J{'P zcago  ܇7lpݒ?ُ!3%ȌSHu,؍&I}HK!C07\ڂϥHDw7]o<ʥ Hkhn=N+ XuvBY%׼iFGTjt-`:9uh7RpO KPiT)Sv ױ wfcWxfPv;Iu__ !F8';{j<5* Q.*'k_xC#ܻѢhF%*}M^ԫE1Tlnŏٛ zzTi0c!/E 4! TB7i^MyZ 45p~%c 2~ijl#DBzJ<<I#'[#p IpC31 Eh7gD7mUt\B9."]h"r|Brd>ax?~9xyo$i\q=&rE{-6 9T,NɋB|,uWx%ឰvjzX,!$F~ȬA"iO JA4BOE^ )D>ZE Avܾ-*y{)QY-KwG c=)ō9 q{1WdQs$A9r2`hϰ_H kZ0j8F '_}Ac^i}p3anjvrZO D@f6EY~QD83vwzޝ g퉫 ;Iλ7XڽJBg˜>Ha߀ ڱYly򯄴/O'l^yf㏛*^ӌjЛy~5Cͯ'0>2ST-_Ib䔈! E+!ay-,BgUD_+/9{h3$ᰇ jT/s4aCH#qi*iKVf~!Bk۰:Wj.3.g"%͉x"ܪxƬG΅Np5y\oÓP&g>&ũq5Dy\rCbsx1sd8"Rq>]~6ZڣbEP;$SY5}෢lLjJ/h9W;S+E4hH%OZהN3i 5-gVkLLTo9bHy }_PԘDͼ!^İ"jYZe|bs@ DrH | <,l#(CS6NAmЯ *HhΖmӇ^h odO ݪ^!<_q x.s+y]sG fKib1bE/drAA! ܅yii 8J ajGk7pKHmaLAA]xcĢ>'g0РQӗ _wTl3;O@3^ FCtAGe*ŔXs^i|2B_~Á -$ Qg;z* Jb+F)YexQJ2-hW;Vކgcw*ENH6_$ͩ})O+^т]^Ǜ8T#DoXW9bs%ݷ􉂊pC_ڸ-*e%wῌl]&ꭓ{4(R7Y:FR6W\cϜKLoH$QndmvPtϟ+?E3?4/T7Du id5?9{/kzm%uGqxj5O>SZS=!qB0 R6_L~_/S& Uʼnijr|_(w oO+/I1&P͟;ЬoZLFȖ C)vO Pwx i*9tM z"3)ye*+*\#ԥ.]Α,㌨~!fKT9tQT{Ȭ?~K$ (:ԙ eTFǓjv;-tPc܂O ⶕou玉ù>W)0|νlwGǓ(_\"(AI#׾թ7HmZH[K2 2#+[ִh#:#d)Do!)p-G݇¿0ϱu pHjG\) }>`k mz$bta7_lHe䑓C56(*ocgQ #k'0්[l++&u2<@G3TyQ % qC ۫ vP||)}sjC[$e>Tj6XҖD[RA['ޱc>—I6;L;-:kqЉN=lPֈMJ^;[osYfKu3ԟ'[7uM}BSDL(SaL߳ )ă#zJvn||Ypeia bWyʹ=)uu_l̔6' 0bаڟepB=hg)Ƙ^Z *|ŋ%5EMk-&1.9gh&aAYl{_ M R&V+¦BH&Sss<8ԡ )vJ ? $\HI[6BH{3{3f1M*4:5\3׊!)ЮnEŋK_*#j"g}![:]FO;\e"xG=p ݷ"W[ mmQ3֟Ev^vϦ`3,ʊ aS~,{/n ,`|IlV Y=1M?b-WaS:%õ{;k;3K]'Qd\'oBgmjF!9X2e:c!l8Va3cp ~ WvQ5HBA!YvK^@ͽc\P=CEam$+d.~s_4 PljGb~PWȑ{@k-WC&p;ƴ_(!3 Ueή{bt/?5J78B5TO !h3$'=Pј8J M&#&ل7'3f&%I=PN޹* WF"޴~$+ۑS %ԈH2[ (9>!n_=?I')Y)5L_ym:HvjNk0 "!Vӥ_.?QRio8'#VHk*mR$'rS}9&=vW0-zYԀƏvLc?!%_h&;tIv۞k5K,ԓ_*^U54wZΈ[pk׋WwԖ`EPp"6?HB_N9ddL3>9ۇtQ8! 
<;&Z< }o^- ~7EX i8 zbZSyAIzA#ʊ_gW` Bpl5,̙cGXғ_O2z -0$J&'yT8JCbeO^sfQC%EU7qVZ:J(ėhgۣ3;[E:`9E]?YTk-M,{dkt B3gB?e{I6;ӓiCH4Q2nz28dYbl13h,bffffff,ff̦ٳܸ;ߏ^ѹ2rUΪvچN1ø}$|YùoGnIiE>DQ6rnG%ipZ0hɞ )'ЅQ E{mgJ9HO~~eqd $C9c2‘ѦxדNý 9Zu{ξZ'S)u7dۆOʆb3z_YyEU.Yb, )$Z,-P偖4*+6v Ee 죹u]3 ĥ{0eNÖ>[8 ESYs%ierƙ/ .f2cS?{yff}PLH B4 Yvs7͞EZ4Dcʃ觾ך$5>ZBW|yR$iU8 D8aL#M{I^n_rk3*׮[Hx{i@ l,?LFs:x [vh%.7L 0Ie=q"3}86.b./0ҽqѬpN#v04.ʶmD AWȺlv*TߪZaG=لZa}`}Bc4=(rj ;Vk VxDzgdž.>KJoxd*D΄;_ Ԑ.ޛ%?^X25`Q>L.\KO 5sF5h.wz ikկ6LO}/~g;J vxv߀) k#pw50ڪouQŎXr'v „*!ACڗONj{"ĺCԵ*bhK1Ҍ#ZbϬ#b0j,A)!s[C'4 W.{+9PnDE9MveOKJ=&( uZ+nw#+4sR;1/;Z\r7A:֐'jg3#ա5pyfTg5YS8vȠzٟ|}?)pHsGX#US8LAۜ@u""x6[5e .Pk>ŋ=HIknM!}F:| 8`Z=* buԺv%F/|ϺWVG@>?A,5Md@کH(B2=nTEG,7zu*'),+YSM^6g,E_KD qOp#) k9[{DO6Bp`.٢UPCŦ2hV&[ #b8W(m6F޵ņNӭH6120g1fzc)9xD0['\صwd T8x&?W_s`ϑ4r?= a 3 q2M ݌;EEɮ 'p3)[.cw?.W&߄#)[K)@/(W|Z(-yiY CVQ{cC}3Kzl1#>[Γi&lIqD\5fݖXg'}h.k tKn!T #&WOIJ)Ik/lBAkW_A(@Av9"N:GUрr7Ȥ$16t3'HΞg2lXG(RQHҎF0 ~Nb͟y/TWd%|Ūp`y ͽ>H90fjjSmퟝoAKqs\n =S) ɨ֜ҽhG$SІܝ]4m?14$pKWfGz7pj/Eg8, UuZ%ԟzzқ^x4bșPnYs8eLuo?!EM00gXQ)>kCt)uDw"#{wî1[VϡFuw_(G|{&"^3C^o€/0b@UC-g0k|#ȚI55 IDkJ(` YbU)V5}9E s WQ ~ (*ALiV䡅pg8K <޴JeHJ*gwޝtdK<}}ڔH<G;bTW~ Ē`]?%#<F j4(x+\E]iyMw8щ48ٍH 4U}b 2`Mc&q/6 nUqv'Zr dv?o 6aeI1RѴ vNWJ^UwQyxB-ާ\M( ߨ$b3вBo+QJNo0VB,+Qz}hTUk8IT;v6ئ^y8~H]Phȏ!Ύ@ǜ C*hG$SN?Cҧaٜ*9]n40gmp-频3E/SA* e*IwsՊdxZ:Db ڽ՞6PWDF&R+J[v3yJ 1nO~ kٯS}:"Wt*( :RYQ7`\ScVӢ="Tnbu{ue[&,0_dV{7e04"b " ~Q+DU zS[wJ!f`* g⧜,F5 ޱ_YX3UN|L'B޷WJ؈rϕ\W d+Zիs-}ߜ#`*әQajp96q2I6(#5;u-W'wB FكkhS<1DN`L*dX|[(BBOQF“>D!wչQ24j-ohC/4I㢰T$ʫ<>OiW9MA\-9Lݪ (GGꦰm{!@'7M%9+3yŐ;[ \1ǎ'AJt|IW8{BN?wo#G`'gs]iy2+\^1 ~p/t4v _wȲU= <fV+L^[vZ?X `VP@pL(w% 9,;=pقn$9*֛ ssksz3 -/g|Kkn{boٓ}I~EN@^'/S[/؇x@`JmՕ1Yj?hǿeF责HU.L۬ʢڙAd2dBbB|aɴfUqu/H: qR_j?Ⳋ[n`"zA!ox* ȭP"WU6C?렿Z֥sli+JsН բK4(~T1IP}Tό_4y൳>I 1&)ɊɈ-~$&yFZ6>#6*MY%<̂e, Yn + Q*v+"1Ѕ|K`_f Hh|EփjՃ~M)a3 -Aq)1P)b&Ki0~b7;7\ N֘S+cɧ(JQ7(#;fAրIuD  ߙWeδ5!U+h#@^KlQ)Ϥ\fv-QZT/tdZ_ a6u& g .$}VYU@xл-*,sOLu@u cٺ ߏ񠝜~/!)+\Ҋ#]5yrg{]j6K,B̤ 闛Lz1+a抚Q%;$l53]x#͢d҉:5wz/U1;1zPWVY6 WfX[mTL "Iva~=p].`<@lQpȕM L"Uw?0{حREzOcml"~6<6"pAbܻqYΆ hY䋍Aʘ×$!r 0$0BN-ە&P6&~fԛRs 9 ?c~W!-o5D^;+Ⱥ.jNm~TOڔȐ5ٳ}-$ 7/~~ HbS^tŸW>|R}̑q CYj |陮ƗL~5J8G( oܖ1\6کQZ`XY -Xէ9'~V=<-FڤU(E9z)wi9 oI=Ƿs+lsS;r2;.BUX2 ߏ ,0Co 6 ^ MAeV b6ӇcoNlHvi؃`;kuccBxy[ ʇ^p /bV]^r%ć%?g}H@swl)Y$D eX_ ?̶ZM@1$M&d6!$ ,z7"?>0B~{ @{`8'|AtWI2 ˯gqXe#]-% 6t|xʍ\[p5b,GF(I5=,T Wsj8 I+?{կ/hkC(7EU˽_gKWoO|VQĠ813[!Uz.}ɰrGYibCjFb+$,Ss\O;%/8C~<у#C7Ư|#=8R&;aXMrYߎFVvۢӬ#qpMg|)EN}1Ҩf) }ap sROJRyle|ʸ蔟(b~e(t:G֮=)% Š1r~qcLPY ,/Gc3hK/4#Y_l$EB~أ( ~/}I CԆV:cE" Qz׆a!!rC`cX B#""CkBAE]B"ϼ|Sr*=?{Ԑ.RZT\U\ظhUXhA_2-@Uc)EE >:Rs]fTcHyW5ðIhxlx_;EuG|/D.5J;1ka?v"-}dl|3h'űˢ0CXbl1-V䲿UBV͉&zm=CSE،CNFv!) u/ҳ "(лՖ¼O+02y#!;bK(dA16>w}Lb›7 c|{{ϫoQR7!$IN '8uVݝ%\ˋGi=tCHC,~IDL+ aKCs|rBaHN5k,3%a "F"XAx&u'GXZ.JJ%+WuϔYGӐԖ[ [cJ;y)DZw:{- !oڑ Xȗ.?^g"N 뒣S (h\!HZvJqNs!JT11-q`}e/x %l RWVYNsbvDz g(<;b%! ̬I}V1}.[snku<$;1ֆk-./҈\)ژZvgR"Z&JChx 2}RɄbhf,vh;plmwm'q\ zaXGc\(gqr-3CC䛸R%+yd d]`HW~mtXȞvHBz,KX,uR+ؑ}\Տ$rrlRvYSS+ǙwơȠo9'+y*~ pǎ.C :&oV?hϝ EĹ-XE{}/V kK 8KCώr.%u<ż/Ϫ˟݂mThE*뎸O[>u]K* HwXtIԊ[4S'ՅE`{mC+6nK*yov,wg9E`)J ;sA0+*pNIo=~8F`(pڰM,F@X;~e}Fչ)JLn'u0O98Jq}8k,k/\q=ttE&j{=A};'H1IKw; uAt:d /W{{vvn 0N3rM{K$ 6f; ZeX&PUlfmU<ԏb,䕘¼B Ql.# $@E:| ;X;ֶƯ~fה9M!oo>W쪉bVgZ)Bnnx3@Û=)ua } _gaVc&"nXU"֮y525FkbMnzӚ3rh $Wҵ;W&\ PNF85/%xML6XgVYf8 '"lSk6dWdΟAoU͝Ѳz}!c1}a!3H2ˋCNYQ%2 [0KR9NnYwg wGh_Ego٬4ƿҁS}qXphP*CIxb ؽRsQ^Я9[g7uΤ^.Y086 U_3z**}K3)՜,n#+͂+,\ȁ0.OV1L&,凈9 &#ܺq)]Yw{!6KЫ\+p]†e2Q)\dvjFWhliSW{0~y./ZٯG/of>|8=irT˭`;qZ UlZ /B!T$?(aܚOȟ=zI7%߂%e1d WRp9XODC)CKCڈ9[40ly,gyoXM+;oQ< ʓTsH&U >coM;EO4. AkY݁(#09y'0{{4"1BACOd$3`+-7u. #Tƅc>C|xE",Lq. 4mV&'2'(OV 9ܽ,H1v:Ft~Pl1 0]nՎ|\"4x&rԬJBfP-Z? 
=2.0.0,!=2.1.0 # BSD openstackdocstheme>=2.2.1 # Apache-2.0 reno>=3.1.0 # Apache-2.0 doc8>=0.6.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.5920405 taskflow-4.6.4/doc/source/0000775000175000017500000000000000000000000015447 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/conf.py0000664000175000017500000000550300000000000016751 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2020 Red Hat, Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import datetime # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.extlinks', 'sphinx.ext.inheritance_diagram', 'sphinx.ext.viewcode', 'openstackdocstheme' ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/taskflow' openstackdocs_auto_name = False openstackdocs_bug_project = 'taskflow' openstackdocs_bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # General information about the project. project = u'TaskFlow' copyright = u'%s, OpenStack Foundation' % datetime.date.today().year # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # Prefixes that are ignored for sorting the Python module index modindex_common_prefix = ['taskflow.'] # Shortened external links. source_tree = 'https://opendev.org/openstack/taskflow/src/branch/master/' extlinks = { 'example': (source_tree + '/taskflow/examples/%s.py', ''), 'pybug': ('http://bugs.python.org/issue%s', ''), } # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] html_theme = 'openstackdocs' # -- Options for autoddoc ---------------------------------------------------- # Keep source order autodoc_member_order = 'bysource' # Always include members autodoc_default_options = {'members': None, 'show-inheritance': None} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/index.rst0000664000175000017500000000154500000000000017315 0ustar00zuulzuul00000000000000========== TaskFlow ========== *TaskFlow is a Python library that helps to make task execution easy, consistent and reliable.* [#f1]_ .. note:: If you are just getting started or looking for an overview please visit: https://wiki.openstack.org/wiki/TaskFlow which provides better introductory material, description of high level goals and related content. .. toctree:: :maxdepth: 2 user/index Release Notes ============= Read also the `taskflow Release Notes `_. Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` .. 
[#f1] It should be noted that even though it is designed with OpenStack integration in mind, and that is where most of its *current* integration is it aims to be generally usable and useful in any project. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.5920405 taskflow-4.6.4/doc/source/templates/0000775000175000017500000000000000000000000017445 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/templates/layout.html0000664000175000017500000000102200000000000021643 0ustar00zuulzuul00000000000000{% extends "!layout.html" %} {% block sidebarrel %}

{{ _('Navigation')}}

{% endblock %} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.5960407 taskflow-4.6.4/doc/source/user/0000775000175000017500000000000000000000000016425 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/arguments_and_results.rst0000664000175000017500000003400000000000000023564 0ustar00zuulzuul00000000000000===================== Arguments and results ===================== .. |task.execute| replace:: :py:meth:`~taskflow.atom.Atom.execute` .. |task.revert| replace:: :py:meth:`~taskflow.atom.Atom.revert` .. |retry.execute| replace:: :py:meth:`~taskflow.retry.Retry.execute` .. |retry.revert| replace:: :py:meth:`~taskflow.retry.Retry.revert` .. |Retry| replace:: :py:class:`~taskflow.retry.Retry` .. |Task| replace:: :py:class:`Task ` In TaskFlow, all flow and task state goes to (potentially persistent) storage (see :doc:`persistence ` for more details). That includes all the information that :doc:`atoms ` (e.g. tasks, retry objects...) in the workflow need when they are executed, and all the information task/retry produces (via serializable results). A developer who implements tasks/retries or flows can specify what arguments a task/retry accepts and what result it returns in several ways. This document will help you understand what those ways are and how to use those ways to accomplish your desired usage pattern. .. glossary:: Task/retry arguments Set of names of task/retry arguments available as the ``requires`` and/or ``optional`` property of the task/retry instance. When a task or retry object is about to be executed values with these names are retrieved from storage and passed to the ``execute`` method of the task/retry. If any names in the ``requires`` property cannot be found in storage, an exception will be thrown. Any names in the ``optional`` property that cannot be found are ignored. Task/retry results Set of names of task/retry results (what task/retry provides) available as ``provides`` property of task or retry instance. After a task/retry finishes successfully, its result(s) (what the ``execute`` method returns) are available by these names from storage (see examples below). .. testsetup:: from taskflow import task Arguments specification ======================= There are different ways to specify the task argument ``requires`` set. Arguments inference ------------------- Task/retry arguments can be inferred from arguments of the |task.execute| method of a task (or the |retry.execute| of a retry object). .. doctest:: >>> class MyTask(task.Task): ... def execute(self, spam, eggs, bacon=None): ... return spam + eggs ... >>> sorted(MyTask().requires) ['eggs', 'spam'] >>> sorted(MyTask().optional) ['bacon'] Inference from the method signature is the ''simplest'' way to specify arguments. Special arguments like ``self``, ``*args`` and ``**kwargs`` are ignored during inference (as these names have special meaning/usage in python). .. doctest:: >>> class UniTask(task.Task): ... def execute(self, *args, **kwargs): ... pass ... >>> sorted(UniTask().requires) [] .. make vim sphinx highlighter* happy** Rebinding --------- **Why:** There are cases when the value you want to pass to a task/retry is stored with a name other than the corresponding arguments name. That's when the ``rebind`` constructor parameter comes in handy. 
Using it the flow author can instruct the engine to fetch a value from storage by one name, but pass it to a tasks/retries ``execute`` method with another name. There are two possible ways of accomplishing this. The first is to pass a dictionary that maps the argument name to the name of a saved value. For example, if you have task:: class SpawnVMTask(task.Task): def execute(self, vm_name, vm_image_id, **kwargs): pass # TODO(imelnikov): use parameters to spawn vm and you saved ``'vm_name'`` with ``'name'`` key in storage, you can spawn a vm with such ``'name'`` like this:: SpawnVMTask(rebind={'vm_name': 'name'}) The second way is to pass a tuple/list/dict of argument names. The length of the tuple/list/dict should not be less then number of required parameters. For example, you can achieve the same effect as the previous example with:: SpawnVMTask(rebind_args=('name', 'vm_image_id')) This is equivalent to a more elaborate:: SpawnVMTask(rebind=dict(vm_name='name', vm_image_id='vm_image_id')) In both cases, if your task (or retry) accepts arbitrary arguments with the ``**kwargs`` construct, you can specify extra arguments. :: SpawnVMTask(rebind=('name', 'vm_image_id', 'admin_key_name')) When such task is about to be executed, ``name``, ``vm_image_id`` and ``admin_key_name`` values are fetched from storage and value from ``name`` is passed to |task.execute| method as ``vm_name``, value from ``vm_image_id`` is passed as ``vm_image_id``, and value from ``admin_key_name`` is passed as ``admin_key_name`` parameter in ``kwargs``. Manually specifying requirements -------------------------------- **Why:** It is often useful to manually specify the requirements of a task, either by a task author or by the flow author (allowing the flow author to override the task requirements). To accomplish this when creating your task use the constructor to specify manual requirements. Those manual requirements (if they are not functional arguments) will appear in the ``kwargs`` of the |task.execute| method. .. doctest:: >>> class Cat(task.Task): ... def __init__(self, **kwargs): ... if 'requires' not in kwargs: ... kwargs['requires'] = ("food", "milk") ... super(Cat, self).__init__(**kwargs) ... def execute(self, food, **kwargs): ... pass ... >>> cat = Cat() >>> sorted(cat.requires) ['food', 'milk'] .. make vim sphinx highlighter happy** When constructing a task instance the flow author can also add more requirements if desired. Those manual requirements (if they are not functional arguments) will appear in the ``kwargs`` parameter of the |task.execute| method. .. doctest:: >>> class Dog(task.Task): ... def execute(self, food, **kwargs): ... pass >>> dog = Dog(requires=("water", "grass")) >>> sorted(dog.requires) ['food', 'grass', 'water'] .. make vim sphinx highlighter happy** If the flow author desires she can turn the argument inference off and override requirements manually. Use this at your own **risk** as you must be careful to avoid invalid argument mappings. .. doctest:: >>> class Bird(task.Task): ... def execute(self, food, **kwargs): ... pass >>> bird = Bird(requires=("food", "water", "grass"), auto_extract=False) >>> sorted(bird.requires) ['food', 'grass', 'water'] .. make vim sphinx highlighter happy** Results specification ===================== In python, function results are not named, so we can not infer what a task/retry returns. 
This is important since the complete result (what the task |task.execute| or retry |retry.execute| method returns) is saved in (potentially persistent) storage, and it is typically (but not always) desirable to make those results accessible to others. To accomplish this the task/retry specifies names of those values via its ``provides`` constructor parameter or by its default provides attribute. Examples -------- Returning one value +++++++++++++++++++ If task returns just one value, ``provides`` should be string -- the name of the value. .. doctest:: >>> class TheAnswerReturningTask(task.Task): ... def execute(self): ... return 42 ... >>> sorted(TheAnswerReturningTask(provides='the_answer').provides) ['the_answer'] Returning a tuple +++++++++++++++++ For a task that returns several values, one option (as usual in python) is to return those values via a ``tuple``. :: class BitsAndPiecesTask(task.Task): def execute(self): return 'BITs', 'PIECEs' Then, you can give the value individual names, by passing a tuple or list as ``provides`` parameter: :: BitsAndPiecesTask(provides=('bits', 'pieces')) After such task is executed, you (and the engine, which is useful for other tasks) will be able to get those elements from storage by name: :: >>> storage.fetch('bits') 'BITs' >>> storage.fetch('pieces') 'PIECEs' Provides argument can be shorter then the actual tuple returned by a task -- then extra values are ignored (but, as expected, **all** those values are saved and passed to the task |task.revert| or retry |retry.revert| method). .. note:: Provides arguments tuple can also be longer then the actual tuple returned by task -- when this happens the extra parameters are left undefined: a warning is printed to logs and if use of such parameter is attempted a :py:class:`~taskflow.exceptions.NotFound` exception is raised. Returning a dictionary ++++++++++++++++++++++ Another option is to return several values as a dictionary (aka a ``dict``). :: class BitsAndPiecesTask(task.Task): def execute(self): return { 'bits': 'BITs', 'pieces': 'PIECEs' } TaskFlow expects that a dict will be returned if ``provides`` argument is a ``set``: :: BitsAndPiecesTask(provides=set(['bits', 'pieces'])) After such task executes, you (and the engine, which is useful for other tasks) will be able to get elements from storage by name: :: >>> storage.fetch('bits') 'BITs' >>> storage.fetch('pieces') 'PIECEs' .. note:: If some items from the dict returned by the task are not present in the provides arguments -- then extra values are ignored (but, of course, saved and passed to the |task.revert| method). If the provides argument has some items not present in the actual dict returned by the task -- then extra parameters are left undefined: a warning is printed to logs and if use of such parameter is attempted a :py:class:`~taskflow.exceptions.NotFound` exception is raised. Default provides ++++++++++++++++ As mentioned above, the default base class provides nothing, which means results are not accessible to other tasks/retries in the flow. The author can override this and specify default value for provides using the ``default_provides`` class/instance variable: :: class BitsAndPiecesTask(task.Task): default_provides = ('bits', 'pieces') def execute(self): return 'BITs', 'PIECEs' Of course, the flow author can override this to change names if needed: :: BitsAndPiecesTask(provides=('b', 'p')) or to change structure -- e.g. 
this instance will make tuple accessible to other tasks by name ``'bnp'``: :: BitsAndPiecesTask(provides='bnp') or the flow author may want to return default behavior and hide the results of the task from other tasks in the flow (e.g. to avoid naming conflicts): :: BitsAndPiecesTask(provides=()) Revert arguments ================ To revert a task the :doc:`engine ` calls the tasks |task.revert| method. This method should accept the same arguments as the |task.execute| method of the task and one more special keyword argument, named ``result``. For ``result`` value, two cases are possible: * If the task is being reverted because it failed (an exception was raised from its |task.execute| method), the ``result`` value is an instance of a :py:class:`~taskflow.types.failure.Failure` object that holds the exception information. * If the task is being reverted because some other task failed, and this task finished successfully, ``result`` value is the result fetched from storage: ie, what the |task.execute| method returned. All other arguments are fetched from storage in the same way it is done for |task.execute| method. To determine if a task failed you can check whether ``result`` is instance of :py:class:`~taskflow.types.failure.Failure`:: from taskflow.types import failure class RevertingTask(task.Task): def execute(self, spam, eggs): return do_something(spam, eggs) def revert(self, result, spam, eggs): if isinstance(result, failure.Failure): print("This task failed, exception: %s" % result.exception_str) else: print("do_something returned %r" % result) If this task failed (ie ``do_something`` raised an exception) it will print ``"This task failed, exception:"`` and a exception message on revert. If this task finished successfully, it will print ``"do_something returned"`` and a representation of the ``do_something`` result. Retry arguments =============== A |Retry| controller works with arguments in the same way as a |Task|. But it has an additional parameter ``'history'`` that is itself a :py:class:`~taskflow.retry.History` object that contains what failed over all the engines attempts (aka the outcomes). The history object can be viewed as a tuple that contains a result of the previous retries run and a table/dict where each key is a failed atoms name and each value is a :py:class:`~taskflow.types.failure.Failure` object. Consider the following implementation:: class MyRetry(retry.Retry): default_provides = 'value' def on_failure(self, history, *args, **kwargs): print(list(history)) return RETRY def execute(self, history, *args, **kwargs): print(list(history)) return 5 def revert(self, history, *args, **kwargs): print(list(history)) Imagine the above retry had returned a value ``'5'`` and then some task ``'A'`` failed with some exception. In this case ``on_failure`` method will receive the following history (printed as a list):: [('5', {'A': failure.Failure()})] At this point (since the implementation returned ``RETRY``) the |retry.execute| method will be called again and it will receive the same history and it can then return a value that subsequent tasks can use to alter their behavior. If instead the |retry.execute| method itself raises an exception, the |retry.revert| method of the implementation will be called and a :py:class:`~taskflow.types.failure.Failure` object will be present in the history object instead of the typical result. .. note:: After a |Retry| has been reverted, the objects history will be cleaned. 
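To tie the argument and result concepts above together, the following is a minimal, self-contained sketch that combines ``default_provides``, ``rebind`` and value injection through the engine's ``store`` (it relies on the ``linear_flow`` pattern and the ``taskflow.engines`` helpers documented elsewhere in this guide; the task, flow and value names are purely illustrative)::

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class BootServer(task.Task):
        default_provides = 'server_id'

        def execute(self, server_name):
            # Pretend to boot something and hand back its identifier; the
            # returned value is saved under 'server_id' (default_provides).
            return "server-1234-%s" % server_name


    class AttachVolume(task.Task):
        def execute(self, server_id, volume_id):
            # 'server_id' comes from BootServer's result, 'volume_id' from
            # the values injected into storage below.
            print("Attaching %s to %s" % (volume_id, server_id))


    flow = linear_flow.Flow('boot-and-attach').add(
        # The value is stored under 'name', but the task argument is called
        # 'server_name', so rebind maps the argument name to the stored name.
        BootServer(rebind={'server_name': 'name'}),
        AttachVolume(),
    )

    # Initial inputs are injected into the engine's storage via 'store'.
    engines.run(flow, store={'name': 'test-vm', 'volume_id': 'vol-42'})

Running this sketch prints ``Attaching vol-42 to server-1234-test-vm``: the ``name`` and ``volume_id`` values are supplied up front, while ``server_id`` only exists because the first task provided it.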
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/atoms.rst0000664000175000017500000002057500000000000020313 0ustar00zuulzuul00000000000000------------------------ Atoms, tasks and retries ------------------------ Atom ==== An :py:class:`atom ` is the smallest unit in TaskFlow which acts as the base for other classes (its naming was inspired from the similarities between this type and `atoms`_ in the physical world). Atoms have a name and may have a version. An atom is expected to name desired input values (requirements) and name outputs (provided values). .. note:: For more details about atom inputs and outputs please visit :doc:`arguments and results `. .. automodule:: taskflow.atom .. _atoms: http://en.wikipedia.org/wiki/Atom Task ===== A :py:class:`task ` (derived from an atom) is a unit of work that can have an execute & rollback sequence associated with it (they are *nearly* analogous to functions). Your task objects should all derive from :py:class:`~taskflow.task.Task` which defines what a task must provide in terms of properties and methods. **For example:** .. image:: img/tasks.png :width: 525px :align: left :alt: Task outline. Currently the following *provided* types of task subclasses are: * :py:class:`~taskflow.task.Task`: useful for inheriting from and creating your own subclasses. * :py:class:`~taskflow.task.FunctorTask`: useful for wrapping existing functions into task objects. .. note:: :py:class:`~taskflow.task.FunctorTask` task types can not currently be used with the :doc:`worker based engine ` due to the fact that arbitrary functions can not be guaranteed to be correctly located (especially if they are lambda or anonymous functions) on the worker nodes. Retry ===== A :py:class:`retry ` (derived from an atom) is a special unit of work that handles errors, controls flow execution and can (for example) retry other atoms with other parameters if needed. When an associated atom fails, these retry units are *consulted* to determine what the resolution *strategy* should be. The goal is that with this consultation the retry atom will suggest a *strategy* for getting around the failure (perhaps by retrying, reverting a single atom, or reverting everything contained in the retries associated `scope`_). Currently derivatives of the :py:class:`retry ` base class must provide a :py:func:`~taskflow.retry.Retry.on_failure` method to determine how a failure should be handled. The current enumeration(s) that can be returned from the :py:func:`~taskflow.retry.Retry.on_failure` method are defined in an enumeration class described here: .. autoclass:: taskflow.retry.Decision To aid in the reconciliation process the :py:class:`retry ` base class also mandates :py:func:`~taskflow.retry.Retry.execute` and :py:func:`~taskflow.retry.Retry.revert` methods (although subclasses are allowed to define these methods as no-ops) that can be used by a retry atom to interact with the runtime execution model (for example, to track the number of times it has been called which is useful for the :py:class:`~taskflow.retry.ForEach` retry subclass). To avoid recreating common retry patterns the following provided retry subclasses are provided: * :py:class:`~taskflow.retry.AlwaysRevert`: Always reverts subflow. * :py:class:`~taskflow.retry.AlwaysRevertAll`: Always reverts the whole flow. * :py:class:`~taskflow.retry.Times`: Retries subflow given number of times. 
* :py:class:`~taskflow.retry.ForEach`: Allows for providing different values to subflow atoms each time a failure occurs (making it possibly to resolve the failure by altering subflow atoms inputs). * :py:class:`~taskflow.retry.ParameterizedForEach`: Same as :py:class:`~taskflow.retry.ForEach` but extracts values from storage instead of the :py:class:`~taskflow.retry.ForEach` constructor. .. _scope: http://en.wikipedia.org/wiki/Scope_%28computer_science%29 .. note:: They are *similar* to exception handlers but are made to be *more* capable due to their ability to *dynamically* choose a reconciliation strategy, which allows for these atoms to influence subsequent execution(s) and the inputs any associated atoms require. Area of influence ----------------- Each retry atom is associated with a flow and it can *influence* how the atoms (or nested flows) contained in that flow retry or revert (using the previously mentioned patterns and decision enumerations): *For example:* .. image:: img/area_of_influence.svg :width: 325px :align: left :alt: Retry area of influence In this diagram retry controller (1) will be consulted if task ``A``, ``B`` or ``C`` fail and retry controller (2) decides to delegate its retry decision to retry controller (1). If retry controller (2) does **not** decide to delegate its retry decision to retry controller (1) then retry controller (1) will be oblivious of any decisions. If any of task ``1``, ``2`` or ``3`` fail then only retry controller (1) will be consulted to determine the strategy/pattern to apply to resolve there associated failure. Usage examples -------------- .. testsetup:: import taskflow from taskflow import task from taskflow import retry from taskflow.patterns import linear_flow from taskflow import engines .. doctest:: >>> class EchoTask(task.Task): ... def execute(self, *args, **kwargs): ... print(self.name) ... print(args) ... print(kwargs) ... >>> flow = linear_flow.Flow('f1').add( ... EchoTask('t1'), ... linear_flow.Flow('f2', retry=retry.ForEach(values=['a', 'b', 'c'], name='r1', provides='value')).add( ... EchoTask('t2'), ... EchoTask('t3', requires='value')), ... EchoTask('t4')) In this example the flow ``f2`` has a retry controller ``r1``, that is an instance of the default retry controller :py:class:`~taskflow.retry.ForEach`, it accepts a collection of values and iterates over this collection when each failure occurs. On each run :py:class:`~taskflow.retry.ForEach` retry returns the next value from the collection and stops retrying a subflow if there are no more values left in the collection. For example if tasks ``t2`` or ``t3`` fail, then the flow ``f2`` will be reverted and retry ``r1`` will retry it with the next value from the given collection ``['a', 'b', 'c']``. But if the task ``t1`` or the task ``t4`` fails, ``r1`` won't retry a flow, because tasks ``t1`` and ``t4`` are in the flow ``f1`` and don't depend on retry ``r1`` (so they will not *consult* ``r1`` on failure). .. doctest:: >>> class SendMessage(task.Task): ... def execute(self, message): ... print("Sending message: %s" % message) ... >>> flow = linear_flow.Flow('send_message', retry=retry.Times(5)).add( ... SendMessage('sender')) In this example the ``send_message`` flow will try to execute the ``SendMessage`` five times when it fails. When it fails for the sixth time (if it does) the task will be asked to ``REVERT`` (in this example task reverting does not cause anything to happen but in other use cases it could). .. doctest:: >>> class ConnectToServer(task.Task): ... 
def execute(self, ip): ... print("Connecting to %s" % ip) ... >>> server_ips = ['192.168.1.1', '192.168.1.2', '192.168.1.3' ] >>> flow = linear_flow.Flow('send_message', ... retry=retry.ParameterizedForEach(rebind={'values': 'server_ips'}, ... provides='ip')).add( ... ConnectToServer(requires=['ip'])) In this example the flow tries to connect a server using a list (a tuple can also be used) of possible IP addresses. Each time the retry will return one IP from the list. In case of a failure it will return the next one until it reaches the last one, then the flow will be reverted. Interfaces ========== .. automodule:: taskflow.task .. autoclass:: taskflow.retry.Retry .. autoclass:: taskflow.retry.History .. autoclass:: taskflow.retry.AlwaysRevert .. autoclass:: taskflow.retry.AlwaysRevertAll .. autoclass:: taskflow.retry.Times .. autoclass:: taskflow.retry.ForEach .. autoclass:: taskflow.retry.ParameterizedForEach Hierarchy ========= .. inheritance-diagram:: taskflow.atom taskflow.task taskflow.retry.Retry taskflow.retry.AlwaysRevert taskflow.retry.AlwaysRevertAll taskflow.retry.Times taskflow.retry.ForEach taskflow.retry.ParameterizedForEach :parts: 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/conductors.rst0000664000175000017500000000526700000000000021354 0ustar00zuulzuul00000000000000---------- Conductors ---------- .. image:: img/conductor.png :width: 97px :alt: Conductor Overview ======== Conductors provide a mechanism that unifies the various concepts under a single easy to use (as plug-and-play as we can make it) construct. They are responsible for the following: * Interacting with :doc:`jobboards ` (examining and claiming :doc:`jobs `). * Creating :doc:`engines ` from the claimed jobs (using :ref:`factories ` to reconstruct the contained tasks and flows to be executed). * Dispatching the engine using the provided :doc:`persistence ` layer and engine configuration. * Completing or abandoning the claimed :doc:`job ` (depending on dispatching and execution outcome). * *Rinse and repeat*. .. note:: They are inspired by and have similar responsibilities as `railroad conductors`_ or `musical conductors`_. Considerations ============== Some usage considerations should be used when using a conductor to make sure it's used in a safe and reliable manner. Eventually we hope to make these non-issues but for now they are worth mentioning. Endless cycling --------------- **What:** Jobs that fail (due to some type of internal error) on one conductor will be abandoned by that conductor and then another conductor may experience those same errors and abandon it (and repeat). This will create a job abandonment cycle that will continue for as long as the job exists in an claimable state. **Example:** .. image:: img/conductor_cycle.png :scale: 70% :alt: Conductor cycling **Alleviate by:** #. Forcefully delete jobs that have been failing continuously after a given number of conductor attempts. This can be either done manually or automatically via scripts (or other associated monitoring) or via the jobboards :py:func:`~taskflow.jobs.base.JobBoard.trash` method. #. Resolve the internal error's cause (storage backend failure, other...). Interfaces ========== .. automodule:: taskflow.conductors.base .. automodule:: taskflow.conductors.backends .. automodule:: taskflow.conductors.backends.impl_executor Implementations =============== Blocking -------- .. 
automodule:: taskflow.conductors.backends.impl_blocking Non-blocking ------------ .. automodule:: taskflow.conductors.backends.impl_nonblocking Hierarchy ========= .. inheritance-diagram:: taskflow.conductors.base taskflow.conductors.backends.impl_blocking taskflow.conductors.backends.impl_nonblocking taskflow.conductors.backends.impl_executor :parts: 1 .. _musical conductors: http://en.wikipedia.org/wiki/Conducting .. _railroad conductors: http://en.wikipedia.org/wiki/Conductor_%28transportation%29 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/engines.rst0000664000175000017500000005177500000000000020626 0ustar00zuulzuul00000000000000------- Engines ------- Overview ======== Engines are what **really** runs your atoms. An *engine* takes a flow structure (described by :doc:`patterns `) and uses it to decide which :doc:`atom ` to run and when. TaskFlow provides different implementations of engines. Some may be easier to use (ie, require no additional infrastructure setup) and understand; others might require more complicated setup but provide better scalability. The idea and *ideal* is that deployers or developers of a service that use TaskFlow can select an engine that suites their setup best without modifying the code of said service. .. note:: Engines usually have different capabilities and configuration, but all of them **must** implement the same interface and preserve the semantics of patterns (e.g. parts of a :py:class:`.linear_flow.Flow` are run one after another, in order, even if the selected engine is *capable* of running tasks in parallel). Why they exist -------------- An engine being *the* core component which actually makes your flows progress is likely a new concept for many programmers so let's describe how it operates in more depth and some of the reasoning behind why it exists. This will hopefully make it more clear on their value add to the TaskFlow library user. First though let us discuss something most are familiar already with; the difference between `declarative`_ and `imperative`_ programming models. The imperative model involves establishing statements that accomplish a programs action (likely using conditionals and such other language features to do this). This kind of program embeds the *how* to accomplish a goal while also defining *what* the goal actually is (and the state of this is maintained in memory or on the stack while these statements execute). In contrast there is the declarative model which instead of combining the *how* to accomplish a goal along side the *what* is to be accomplished splits these two into only declaring what the intended goal is and not the *how*. In TaskFlow terminology the *what* is the structure of your flows and the tasks and other atoms you have inside those flows, but the *how* is not defined (the line becomes blurred since tasks themselves contain imperative code, but for now consider a task as more of a *pure* function that executes, reverts and may require inputs and provide outputs). This is where engines get involved; they do the execution of the *what* defined via :doc:`atoms `, tasks, flows and the relationships defined there-in and execute these in a well-defined manner (and the engine is responsible for any state manipulation instead). 
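To make that split concrete, here is a small sketch (the task, flow and value names are illustrative only) in which the flow merely *declares* the ordering and data dependencies, while the engine -- whichever type is selected -- owns the execution and the state::

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class CreateRecord(task.Task):
        default_provides = 'record_id'

        def execute(self):
            # Imperative details (API calls, computation, ...) live in here.
            return 42


    class NotifyUser(task.Task):
        def execute(self, record_id):
            print("Created record %s" % record_id)


    # The *what*: ordering and data dependencies, with no execution details.
    flow = linear_flow.Flow('create-and-notify').add(
        CreateRecord(),
        NotifyUser(),
    )

    # The *how*: delegated to an engine; selecting a different engine type
    # (for example 'parallel' instead of 'serial') does not change the flow.
    engines.run(flow, engine='serial')

The imperative pieces stay inside each task's ``execute`` method, while the overall structure remains declarative and engine-agnostic.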
This mix of imperative and declarative (with a stronger emphasis on the declarative model) makes the following functionality possible: * Enhancing reliability: Decoupling of state alterations from what should be accomplished allows for a *natural* way of resuming by allowing the engine to track the current state and know at which point a workflow is and how to get back into that state when resumption occurs. * Enhancing scalability: When an engine is responsible for executing your desired work it becomes possible to alter the *how* in the future by creating new types of execution backends (for example the `worker`_ model which does not execute locally). Without the decoupling of the *what* and the *how* it is not possible to provide such a feature (since by the very nature of that coupling this kind of functionality is inherently very hard to provide). * Enhancing consistency: Since the engine is responsible for executing atoms and the associated workflow, it can be one (if not the only) of the primary entities that is working to keep the execution model in a consistent state. Coupled with atoms which *should* be immutable and have limited (if any) internal state, the ability to reason about and obtain consistency can be vastly improved. * With future features around locking (using `tooz`_ to help) engines can also help ensure that resources being accessed by tasks are reliably obtained and mutated. This will help ensure that other processes, threads, or other types of entities are also not executing tasks that manipulate those same resources (further increasing consistency). Of course these kinds of features can come with some drawbacks: * The downside of decoupling the *how* and the *what* is that one must start to shift away from the imperative model where functions control & manipulate state (and this is likely a mindset change for programmers used to the imperative model). We have worked to make this less of a concern by creating and encouraging the usage of :doc:`persistence `, to help make it possible to have state and transfer that state via an argument input and output mechanism. * Depending on how much imperative code exists (and state inside that code) there *may* be *significant* rework needed to convert or refactor that code to these new concepts. We have tried to help here by allowing you to have tasks that internally use regular Python code (and internally can be written in an imperative style) as well as by providing :doc:`examples ` that show how to use these concepts. * Another one of the downsides of decoupling the *what* from the *how* is that it may become harder to use traditional techniques to debug failures (especially if remote workers are involved). We try to help here by making it easy to track, monitor and introspect the actions & state changes that are occurring inside an engine (see :doc:`notifications ` for how to use some of these capabilities). .. _declarative: http://en.wikipedia.org/wiki/Declarative_programming .. _imperative: http://en.wikipedia.org/wiki/Imperative_programming .. _tooz: https://github.com/openstack/tooz Creating ======== .. _creating engines: All engines are mere classes that implement the same interface, and of course it is possible to import them and create instances just like with any classes in Python. But the easier (and recommended) way for creating an engine is using the engine helper functions.
All of these functions are imported into the ``taskflow.engines`` module namespace, so the typical usage of these functions might look like:: from taskflow import engines ... flow = make_flow() eng = engines.load(flow, engine='serial', backend=my_persistence_conf) eng.run() ... .. automodule:: taskflow.engines.helpers Usage ===== To select which engine to use and to pass parameters to an engine, use the ``engine`` parameter that any engine helper function accepts; for any engine-specific options use the ``kwargs`` parameter. Types ===== Serial ------ **Engine type**: ``'serial'`` Runs all tasks on a single thread -- the same thread :py:meth:`~taskflow.engines.base.Engine.run` is called from. .. note:: This engine is used by **default**. .. tip:: If eventlet is used then this engine will not block other threads from running as eventlet automatically creates an implicit co-routine system (using greenthreads and monkey patching). See `eventlet `_ and `greenlet `_ for more details. Parallel -------- **Engine type**: ``'parallel'`` A parallel engine schedules tasks onto different threads/processes to allow for running non-dependent tasks simultaneously. See the documentation of :py:class:`~taskflow.engines.action_engine.engine.ParallelActionEngine` for supported arguments that can be used to construct a parallel engine that runs using your desired execution model. .. tip:: Sharing an executor between engine instances provides better scalability by reducing thread/process creation and teardown as well as by reusing existing pools (which is a good practice in general). .. warning:: Running tasks with a `process pool executor`_ is **experimentally** supported. This is mainly due to the `futures backport`_ and the `multiprocessing`_ module that exist in older versions of Python not being as up to date (with important fixes such as :pybug:`4892`, :pybug:`6721`, :pybug:`9205`, :pybug:`16284`, :pybug:`22393` and others...) as the most recent Python version (which themselves have a variety of ongoing/recent bugs). Workers ------- .. _worker: **Engine type**: ``'worker-based'`` or ``'workers'`` .. note:: Since this engine is significantly more complicated (and different) than the others we thought it appropriate to devote a whole documentation :doc:`section ` to it. How they run ============ To provide a peek into the general process that an engine goes through when running, let's break it apart a little and describe what one of the engine types does while executing (for this we will look into the :py:class:`~taskflow.engines.action_engine.engine.ActionEngine` engine type). Creation -------- The first thing that occurs is that the user creates an engine for a given flow, providing a flow detail (where results will be saved into a provided :doc:`persistence ` backend). This is typically accomplished via the methods described above in `creating engines`_. At this point the engine will have references to your flow and backends, and other internal variables are set up. Compiling --------- During this stage (see :py:func:`~taskflow.engines.base.Engine.compile`) the flow will be converted into an internal graph representation using a compiler (the default implementation for patterns is the :py:class:`~taskflow.engines.action_engine.compiler.PatternCompiler`).
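As a rough sketch of these stages being driven by hand (normally :py:func:`~taskflow.engines.base.Engine.run` performs them for you; the ``flow`` object here is assumed to have been created as shown earlier)::

    from taskflow import engines

    eng = engines.load(flow)
    eng.compile()    # flow -> internal graph (and tree) representation
    eng.prepare()    # set up storage/atom details for each graph node
    eng.validate()   # check that all requirements can be satisfied
    eng.run()        # execute (scheduling, waiting, completing, ...)

The preparation, validation and execution stages that these calls correspond to are described below.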
This compiler converts the flow objects and contained atoms into a `networkx`_ directed graph (and tree structure) that contains the equivalent atoms defined in the flow and any nested flows & atoms as well as the constraints that are created by the application of the different flow patterns. This graph (and tree) is what will be analyzed & traversed during the engine's execution. At this point a few helper objects are also created and saved to internal engine variables (these objects help in the execution of atoms, analyzing the graph and performing other internal engine activities). At the end of this stage a :py:class:`~taskflow.engines.action_engine.runtime.Runtime` object is created which contains references to all needed runtime components and its :py:func:`~taskflow.engines.action_engine.runtime.Runtime.compile` is called to compile a cache of frequently used execution helper objects. Preparation ----------- This stage (see :py:func:`~taskflow.engines.base.Engine.prepare`) starts by setting up the storage needed for all atoms in the compiled graph, ensuring that corresponding :py:class:`~taskflow.persistence.models.AtomDetail` (or subclass thereof) objects are created for each node in the graph. Validation ---------- This stage (see :py:func:`~taskflow.engines.base.Engine.validate`) performs any final validation of the compiled (and now storage prepared) engine. It compares the requirements that are needed to start execution against what is currently provided or will be produced in the future. If there are *any* atom requirements that are not satisfied (no known current provider or future producer is found) then execution will **not** be allowed to continue. Execution --------- The graph (and helper objects) previously created are now used for guiding further execution (see :py:func:`~taskflow.engines.base.Engine.run`). The flow is put into the ``RUNNING`` :doc:`state ` and a :py:class:`~taskflow.engines.action_engine.builder.MachineBuilder` state machine object and runner object are built (using the `automaton`_ library). That machine and associated runner then start to take over and begin going through the stages listed below (for a more visual diagram/representation see the :ref:`engine state diagram `). .. note:: The engine will respect the constraints imposed by the flow. For example, if an engine is executing a :py:class:`~taskflow.patterns.linear_flow.Flow` then it is constrained by the dependency graph which is linear in this case, and hence using a parallel engine may not yield any benefits if one is looking for concurrency. Resumption ^^^^^^^^^^ One of the first stages is to analyze the :doc:`state ` of the tasks in the graph, determining which ones have failed, which ones were previously running and determining what the intention of each task should now be (typically an intention can be that it should ``REVERT``, or that it should ``EXECUTE`` or that it should be ``IGNORED``). This intention is determined by analyzing the current state of the task, which is determined by looking at the state in the task detail object for that task and analyzing edges of the graph for things like retry atoms which can influence what a task's intention should be (this is aided by the usage of the :py:class:`~taskflow.engines.action_engine.selector.Selector` helper object which was designed to provide helper methods for this analysis).
Once these intentions are determined and associated with each task (the intention is also stored in the :py:class:`~taskflow.persistence.models.AtomDetail` object), the :ref:`scheduling ` stage starts. .. _scheduling: Scheduling ^^^^^^^^^^ This stage selects which atoms are eligible to run by using a :py:class:`~taskflow.engines.action_engine.scheduler.Scheduler` implementation (the default implementation looks at their intention, checking if predecessor atoms have run and so on, using a :py:class:`~taskflow.engines.action_engine.selector.Selector` helper object as needed) and submits those atoms to a previously provided compatible `executor`_ for asynchronous execution. This :py:class:`~taskflow.engines.action_engine.scheduler.Scheduler` will return a `future`_ object for each atom scheduled; all of which are collected into a list of not-done futures. This will end the initial round of scheduling and at this point the engine enters the :ref:`waiting ` stage. .. _waiting: Waiting ^^^^^^^ In this stage the engine waits for any of the future objects previously submitted to complete. Once one of the future objects completes (or fails) that atom's result will be examined and finalized using a :py:class:`~taskflow.engines.action_engine.completer.Completer` implementation. It typically will persist results to a provided persistence backend (saved into the corresponding :py:class:`~taskflow.persistence.models.AtomDetail` and :py:class:`~taskflow.persistence.models.FlowDetail` objects via the :py:class:`~taskflow.storage.Storage` helper) and reflect the new state of the atom. At this point what typically happens falls into two categories: one for when that atom failed and one for when it did not. If the atom failed it may be set to a new intention such as ``RETRY`` or ``REVERT`` (other atoms that were predecessors of this failing atom may also have their intention altered). Once this intention adjustment has happened, a new round of :ref:`scheduling ` occurs and this process repeats until the engine succeeds or fails (if the process running the engine dies the above stages will be restarted and resuming will occur). .. note:: If the engine is suspended while it is going through the above stages this will stop any further scheduling stages from occurring and all currently executing work will be allowed to finish (see :ref:`suspension `). Finishing --------- At this point the machine (and runner) that was built using the :py:class:`~taskflow.engines.action_engine.builder.MachineBuilder` class has now finished successfully, failed, or the execution was suspended. Depending on which one of these occurs, the flow will enter a new state (typically one of ``FAILURE``, ``SUSPENDED``, ``SUCCESS`` or ``REVERTED``). :doc:`Notifications ` will be sent out about this final state change (other state changes also send out notifications) and any failures that occurred will be reraised (the failure objects are wrapped exceptions). If no failures have occurred then the engine will have finished and if so desired the :doc:`persistence ` can be used to clean up any details that were saved for this execution. Special cases ============= .. _suspension: Suspension ---------- Each engine implements a :py:func:`~taskflow.engines.base.Engine.suspend` method that can be used to *externally* (or in the future *internally*) request that the engine stop :ref:`scheduling ` new work.
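As a rough sketch (assuming a ``flow`` and engine created as shown earlier), such a suspension request might be issued from a different thread than the one the engine is running in::

    import threading

    from taskflow import engines

    eng = engines.load(flow)

    runner = threading.Thread(target=eng.run)
    runner.start()

    # ... later, from this (or some other) thread ...
    eng.suspend()   # ask the engine to stop scheduling new work
    runner.join()   # run() returns once the active work has finished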
By default this performs a transition of the flow state from ``RUNNING`` into a ``SUSPENDING`` state (which will later transition into a ``SUSPENDED`` state). Since an engine may be remotely executing atoms (or locally executing them) and there is currently no preemption, what occurs is that the engine's :py:class:`~taskflow.engines.action_engine.builder.MachineBuilder` state machine will detect that this transition into ``SUSPENDING`` has occurred and the state machine will avoid scheduling new work (it will, though, let active work continue). After the current work has finished the engine will transition from ``SUSPENDING`` into ``SUSPENDED`` and return from its :py:func:`~taskflow.engines.base.Engine.run` method. .. note:: When :py:func:`~taskflow.engines.base.Engine.run` is returned from, there *may* (but does not have to be, depending on what was active when :py:func:`~taskflow.engines.base.Engine.suspend` was called) be unfinished work in the flow (which can be resumed at a later point in time). Scoping ======= During creation of flows it is also important to understand the lookup strategy (also typically known as `scope`_ resolution) that the engine you are using will internally use. For example when a task ``A`` provides result 'a' and a task ``B`` after ``A`` provides a different result 'a' and a task ``C`` after ``A`` and after ``B`` requires 'a' to run, which one will be selected? Default strategy ---------------- When an engine is executing it internally interacts with the :py:class:`~taskflow.storage.Storage` class, and that class interacts with a :py:class:`~taskflow.engines.action_engine.scopes.ScopeWalker` instance; the :py:class:`~taskflow.storage.Storage` class uses the following lookup order to resolve (or fail) an atom's requirement lookup/request: #. Transient injected atom-specific arguments. #. Non-transient injected atom-specific arguments. #. Transient injected arguments (flow specific). #. Non-transient injected arguments (flow specific). #. First scope visited provider that produces the named result; note that if multiple providers are found in the same scope the *first* (the scope walker's yielded ordering defines what *first* means) that produced that result *and* can be extracted without raising an error is selected as the provider of the requested requirement. #. Fails with :py:class:`~taskflow.exceptions.NotFound` if unresolved at this point (the ``cause`` attribute of this exception may have more details on why the lookup failed). .. note:: To examine this information when debugging, it is recommended to enable the ``BLATHER`` logging level (level 5). At this level the storage and scope code/layers will log what is being searched for and what is being found. .. _scope: http://en.wikipedia.org/wiki/Scope_%28computer_science%29 Interfaces ========== .. automodule:: taskflow.engines.base Implementations =============== .. automodule:: taskflow.engines.action_engine.engine Components ---------- .. warning:: External usage of internal engine functions, components and modules should be kept to a **minimum** as they may be altered, refactored or moved to other locations **without** notice (and without the typical deprecation cycle). .. automodule:: taskflow.engines.action_engine.builder .. automodule:: taskflow.engines.action_engine.compiler .. automodule:: taskflow.engines.action_engine.completer .. automodule:: taskflow.engines.action_engine.deciders .. automodule:: taskflow.engines.action_engine.executor ..
automodule:: taskflow.engines.action_engine.process_executor .. automodule:: taskflow.engines.action_engine.runtime .. automodule:: taskflow.engines.action_engine.scheduler .. automodule:: taskflow.engines.action_engine.selector .. autoclass:: taskflow.engines.action_engine.scopes.ScopeWalker :special-members: __iter__ .. automodule:: taskflow.engines.action_engine.traversal Hierarchy ========= .. inheritance-diagram:: taskflow.engines.action_engine.engine.ActionEngine taskflow.engines.base.Engine taskflow.engines.worker_based.engine.WorkerBasedActionEngine :parts: 1 .. _automaton: https://docs.openstack.org/automaton/latest/ .. _multiprocessing: https://docs.python.org/2/library/multiprocessing.html .. _future: https://docs.python.org/dev/library/concurrent.futures.html#future-objects .. _executor: https://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.Executor .. _networkx: https://networkx.github.io/ .. _futures backport: https://pypi.org/project/futures .. _process pool executor: https://docs.python.org/dev/library/concurrent.futures.html#processpoolexecutor ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/examples.rst0000664000175000017500000002056700000000000021007 0ustar00zuulzuul00000000000000========== Examples ========== While developing TaskFlow the team has worked *hard* to make sure the various concepts are explained by *relevant* examples. Here are a few selected examples to get started (ordered by *perceived* complexity): To explore more of these examples please check out the `examples`_ directory in the TaskFlow `source tree`_. .. note:: If the examples provided are not satisfactory (or up to your standards) contributions are welcome and very much appreciated to help improve them. The higher the quality and the clearer the examples are the better and more useful they are for everyone. .. _examples: https://opendev.org/openstack/taskflow/src/branch/master/taskflow/examples .. _source tree: https://opendev.org/openstack/taskflow/ Hello world =========== .. note:: Full source located at :example:`hello_world`. .. literalinclude:: ../../../taskflow/examples/hello_world.py :language: python :linenos: :lines: 16- Passing values from and to tasks ================================ .. note:: Full source located at :example:`simple_linear_pass`. .. literalinclude:: ../../../taskflow/examples/simple_linear_pass.py :language: python :linenos: :lines: 16- Using listeners =============== .. note:: Full source located at :example:`echo_listener`. .. literalinclude:: ../../../taskflow/examples/echo_listener.py :language: python :linenos: :lines: 16- Using listeners (to watch a phone call) ======================================= .. note:: Full source located at :example:`simple_linear_listening`. .. literalinclude:: ../../../taskflow/examples/simple_linear_listening.py :language: python :linenos: :lines: 16- Dumping a in-memory backend =========================== .. note:: Full source located at :example:`dump_memory_backend`. .. literalinclude:: ../../../taskflow/examples/dump_memory_backend.py :language: python :linenos: :lines: 16- Making phone calls ================== .. note:: Full source located at :example:`simple_linear`. .. literalinclude:: ../../../taskflow/examples/simple_linear.py :language: python :linenos: :lines: 16- Making phone calls (automatically reverting) ============================================ .. 
note:: Full source located at :example:`reverting_linear`. .. literalinclude:: ../../../taskflow/examples/reverting_linear.py :language: python :linenos: :lines: 16- Building a car ============== .. note:: Full source located at :example:`build_a_car`. .. literalinclude:: ../../../taskflow/examples/build_a_car.py :language: python :linenos: :lines: 16- Iterating over the alphabet (using processes) ============================================= .. note:: Full source located at :example:`alphabet_soup`. .. literalinclude:: ../../../taskflow/examples/alphabet_soup.py :language: python :linenos: :lines: 16- Watching execution timing ========================= .. note:: Full source located at :example:`timing_listener`. .. literalinclude:: ../../../taskflow/examples/timing_listener.py :language: python :linenos: :lines: 16- Distance calculator =================== .. note:: Full source located at :example:`distance_calculator` .. literalinclude:: ../../../taskflow/examples/distance_calculator.py :language: python :linenos: :lines: 16- Table multiplier (in parallel) ============================== .. note:: Full source located at :example:`parallel_table_multiply` .. literalinclude:: ../../../taskflow/examples/parallel_table_multiply.py :language: python :linenos: :lines: 16- Linear equation solver (explicit dependencies) ============================================== .. note:: Full source located at :example:`calculate_linear`. .. literalinclude:: ../../../taskflow/examples/calculate_linear.py :language: python :linenos: :lines: 16- Linear equation solver (inferred dependencies) ============================================== ``Source:`` :example:`graph_flow.py` .. literalinclude:: ../../../taskflow/examples/graph_flow.py :language: python :linenos: :lines: 16- Linear equation solver (in parallel) ==================================== .. note:: Full source located at :example:`calculate_in_parallel` .. literalinclude:: ../../../taskflow/examples/calculate_in_parallel.py :language: python :linenos: :lines: 16- Creating a volume (in parallel) =============================== .. note:: Full source located at :example:`create_parallel_volume` .. literalinclude:: ../../../taskflow/examples/create_parallel_volume.py :language: python :linenos: :lines: 16- Summation mapper(s) and reducer (in parallel) ============================================= .. note:: Full source located at :example:`simple_map_reduce` .. literalinclude:: ../../../taskflow/examples/simple_map_reduce.py :language: python :linenos: :lines: 16- Sharing a thread pool executor (in parallel) ============================================ .. note:: Full source located at :example:`share_engine_thread` .. literalinclude:: ../../../taskflow/examples/share_engine_thread.py :language: python :linenos: :lines: 16- Storing & emitting a bill ========================= .. note:: Full source located at :example:`fake_billing` .. literalinclude:: ../../../taskflow/examples/fake_billing.py :language: python :linenos: :lines: 16- Suspending a workflow & resuming ================================ .. note:: Full source located at :example:`resume_from_backend` .. literalinclude:: ../../../taskflow/examples/resume_from_backend.py :language: python :linenos: :lines: 16- Creating a virtual machine (resumable) ====================================== .. note:: Full source located at :example:`resume_vm_boot` .. 
literalinclude:: ../../../taskflow/examples/resume_vm_boot.py :language: python :linenos: :lines: 16- Creating a volume (resumable) ============================= .. note:: Full source located at :example:`resume_volume_create` .. literalinclude:: ../../../taskflow/examples/resume_volume_create.py :language: python :linenos: :lines: 16- Running engines via iteration ============================= .. note:: Full source located at :example:`run_by_iter` .. literalinclude:: ../../../taskflow/examples/run_by_iter.py :language: python :linenos: :lines: 16- Controlling retries using a retry controller ============================================ .. note:: Full source located at :example:`retry_flow` .. literalinclude:: ../../../taskflow/examples/retry_flow.py :language: python :linenos: :lines: 16- Distributed execution (simple) ============================== .. note:: Full source located at :example:`wbe_simple_linear` .. literalinclude:: ../../../taskflow/examples/wbe_simple_linear.py :language: python :linenos: :lines: 16- Distributed notification (simple) ================================= .. note:: Full source located at :example:`wbe_event_sender` .. literalinclude:: ../../../taskflow/examples/wbe_event_sender.py :language: python :linenos: :lines: 16- Distributed mandelbrot (complex) ================================ .. note:: Full source located at :example:`wbe_mandelbrot` Output ------ .. image:: img/mandelbrot.png :height: 128px :align: right :alt: Generated mandelbrot fractal Code ---- .. literalinclude:: ../../../taskflow/examples/wbe_mandelbrot.py :language: python :linenos: :lines: 16- Jobboard producer/consumer (simple) =================================== .. note:: Full source located at :example:`jobboard_produce_consume_colors` .. literalinclude:: ../../../taskflow/examples/jobboard_produce_consume_colors.py :language: python :linenos: :lines: 16- Conductor simulating a CI pipeline ================================== .. note:: Full source located at :example:`tox_conductor` .. literalinclude:: ../../../taskflow/examples/tox_conductor.py :language: python :linenos: :lines: 16- Conductor running 99 bottles of beer song requests ================================================== .. note:: Full source located at :example:`99_bottles` .. literalinclude:: ../../../taskflow/examples/99_bottles.py :language: python :linenos: :lines: 16- ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/exceptions.rst0000664000175000017500000000020600000000000021336 0ustar00zuulzuul00000000000000---------- Exceptions ---------- .. inheritance-diagram:: taskflow.exceptions :parts: 1 .. automodule:: taskflow.exceptions ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/history.rst0000664000175000017500000000004100000000000020653 0ustar00zuulzuul00000000000000.. 
include:: ../../../ChangeLog
[taskflow-4.6.4/doc/source/user/img/ -- bundled documentation images; SVG/PNG payloads omitted below]
[img/area_of_influence.svg: "Area of influence" diagram -- RetryController (1), RetryController (2), Task A, Task B, Task C, Task 1, Task 2, Task 3]
[img/conductor.png: conductor overview image (binary PNG data omitted)]
[img/conductor.png binary data continued -- omitted]
[img/engine_states.svg: "Engines states" diagram -- UNDEFINED, RESUMING, SCHEDULING, WAITING, ANALYZING, GAME_OVER, SUCCESS, REVERTED, SUSPENDED, FAILURE]
[img/flow_states.svg: "Flow states" diagram -- PENDING, RUNNING, SUCCESS, FAILURE, SUSPENDING, SUSPENDED, RESUMING, REVERTED]
[img/job_states.svg: "Jobs states" diagram -- UNCLAIMED, CLAIMED, COMPLETE]
[img/jobboard.png: jobboard overview image (binary PNG data omitted; file truncated at the end of this section)]
,LjBԵ:pF|FeڅTVQEsǟ( IA#tW/SMDo߹OM1Fu'lq1"#r'ԫĆUh5zweωmL9-JE$K~gփXr.U0&-ks_HBTѪCgueNϠh;3j(H}lgʘ!L9Q-"ݕ+_9Kgj?kܛw+V*V!`ZtC-倳8 ~K _'^Bк} %]{[SN!d|WF?|@ʼnu>q%?0~ '2ͨ1"]~mxPOiaYspGҴTHDS.~SV:3"XJ4J !>ȁ S!+}2q5`"#Ȑ1"m[>>L!@eht@_ط !Ch5qwNtb֮\;zWwWjAo`ox@=(Z(ll)J.Ӱ { vmMԋG1;:-m}[}ݻskVkDEF%?ռc?ω#'Y a ֯DXh˕Wʕ/kqװkBJFlG̟Cń_)Ҭ>U߲nVoߗɛω:jٿEO}ãhЬ>OYbr;fO^ҕNL=3Ni4}Y}S!w,-v9r4t?Z$%D3ѱ7g}MT# #x#x*hJW 3oW(EFikMD?b/M*FyV]NЎ߹|2c&T/Yl?CؿdٰzS;{re/_®{l zOrqđC8w]s8} 徭 9w<>:p X8r[v~j4Z !7XB 'ncmwlܷF-43zXumuy&Ϝ&Z>lǵ-đS8r ɖ|͒y,3lXb~mwpF(\xiC:u`̅\<'^*hT;KlQLtLMH߱W.~~z@応` ,[HM$O"XrwpoN]?±ˇ܅Do'C?`A52/{ =͞ՑkVà7߽Oēo;VP[Ӟekl"rɭ ::i)~۱MҸe#51;}I3Ƴ:tik L]ڱVn Dl^ͬmS->`Ӿ1"bw>5Ҡi=uY[;[]7MGNRtqM㖍xBdNG-4{VV{10l)Yc^t;mp"#ijoߴȨD+QE)]2u3jtd-k^r̴ʹ*CfR+`bmoY#a6[OooA8rGcƤQRRF<*VA%*ۜ xlǃ{aqqq҈"-dQbrp.j֭RŅFh w4͒y@Td8~$˕ӻ:#&~'C?NjUj2UXqݦĈ;_ك+WwȘԬ@FY^JyVG6s T4m}[}5;CX7QK ׯ+DGG$vJ.θ1Ȩ=nL Lkt:%J֍PMTh5,6&I)>Y8g17f)d٤P3~[taSN֍$\i&Ϝ`qA{kPFJ\ê忱rk2h%LL2d\aˆ~SBbǒ3eT1=?J,)&zl(\(W/ν9%v(K}FC )R[LjBwD?bwqṆXΞ>ω#'-ޘ)W,Nc0w˕AzuH|xRy֪-QQZN4oVDmSSD˞y9quwe} {_??1Np.+8} NNh̪ڱJ y_3_􂉎҅+s-׼%JD\p3'5[hZ z}qӗhaD*.()f86 3' tD@)ap5h,}3ԫH0C̤4c<sgOzBd=fŒQf]b ) -BƈBaBԴUtߗ(1mu`ڄ;EK'ZDeٴj" CSVI&)L8sE).:::1v藴k䈊۰ѽz3vjj9@.Ц^7E:s`NVM^qnG1+`cЊymԺUO`r4{+iqȖMW:^ЗL%Ŋ`I1ղacŸDoe?1ݳjW:IhNd)Lm`5jU}j"J !njxrj|rqD"yc.vhe9v .Fm:s"`dy׻s$IV$xNjYu9:(]#R,cE<ĔYХ+jnJ:tin*?{ \VEbH]9ʭx׫=թ'LSL_I8gitCe?)9EjԮ̣1UjLJiՊF[L_2u3m1_\CSVÁ:6MbbbMZQRcbNNB]Zaqܬ (17ϓgӰ#'|=$vm3p'[4~ʄIqBǦW1u:|:fq\wq}ֻZIMsHܺJЎYv:&:?_<2G&f?o\~wo 8n_%ߘ{w[Vz|դ_+~R,œ8 =|NDGGfz}F( ͊ſM,-%ńY7٢͖>Ft)K}g#, *Rٳ# !Gm%؟f1`6ͮ{,+ŸDPT1<'O"s(>|sPhҲaٿGL=[)&^>lٍmՂt:uNcw/SՕ.˒__S{*%JPoHs&l߸Sb~4p'|ԡ'ޝř{w':WM^]X| tmO[ᕏ~Ӻj/Pf2K*ph#E]~PIŠ7`kg۰q%Z+U+^bme#F,[/T'7':SӣW *)&bɖ"Jٳg_GE zpA+FBW喒(1-?SںQrIz{9y|N]>[Lwy|9+v^K4k$8gRnzԬ/t\p1u4._CN{fH WP]ء_˒_#uSS2UuPgٿcW]OzzUaҌQݕY'XowWf.&9Y&ZjԪNi1]_r):59V-@vM$7ɲW|x&ݍ)<\f: k],&ؚ5+sulcD7}biBɾvE1y߽J6[4UP'U륎[kqwqC(X؅?Q? c&ءrtx`aiBēu:&@thkgZgjΓ[/SFEFӱ54snos4@!n"NG˨#3kysj0M[Rڔlqp.P0?UR|XЋ> ۷PNŷgZt}}9R|-|+u8.22bΓ~"u#b(߳X<^lyUx>A5nPMڜ;-lҲ!'djrylX#ƿ'jܲ_zBd=]~r&EǍ# !σ{a !K4Z uJ0Dbh5Tu:sJZllߤ+2nr/Z=k<|InJRvJN N`a]~Fy(m9N{i{ɗe*4M˶vvl*;Y4I3Rhғ>uՅ/*I7:U^W|y^%w#ʕݿ{5+֫?'&reR!#imX4$(*2q8'C?FqF(Y3()&u b%%""bHޑԪW(._B-jpPHwdMs:i_[[;@N{i~buҰAo`q~Nw.ٿ;lf >MI}YFI\l忟zɬϻըU%3'bJΜ-<v۲*fxMAoh wRZ_TzqͥGI1!DOڃ5Z-#&*5iTR-FB1G=%::GNjԪ?jE7txv%l\q"Mb[7B՟л'jߏ>H2 E=*W}GP;~>k4_oHtyYr$1K~\fdMjr4IFIU: }$XQQ=OR'{75U#WZA -Ldl^-_?Z}3c['::V3AS1%ըbLx4"ң"kљd=UиU{b aEJHQ!Dơj2z /b ,[[>~Z=?CH̾DKҢ؝0UjcEK38jbllٰzݹs;Lجt1is'4v|)Y8gN|z 5|%N8&zH]evw˕bߞD$RC޹@>Acqiςs5m%::v }iա&lL8Ӻ} ֬XghZ+C^'MrΓ^z0+`._a+-:}`1^~Ӻ ]_Se׶}{}LbB,hSg OMi66Y!oO%9eD1N~RHn8JΓ]mNJ2s}yVfa0 4nوce[o8w:웘:s][Ʌ^zO]fj6sXb= W|Yq}c%JgΒ8sG,LMtҎ3y{֍Pvm݃S>'<*U^O~̰qӞ(<}c]{3fH-W"Z8~|%u^[;ɗ7%ńYsuBo.Br1ͥK(Bd޸NT`EbYRKΣh^^]Ul/^O5Ÿ'Netj֦ e/ɣtT2cDG$:An jbf7<{ [ ءܻ{ATdT::u{Vq O)Xȅ򕔊Gubωmw^}핚%^'>Ɋ:]a'E]$'^T6Ajmit4֘ѺM&U+9[Vh$˞okgb_JNni[RL;2̕ˮC̝V qRH!"tAOJ,JbBtRV'͕RG_l$"BNq+ GI1!xى#L$B$B!jB!HcR@!*$sNB"4ȆU) 3@ݷC7zSg_ ,`૗xb~6t:V:|7fgcCq+VzI[#?pTH24rP /lJV$5̥1$IYg=*rAfoR)i1JҧJy9fs 0w J򦐱ۮEǓj';,c^)Bd}l"~nEq+V'Fħn%n_{~UǑ}ÍiZf./QBdz\j_rܭ"#7 7< I?`dwH8/n$,iln7&_`+%28>\&QRқP3Nc2cq9Sg_$˙(y~u&a馱>BI0OX}c=2o@ !"d@\+H\FвXHhZt- z;63}Hlߴub|zs1~^4G\A2<&$G.cʐm(I-JҤ.ʰqơ ;$q-H>tE4Jd.J#`q(I>XU8o(CL{8L>FQRθ4z$l@ dZl +zL\%!os3;ϙg\bq`d\!"S9[F}-?Δ1C2f=YzzJ`L3tQV$s7\`odv7IЋF N?a#G- dO4LQ*S_Ll1JxƟX$o{2aBAr)p$!BdRl"p)ʩ$?{wUuSn0Q$FSS45-}RSSs5sLRҤ45ӒsECey?0gΜs9gs--/qrp| _`7hJ\l4/^!˔7lϲ[;;:xF/H!]Vd ZDDDD$)l)@˔g딌\ʹRM&"fŊZy4fH^A """)l)B\+DDDD1u#)B\-vܾ}K""cj|R9w&.R)l)BSaw&Z[0{IܙzkBwH""ĵ)l0DDr0qey]'S3B+q򓔔]MR%׸q=IGrŋB\yU09P"""RWښ?wr|r< `VPxǸ=hdܾCIk޶cs!5i=Np`Avy]]J 7o:/^\bP"""R8v!.6:^S`-К:9T`92 Ƶ 3ҤU#-"dCٖFc}%.7wRIqgX. 
.u.ɚKHNI1}ZzՌKo sd?GFx:vmc߮m̛6Ҫ}AE{cYI>n%32>nU0 _ ӭc E@gpe}}elet7S 0c?SuͱZ3l.Lh._d&0Xc<ݍh1_e*ӷ&}-'*-5uc0+ҿCN,!dnBvb߮m|w'Lw?۷oѣW>Kn NJ2ŊpMuKz7&^#9S|ŨHII'Ozu"3Uq%sÁ}fen/ѣM>9sg"۷naHV5q8[7s6ϢqK̚04_ZCMǹL|[ΝapĘ o߶ZU)7dǦ-߰vNEcGn&]q3Hzh{eApx Xq:t;0‡n|O51u[k쫆Xaz`B45[iځF80nlBD2 g28Vu#,0 Ե&lG,?2tcݛVƢ9}>):xrR 1Q8qÿoWq)FmYx _Oڼ_V#vwn !B$vvt~|RӘ;}`@ >D]J\ "d訹l\kW1g'*liJGL)vvJD8JڕН;7ծ e\1i6n8,+1~+_IKK#.6bem#&|'L3_`Ĥ$ݸΊ3AOvp[;;OcMz!>5ks&$ͦkz}5'nݦ'N$Y7߉!wb/2j.vmc^Y.~i$s7Ƈ'һD_C@OL@< mF&O~&ak}S%fHL9ŲULf 1un3QFpc6dzL]#@c3W` u6>P<e- (<s9™ѻ>|NNl HTpueqWw{w>|72r3V2eI=#&Z_>Lw5[ne[ԅHZӕ나 X^f3X2czvi7|ҷǛߴ0}e`!,YȴpQؒKS M)ؕssf7XLE[4oۑس=˗p.SW-NJ2Wo4?{ѻ]ǧf$ ?}aߚINfITΝFmΝFo,]FW/]ZO?@&-פ%qZ PҸ:n߇a|?_w+8XpBpcbK6XuURO)`bs M)d)5m).l|Ν=MtGt#isSwRMS#߸N:/dU$竲[6gqqW]OF$o7Yʱg ~{xþMu3qJg,M1N/#hR{+PI|_&1QZX8glCgٷk[A [b1ZN |1K%yIrXԅ&4M)yclni]ƺA~gbj~=x>XF޲J6UǍyΤ 0ʭw.S"Y(;#?|=vJHL6m !h<,a1Q[ٺ1˗p)ƨi֯-^=M4ϝi'O>5iI&س=w'Z8+q֮ʴf.D6e53F~0o6!܇+ڔb3 ,M[;;;8k+˗x: aRS׳ *ꤐzΎm߱=tD`mmM]ߤ%I7S5s+l;]iؼy*y0q<ψIx~#*Ӡ_ze0m}VM.v_X>0ӔbSQR,B$,]a-sN{rQbi]%*e1hۓͮmF')w7J;;q%>hs`V>n ;@P#i3|K@%vf1ueSwG#4L4Lc4v#8Itw`DdKzg0U6=%05j[LH]Ҷc,˝ YgTqz9ۖ+9-/L> XƍIjoz#B664`$u%hgV~4ز+b1,o7z"J;;o>Oڜ8eFsYfMAFؒ'?l vv|-BWqL{o֤}K7b@wy'>8w&qW(WxG;^̏K6Na'0GFu4NJ#cFl#lq4{#$o">eVcZNUGLG[#w[l1 wLLr 7B@#X1JR)ǃeSƱhcnzku3(Jϙk<.wh]^GyvGyyeWFY{f>q TpU+E{-\2IIIa/9y {Vq=ek ,-[4XqJ}eGf eߺ ˰w;s<ُ{EoΔ3epXb≋& {!'S%[]o|y$\_dt=6RL$`jٍ@)Т7$Krn.D2h\ewlݒ˗iwrq ){W-`l.k[hA:j<$ޝdSŊ0HtYyƞ۴|]RDo9P(_ [4XշǛ4kǍI̝@$\ʢ6ce1qt0lt]b?EDPp(Qd'6ʖR"eBdr5V.4Xa0Cؼ~ c 2;x݁*yL-n.899rzb +a˧˸}붦+*Up[椦fZH>÷-}<VM)Vt y4u[HIIQC,jxB܊\آ)Ŋ*֮ʭ[9 DDDDD$}665`vQtד s+ra+ZZ5`* {XgaX2\O(aKJDS-^ GU""*Q z9`^*yT_Ŋ^m;^ҀiT˗/ŵ<zs$ݸNH*vvmO/6ߠ)_ %BׯXY[S'wYg"T [~%ZSO 8:d V>}eW&l17o`^_q﮿5~?C:{E>WX.p~wyb_?5ܼ︹%˹3tl-_@Ej̖)ʹ9^Q^ qx۷nMZ?K{B/_~$j?".G}e7>nf}; f?)&K_jcoƺ'@LVʰbvKg}(cc|u!"yɵs6=޳XްynXO9a?W_ľ]hԢ9h^ۏ=-t:i ެ QbPE*lєbES ;[|7sZj*iZWx~Zu!ҼӴ >q+Q;#t FM*ԭNH*F N6 ۈI.oR24/{vW~+UOMR5XnVG$/ӥ, G1[D "Ս(}Jv+_8ĖΝаE[5g}R>[q׉fԠ^m $ӿskRRR=u?SX1:`w(VFoXAy&j}fM-dtҰ H0z@񻖱j_1Xaq; `j-ؔau3s=j q;S˙^i9`e/"RS>v([7aqu@ڹ4KųZnؼ v%^GqQLսͳh#" [DQؒLExU4%F"[7K97}U`a!;Ii׵~7սc?g[Ν9M\l4#&ͦU. _m^O8E$ߪn3-6!F#,ÖmK#;8z?0((9u7m0~'r7hIwS˖Abɩ5J"w<g'dJZՑ=J+nCZj*-Rn\r?~ϲސ۷nq!Ƶ+ڷ{c"'Xq[$ϴɛ-n*bmhm(隷Ȝɣ/egrO9ߠvv҈HP=~DTt}DesdJtզ׸;MϰNNRȓoW81vBAg#NUצXq|eܗ}-sуz3GMΎ:էkIIIf v 8)q(l) "'dl^3NxԠ#N8JEyGҤu;8qGzlVmm|;dtEv7+fzK*d pd~iLIre%0j 5n_4J0yz0ӱkxWAZj*g"NFVܾu3!3?t Wıf<,f-jNW6 )S#gѻ]zkBʞΎ ΒƌEB$R@)lyN8uAhz\mReO[t,wF\ƅ}-ϒlZƨss'ٟ:81i[@,ÖZƈʴ 3>g=ѳ3W,B?s7J!" Y{vm"%\ʕg)xUC|){%ٶfzOֶ5}ٻ} Q)4Ư`-;AJJ2o4iE&}HS"D]4MjFA[n{ߤvme;xyy5$`[6-c!ZzMZwaITR0 z{~.Rj0 j#F;0%rx,LcÜʰ=w);l"C1u5(V306qEig'z{з{z{x,|CȖ=Er*UpU3vhy-_}KpfNHI'G:5[٥%>ҲHI'M6s/'vӀ 1'һHdj,4n'KlAƺ %hlc{#I4ELH,\$13S(ضy s>'GG*7p7/bNP38;9ASlrjZv&t^.PՕVѦA<|m'~D*+GM//lIy\ ̙1ʸ+=to]˔U!(l^уf3XziZԋпskN;囓j#KNcr=X)#g!ʁ62e ~EaL: 7Q6KH•xS`֨~¾0"08AgfNhl/4bw{2ܼ'+BYʬ);Y™riM>zm;pvlZ@igT"Ƹ)ȅ_b GܼuνR˃q#5CV.YgY^zҀi, Xv{`^2Wr<0~DlCK񤦥1qb/f'$6gŊ-^EH~P"pvr',dtUk٪?e1tw;dMw7ˋGOӹXfuk[Ŭ\~RۃuR#ha(ϦRW65uBo4f1zbZwʡ}n fWK-׮%$-^z3`!HON= LwgnͲ,2u2LwkXq>|2OH`ҧٳ j 4/]J},3-[Zg٬ش&v ^ ƀfϡCt>svVFl8'EzsS@)VAG߼a <_GY,I7O8=W&hپy+$ J\ !;cjb@$KI'GvlZ%g'0bA};݅5qiڕL¥{%+ ܼu;WJ|wtR WY0&u^d?w>bO<+Vٳ$$&1NN9{{OSd%}?>oBٳ@LNƠ~ xspEb"n߿nD򝇯rԴ"y^XY[S6vIvOdtn(lG~awX|s/`ӖoRSӘ2seݐ2n}WZx_^ =/v|!*U7Z15x>zZ'xtӘ."t'%x덗M~FVVxVH; "d.ʖ*%h#G'Ttu50@ctiْ ܭҘD&M(_>ίJMnL?fNPק?  
[Remainder of preceding binary PNG image data omitted.]

[taskflow-4.6.4/doc/source/user/img/: binary image payloads omitted and tar entry metadata dropped; only the text labels recoverable from the state-diagram SVGs are kept below.]

taskflow-4.6.4/doc/source/user/img/engine_states.svg: "Engines states" diagram. State labels: UNDEFINED, RESUMING, SCHEDULING, WAITING, ANALYZING, GAME_OVER, SUCCESS, SUSPENDED, REVERTED, FAILURE. Transition labels: start, schedule next, wait finished, examine finished, completed, success, suspended, reverted, failed.
taskflow-4.6.4/doc/source/user/img/flow_states.svg: "Flow states" diagram. State labels: PENDING, RUNNING, SUCCESS, FAILURE, SUSPENDING, SUSPENDED, RESUMING, REVERTED. Transition label: start.
taskflow-4.6.4/doc/source/user/img/job_states.svg: "Jobs states" diagram. State labels: UNCLAIMED, CLAIMED, COMPLETE. Transition label: start.
taskflow-4.6.4/doc/source/user/img/jobboard.png: binary PNG; no recoverable text.
taskflow-4.6.4/doc/source/user/img/mandelbrot.png: binary PNG; no recoverable text.
taskflow-4.6.4/doc/source/user/img/retry_states.svg: "Retries states" diagram. State labels: PENDING, IGNORE, RUNNING, SUCCESS, FAILURE, RETRYING, REVERTING, REVERTED, REVERT_FAILURE. Transition label: start.
taskflow-4.6.4/doc/source/user/img/task_states.svg: "Tasks states" diagram. State labels: PENDING, IGNORE, RUNNING, FAILURE, SUCCESS, REVERTING, REVERTED, REVERT_FAILURE. Transition label: start.
taskflow-4.6.4/doc/source/user/img/tasks.png: binary PNG; no recoverable text.
gdDM 5nJsdgdbvLI`o }޴Od*!URs_E<,kwHx]Njr,3)Ch/4 _seP"b>dItcgrʧOΓ6"~\+*Ra≮=[tkׄoYIwϞO rT)?:5UiT|7uQ?4uĤfèhNjs3g3 y?EpEx((qDW5 ÏE^>ٝYؑvgy: Ŕ$$sۈ AhD;BɡnI8qCòAz;xG㛕b]Ɠ/ȯEc"׶#PӽA1i単8;7?]KD{ >8n6h7V0n|;{2aUna9V6NKh7_#Y5擊j8s!&'QcQ>;АRNj+Lt=r(Y#=l:RuODZ9KÇĕYh_,+&s]=ZT\9ǯt`9_7J0a]*gGv\W /b\ Kɴi-]A0?iXdcAfJRC(0bd@;cx<>@Ƿ*VА&J9.0pvvǘmUW2)TۼF1_4ޙ0)/U.so$THo9 %텢|ԃS'Oqk͇mgPgz[8@Tٷ%!ٮdl܇ 1 sZuH %AbɈcbfLݻ/ ky)>*EnN Yz#`5HT>}B[ ӆ;Eq߹LwdZ]!!=l}?~~$.iX5å k5KUe{a^3c,z(H[ֱ^C@EDGdVAqX1d4)DוN HbG2mZL.ZƆx)9oXxQ]猶K d J-}Cз`JVwYYHy8}.@hӱi̐1ݪ)JQҍ~]TYy͘~ /Ź}'4rPX7߽ ֺӇ= LqZp@E ;`қ}=nKU!S kʇO.&yڟλO a˴Nh XCMR&S7}C09=4Aյ$1zef\;%SA˺]:ڣ˓>f9)Uśk7?D}إA7 ǵL1HDUl8רL'5 oBcVd>n'cgAN⯈.SӦP~[$C0 oorF0=vs'Pu3WgyR~Q1)/ja+m W#FäR w[=}\ l3ujE#]N !ߴ(GFzB()߆>`{3rd2uE[F*QAv/]QbB;A; ,hƄ]x7" J7dWo}u"8Ĥb/ҒCHtGa%[<٢z.X(u >.E;9%id\JH caŔ)SV6n]2Ҹ$;8B$aH3p!A4i:1g$|ذn h׶ :vlG. yw ?OZ9$ =·>6ؑ_э*/t$VB0 % C0hv@u:%o#Pe$4ԍPK"l *\MD 2Wn{: L'U*N`ۧƞaC)ib̦7LXR;?#u MM _#ÐVĘE+dEt%AshəEީx7aԇ5YT1_GGJUtn/%bs3ě4r\:t %hE$DLB*W'nwԟ8B>(4aV5Q]}K ry6]Azێ bNQ!t?m0:aC˚xyS L9+7FV3סpoHӐFbm6 m{4]<3E|]$ߑc ;?@v[.5}3 3 !-͂cD=medL01Ka@K|힄$Z+FYI#24t$\ICIo_:#hNVHˈע[8 yE6 ?^p"l̘TQJfsȗy/Ƭ) Aؠ+Q='$9SHP%8 G2!K5ʻ4GE7w;dLy1E!5IW7EI= DhkaN[!Chb 'Gn_o$WIɡB[ 9 EINE3j2y|iCJ-WgEl+V 'OzI=rhh%h-w#m L8!wBfaSKcb=YdLv1xZ\ҐCSUgy?&ztnQpivj*V4&*~0j2)ͅڥT-ޢh" A\5;-d>韴}jOg~Mb~7${E{"d`Ek%4۷ !g6mYRD4E {?ZkK٤҃\{2?:b\Ť+QlM)QY?a3 6fk"ff7@ p8Z@@ H)ѳЖ3NZ&̪(ӧ;{x~#*|&FPo/zLa0aG/fS?Oj$"""i&;w͚5_䉿tbO_Ξ^h9ڙ2QK2+f<ŸGaCWMǘ: #~ 3B^' 6KFEIL.Ѱ>YqZo4/B.3接^BR;L `תx|DZ4E² ūV"$B auRw)CE`2Y&ȕQ6N-I"/?{6r2n|.ǦSaXb|X͑cw ,Ih@Up@e0{j?­D̬9'Mrp9פC'NI^r2~*rU +sDTb fEfwQI69q y?_@Iaus<u |زe `Šui|3P'CCpjzsn"sU&ed@X9 yoBQ#' hdfsRBsK {̿&FC7p9eCcD*XNhrlB}W[y;c=2:|{ +EVgTAT[Zbn{iHv8Լb7 A|49%u%]nz0gR7j%4.a(ם[CKkMZvzA8ޙkoaK!?Ek Ux͙{I9,'i4{5-`K-PܹM d"D8#$  *YQ&M:X58R w^͵1\;?#ʂ\䟎CgPBT;]%KеkWs Y̳qf*!h7v~ݚ-F`3V-=L=vٳo݄͟YCP:n3zéW=?f" Ebrj8BNdҚmgr8_|v@5:ɏ{;e`m:{会oI;Q$)Rӟ~g<526'evBo׿( 俍xo'ȸgG"\>~;CB[Dm(,ӭpDׁ}u ^~GwK3rns҅[8#0^\4ght(;.$+r34E { .ӽ[I67>fJ>,kfc3n*ԅXwܨʝyCx`ҹ8yY iD$ B*gLPu DQw;bƌ4?lsKg{(nnhc"7Xx$8Q8ڦhB();}7Tܣ1V]Q}/KbiْؐxBĄ>Jkt[,;6d.q9VE y @X BuGȠYX1;TN zeE mמaZ/(ƵBf_ɓ+Вq+},@VC?+ dәJ'KD?6*WDfV&яBˮ&qeP6$H w9WJі^nRnqo5I)-¾yp8U@ݻعs'n݊3gSJmvLH\)ǟzo.Rx9{jl]k|-pKfޏcgL X:`;m\>SG!#g9hw?9igVtSІRaP,x64C &prb a{n"?]DKSVrjM-\KE:ׯx*1Mkۏ )Т )`ffE6i&:alCW4W6EnVEǓM7i9AZ_y),_٘[l٫Žͼ7b{[ `4 ïŤNcAYhaڟ>Ĥfe!;Ll(Hǵ뤽aUBf.H3SH>Ưtw+h3?~A]jl۰>'ʿr)" uhYJ y`6ےMIHŮBp gr*9ثJDǔ6}/4-#Oπ~D3Zpw)ZK֚yF+zפ)p D1'tbal: `߈Ii@d"o?EJ"0R$ddf $$˗//Pr5gnW&M;n$S=M18adVaS92 wJ$ 且ir53A9X^hvA2I2ɜ l( *EhoFJ}Q_LVDZk$("YRImdʃ^>\6SLf=tyEHcBF,u5LpQ Sv[s;܇XY68@Iܙ=zLZo%kz0;)Wٌ1RLu"m3ruwXwMS$ —6<}b$^?atNbAaXKS=k:ط4l^4?kCpkԭ؟D|=FLn3'Jr *' j_4ΎӺNV R+}3 R3qϵ8yp\2 6з?#h5VBN ED .}`PC#Xe2޾ q*KhL4 99T>,;jXQ͡[Dm;!PS1/+Z,_;GAN'{{ +p7̓p8cuBǏ sY թS' 'a#аaC#Č:PhT:gE"/l,RNZ5_7/e ? ̙|`K ?_WϞ1+k!K{RRRa-F_)W=9$bܥekC Wdf=$gC_x4 &KS֎Z+=qZ8egȱ<ٻ 𨎮nd7F@b-ս_ۯ *BR-E)AB $!d7J\HB,Fμ3wrϙ#rɩ\_K~xx[ȘH y*<17HGc,eϢ<% Idd"aʣ_47 -sT~wD4hMPx .`מ;4Dr@w %-vNG|$JkIGC$WE?\?LeNz$3} p_YMh>u 9VZ u9P䛱w1 Hmj6_<-_,!ı _}FQࠣ[w_f6'hչEo`k][zo< ЭK4jՖɔpwQ4NAV)- %<n4`up?#5R(xN74V:)F6D%D֭K/B믥G?ކ(qFvA@Fc!y>z(6oތQF\Դmds2%ps5qETM;f FT vaVRHLCWH崓p"/shW(ٹA –V>J!֐hΖ F1OhAIyH"f9꾽;jYNtEߤr4QrL*ʖF ,BCM *%'$ 1ҴdMsǬ)Sp4úfBZ36vjPU:-q9+ ‚"5y^0(FZ% HI/*>əJ!ګ<ǛfXɼ(KVi\|e!=WMfprq26GS|)>+K*E2 6_nCzgԤCXn,)lż$)AgVK='M^ɸֈ6`<2A`A",R!!09- FMF﨑(DwjE("Zhq & 0a_W^7uSG=PLc]Ls 4,i%斶HhvgH445Ed]5ii>+o1$S{i񈶄vTN\mh:i-E9&H4H8!RFR>6('aaB%ɈNo- / s$'-Œ3`j@dddzjY <B@1i$ #0ۭu쯌56!#!G O^Q^A"RO:M9竲pT4]m=O%bIkcAUI4z!ocbԞ^. oIQ1)C_ `$ShT7dVq"vC77R **# <[kT>6wF,h`zFN:%iRPMNZ"ʇI1qD8;W!L_#tZE?fK_2W'^\aܙawLC%Isx/²zVf/Ñ"O"jHP_#PH2mL"jQ즲 8KH3gμ0#0 h:vAHqUCsa$!@x91#BE ,.] 
aاOR拦~hӪ5w,x}9n`!=`UD8R}#FHB1cPxFGF` -ڇE3&"`2Wcȡ] pP=h7rd']Grf^".TcZ2B_~PT5k{r-!ǐiy$5@S^ ͬoDD/<קr F`jp}'R$$CXcђO q.pVF`n7X`q8`n+yƍϥ>;NĮW}Vu1#Pc(Z޽{%-5k 듓I!4*DT8N#0GE.0@A@DyR aE۶m!LA9mf]pDUŠ ^||</^z &Vbn+ {nIHn:/ާqIBiOF#,#p\`[o۷K5MלP8Dbguٳg믿"CuK2 jZZKdž i蹇&L )D(Rʼn`FUFh2O?TGh%tرVWBl"م AhXpI[-\h(Jl۶MI!Lrrr BN<DcnOFhhEtt4N<)99jT3#P{.\ӧK_yw} XhDԩSYպ 4wDd!7oZ蓿$>)n^Fhhc=&۷ hc1#pdggKb7pȐ!-W"1 03@ @ ++ 6mB"??Ы L4I2ݻ>0#4-Z`WVn`G@bڴiV\))h|JEỢiСC752m$|QM;v@8ԧI3WhRt]#0-Er#>c"|a~KhFET ٦0D@hiqصkDHR}j߾E!|RtY#0MX4A`FY*+U?G7[e)ٳn18 L#ЂHHHBb޽K.&Ŕ)SGF`& ,؀09#`FO>6cm#p!/i !?#KBHѦMm>2#4aX`фIcF:{.##}ܹs+Ҭ xVh1cFn?6U:.FHBHq>"9ާOIH!51#4X`D`F.hHO@N.ץ&[F؛gff"00 jt6'Ǝc͉l(@TT$kıc пO 3>aFh~ A4UEC VJl>KKK899mީEXd]YYA… M !'sss 8PI!4΄#0@ AZlٳ n޼>nݺUKR_~*,/n>裕3믿5~.{l (K_3M 3ghɡ"yV"vذaZj)HS#aF'*W n'!mD*8oeeU3ӛΦ΅_%b޾G5q"gkkTFhyϟB6= _AŹ5 ™ F`zG@&^+W0# S;Sr,׵kW+ +\\\v)k.IX!Əߒ0#B@!~׮]3"JG%3f 5'F`Xܞνf&YK/Uj&?cS+VH3`7$͇zF{(W0 @qqFhRY 0#GlXC`FFIH`L#4,bwDY;B]Zj|'xcJΈNǏGo!54#p  B[o.**ž={$sF2 u qIB r)0<F`F` #4nHQQQݻ7/^,}ċ{ɒjV}-YX!+Bpӝ:uba#P{/ڶm 2fsNIH~z'L oC1 #0#Pְ(_2#p+QWW_};p8w^kN_QYԡRE= 4+ΝIg'Oĉ wwwIBDMBFCذa E6Qɓ1h iF` yFh@(h0< W!iV0ׯ_p~HhzG$&}Wwލ/+=*JlݺUҤشirss y%0bC>aF`jkgcFXn] ўV$rta… Akwss?F 5{r ,rrryfIH!BhOB*(o߾dG|dF`kX 6.0@! mڴ*5燅mٲEМô3MWDpzoЊC2o^JM6!EϞ=+1#0uFu 2#pF@، f&駟Ɯ9sZ:a.vf*, }Mz]&_1#0k>|eB |e߿:͛W:*,/n:Fup?cÂqN4|6֯3H̉633z+ڽok )/kC='|6c8?y600TY<_yC]|0# ZE~~>FbGF8(>kaQpJuʕ*5,DەiWgڵkWCWoI7fTHf!-oF(aeDDD@8~y*2)."=F`[@ְr#T\~rل&裏Vu\pcѢER/X &w Lj{ԨQ&aFRVa*(((\.m`FiX`qr#dR؜&! vIQQQX?m'#PoˠA7nwAq\#0#P@*F &\w.TB%#GwK"hP8G˕3#4؇E&`n/>O>AvBApׯ_:q۩WF`F` 5,aeeU(n. RҢ IZo ZU()`%$'W#w0K%m>!**aMo_I 7773ahS(,P03BnވpS#0#4t^=Jc=&u ;LdCZZk(4D{iJ^A\\GJF! J$ y HÒw?WSnxNNa_n~)"#ElӪDVN7k6@t"2,7+,--eÙoڈF fᥧo<׬ލ1#0#P-Z`qi)WC ,4 <4]7rBQE8Խ x;EPIx˳ZpSs&@5s֧q$:v0/_"glJ\=Nj^!O4Cy>#uʬJd 3ĦjELc1"3aƍR3fh598kuA|nb޿aݫqCd`č0#0#hۚ hkZq@|cwTobl/t '3)|6YJ> }S sA HPosI>7Os 8՜kYn8GnG4MExj!25Põr[Za(ՐO Y }d$aDQ$O66^~l͐*D.qrfŋ%gΜ٬P,1bHLyk}*澶a_B& {WoKrИhF`Z!ZU>̜TVs|D9_:DV䀑ڳʞkA\*DN '5̝JRѴ<&ofnNsNKQZޡvFA~ LoG3iFvtsGz_ m1re 6ef/ꇹOi!=n7ofV_`FhzcR9O.Fc+Y.AU#%> .!5[ s sGpbs pt'b "]PF3p1JrMoi匠k gb(t\0--:Կi@hħ+b!vʺ s#mۅ UؓMVZ .<:~p*F86;\hcҏd' .9r+G%F]Bn-q.s8{!$03Q`ci^.Pf =_B]Dm ߠPU˱WQ'UaA#)6QJ$eK3 VYň>A4G#rkK sBHE%AU xI־r +VEj4HTe1ڣ bOKг_xФ4{z(fI+p5"9ZwUwj\]l9/HGY"Z7U +}4'@+sA})фҧΜ=s7T$T6i0NBRnr$ORt,raKw"$lsҤIpr:7WcZKi-!kTȤ#>6MJN8sѷ/;%.GTlk@_?*KN6dNGL[]üHHP(HK n~Ў޵Ҕ )DG2`Է|\8q * tC߮AGUPXa۵(l6zT/%8"ezwmT_tMs(%F4<+"MEDFĎNVe[u0aM13A}BiV7MJL$䁙;ލǟ{IǞNG SYGGCcV7^fK3RcǢ1g.Dť'} &ıy"[T B;O4-1/2gFpl&A/]g<L|&6XJ6~Cxb㈑ri__&_XaW'l 2"My~,EE%GYͳޛ%wEe YD~Jh(`== !_,gƹ%2mp{m0Wc# 8r$} w늩c rBrZF ( X0nLsF{_ŝ?12–,ϗ$@*tdvh> 2íɏƜ>ܐvD@x9gHx:tohap?lECڒ?Ccb{ Q(ƿ>X42r475 vIӞ~$@Cyh)<ޝ`JQȃX`3O/ =.Yø8l;s? s7hMٵ/idB0,{?ߟ(  kfn;s^ GEiLro/ž3Q@'_/~ک?cI{Oa @Mk7sjɚ0*dfv qNwV!3 Vo;^t)U^_u&Ia?exp[7 |0#0@"#?)dli=|6gf!Wsh׽;4y7p21`ɁDP"Yک.̅91L&:eܜ<fxXX[lRIȥv]* ļդk!(~z, JMпRbD T{O_ͳ4Eꤳq7'{,Vw(L#iC L"-n`Ab  ױ:(yt8[tǙb^bIsd(cMUe( ׯw^G|:~Zl1t;Hqa,wW48cBX{32iHH{?{^D})LbNoPTaV^PQDS`/wp\'DzsYȑLdaIlIh/S<73RbbCjM,ׂp8>9{??{‰T1XDَGX{?I`.Cg{4Ye%aps#BUނNȂ]$|܍I̊q;"p)cy0SZ ŚOBd-#;x {mڒ X8S9 ||#i 7Lb ;? iX8 |U_{1ƿc# 7? rUp H`AF>""eϗpv*R<+sA1xooD}kd_>S,_#"#d&’ GKk= Ǘ*k+=f7'ݮZ' ,*KMig㆏Č%AR$Z%82I`dRYjeҮ-vW^!-\1^TD;y;}w6_Nq^~vi>|NvFEx;Wܱt"N"„;7S[7<>q4,NnsZ&G"\J- >vU.|.pORv̗>ю|.ń>K ,@C&Ls/$aM,EQ?=)tb0*7ɘts-Ѥq^Ƌgb'9u_g՞w /* PSu;S[W"-bŒ?y=B= 3175Grx{KS7̚{ 2%.W[An]w+ts 2=Cę~%"!R ~v!m&mL}JffoXb)$05mX`}HV8QΌTYBT7'5b$ lm^ەg֠HL Rۖۜqtf_@D.9mKYHR@`Oi9 t-@۩.pĿl^*6)dB<oWk;p jTP߄4 ܢHfKtAsbḘe!Ү_":T1ޝD}""{*<`%M 1LI',UwxDdGf@!$$lz)axX'AOEZLxd <P;+69.7+9[ӑ]!JR&rs>M.Bz9%~60XP4QJ jN7 sK+t&&%a39I4Fg$Żc? 9]$ax? 
"-8\E FfoiL '3v^%DzO$aiqOx`@BZwHƎʶu-U<ЙD8}j5ԕы|kbB&K$۰p)1_Gw9<{:IH`2ua„ H0Fȝ 8']刹,hUNB%`drGȤ~rn\DE&s2ػ;KdR& ?ܞ2<$\%} ,'EgZSE9*i!&p MKtycW֕0g*oGN;Lōn,*F`Fhz~/7=n"M%zRЛ >*sBB" u(CW`fI椊/^12[(neMÕ߷NdmM%_|VLdwԆzOFs5ߢAs{ ^z2Z&ޠ2.J+F~6yZ+X'Xkg1 lbPO$EJ̯iK6gѸNItEcNHR$YRI]C((ZIjكI[vEs`2 9NE&u5((ECI3 <z ƸDH] WHkQ2.[ќ'tW̃ 7qyI4(j ᴵUPgx!`a Š4p}vps'GR.A)I:RL$]#눱%S)K!iC|VNh8$;ח  7|odӑ"WE7^ Y H,BJANli0]0ޭJis.} ޤ |ɜ8-l׌E!4ezaI03sCwTCF|/bI&p.G-&DH>QKUN-(4~х>Oa9z} NWHJ>Z981#0#М0r͹ H>SlYJv/Gd62d/e4* {.y1 0Cל.՟vv7ub$ey l=)\~k(U`.vzik$6 ^9Y.ֶptq crw@[SXQ2D!cAb+XG3a/כ<"JlMH&a-=<ц _iaM eeWa:d{/ne=Ӱ@66T#E|l|gdЕ`HYYc2CkŤrӪף@ALelJƶoP)B[0#RHfXe5, SӴ4+2~SNBQHƄ1w6fQdej:Ч.pЬʺM9 =*t^@+$ZCioD }2Owd& \/%M+OeM*BFLN3| ꄹe'R`Ҡq)|!%HS\*H[.\W~;"1N4->ZaZ5yF`@_M&@`lJ ]6NURINĊ.vTH i |vz޸{p)Mٜ2[÷-(l_N+i.Tr0,K21Ujr@%9.JWw;3)Y'Qx6}o‡};SR>Z-eҠyUX- AQDhU^-- -[@EQqkwaFh4I+yS۲: zZoݷi8MN18R;'Ȕl#:M n+GOx#XރQQ1vxF;ϙ#y|9lƕuhzq,1->0 ZtDXL[Ʉ@IDATD;)Ε1VUϰ%Ʀ3l{|7D' ;Sݧ5ݥ"_0tU1Og O:]H .Ԕk䆭F :(.'(dMXUxߒ(m:#F6򑓔dK6 Cuz'G4 Z;+A2fBՕs2J:?\X8wh(5g`kܞԗ"c{qƾ׹bu5B-Łfܜܾ oLT&: LZËZԨC&<ZSD} A34YI6EL!驍MY%d~"1C.nd2%<1\#m#H%n%?1e`( > mIڵ"!nX nhӥ;c$5.^@;虉) !Q]8s-f68_>rƺ!0pR#0@3A_(m7ݞ9X&)H)<5Mfik`s,D,|}"ND"?#ї)`?zv&&F+*ظ (,T( oD!y*1 q׮#-C)1:*>>.9}/<;c%Z̞!=9y79$AAOز(.fj 'DIZ"9E!iԢo^_N MkFD% Gy: .A3܌8Gt¿]d *H 9kx)OA?s)Pq1w'7;qW )=n6b 냢Xڕ[F/X_IF#)G+=g).*UC].S\&m|3XZ_'=-3>.$4m;W5 08{ V-_/JXPXHX:x#LMɺVQ vA;hYVrӘϞ=[{JaÉ`F`@ p[FGGɓ"5JGYtFh{[ ]FbR-KlKfB*)y-\Ⱦ\"IيBX2(d[x{lf wYN9{N­"vͅpXc?T$`Y gl*/7d8xTGhv{!q2cH YKk’\%٠g 5L|ƻ4*;dH%N"sLI3e-Dāy 9c$S%{ȧ%'&&fr dr/?%:jM84Y3 {8/Y8w(ȫ0S( iWޢ䐯 )|})OEtkRGedDd8~7ߩkgƶP E9g?C Ga]$ah}0kn~wyg'c*!$،Kq:lodyvR=i~]}ġMK 2uBIq{4څMEU@@%*kѢERp悮>HZ۹Fd{6F_y?ՊDEVYrDRR)yhP ۠{Dd\;% w !ueB' iqqd_J7 k]\LWP8#3p>RنeZ[OΏcq{; ܪQR&g+"$1x`H0TWG+?gF`Fk[E=cؽ{7ۇԉ awU# 'Z!7.5R Љdig{&"B|5Cm,p)i$l{b@b湹ks |+L&v\F\|r̘ Ěy8JJ%nE6r'R ⰂB^ǬƔo79FPQ,c0lYKKo@e5tFɓU&M׻Mb#݌0TO6lj՘E?c71wׄp$(eR(⇩O"L(-3b[݁;H &ˏ¨xS'N_ 3fpT,Dڳ?xA?._#ljmGs "&u0]:lKh3`ў8$GnkOofEpdǞEljouA1@ !h0#*)UC?[A.9&Gk?=J&?= O<̭Hbt_L:V ;ÈK?bǟ& x ?n2ON:?(KJx=Gac)S5E-,Ar)Pj҄HOF`FhnhENαXMpj$XF KQx17\\7R+Ҟ0~zkXV񋤊,q]B["MQ[BZ 펜\ :(hXwu2ë Cظ "A`:91mswIBSD\-C> 89 k/evSxXyLE8.ws#3|՞vK&C&2;H &S,];?.CkfĕcTm(0o l!,~drI1䛢 `23*-೟ĊE˰UR'3 "Hӓk{x/lgq)>]7ބ#M.v3";3ˁq5"-[^!Om+r p[ 6Ȝ6} A5B>/9Z885A*TBgxD7:)Jvea.0Et-=,ъ#s p8f|K/|"H"FZc.`2~$&3e$ ^ 6ͩ?yis D߾}EKK ~+x6ӜRoA {`"r]o:P}?!-lJj wEioL:v&7x0wa!h}6׶FVRRk}a}[YAn_ .juw>hM.K5d?`d59 wۯOeoa4rtB>8$?\lR{ t^`QxP^~}z# dH0lf!Ox%svA5Yœ+쫈O^؝MāXSj3޲459!'/uO/aEN fΚ$qj7jk͕]kSJs !2HhIffY3R! ///1ƇW/T%4l_R&z+Vcɪ$C`Ƙ +D} ֺse˦@Ϟ=TeTRMgL#0#C<&h|şN؊7>o~&[tY,h8 #.z! ,Kﳙ3Kށ$;`7~_d;m5v׹=F"`Il }rWm##bi!?95t2Ep"T{ъXd5M1 X̫-YD2/;ZUQmJ;/tr# '5Ocf_J}Zއ}q9-,6xa5|9gaVPoLm -֥ Mw3>1̺c[5šXwovF,\h*w~n*ĴP:(2 Sޘ)-5Eq7^_}ŋK̤NG@fL!/%ıwQI1$@s[=;{r#ψIbi$U,XMBe3 V4K u(ÄHpق\c|z]&ocv} Qcks4jƆ}7ƝK`qc \pa@jj*9oooyOc_7l\ <=>rk] g37YFǬ֜֫-W8SM}9s%aذ+`d\(;xqFbРAIMaWXz5ve Γ8H G,BaLQVQ:"*$$q/s;~q2HY{68<`ZC8osD+v ^=Z 1nAF>ū7#ϑYB΅81fVZm"4ώܿaQۚ&Yr$/#Y!0ޗ6A_A"XCtnh漈)sϨ()+]2 € . 0U+(+SmvjtU€ €k`Keߕ- phO-f7 H41e-cH0=_|aI׈]d(ɱgXK;]b8en,77m?61@: m*R$֓XO܍Jec6 EN9RmK]ujӢKJ 뎘g)gN>8fZ3教I3_: \},JO\ ZLiIzb`3HXaSjHFN0l̴&<]-IӸ>Aa7q['ސ8~|l9M 3ԦѤFS': ڑdhlG zwKkX\Ʌ\pa:a`ݺu XvڡשUW3. 0€K+>Fs]0꘎>ܾ8t7G U )NSטb1c*lڶ $v:jTLxߺ2ݤM˗hyh.GKJз6ixX$N5*l 3`XD)1K04*$KxcMU7auHko][ ;XȡfmؘV!L 阀6Je{"r1RX,X63'Ĺe$ƼKǴ+)HZׯx3`v0X_u;- 7/."XR,18IYNHvm@`{sf*Udh_)$ VUxeNn2bg?2N4j4fĹ*c=9ӣḺqdif&&aђEH75SYe 0Ҳ Mƒ&,\B▫c5%,cm Z|U>O\jW.aҞSyVcaݰV'[`r{?ü+`H5+Lؠ:|cl4n7q7͵s6H- j"uUCР5՜’4c/lHq3ְ lS40ibhjEhas|9NaEMߨѰF"ic82N_c n [ 鋤 47 sK%a//P`bL{qƴ i_`J΀Y!@&Ԕc v 77=-B<&%!)i.S`ҩ*#SMObi-,Ƨᘶj#!S+_"pUe|) _pbrQM.w6s7{HLK%?9ov~C Dl? 
p4Izp+M~Zd՗k-KѢE?F"Z.+^1@"HvԤlMZ4-צ}b># &NSl&`ZۃxEwDP39Knj;QI vVOAss8ě)}uS_L.oB ju()E-ڛJvCo`'xCB(x( Z- !ƅa@_=(A;.z3#S* wrj.֠֜rA_PeăT}hsK=^p &g0\#U<׮A^ñc3m0hB u>gEue1.\(@AS)+E5HnPbʕÇGDDDj+`ЫvCpmUU€A_ Mu%452xx.Y]/-N!;"\}cҘr帾5ŒCqז̡EǀؘM'D`ۼw0y g,46`IVS r;)=}PUTu&)=E87{֧ wk?Lƈ~}ЩMko66pR5лvKNu4[#u2\k?4|eQ:h-ώe7f GPO=Vag3M1vL7cs7Ra$E1Dev&?Skb qZ̞֔<YBM(;'q.rOYtDwzϼ=Է'bTe!_ۢ3GLf]7&;?< Х9"hJU=.Kw6oK[2atv+bw vV"x9"|nLE ܿpb濞^=f>d׬*%rqX=y{ E7P/b8c,kɘR$ M\U}K U膠ǣ֬ǯ e[xYsvYCka]1r@@T?uN pO(%LGjl :iWAC5w.+i1xr]>KC ~6}/і96o~)¡?a_?ĉZ=26aҰvlh# >̿Ƹ裏&M2WwEY~_ <&7f0*jqN,{(ç >Zr\za,p=9qq:u\J WF’`D?30wMLj~Fξi)W.0H)kCĤg0z( I0?e;_\hQxu\,1Tw SDVR- IAU:o,$ {?@,=H]w.5vϤ27wrAFAٷӒ^m,v6s::o7UB5k0f^vJ-i(JNc ;ٳ8qF{&u]:_-t:Gg`er K4ՕCғspL.8̘f>8rj)4](b%jK(~'' ijJb-8??GG?|s3A,Ԗ+L j/Vap̷C_e9ѧZΪ:EYmNtY7Ƴ2sU˭c[[RGQBBjV T|w k~E BmQXb 6m5rl?;%5_Xoqj4X'+;NtĸزeΝ; $$$4+ȥydgqǾVͺM}m5J,a!ahRmtc01k$[Rہjb۪H0Vch\heƭJU2\MQ f]Sc#5-ZZXz<^(Q\pP#֢`.:L:EW0JZgRݢMz5Y,㶷9G;W<|lK&݇(G.ٝz%8٪ueS}---Xx1֮];[so^y{ *FɤĀK)1/jQP$LPJcre6R9:4$|h`u(U?@wq _QA&j:6vl^!Qhj Z N^%QZ&}'<G#|P(If 4&3NpXIPB5E  hhRRmS 9my_Fj-3}5VUMIz6?:v+h U(NeW@_AƖz'+z~kG+WH>זpP}<L΃''D0mĮya;/nyw̤ @qس:ΰ0"{+Uat5NbB_Bǃv̀&^Ce)3H#fIj1;;d~3pK>GYUnU>`pbBf5}&sVgǣyK4@GZk|1/KkF:m&01M$YQ1CKR-UV~Ύ !`CiFM뜽Xb≄ɫKq'/̾͗zmyg4uX*t`"eoӡ(sgX$ ^u (=՘  ,n?O Fd G0 E9ݫr[1*tte<800S% L[r/7(JL̉ ۲oBBZ2p ~Sz"`f/mE[> .s-s{d%vٗCLZ2porerxn=دxae$%uza++qU?*Ԉ7PSQ1M7h2mIjC[iVE5w7.!-,}WU|jZcwE/'6IW^nX (}w䓈@K]܆m)qjfН>he^1f-j{%^ 1{l,\Kbn?^ `D K|;M@p{ZG:az@#wz-#íَy ]oH7 o?7wxG˝jX|eG1d~w\^;n{[gt73)_7g2ʜM E(>_x E{s}S@S wOM2$/b;&sBO'J~RtGZ Pbr[o`{k1K;dF@֖/MBc1MAisܱ`bՆCkۋ=LeO:5_95ѱr5wi:()GA~a>8yZ X5k)Εʡ 35 !k;҅=!{*"c R)`x%>O@B;@NG] H)A-} BB i)P}(idqPHX)3IVGR9!@Y6 :" <Y/qM M74sV 7'UkKPG  ttnkwap:)&;@l(ژP@N؂)Wr a1U %acToMCYA!pfL%KF!7ܚ"X;(p.]b-!3mIa8d2Ҝt ,DQw1~3ۚxmDsûﮦo; w.]4II[ժ䐆*5ZqxUtth]tn-É-?$8`E糸dRl+!$%ڴm?G|[w% /V m hbwChxDwW$řGE2.ymtE} /H-™4[Sr ǎ!rF1!*"΍@jPL'}%AEp&UrйPBLNqHVpBS([*p1(Nʆ^~FE3{;b_=/0ZC9WNb/+BLC9F!h: \okd(;EIA*9COwSl+B9GZoh: e~u[㧔P tlh]FYYf̘qZ U9Vtf'kJqbw;7D5 R|Q}ctVdKUMu0`}/q>#0(SF'x:s=VT :zpNR$Pf<)i*R^OG H]ɋ_2k.sWm͘0ml'7jJL ߥo+c, EqȎazJ {E'YebSclex=#`̧9-2ЎV"9>W.768M=iU缎͓`ݔR¦ۊlZַ=eFhȉ(Q\Fo~Tar2Jhhĸ7'\'YEKC9Q{4P)z'<}8W PsVH8$Z\`p ) Rd؎J[uh\gfA{̍94gK euGzހ+Ae AD4itn!8?筇?'KV(}TK6 x1Uѯb2ۢg3R/žc(et£>l@@ּ_>H#U/O7lw{k(9TmS3Diqh*vΦHcЪe<^{PAiy5}>~eT hM.W´ø;eM4T=}#0ul[䣊ejkVȧ1hdTfիhHBƀ"=ߗඇa$X264?Ez>#T4?nűS\;)0uh<9~,:T́Sڼ_*Vߋ/X*:7a닰\RTզ;cѿG:IOaS/y}[HVc|*⥅qG X܂q:?a!PO.@#Z8ږrbʔ)t0aJH: CH`kF ͘ͅs1Hܸ|'OCzpܷI]icn_2}\|t+|IH\@N݀iLI,[5Sk$E.V#,M@BW$DwJdGMc]lGÉ7&lgVI'@!"{[Gϑfz~á2˭:.ڎ=wtϺ^Mn4B|JFJҟ io:߭;38 Z9` \?V>S R 1Ng)vC^X9<P]~j \D>2Ǐ@FnE!w4gP}{/71XEv헣B*إ$YAʵ8rR@ǫ eN3?+Mdud Q̽^C6S|;& *K5uO%}\hlWsdnQ2lG"Elsq9t(clz <= rrF~ܛBPHPIN0^Eь"B36LT~a,ΕJ!+%x))lSm N#/yvbgŐ?ތa_ֶk{-zRrM!@Aa$E/,>ԕhF\<.dhQJO0,Leg<4jNVמЧQ5%)ٯBmM*w5;S`U]QsXV!8D1?()fF8LA[?L &ڢ6dn>U=Qp[^4(o9QĦޣ=Ap~;ݍ"i۷ǎ{&70JEy."H۽%[*)WDH/vSA?lDnfr @0rr=;=0IتGu9}|6&it֮C*E x'a!/{[Il-hj4!ëh'_/hssQfYkmgo'V &= aHkVDhؾ#(WIJpw#EKsqh*}=oNJoz/9%X,\8F ׵Sϰav< hv;yeȦ枻_( aXY;i'}Q Ψb R#ٹX0+In1QQ8z鶈2C'VTSSqõW'_u߳3̍2"ysK?OdĺCf.Fs|*:򈍍{7fGܥNçڎYTIb2 S`yߩҫwG;I24idHS ?>#s966}5!X<$}n>c$r9Ft$L|ƲffJC%| ܃rkH5NZ. 2Vƚ#fzݲ3^nIyf?Ɣ1ҵ?q}٤1N/$t.3XPFP &mIUn³=j,Nh.iBOo$Jv FH4͝0}iuҤ`d4iZR,6@pdǏtxap*]MU0.6+呩;UhѷZPz<b߮zzcV*:Kaa|ˀ~iυ䄻LWnב?h nnwN 3/> *ƬaQM/`UoRљWBk렆* rWZB厷)R;(Yw\)=NkIɑxMsm1G˟2W5:`B+VH [Ƕb\t {zBKJjl?c]wc+of?vNH=Gѥ}kxS'rqX@:tu^,ÞyP;݇}gpN &‰8ǣ |>9EAu.D0wj=葾|ɷȤ=!~@?D3?}_F ѢkL"IUOͭc;#KB1x@w sr?SD6X&l!"+bb:]?8v wD^bsDcz}qR"eXW=0Z}ŢM#zjx>HL6-mt>SC"/$Xa6ƫHp6v p lE3yqo֟>(D0d"$"VaEq?<. 
D8&ǸLLnqNm1,6מ1R5ccnoiweئuHc4XiTO!oC-lHZB̔n,m5v3.|&q|wJn4x$\ o3 GJc]m|S瀨, 5iJkbawC^\ސ(uX(fNo^dp< Mw#s'U1e~pc0P͏8=ZS`Aj֡U)x،g% mzP*'c{^|%'u p[G5@*wQ0bL:(EcR=9~x }<Тa)2ڋؿ_xWSJl~`ȵzF/ddکaU Zjt0 wA249\; ,hq0v}X,n7|DH|*ZK^XuSK]0^XI04G4{ JALd/*QC2*tt^7~WJ 9A1F/$43}#[(AEu[0-qxQ~+Dhb?9^31#ӶRqnl:To~yom=1jd< ahè-34'wA6~\)C쁧q:}:2e|0[^z!ԗf`v œO?{B057rrYrv7>Qtj1Z]姰nŷV!(7jѡyv4`tt rj|dU!{1e4􎋀;}( R|~3mÑмQbۦM~M3OMBn-H:76b*UJkahljJ=tG\ []5 7ף2)OGE ٦]k2=1fmB3 9hUtBլ :wp\牫aaOmxbaͥ:_3pX5\wi"ު-­;6EXaP3U- G&3Ą7b?`vGiW#Ai-"ѽ'}X2mo$L{gd*uPVXw`E*p-5~| ahՈ~JcD)]Fob®Mck&ؼi}p+5HW:SȎhF |@7W#uuc& :[bx_1 }?=1f$t2 9Q}Ll M)UM ݍatثB9Ҫ"mI. "MF|7!RRCo~%O9@(𥀦EJ(4:HT slVug:l_*ӃGcn<ʀ3k5ND搆 *3{vXM\F֚ciN.ÍR=𸱷Q=D,g7mŸ7 SqӚR7iu8j\>7m;Ms@%%Ʈ˜FyC;FM+L*5j斨v" SP,Ԃї4zO6(&AtعJ\cwJjVDHe!hۺk`ek #~ 6'haq{`nh=5d/>"sDTN3-y_ 8p??sEXaSV dr,qц;fd wFp2* (;d~g9+uH_(+WVb`-3xz9bNYg@sN.%9'pUUAu,R.Yi;[֗LM;a⸁v>w*F%in#bi5)um:QThDIw ΅:T}ߋgVW'-ܧU9 !xbtL-bj#%!|8,s&)zxǷ_;1e2љsĦRA4_L IQz€ v{RX!RG}ix)Q +޵1t~ѯ{:_7sBl=36Wv}nq|jC62Du3v^ߌ_WUt&@ojQq,Cن4 %kȝ&O;3]Y\x9G88 yd)8Fv!k+F[O 9NH#G=GۏO-A:phߦ7Vᬊas*2t'$ҽ)(J{½xp&Rr 㭂_ hʍIhY STI'^jΣQMuE)SprcwD{ӥ욟Ig OڼT"BqQ JN<婿/X>am=b&ZsĘg7_SkL OBgC£(7y5N􄢎!8~ 현_`:pH ݎIrxV_NfW 0u@k<Ӝ@ HNN,9؋-o9|LHIoK.uorGikNUc6b\Z3?W:n~]}¨/5tY' uNkӇ讀!oD"3TRsA|6*{Gi=? Vh(qlv((EFFQz黂>-zbyR 9tD Ktn+*v=(8 wBlp^=N'e?gǝ4̪Q %!FfG:*OtDC#8{鼓6tX(4ق tuy,|Zzʙ4)EN(JM`tJ=c:D*pJ*Rh4a?ZI#nay%.ؚm|"t2CNBK-;8-pMQx$ M[hEUo~2fm ISU3ߦ~>HxӼ1t7'Ę_T /q ]Ʌn Č[Tm0No* څ&^K6#;?z\}-W;2X|4wZPWZǀNX8J?([gfPHTq!}~(rq/uaFL>-mc:GmE)ӎCЊQ}qFr5iΦK\kˑ[ Vn^a<*"K0qqW qqWE!'}52UPCJxzNZN&ςvL,L.Lܱ䠟 "75$5!ϝ7]!Mahj,'3XWp1~jQsS.8fEYQ5Ҽ'c m\hq>??x< r.q:d2k~]uy+p$kWlu]u5€'L|1/@o TDgL4#mݑAZ5mWvk׮I VYʍ@ {#Ϙ,EEU UE ?,pRNR`2\rTwV!L5o Wz:;jyS& S 48WHm5.882_ŃRQ ZY*-|u qOoRXOFK 2JN& 5!  -}=#\sE7DW_lvڗSA~?[%INst!:>5_}\u:NLC0jF#a8\a믿ݻcٲeDL\J. \M x4V&+׌ru%zzaxe8~F>"\ ?Z͏J=)@8^EF2G1H%>ژ|XyRzYaD?7~Ur-@kGe; F91k ϠpdPտi:g=P}1羄P AFyl>]V]wPK"PO.ΠjBMz\.tel@sE%P.'sQSJ愪P_5b s G BEEOG<^RUsU'TKQJTjmUYT&˨"ǐaȸHƄu2V[0#^r/ /}Eprgf #J u+̗iԙܘ =ói;"Y'C2̬)HTd( Z,+~{h%rORf~!ʷ041!@Jtݒ'5K9p[NG";%H= pOUP N z&چZ:m 3 VzlXF-SS2gS }#q㋹_5󓩸c(ʩIc)%j:HhnjbX>գJ~x^ b/$FM)9Lao)%w~c}x#{r<<wv$θiQT<=7S#пo+7g= Y<yn,쭷]u5€ V h,\Ʌ.wupcoK0mRє‹ cC1{O`L U sq`ZӺ Bzbm0w}&ի@㙢B]غ'2ao1N'~NBĂsp[2eGz.x4>)Cz0O!c]wz/p\8{&~pkA2QF*4B!C Q^X'wt̐Ũ*)h LKkub`_~U﯀# ]3v YҏE _T‹=`av?kmW'K*Pzcz9`mGD 1uXJky6ɼSc/╉o +dz"I9 &yrsC3V OaG5f.l,6|8oֈam0P{{Q  sciKL0>#GГ ˞UaC ؍$%Ňv'Cf4Q#439OZ23:OuD?Nc|ɋy &2f27>'&* fw/0/ fVy+'$ $P@"4"1QTʣƨ􃲛2 UtfyN-/TA_L x*?݁hFyTv 15.nkW}>IcjV ``TҟF{*|m;b!;#j^R4G0#)F( 5K~_&'p? \h+S&SQ@Jfֆ\:|[U&󂏏/|}NœIOj,!HP߆W/o^8V2CGi >9D=Fu[($D__ 4eryoXG|-|i , 5vϐHqr샹/ =|yw'Uq-M b⁾#'!<*?Q()m{%`׿’UER HͶ} 74Z!s襎 θ1ĩGfbGJؚ)1lz1<3e$;LM_wZvG~DvE5Peg\CH'T16.{ٽm'G0p}3e˻20K.P>0p}0 gi:"5k`h߾ujfjۇ[/ҤP1t:7SUGuM5nA ?*{rzBX {cxMUiP9})M1_WL dPpᏖ}GF04@xueWWXO\ |PQq aekA{()Hd}m&"F!݋;s;v kT =26m~BD<!CU2ЇBqq؈+ϥaFMĈh.R¥6ou\KJ /;ݫ׺!D>mĶjPRTΈ1$CI'IzI6PG&]izSg>Vfhl. 0pc` ##;vOFDDEo ]P0pS`} z+++qq2I鄒>>W2 _JO l:%,pW0큎ok? n!>bÂ%1"J2 Jj\*ɔf͔GV($>2 =MN_ ךUPȘ\i8M遠PՁ&t-)5E uʨ!o.}A!HY9ߞ>Ljo5%޺0%K>9%Í z!έL5QlFL߷pfڬO%|*܋Btx4weC]"xq+V@{ѬY̛Gp<;kpYڎ& -1m"]ȼ@3p/c. 0 TUUO>>3 *€ ǀK`xR%4&?T`3:yQ{qQV^Iܭ-' vr [@ %?}6gIGݚ۾9_ UеҐDY>&MtDFR$g"-/o,creyGUl-{! $t EA R,糋bCH&H @z$͖T@Bv32s;sΜ9s?FOX~?[3gryz٥٭ꗠgƌ|l߾|.74> 0zo0`JJС#jRT .g`ʊzi]>#osl 4%A{(d"d#w"nWw5Py;ROvk׮EFFdY1z\ Wж]@ipEnBSJ+hZc~@r )WBgԿv\ƼMSnn.Z_ #VgºHe.6tsسg 9;ٗt.CB#ED2$I :JhvFHP]B$u"v.)ybJm„ Xr%yE=_Z\]hzl RG,TcU3[.NC5J#&Ũ:HbE㊖bBÈ#?<*!6@^NpuXUj9T_P1^y>H)A;cRP4eBb $,,D>}z3 LJ$ .l .>#! 
K:ۼ[YYa1u2#g 0&2C ꫯnkT8V&`A˖-5kcR2& .< 0&4uEKK3&h~~-йsg 6FM+,|}}Z?0& ޽Ǐ+n ^'`L ~MhymڇҥKk 0&P&MC2&hn™}Vq=4w`uh ᱗Q| 0&2=E1cFWi&`mu+ɓ'ϯgs@^b|Y&CX|9 ѱcG 2!B3N@,גIm=?& "zq@hGYeL 4+%KH g2YȘhbaa <r,L K@,5;\sM &Dyؘ T`^ v@}ѣ ـ:m~_etÆNe&XbX]: ptIRT$''#""R(`L 8޽{ȑ#;,)pplae`~zZ^P@[!ucO/Ǒ8\ r5w5zƀ+諬,0L8kpT r P7tꁫryP{:V"@BdeӨpjqONsY8{(ѥq4y zC) VptKfz;Ù"KV s؞.ĩE8Ź*:z`OtRKyp(ۄ!ʥ;GbJ94FҲ}|p yor'gXݳp uŕW :GԄD N ͠@L/D"/E>>W` *WZ„c[q8ˈ>ۥīi/C>g.rS'qFVFtsCjT+F 퉀w)3oO,L 0&.7auj߾}ovNc@]v=(LCg`\,Ol;)qoXfsSxi 'd$k/.# lm7~;<@Ÿixt#ⵆMe4HS%8s(ˇֻŧN+; ezǪ+ᚌZ94ᶻ<#@x.IGJ Y_s0`)ZVroF?T~yx},1ht^fP]p7X脅 0Dgĩm\3}E'_ՒYMdBFYX[5ŢhS+Y)wD䚬Buˈ/cD(+, x~J&SYO2@i!YD,$`_~~>VX!IAYvEٳtmW2IO?{衇.g`VyɈ3}Y9vM#SpGqmeo@XeB VWYhٷd˖-f#$dZ7`N6999pqq9&?/m,Wp`Lkw킀PVt ΍kaw懅8K(6RXDsxv,T>-_bD wyd#ĈKr0Wo29~ν:P6!k-wǍ]n;8 J.Z3Fq!ABA, 쏹OT L/̀Jq"wK:ޑlE ~C'O3aӔ".;uń w`L ^>bT`Cr:sC@ +fº()%\ p4FȉH[:q-y@W¸oT&wC by5?duN1ϽS]'()'R)k@ǜk">88!o$OX³L 0&p.gIV cM$ &˫pqc]ng_FTfuujhSٯU vLQ41(HQS\m^("<<՘1ՉR=|E0VZEj.1beL 0&6n܈Dx{{c-@# iСC=z4_#+ 0RИ{5&1꧁JVP4bd$`җ~4 V\|pTZmXQE@M[ʢqkoNŲ.{|]t|#`L 0&pBIz^x Y6AMLyddd 22M}dHrC@IDATfiYD*ۚ/6I4HW tQ@,XH%996O!+N`IVI;VcZжVEs|;^/-=JhK+)z|F2g*saÆ ҡiӦU=ߙ`L ;Xzov3lڴy <T/%/SF_g" (!l~_ vҐ1K^9^Odkz"1êĐAdjxDi± [Iy ; * |kCZ(UyilyلP` _AY _HiӋ e| p-w`tlM E7_*rj_w/%i٦,vBxyy1&hPX6C`J-,.orifKO(*"B"5QRm|Y%9~#aFuV“%t+0o7;/,#䫂U:ۏćL]=5v9 %G#ɪrR;cr8+Fվ)G䖝L)/bN3ͻ8vqo;bb_P^fВ aELǀvI~7ak&)hS T֖2+1Z'؈J l߾N&M"3L 8prm7&.ڵk%w8qF3& dmOkM7ť0&\=Mh-o%Yz'rƩ&ggZࡨ֔ՠ{ 0ןS 79TUTMhPIKHBt%\]0R2Z͠vCD KB"o%dA$޴)K$XfDP|UXC#&:Sd^UAg qէV:ߜꊸ1dTPJI;q1dVxQ5)/'ִH~^Z!WQ9)hX*ggRU2_$ *}W}[7L 0&(1c`͚5x!|q`L ¢ˠMH - h- kf| gB",n0jFǿC j4Ҙ,׈r0ͅʬ 򥗑FApp0JJJsN\}՗1uN 0&}; FGGӤ /Y&N \PqH-[ SLeE#J*r H)X{OįT7|pPj(6^eM-HCe֊/@~b <++.O1&@ gA>|8++Es&@VX̟?S.f90&%KӧOoL 0&!+_|!圝m YkmZa!"X?$X8&e" L]wR{20&OVBzz:0~xb@VX2l3&.Hj]1j(ɏ/L -1#{Z'N#aL 0&n h-fq!Ц5@l{ n%`ׯk.y?.mTt 4;&9:!Uok%·+3 ~>lu|믿NW2^핕VLP&x?###_'`jG^0&LP/i|uqFOfۦ:|ذpyYp ~>l,Vi O?mL7&$dAaͭ5\#xzL veaq"..A o^ѸqnݺzϋbƲwu^#M6l 0&8y򤤰ٳgOIq1uTO$eX 0&Ю,,(X:&@Xdn`eE`L tjR]̙gyוl"{iۢa`F@}R$ӧOn&`B@8+U. '0Ϡh꺔1&`'Xaa'b0&-[ x{{[ni9>&`vIE]Br< ٳqС.cL 2VXrpL 0"xb);- `LgaQbӧ.cL VXxN 0&В /HI$q3&hE]2哞ucL VXxN 0&В;>W\qEK&q3&+M ob̙֟`h*;ok*v+jL 0& 4UaSOkT8{L1 1ˍfL >>?<1fi HEEm{쁓.&`L]hʒiӦwiW|8L(IX 0&lFC|YF @DDt|رߘ`L 7n>sdv@[X8fL 0LZ~آMx>890&h/ca1dP( 8$VX8dL 0jaQӧOK{KKC6o\8`L 0G$АtժU#.¢9g 0B!5l2p ի 0&@#p! _^^^m.ߜ!& ¢-* 0vA%! eÆ k2>`La ga 6 ((aƂ3F9L f822k׮2&hs결di2bm+,pr֘hbauA.q`L 0&Ж ԴpvvvիW[6 Imz[ÇcŊtLRp"<<ރO8Q9@̚5 rymFX/)R> w?AU/L 8kabϿ}:6x5۷… y~qaFa"\l,ѱcF/vA ~>O~> %E믿u`GM[X̝;ݺukbN|}}G5⪫4iq! qc}CP<\l, 1W L̐q{eA?MK,( Gx>~m<0 ԩSmoL 8P8,0`L ** gϞO<| 0&@[" Xᥗ^k֖ya+,esh :t:"pϗ|ydL 0&*,xN gD2&p- N|VV8D)L 0&`h BC;|3&@]Goʕ+V]| 0&`L \@VXu]޽;8pA| 0&jZX`ݺuqL 0&`L6HOO2ksppU-,ݱfDFF:xX|&`G@&]܍|`vKMokjY0&@3ZX~ѷof`L 0&0~x4tc@XaQ'öRWa݄V5Q'ڲpUQ:y9ߖ(c?Lj#Z" ;S3iv\@,%Tf+*o36l1S[tήPDX"[Whj*PV]pq2D!ZQ䖢 Fj%=PI oZ T((?7#?5u_xQ\lL)ZEA~nV r ;)rQ M$%ZX9TW_}ӦMs(/]X=SeQ W75M8p ZeJ5TXm-W#'ykxuPa@ʑsxsn-75Ia ͫyC+6<pݣaWWNf6T;?Jrpx*{dI }KCαsk*ڱ?϶)> ?x?n<*Y6#MX+VEa߇c m?'5f&ec]`щSƞ=wt/T7i8q| x9 L>1|ZS| 0#Pv؁.$Vud4 Rp4|sMeHל)r\fZ.AV)nCx-'e?F>B⋅*i 8g2:^QB`) :uM@z-ܠ0)t:BbHސd(G^FrYBfraLvx71]V-@iuug*P^L&%F(hwۙHi"Y\婃AB Whcow'h7:y>4u_"q.:r8;=ʊ+cJʱnq$E}ǦMiDI4%fo'}M;WA=h) .XCʼɠCy^y@'e߫o-IBHtw0edou _#C^T0⚋'½uO˼6V(yyt cf ϚKPXXF ź Y%ՖVπxny:J'91L_vv/o) xH=>^ܭ2&МXaq4ebtKZ:Tx@! Bt rJ ̎X5i_?U'NZ$ȅH9kB _>x`oeêN~Psʑtf5U2\=-M eȒn>poMb88y&rڕՕ85dZa9sprKߕ,AnF2ٷ#yw}uOܩ}5JC )/7ҪN^.[8$I}҄Cs ͕mk d>n<]{B}dzw*L:xxYˢ8@>1i_(y?EEdL. 
ۥMSN8p"##zk늵H:S,=FeQ- ?;Ku"hQ_@ 4:8"@^&'AzC7KڨL(/Ce@r18b#<";:{2TɨGQ:| b !QׯKq\>y6/>ڨE))d&cU)Jđ#pfiOf@q"0 q&1 g._Dv=¤NS'α*MW:#'DZ ;DUU=Lvß&Dv„Ya  Pݢ^9#,jԵnş,AfnFgW5U⿒1-qqRX}FNf+}B*]/\9- )8~2f% 4 g_v["(u|?8’ xwà!qq]"" "!!b''xclgtS-s0ѣK(s!dAuA~W#.6^' -ϝDjK*-qGx.ԖtjiːAJLڄ}8q:Et5J/3r4pCD79p Cir MYyQJrL; WmH,Gt0h= FD91/@¡HFYuBw %5Dl?3SZAD4<Ԭ(Ыߕ|ZV$jKtR $VYW͐I_3'=BjkeJpDPp]A,u+35zWUkg Sy"KzԬ5.3QZP+ F>+\nUج<;Ck:4"+RFQ=TEBmTnI D0瓎#thK1]v+C$ҲFj"n)49瑞[rY"N'՗NYd4&%?4fL [wtP_%"P x13+@Rz[v;B TkZN@EyH!gN^fy(_eu {v-NߋgRPZDx羸OGz܌T939{3{B&3X-Kҏd껞~_F`ESЛ~d>x[9{[a Ic2t,#ҥK1o<51`)hAJk(-q g3:eHdԙvƒ]1G5RTӋޘ_U.yV{CS|vvo.$BKyKfrtY7taB ~vmUV)/yAIyO^btޞszFVW%}Q1a{xjlIg0w:׿Z8Rg>#0zxa+pP( 3ιxa؅=d)͏gj[GI:E+c@l\fJp?y*T=uo`k/-|%~K6vRµ0ٚ1MM+g&]TݲЄoflB{i~Z|`>3}08y CXt^d'+&=0yE큐&6~s> H,\*ꃪ}3VokwX *=|qoCeQN?aG}w~|eNS&)MJ4}^SxU2T˨ `,Axd,{ĖL]*& \[y#{J\?nKhNǧaBKV?PGw*N'kQVBH J7{Wa%XQn`ΕuR|-½xv?+ZDG\0(uEԶ䀗Jպ`[%W=CA RG.Bg7d(=sTmrhޗkgeֵU1jORB{IcLm!@㷏(H [.pj_.A"li>dhQ#Xdӣwr2zG +1.q{A!xQ(2 6%лLJn66ɥ!Evk<IssRtDDl/\ӝs:'+t,ʪwb6KvQmo,MtEyz:rKK]",7>SծTB)TjWXyh*qRSIφi'fsUBFظ8$=J}2\.VCPB }j>f5Lˊ^}-t{.7+®υQo4$Ǧf$Ϝ3;Irڔ2XC 7p0vRXByQ:C+,oEE(%"JΥ)TL3&j^I02d}@/|~$q:xnT֦^-TP||i sgȌSm,O½Okv gOzl~'oC/RXPnJmk*<^Lܜc`dwKҟ48tNRXuy8wj-^\Gh/'+ʰe4>KwӚ㮸uƵ:=yބ'&b`\$-K>sw,ءН֎ eRZ-dyX$ FRZ{9Jiur(-7h㭝+YJȼ_֙f(c!,îK1(ƮpKx"||521/c/ !sDaFR H}jU>ʰXCO,(H9 ,i4rP!LA[❅:ӌ[<u38-Qݚ8߾P}pnO#S]fgȺH(D:|1~D| c}\Haa%UyeT&If jZ!&&Xq|ZyT^~j©u h_Epn| s؟#)+>m -Wp>֠"uKz8iѰcM߰α0ETVW`=MHɗbOsmɩ^Q-^C.N%ߤKkf$EF?ذmBqÓӆmJ cҀ~=ΧVwÛ!R{qv 'i0.P)u8HKJGKjŧ:ғ€n"CZ+kƷKл\7D;~->!̬{_0Zҡ69;ҸK[ g cCLNIxT)js|B~ǩ̆sX|21 쏂0p2;NˆRXa׊eG݁[ 3BvDyp&;VQvg`[Q@N^=:F9HXRԓb,n>΅kýCFնpDr%k RZZ ,K5 0I6 jo0 Y.)+h4>2C:] 9OgB[k2T)Ќc zg5²{ k-y&]x baA3- mGV`}f(\dh|;yL/: Yp*82+duo/hG4\]{Ii" f^RZ76hy_RX,[wL<0F?-]9u+Rtط8Jf_0枢7CBDB)孄& RJcNl pdlI_1s*E>YV`[=ԭ;5|{Rcݴ6,, \>YL03f5HU \r++J ^'?AqQ2cPݮ|i&S ח|OS&уVC@J#[--y&1kL joOu!kƄej7o7>y6t 5=]4+^TPN"Gt.]SOO&% 7~v|RX\\yWI?-ZQ"L9@:6( l rRVӠ۞Ĉ>MNCld5yg’_Yޯ\m*I7,縹p\DCmIt$|)J&0]H&X3eaM.u{n]x`Rڳv%*VY2JI>IVLb)Vsոdr? we1ϙy|Br񍣥JOy{bȝ<e nm%]/Elټڈ#ЖwC1xq0/N}Yb|'2@8+-%wUȾ˷FLv.e<g}5 NSP$5ɷ F`_J ^L)Ȫ;'DĢGP n3N4G[tXOѐ9_8~}üω&4bDFj/96ش>ik M lYBtss ->IKHaa~(' :STE_sesߘgg`' G #2Nfфx<#b{Nswz"߶(`큀wzQy4%&^G ZjZD]KFkuSi-x SXZF3&6~=9 @s4_|C6n逮dn/5oq2Qv|ɯGꔲR⋇Ww7JyRx"{ }ѵH3'fdJӒs*i=@`xgҴX΁{H_\OM_HLpUЀ5?) 
O ii+/0ELu܉dOi ZQݼ oz=hм|YE3G1:~ kVTfB|hbA r~nkGȜӝJq߭q2 ]b\%U/VLк:sg[H!խ6mS"qNI#?Sq:IM 3Ȫگ:&*Giƫo}4E-1 ȈPvPx[SV8֧1nUY!(rş],1ɦ:?iGBaqk ԶY4Ӡ-B.-%kX\ eiB1| WRt%'b91)9ؾ&ii$[Nfރ o-Μ3H4Q4+,$ Nh U> b'),^3 E ^KFϴU##3-(>YR*VP{@d!M<-]&,.,Om:ю&W$׳'Yf#]BHb)+i2rGtI0˪%'8.g}0&݋kTuǵH;9#.t=BIY'v>^nՊ0&n1 h4,j27li[44 !4rC UK;C[K흜5N}VP]!.HU}d?AĊu P/OBaS@f=CKljkBlBA&ոQN"_RԸ j7_Oy 6GEamI\6k9 'CeyXSi][E>JEV8cϑr:}Bt${VGQOj=S6ҾJ,j SB-TյmJg)'e5 8z II8fiyE~ߋC%FZGZ*;]RkO8K;4TI[G_;*%QT־u3?*SO[U{?UNZu9v EUm\J H!(]9Hv`ظ>P)GC$$ݽ[R7!f[ܹ3gNٵ^cڡ͠e&#ƴ𣣎p8go9H;~4j> Z<9/7*O<4 cξO?J;FQ -Ró λh<1A<g4ҎNE=0RrOe.ΑS [2VlN+!{"vvf O'l1I=0|kAd3W~x6_@ʕXvq5]0hp6vӮEiԯ+hg8-KKkZӡtHSreб>MOr: n"7_,7efibfpk}bMmx~e ;qj*6?tF NvIwp_-Gv5;sk 'S88P$& $% 1"7d0qɡHz:W%n37w6JX4*ǘRVY}l%^0)*j\<ǤN|i,EV%Y8u ~JbX3J6;jTF}K#Oԍ[ndsCPZO+"u̬|E ' ^"`N:omBGN;nyRWP[h@#xqr HHxYWJzK#P!rD Qn5ѬRbl3 rNbNb@=ЭaUTG( GÀhބ$b'p06ǝ.2MlҏVГ#߼dI.zysH2 N-Yo(`#꨿8Nf$ /Hx7Xt%g\xuL|jEFg;EN`CDˆ ɕncĨRģ|WFI(!9izO<4{mOW8Oب;a@dou$]Ԛ%3ZB+fm 3Poz.rDw|Ͼ.By0$NdMPq⫯30g&Mɪ3f+ >p8q wE:sX9IkP̚>v *eȅw 0aw,@bR+TEM64FUA=q֕U7%*IĸwH'ʐ .$KtJe46lMJɘv<[ M sp t$MpbзA*dbm OƱٍptFH|ܵ^D4)%dűr]z #oe*>[NԷ ȧz)1TxNS^{Q@n'Ȗ 5U!!YW_Ll mw:jV1.S 3+ETd݁ U]ػu;^)Kp 4ր)fjx^!&Kql{On*=X$: T]L:R{'O!6\F#ϓڛ$ 's Lk,?0ޅpҌ8lxYL[M}/w6'3 r[G`6G~3Dho ;qjٱ<&X}E3>DFv{W+#5}+(15eDt(-%\۷ܠo!>SE;2Gk {2돎6¤RADL*1iqF'DL\_Cʥ#XjT#2ZF㚊&sz IHIVR\H'XΑ >ؑKưȕ˰kj @KIǎ6"rCnQ;$䑭$gߟcZ90LJ* 91ȋH2d]z܂Αb ݺ 0ȓ\@<<'<[s.ǯ㨍 CdpSIdکB;]ht.Vy/:tv<| 7Zy4_+&1 qܡJd;ۆM1iM_4ye"[.!4#$y<@1u26z ۰ i[6Z*UN͸C77G+A? |/-VD$I 7o$"ك\L:c1Ef/4:XvŀK_Qٝڛ]f$Ab\8 uy5K4/F`NA/1% ʹygIl!V'â,Vk*0,U̾?^ ap"Yk֬!{;r#hkpE=-bFdI9>r^R0'Eڬ1dH,ّJddL2F _BU"5E"p%&UY!{K8=pvl `P;du#]VT{.F閎ޑu]#)[@tqsKknʍ,;;G 2 )7R7zvqQԺG8\zi:tCyi2%H4ɸA"{z ň={eQdedb i7$cGGwv2iM`;D< DZ}nH+k62+eӑPm*聸+{"<ЕY.iPf?q,2*}VXV!/W2%8EF]yRueV`Pb]AroE]^nBߪqvHYbcAʺ.D@P<-@I+# f6څH\߻ȕssv~O4a^}]Hُ˹rT5/&{ AJy7 "b>(p老CK} C6d.U#&W$Fv Ƅ\23"ݸEnqHj2:5Â*꬯ UTH<.0'bXX.]|μvÉdpPr I^[˂N!=޳+AVz~[3 @ƾsϒ\+d#d0d;t |v ' WaF1$ \'f=1ȃSص V _XF0+ 1ץ/\&0, 1u5Rcqp3}OHJBGIb87\&Exqp)-տ 1\,%7JqI^Ŕw#O~UN.H%?A_rqZ50DIyi$\p/g3:E^|~Q%%yBXB"L@o nfwx03X^w4l^14'j,0"i&։fc aa\8+3P"t('AәzGhpO3V j sTJ][ݴ#S5XJm^#0&Җ<&h0d}`3*m*ɘ;6=d'M0souPbXh6t9$RDE]w9|u=9ĄQ;"UHMrRHi?yECȬhbէFm_RKu/_ |<&iNIVxoxCjJfFcIgL{"%%c ` HȋOG|01KnaA*Iyef…;1QMİǑMv7F 4OdԦ# Z?zE [R1b`~z}O⯸uu_,:ό"{w }%N"J-?%(22%Aa\c#@튞F~İŢ>X! )1?.]B8xT@tP61S %0FJa 7ӸHye~">}`>::xÂwq###Gb'62~!Ā3y<>1@W`EH^?Ij$-y?͓ZSb>TpL&D#?. ޟ{ LnyeP)HB0W"%7^`I0')X]3D!>1E?* Ib7GBOlkXi9ťRta̙YuLnq \zzKi7ȋajv 6좶s!uB̞:`" *V91;-IG#& zu?,iQ$KoMjȳs ,&GP!.*Y^!nx ++n,©8 2XB{>pHD`+rWauFz(+FVɛF-m0t[sOy7%dԍv;HL^d{9 ֎I(&QC#,U_+kS/߰ gAk1W eI &yVчD2Iܑ@Ġj=O5}*$P(TF2տ$wg`5H8ի^o=j lg QySd'C?뎱Kt ƫ+ɽ[LoB&uVa* B&8`RI$ZQ&=n)>u"cR,Q$.AM$Kh!Y ^ldv,WcfЅ ER >ߏ'ZL\G8u  a9y H<!i10f/ѮZ u69`@^uHR٫0< =MjJPyla]lm"eKmlྃI|8ARdݨcg*Y?-h+'|9-Lplۙ@?o^q~"&dzR*G KKW H.[O# 2w-ے{73G@ 5ޭ5g>e(-{]Kѿ=dϩ\Y[[ݘ&x?zH97bxcz:>4wJc*'58;D?67#M@dw@4R췋Za`0rv`.#m!/2,!l8h~5EH7:DǬ`I,=0n@\'W4P]wݏѣм6i:o\WcVD0hy5{H*Zgb#DCe0`NUn7n;atc_51IMHN5 )%I[erVj-%ƓO!.}W_UϤ繴c;nڇCū?Dյ*5]5@H Y- 9gM6EEbi1*[pw4fPQOU$+T6(lJ?61a7Y%y.`P %w %$+:y!)棟a׾J-ַ*J-)o,//)&o2(1Ho6'Gv,,O:JM&cA GVrvտiܚ3ĬQ~5ܾn[I,`ꗎʤHu,޵[#bI}p6ݗfy" :2&cE.ЬiӬbd˜za^biGL-4ٽo81'=7}!6y`67 jor8w ͱst֋6lWƎ|׾gҐZ\ -Lh 4;dE kZ$}{^ y$N_ŮFW55@ַlhӦE7 (Y'o0'nWb88 j4 SK1&KdERIwt!PQ!&Ĩ6|p:A4##sRWcٷvtILYkY5XVV Zbɮ[eV`;O ߵP5ַ:ې uB:"ܹłrgg2~[gs=q6n%_Ǵ?4n I%˸Ĥacc|)f(Wgs}x6HGN ,#TM;߶С/^0cԦ\r\Om*xyu 'C=idJ$~T(;SaMmny[x^$`H8Ő!Cʉ'"<m+GE>3dWTTnp 7nD GmX43v<{ QՋxmJ6'nB%|O~CZE|I >ވ)q%\zWZҥK1wZaZh͆Acs8G#О`F{0L&kTU8QD- $&CBvӢo߾ Ms8fGKX4;ļG#pZ>N ӧ<G%XjyA)e̾"^SZyp Bh>nkG}zCp8@B5e 믜YxVClj'0dȐ&e\xp8m ۚaŋ8sL[Ýp8@3! 
JkljOпSxVG{8x VX[|<1G#Ќ fͺ(ٕUGU ׋p7J/qY5H^Q 憯 q+W^>KÇK/ ^B.\Q=AJs|N3G#P 6Y9y$mۆ &P}~csQcO?B|0eJ!J(څ6%47X[!Ewy ?:WV"aa3Ɔ_q8(--ۅNJJ3 ̚5x!G#~/sUz+={ӧ:CVV@k['@uLȱB$U"\ bdtC 1,,^^^rž TEsC?Wy@IR.u#^|M/KctW;|G#÷~qՈ[|l_ @{D ** Νÿo0u15ü:uJPf8;&b<+++cbcbXZ7Ssh_OOϖ)a"0B`Dk9`io V(7WW\P;ҥ @/_K+(̩}'{.]Q+I/T@jms1QZRWO`Cq@qE߬#4w}7}QYZn/+f{…8pw62g'u0w%C K3p#9EP#1!7!ʒP.>r1(nW@ tpJ1)G#haٲe>}PN:aÆٳBܘ={6o,_{lSϘǏsV 0 `Jҥ ͝ࡇEgaD-K}L*,,Lg13ֶJ.g~p4pwxOȿrGIvI6&ynU7OD~Ug{ PڔQկƎB'DN}=N/r%̭-]]iѭ_gbgkH,L R <;uŀ̴HPJk[Zi +568(Ug8y/~~ Sc˔! +^~{pZ@K+1t(. WbpH3G.nǺuk2Fz̉ { UӖ6>y?]^fco7WfK( u>~p'N8H>[-Y u&DDD':O?-ةf9)͛7uRJIfff&MHH$F^8RGF^g15#)qLJ%lhq_=ZB~V)W#)pb71((29lfBaL*" Je9rR\J 6'b.A O,J돣뫣d-)(צ?jF.ޞ0#/_@TKѩ&jXb>vyDgtt*Jy RpPTȋpfΟGϡŐW=TV Y\NcHH#J^ r_NR i[$axi-!"|VRRΜF\VH9#3$mat6G#h0i-[ʬ`5cR`6ȁ/9so9B/R`ʕRM `b}ݧ655U!t\tI8~'!.SeLB3 MR]aڵð|Sx@CfX|W8rFo#y`R($4S`LsI+R$RxbMA6*d*[ðgO%B?;уkdITA)A7*w5:ڤaϯk\vVGk|P!1gY eSX?uK <v؆[=_P /Y14 =1,졮,EvF2=nAHH&f U`IZƶ?o&> " CTa_~>0}hreR2t[K)jb0#0MЕPх46G#ht/I2d =k,lݺO=ۇA' 8-l7oNNN^qb65D(fG֭65Bddߚ.,,2 G#~L|p9ݜ0MC%t.\0 Nyf{T)o=}zQPn@V >}\aJ{(0fOOdAR~>wH %fvน=)Ȁpy?QAj:GDAKY'0,lvݏ<#cRF{gb<Ҋg$B=ieRb#0L"O?g*"Lt@{F1>V;Đe$ɸzpw`FEE`j\,Uq8mYuiHlź7̥'f[Vr%%]cL7Hdpev1n=0)p<49Dfh+%r2{*5dEpE CPp(,d(!R?O&İ`-̭F*ccp2@ _ow8:{VrjWL6!1İ0CPDo֛pn+_"Œ^v Xt)y۪2Fƍ\#ÞL*#11Qpܯ27b`RTb`\C]9vgXFkM%' S`$wU@ji)}_7)䅤=cPYwU3 )jf$fe%ʢx\ Tr8VF1)Μ9"¤.%I?"b 8CGy`Nwup%WHHH7(n11(~cG} .O%AIe6"ܫO 0uF1Xǥ[tvAjP{x g_ grs3(v6?40&ldIB*#pZr[yfAd;l\8AGDfy<209SQ#h{T_=9E 0P#bԋ|}{xufZM^QHU03D؈9Xm$޻SIW⛏Az`8VJ;f7: n4݉A=2G#+:t(~a]]"/|盿p^G#P#'{/< Q(dqOv0:vXMh!%lhRڡ7& l;{ZvaК}G~??b0xmSxk"E_,]4d9,z^{D68ڱQdc?D?qǽM.h虫& :44yܩ WJ#x` b { DOW} vfW*}A<5aad,e,'D4M ܰnXAS[?4 SJQQX\ˈ#mIbMɗp8w Bgvnr :rÅCTxwEcf nVG vC {o;!o!aːw%!IYCsYCֆm= l۱tm/wg-(^ ;d]z e+g'#9٥ dBۆ-" Ziiƫ&Jt4KӧOGXX0(5KAjZ*C$Zg3jZ*Ry Y9ݝm`Ʀ`(*BI;YR)\T;Và,#|dra7]h9ؒD,V~yQ \ORre"y;UV 7OmkO=<؄Fp0qF,[ y޽q; ^]@ANx饗zj\xEEE8rK<゗A dffbǎpN .XA;II_y )[ H#!(>wGI|,Xb%lMGLAR iDkŀDmspM0óo_ &1)- 3isS fa0-X`#QPpyars'~#aaOTJOH4VT^|m&bOܹs/"LE#h~GI&5;gmm 1Td ܬ뼼@ $,{jժf)O?111`6ٳϬ`.jdɸ߿SF"pdXJ1驷nyj)Om~oWmAv};d[K-ºci5u_`~T"J|(OaS4M_eD3'5H<]CyV>~}sٛ:ECˤ47Q58D&op7v8U:,Y^t=yw [*;HQvD 6bk؀vm-Zb[toпe޶mm[G^1SKK Y̬,uN~]Wu$/-*T‘.ꈭT^RLUgd程 *RYU_M y%g 2Y]Hԟp8F$CRGWG(QOXAS#j:fO/S7j:`n,Jhs˧iGcE1hm~Q#ΊˣSjj-OttD͛~u.oG{u,͉Tn,pZWҚTF\= TF lt(12AޝGQ$)hvp8w vueOKꫯ"::ZS0pZ!Xaٌ?/}^w ;DSfÑVֳfMxj|º ίz ]!c-rWMbh1S|z`ݹܔ1HIX<ݼ{vlDUpBjҭ'Q܃#ļyobZü7_DZUCXx>SC|u۵q\-irEkp8G˗I^ԩÆ9G#5‡EEAZsjbKhvTWE(V'1`(P$Z} Q^$'D A:t4b$%Ҏ0,,ibT[z0hѪ") J?L"zYLyeQgƈܬ@<71{OCȪ*4ep5r8G#Mxg̋իW1b|Gx=۴ڼZ@+! j6%(TOyC^ZMPN@ebͩ1[|Mؾ.4a?L^.>Az>F2# ^"6E&%,!#$< "yed<ň_E!: ReQJPVky~QwI"Rnć7+4Z[0^G#p8;={ bݺuxi>W 3G#LDEOۨ"eĻxWIQ^b䦧#;;g.iuf@ڜ0րYR8] 2a`,3n|0E!8']7C]Vl^ ^Bɘ"F]!QW> {&ڵa$fXZk#p8@+#`gg'ذXr%lmmk. hexV5 k/oKClJ%ٸ֔`$PQa " cZEPr1:O/p4MZDWQpHflPĢ[o-Aށ}=%|7#JZx6o47~A[8mvjK!ğs8G#hFqV2 ?^H|j%pevߋ*3c|Li6Ϫ)Hb+AU=b:^+P[hvm Zz53,Op8GI Z̚5KPyqw#33Ip8~|hiٯ(Gv1|XYՈru?T/ݱ^|YL^;Ҋ|zL1i~֫ /OJid+'c t1D[%Oc09QՄ⋽q:fHy~2~_4 $ 7r3 E΋ahzӲQ51ySkSR_sE1)8G#49L-y6`FÙȞ={}5yY~>0_u{҈w.Ss{cBa+ ')8 l]񐎉/^mVz wG2(ᅾ'hq?1?74X*b~k/*-R[fX:t>rZX^ҝJ (*o"yIֿgZXUj25Sɔwt*dPBYbɸ!goi9(eq{{^[w\Njˀ RV!3S_@אR.C93tFGr@VbǬ:G>`G1c{*" YPSVR)e8[̘1b[!(8)mr4 mh>1oC6%v`(a5;~GhocTlBF=m?0zw,347XCAfO0.0vǯB5)k#8 x]3NwIz [m5ݮuҢ%$dMmn!Cȑ#ؽ{7Ǝ{ 9 DnR9v~2 ݬ0q4$[wVoA`#wc0?r3(=qkJYvBOH N\m y^.ʎ߀jm`5 z8%9--}V(ޘEQJ3fd7؅9A|(-9,;~'T !]YU DSG<8T/R `[*ehJ68AD T3r(3ph ?=1#{T5r Vɐq(~pT;cᡛ\70V^^^_|?@ŨQn:xyy"UYt%[I*zCakeޜ62o%ʋ@)9 ~56,;;}yosML^KK;xt?]Yd_GAe. b2l$'vNC?}Y+w"3;w!:DN=+0]2 琒p$@0H˒ ,eɧEٺP9uF8fR2pqhXsnt6-5X&:1D uiK*w&+8'âi_75I)%HΖ zfthNU?Vm؃ )¤60 'cQ  |tFh^*  BuҞp%BE\DNJnCLEBeMz˕8:Ć3,x5/y$$+E(ޒJEAbH([C! 
=uW*&X+ tYJ8OCk(%brgâo,]7e64*A~G.uv WƵ\ z  {Jd%W_TdXؐQOTD֮]+H]Tmp)࿇셱(uB"X ~?8Ϣu![P`e[-aよa&sYC8x y wH_:=FW >>Z\ ?Dv{ip; dVmL Cݮ@!] 87Dm@2# REg-^h4$Bw*@\fCER+[\i7jzb 7uUK Lغfע.,FΦj1OhkTsKU;XP+haJ<eD 1+$R|/\ 'Py**RsACER<62 C$]qLMv.}]D;[{YYE|jRd79}I AܡM8Rz SO_7c_L,:wEaH[mY6}ZVF(4s͒ip)S0Ɍqqf)35glmm+.xj חןaVDprrš5k/p@8#`Ӫ.]'=F>~` Lk=?1ɴzp9m%cDDXWo[tt6.h b.7OK f@C!ױmF$]=hULb3k: T+QE<1kAS=L:O)`j< %qI t4a3ͳo0'9$CB"NV*(JeQVdU(nȉh{!^=SjkN*'y /ϾT̞YO=^dRRb G|cvwq|@2@}K8U$OejX^#W8_]v[ h`bS1qjs9brgBhNXOsm:.Gى(fZgTWw ov@X M-枋'?|9&h mhHP/ @=v,Ys=Kq +uSSDC~|ZD~c׈4\BiQI ݼ|B:ґ'}uif4 oOǵZk $1^緑|rԐ:fPυf.4dۢ Ny쭸k#M71TA(렆w;M}yoyѰy…0`&L~M5;B⼉Jϩxwx]i(LWEL?~+sۗ(FT=4p9f6:<~3RY"cABbDIAW+@8&b ~z*Jqq1x4S[Bg*͜,w MU1rK~  ^+|QQ~sf●#nȮ_Ӌܶuy,\yx{0l6v?/Z9Bf>ve/q+k~I(83S:C\Pžb֣ (|UPWYERuA{=(8Tp 1. 7o>^{YJ7.5\՞*Ft O;Yc*:s!~JWhФmzG8O.\թmdPhA&mf6TFAoϡ򠻺4lI \Y(/< ~Bqaț":YOiu "~r̵W 4.0Q޾}K9B,T+j}>* R0p L`eQȴN PtxFkccxu{y$cXХ2u(@!W?Yꅀqھ$ڝY0 PE E8m~v 2QkcʎEep).>=cш!~0xpDPaw|}c{mh郒4\WEH:xBX\7tZyk7 -#s3}M2OępD:ޞaj{tEf{ג(e-o?p(5'N}NJ=hݲM; m($G7oc}!?~irV@`p4.)č$Ńsʑqf^ a_FI$W c,|Q/n~.1-)[W$WA2ID#e1Bc'vbS2wɸ-Orɷm0wk؞`8{$ E򟧰0\i*QlG.âІ&_=6M+0AhBp_!؉< M:[X35AG7ĠSsd(sO?r2J Ʊww_p*ؽIܙ.Wpw}tU9W(E \Ȋaw&3Ķ݂Z*b,80+~_PBSN K0@0x!N;[PDtEI/B3"(L8#v}1(8E,g4 o(|ŗ}!竈p&x"wdF%32Htȫ8dO ףּO4E) 8Zt=z_<<'"(. {B#5ҏY%Byشn 9x!A졨<+7l~z$j״f?OT"f?m9Lzk׈甘( Q?aw& -WtڔSsE۞O@9*?c?V.xމĒF 0mҲ7&O!ˡ V͜/[SZ|{ez1v}qͬ6dMª%ܩI_ n eG"6CRR}Y̟?3f̠w|hܸq5xiÈm7 ºs|{=[N wO43Oa/xp5A\Modۗ{!lxUE^6lY^z`582;',vĿr?lXpދ6 ~ \25R`*|F4׊`/yO/z莨@23)ؕGYG[!JlpbOQ哂 & v8Ƕ9XvK{bV0Q64Qq|W9,I#q,|t/*Copoq"݈>>~h)B4?F3t Ç*#d]NULWgttil I?2p݄d6Qgo>]z~5 @BļVRU IFd7\C`q\[P@?e lO(3QZ)O1:%DnQ9ZoƬBhKOnaE[Z#aT-6rQ+w!m/jĪ;\ӴuGWpIw{*:}=U<(-}5w>J섮5A"y`u(YwP i,%pk 0a'%^Ȱ:RPsa-t2YY$0@r[zWC(+scLcL֪EX?F9{WTגFw{ÅV1L/f{PHF@ F@0Ӯ5x T| R̩nE[`@W(Nܙ89ĄfBfB W}<wAu%}a:J /фp~)AT]?QǹgsASﵻݲ#,hwRN\'"s<,sD"ڥ~99,ofۥ**ppZCMh:4-%p(/kE.  
Atp [v\n1}o w3gB{o?0;H&BVhպ)"Jz ҳKq*u"ʦ~q Q`YAT9v|6vP 1o733Pmx~)܄7^z7]z]voLI©F*P}Ddؼy3v> ÇZB .yHOߎϿ6 k-=u^b omrEMF|QgVnqe}WW_QSe~\ 04Qs6?۔FD"YeXd+otK}gah2Nq~(^ ujzOBbpw 9,)Yعr:&RK#gE²PFڡi+i8q'yߡۇ :?E68%2I$!Pإ5yJT욿/_˲ + sBw_H[c,blN.]=e6ċFǛh~ t#SYϱzZ{S`!_Wt1 sIO_@xaBQBa+n2|][*ޝZGS7*+layk:NEXM*yՌIuhI |Im&G-Tw&{c>0ҊM60'|a|4U7+|- !WrSMFЖs#SJaG65B1~8Jd}$-Q/\cywIxS:t'I7DQi[Uv" QaMRDxꌨg ⒂4\iBjbfD6(-L?Psߢbuy|=.Q+b5 0H@ QHɀ>gs=˱\FEaDѫ ŸϦ~F Q)f(]y]{:4ATULyNbwwdCZj<ƿOO.QZFn|c*#Npu~Pv2&">?3z RU.j^u449]pO01졯L0 [~g G`Abu ({a1ЎkoJp;NI6 gB[bCZdG0q,ܩ<~5^{H?ĈQXo tJƟ+ę6th4pԍǿ.?NC s<&= ঍ Nm/Bw`CpChQЭ[7Td׮][ /_yRZ @aMc/[Wq/QQl&&[ׇ1xܿ<S| ^ĝ%,t{59ݛߢ8CJ߳?LjܵZ3O=7V+GfFش;zi>ꥋ?l2Ri\~ zX<3-<wרlI" h0upM7W^aU70pX$ :7c62M0)2N8Ԍ;WAܱ'׻ Em>'[캓U8>6Al'a XPHU€ 2J#]}fxC4܊׶ ~ 0*(qL.Ӯ5RXN Л>%{q UB]П۝|yҮݧc*Pf=ĸۧhnVm;AӜ,u >0c@WD?oV~=Zr\Btb 4:GMQx.Ӂjm#t*sE |sS +D|x{<|zE3(?Zp TȲtb?{>?Nx $/d@vEK<-񌡝}lwon!>O'#Kx$vR]$ `QC0`Wat311MѴi$ X7 7Ig&tfЮ2u^xcKȼ W :)jjmR"C>CсBQNWj2-2{?CK>܄Gq': QB608'>G Mݳ9r*w46jZf n(M Gs鯨0 6m"9`0#e/SlfB,Ǧ-Z-[SO2o&nF:u!~!Cai&զ@af#0⇦]؞d鰚=wt&jQΒ\n|>2@4FF吆Dh6S+mۂ.(l; ēͨrH!䉗٘۹kԋnHp NXh  ݪ&߄9CHFwM-˜ÖշB3cUZ<7ߞïC@Pom ?AՃ$Μڣ+" tjW:DB^r:^%'\h ~*l0{r刱f~6EKӑ3P۫75&kP4tFG-iψDe\\ ]4鷺|Z֬{/2v;s ğg" Gh4&: #ېe!ttMh &AmĭiIe'pL !ftu"+m=%ܰ0Բ(<[ƾ9t<|\,ƈ5&c޼yC=[0ГB\h^u^ '*qsG(tTpIߜW=xg9yE5wfY C7e0Ům[ "k Y*uhƓC0!j=i<0FĄXM!xv@4֘(Lfo K(f$Fk*鳱pBѿm5E#wHcdfݍ;I4W3n֩oDZ@@dv;\=_" ŢC[cW[TIs᳭>S8̝!'`p;u>c]^xk5mg(#:62w(q2 o ̹ەЉ;2r7V2ĞghȘȇ2c }2J?کWv/>Tr HiK!f[51(޶E yG·uI{04XpR$b.X[@OWUB6DzFʱi >"Hۅdb.tNըs~Ue.JWc+жx#<1BL+B(7dF^C̡.DHW0OE%uv,etLN3⎛P]Q^6Dȟ2e|OxHk0S,J"&= ,mp!/l(GE}E6g0 ,iK јR]?SpaGTGSPFTwqBDٱcԣcHI'Zg"* = ,<48$TT$gDG=X8oGPxS; uKV+Êj[?&F(qST8,<6i( x̺r ܫ7idk[^VeIFv{iR1u]pm2v7)ȋP`!qç'WsT|kA )6Zt%^<]Aќ62oS /kQեe̋#dVcJ̅6!t+.]"O&Њ00Le$$?vb!`Ӏ'=vQ τSc' ]f0N͋vu=ᔰ2\φbZ!&UEۘ\_.2y?9 WӑbGҩДq JӴ#zT\R[ՇZP:(J@YP^IC+0p+o =_ ~/rL/–Tbܔ8kX`֐VGl6C߬:sD*F Uv#1hC& {MeTDaOevzS8O 1G.yϚ5k:,̙y qT69on*Fࡦή eG틾7~~hzAjM^Ѣ%.fr'RrV׊i**h|.dE-2˵/@7 H*ڀ>VvFiFl~W#!p+Ϻ#KI*񫊪$ Ac$FZ$5T'}P v@vܽF{YUhSG vSv+9oi,r7 +cxQHO\aIw4R)N!ҽM4'G`w`g#bؒWTFъ @_UsW[1la¹\[ J> saX=<KD6~}i7"16Z Տqr)JvAl2*REٸA{ec֨C< AxSF7gaVH^.bG\y-T2KK}Ky- ͳ9(-*Yq+(3b$R^ SedxN,svٳUq3Hw!ΝmbxA[QI b3_;J 9ȧ HEY(̐NȜ|SH%@IDAT\%,h^e+W f9jrU[/|DuwemHy\~,~" F5| 6{Hv8U{;GXY\uOr6(@J%èN_|m޴`L{~nߏ?x=)\{g."h)= l{9Mw;[)ua_(l(~h s=G";v | &_2`?X9-9'agk:-B@t >Kl5?"@-$OKR|lZ!~9n$tw9}"5eo[?XP<5i8xNkxiq6Q5{7=0oZ}pw{no.-yTt=M3Ab|?rӒX?}n|4]<>SzOKmUΌMKd^|Gݒ6)?y6TZg]]f̘/?.$5 ?6R5崺)VB1{dRmYc{ִNf=V ;S;Pâ3Ũi-{?mB'ƭqy9x# m[6SųZ b[MhqkBbO(h>F;/\qWZ)hO/U(ڞ"+w:N[xq?UB0 E\ o稉aʪзM(JL#]Tu-y6$O=rPidك|H@=֐Zj? QPDw~FwEh$SpdGI5$mx޼?onRjΛ'h@/=_| ck:C5㘘S!PO}qsji˷Ai㪾8 ! h(zM[xPa{"Џa}[7fjN`K|j鵽HF;-zs|TYۗ`Ҿ)6bdr!44m$b!5j'pCqU_㿿#x0I-=6JaǢӍc6v.:9GY!d>h _4n7(WJ+h!S~Ν;U看Aɑ8IvRV`ܭQat\vIЛC04sט<~e ĿuM=#;NR:I'x}$Wlxp䗦lQ`)naϠO-af)PR ʊ\%{ǦZ5K^3%AsO$`Z2.> Z(kK7}=sZ˦UV?޽{}0[Έ"S *)6m1r;cAի9i؊ j;F"ƴP*^Rbu=|58kK#~ tW'Bckgi}d_F:ߏkN 殧7<0ԺҾ]zvUD0I}WLtxslfB aa$q.w :k| +ٌ(G:54UnAO1%ܮ{v k3UH!P!.&a$vFЬ!-ԬP8ֱQd`23oY1Bk@޲O042yPX|:d#\ =. 
T1 =;QPKc7bS"}SqqF̌X 6SMA'*k ?"l1_/8:$;a8g"`%#n4G2ki5mQ#iQAhͨ3h_kDND`:PwEҷ.ÒT#1c bUQK4\.FoC ߅QwM?ͨv@RQw1߳EB.y+#IKr"&(H:tĹǏ&`U{ˈn!'))F1GƔoc޴ )e06H%BE+wQ|L]?ᜋ/2V*СGd-\{/E;R!-ᝰ.iB9 / >LE)}q7M>[Gqq)TrVg(N<W*)}WRUh,}=SE?~JHmtyW1L8 Cy/JJJKOoQE>+JJJ/x"pomu;2s9d/XV.})'[SSr]!Ycu6.uR?8N:"3$K)?Tft/ei"^e#FV)?*]:g~v~{ [3/"*|nNVV+ ZkOI Mnn yW{:I׾K)~/HBY!{E9|;$]Q~LY0NCs},9j:WE;(ZNȦUD<ضZIY%VJ]P^y_"pM!?>i>)/.{LLJsO`4SEhG㉠!=P:34i' j4n(cq:y\qmPAHָëW$!# ^9ɱ[-zln9; bw'GhL{\F|$&6X9)K, ik%xet,6 *Ug q-E{ʢ~ؽrڭc -(-9[%;f_DžϜN 9$W;.i}=`ūM=QtS)üm@GG+x6nQBI~m4njTyZ|Fu}7^mՌQ[_pyDፄJ ?)vdˌEAFƥvGpp}.s.Ssna5`4^o6эΧV D^6 ^-*"Fa`d+'=ׇ՗6#ES A#Irt5Tv ]1I;L`F # xb#ЙQ`|3w[|J*NBm@0:AbbB.JknȐ!b̘1ظqj*"/2ʀdb M|E+ (!Fydb*uVx;gNӮP0z:&7LRpW+9]Ғ6VUiяVT̊۱i;c\",]ݏeWNm#w-g3a3b׶-ذ̱]A2mtK scsf=ǁu-6]\糸>juH0A q4V!L&mBz:HT|aW%ek b gH5nyٰ1}tUBhYl۶M5iӦ͵>.~:mNWoǥwRpe,5pT; 8`/@HtM[ON,惬̇o \y߱=cݼ -ϱ}2J/eA¾iK]><?ѡ@]ۧ]-:wzV7Z[|cSHL߬\9Uc& IC؄ﭥ@ #ݲ;g$q#Q@t,#JW(!5W6M?@(>[Ĝ.WDy.֥yv⿺4!H$D@" K" LD^}UOѣG|N5sժUW3~A" ,ūU+g!Xch<,}Ml˷_RTlYއooÞiftQ^䍞dﵕtTٳ0m&R*;HM/c |8%iՍVۦy}sYh绤e%3%ܟeKzO͋OM;fbq\uVUt6YܚEiB%[ݸF1ў@Hi >2I$D@" aOo ?DϞ=frЈB"p #lJ;"zr>ۦN~՘`6Mտe#ʟ}*zEE\ ߿0-^-,p<Z!l| ?OhÓDS#ǯ>(e-&8vÜ1q:~@?f?a9; ֮Ɣ2nyg F w)~c6wmJÃ8h0S0B柘GE)k;hZ4\|(UND|7wgebN.Bja>D@" H$ !lGY:n8̟?>>>**K lDDD_9G2F^Ӯz=)4a9M)6sB>V,!l0ÊHC<":oQHC Q- atwY۷]vU;}LBѿs~<S5hu$$.F- ӑ|>C>pe$( |_Y7_Gg~WXF~"a%&Z%iˈ%tx## \JKQ}!OMM_E___ >rz|Q8iҤSNiXÆ +g{"aBCuVyeB>ӑ|>C>pV3g //Ob;w.&LFQ++JHHV|Enj rzOCk~-zσ3<'A|:\иu;:]lJ2I &!H*%D@" ߫ͅ{ dH$@mN7k'H$D@"P+BCC?'/D!| $H$" 5,j'H$D@"pl۶ w}7ٳgzdD@" ! 1=H$D@"pY<(kvGE! !H$k irmV" H$5@pp0V^w}"ѭ[755ID@" \6eZ6$H$D@"cD$-- xw0qD D@" H!  $D@" HĐI" HH ˞K$D@" *+k{#UV&"۷o*DH$A@ , UD@" H$D$AD߿ !<H$ $o2вD@" H1a,[L%nç~k ND@" E=(H$D@" ?W_a·~ $DX}ZT" H$5c=m۶uHOO 7܀z \SJ$@&!uMH$D@" LG?{ge03" k.kjmj%-jxտ[jE%bfIVh 0faAEλ9sWsga}&M_|$`ׂ[X\ \&`L 0&@ xxx !!k֬+~'kp 0&ZVX6c`L 0&pS9s&Nq5 ˗/%"7eoJ3&p3%!7C+s`L 0&pĜ9sn:}Ny`` TK `L ‚`L 0&@$ !!!/9[eeXh&CA7`L 0&Z3f@nn.F?jA| 0&.C-,.3&`L \ {}^qa g`L^?dL 0&h-N}Y>.ڶmYDpɁ 0&Z'^:ۍfL 0&"#%%{F^^Ǝ%K+N|h-¢`L 0& Js'|/HHH@ppps$&>h 0&`L Jٳgmڴ.80&@ KBZG;L 0&`$0}t:tC~~>N,Z555̉3& ׂ:`L 0&pTUUa裏e96l@HHU bL 0`Eq &`L 0VH૯¬Y _"2~VX 0&ps`\K&`L 0"z6n܈x0&uH}X\"1&`L ]bxg ʖ)seL 0+&W3`L 0&hO]]][,3`L L 0&`L 0&ZIeL 0&&yf2~>|2~*aݺ[̽y8I؜!aK"_S@ܾہJwjvEQߺh ȧo{ (K4233gzPI m7fa؄0lTZGd7;ۍ}$m֗ o_6aL{ZVХnxu`L 0&`@Jljۑe'f9Yt]zFNm݈3nikuGsEUblӈX]qS/߯'b~S.X"\'婺i/S,_БSn3Zkn\>W[oorqRQ5ʋQud8+$N7oxW 0&@ He!L9cðbgv#]zzA*ėOgϞu:R <'둦Y24Fd`2,~;\.ql\.-۳/xě8idyT<) :}j=r[T%/$R*◭%˼տFwnRY&EfL 0&psX#*NcbC-c?SElhlYyODEԵ] (+A  !z=?~e .x}Z{Չjl÷n5j (H(I(2;Y=kt*s/Bۡ)v| 4L 0& L 5nflGtFY8ԑqѦO:KV,SDJiq,Ôب|-g-O5+vgg.,YFń}i`DW{mzV 1^֚,d:KCr&bV"l+j[ )U%VR9{>t yoL5" !ITy{a!"cKV§I(?2` y8`L 0",+vnZ0ik3kqAgm =0]م{QQڟY14۽ߚXxVe&ߍ1AԚD C_ڪ1Xg%t( ZP{' vfNO,֜1!ܖo: 7"*9=uoG xd8!Wj:'69 *Hp tMo@IDAT_'Yۥ"kr&a8pqpŸi'ڰxoݤ>@6oi ޑ!S_HY>u7tB)ajHlJOpO<.ws70Px 6Vd"Ur|2w`$ !ԯRL&-؂LbޅEEQmCF*iB+(O~9IE.G\˝n\DuVύ|vsc|n:q-`L 0kI`.|q8 c%2+3& Ub$.׋y 6Xk?`c1oot ŐsS9i@3jEm1`*/ p620,ȷ5%BϸQmhSˋWXyHmͮ:+3DG{%r9 fb]I;qJ>Ca#@9/mTGIf-f-^wI}G@r%zeF31qCMoS=,jd")xčc2Q;hg2J2k5N=gw%gޱ׹]vHZpT72 Iw})꾱~,h ΨrSo=zխX~>g?dL 0&nhiz@(ER&Q]Zc4?i1߅]sҵxm/c V b[ݏ6˸PX#(2S@V:{h)}BM'w oˡP(7bױ e,d)qlζZ)jRvb.)g}7U67$Pay+:楋dJp|{)iĴ*HVlb9}5ҮB9jh1^S-Y/j%Cu &`LIh0nTTHo3݌') <}}kut~VJ_ bE|~ ã0w{8+`3W Og(ʚ2T0ӰIv}Ad0,qؙL>㷦[DSpGoO] ZL11F`Dv$ucՖZ!#[L\+}.pS&w/;p [e {[eR\;p-nZPwv %Ndy1ݡܼl#^L3=O"/D,#uNSQcu׮a&+˰SOp(&Cх-^yEfk&맆놿mB'e7V9E\k!`(==ziPl䒕&Xh,1`L &@&ĿSgh@I/& DXnby 8.{PHg&Z4B>Jdڌ'G@QYxns=͇hVW^`~! 
mn_!;zCRWy{m.8rVG7$}Q U;  xqP_P[[ ?WNh7J)DIܒWa\D$ S֣"zOZm:rG{NB| ?.r(q3hRL4%bO}=qoefjwș5\injlk8v4)$%ؼI,1]3zm1wxj{37FKI;["RY䄅l~bcڵVFd27ʒ~G/bМ />6T.[fҠ%alt$4 ϔpҤ#p , zsOnǬ;F*QHE5{|%<!,, uL.&`L !7MG=ؔb;uqBq|Q vOqtBKXJS13Jܜ]j\Uy1-M馭M|HWkMύEDXKɢSh5>]TlJFƶXS\kx}-;=,o(x,i(Vd#Q&kY!˦Hfur숥ᥝ>.S׭ [YۧC? v2nĢ4L 3OOۚJͅN)H:vnj/ڂmOXeDǎD1mi^zS9&iuWԩFd ƚ,}D5 W$@ۓ( BT,!b@imELa3To xS)S!*F_b|ܤqSE Nne 8zL 0&hr*ҳ ɗ!バ%ud PdT鄾!Jf"K CEiݥ ",bqqdj/~f,t}tb}K]ҐzZ_*PC&SBoxXь%XFw zt dhAt ԜmcY$QXIKt\N| M/njMI!J$. IU1pٔw~>% )B ҿԽ/UO䈟rsCSBu@r8vw=]gDÊE=U{0 ϵ` C.2C^3i銔vz[z*'LQXa(\ 0&` $ML$40feEcl|WOVV,᫓o BBBXYqups)L ӹxt}xpyUZը6N*=vddJ#Ӧ`¤;1sqsch4*}T+ɵJ)ƌDVeJ[Xt>v߿UXf`ʙ?><̷Enq my^:nkxV+܀}xx^hxy 0&`L VܙV]*;g'@,/n?9uk"~n3Fpk!ňQw`m}5U]@AE[(*M菕:}eՀt\:P/qviU(؃Ʈ3ۊ #[-8'M@AI^ #}  UɹJgQ]VcwB, _+E+i݈Pv1~(Jѓ_6sP nw n]U`3վB^ZBKi9kK '@,:]Y]uV*e5 qIjG ӆGY4s .æG7iix 0&`L N\a׭-:JlǪK Z'~R X#E(C֙r/"'JT t }BjI]H =tEZ'8:M5!IQ^ema+q%~_"%T(PCbcpjEwC-]QE4Wy[8)Ӕ+ x'*k9X8{ w-$.^9 ^s7-phpbg`1txqxhjP8``'Ȝ.h1IWzf!vn]eH#K!+0װ:-Ɖ3fL 0&`M 2 #YU(-#zz8M[wJqEpt2u[D]h55@*H]d掰vΰ7~i(/R\$J |hU^XYJ $#|dn@ Y%li ~|~>M -zzGߍ˓ghi 9{J6d>! ld:2B[ŬtdGe^np2ЪJq!j @77Ud*Il KYZj̳I"EߦUX6.ZG Z,C~>=w P⢃ ǑO:臮}l8S("b(Q w@tmKt:5ϣ̩ OGLV*ɳa6_ !BVD?Bc0G< *!!eOnij36 L@: C+t\ȥ2Err+CѩG'x[Cj:y,c$pqq_@0zr:cf9 *??>ݠ,+={ >裏…_Bit%b*zGCm .½͓|iX@n" 9 jh C6%)#$.z&֑ڏĜN.OBWq(j 䔪= EY5`L 0&!\ڒf+9L428BBB3.Ny;&.((˰ ]ELLՅហpusBIX87,^eRlY JA':XĉfjC;z࣍]сBX,V|VJڑCe2 4%::iؠ2o0Y&T4 İd`KC |_E't>X~"KZk/XaHr^Bg/sygwक़uFh4+!$^k̸9x1Կ q)c/o=E*ڽC+oc|6,LEw[R1qtY~>6 QWĊgjCKEL^O eI'$嫡eO.drEA,yF۸' QEY#(6D]z_Ő~BtSѺ?K? RjUKz\Ź'RȔ^|TL/x'H#&RWoe=_g2y&ز_Ry;aGߣU2,^YP<;gpX >gL 0&`7fTXhW޾/~.G C.hZ=kk +§YF)G2Ρoִ&rٳ'fM_j%~.o#% *Z#OR'sKk2ue}}*dh@\İNۘsT~>`"[9ؙUޭ8Wq Ji?ڜ^Q`(r_WG+ Ӡ4 נe݊lnר%sK8v, nn4`0z!XX\ ڵEېPRf* ?~_ION}}J_q4a>;±Ťx {}_] w[K=%1uZj LjqC/hU1NaoE5l늂ӻ7H46@!r1?QF)3Ǒ>'F'gƘ[BA àDBVu8:|ZRȚ J`L 0&nD KQ_* +4G\\\P<1~VȽ{iԉLыBN#;z"8/oQr~XC*ZQX\RsC"0ej-0u[uqj:jrrƬB`GtfÛv]=G"HY{3pS]y4wQt/lQa 3UVl)/1Gw+ S[khق n$ 4MG1OG'ܢTGacȢqOB5<:`B ?7Ͷ{WBWiJ䤥>s AT8Eַ Б1cڸq?<Qgxgg!Mĝx)ph ;WX4 mG*[*A1s¸CĠ>~82ZR ܳ(&kaxlxoC Y ˪NցV@FKA|"i Gay/am\v2{ibT;zcߒ,rEN]&E JmS.e}3D~"!q>t{ "oik+=v0yu8v,YKd62e^s8ޟK ._E|`L 0&Z 7Rxe١K S'GpF&$t mRT4s$ђDZQ?Ɍ\3'@VPtQEpvmezCCAh}~Z" rxINj:Y9,ĩ0)1<)+ K$C#cRX|37!SGdȵ^i$TGFr۱m㎉Sݷ sots4LBh)LTE>6J˅LSHuWtT{-Рp88zͣ+lパwvp R!G-'ǡ!AW!̢x'g_/8CC @7zFvUؗ~v:rTUc':[KprH8]ǨozE!;WCIdq၎ehO;Աa=nXvV8S)/F A07tFȟ0oԗgP{ o&r cp 돀fS}1cKcj_g#%1sFIY!d(F- Tt`ϙc0&`L 0LY_ׂ֐c<ߤJEHZ A9.SN˺WѽK6}O[R}Xkm>C{B!98a sHT!lL0$w'1`kA۶z7Vp9ӠYsڶыU@G<  ;x]YXHt(+,C5!(d [/ڲRk5(:_]Oُd]N,iN hHqS6%E$;mi6=sSJQꩴر5ɇE~N_/>(kGא;YXj .Ga!$Pctw F FKEh%,Ip Bk:j"pKQ?:{=7hjT+h$2o3#/|}e>G 9'^ q'ЖvB!9#]{IN-v %3&`L 0&h?(LE7TΒ,*_ApfDX ) hZu72 1^ 5/vĉEغvFi킜n&o#=,c=ia|oTO7hhg:.iIyq UΘt;N_8X*tr07-9i찾.p(FjQ {  >tHݝ o%uVKrNx6A^>1<(,hd('#99q錀tCWv#6uRr2QƖou;^bp0r֨g};;BVw<5݆}?a7[p,J^HF xh_a3 &aWdyZU0Vif-$k rK叞܎j$vܨlZŗI63qeXƺ~ɩ|}8V#ͿbAf0r*حԽzcNq`'>r(^5 `Ϫgi75+;b.Ϙ`L 0&KyO|kЀ4| -gsK*Z+)+i >m%_ƚ} @:B]b"9T9 ~u 3Iaa["n#t'K#X=5(5JW[\XAw"$r-cȎH[\6=K%k˖IqtR}AuaN0t摅E&~hzwov~2Au^TfAڦ=S33 n:KH^a3YPpK ph(m^髷eARz_}z8 R=?H22 YgY!2҂1[64ܶףMg<qB~߶x-ƉÙ8| B*GH[C&(I.5)Jhp>0OL ŃGXPI*EAHGIݝћvrG 0 ad$2PjB@ϑu&e{ApYC=A-SW$'Þ}{2DRQ-E qhԨ&VɥnnHKzt]non =ǒJrjV1 tTsd# -js zIxÍ.t8P*E5B#1, j%v|/> JayP/E;'aXvuW_%& ^@wJ )'P;!la Cq!lm$l4gluv6BH'𤼔<Բ2Tʨ@A݁c0Ũ-V6-{?ⵇQ?O y0q*N)B϶l[Z0h'ǻnt =qdf|_Fo9e@/; ʑ}l'ixr; [VCQv _=~~ x殞6|.$<#Y*A)X>BvENT .`L 0&hGaAܼd|gK>TR@(i 4Z#̴;A}X9N%˱uTd]As?GLiɁ*\daz.êJQw0Fcodx60,JU}"QnS|3+y 33y)bI+Y@%̺POa+0Ӛ̻񿓴-`E}g0oIJVXh+xkLeHݏB-$!e2⍬A y{ jT8Yi{^`7S_9c0=w77A^BYVmu5fͧPq4 C,A'q K@(MdO`L 0&@+&l a.xo x⳱2EP i-h/wG?| \#*8(dztC[NCvCZ9BP5yR ;fD[K](P[Fѓ} pK>-C'8^JS['}j8O<o R?,,#/ö[@ P;pah+ȪYxx Kxˋ/nmwT7I0 
.ŵpny3j=ں+)]aَ˷x:=_@) e i<1<` Gd9£6SBRߟ1}xPIczKW\(#(+>9~,zc+Y˴_ؼoD& ' c2v''#0E  #MVEV'ƇmbɒȭRFĥ3YCn5t ⭕AIIꆎ'GS&e`xygrΰ2Ixh֯n>'=FWJ1}ѷ+w$@=ÔsCYdB[܊~AvNn׆4v_z ]R2rL}%0Q>a-"hY֧}bB_st<, F= ƾBN]bdog+3b fL 0&`^H *Gk+Jh'ȿ*\\B  )=$d!#%sӇ@IDATXSvV P~.,ͱUTIt7<!?[oE5JKɚ..jnBNLi^YË`$6ʰ* .u5 QuVF LZC}LÑw6q LdMc=חׁn':MVkcî53ΊiZI,p eJ`L 0&Ig}Y*&pUTVy_<_F1@@}|gtoOxb`L 0&`-2pN%SŏR(R~YdXqcOx<˽`}ᬬ~`L 0& _2"ն*Q͕=J h~޻<{~:ZisL 0&`-H \Κ U(KÚ,CW'n [W_Cv^6䈸L 0&`L‚hi䀶:ù>[Z6j9%D. 0&`L \XaqK`L 0&`L#>,n& 3&`L 0& o# 0&`L 0&MG7]s`L 0&`L \XaqK`L 0&`L# ɹL 0&`L 0&__j*5 uqm+Zl*TkPsήNtsĉ7F{Ic%,q/!E=SDPo\KV[*]J8H*kdZt(jhceHps NuJ-E@UDZ >#:eqpW j%[ g]JQ .nh};@AC7wR|v]QWAY.RP>Ò[O;wg`L 0' +D(KH-֢k="9,*ɵѸёR$>cF{nH)q7W+Эw'')îEЪ48yQZ$u-wan_ D~s5?'GGuƈIٻ{% 7AbQA+^>(b(6t$@@Bzf?6e ی.yMyg{_Z %Ng l aʷYJ7|f/ZWt9 SFthELl^{3uN큸|HnI+WbA1bvs&x_[f5a^.gMxQ(Cm5r=ϨVl$ᛕ?`$x'#"حVj,F" HuHvZHǗ f\d`hVXT*{OiAF V vf `֕};7Kܞ֛bst zxҫK+l,\W [{6Q`a-xt|ڈA`o!>{_ l]%u}f/EbIK(зZl0dc7K~T5k?\qXwƶ#&cT6(EI>,[q7\6R`DOK ~OWLmA5w*u !y%%H-HE+dŹ y* Vaxcy!Ҳm8"o EVU2 (^/yNiƲذj;1_GxEVN.nTd D ])dZ )7MU]tlI8~jWAnBu<[4msDž3-I+,}zV9F[-zLb.lȤhc#Bн;JsbXr׌lrYD@" 谳˰E(+#W .P sFPj9Hka2lH?ns7L8j|4JLrn어PMӣDw2u#֗9M-$ۏXC~Cp%iyT_T0ZPbAQ!kj# 8F-\ >5rSqȣJlvBѫg$*GENPB?%\uޛ'_AE'9Cjr22}T/䘷N&p1PT4N%Q>;` oK!-}9,}d/ 0")v2"|:~Gc#!9F3-٩'z`έU>i6C&3y8u1'OS P t1ݺwD@Bf!`O`}Yp{#sW]#lnVզW#1 \<B!^ӛw0fp~5@33qm弒7%L㊈1ȕ5Q~H$<$߈b%onpPq3^B lԕ<:5 _W>7S@ w;}EY u1eLPσIGWEyNERHؕTVq~s–|QԆiOw)_izO_qbmVZ,~*O}]<~àMiQ.,wS'߽=ypF5V,+ >0E(3(&f[8 h_G87M>o>>ߔ+F9X+vaapW=NJl\,n>jGԮUxUn3=G EhQ4?N*S8U7b-K[|6pJ ,7 c ^{}lڟHjnhͬL Axqը.]o4:YwGGRۡbrn!4j5-x:xBCmOI)X_DTh : _̜Hm?w-N]|&!D)5Rأ| A -N'"RQnD㨱J~ }x 4xXDki[,vD'x;/xF Zz\۬<ոzDŽyxufEq|a,1Fxa^֠6*$r){m ]3\xb.ƛyRG:8=w=x7W[?'Ƿƛ// s݀7WYafja1T{~%ZU_eKn1%MC*׼cЯK(絞y k^Qwr%cP3m l=D@"@w| a:EΥܴ32BP.7UL$UHJ%/wCL)zҳL}j wńbնag)DZ0 W=KRyjdlT ƅѿS&yn8!c/G0޼Qǎ!F+7|q^ԭcxh#gϟ Ç~{fŧw"и`obڐu{;o# p&^2^LD߃Rw,Y`eޱasc_t#QMZ@/4)8qZ!4)q ;HhZ;M4ֿ΋KÍ` ^:L,eHOrCx9 hFx}y"8Q}/#1OH,է8C r>FM0L [N Ojh7 t?["l+MDd_+㉥ :Ҳ6ػO(:v0qd$bű Żv[)%]y"(0# @0n2Q;Zm<|xkƃB0hP49Cb,_gCoj~fbrJ<2l8hKc!df)+(BF9X@#d!;s;_:| a5t۰m\Ԧ($fcW~h?Z2gӷ?|q!!VD#+//?2 6 sR93(qu22jĠA=dFa 轹/nҋs,Ǚ1 {~}:yc/Ag"r-NecpgoN?3LOEZke1xuj~Ь+xc]H./\&\<W V4ڔBȣu q_ [L$nXYo`<-ȡF50w@x*R,~?wWHݗ+s;j8zhRSv&n$:7 sf !z̼/orOsPhp $Â+NT9԰A42|_Uۧ.tJE ^U?d4#n<}xn|n7<{wB߁]0 }"Mǝ|CmxٲϮe+} R~#~"ϛ w R6Y;k/awۑLUĐ;*4Y8U7#A6jKD; §yha.V'kՅ8|9Leta&ݰ# "x:"G qәf p-_߈,۹A`H儳[pwvPEIybAN'Ufҕ>%j|J24[.aCYa#.Ėqf!d?4 Yz|i:at}՚id~^ߎ'֫w[0sDٓsZŋ=#? 
[binary PNG image data truncated -- not representable as text]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/img/wbe_request_states.svg0000664000175000017500000004052700000000000023642 0ustar00zuulzuul00000000000000 [SVG diagram text: "WBE requests states" -- start, WAITING, PENDING, RUNNING, SUCCESS, FAILURE]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/img/worker-engine.svg0000664000175000017500000006111500000000000022502 0ustar00zuulzuul00000000000000 [SVG diagram text (exported 2015-02-04): worker-based engine architecture -- the user supplies a workflow definition to the engine (compilation, scheduler, analyzer, completer, runner, executor) and receives status notification(s); the executor sends task execute/revert requests (and any prior results) via kombu transport through a proxy to worker servers (endpoint(s) + executor threadpool), which return worker capabilities, notification(s) and results]
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/index.rst0000664000175000017500000000316200000000000020270 0ustar00zuulzuul00000000000000================ Using TaskFlow ================ Considerations ============== Things to consider before (and during) development and integration of TaskFlow into your project: * Read over the `paradigm shifts`_ and engage the team in `IRC`_ (or via the `openstack-dev`_ mailing list) if these need more explanation (prefix ``[Oslo][TaskFlow]`` to your email subject to get an even faster response). * Follow (or at least attempt to follow) some of the established `best practices`_ (feel free to add your own suggested best practices). * Keep in touch with the team (see above); we are all friendly and enjoy knowing your use cases and learning how we can help make your lives easier by adding or adjusting functionality in this library. .. _IRC: irc://irc.oftc.net/openstack-oslo .. _best practices: https://wiki.openstack.org/wiki/TaskFlow/Best_practices .. _paradigm shifts: https://wiki.openstack.org/wiki/TaskFlow/Paradigm_shifts .. _openstack-dev: mailto:openstack-dev@lists.openstack.org User Guide ========== .. toctree:: :maxdepth: 2 atoms arguments_and_results inputs_and_outputs patterns engines workers notifications persistence resumption jobs conductors examples Miscellaneous ============= .. toctree:: :maxdepth: 2 exceptions states types utils Bookshelf ========= A useful collection of links, documents, papers, similar projects, frameworks and libraries. .. note:: Please feel free to submit your own additions and/or changes. .. toctree:: :maxdepth: 1 shelf Release notes ============= ..
toctree:: :maxdepth: 2 history ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/inputs_and_outputs.rst0000664000175000017500000001310000000000000023121 0ustar00zuulzuul00000000000000================== Inputs and outputs ================== In TaskFlow there are multiple ways to provide inputs for your tasks and flows and get information from them. This document describes one of them, that involves task arguments and results. There are also :doc:`notifications `, which allow you to get notified when a task or flow changes state. You may also opt to use the :doc:`persistence ` layer itself directly. ----------------------- Flow inputs and outputs ----------------------- Tasks accept inputs via task arguments and provide outputs via task results (see :doc:`arguments and results ` for more details). This is the standard and recommended way to pass data from one task to another. Of course not every task argument needs to be provided to some other task of a flow, and not every task result should be consumed by every task. If some value is required by one or more tasks of a flow, but it is not provided by any task, it is considered to be flow input, and **must** be put into the storage before the flow is run. A set of names required by a flow can be retrieved via that flow's ``requires`` property. These names can be used to determine what names may be applicable for placing in storage ahead of time and which names are not applicable. All values provided by tasks of the flow are considered to be flow outputs; the set of names of such values is available via the ``provides`` property of the flow. .. testsetup:: from taskflow import task from taskflow.patterns import linear_flow from taskflow import engines from pprint import pprint For example: .. doctest:: >>> class MyTask(task.Task): ... def execute(self, **kwargs): ... return 1, 2 ... >>> flow = linear_flow.Flow('test').add( ... MyTask(requires='a', provides=('b', 'c')), ... MyTask(requires='b', provides='d') ... ) >>> flow.requires frozenset(['a']) >>> sorted(flow.provides) ['b', 'c', 'd'] .. make vim syntax highlighter happy** As you can see, this flow does not require b, as it is provided by the fist task. .. note:: There is no difference between processing of :py:class:`Task ` and :py:class:`~taskflow.retry.Retry` inputs and outputs. ------------------ Engine and storage ------------------ The storage layer is how an engine persists flow and task details (for more in-depth details see :doc:`persistence `). Inputs ------ As mentioned above, if some value is required by one or more tasks of a flow, but is not provided by any task, it is considered to be flow input, and **must** be put into the storage before the flow is run. On failure to do so :py:class:`~taskflow.exceptions.MissingDependencies` is raised by the engine prior to running: .. doctest:: >>> class CatTalk(task.Task): ... def execute(self, meow): ... print meow ... return "cat" ... >>> class DogTalk(task.Task): ... def execute(self, woof): ... print woof ... return "dog" ... >>> flo = linear_flow.Flow("cat-dog") >>> flo.add(CatTalk(), DogTalk(provides="dog")) >>> engines.run(flo) Traceback (most recent call last): ... 
taskflow.exceptions.MissingDependencies: 'linear_flow.Flow: cat-dog(len=2)' requires ['meow', 'woof'] but no other entity produces said requirements MissingDependencies: 'execute' method on '__main__.DogTalk==1.0' requires ['woof'] but no other entity produces said requirements MissingDependencies: 'execute' method on '__main__.CatTalk==1.0' requires ['meow'] but no other entity produces said requirements The recommended way to provide flow inputs is to use the ``store`` parameter of the engine helpers (:py:func:`~taskflow.engines.helpers.run` or :py:func:`~taskflow.engines.helpers.load`): .. doctest:: >>> class CatTalk(task.Task): ... def execute(self, meow): ... print meow ... return "cat" ... >>> class DogTalk(task.Task): ... def execute(self, woof): ... print woof ... return "dog" ... >>> flo = linear_flow.Flow("cat-dog") >>> flo.add(CatTalk(), DogTalk(provides="dog")) >>> result = engines.run(flo, store={'meow': 'meow', 'woof': 'woof'}) meow woof >>> pprint(result) {'dog': 'dog', 'meow': 'meow', 'woof': 'woof'} You can also directly interact with the engine storage layer to add additional values, note that if this route is used you can't use the helper method :py:func:`~taskflow.engines.helpers.run`. Instead, you must activate the engine's run method directly :py:func:`~taskflow.engines.base.EngineBase.run`: .. doctest:: >>> flo = linear_flow.Flow("cat-dog") >>> flo.add(CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow'}) >>> eng.storage.inject({"woof": "bark"}) >>> eng.run() meow bark Outputs ------- As you can see from examples above, the run method returns all flow outputs in a ``dict``. This same data can be fetched via :py:meth:`~taskflow.storage.Storage.fetch_all` method of the engines storage object. You can also get single results using the engines storage objects :py:meth:`~taskflow.storage.Storage.fetch` method. For example: .. doctest:: >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) >>> eng.run() meow woof >>> pprint(eng.storage.fetch_all()) {'dog': 'dog', 'meow': 'meow', 'woof': 'woof'} >>> print(eng.storage.fetch("dog")) dog ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/jobs.rst0000664000175000017500000003254100000000000020121 0ustar00zuulzuul00000000000000---- Jobs ---- Overview ======== Jobs and jobboards are a **novel** concept that TaskFlow provides to allow for automatic ownership transfer of workflows between capable owners (those owners usually then use :doc:`engines ` to complete the workflow). They provide the necessary semantics to be able to atomically transfer a job from a producer to a consumer in a reliable and fault tolerant manner. They are modeled off the concept used to post and acquire work in the physical world (typically a job listing in a newspaper or online website serves a similar role). **TLDR:** It's similar to a queue, but consumers lock items on the queue when claiming them, and only remove them from the queue when they're done with the work. If the consumer fails, the lock is *automatically* released and the item is back on the queue for further consumption. .. note:: For more information, please visit the `paradigm shift`_ page for more details. 
Definitions =========== Jobs A :py:class:`job ` consists of a unique identifier, name, and a reference to a :py:class:`logbook ` which contains the details of the work that has been or should be/will be completed to finish the work that has been created for that job. Jobboards A :py:class:`jobboard ` is responsible for managing the posting, ownership, and delivery of jobs. It acts as the location where jobs can be posted, claimed and searched for; typically by iteration or notification. Jobboards may be backed by different *capable* implementations (each with potentially differing configuration) but all jobboards implement the same interface and semantics so that the backend usage is as transparent as possible. This allows deployers or developers of a service that uses TaskFlow to select a jobboard implementation that fits their setup (and their intended usage) best. High level architecture ======================= .. figure:: img/jobboard.png :height: 350px :align: right **Note:** This diagram shows the high-level diagram (and further parts of this documentation also refer to it as well) of the zookeeper implementation (other implementations will typically have different architectures). Features ======== - High availability - Guarantees workflow forward progress by transferring partially complete work or work that has not been started to entities which can either resume the previously partially completed work or begin initial work to ensure that the workflow as a whole progresses (where progressing implies transitioning through the workflow :doc:`patterns ` and :doc:`atoms ` and completing their associated :doc:`states ` transitions). - Atomic transfer and single ownership - Ensures that only one workflow is managed (aka owned) by a single owner at a time in an atomic manner (including when the workflow is transferred to a owner that is resuming some other failed owners work). This avoids contention and ensures a workflow is managed by one and only one entity at a time. - *Note:* this does not mean that the owner needs to run the workflow itself but instead said owner could use an engine that runs the work in a distributed manner to ensure that the workflow progresses. - Separation of workflow construction and execution - Jobs can be created with logbooks that contain a specification of the work to be done by a entity (such as an API server). The job then can be completed by a entity that is watching that jobboard (not necessarily the API server itself). This creates a disconnection between work formation and work completion that is useful for scaling out horizontally. - Asynchronous completion - When for example a API server posts a job for completion to a jobboard that API server can return a *tracking* identifier to the user calling the API service. This *tracking* identifier can be used by the user to poll for status (similar in concept to a shipping *tracking* identifier created by fedex or UPS). Usage ===== All jobboards are mere classes that implement same interface, and of course it is possible to import them and create instances of them just like with any other class in Python. But the easier (and recommended) way for creating jobboards is by using the :py:meth:`fetch() ` function which uses entrypoints (internally using `stevedore`_) to fetch and configure your backend. Using this function the typical creation of a jobboard (and an example posting of a job) might look like: .. 
code-block:: python from taskflow.persistence import backends as persistence_backends from taskflow.jobs import backends as job_backends ... persistence = persistence_backends.fetch({ "connection': "mysql", "user": ..., "password": ..., }) book = make_and_save_logbook(persistence) board = job_backends.fetch('my-board', { "board": "zookeeper", }, persistence=persistence) job = board.post("my-first-job", book) ... Consumption of jobs is similarly achieved by creating a jobboard and using the iteration functionality to find and claim jobs (and eventually consume them). The typical usage of a jobboard for consumption (and work completion) might look like: .. code-block:: python import time from taskflow import exceptions as exc from taskflow.persistence import backends as persistence_backends from taskflow.jobs import backends as job_backends ... my_name = 'worker-1' coffee_break_time = 60 persistence = persistence_backends.fetch({ "connection': "mysql", "user": ..., "password": ..., }) board = job_backends.fetch('my-board', { "board": "zookeeper", }, persistence=persistence) while True: my_job = None for job in board.iterjobs(only_unclaimed=True): try: board.claim(job, my_name) except exc.UnclaimableJob: pass else: my_job = job break if my_job is not None: try: perform_job(my_job) except Exception: LOG.exception("I failed performing job: %s", my_job) board.abandon(my_job, my_name) else: # I finished it, now cleanup. board.consume(my_job) persistence.get_connection().destroy_logbook(my_job.book.uuid) time.sleep(coffee_break_time) ... There are a few ways to provide arguments to the flow. The first option is to add a ``store`` to the flowdetail object in the :py:class:`logbook `. You can also provide a ``store`` in the :py:class:`job ` itself when posting it to the job board. If both ``store`` values are found, they will be combined, with the :py:class:`job ` ``store`` overriding the :py:class:`logbook ` ``store``. .. code-block:: python from oslo_utils import uuidutils from taskflow import engines from taskflow.persistence import backends as persistence_backends from taskflow.persistence import models from taskflow.jobs import backends as job_backends ... persistence = persistence_backends.fetch({ "connection': "mysql", "user": ..., "password": ..., }) board = job_backends.fetch('my-board', { "board": "zookeeper", }, persistence=persistence) book = models.LogBook('my-book', uuidutils.generate_uuid()) flow_detail = models.FlowDetail('my-job', uuidutils.generate_uuid()) book.add(flow_detail) connection = persistence.get_connection() connection.save_logbook(book) flow_detail.meta['store'] = {'a': 1, 'c': 3} job_details = { "flow_uuid": flow_detail.uuid, "store": {'a': 2, 'b': 1} } engines.save_factory_details(flow_detail, flow_factory, factory_args=[], factory_kwargs={}, backend=persistence) jobboard = get_jobboard(zk_client) jobboard.connect() job = jobboard.post('my-job', book=book, details=job_details) # the flow global parameters are now the combined store values # {'a': 2, 'b': 1', 'c': 3} ... Types ===== Zookeeper --------- **Board type**: ``'zookeeper'`` Uses `zookeeper`_ to provide the jobboard capabilities and semantics by using a zookeeper directory, ephemeral, non-ephemeral nodes and watches. 
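For example, a board that shares one `kazoo`_ client (and reuses the
``persistence`` backend created in the prior examples) might be fetched as
follows; this is only a sketch -- the ``client`` and ``persistence`` keyword
arguments and the ``path`` configuration option are described below, and the
zookeeper hosts shown are placeholders:

.. code-block:: python

    from kazoo import client as kazoo_client

    from taskflow.jobs import backends as job_backends

    # A single kazoo client can be shared by several jobboards, which
    # avoids opening many connections to the same zookeeper servers.
    zk_client = kazoo_client.KazooClient(hosts="zk1:2181,zk2:2181")
    zk_client.start()

    board = job_backends.fetch('my-board', {
        "board": "zookeeper",
        # Keep this deployment's job znodes under their own root path.
        "path": "/taskflow/my-service/jobs",
    }, client=zk_client, persistence=persistence)

    # Boards must be connected before jobs can be posted or iterated.
    board.connect()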
Additional *kwarg* parameters: * ``client``: a class that provides ``kazoo.client.KazooClient``-like interface; it will be used for zookeeper interactions, sharing clients between jobboard instances will likely provide better scalability and can help avoid creating to many open connections to a set of zookeeper servers. * ``persistence``: a class that provides a :doc:`persistence ` backend interface; it will be used for loading jobs logbooks for usage at runtime or for usage before a job is claimed for introspection. Additional *configuration* parameters: * ``path``: the root zookeeper path to store job information (*defaults* to ``/taskflow/jobs``) * ``hosts``: the list of zookeeper hosts to connect to (*defaults* to ``localhost:2181``); only used if a client is not provided. * ``timeout``: the timeout used when performing operations with zookeeper; only used if a client is not provided. * ``handler``: a class that provides ``kazoo.handlers``-like interface; it will be used internally by `kazoo`_ to perform asynchronous operations, useful when your program uses eventlet and you want to instruct kazoo to use an eventlet compatible handler. .. note:: See :py:class:`~taskflow.jobs.backends.impl_zookeeper.ZookeeperJobBoard` for implementation details. Redis ----- **Board type**: ``'redis'`` Uses `redis`_ to provide the jobboard capabilities and semantics by using a redis hash data structure and individual job ownership keys (that can optionally expire after a given amount of time). .. note:: See :py:class:`~taskflow.jobs.backends.impl_redis.RedisJobBoard` for implementation details. Considerations ============== Some usage considerations should be used when using a jobboard to make sure it's used in a safe and reliable manner. Eventually we hope to make these non-issues but for now they are worth mentioning. Dual-engine jobs ---------------- **What:** Since atoms and engines are not currently `preemptable`_ we can not force an engine (or the threads/remote workers... it is using to run) to stop working on an atom (it is general bad behavior to force code to stop without its consent anyway) if it has already started working on an atom (short of doing a ``kill -9`` on the running interpreter). This could cause problems since the points an engine can notice that it no longer owns a claim is at any :doc:`state ` change that occurs (transitioning to a new atom or recording a result for example), where upon noticing the claim has been lost the engine can immediately stop doing further work. The effect that this causes is that when a claim is lost another engine can immediately attempt to acquire the claim that was previously lost and it *could* begin working on the unfinished tasks that the later engine may also still be executing (since that engine is not yet aware that it has *lost* the claim). **TLDR:** not `preemptable`_, possible to become aware of losing a claim after the fact (at the next state change), another engine could have acquired the claim by then, therefore both would be *working* on a job. **Alleviate by:** #. Ensure your atoms are `idempotent`_, this will cause an engine that may be executing the same atom to be able to continue executing without causing any conflicts/problems (idempotency guarantees this). #. On claiming jobs that have been claimed previously enforce a policy that happens before the jobs workflow begins to execute (possibly prior to an engine beginning the jobs work) that ensures that any prior work has been rolled back before continuing rolling forward. 
For example: * Rolling back the last atom/set of atoms that finished. * Rolling back the last state change that occurred. #. Delay claiming partially completed work by adding a wait period (to allow the previous engine to coalesce) before working on a partially completed job (combine this with the prior suggestions and *most* dual-engine issues should be avoided). .. _idempotent: https://en.wikipedia.org/wiki/Idempotence .. _preemptable: https://en.wikipedia.org/wiki/Preemption_%28computing%29 Interfaces ========== .. automodule:: taskflow.jobs.base .. automodule:: taskflow.jobs.backends Implementations =============== Zookeeper --------- .. automodule:: taskflow.jobs.backends.impl_zookeeper Redis ----- .. automodule:: taskflow.jobs.backends.impl_redis Hierarchy ========= .. inheritance-diagram:: taskflow.jobs.base taskflow.jobs.backends.impl_redis taskflow.jobs.backends.impl_zookeeper :parts: 1 .. _paradigm shift: https://wiki.openstack.org/wiki/TaskFlow/Paradigm_shifts#Workflow_ownership_transfer .. _zookeeper: http://zookeeper.apache.org/ .. _kazoo: https://kazoo.readthedocs.io/en/latest/ .. _stevedore: https://docs.openstack.org/stevedore/latest .. _redis: https://redis.io/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/notifications.rst0000664000175000017500000001434200000000000022034 0ustar00zuulzuul00000000000000--------------------------- Notifications and listeners --------------------------- .. testsetup:: from taskflow import task from taskflow.patterns import linear_flow from taskflow import engines from taskflow.types import notifier ANY = notifier.Notifier.ANY Overview ======== Engines provide a way to receive notification on task and flow state transitions (see :doc:`states `), which is useful for monitoring, logging, metrics, debugging and plenty of other tasks. To receive these notifications you should register a callback with an instance of the :py:class:`~taskflow.types.notifier.Notifier` class that is attached to :py:class:`~taskflow.engines.base.Engine` attributes ``atom_notifier`` and ``notifier``. TaskFlow also comes with a set of predefined :ref:`listeners `, and provides means to write your own listeners, which can be more convenient than using raw callbacks. Receiving notifications with callbacks ====================================== Flow notifications ------------------ To receive notification on flow state changes use the :py:class:`~taskflow.types.notifier.Notifier` instance available as the ``notifier`` property of an engine. A basic example is: .. doctest:: >>> class CatTalk(task.Task): ... def execute(self, meow): ... print(meow) ... return "cat" ... >>> class DogTalk(task.Task): ... def execute(self, woof): ... print(woof) ... return 'dog' ... >>> def flow_transition(state, details): ... print("Flow '%s' transition to state %s" % (details['flow_name'], state)) ... >>> >>> flo = linear_flow.Flow("cat-dog").add( ... CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) >>> eng.notifier.register(ANY, flow_transition) >>> eng.run() Flow 'cat-dog' transition to state RUNNING meow woof Flow 'cat-dog' transition to state SUCCESS Task notifications ------------------ To receive notification on task state changes use the :py:class:`~taskflow.types.notifier.Notifier` instance available as the ``atom_notifier`` property of an engine. A basic example is: .. doctest:: >>> class CatTalk(task.Task): ... def execute(self, meow): ... 
print(meow) ... return "cat" ... >>> class DogTalk(task.Task): ... def execute(self, woof): ... print(woof) ... return 'dog' ... >>> def task_transition(state, details): ... print("Task '%s' transition to state %s" % (details['task_name'], state)) ... >>> >>> flo = linear_flow.Flow("cat-dog") >>> flo.add(CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) >>> eng.atom_notifier.register(ANY, task_transition) >>> eng.run() Task 'CatTalk' transition to state RUNNING meow Task 'CatTalk' transition to state SUCCESS Task 'DogTalk' transition to state RUNNING woof Task 'DogTalk' transition to state SUCCESS .. _listeners: Listeners ========= TaskFlow comes with a set of predefined listeners -- helper classes that can be used to do various actions on flow and/or tasks transitions. You can also create your own listeners easily, which may be more convenient than using raw callbacks for some use cases. For example, this is how you can use :py:class:`~taskflow.listeners.printing.PrintingListener`: .. doctest:: >>> from taskflow.listeners import printing >>> class CatTalk(task.Task): ... def execute(self, meow): ... print(meow) ... return "cat" ... >>> class DogTalk(task.Task): ... def execute(self, woof): ... print(woof) ... return 'dog' ... >>> >>> flo = linear_flow.Flow("cat-dog").add( ... CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) >>> with printing.PrintingListener(eng): ... eng.run() ... has moved flow 'cat-dog' (...) into state 'RUNNING' from state 'PENDING' has moved task 'CatTalk' (...) into state 'RUNNING' from state 'PENDING' meow has moved task 'CatTalk' (...) into state 'SUCCESS' from state 'RUNNING' with result 'cat' (failure=False) has moved task 'DogTalk' (...) into state 'RUNNING' from state 'PENDING' woof has moved task 'DogTalk' (...) into state 'SUCCESS' from state 'RUNNING' with result 'dog' (failure=False) has moved flow 'cat-dog' (...) into state 'SUCCESS' from state 'RUNNING' Interfaces ========== .. automodule:: taskflow.listeners.base Implementations =============== Printing and logging listeners ------------------------------ .. autoclass:: taskflow.listeners.logging.LoggingListener .. autoclass:: taskflow.listeners.logging.DynamicLoggingListener .. autoclass:: taskflow.listeners.printing.PrintingListener Timing listeners ---------------- .. autoclass:: taskflow.listeners.timing.DurationListener .. autoclass:: taskflow.listeners.timing.PrintingDurationListener .. autoclass:: taskflow.listeners.timing.EventTimeListener Claim listener -------------- .. autoclass:: taskflow.listeners.claims.CheckingClaimListener Capturing listener ------------------ .. autoclass:: taskflow.listeners.capturing.CaptureListener Formatters ---------- .. automodule:: taskflow.formatters Hierarchy ========= .. 
inheritance-diagram:: taskflow.listeners.base.DumpingListener taskflow.listeners.base.Listener taskflow.listeners.capturing.CaptureListener taskflow.listeners.claims.CheckingClaimListener taskflow.listeners.logging.DynamicLoggingListener taskflow.listeners.logging.LoggingListener taskflow.listeners.printing.PrintingListener taskflow.listeners.timing.PrintingDurationListener taskflow.listeners.timing.EventTimeListener taskflow.listeners.timing.DurationListener :parts: 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/patterns.rst0000664000175000017500000000076400000000000021026 0ustar00zuulzuul00000000000000-------- Patterns -------- .. automodule:: taskflow.flow Linear flow ~~~~~~~~~~~ .. automodule:: taskflow.patterns.linear_flow Unordered flow ~~~~~~~~~~~~~~ .. automodule:: taskflow.patterns.unordered_flow Graph flow ~~~~~~~~~~ .. automodule:: taskflow.patterns.graph_flow .. automodule:: taskflow.deciders Hierarchy ~~~~~~~~~ .. inheritance-diagram:: taskflow.flow taskflow.patterns.linear_flow taskflow.patterns.unordered_flow taskflow.patterns.graph_flow :parts: 2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/persistence.rst0000664000175000017500000002753200000000000021514 0ustar00zuulzuul00000000000000=========== Persistence =========== Overview ======== In order to be able to receive inputs and create outputs from atoms (or other engine processes) in a fault-tolerant way, there is a need to be able to place what atoms output in some kind of location where it can be re-used by other atoms (or used for other purposes). To accommodate this type of usage TaskFlow provides an abstraction (provided by pluggable `stevedore`_ backends) that is similar in concept to a running programs *memory*. This abstraction serves the following *major* purposes: * Tracking of what was done (introspection). * Saving *memory* which allows for restarting from the last saved state which is a critical feature to restart and resume workflows (checkpointing). * Associating additional metadata with atoms while running (without having those atoms need to save this data themselves). This makes it possible to add-on new metadata in the future without having to change the atoms themselves. For example the following can be saved: * Timing information (how long a task took to run). * User information (who the task ran as). * When a atom/workflow was ran (and why). * Saving historical data (failures, successes, intermediary results...) to allow for retry atoms to be able to decide if they should should continue vs. stop. * *Something you create...* .. _stevedore: https://docs.openstack.org/stevedore/latest/ How it is used ============== On :doc:`engine ` construction typically a backend (it can be optional) will be provided which satisfies the :py:class:`~taskflow.persistence.base.Backend` abstraction. Along with providing a backend object a :py:class:`~taskflow.persistence.models.FlowDetail` object will also be created and provided (this object will contain the details about the flow to be ran) to the engine constructor (or associated :py:meth:`load() ` helper functions). 
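A condensed sketch of that setup (using the ``'memory'`` connection type and
a trivial task purely for illustration; the logbook and flow detail objects
used here are described in the following paragraphs) might look like:

.. code-block:: python

    import contextlib

    from oslo_utils import uuidutils

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow
    from taskflow.persistence import backends
    from taskflow.persistence import models


    class NoOp(task.Task):
        def execute(self):
            return "done"


    backend = backends.fetch({"connection": "memory://"})

    # Create a logbook and a flow detail that will record this run and
    # save them into the backend before the engine starts.
    book = models.LogBook("my-book", uuidutils.generate_uuid())
    flow_detail = models.FlowDetail("my-flow", uuidutils.generate_uuid())
    book.add(flow_detail)
    with contextlib.closing(backend.get_connection()) as conn:
        conn.save_logbook(book)

    flow = linear_flow.Flow("my-flow").add(NoOp())

    # The engine will now read and write its state through the backend.
    engine = engines.load(flow, flow_detail=flow_detail,
                          book=book, backend=backend)
    engine.run()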
Typically a :py:class:`~taskflow.persistence.models.FlowDetail` object is created from a :py:class:`~taskflow.persistence.models.LogBook` object (the book object acts as a type of container for :py:class:`~taskflow.persistence.models.FlowDetail` and :py:class:`~taskflow.persistence.models.AtomDetail` objects). **Preparation**: Once an engine starts to run it will create a :py:class:`~taskflow.storage.Storage` object which will act as the engines interface to the underlying backend storage objects (it provides helper functions that are commonly used by the engine, avoiding repeating code when interacting with the provided :py:class:`~taskflow.persistence.models.FlowDetail` and :py:class:`~taskflow.persistence.base.Backend` objects). As an engine initializes it will extract (or create) :py:class:`~taskflow.persistence.models.AtomDetail` objects for each atom in the workflow the engine will be executing. **Execution:** When an engine beings to execute (see :doc:`engine ` for more of the details about how an engine goes about this process) it will examine any previously existing :py:class:`~taskflow.persistence.models.AtomDetail` objects to see if they can be used for resuming; see :doc:`resumption ` for more details on this subject. For atoms which have not finished (or did not finish correctly from a previous run) they will begin executing only after any dependent inputs are ready. This is done by analyzing the execution graph and looking at predecessor :py:class:`~taskflow.persistence.models.AtomDetail` outputs and states (which may have been persisted in a past run). This will result in either using their previous information or by running those predecessors and saving their output to the :py:class:`~taskflow.persistence.models.FlowDetail` and :py:class:`~taskflow.persistence.base.Backend` objects. This execution, analysis and interaction with the storage objects continues (what is described here is a simplification of what really happens; which is quite a bit more complex) until the engine has finished running (at which point the engine will have succeeded or failed in its attempt to run the workflow). **Post-execution:** Typically when an engine is done running the logbook would be discarded (to avoid creating a stockpile of useless data) and the backend storage would be told to delete any contents for a given execution. For certain use-cases though it may be advantageous to retain logbooks and their contents. A few scenarios come to mind: * Post runtime failure analysis and triage (saving what failed and why). * Metrics (saving timing information associated with each atom and using it to perform offline performance analysis, which enables tuning tasks and/or isolating and fixing slow tasks). * Data mining logbooks to find trends (in failures for example). * Saving logbooks for further forensics analysis. * Exporting logbooks to `hdfs`_ (or other no-sql storage) and running some type of map-reduce jobs on them. .. _hdfs: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html .. note:: It should be emphasized that logbook is the authoritative, and, preferably, the **only** (see :doc:`inputs and outputs `) source of run-time state information (breaking this principle makes it hard/impossible to restart or resume in any type of automated fashion). When an atom returns a result, it should be written directly to a logbook. 
When atom or flow state changes in any way, logbook is first to know (see :doc:`notifications ` for how a user may also get notified of those same state changes). The logbook and a backend and associated storage helper class are responsible to store the actual data. These components used together specify the persistence mechanism (how data is saved and where -- memory, database, whatever...) and the persistence policy (when data is saved -- every time it changes or at some particular moments or simply never). Usage ===== To select which persistence backend to use you should use the :py:meth:`fetch() ` function which uses entrypoints (internally using `stevedore`_) to fetch and configure your backend. This makes it simpler than accessing the backend data types directly and provides a common function from which a backend can be fetched. Using this function to fetch a backend might look like: .. code-block:: python from taskflow.persistence import backends ... persistence = backends.fetch(conf={ "connection": "mysql", "user": ..., "password": ..., }) book = make_and_save_logbook(persistence) ... As can be seen from above the ``conf`` parameter acts as a dictionary that is used to fetch and configure your backend. The restrictions on it are the following: * a dictionary (or dictionary like type), holding backend type with key ``'connection'`` and possibly type-specific backend parameters as other keys. Types ===== Memory ------ **Connection**: ``'memory'`` Retains all data in local memory (not persisted to reliable storage). Useful for scenarios where persistence is not required (and also in unit tests). .. note:: See :py:class:`~taskflow.persistence.backends.impl_memory.MemoryBackend` for implementation details. Files ----- **Connection**: ``'dir'`` or ``'file'`` Retains all data in a directory & file based structure on local disk. Will be persisted **locally** in the case of system failure (allowing for resumption from the same local machine only). Useful for cases where a *more* reliable persistence is desired along with the simplicity of files and directories (a concept everyone is familiar with). .. note:: See :py:class:`~taskflow.persistence.backends.impl_dir.DirBackend` for implementation details. SQLAlchemy ---------- **Connection**: ``'mysql'`` or ``'postgres'`` or ``'sqlite'`` Retains all data in a `ACID`_ compliant database using the `sqlalchemy`_ library for schemas, connections, and database interaction functionality. Useful when you need a higher level of durability than offered by the previous solutions. When using these connection types it is possible to resume a engine from a peer machine (this does not apply when using sqlite). 
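Before handing one of these backends to an engine the tables described in
the schema below need to exist; a small sketch of fetching a sqlite backed
backend and creating (or migrating) that schema (the database path shown is
only an example) might look like:

.. code-block:: python

    import contextlib

    from taskflow.persistence import backends

    backend = backends.fetch({
        "connection": "sqlite:////var/lib/myapp/taskflow.db",
    })

    # Create or migrate the logbook/flow detail/atom detail tables
    # before any engine starts using this backend.
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()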
Schema ^^^^^^ *Logbooks* ========== ======== ============= Name Type Primary Key ========== ======== ============= created_at DATETIME False updated_at DATETIME False uuid VARCHAR True name VARCHAR False meta TEXT False ========== ======== ============= *Flow details* =========== ======== ============= Name Type Primary Key =========== ======== ============= created_at DATETIME False updated_at DATETIME False uuid VARCHAR True name VARCHAR False meta TEXT False state VARCHAR False parent_uuid VARCHAR False =========== ======== ============= *Atom details* =========== ======== ============= Name Type Primary Key =========== ======== ============= created_at DATETIME False updated_at DATETIME False uuid VARCHAR True name VARCHAR False meta TEXT False atom_type VARCHAR False state VARCHAR False intention VARCHAR False results TEXT False failure TEXT False version TEXT False parent_uuid VARCHAR False =========== ======== ============= .. _sqlalchemy: https://docs.sqlalchemy.org/en/latest/ .. _ACID: https://en.wikipedia.org/wiki/ACID .. note:: See :py:class:`~taskflow.persistence.backends.impl_sqlalchemy.SQLAlchemyBackend` for implementation details. .. warning:: Currently there is a size limit (not applicable for ``sqlite``) that the ``results`` will contain. This size limit will restrict how many prior failures a retry atom can contain. More information and a future fix will be posted to bug `1416088`_ (for the meantime try to ensure that your retry units history does not grow beyond ~80 prior results). This truncation can also be avoided by providing ``mysql_sql_mode`` as ``traditional`` when selecting your mysql + sqlalchemy based backend (see the `mysql modes`_ documentation for what this implies). .. _1416088: https://bugs.launchpad.net/taskflow/+bug/1416088 .. _mysql modes: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html Zookeeper --------- **Connection**: ``'zookeeper'`` Retains all data in a `zookeeper`_ backend (zookeeper exposes operations on files and directories, similar to the above ``'dir'`` or ``'file'`` connection types). Internally the `kazoo`_ library is used to interact with zookeeper to perform reliable, distributed and atomic operations on the contents of a logbook represented as znodes. Since zookeeper is also distributed it is also able to resume a engine from a peer machine (having similar functionality as the database connection types listed previously). .. note:: See :py:class:`~taskflow.persistence.backends.impl_zookeeper.ZkBackend` for implementation details. .. _zookeeper: http://zookeeper.apache.org .. _kazoo: https://kazoo.readthedocs.io/en/latest/ Interfaces ========== .. automodule:: taskflow.persistence.backends .. automodule:: taskflow.persistence.base .. automodule:: taskflow.persistence.path_based Models ====== .. automodule:: taskflow.persistence.models Implementations =============== Memory ------ .. automodule:: taskflow.persistence.backends.impl_memory Files ----- .. automodule:: taskflow.persistence.backends.impl_dir SQLAlchemy ---------- .. automodule:: taskflow.persistence.backends.impl_sqlalchemy Zookeeper --------- .. automodule:: taskflow.persistence.backends.impl_zookeeper Storage ======= .. automodule:: taskflow.storage Hierarchy ========= .. 
inheritance-diagram:: taskflow.persistence.base taskflow.persistence.backends.impl_dir taskflow.persistence.backends.impl_memory taskflow.persistence.backends.impl_sqlalchemy taskflow.persistence.backends.impl_zookeeper :parts: 2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/resumption.rst0000664000175000017500000001574500000000000021400 0ustar00zuulzuul00000000000000---------- Resumption ---------- Overview ======== **Question**: *How can we persist the flow so that it can be resumed, restarted or rolled-back on engine failure?* **Answer:** Since a flow is a set of :doc:`atoms ` and relations between atoms we need to create a model and corresponding information that allows us to persist the *right* amount of information to preserve, resume, and rollback a flow on software or hardware failure. To allow for resumption TaskFlow must be able to re-create the flow and re-connect the links between atom (and between atoms->atom details and so on) in order to revert those atoms or resume those atoms in the correct ordering. TaskFlow provides a pattern that can help in automating this process (it does **not** prohibit the user from creating their own strategies for doing this). .. _resumption factories: Factories ========= The default provided way is to provide a `factory`_ function which will create (or recreate your workflow). This function can be provided when loading a flow and corresponding engine via the provided :py:meth:`load_from_factory() ` method. This `factory`_ function is expected to be a function (or ``staticmethod``) which is reimportable (aka has a well defined name that can be located by the ``__import__`` function in python, this excludes ``lambda`` style functions and ``instance`` methods). The `factory`_ function name will be saved into the logbook and it will be imported and called to create the workflow objects (or recreate it if resumption happens). This allows for the flow to be recreated if and when that is needed (even on remote machines, as long as the reimportable name can be located). .. _factory: https://en.wikipedia.org/wiki/Factory_%28object-oriented_programming%29 Names ===== When a flow is created it is expected that each atom has a unique name, this name serves a special purpose in the resumption process (as well as serving a useful purpose when running, allowing for atom identification in the :doc:`notification ` process). The reason for having names is that an atom in a flow needs to be somehow matched with (a potentially) existing :py:class:`~taskflow.persistence.models.AtomDetail` during engine resumption & subsequent running. The match should be: * stable if atoms are added or removed * should not change when service is restarted, upgraded... * should be the same across all server instances in HA setups Names provide this although they do have weaknesses: * the names of atoms must be unique in flow * it becomes hard to change the name of atom since a name change causes other side-effects .. note:: Even though these weaknesses names were selected as a *good enough* solution for the above matching requirements (until something better is invented/created that can satisfy those same requirements). 
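Putting the two ideas above together, a re-importable factory that gives each
atom a stable, unique name could look like the following sketch (the module
path and task bodies are purely illustrative, and the ``backend`` and ``book``
objects are assumed to have been created as described in
:doc:`persistence <persistence>`):

.. code-block:: python

    # Assume this lives in an importable module, e.g. my_service/flows.py.

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class Step(task.Task):
        def execute(self):
            return self.name


    def make_flow():
        # Stable unique names let each atom be re-matched with its
        # previously persisted AtomDetail when the flow is resumed.
        return linear_flow.Flow('my-flow').add(
            Step(name='fetch-resources'),
            Step(name='configure-resources'),
        )


    # Only the factory *name* is recorded in the logbook, so it can be
    # re-imported and called again to recreate the flow on resumption.
    engine = engines.load_from_factory(make_flow, backend=backend, book=book)
    engine.run()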
Scenarios ========= When new flow is loaded into engine, there is no persisted data for it yet, so a corresponding :py:class:`~taskflow.persistence.models.FlowDetail` object will be created, as well as a :py:class:`~taskflow.persistence.models.AtomDetail` object for each atom that is contained in it. These will be immediately saved into the persistence backend that is configured. If no persistence backend is configured, then as expected nothing will be saved and the atoms and flow will be ran in a non-persistent manner. **Subsequent run:** When we resume the flow from a persistent backend (for example, if the flow was interrupted and engine destroyed to save resources or if the service was restarted), we need to re-create the flow. For that, we will call the function that was saved on first-time loading that builds the flow for us (aka; the flow factory function described above) and the engine will run. The following scenarios explain some expected structural changes and how they can be accommodated (and what the effect will be when resuming & running). Same atoms ++++++++++ When the factory function mentioned above returns the exact same the flow and atoms (no changes are performed). **Runtime change:** Nothing should be done -- the engine will re-associate atoms with :py:class:`~taskflow.persistence.models.AtomDetail` objects by name and then the engine resumes. Atom was added ++++++++++++++ When the factory function mentioned above alters the flow by adding a new atom in (for example for changing the runtime structure of what was previously ran in the first run). **Runtime change:** By default when the engine resumes it will notice that a corresponding :py:class:`~taskflow.persistence.models.AtomDetail` does not exist and one will be created and associated. Atom was removed ++++++++++++++++ When the factory function mentioned above alters the flow by removing a new atom in (for example for changing the runtime structure of what was previously ran in the first run). **Runtime change:** Nothing should be done -- flow structure is reloaded from factory function, and removed atom is not in it -- so, flow will be ran as if it was not there, and any results it returned if it was completed before will be ignored. Atom code was changed +++++++++++++++++++++ When the factory function mentioned above alters the flow by deciding that a newer version of a previously existing atom should be ran (possibly to perform some kind of upgrade or to fix a bug in a prior atoms code). **Factory change:** The atom name & version will have to be altered. The factory should replace this name where it was being used previously. **Runtime change:** This will fall under the same runtime adjustments that exist when a new atom is added. In the future TaskFlow could make this easier by providing a ``upgrade()`` function that can be used to give users the ability to upgrade atoms before running (manual introspection & modification of a :py:class:`~taskflow.persistence.models.LogBook` can be done before engine loading and running to accomplish this in the meantime). Atom was split in two atoms or merged +++++++++++++++++++++++++++++++++++++ When the factory function mentioned above alters the flow by deciding that a previously existing atom should be split into N atoms or the factory function decides that N atoms should be merged in `). This can also be a state that is entered when some owning entity has manually abandoned (or lost ownership of) a previously claimed job. 
**CLAIMED** - A job that is *actively* owned by some entity; typically that ownership is tied to jobs persistent data via some ephemeral connection so that the job ownership is lost (typically automatically or after some timeout) if that ephemeral connection is lost. **COMPLETE** - The work defined in the job has been finished by its owning entity and the job can no longer be processed (and it *may* be removed at some/any point in the future). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/types.rst0000664000175000017500000000163600000000000020331 0ustar00zuulzuul00000000000000----- Types ----- .. note:: Even though these types **are** made for public consumption and usage should be encouraged/easily possible it should be noted that these may be moved out to new libraries at various points in the future. If you are using these types **without** using the rest of this library it is **strongly** encouraged that you be a vocal proponent of getting these made into *isolated* libraries (as using these types in this manner is not the expected and/or desired usage). Entity ====== .. automodule:: taskflow.types.entity Failure ======= .. automodule:: taskflow.types.failure Graph ===== .. automodule:: taskflow.types.graph Notifier ======== .. automodule:: taskflow.types.notifier :special-members: __call__ Sets ==== .. automodule:: taskflow.types.sets Timing ====== .. automodule:: taskflow.types.timing Tree ==== .. automodule:: taskflow.types.tree ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/utils.rst0000664000175000017500000000176100000000000020324 0ustar00zuulzuul00000000000000--------- Utilities --------- .. warning:: External usage of internal utility functions and modules should be kept to a **minimum** as they may be altered, refactored or moved to other locations **without** notice (and without the typical deprecation cycle). Async ~~~~~ .. automodule:: taskflow.utils.async_utils Banner ~~~~~~ .. automodule:: taskflow.utils.banner Eventlet ~~~~~~~~ .. automodule:: taskflow.utils.eventlet_utils Iterators ~~~~~~~~~ .. automodule:: taskflow.utils.iter_utils Kazoo ~~~~~ .. automodule:: taskflow.utils.kazoo_utils Kombu ~~~~~ .. automodule:: taskflow.utils.kombu_utils Miscellaneous ~~~~~~~~~~~~~ .. automodule:: taskflow.utils.misc Mixins ~~~~~~ .. automodule:: taskflow.utils.mixins Persistence ~~~~~~~~~~~ .. automodule:: taskflow.utils.persistence_utils Redis ~~~~~ .. automodule:: taskflow.utils.redis_utils Schema ~~~~~~ .. automodule:: taskflow.utils.schema_utils Threading ~~~~~~~~~ .. automodule:: taskflow.utils.threading_utils ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/doc/source/user/workers.rst0000664000175000017500000004000400000000000020651 0ustar00zuulzuul00000000000000Overview ======== This is engine that schedules tasks to **workers** -- separate processes dedicated for certain atoms execution, possibly running on other machines, connected via `amqp`_ (or other supported `kombu`_ transports). .. note:: This engine is under active development and is usable and **does** work but is missing some features (please check the `blueprint page`_ for known issues and plans) that will make it more production ready. .. 
_blueprint page: https://blueprints.launchpad.net/taskflow?searchtext=wbe Terminology ----------- Client Code or program or service (or user) that uses this library to define flows and run them via engines. Transport + protocol Mechanism (and `protocol`_ on top of that mechanism) used to pass information between the client and worker (for example amqp as a transport and a json encoded message format as the protocol). Executor Part of the worker-based engine and is used to publish task requests, so these requests can be accepted and processed by remote workers. Worker Workers are started on remote hosts and each has a list of tasks it can perform (on request). Workers accept and process task requests that are published by an executor. Several requests can be processed simultaneously in separate threads (or processes...). For example, an `executor`_ can be passed to the worker and configured to run in as many threads (green or not) as desired. Proxy Executors interact with workers via a proxy. The proxy maintains the underlying transport and publishes messages (and invokes callbacks on message reception). Requirements ------------ * **Transparent:** it should work as ad-hoc replacement for existing *(local)* engines with minimal, if any refactoring (e.g. it should be possible to run the same flows on it without changing client code if everything is set up and configured properly). * **Transport-agnostic:** the means of transport should be abstracted so that we can use `oslo.messaging`_, `gearmand`_, `amqp`_, `zookeeper`_, `marconi`_, `websockets`_ or anything else that allows for passing information between a client and a worker. * **Simple:** it should be simple to write and deploy. * **Non-uniformity:** it should support non-uniform workers which allows different workers to execute different sets of atoms depending on the workers published capabilities. .. _marconi: https://wiki.openstack.org/wiki/Marconi .. _zookeeper: http://zookeeper.org/ .. _gearmand: http://gearman.org/ .. _oslo.messaging: https://wiki.openstack.org/wiki/Oslo/Messaging .. _websockets: http://en.wikipedia.org/wiki/WebSocket .. _amqp: http://www.amqp.org/ .. _executor: https://docs.python.org/dev/library/concurrent.futures.html#executor-objects .. _protocol: http://en.wikipedia.org/wiki/Communications_protocol Design ====== There are two communication sides, the *executor* (and associated engine derivative) and *worker* that communicate using a proxy component. The proxy is designed to accept/publish messages from/into a named exchange. High level architecture ----------------------- .. image:: img/worker-engine.svg :height: 340px :align: right Executor and worker communication --------------------------------- Let's consider how communication between an executor and a worker happens. First of all an engine resolves all atoms dependencies and schedules atoms that can be performed at the moment. This uses the same scheduling and dependency resolution logic that is used for every other engine type. Then the atoms which can be executed immediately (ones that are dependent on outputs of other tasks will be executed when that output is ready) are executed by the worker-based engine executor in the following manner: 1. The executor initiates task execution/reversion using a proxy object. 2. :py:class:`~taskflow.engines.worker_based.proxy.Proxy` publishes task request (format is described below) into a named exchange using a routing key that is used to deliver request to particular workers topic. 
The executor then waits for the task requests to be accepted and confirmed by workers. If the executor doesn't get a task confirmation from workers within the given timeout the task is considered as timed-out and a timeout exception is raised. 3. A worker receives a request message and starts a new thread for processing it. 1. The worker dispatches the request (gets desired endpoint that actually executes the task). 2. If dispatched succeeded then the worker sends a confirmation response to the executor otherwise the worker sends a failed response along with a serialized :py:class:`failure ` object that contains what has failed (and why). 3. The worker executes the task and once it is finished sends the result back to the originating executor (every time a task progress event is triggered it sends progress notification to the executor where it is handled by the engine, dispatching to listeners and so-on). 4. The executor gets the task request confirmation from the worker and the task request state changes from the ``PENDING`` to the ``RUNNING`` state. Once a task request is in the ``RUNNING`` state it can't be timed-out (considering that the task execution process may take an unpredictable amount of time). 5. The executor gets the task execution result from the worker and passes it back to the executor and worker-based engine to finish task processing (this repeats for subsequent tasks). .. note:: :py:class:`~taskflow.types.failure.Failure` objects are not directly json-serializable (they contain references to tracebacks which are not serializable), so they are converted to dicts before sending and converted from dicts after receiving on both executor & worker sides (this translation is lossy since the traceback can't be fully retained, due to its contents containing internal interpreter references and details). Protocol ~~~~~~~~ .. automodule:: taskflow.engines.worker_based.protocol Examples ~~~~~~~~ Request (execute) """"""""""""""""" * **task_name** - full task name to be performed * **task_cls** - full task class name to be performed * **action** - task action to be performed (e.g. execute, revert) * **arguments** - arguments the task action to be called with * **result** - task execution result (result or :py:class:`~taskflow.types.failure.Failure`) *[passed to revert only]* Additionally, the following parameters are added to the request message: * **reply_to** - executor named exchange workers will send responses back to * **correlation_id** - executor request id (since there can be multiple request being processed simultaneously) **Example:** .. code:: json { "action": "execute", "arguments": { "x": 111 }, "task_cls": "taskflow.tests.utils.TaskOneArgOneReturn", "task_name": "taskflow.tests.utils.TaskOneArgOneReturn", "task_version": [ 1, 0 ] } Request (revert) """""""""""""""" When **reverting:** .. 
code:: json { "action": "revert", "arguments": {}, "failures": { "taskflow.tests.utils.TaskWithFailure": { "exc_type_names": [ "RuntimeError", "StandardError", "Exception" ], "exception_str": "Woot!", "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", "version": 1 } }, "result": [ "failure", { "exc_type_names": [ "RuntimeError", "StandardError", "Exception" ], "exception_str": "Woot!", "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", "version": 1 } ], "task_cls": "taskflow.tests.utils.TaskWithFailure", "task_name": "taskflow.tests.utils.TaskWithFailure", "task_version": [ 1, 0 ] } Worker response(s) """""""""""""""""" When **running:** .. code:: json { "data": {}, "state": "RUNNING" } When **progressing:** .. code:: json { "details": { "progress": 0.5 }, "event_type": "update_progress", "state": "EVENT" } When **succeeded:** .. code:: json { "data": { "result": 666 }, "state": "SUCCESS" } When **failed:** .. code:: json { "data": { "result": { "exc_type_names": [ "RuntimeError", "StandardError", "Exception" ], "exception_str": "Woot!", "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", "version": 1 } }, "state": "FAILURE" } Request state transitions ------------------------- .. image:: img/wbe_request_states.svg :width: 520px :align: center :alt: WBE request state transitions **WAITING** - Request placed on queue (or other `kombu`_ message bus/transport) but not *yet* consumed. **PENDING** - Worker accepted request and is pending to run using its executor (threads, processes, or other). **FAILURE** - Worker failed after running request (due to task exception) or no worker moved/started executing (by placing the request into ``RUNNING`` state) with-in specified time span (this defaults to 60 seconds unless overridden). **RUNNING** - Workers executor (using threads, processes...) has started to run requested task (once this state is transitioned to any request timeout no longer becomes applicable; since at this point it is unknown how long a task will run since it can not be determined if a task is just taking a long time or has failed). **SUCCESS** - Worker finished running task without exception. .. note:: During the ``WAITING`` and ``PENDING`` stages the engine keeps track of how long the request has been *alive* for and if a timeout is reached the request will automatically transition to ``FAILURE`` and any further transitions from a worker will be disallowed (for example, if a worker accepts the request in the future and sets the task to ``PENDING`` this transition will be logged and ignored). This timeout can be adjusted and/or removed by setting the engine ``transition_timeout`` option to a higher/lower value or by setting it to ``None`` (to remove the timeout completely). 
In the future this will be improved to be more dynamic by implementing the blueprints associated with `failover`_ and `info/resilence`_. .. _failover: https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-failover .. _info/resilence: https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-info Usage ===== Workers ------- To use the worker based engine a set of workers must first be established on remote machines. These workers must be provided a list of task objects, task names, modules names (or entrypoints that can be examined for valid tasks) they can respond to (this is done so that arbitrary code execution is not possible). For complete parameters and object usage please visit :py:class:`~taskflow.engines.worker_based.worker.Worker`. **Example:** .. code:: python from taskflow.engines.worker_based import worker as w config = { 'url': 'amqp://guest:guest@localhost:5672//', 'exchange': 'test-exchange', 'topic': 'test-tasks', 'tasks': ['tasks:TestTask1', 'tasks:TestTask2'], } worker = w.Worker(**config) worker.run() Engines ------- To use the worker based engine a flow must be constructed (which contains tasks that are visible on remote machines) and the specific worker based engine entrypoint must be selected. Certain configuration options must also be provided so that the transport backend can be configured and initialized correctly. Otherwise the usage should be mostly transparent (and is nearly identical to using any other engine type). For complete parameters and object usage please see :py:class:`~taskflow.engines.worker_based.engine.WorkerBasedActionEngine`. **Example with amqp transport:** .. code:: python flow = lf.Flow('simple-linear').add(...) eng = taskflow.engines.load(flow, engine='worker-based', url='amqp://guest:guest@localhost:5672//', exchange='test-exchange', topics=['topic1', 'topic2']) eng.run() **Example with filesystem transport:** .. code:: python flow = lf.Flow('simple-linear').add(...) eng = taskflow.engines.load(flow, engine='worker-based', exchange='test-exchange', topics=['topic1', 'topic2'], transport='filesystem', transport_options={ 'data_folder_in': '/tmp/in', 'data_folder_out': '/tmp/out', }) eng.run() Additional supported keyword arguments: * ``executor``: a class that provides a :py:class:`~taskflow.engines.worker_based.executor.WorkerTaskExecutor` interface; it will be used for executing, reverting and waiting for remote tasks. Limitations =========== * Atoms inside a flow must receive and accept parameters only from the ways defined in :doc:`persistence `. In other words, the task that is created when a workflow is constructed will not be the same task that is executed on a remote worker (and any internal state not passed via the :doc:`input and output ` mechanism can not be transferred). This means resource objects (database handles, file descriptors, sockets, ...) can **not** be directly sent across to remote workers (instead the configuration that defines how to fetch/create these objects must be instead). * Worker-based engines will in the future be able to run lightweight tasks locally to avoid transport overhead for very simple tasks (currently it will run even lightweight tasks remotely, which may be non-performant). * Fault detection, currently when a worker acknowledges a task the engine will wait for the task result indefinitely (a task may take an indeterminate amount of time to finish). 
In the future there needs to be a way to limit the duration of a remote workers execution (and track their liveness) and possibly spawn the task on a secondary worker if a timeout is reached (aka the first worker has died or has stopped responding). Implementations =============== .. automodule:: taskflow.engines.worker_based.engine Components ---------- .. warning:: External usage of internal engine functions, components and modules should be kept to a **minimum** as they may be altered, refactored or moved to other locations **without** notice (and without the typical deprecation cycle). .. automodule:: taskflow.engines.worker_based.dispatcher .. automodule:: taskflow.engines.worker_based.endpoint .. automodule:: taskflow.engines.worker_based.executor .. automodule:: taskflow.engines.worker_based.proxy .. automodule:: taskflow.engines.worker_based.worker .. automodule:: taskflow.engines.worker_based.types .. _kombu: http://kombu.readthedocs.org/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/pylintrc0000664000175000017500000000143100000000000015170 0ustar00zuulzuul00000000000000[MESSAGES CONTROL] # Disable the message(s) with the given id(s). disable=C0111,I0011,R0201,R0922,W0142,W0511,W0613,W0622,W0703 [BASIC] # Variable names can be 1 to 31 characters long, with lowercase and underscores variable-rgx=[a-z_][a-z0-9_]{0,30}$ # Argument names can be 2 to 31 characters long, with lowercase and underscores argument-rgx=[a-z_][a-z0-9_]{1,30}$ # Method names should be at least 3 characters long # and be lowercased with underscores method-rgx=[a-z_][a-z0-9_]{2,50}$ # Don't require docstrings on tests. no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$ [DESIGN] max-args=10 max-attributes=20 max-branchs=30 max-public-methods=100 max-statements=60 min-public-methods=0 [REPORTS] output-format=parseable include-ids=yes [VARIABLES] additional-builtins=_ ././@PaxHeader0000000000000000000000000000003200000000000011450 xustar000000000000000026 mtime=1644397810.58004 taskflow-4.6.4/releasenotes/0000775000175000017500000000000000000000000016073 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6000407 taskflow-4.6.4/releasenotes/notes/0000775000175000017500000000000000000000000017223 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/notes/.placeholder0000664000175000017500000000000000000000000021474 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/notes/add-sentinel-redis-support-9fd16e2a5dd5c0c9.yaml0000664000175000017500000000033200000000000027652 0ustar00zuulzuul00000000000000--- features: - | Allow to use Sentinel for Redis connections. New variable *sentinel* can be passed to Redis jobboard. It is None by default, Sentinel name should be passed to enable this functionality. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/notes/drop-python-2-7-73d3113c69d724d6.yaml0000664000175000017500000000020700000000000024754 0ustar00zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. The minimum version of Python now supported by taskflow is Python 3.6. 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/notes/fix-endless-loop-on-storage-error-dd4467f0bbc66abf.yaml0000664000175000017500000000036400000000000031141 0ustar00zuulzuul00000000000000--- fixes: - | Limit retries for storage failures on saving flow/task state in the storage. Previously on StorageFailure exception may cause an endless loop during execution of flows throwing errors and retrying to save details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/notes/zookeeper-ssl-support-b9abf24a39096b62.yaml0000664000175000017500000000032400000000000026637 0ustar00zuulzuul00000000000000--- features: - | SSL support for zookeeper backend (kazoo client). Now the following options can be passed to zookeeper config: *keyfile*, *keyfile_password*, *certfile*, *use_ssl*, *verify_certs*.././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6000407 taskflow-4.6.4/releasenotes/source/0000775000175000017500000000000000000000000017373 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6000407 taskflow-4.6.4/releasenotes/source/_static/0000775000175000017500000000000000000000000021021 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/_static/.placeholder0000664000175000017500000000000000000000000023272 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6000407 taskflow-4.6.4/releasenotes/source/_templates/0000775000175000017500000000000000000000000021530 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/_templates/.placeholder0000664000175000017500000000000000000000000024001 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/conf.py0000664000175000017500000002206600000000000020700 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2020 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # taskflow Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. 
If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/taskflow' openstackdocs_auto_name = False openstackdocs_bug_project = 'taskflow' openstackdocs_bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'taskflow Release Notes' copyright = u'2016, taskflow Developers' # Release do not need a version number in the title, they # cover multiple versions. # The full version, including alpha/beta/rc tags. release = '' # The short X.Y version. version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
# html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'taskflowReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'taskflowReleaseNotes.tex', u'taskflow Release Notes Documentation', u'taskflow Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'taskflowreleasenotes', u'taskflow Release Notes Documentation', [u'taskflow Developers'], 1) ] # If true, show URL addresses after external links. 
# man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'taskflowReleaseNotes', u'taskflow Release Notes Documentation', u'taskflow Developers', 'taskflowReleaseNotes', 'An OpenStack library for parsing configuration options from the command' ' line and configuration files.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/index.rst0000664000175000017500000000032400000000000021233 0ustar00zuulzuul00000000000000=========================== taskflow Release Notes =========================== .. toctree:: :maxdepth: 1 unreleased victoria ussuri train stein rocky queens pike ocata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/ocata.rst0000664000175000017500000000023000000000000021207 0ustar00zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/pike.rst0000664000175000017500000000021700000000000021055 0ustar00zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/queens.rst0000664000175000017500000000022300000000000021422 0ustar00zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/rocky.rst0000664000175000017500000000022100000000000021247 0ustar00zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. release-notes:: :branch: stable/rocky ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/stein.rst0000664000175000017500000000022100000000000021242 0ustar00zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. 
release-notes:: :branch: stable/stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/train.rst0000664000175000017500000000017600000000000021246 0ustar00zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/unreleased.rst0000664000175000017500000000014400000000000022253 0ustar00zuulzuul00000000000000========================== Unreleased Release Notes ========================== .. release-notes:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/ussuri.rst0000664000175000017500000000020200000000000021451 0ustar00zuulzuul00000000000000=========================== Ussuri Series Release Notes =========================== .. release-notes:: :branch: stable/ussuri ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/releasenotes/source/victoria.rst0000664000175000017500000000021200000000000021740 0ustar00zuulzuul00000000000000============================= Victoria Series Release Notes ============================= .. release-notes:: :branch: stable/victoria ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/requirements.txt0000664000175000017500000000203600000000000016667 0ustar00zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. # See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here... pbr!=2.1.0,>=2.0.0 # Apache-2.0 # Packages needed for using this library. # Python 2->3 compatibility library. six>=1.10.0 # MIT # For async and/or periodic work futurist>=1.2.0 # Apache-2.0 # For reader/writer + interprocess locks. fasteners>=0.7.0 # Apache-2.0 # Very nice graph library networkx>=2.1.0 # BSD # Used for backend storage engine loading. stevedore>=1.20.0 # Apache-2.0 # Used for structured input validation jsonschema>=3.2.0 # MIT # For the state machine we run with automaton>=1.9.0 # Apache-2.0 # For common utilities oslo.utils>=3.33.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 tenacity>=6.0.0 # Apache-2.0 # For lru caches and such cachetools>=2.0.0 # MIT License # For pydot output tests pydot>=1.2.4 # MIT License ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/run_tests.sh0000775000175000017500000000370400000000000015773 0ustar00zuulzuul00000000000000#!/bin/bash function usage { echo "Usage: $0 [OPTION]..." echo "Run Taskflow's test suite(s)" echo "" echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added." 
echo " -p, --pep8 Just run pep8" echo " -P, --no-pep8 Don't run static code checks" echo " -v, --verbose Increase verbosity of reporting output" echo " -h, --help Print this usage message" echo "" exit } function process_option { case "$1" in -h|--help) usage;; -p|--pep8) let just_pep8=1;; -P|--no-pep8) let no_pep8=1;; -f|--force) let force=1;; -v|--verbose) let verbose=1;; *) pos_args="$pos_args $1" esac } verbose=0 force=0 pos_args="" just_pep8=0 no_pep8=0 tox_args="" tox="" for arg in "$@"; do process_option $arg done py=`which python` if [ -z "$py" ]; then echo "Python is required to use $0" echo "Please install it via your distributions package management system." exit 1 fi py_envs=`python -c 'import sys; print("py%s%s" % (sys.version_info[0:2]))'` py_envs=${PY_ENVS:-$py_envs} function run_tests { local tox_cmd="${tox} ${tox_args} -e $py_envs ${pos_args}" echo "Running tests for environments $py_envs via $tox_cmd" bash -c "$tox_cmd" } function run_flake8 { local tox_cmd="${tox} ${tox_args} -e pep8 ${pos_args}" echo "Running flake8 via $tox_cmd" bash -c "$tox_cmd" } if [ $force -eq 1 ]; then tox_args="$tox_args -r" fi if [ $verbose -eq 1 ]; then tox_args="$tox_args -v" fi tox=`which tox` if [ -z "$tox" ]; then echo "Tox is required to use $0" echo "Please install it via \`pip\` or via your distributions" \ "package management system." echo "Visit http://tox.readthedocs.org/ for additional installation" \ "instructions." exit 1 fi if [ $just_pep8 -eq 1 ]; then run_flake8 exit fi run_tests || exit if [ $no_pep8 -eq 0 ]; then run_flake8 fi ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6600428 taskflow-4.6.4/setup.cfg0000664000175000017500000000542500000000000015231 0ustar00zuulzuul00000000000000[metadata] name = taskflow summary = Taskflow structured state management library. 
description_file = README.rst author = OpenStack author_email = openstack-discuss@lists.openstack.org home_page = https://docs.openstack.org/taskflow/latest/ keywords = reliable,tasks,execution,parallel,dataflow,workflows,distributed python_requires = >=3.6 classifier = Development Status :: 4 - Beta Environment :: OpenStack Intended Audience :: Developers Intended Audience :: Information Technology License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3 :: Only Programming Language :: Python :: Implementation :: CPython Topic :: Software Development :: Libraries Topic :: System :: Distributed Computing [files] packages = taskflow [entry_points] taskflow.jobboards = zookeeper = taskflow.jobs.backends.impl_zookeeper:ZookeeperJobBoard redis = taskflow.jobs.backends.impl_redis:RedisJobBoard taskflow.conductors = blocking = taskflow.conductors.backends.impl_blocking:BlockingConductor nonblocking = taskflow.conductors.backends.impl_nonblocking:NonBlockingConductor taskflow.persistence = dir = taskflow.persistence.backends.impl_dir:DirBackend file = taskflow.persistence.backends.impl_dir:DirBackend memory = taskflow.persistence.backends.impl_memory:MemoryBackend mysql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend postgresql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend sqlite = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend zookeeper = taskflow.persistence.backends.impl_zookeeper:ZkBackend taskflow.engines = default = taskflow.engines.action_engine.engine:SerialActionEngine serial = taskflow.engines.action_engine.engine:SerialActionEngine parallel = taskflow.engines.action_engine.engine:ParallelActionEngine worker-based = taskflow.engines.worker_based.engine:WorkerBasedActionEngine workers = taskflow.engines.worker_based.engine:WorkerBasedActionEngine [extras] zookeeper = kazoo>=2.6.0 # Apache-2.0 zake>=0.1.6 # Apache-2.0 redis = redis>=2.10.0 # MIT workers = kombu>=4.3.0 # BSD eventlet = eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT database = SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT alembic>=0.8.10 # MIT SQLAlchemy-Utils>=0.30.11 # BSD License PyMySQL>=0.7.6 # MIT License psycopg2>=2.8.0 # LGPL/ZPL test = pydotplus>=2.0.2 # MIT License hacking<0.11,>=0.10.0 oslotest>=3.2.0 # Apache-2.0 mock>=2.0.0 # BSD testtools>=2.2.0 # MIT testscenarios>=0.4 # Apache-2.0/BSD stestr>=2.0.0 # Apache-2.0 [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/setup.py0000664000175000017500000000200600000000000015112 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6040409 taskflow-4.6.4/taskflow/0000775000175000017500000000000000000000000015234 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/__init__.py0000664000175000017500000000000000000000000017333 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/atom.py0000664000175000017500000004211200000000000016546 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import collections from collections import abc as cabc import itertools from oslo_utils import reflection import six from six.moves import zip as compat_zip from taskflow.types import sets from taskflow.utils import misc # Helper types tuples... _sequence_types = (list, tuple, cabc.Sequence) _set_types = (set, cabc.Set) # the default list of revert arguments to ignore when deriving # revert argument mapping from the revert method signature _default_revert_args = ('result', 'flow_failures') def _save_as_to_mapping(save_as): """Convert save_as to mapping name => index. Result should follow storage convention for mappings. """ # TODO(harlowja): we should probably document this behavior & convention # outside of code so that it's more easily understandable, since what an # atom returns is pretty crucial for other later operations. if save_as is None: return collections.OrderedDict() if isinstance(save_as, six.string_types): # NOTE(harlowja): this means that your atom will only return one item # instead of a dictionary-like object or a indexable object (like a # list or tuple). return collections.OrderedDict([(save_as, None)]) elif isinstance(save_as, _sequence_types): # NOTE(harlowja): this means that your atom will return a indexable # object, like a list or tuple and the results can be mapped by index # to that tuple/list that is returned for others to use. return collections.OrderedDict((key, num) for num, key in enumerate(save_as)) elif isinstance(save_as, _set_types): # NOTE(harlowja): in the case where a set is given we will not be # able to determine the numeric ordering in a reliable way (since it # may be an unordered set) so the only way for us to easily map the # result of the atom will be via the key itself. 
return collections.OrderedDict((key, key) for key in save_as) else: raise TypeError('Atom provides parameter ' 'should be str, set or tuple/list, not %r' % save_as) def _build_rebind_dict(req_args, rebind_args): """Build a argument remapping/rebinding dictionary. This dictionary allows an atom to declare that it will take a needed requirement bound to a given name with another name instead (mapping the new name onto the required name). """ if rebind_args is None: return collections.OrderedDict() elif isinstance(rebind_args, (list, tuple)): # Attempt to map the rebound argument names position by position to # the required argument names (if they are the same length then # this determines how to remap the required argument names to the # rebound ones). rebind = collections.OrderedDict(compat_zip(req_args, rebind_args)) if len(req_args) < len(rebind_args): # Extra things were rebound, that may be because of *args # or **kwargs (or some other reason); so just keep all of them # using 1:1 rebinding... rebind.update((a, a) for a in rebind_args[len(req_args):]) return rebind elif isinstance(rebind_args, dict): return rebind_args else: raise TypeError("Invalid rebind value '%s' (%s)" % (rebind_args, type(rebind_args))) def _build_arg_mapping(atom_name, reqs, rebind_args, function, do_infer, ignore_list=None): """Builds an input argument mapping for a given function. Given a function, its requirements and a rebind mapping this helper function will build the correct argument mapping for the given function as well as verify that the final argument mapping does not have missing or extra arguments (where applicable). """ # Build a list of required arguments based on function signature. req_args = reflection.get_callable_args(function, required_only=True) all_args = reflection.get_callable_args(function, required_only=False) # Remove arguments that are part of ignore list. if ignore_list: for arg in ignore_list: if arg in req_args: req_args.remove(arg) else: ignore_list = [] # Build the required names. required = collections.OrderedDict() # Add required arguments to required mappings if inference is enabled. if do_infer: required.update((a, a) for a in req_args) # Add additional manually provided requirements to required mappings. if reqs: if isinstance(reqs, six.string_types): required.update({reqs: reqs}) else: required.update((a, a) for a in reqs) # Update required mappings values based on rebinding of arguments names. required.update(_build_rebind_dict(req_args, rebind_args)) # Determine if there are optional arguments that we may or may not take. if do_infer: opt_args = sets.OrderedSet(all_args) opt_args = opt_args - set(itertools.chain(six.iterkeys(required), iter(ignore_list))) optional = collections.OrderedDict((a, a) for a in opt_args) else: optional = collections.OrderedDict() # Check if we are given some extra arguments that we aren't able to accept. if not reflection.accepts_kwargs(function): extra_args = sets.OrderedSet(six.iterkeys(required)) extra_args -= all_args if extra_args: raise ValueError('Extra arguments given to atom %s: %s' % (atom_name, list(extra_args))) # NOTE(imelnikov): don't use set to preserve order in error message missing_args = [arg for arg in req_args if arg not in required] if missing_args: raise ValueError('Missing arguments for atom %s: %s' % (atom_name, missing_args)) return required, optional @six.add_metaclass(abc.ABCMeta) class Atom(object): """An unit of work that causes a flow to progress (in some manner). 
An atom is a named object that operates with input data to perform some action that furthers the overall flows progress. It usually also produces some of its own named output as a result of this process. :param name: Meaningful name for this atom, should be something that is distinguishable and understandable for notification, debugging, storing and any other similar purposes. :param provides: A set, string or list of items that this will be providing (or could provide) to others, used to correlate and associate the thing/s this atom produces, if it produces anything at all. :param inject: An *immutable* input_name => value dictionary which specifies any initial inputs that should be automatically injected into the atoms scope before the atom execution commences (this allows for providing atom *local* values that do not need to be provided by other atoms/dependents). :param rebind: A dict of key/value pairs used to define argument name conversions for inputs to this atom's ``execute`` method. :param revert_rebind: The same as ``rebind`` but for the ``revert`` method. If unpassed, ``rebind`` will be used instead. :param requires: A set or list of required inputs for this atom's ``execute`` method. :param revert_requires: A set or list of required inputs for this atom's ``revert`` method. If unpassed, ``requires`` will be used. :ivar version: An *immutable* version that associates version information with this atom. It can be useful in resuming older versions of atoms. Standard major, minor versioning concepts should apply. :ivar save_as: An *immutable* output ``resource`` name :py:class:`.OrderedDict` this atom produces that other atoms may depend on this atom providing. The format is output index (or key when a dictionary is returned from the execute method) to stored argument name. :ivar rebind: An *immutable* input ``resource`` :py:class:`.OrderedDict` that can be used to alter the inputs given to this atom. It is typically used for mapping a prior atoms output into the names that this atom expects (in a way this is like remapping a namespace of another atom into the namespace of this atom). :ivar revert_rebind: The same as ``rebind`` but for the revert method. This should only differ from ``rebind`` if the ``revert`` method has a different signature from ``execute`` or a different ``revert_rebind`` value was received. :ivar inject: See parameter ``inject``. :ivar Atom.name: See parameter ``name``. :ivar optional: A :py:class:`~taskflow.types.sets.OrderedSet` of inputs that are optional for this atom to ``execute``. :ivar revert_optional: The ``revert`` version of ``optional``. :ivar provides: A :py:class:`~taskflow.types.sets.OrderedSet` of outputs this atom produces. """ priority = 0 """A numeric priority that instances of this class will have when running, used when there are multiple *parallel* candidates to execute and/or revert. During this situation the candidate list will be stably sorted based on this priority attribute which will result in atoms with higher priorities executing (or reverting) before atoms with lower priorities (higher being defined as a number bigger, or greater tha an atom with a lower priority number). By default all atoms have the same priority (zero). For example when the following is combined into a graph (where each node in the denoted graph is some task):: a -> b b -> c b -> e b -> f When ``b`` finishes there will then be three candidates that can run ``(c, e, f)`` and they may run in any order. 
What this priority does is sort those three by their priority before submitting them to be worked on (so that instead of say a random run order they will now be ran by there sorted order). This is also true when reverting (in that the sort order of the potential nodes will be used to determine the submission order). """ default_provides = None def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None, inject=None, ignore_list=None, revert_rebind=None, revert_requires=None): if provides is None: provides = self.default_provides self.name = name self.version = (1, 0) self.inject = inject self.save_as = _save_as_to_mapping(provides) self.provides = sets.OrderedSet(self.save_as) if ignore_list is None: ignore_list = [] self.rebind, exec_requires, self.optional = self._build_arg_mapping( self.execute, requires=requires, rebind=rebind, auto_extract=auto_extract, ignore_list=ignore_list ) revert_ignore = ignore_list + list(_default_revert_args) revert_mapping = self._build_arg_mapping( self.revert, requires=revert_requires or requires, rebind=revert_rebind or rebind, auto_extract=auto_extract, ignore_list=revert_ignore ) (self.revert_rebind, addl_requires, self.revert_optional) = revert_mapping # TODO(bnemec): This should be documented as an ivar, but can't be due # to https://github.com/sphinx-doc/sphinx/issues/2549 #: A :py:class:`~taskflow.types.sets.OrderedSet` of inputs this atom #: requires to function. self.requires = exec_requires.union(addl_requires) def _build_arg_mapping(self, executor, requires=None, rebind=None, auto_extract=True, ignore_list=None): required, optional = _build_arg_mapping(self.name, requires, rebind, executor, auto_extract, ignore_list=ignore_list) # Form the real rebind mapping, if a key name is the same as the # key value, then well there is no rebinding happening, otherwise # there will be. rebind = collections.OrderedDict() for (arg_name, bound_name) in itertools.chain(six.iteritems(required), six.iteritems(optional)): rebind.setdefault(arg_name, bound_name) requires = sets.OrderedSet(six.itervalues(required)) optional = sets.OrderedSet(six.itervalues(optional)) if self.inject: inject_keys = frozenset(six.iterkeys(self.inject)) requires -= inject_keys optional -= inject_keys return rebind, requires, optional def pre_execute(self): """Code to be run prior to executing the atom. A common pattern for initializing the state of the system prior to running atoms is to define some code in a base class that all your atoms inherit from. In that class, you can define a ``pre_execute`` method and it will always be invoked just prior to your atoms running. """ @abc.abstractmethod def execute(self, *args, **kwargs): """Activate a given atom which will perform some operation and return. This method can be used to perform an action on a given set of input requirements (passed in via ``*args`` and ``**kwargs``) to accomplish some type of operation. This operation may provide some named outputs/results as a result of it executing for later reverting (or for other atoms to depend on). NOTE(harlowja): the result (if any) that is returned should be persistable so that it can be passed back into this atom if reverting is triggered (especially in the case where reverting happens in a different python process or on a remote machine) and so that the result can be transmitted to other atoms (which may be local or remote). :param args: positional arguments that atom requires to execute. :param kwargs: any keyword arguments that atom requires to execute. 
""" def post_execute(self): """Code to be run after executing the atom. A common pattern for cleaning up global state of the system after the execution of atoms is to define some code in a base class that all your atoms inherit from. In that class, you can define a ``post_execute`` method and it will always be invoked just after your atoms execute, regardless of whether they succeeded or not. This pattern is useful if you have global shared database sessions that need to be cleaned up, for example. """ def pre_revert(self): """Code to be run prior to reverting the atom. This works the same as :meth:`.pre_execute`, but for the revert phase. """ def revert(self, *args, **kwargs): """Revert this atom. This method should undo any side-effects caused by previous execution of the atom using the result of the :py:meth:`execute` method and information on the failure which triggered reversion of the flow the atom is contained in (if applicable). :param args: positional arguments that the atom required to execute. :param kwargs: any keyword arguments that the atom required to execute; the special key ``'result'`` will contain the :py:meth:`execute` result (if any) and the ``**kwargs`` key ``'flow_failures'`` will contain any failure information. """ def post_revert(self): """Code to be run after reverting the atom. This works the same as :meth:`.post_execute`, but for the revert phase. """ def __str__(self): return "%s==%s" % (self.name, misc.get_version_string(self)) def __repr__(self): return '<%s %s>' % (reflection.get_class_name(self), self) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.608041 taskflow-4.6.4/taskflow/conductors/0000775000175000017500000000000000000000000017417 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/__init__.py0000664000175000017500000000000000000000000021516 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.608041 taskflow-4.6.4/taskflow/conductors/backends/0000775000175000017500000000000000000000000021171 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/backends/__init__.py0000664000175000017500000000314200000000000023302 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import stevedore.driver from taskflow import exceptions as exc # NOTE(harlowja): this is the entrypoint namespace, not the module namespace. CONDUCTOR_NAMESPACE = 'taskflow.conductors' LOG = logging.getLogger(__name__) def fetch(kind, name, jobboard, namespace=CONDUCTOR_NAMESPACE, **kwargs): """Fetch a conductor backend with the given options. 
This fetch method will look for the entrypoint 'kind' in the entrypoint namespace, and then attempt to instantiate that entrypoint using the provided name, jobboard and any board specific kwargs. """ LOG.debug('Looking for %r conductor driver in %r', kind, namespace) try: mgr = stevedore.driver.DriverManager( namespace, kind, invoke_on_load=True, invoke_args=(name, jobboard), invoke_kwds=kwargs) return mgr.driver except RuntimeError as e: raise exc.NotFound("Could not find conductor %s" % (kind), e) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/backends/impl_blocking.py0000664000175000017500000000275400000000000024364 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import futurist from taskflow.conductors.backends import impl_executor class BlockingConductor(impl_executor.ExecutorConductor): """Blocking conductor that processes job(s) in a blocking manner.""" MAX_SIMULTANEOUS_JOBS = 1 """ Default maximum number of jobs that can be in progress at the same time. """ @staticmethod def _executor_factory(): return futurist.SynchronousExecutor() def __init__(self, name, jobboard, persistence=None, engine=None, engine_options=None, wait_timeout=None, log=None, max_simultaneous_jobs=MAX_SIMULTANEOUS_JOBS): super(BlockingConductor, self).__init__( name, jobboard, persistence=persistence, engine=engine, engine_options=engine_options, wait_timeout=wait_timeout, log=log, max_simultaneous_jobs=max_simultaneous_jobs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/backends/impl_executor.py0000664000175000017500000003476400000000000024440 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import contextlib import functools import itertools import threading from oslo_utils import excutils from oslo_utils import timeutils import six from taskflow.conductors import base from taskflow import exceptions as excp from taskflow.listeners import logging as logging_listener from taskflow import logging from taskflow import states from taskflow.types import timing as tt from taskflow.utils import iter_utils from taskflow.utils import misc LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class ExecutorConductor(base.Conductor): """Dispatches jobs from blocking :py:meth:`.run` method to some executor. 
This conductor iterates over jobs in the provided jobboard (waiting for the given timeout if no jobs exist) and attempts to claim them, work on those jobs using an executor (potentially blocking further work from being claimed and consumed) and then consume those work units after completion. This process will repeat until the conductor has been stopped or other critical error occurs. NOTE(harlowja): consumption occurs even if a engine fails to run due to a atom failure. This is only skipped when an execution failure or a storage failure occurs which are *usually* correctable by re-running on a different conductor (storage failures and execution failures may be transient issues that can be worked around by later execution). If a job after completing can not be consumed or abandoned the conductor relies upon the jobboard capabilities to automatically abandon these jobs. """ LOG = None """ Logger that will be used for listening to events (if none then the module level logger will be used instead). """ REFRESH_PERIODICITY = 30 """ Every 30 seconds the jobboard will be resynced (if for some reason a watch or set of watches was not received) using the `ensure_fresh` option to ensure this (for supporting jobboard backends only). """ #: Default timeout used to idle/wait when no jobs have been found. WAIT_TIMEOUT = 0.5 MAX_SIMULTANEOUS_JOBS = -1 """ Default maximum number of jobs that can be in progress at the same time. Negative or zero values imply no limit (do note that if a executor is used that is built on a queue, as most are, that this will imply that the queue will contain a potentially large & unfinished backlog of submitted jobs). This *may* get better someday if https://bugs.python.org/issue22737 is ever implemented and released. """ #: Exceptions that will **not** cause consumption to occur. NO_CONSUME_EXCEPTIONS = tuple([ excp.ExecutionFailure, excp.StorageFailure, ]) _event_factory = threading.Event """This attribute *can* be overridden by subclasses (for example if an eventlet *green* event works better for the conductor user).""" EVENTS_EMITTED = tuple([ 'compilation_start', 'compilation_end', 'preparation_start', 'preparation_end', 'validation_start', 'validation_end', 'running_start', 'running_end', 'job_consumed', 'job_abandoned', ]) """Events will be emitted for each of the events above. The event is emitted to listeners registered with the conductor. """ def __init__(self, name, jobboard, persistence=None, engine=None, engine_options=None, wait_timeout=None, log=None, max_simultaneous_jobs=MAX_SIMULTANEOUS_JOBS): super(ExecutorConductor, self).__init__( name, jobboard, persistence=persistence, engine=engine, engine_options=engine_options) self._wait_timeout = tt.convert_to_timeout( value=wait_timeout, default_value=self.WAIT_TIMEOUT, event_factory=self._event_factory) self._dead = self._event_factory() self._log = misc.pick_first_not_none(log, self.LOG, LOG) self._max_simultaneous_jobs = int( misc.pick_first_not_none(max_simultaneous_jobs, self.MAX_SIMULTANEOUS_JOBS)) self._dispatched = set() def _executor_factory(self): """Creates an executor to be used during dispatching.""" raise excp.NotImplementedError("This method must be implemented but" " it has not been") def stop(self): """Requests the conductor to stop dispatching. This method can be used to request that a conductor stop its consumption & dispatching loop. The method returns immediately regardless of whether the conductor has been stopped. 
""" self._wait_timeout.interrupt() @property def dispatching(self): """Whether or not the dispatching loop is still dispatching.""" return not self._dead.is_set() def _listeners_from_job(self, job, engine): listeners = super(ExecutorConductor, self)._listeners_from_job( job, engine) listeners.append(logging_listener.LoggingListener(engine, log=self._log)) return listeners def _dispatch_job(self, job): engine = self._engine_from_job(job) listeners = self._listeners_from_job(job, engine) with contextlib.ExitStack() as stack: for listener in listeners: stack.enter_context(listener) self._log.debug("Dispatching engine for job '%s'", job) consume = True details = { 'job': job, 'engine': engine, 'conductor': self, } def _run_engine(): has_suspended = False for _state in engine.run_iter(): if not has_suspended and self._wait_timeout.is_stopped(): self._log.info("Conductor stopped, requesting " "suspension of engine running " "job %s", job) engine.suspend() has_suspended = True try: for stage_func, event_name in [(engine.compile, 'compilation'), (engine.prepare, 'preparation'), (engine.validate, 'validation'), (_run_engine, 'running')]: self._notifier.notify("%s_start" % event_name, details) stage_func() self._notifier.notify("%s_end" % event_name, details) except excp.WrappedFailure as e: if all((f.check(*self.NO_CONSUME_EXCEPTIONS) for f in e)): consume = False if self._log.isEnabledFor(logging.WARNING): if consume: self._log.warn( "Job execution failed (consumption being" " skipped): %s [%s failures]", job, len(e)) else: self._log.warn( "Job execution failed (consumption" " proceeding): %s [%s failures]", job, len(e)) # Show the failure/s + traceback (if possible)... for i, f in enumerate(e): self._log.warn("%s. %s", i + 1, f.pformat(traceback=True)) except self.NO_CONSUME_EXCEPTIONS: self._log.warn("Job execution failed (consumption being" " skipped): %s", job, exc_info=True) consume = False except Exception: self._log.warn( "Job execution failed (consumption proceeding): %s", job, exc_info=True) else: if engine.storage.get_flow_state() == states.SUSPENDED: self._log.info("Job execution was suspended: %s", job) consume = False else: self._log.info("Job completed successfully: %s", job) return consume def _try_finish_job(self, job, consume): try: if consume: self._jobboard.consume(job, self._name) self._notifier.notify("job_consumed", { 'job': job, 'conductor': self, 'persistence': self._persistence, }) else: self._jobboard.abandon(job, self._name) self._notifier.notify("job_abandoned", { 'job': job, 'conductor': self, 'persistence': self._persistence, }) except (excp.JobFailure, excp.NotFound): if consume: self._log.warn("Failed job consumption: %s", job, exc_info=True) else: self._log.warn("Failed job abandonment: %s", job, exc_info=True) def _on_job_done(self, job, fut): consume = False try: consume = fut.result() except KeyboardInterrupt: with excutils.save_and_reraise_exception(): self._log.warn("Job dispatching interrupted: %s", job) except Exception: self._log.warn("Job dispatching failed: %s", job, exc_info=True) try: self._try_finish_job(job, consume) finally: self._dispatched.discard(fut) def _can_claim_more_jobs(self, job): if self._wait_timeout.is_stopped(): return False if self._max_simultaneous_jobs <= 0: return True if len(self._dispatched) >= self._max_simultaneous_jobs: return False else: return True def _run_until_dead(self, executor, max_dispatches=None): total_dispatched = 0 if max_dispatches is None: # NOTE(TheSriram): if max_dispatches is not set, # then the conductor will 
run indefinitely, and not # stop after 'n' number of dispatches max_dispatches = -1 dispatch_gen = iter_utils.iter_forever(max_dispatches) is_stopped = self._wait_timeout.is_stopped try: # Don't even do any work in the first place... if max_dispatches == 0: raise StopIteration fresh_period = timeutils.StopWatch( duration=self.REFRESH_PERIODICITY) fresh_period.start() while not is_stopped(): any_dispatched = False if fresh_period.expired(): ensure_fresh = True fresh_period.restart() else: ensure_fresh = False job_it = itertools.takewhile( self._can_claim_more_jobs, self._jobboard.iterjobs(ensure_fresh=ensure_fresh)) for job in job_it: self._log.debug("Trying to claim job: %s", job) try: self._jobboard.claim(job, self._name) except (excp.UnclaimableJob, excp.NotFound): self._log.debug("Job already claimed or" " consumed: %s", job) else: try: fut = executor.submit(self._dispatch_job, job) except RuntimeError: with excutils.save_and_reraise_exception(): self._log.warn("Job dispatch submitting" " failed: %s", job) self._try_finish_job(job, False) else: fut.job = job self._dispatched.add(fut) any_dispatched = True fut.add_done_callback( functools.partial(self._on_job_done, job)) total_dispatched = next(dispatch_gen) if not any_dispatched and not is_stopped(): self._wait_timeout.wait() except StopIteration: # This will be raised from 'dispatch_gen' if it reaches its # max dispatch number (which implies we should do no more work). with excutils.save_and_reraise_exception(): if max_dispatches >= 0 and total_dispatched >= max_dispatches: self._log.info("Maximum dispatch limit of %s reached", max_dispatches) def run(self, max_dispatches=None): self._dead.clear() self._dispatched.clear() try: self._jobboard.register_entity(self.conductor) with self._executor_factory() as executor: self._run_until_dead(executor, max_dispatches=max_dispatches) except StopIteration: pass except KeyboardInterrupt: with excutils.save_and_reraise_exception(): self._log.warn("Job dispatching interrupted") finally: self._dead.set() # Inherit the docs, so we can reference them in our class docstring, # if we don't do this sphinx gets confused... run.__doc__ = base.Conductor.run.__doc__ def wait(self, timeout=None): """Waits for the conductor to gracefully exit. This method waits for the conductor to gracefully exit. An optional timeout can be provided, which will cause the method to return within the specified timeout. If the timeout is reached, the returned value will be ``False``, otherwise it will be ``True``. :param timeout: Maximum number of seconds that the :meth:`wait` method should block for. """ return self._dead.wait(timeout) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/backends/impl_nonblocking.py0000664000175000017500000000572500000000000025100 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
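# NOTE: the ``NonBlockingConductor`` defined below accepts an optional
# ``executor_factory`` callable which (if provided) is invoked with a
# single positional argument (the conductor itself); a rough, purely
# illustrative sketch of supplying one (the jobboard object is assumed
# to already exist) is:
#
#   def make_executor(conductor):
#       return futurist.ThreadPoolExecutor(max_workers=8)
#
#   conductor = NonBlockingConductor('my-conductor', jobboard,
#                                    executor_factory=make_executor)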
import futurist import six from taskflow.conductors.backends import impl_executor from taskflow.utils import threading_utils as tu class NonBlockingConductor(impl_executor.ExecutorConductor): """Non-blocking conductor that processes job(s) using a thread executor. NOTE(harlowja): A custom executor factory can be provided via keyword argument ``executor_factory``, if provided it will be invoked at :py:meth:`~taskflow.conductors.base.Conductor.run` time with one positional argument (this conductor) and it must return a compatible `executor`_ which can be used to submit jobs to. If ``None`` is a provided a thread pool backed executor is selected by default (it will have an equivalent number of workers as this conductors simultaneous job count). .. _executor: https://docs.python.org/dev/library/\ concurrent.futures.html#executor-objects """ MAX_SIMULTANEOUS_JOBS = tu.get_optimal_thread_count() """ Default maximum number of jobs that can be in progress at the same time. """ def _default_executor_factory(self): max_simultaneous_jobs = self._max_simultaneous_jobs if max_simultaneous_jobs <= 0: max_workers = tu.get_optimal_thread_count() else: max_workers = max_simultaneous_jobs return futurist.ThreadPoolExecutor(max_workers=max_workers) def __init__(self, name, jobboard, persistence=None, engine=None, engine_options=None, wait_timeout=None, log=None, max_simultaneous_jobs=MAX_SIMULTANEOUS_JOBS, executor_factory=None): super(NonBlockingConductor, self).__init__( name, jobboard, persistence=persistence, engine=engine, engine_options=engine_options, wait_timeout=wait_timeout, log=log, max_simultaneous_jobs=max_simultaneous_jobs) if executor_factory is None: self._executor_factory = self._default_executor_factory else: if not six.callable(executor_factory): raise ValueError("Provided keyword argument 'executor_factory'" " must be callable") self._executor_factory = executor_factory ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/conductors/base.py0000664000175000017500000001627000000000000020711 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import os import threading import fasteners import six from taskflow import engines from taskflow import exceptions as excp from taskflow.types import entity from taskflow.types import notifier from taskflow.utils import misc @six.add_metaclass(abc.ABCMeta) class Conductor(object): """Base for all conductor implementations. Conductors act as entities which extract jobs from a jobboard, assign there work to some engine (using some desired configuration) and then wait for that work to complete. If the work fails then they abandon the claimed work (or if the process they are running in crashes or dies this abandonment happens automatically) and then another conductor at a later period of time will finish up the prior failed conductors work. 
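    A rough usage sketch follows (the jobboard setup, entrypoint names and
    backend configuration shown are illustrative only and depend entirely
    on the deployment)::

        from taskflow.conductors import backends as conductor_backends
        from taskflow.jobs import backends as job_backends

        jobboard = job_backends.fetch('my-board', {'board': 'zookeeper'},
                                      persistence=persistence)
        conductor = conductor_backends.fetch('blocking', 'my-conductor',
                                             jobboard,
                                             persistence=persistence)
        jobboard.connect()
        try:
            conductor.run()  # Blocks until stop() is called elsewhere.
        finally:
            conductor.close()  # Also closes the contained jobboard.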
""" #: Entity kind used when creating new entity objects ENTITY_KIND = 'conductor' def __init__(self, name, jobboard, persistence=None, engine=None, engine_options=None): self._name = name self._jobboard = jobboard self._engine = engine self._engine_options = misc.safe_copy_dict(engine_options) self._persistence = persistence self._lock = threading.RLock() self._notifier = notifier.Notifier() @misc.cachedproperty def conductor(self): """Entity object that represents this conductor.""" hostname = misc.get_hostname() pid = os.getpid() name = '@'.join([self._name, hostname + ":" + str(pid)]) metadata = { 'hostname': hostname, 'pid': pid, } return entity.Entity(self.ENTITY_KIND, name, metadata) @property def notifier(self): """The conductor actions (or other state changes) notifier. NOTE(harlowja): different conductor implementations may emit different events + event details at different times, so refer to your conductor documentation to know exactly what can and what can not be subscribed to. """ return self._notifier def _flow_detail_from_job(self, job): """Extracts a flow detail from a job (via some manner). The current mechanism to accomplish this is the following choices: * If the job details provide a 'flow_uuid' key attempt to load this key from the jobs book and use that as the flow_detail to run. * If the job details does not have have a 'flow_uuid' key then attempt to examine the size of the book and if it's only one element in the book (aka one flow_detail) then just use that. * Otherwise if there is no 'flow_uuid' defined or there are > 1 flow_details in the book raise an error that corresponds to being unable to locate the correct flow_detail to run. """ book = job.book if book is None: raise excp.NotFound("No book found in job") if job.details and 'flow_uuid' in job.details: flow_uuid = job.details["flow_uuid"] flow_detail = book.find(flow_uuid) if flow_detail is None: raise excp.NotFound("No matching flow detail found in" " jobs book for flow detail" " with uuid %s" % flow_uuid) else: choices = len(book) if choices == 1: flow_detail = list(book)[0] elif choices == 0: raise excp.NotFound("No flow detail(s) found in jobs book") else: raise excp.MultipleChoices("No matching flow detail found (%s" " choices) in jobs book" % choices) return flow_detail def _engine_from_job(self, job): """Extracts an engine from a job (via some manner).""" flow_detail = self._flow_detail_from_job(job) store = {} if flow_detail.meta and 'store' in flow_detail.meta: store.update(flow_detail.meta['store']) if job.details and 'store' in job.details: store.update(job.details["store"]) engine = engines.load_from_detail(flow_detail, store=store, engine=self._engine, backend=self._persistence, **self._engine_options) return engine def _listeners_from_job(self, job, engine): """Returns a list of listeners to be attached to an engine. This method should be overridden in order to attach listeners to engines. It will be called once for each job, and the list returned listeners will be added to the engine for this job. :param job: A job instance that is about to be run in an engine. :param engine: The engine that listeners will be attached to. :returns: a list of (unregistered) listener instances. 
""" # TODO(dkrause): Create a standard way to pass listeners or # listener factories over the jobboard return [] @fasteners.locked def connect(self): """Ensures the jobboard is connected (noop if it is already).""" if not self._jobboard.connected: self._jobboard.connect() @fasteners.locked def close(self): """Closes the contained jobboard, disallowing further use.""" self._jobboard.close() @abc.abstractmethod def run(self, max_dispatches=None): """Continuously claims, runs, and consumes jobs (and repeat). :param max_dispatches: An upper bound on the number of jobs that will be dispatched, if none or negative this implies there is no limit to the number of jobs that will be dispatched, otherwise if positive this run method will return when that amount of jobs has been dispatched (instead of running forever and/or until stopped). """ @abc.abstractmethod def _dispatch_job(self, job): """Dispatches a claimed job for work completion. Accepts a single (already claimed) job and causes it to be run in an engine. Returns a future object that represented the work to be completed sometime in the future. The future should return a single boolean from its result() method. This boolean determines whether the job will be consumed (true) or whether it should be abandoned (false). :param job: A job instance that has already been claimed by the jobboard. """ ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.608041 taskflow-4.6.4/taskflow/contrib/0000775000175000017500000000000000000000000016674 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/contrib/__init__.py0000664000175000017500000000000000000000000020773 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/deciders.py0000664000175000017500000000654300000000000017400 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from taskflow.utils import misc class Depth(misc.StrEnum): """Enumeration of decider(s) *area of influence*.""" ALL = 'ALL' """ **Default** decider depth that affects **all** successor atoms (including ones that are in successor nested flows). """ FLOW = 'FLOW' """ Decider depth that affects **all** successor tasks in the **same** flow (it will **not** affect tasks/retries that are in successor nested flows). .. warning:: While using this kind we are allowed to execute successors of things that have been ignored (for example nested flows and the tasks they contain), this may result in symbol lookup errors during running, user beware. """ NEIGHBORS = 'NEIGHBORS' """ Decider depth that affects only **next** successor tasks (and does not traverse past **one** level of successor tasks). .. 
warning:: While using this kind we are allowed to execute successors of things that have been ignored (for example nested flows and the tasks they contain), this may result in symbol lookup errors during running, user beware. """ ATOM = 'ATOM' """ Decider depth that affects only **targeted** atom (and does **not** traverse into **any** level of successor atoms). .. warning:: While using this kind we are allowed to execute successors of things that have been ignored (for example nested flows and the tasks they contain), this may result in symbol lookup errors during running, user beware. """ @classmethod def translate(cls, desired_depth): """Translates a string into a depth enumeration.""" if isinstance(desired_depth, cls): # Nothing to do in the first place... return desired_depth if not isinstance(desired_depth, six.string_types): raise TypeError("Unexpected desired depth type, string type" " expected, not %s" % type(desired_depth)) try: return cls(desired_depth.upper()) except ValueError: pretty_depths = sorted([a_depth.name for a_depth in cls]) raise ValueError("Unexpected decider depth value, one of" " %s (case-insensitive) is expected and" " not '%s'" % (pretty_depths, desired_depth)) # Depth area of influence order (from greater influence to least). # # Order very much matters here... _ORDERING = tuple([ Depth.ALL, Depth.FLOW, Depth.NEIGHBORS, Depth.ATOM, ]) def pick_widest(depths): """Pick from many depths which has the **widest** area of influence.""" return _ORDERING[min(_ORDERING.index(d) for d in depths)] ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.608041 taskflow-4.6.4/taskflow/engines/0000775000175000017500000000000000000000000016664 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/__init__.py0000664000175000017500000000256500000000000021005 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import eventletutils as _eventletutils # Give a nice warning that if eventlet is being used these modules # are highly recommended to be patched (or otherwise bad things could # happen). _eventletutils.warn_eventlet_not_patched( expected_patched_modules=['time', 'thread']) # Promote helpers to this module namespace (for easy access). 
from taskflow.engines.helpers import flow_from_detail # noqa from taskflow.engines.helpers import load # noqa from taskflow.engines.helpers import load_from_detail # noqa from taskflow.engines.helpers import load_from_factory # noqa from taskflow.engines.helpers import run # noqa from taskflow.engines.helpers import save_factory_details # noqa ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6120412 taskflow-4.6.4/taskflow/engines/action_engine/0000775000175000017500000000000000000000000021466 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/__init__.py0000664000175000017500000000000000000000000023565 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6120412 taskflow-4.6.4/taskflow/engines/action_engine/actions/0000775000175000017500000000000000000000000023126 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/actions/__init__.py0000664000175000017500000000000000000000000025225 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/actions/base.py0000664000175000017500000000323100000000000024411 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from taskflow import states @six.add_metaclass(abc.ABCMeta) class Action(object): """An action that handles executing, state changes, ... of atoms.""" NO_RESULT = object() """ Sentinel use to represent lack of any result (none can be a valid result) """ #: States that are expected to have a result to save... SAVE_RESULT_STATES = (states.SUCCESS, states.FAILURE, states.REVERTED, states.REVERT_FAILURE) def __init__(self, storage, notifier): self._storage = storage self._notifier = notifier @abc.abstractmethod def schedule_execution(self, atom): """Schedules atom execution.""" @abc.abstractmethod def schedule_reversion(self, atom): """Schedules atom reversion.""" @abc.abstractmethod def complete_reversion(self, atom, result): """Completes atom reversion.""" @abc.abstractmethod def complete_execution(self, atom, result): """Completes atom execution.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/actions/retry.py0000664000175000017500000001021000000000000024637 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.engines.action_engine.actions import base from taskflow import retry as retry_atom from taskflow import states from taskflow.types import failure class RetryAction(base.Action): """An action that handles executing, state changes, ... of retry atoms.""" def __init__(self, storage, notifier, retry_executor): super(RetryAction, self).__init__(storage, notifier) self._retry_executor = retry_executor def _get_retry_args(self, retry, revert=False, addons=None): if revert: arguments = self._storage.fetch_mapped_args( retry.revert_rebind, atom_name=retry.name, optional_args=retry.revert_optional ) else: arguments = self._storage.fetch_mapped_args( retry.rebind, atom_name=retry.name, optional_args=retry.optional ) history = self._storage.get_retry_history(retry.name) arguments[retry_atom.EXECUTE_REVERT_HISTORY] = history if addons: arguments.update(addons) return arguments def change_state(self, retry, state, result=base.Action.NO_RESULT): old_state = self._storage.get_atom_state(retry.name) if state in self.SAVE_RESULT_STATES: save_result = None if result is not self.NO_RESULT: save_result = result self._storage.save(retry.name, save_result, state) # TODO(harlowja): combine this with the save to avoid a call # back into the persistence layer... if state == states.REVERTED: self._storage.cleanup_retry_history(retry.name, state) else: if state == old_state: # NOTE(imelnikov): nothing really changed, so we should not # write anything to storage and run notifications. return self._storage.set_atom_state(retry.name, state) retry_uuid = self._storage.get_atom_uuid(retry.name) details = { 'retry_name': retry.name, 'retry_uuid': retry_uuid, 'old_state': old_state, } if result is not self.NO_RESULT: details['result'] = result self._notifier.notify(state, details) def schedule_execution(self, retry): self.change_state(retry, states.RUNNING) return self._retry_executor.execute_retry( retry, self._get_retry_args(retry)) def complete_reversion(self, retry, result): if isinstance(result, failure.Failure): self.change_state(retry, states.REVERT_FAILURE, result=result) else: self.change_state(retry, states.REVERTED, result=result) def complete_execution(self, retry, result): if isinstance(result, failure.Failure): self.change_state(retry, states.FAILURE, result=result) else: self.change_state(retry, states.SUCCESS, result=result) def schedule_reversion(self, retry): self.change_state(retry, states.REVERTING) arg_addons = { retry_atom.REVERT_FLOW_FAILURES: self._storage.get_failures(), } return self._retry_executor.revert_retry( retry, self._get_retry_args(retry, addons=arg_addons, revert=True)) def on_failure(self, retry, atom, last_failure): self._storage.save_retry_failure(retry.name, atom.name, last_failure) arguments = self._get_retry_args(retry) return retry.on_failure(**arguments) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/actions/task.py0000664000175000017500000001363200000000000024447 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. 
All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from taskflow.engines.action_engine.actions import base from taskflow import logging from taskflow import states from taskflow import task as task_atom from taskflow.types import failure LOG = logging.getLogger(__name__) class TaskAction(base.Action): """An action that handles scheduling, state changes, ... of task atoms.""" def __init__(self, storage, notifier, task_executor): super(TaskAction, self).__init__(storage, notifier) self._task_executor = task_executor def _is_identity_transition(self, old_state, state, task, progress=None): if state in self.SAVE_RESULT_STATES: # saving result is never identity transition return False if state != old_state: # changing state is not identity transition by definition return False # NOTE(imelnikov): last thing to check is that the progress has # changed, which means progress is not None and is different from # what is stored in the database. if progress is None: return False old_progress = self._storage.get_task_progress(task.name) if old_progress != progress: return False return True def change_state(self, task, state, progress=None, result=base.Action.NO_RESULT): old_state = self._storage.get_atom_state(task.name) if self._is_identity_transition(old_state, state, task, progress=progress): # NOTE(imelnikov): ignore identity transitions in order # to avoid extra write to storage backend and, what's # more important, extra notifications. return if state in self.SAVE_RESULT_STATES: save_result = None if result is not self.NO_RESULT: save_result = result self._storage.save(task.name, save_result, state) else: self._storage.set_atom_state(task.name, state) if progress is not None: self._storage.set_task_progress(task.name, progress) task_uuid = self._storage.get_atom_uuid(task.name) details = { 'task_name': task.name, 'task_uuid': task_uuid, 'old_state': old_state, } if result is not self.NO_RESULT: details['result'] = result self._notifier.notify(state, details) if progress is not None: task.update_progress(progress) def _on_update_progress(self, task, event_type, details): """Should be called when task updates its progress.""" try: progress = details.pop('progress') except KeyError: pass else: try: self._storage.set_task_progress(task.name, progress, details=details) except Exception: # Update progress callbacks should never fail, so capture and # log the emitted exception instead of raising it. 
LOG.exception("Failed setting task progress for %s to %0.3f", task, progress) def schedule_execution(self, task): self.change_state(task, states.RUNNING, progress=0.0) arguments = self._storage.fetch_mapped_args( task.rebind, atom_name=task.name, optional_args=task.optional ) if task.notifier.can_be_registered(task_atom.EVENT_UPDATE_PROGRESS): progress_callback = functools.partial(self._on_update_progress, task) else: progress_callback = None task_uuid = self._storage.get_atom_uuid(task.name) return self._task_executor.execute_task( task, task_uuid, arguments, progress_callback=progress_callback) def complete_execution(self, task, result): if isinstance(result, failure.Failure): self.change_state(task, states.FAILURE, result=result) else: self.change_state(task, states.SUCCESS, result=result, progress=1.0) def schedule_reversion(self, task): self.change_state(task, states.REVERTING, progress=0.0) arguments = self._storage.fetch_mapped_args( task.revert_rebind, atom_name=task.name, optional_args=task.revert_optional ) task_uuid = self._storage.get_atom_uuid(task.name) task_result = self._storage.get(task.name) failures = self._storage.get_failures() if task.notifier.can_be_registered(task_atom.EVENT_UPDATE_PROGRESS): progress_callback = functools.partial(self._on_update_progress, task) else: progress_callback = None return self._task_executor.revert_task( task, task_uuid, arguments, task_result, failures, progress_callback=progress_callback) def complete_reversion(self, task, result): if isinstance(result, failure.Failure): self.change_state(task, states.REVERT_FAILURE, result=result) else: self.change_state(task, states.REVERTED, progress=1.0, result=result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/builder.py0000664000175000017500000004116700000000000023477 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from concurrent import futures import weakref from automaton import machines from oslo_utils import timeutils from taskflow import logging from taskflow import states as st from taskflow.types import failure from taskflow.utils import iter_utils # Default waiting state timeout (in seconds). WAITING_TIMEOUT = 60 # Meta states the state machine uses. UNDEFINED = 'UNDEFINED' GAME_OVER = 'GAME_OVER' META_STATES = (GAME_OVER, UNDEFINED) # Event name constants the state machine uses. 
SCHEDULE = 'schedule_next' WAIT = 'wait_finished' ANALYZE = 'examine_finished' FINISH = 'completed' FAILED = 'failed' SUSPENDED = 'suspended' SUCCESS = 'success' REVERTED = 'reverted' START = 'start' # Internal enums used to denote how/if a atom was completed.""" FAILED_COMPLETING = 'failed_completing' WAS_CANCELLED = 'was_cancelled' SUCCESSFULLY_COMPLETED = 'successfully_completed' # For these states we will gather how long (in seconds) the # state was in-progress (cumulatively if the state is entered multiple # times) TIMED_STATES = (st.ANALYZING, st.RESUMING, st.SCHEDULING, st.WAITING) LOG = logging.getLogger(__name__) class MachineMemory(object): """State machine memory.""" def __init__(self): self.next_up = set() self.not_done = set() self.failures = [] self.done = set() def cancel_futures(self): """Attempts to cancel any not done futures.""" for fut in self.not_done: fut.cancel() class MachineBuilder(object): """State machine *builder* that powers the engine components. NOTE(harlowja): the machine (states and events that will trigger transitions) that this builds is represented by the following table:: +--------------+------------------+------------+----------+---------+ | Start | Event | End | On Enter | On Exit | +--------------+------------------+------------+----------+---------+ | ANALYZING | completed | GAME_OVER | . | . | | ANALYZING | schedule_next | SCHEDULING | . | . | | ANALYZING | wait_finished | WAITING | . | . | | FAILURE[$] | . | . | . | . | | GAME_OVER | failed | FAILURE | . | . | | GAME_OVER | reverted | REVERTED | . | . | | GAME_OVER | success | SUCCESS | . | . | | GAME_OVER | suspended | SUSPENDED | . | . | | RESUMING | schedule_next | SCHEDULING | . | . | | REVERTED[$] | . | . | . | . | | SCHEDULING | wait_finished | WAITING | . | . | | SUCCESS[$] | . | . | . | . | | SUSPENDED[$] | . | . | . | . | | UNDEFINED[^] | start | RESUMING | . | . | | WAITING | examine_finished | ANALYZING | . | . | +--------------+------------------+------------+----------+---------+ Between any of these yielded states (minus ``GAME_OVER`` and ``UNDEFINED``) if the engine has been suspended or the engine has failed (due to a non-resolveable task failure or scheduling failure) the machine will stop executing new tasks (currently running tasks will be allowed to complete) and this machines run loop will be broken. NOTE(harlowja): If the runtimes scheduler component is able to schedule tasks in parallel, this enables parallel running and/or reversion. """ def __init__(self, runtime, waiter): self._runtime = weakref.proxy(runtime) self._selector = runtime.selector self._completer = runtime.completer self._scheduler = runtime.scheduler self._storage = runtime.storage self._waiter = waiter def build(self, statistics, timeout=None, gather_statistics=True): """Builds a state-machine (that is used during running).""" if gather_statistics: watches = {} state_statistics = {} statistics['seconds_per_state'] = state_statistics watches = {} for timed_state in TIMED_STATES: state_statistics[timed_state.lower()] = 0.0 watches[timed_state] = timeutils.StopWatch() statistics['discarded_failures'] = 0 statistics['awaiting'] = 0 statistics['completed'] = 0 statistics['incomplete'] = 0 memory = MachineMemory() if timeout is None: timeout = WAITING_TIMEOUT # Cache some local functions/methods... 
do_complete = self._completer.complete do_complete_failure = self._completer.complete_failure get_atom_intention = self._storage.get_atom_intention def do_schedule(next_nodes): with self._storage.lock.write_lock(): return self._scheduler.schedule( sorted(next_nodes, key=lambda node: getattr(node, 'priority', 0), reverse=True)) def iter_next_atoms(atom=None, apply_deciders=True): # Yields and filters and tweaks the next atoms to run... maybe_atoms_it = self._selector.iter_next_atoms(atom=atom) for atom, late_decider in maybe_atoms_it: if apply_deciders: proceed = late_decider.check_and_affect(self._runtime) if proceed: yield atom else: yield atom def resume(old_state, new_state, event): # This reaction function just updates the state machines memory # to include any nodes that need to be executed (from a previous # attempt, which may be empty if never ran before) and any nodes # that are now ready to be ran. with self._storage.lock.write_lock(): memory.next_up.update( iter_utils.unique_seen((self._completer.resume(), iter_next_atoms()))) return SCHEDULE def game_over(old_state, new_state, event): # This reaction function is mainly a intermediary delegation # function that analyzes the current memory and transitions to # the appropriate handler that will deal with the memory values, # it is *always* called before the final state is entered. if memory.failures: return FAILED with self._storage.lock.read_lock(): leftover_atoms = iter_utils.count( # Avoid activating the deciders, since at this point # the engine is finishing and there will be no more further # work done anyway... iter_next_atoms(apply_deciders=False)) if leftover_atoms: # Ok we didn't finish (either reverting or executing...) so # that means we must of been stopped at some point... LOG.trace("Suspension determined to have been reacted to" " since (at least) %s atoms have been left in an" " unfinished state", leftover_atoms) return SUSPENDED elif self._runtime.is_success(): return SUCCESS else: return REVERTED def schedule(old_state, new_state, event): # This reaction function starts to schedule the memory's next # nodes (iff the engine is still runnable, which it may not be # if the user of this engine has requested the engine/storage # that holds this information to stop or suspend); handles failures # that occur during this process safely... with self._storage.lock.write_lock(): current_flow_state = self._storage.get_flow_state() if current_flow_state == st.RUNNING and memory.next_up: not_done, failures = do_schedule(memory.next_up) if not_done: memory.not_done.update(not_done) if failures: memory.failures.extend(failures) memory.next_up.intersection_update(not_done) elif current_flow_state == st.SUSPENDING and memory.not_done: # Try to force anything not cancelled to now be cancelled # so that the executor that gets it does not continue to # try to work on it (if the future execution is still in # its backlog, if it's already being executed, this will # do nothing). memory.cancel_futures() return WAIT def complete_an_atom(fut): # This completes a single atom saving its result in # storage and preparing whatever predecessors or successors will # now be ready to execute (or revert or retry...); it also # handles failures that occur during this process safely... 
atom = fut.atom try: outcome, result = fut.result() do_complete(atom, outcome, result) if isinstance(result, failure.Failure): retain = do_complete_failure(atom, outcome, result) if retain: memory.failures.append(result) else: # NOTE(harlowja): avoid making any intention request # to storage unless we are sure we are in DEBUG # enabled logging (otherwise we will call this all # the time even when DEBUG is not enabled, which # would suck...) if LOG.isEnabledFor(logging.DEBUG): intention = get_atom_intention(atom.name) LOG.debug("Discarding failure '%s' (in response" " to outcome '%s') under completion" " units request during completion of" " atom '%s' (intention is to %s)", result, outcome, atom, intention) if gather_statistics: statistics['discarded_failures'] += 1 if gather_statistics: statistics['completed'] += 1 except futures.CancelledError: # Well it got cancelled, skip doing anything # and move on; at a further time it will be resumed # and something should be done with it to get it # going again. return WAS_CANCELLED except Exception: memory.failures.append(failure.Failure()) LOG.exception("Engine '%s' atom post-completion" " failed", atom) return FAILED_COMPLETING else: return SUCCESSFULLY_COMPLETED def wait(old_state, new_state, event): # TODO(harlowja): maybe we should start doing 'yield from' this # call sometime in the future, or equivalent that will work in # py2 and py3. if memory.not_done: done, not_done = self._waiter(memory.not_done, timeout=timeout) memory.done.update(done) memory.not_done = not_done return ANALYZE def analyze(old_state, new_state, event): # This reaction function is responsible for analyzing all nodes # that have finished executing/reverting and figuring # out what nodes are now ready to be ran (and then triggering those # nodes to be scheduled in the future); handles failures that # occur during this process safely... next_up = set() with self._storage.lock.write_lock(): while memory.done: fut = memory.done.pop() # Force it to be completed so that we can ensure that # before we iterate over any successors or predecessors # that we know it has been completed and saved and so on... 
completion_status = complete_an_atom(fut) if (not memory.failures and completion_status != WAS_CANCELLED): atom = fut.atom try: more_work = set(iter_next_atoms(atom=atom)) except Exception: memory.failures.append(failure.Failure()) LOG.exception( "Engine '%s' atom post-completion" " next atom searching failed", atom) else: next_up.update(more_work) current_flow_state = self._storage.get_flow_state() if (current_flow_state == st.RUNNING and next_up and not memory.failures): memory.next_up.update(next_up) return SCHEDULE elif memory.not_done: if current_flow_state == st.SUSPENDING: memory.cancel_futures() return WAIT else: return FINISH def on_exit(old_state, event): LOG.trace("Exiting old state '%s' in response to event '%s'", old_state, event) if gather_statistics: if old_state in watches: w = watches[old_state] w.stop() state_statistics[old_state.lower()] += w.elapsed() if old_state in (st.SCHEDULING, st.WAITING): statistics['incomplete'] = len(memory.not_done) if old_state in (st.ANALYZING, st.SCHEDULING): statistics['awaiting'] = len(memory.next_up) def on_enter(new_state, event): LOG.trace("Entering new state '%s' in response to event '%s'", new_state, event) if gather_statistics and new_state in watches: watches[new_state].restart() state_kwargs = { 'on_exit': on_exit, 'on_enter': on_enter, } m = machines.FiniteMachine() m.add_state(GAME_OVER, **state_kwargs) m.add_state(UNDEFINED, **state_kwargs) m.add_state(st.ANALYZING, **state_kwargs) m.add_state(st.RESUMING, **state_kwargs) m.add_state(st.REVERTED, terminal=True, **state_kwargs) m.add_state(st.SCHEDULING, **state_kwargs) m.add_state(st.SUCCESS, terminal=True, **state_kwargs) m.add_state(st.SUSPENDED, terminal=True, **state_kwargs) m.add_state(st.WAITING, **state_kwargs) m.add_state(st.FAILURE, terminal=True, **state_kwargs) m.default_start_state = UNDEFINED m.add_transition(GAME_OVER, st.REVERTED, REVERTED) m.add_transition(GAME_OVER, st.SUCCESS, SUCCESS) m.add_transition(GAME_OVER, st.SUSPENDED, SUSPENDED) m.add_transition(GAME_OVER, st.FAILURE, FAILED) m.add_transition(UNDEFINED, st.RESUMING, START) m.add_transition(st.ANALYZING, GAME_OVER, FINISH) m.add_transition(st.ANALYZING, st.SCHEDULING, SCHEDULE) m.add_transition(st.ANALYZING, st.WAITING, WAIT) m.add_transition(st.RESUMING, st.SCHEDULING, SCHEDULE) m.add_transition(st.SCHEDULING, st.WAITING, WAIT) m.add_transition(st.WAITING, st.ANALYZING, ANALYZE) m.add_reaction(GAME_OVER, FINISH, game_over) m.add_reaction(st.ANALYZING, ANALYZE, analyze) m.add_reaction(st.RESUMING, START, resume) m.add_reaction(st.SCHEDULING, SCHEDULE, schedule) m.add_reaction(st.WAITING, WAIT, wait) m.freeze() return (m, memory) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/compiler.py0000664000175000017500000003740500000000000023663 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import threading import fasteners from oslo_utils import excutils import six from taskflow import flow from taskflow import logging from taskflow import task from taskflow.types import graph as gr from taskflow.types import tree as tr from taskflow.utils import iter_utils from taskflow.utils import misc from taskflow.flow import (LINK_INVARIANT, LINK_RETRY) # noqa LOG = logging.getLogger(__name__) # Constants attached to node attributes in the execution graph (and tree # node metadata), provided as constants here and constants in the compilation # class (so that users will not have to import this file to access them); but # provide them as module constants so that internal code can more # easily access them... TASK = 'task' RETRY = 'retry' FLOW = 'flow' FLOW_END = 'flow_end' # Quite often used together, so make a tuple everyone can share... ATOMS = (TASK, RETRY) FLOWS = (FLOW, FLOW_END) class Terminator(object): """Flow terminator class.""" def __init__(self, flow): self._flow = flow self._name = "%s[$]" % (self._flow.name,) @property def flow(self): """The flow which this terminator signifies/marks the end of.""" return self._flow @property def name(self): """Useful name this end terminator has (derived from flow name).""" return self._name def __str__(self): return "%s[$]" % (self._flow,) class Compilation(object): """The result of a compilers ``compile()`` is this *immutable* object.""" #: Task nodes will have a ``kind`` metadata key with this value. TASK = TASK #: Retry nodes will have a ``kind`` metadata key with this value. RETRY = RETRY FLOW = FLOW """ Flow **entry** nodes will have a ``kind`` metadata key with this value. """ FLOW_END = FLOW_END """ Flow **exit** nodes will have a ``kind`` metadata key with this value (only applicable for compilation execution graph, not currently used in tree hierarchy). """ def __init__(self, execution_graph, hierarchy): self._execution_graph = execution_graph self._hierarchy = hierarchy @property def execution_graph(self): """The execution ordering of atoms (as a graph structure).""" return self._execution_graph @property def hierarchy(self): """The hierarchy of patterns (as a tree structure).""" return self._hierarchy def _overlap_occurrence_detector(to_graph, from_graph): """Returns how many nodes in 'from' graph are in 'to' graph (if any).""" return iter_utils.count(node for node in from_graph.nodes if node in to_graph) def _add_update_edges(graph, nodes_from, nodes_to, attr_dict=None): """Adds/updates edges from nodes to other nodes in the specified graph. It will connect the 'nodes_from' to the 'nodes_to' if an edge currently does *not* exist (if it does already exist then the edges attributes are just updated instead). When an edge is created the provided edge attributes dictionary will be applied to the new edge between these two nodes. """ # NOTE(harlowja): give each edge its own attr copy so that if it's # later modified that the same copy isn't modified... for u in nodes_from: for v in nodes_to: if not graph.has_edge(u, v): if attr_dict: graph.add_edge(u, v, attr_dict=attr_dict.copy()) else: graph.add_edge(u, v) else: # Just update the attr_dict (if any). 
if attr_dict: graph.add_edge(u, v, attr_dict=attr_dict.copy()) class TaskCompiler(object): """Non-recursive compiler of tasks.""" def compile(self, task, parent=None): graph = gr.DiGraph(name=task.name) graph.add_node(task, kind=TASK) node = tr.Node(task, kind=TASK) if parent is not None: parent.add(node) return graph, node class FlowCompiler(object): """Recursive compiler of flows.""" def __init__(self, deep_compiler_func): self._deep_compiler_func = deep_compiler_func def compile(self, flow, parent=None): """Decomposes a flow into a graph and scope tree hierarchy.""" graph = gr.DiGraph(name=flow.name) graph.add_node(flow, kind=FLOW, noop=True) tree_node = tr.Node(flow, kind=FLOW, noop=True) if parent is not None: parent.add(tree_node) if flow.retry is not None: tree_node.add(tr.Node(flow.retry, kind=RETRY)) decomposed = dict( (child, self._deep_compiler_func(child, parent=tree_node)[0]) for child in flow) decomposed_graphs = list(six.itervalues(decomposed)) graph = gr.merge_graphs(graph, *decomposed_graphs, overlap_detector=_overlap_occurrence_detector) for u, v, attr_dict in flow.iter_links(): u_graph = decomposed[u] v_graph = decomposed[v] _add_update_edges(graph, u_graph.no_successors_iter(), list(v_graph.no_predecessors_iter()), attr_dict=attr_dict) # Insert the flow(s) retry if needed, and always make sure it # is the **immediate** successor of the flow node itself. if flow.retry is not None: graph.add_node(flow.retry, kind=RETRY) _add_update_edges(graph, [flow], [flow.retry], attr_dict={LINK_INVARIANT: True}) for node in graph.nodes: if node is not flow.retry and node is not flow: graph.nodes[node].setdefault(RETRY, flow.retry) from_nodes = [flow.retry] attr_dict = {LINK_INVARIANT: True, LINK_RETRY: True} else: from_nodes = [flow] attr_dict = {LINK_INVARIANT: True} # Ensure all nodes with no predecessors are connected to this flow # or its retry node (so that the invariant that the flow node is # traversed through before its contents is maintained); this allows # us to easily know when we have entered a flow (when running) and # do special and/or smart things such as only traverse up to the # start of a flow when looking for node deciders. _add_update_edges(graph, from_nodes, [ node for node in graph.no_predecessors_iter() if node is not flow ], attr_dict=attr_dict) # Connect all nodes with no successors into a special terminator # that is used to identify the end of the flow and ensure that all # execution traversals will traverse over this node before executing # further work (this is especially useful for nesting and knowing # when we have exited a nesting level); it allows us to do special # and/or smart things such as applying deciders up to (but not # beyond) a flow termination point. # # Do note that in a empty flow this will just connect itself to # the flow node itself... and also note we can not use the flow # object itself (primarily because the underlying graph library # uses hashing to identify node uniqueness and we can easily create # a loop if we don't do this correctly, so avoid that by just # creating this special node and tagging it with a special kind); we # may be able to make this better in the future with a multidigraph # that networkx provides?? 
flow_term = Terminator(flow) graph.add_node(flow_term, kind=FLOW_END, noop=True) _add_update_edges(graph, [ node for node in graph.no_successors_iter() if node is not flow_term ], [flow_term], attr_dict={LINK_INVARIANT: True}) return graph, tree_node class PatternCompiler(object): """Compiles a flow pattern (or task) into a compilation unit. Let's dive into the basic idea for how this works: The compiler here is provided a 'root' object via its __init__ method, this object could be a task, or a flow (one of the supported patterns), the end-goal is to produce a :py:class:`.Compilation` object as the result with the needed components. If this is not possible a :py:class:`~.taskflow.exceptions.CompilationFailure` will be raised. In the case where a **unknown** type is being requested to compile a ``TypeError`` will be raised and when a duplicate object (one that has **already** been compiled) is encountered a ``ValueError`` is raised. The complexity of this comes into play when the 'root' is a flow that contains itself other nested flows (and so-on); to compile this object and its contained objects into a graph that *preserves* the constraints the pattern mandates we have to go through a recursive algorithm that creates subgraphs for each nesting level, and then on the way back up through the recursion (now with a decomposed mapping from contained patterns or atoms to there corresponding subgraph) we have to then connect the subgraphs (and the atom(s) there-in) that were decomposed for a pattern correctly into a new graph and then ensure the pattern mandated constraints are retained. Finally we then return to the caller (and they will do the same thing up until the root node, which by that point one graph is created with all contained atoms in the pattern/nested patterns mandated ordering). Also maintained in the :py:class:`.Compilation` object is a hierarchy of the nesting of items (which is also built up during the above mentioned recusion, via a much simpler algorithm); this is typically used later to determine the prior atoms of a given atom when looking up values that can be provided to that atom for execution (see the scopes.py file for how this works). Note that although you *could* think that the graph itself could be used for this, which in some ways it can (for limited usage) the hierarchy retains the nested structure (which is useful for scoping analysis/lookup) to be able to provide back a iterator that gives back the scopes visible at each level (the graph does not have this information once flattened). Let's take an example: Given the pattern ``f(a(b, c), d)`` where ``f`` is a :py:class:`~taskflow.patterns.linear_flow.Flow` with items ``a(b, c)`` where ``a`` is a :py:class:`~taskflow.patterns.linear_flow.Flow` composed of tasks ``(b, c)`` and task ``d``. 
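    Expressed in code, that pattern could be constructed and compiled
    roughly as follows (a sketch only; ``B``, ``C`` and ``D`` are assumed
    to be task classes)::

        from taskflow.patterns import linear_flow

        a = linear_flow.Flow('a').add(B('b'), C('c'))
        f = linear_flow.Flow('f').add(a, D('d'))
        compilation = PatternCompiler(f).compile()
        # compilation.execution_graph and compilation.hierarchy now hold
        # the execution graph and scope tree described below.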
The algorithm that will be performed (mirroring the above described logic) will go through the following steps (the tree hierarchy building is left out as that is more obvious):: Compiling f - Decomposing flow f with no parent (must be the root) - Compiling a - Decomposing flow a with parent f - Compiling b - Decomposing task b with parent a - Decomposed b into: Name: b Nodes: 1 - b Edges: 0 - Compiling c - Decomposing task c with parent a - Decomposed c into: Name: c Nodes: 1 - c Edges: 0 - Relinking decomposed b -> decomposed c - Decomposed a into: Name: a Nodes: 2 - b - c Edges: 1 b -> c ({'invariant': True}) - Compiling d - Decomposing task d with parent f - Decomposed d into: Name: d Nodes: 1 - d Edges: 0 - Relinking decomposed a -> decomposed d - Decomposed f into: Name: f Nodes: 3 - c - b - d Edges: 2 c -> d ({'invariant': True}) b -> c ({'invariant': True}) """ def __init__(self, root, freeze=True): self._root = root self._history = set() self._freeze = freeze self._lock = threading.Lock() self._compilation = None self._matchers = [ (flow.Flow, FlowCompiler(self._compile)), (task.Task, TaskCompiler()), ] self._level = 0 def _compile(self, item, parent=None): """Compiles a item (pattern, task) into a graph + tree node.""" item_compiler = misc.match_type(item, self._matchers) if item_compiler is not None: self._pre_item_compile(item) graph, node = item_compiler.compile(item, parent=parent) self._post_item_compile(item, graph, node) return graph, node else: raise TypeError("Unknown object '%s' (%s) requested to compile" % (item, type(item))) def _pre_item_compile(self, item): """Called before a item is compiled; any pre-compilation actions.""" if item in self._history: raise ValueError("Already compiled item '%s' (%s), duplicate" " and/or recursive compiling is not" " supported" % (item, type(item))) self._history.add(item) if LOG.isEnabledFor(logging.TRACE): LOG.trace("%sCompiling '%s'", " " * self._level, item) self._level += 1 def _post_item_compile(self, item, graph, node): """Called after a item is compiled; doing post-compilation actions.""" self._level -= 1 if LOG.isEnabledFor(logging.TRACE): prefix = ' ' * self._level LOG.trace("%sDecomposed '%s' into:", prefix, item) prefix = ' ' * (self._level + 1) LOG.trace("%sGraph:", prefix) for line in graph.pformat().splitlines(): LOG.trace("%s %s", prefix, line) LOG.trace("%sHierarchy:", prefix) for line in node.pformat().splitlines(): LOG.trace("%s %s", prefix, line) def _pre_compile(self): """Called before the compilation of the root starts.""" self._history.clear() self._level = 0 def _post_compile(self, graph, node): """Called after the compilation of the root finishes successfully.""" self._history.clear() self._level = 0 @fasteners.locked def compile(self): """Compiles the contained item into a compiled equivalent.""" if self._compilation is None: self._pre_compile() try: graph, node = self._compile(self._root, parent=None) except Exception: with excutils.save_and_reraise_exception(): # Always clear the history, to avoid retaining junk # in memory that isn't needed to be in memory if # compilation fails... 
self._history.clear() else: self._post_compile(graph, node) if self._freeze: graph.freeze() node.freeze() self._compilation = Compilation(graph, node) return self._compilation ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/completer.py0000664000175000017500000002153000000000000024033 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import weakref from oslo_utils import reflection from oslo_utils import strutils import six from taskflow.engines.action_engine import compiler as co from taskflow.engines.action_engine import executor as ex from taskflow import logging from taskflow import retry as retry_atom from taskflow import states as st LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class Strategy(object): """Failure resolution strategy base class.""" strategy = None def __init__(self, runtime): self._runtime = runtime @abc.abstractmethod def apply(self): """Applies some algorithm to resolve some detected failure.""" def __str__(self): base = reflection.get_class_name(self, fully_qualified=False) if self.strategy is not None: strategy_name = self.strategy.name else: strategy_name = "???" return base + "(strategy=%s)" % (strategy_name) class RevertAndRetry(Strategy): """Sets the *associated* subflow for revert to be later retried.""" strategy = retry_atom.RETRY def __init__(self, runtime, retry): super(RevertAndRetry, self).__init__(runtime) self._retry = retry def apply(self): tweaked = self._runtime.reset_atoms([self._retry], state=None, intention=st.RETRY) tweaked.extend(self._runtime.reset_subgraph(self._retry, state=None, intention=st.REVERT)) return tweaked class RevertAll(Strategy): """Sets *all* nodes/atoms to the ``REVERT`` intention.""" strategy = retry_atom.REVERT_ALL def __init__(self, runtime): super(RevertAll, self).__init__(runtime) def apply(self): return self._runtime.reset_atoms( self._runtime.iterate_nodes(co.ATOMS), state=None, intention=st.REVERT) class Revert(Strategy): """Sets atom and *associated* nodes to the ``REVERT`` intention.""" strategy = retry_atom.REVERT def __init__(self, runtime, atom): super(Revert, self).__init__(runtime) self._atom = atom def apply(self): tweaked = self._runtime.reset_atoms([self._atom], state=None, intention=st.REVERT) tweaked.extend(self._runtime.reset_subgraph(self._atom, state=None, intention=st.REVERT)) return tweaked class Completer(object): """Completes atoms using actions to complete them.""" def __init__(self, runtime): self._runtime = weakref.proxy(runtime) self._storage = runtime.storage self._undefined_resolver = RevertAll(self._runtime) self._defer_reverts = strutils.bool_from_string( self._runtime.options.get('defer_reverts', False)) self._resolve = not strutils.bool_from_string( self._runtime.options.get('never_resolve', False)) def resume(self): """Resumes atoms in the contained graph. 
This is done to allow any previously completed or failed atoms to be analyzed, there results processed and any potential atoms affected to be adjusted as needed. This should return a set of atoms which should be the initial set of atoms that were previously not finished (due to a RUNNING or REVERTING attempt not previously finishing). """ atoms = list(self._runtime.iterate_nodes(co.ATOMS)) atom_states = self._storage.get_atoms_states(atom.name for atom in atoms) if self._resolve: for atom in atoms: atom_state, _atom_intention = atom_states[atom.name] if atom_state == st.FAILURE: self._process_atom_failure( atom, self._storage.get(atom.name)) for retry in self._runtime.iterate_retries(st.RETRYING): retry_affected_atoms_it = self._runtime.retry_subflow(retry) for atom, state, intention in retry_affected_atoms_it: if state: atom_states[atom.name] = (state, intention) unfinished_atoms = set() for atom in atoms: atom_state, _atom_intention = atom_states[atom.name] if atom_state in (st.RUNNING, st.REVERTING): unfinished_atoms.add(atom) LOG.trace("Resuming atom '%s' since it was left in" " state %s", atom, atom_state) return unfinished_atoms def complete_failure(self, node, outcome, failure): """Performs post-execution completion of a nodes failure. Returns whether the result should be saved into an accumulator of failures or whether this should not be done. """ if outcome == ex.EXECUTED and self._resolve: self._process_atom_failure(node, failure) # We resolved something, carry on... return False else: # Reverting failed (or resolving was turned off), always # retain the failure... return True def complete(self, node, outcome, result): """Performs post-execution completion of a node result.""" handler = self._runtime.fetch_action(node) if outcome == ex.EXECUTED: handler.complete_execution(node, result) else: handler.complete_reversion(node, result) def _determine_resolution(self, atom, failure): """Determines which resolution strategy to activate/apply.""" retry = self._runtime.find_retry(atom) if retry is not None: # Ask retry controller what to do in case of failure. handler = self._runtime.fetch_action(retry) strategy = handler.on_failure(retry, atom, failure) if strategy == retry_atom.RETRY: return RevertAndRetry(self._runtime, retry) elif strategy == retry_atom.REVERT: # Ask parent retry and figure out what to do... parent_resolver = self._determine_resolution(retry, failure) # In the future, this will be the only behavior. REVERT # should defer to the parent retry if it exists, or use the # default REVERT_ALL if it doesn't. if self._defer_reverts: return parent_resolver # Ok if the parent resolver says something not REVERT, and # it isn't just using the undefined resolver, assume the # parent knows best. if parent_resolver is not self._undefined_resolver: if parent_resolver.strategy != retry_atom.REVERT: return parent_resolver return Revert(self._runtime, retry) elif strategy == retry_atom.REVERT_ALL: return RevertAll(self._runtime) else: raise ValueError("Unknown atom failure resolution" " action/strategy '%s'" % strategy) else: return self._undefined_resolver def _process_atom_failure(self, atom, failure): """Processes atom failure & applies resolution strategies. On atom failure this will find the atoms associated retry controller and ask that controller for the strategy to perform to resolve that failure. After getting a resolution strategy decision this method will then adjust the needed other atoms intentions, and states, ... so that the failure can be worked around. 
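# Illustrative sketch (flow, task and retry setup are made up): a retry
# controller attached to a flow is what makes _process_atom_failure pick
# the RevertAndRetry strategy above instead of the default RevertAll.
import taskflow.engines
from taskflow.patterns import linear_flow
from taskflow import retry
from taskflow import task


class Flaky(task.Task):
    def __init__(self, name='flaky'):
        super(Flaky, self).__init__(name=name)
        self._calls = 0

    def execute(self):
        self._calls += 1
        if self._calls < 3:
            raise RuntimeError('try again')


flow = linear_flow.Flow('f', retry=retry.Times(attempts=3)).add(Flaky())
# Each failure reverts the retry's subflow and re-runs it until the
# allowed attempts are exhausted; here the third attempt succeeds.
taskflow.engines.run(flow)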
""" resolver = self._determine_resolution(atom, failure) LOG.debug("Applying resolver '%s' to resolve failure '%s'" " of atom '%s'", resolver, failure, atom) tweaked = resolver.apply() # Only show the tweaked node list when trace is on, otherwise # just show the amount/count of nodes tweaks... if LOG.isEnabledFor(logging.TRACE): LOG.trace("Modified/tweaked %s nodes while applying" " resolver '%s'", tweaked, resolver) else: LOG.debug("Modified/tweaked %s nodes while applying" " resolver '%s'", len(tweaked), resolver) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/deciders.py0000664000175000017500000001617100000000000023630 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import itertools import six from taskflow import deciders from taskflow.engines.action_engine import compiler from taskflow.engines.action_engine import traversal from taskflow import logging from taskflow import states LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class Decider(object): """Base class for deciders. Provides interface to be implemented by sub-classes. Deciders check whether next atom in flow should be executed or not. """ @abc.abstractmethod def tally(self, runtime): """Tally edge deciders on whether this decider should allow running. The returned value is a list of edge deciders that voted 'nay' (do not allow running). """ @abc.abstractmethod def affect(self, runtime, nay_voters): """Affects associated atoms due to at least one 'nay' edge decider. This will alter the associated atom + some set of successor atoms by setting there state and intention to ``IGNORE`` so that they are ignored in future runtime activities. """ def check_and_affect(self, runtime): """Handles :py:func:`~.tally` + :py:func:`~.affect` in right order. NOTE(harlowja): If there are zero 'nay' edge deciders then it is assumed this decider should allow running. Returns boolean of whether this decider allows for running (or not). 
""" nay_voters = self.tally(runtime) if nay_voters: self.affect(runtime, nay_voters) return False return True def _affect_all_successors(atom, runtime): execution_graph = runtime.compilation.execution_graph successors_iter = traversal.depth_first_iterate( execution_graph, atom, traversal.Direction.FORWARD) runtime.reset_atoms(itertools.chain([atom], successors_iter), state=states.IGNORE, intention=states.IGNORE) def _affect_successor_tasks_in_same_flow(atom, runtime): execution_graph = runtime.compilation.execution_graph successors_iter = traversal.depth_first_iterate( execution_graph, atom, traversal.Direction.FORWARD, # Do not go through nested flows but do follow *all* tasks that # are directly connected in this same flow (thus the reason this is # called the same flow decider); retries are direct successors # of flows, so they should also be not traversed through, but # setting this explicitly ensures that. through_flows=False, through_retries=False) runtime.reset_atoms(itertools.chain([atom], successors_iter), state=states.IGNORE, intention=states.IGNORE) def _affect_atom(atom, runtime): runtime.reset_atoms([atom], state=states.IGNORE, intention=states.IGNORE) def _affect_direct_task_neighbors(atom, runtime): def _walk_neighbors(): execution_graph = runtime.compilation.execution_graph for node in execution_graph.successors(atom): node_data = execution_graph.nodes[node] if node_data['kind'] == compiler.TASK: yield node successors_iter = _walk_neighbors() runtime.reset_atoms(itertools.chain([atom], successors_iter), state=states.IGNORE, intention=states.IGNORE) class IgnoreDecider(Decider): """Checks any provided edge-deciders and determines if ok to run.""" _depth_strategies = { deciders.Depth.ALL: _affect_all_successors, deciders.Depth.ATOM: _affect_atom, deciders.Depth.FLOW: _affect_successor_tasks_in_same_flow, deciders.Depth.NEIGHBORS: _affect_direct_task_neighbors, } def __init__(self, atom, edge_deciders): self._atom = atom self._edge_deciders = edge_deciders def tally(self, runtime): voters = { 'run_it': [], 'do_not_run_it': [], 'ignored': [], } history = {} if self._edge_deciders: # Gather all atoms (the ones that were not ignored) results so # that those results can be used by the decider(s) that are # making a decision as to pass or not pass... states_intentions = runtime.storage.get_atoms_states( ed.from_node.name for ed in self._edge_deciders if ed.kind in compiler.ATOMS) for atom_name in six.iterkeys(states_intentions): atom_state, _atom_intention = states_intentions[atom_name] if atom_state != states.IGNORE: history[atom_name] = runtime.storage.get(atom_name) for ed in self._edge_deciders: if (ed.kind in compiler.ATOMS and # It was an ignored atom (not included in history and # the only way that is possible is via above loop # skipping it...) 
ed.from_node.name not in history): voters['ignored'].append(ed) continue if not ed.decider(history=history): voters['do_not_run_it'].append(ed) else: voters['run_it'].append(ed) if LOG.isEnabledFor(logging.TRACE): LOG.trace("Out of %s deciders there were %s 'do no run it'" " voters, %s 'do run it' voters and %s 'ignored'" " voters for transition to atom '%s' given history %s", sum(len(eds) for eds in six.itervalues(voters)), list(ed.from_node.name for ed in voters['do_not_run_it']), list(ed.from_node.name for ed in voters['run_it']), list(ed.from_node.name for ed in voters['ignored']), self._atom.name, history) return voters['do_not_run_it'] def affect(self, runtime, nay_voters): # If there were many 'nay' edge deciders that were targeted # at this atom, then we need to pick the one which has the widest # impact and respect that one as the decider depth that will # actually affect things. widest_depth = deciders.pick_widest(ed.depth for ed in nay_voters) affector = self._depth_strategies[widest_depth] return affector(self._atom, runtime) class NoOpDecider(Decider): """No-op decider that says it is always ok to run & has no effect(s).""" def tally(self, runtime): """Always good to go.""" return [] def affect(self, runtime, nay_voters): """Does nothing.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/engine.py0000664000175000017500000007155300000000000023320 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import itertools import threading from automaton import runners from concurrent import futures import fasteners import networkx as nx from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import timeutils import six from taskflow.engines.action_engine import builder from taskflow.engines.action_engine import compiler from taskflow.engines.action_engine import executor from taskflow.engines.action_engine import process_executor from taskflow.engines.action_engine import runtime from taskflow.engines import base from taskflow import exceptions as exc from taskflow import logging from taskflow import states from taskflow import storage from taskflow.types import failure from taskflow.utils import misc LOG = logging.getLogger(__name__) @contextlib.contextmanager def _start_stop(task_executor, retry_executor): # A teenie helper context manager to safely start/stop engine executors... 
task_executor.start() try: retry_executor.start() try: yield (task_executor, retry_executor) finally: retry_executor.stop() finally: task_executor.stop() def _pre_check(check_compiled=True, check_storage_ensured=True, check_validated=True): """Engine state precondition checking decorator.""" def decorator(meth): do_what = meth.__name__ @six.wraps(meth) def wrapper(self, *args, **kwargs): if check_compiled and not self._compiled: raise exc.InvalidState("Can not %s an engine which" " has not been compiled" % do_what) if check_storage_ensured and not self._storage_ensured: raise exc.InvalidState("Can not %s an engine" " which has not had its storage" " populated" % do_what) if check_validated and not self._validated: raise exc.InvalidState("Can not %s an engine which" " has not been validated" % do_what) return meth(self, *args, **kwargs) return wrapper return decorator class ActionEngine(base.Engine): """Generic action-based engine. This engine compiles the flow (and any subflows) into a compilation unit which contains the full runtime definition to be executed and then uses this compilation unit in combination with the executor, runtime, machine builder and storage classes to attempt to run your flow (and any subflows & contained atoms) to completion. NOTE(harlowja): during this process it is permissible and valid to have a task or multiple tasks in the execution graph fail (at the same time even), which will cause the process of reversion or retrying to commence. See the valid states in the states module to learn more about what other states the tasks and flow being ran can go through. **Engine options:** +----------------------+-----------------------+------+------------+ | Name/key | Description | Type | Default | +======================+=======================+======+============+ | ``defer_reverts`` | This option lets you | bool | ``False`` | | | safely nest flows | | | | | with retries inside | | | | | flows without retries | | | | | and it still behaves | | | | | as a user would | | | | | expect (for example | | | | | if the retry gets | | | | | exhausted it reverts | | | | | the outer flow unless | | | | | the outer flow has a | | | | | has a separate retry | | | | | behavior). | | | +----------------------+-----------------------+------+------------+ | ``never_resolve`` | When true, instead | bool | ``False`` | | | of reverting | | | | | and trying to resolve | | | | | a atom failure the | | | | | engine will skip | | | | | reverting and abort | | | | | instead of reverting | | | | | and/or retrying. | | | +----------------------+-----------------------+------+------------+ | ``inject_transient`` | When true, values | bool | ``True`` | | | that are local to | | | | | each atoms scope | | | | | are injected into | | | | | storage into a | | | | | transient location | | | | | (typically a local | | | | | dictionary), when | | | | | false those values | | | | | are instead persisted | | | | | into atom details | | | | | (and saved in a non- | | | | | transient manner). | | | +----------------------+-----------------------+------+------------+ """ NO_RERAISING_STATES = frozenset([states.SUSPENDED, states.SUCCESS]) """ States that if the engine stops in will **not** cause any potential failures to be reraised. States **not** in this list will cause any failure/s that were captured (if any) to get reraised. 
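# Illustrative sketch (the flow below is made up): the options in the table
# above are passed as keyword arguments through the taskflow.engines
# helpers and end up in this engine's options dictionary.
import taskflow.engines
from taskflow.patterns import linear_flow
from taskflow import task


class Hello(task.Task):
    def execute(self):
        return 'hello'


flow = linear_flow.Flow('f').add(Hello(name='greeter'))
engine = taskflow.engines.load(flow, defer_reverts=True,
                               inject_transient=True)
engine.run()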
""" IGNORABLE_STATES = frozenset( itertools.chain([states.SCHEDULING, states.WAITING, states.RESUMING, states.ANALYZING], builder.META_STATES)) """ Informational states this engines internal machine yields back while running, not useful to have the engine record but useful to provide to end-users when doing execution iterations via :py:meth:`.run_iter`. """ MAX_MACHINE_STATES_RETAINED = 10 """ During :py:meth:`~.run_iter` the last X state machine transitions will be recorded (typically only useful on failure). """ def __init__(self, flow, flow_detail, backend, options): super(ActionEngine, self).__init__(flow, flow_detail, backend, options) self._runtime = None self._compiled = False self._compilation = None self._compiler = compiler.PatternCompiler(flow) self._lock = threading.RLock() self._storage_ensured = False self._validated = False # Retries are not *currently* executed out of the engines process # or thread (this could change in the future if we desire it to). self._retry_executor = executor.SerialRetryExecutor() self._inject_transient = strutils.bool_from_string( self._options.get('inject_transient', True)) self._gather_statistics = strutils.bool_from_string( self._options.get('gather_statistics', True)) self._statistics = {} @_pre_check(check_compiled=True, # NOTE(harlowja): We can alter the state of the # flow without ensuring its storage is setup for # its atoms (since this state change does not affect # those units). check_storage_ensured=False, check_validated=False) def suspend(self): self._change_state(states.SUSPENDING) @property def statistics(self): return self._statistics @property def compilation(self): """The compilation result. NOTE(harlowja): Only accessible after compilation has completed (None will be returned when this property is accessed before compilation has completed successfully). """ if self._compiled: return self._compilation else: return None @misc.cachedproperty def storage(self): """The storage unit for this engine. NOTE(harlowja): the atom argument lookup strategy will change for this storage unit after :py:func:`~taskflow.engines.base.Engine.compile` has completed (since **only** after compilation is the actual structure known). Before :py:func:`~taskflow.engines.base.Engine.compile` has completed the atom argument lookup strategy lookup will be restricted to injected arguments **only** (this will **not** reflect the actual runtime lookup strategy, which typically will be, but is not always different). """ def _scope_fetcher(atom_name): if self._compiled: return self._runtime.fetch_scopes_for(atom_name) else: return None return storage.Storage(self._flow_detail, backend=self._backend, scope_fetcher=_scope_fetcher) def run(self, timeout=None): """Runs the engine (or die trying). :param timeout: timeout to wait for any atoms to complete (this timeout will be used during the waiting period that occurs when unfinished atoms are being waited on). """ with fasteners.try_lock(self._lock) as was_locked: if not was_locked: raise exc.ExecutionFailure("Engine currently locked, please" " try again later") for _state in self.run_iter(timeout=timeout): pass def run_iter(self, timeout=None): """Runs the engine using iteration (or die trying). :param timeout: timeout to wait for any atoms to complete (this timeout will be used during the waiting period that occurs after the waiting state is yielded when unfinished atoms are being waited on). 
Instead of running to completion in a blocking manner, this will return a generator which will yield back the various states that the engine is going through (and can be used to run multiple engines at once using a generator per engine). The iterator returned also responds to the ``send()`` method from :pep:`0342` and will attempt to suspend itself if a truthy value is sent in (the suspend may be delayed until all active atoms have finished). NOTE(harlowja): using the ``run_iter`` method will **not** retain the engine lock while executing so the user should ensure that there is only one entity using a returned engine iterator (one per engine) at a given time. """ self.compile() self.prepare() self.validate() # Keep track of the last X state changes, which if a failure happens # are quite useful to log (and the performance of tracking this # should be negligible). last_transitions = collections.deque( maxlen=max(1, self.MAX_MACHINE_STATES_RETAINED)) with _start_stop(self._task_executor, self._retry_executor): self._change_state(states.RUNNING) if self._gather_statistics: self._statistics.clear() w = timeutils.StopWatch() w.start() else: w = None try: closed = False machine, memory = self._runtime.builder.build( self._statistics, timeout=timeout, gather_statistics=self._gather_statistics) r = runners.FiniteRunner(machine) for transition in r.run_iter(builder.START): last_transitions.append(transition) _prior_state, new_state = transition # NOTE(harlowja): skip over meta-states if new_state in builder.META_STATES: continue if new_state == states.FAILURE: failure.Failure.reraise_if_any(memory.failures) if closed: continue try: try_suspend = yield new_state except GeneratorExit: # The generator was closed, attempt to suspend and # continue looping until we have cleanly closed up # shop... closed = True self.suspend() except Exception: # Capture the failure, and ensure that the # machine will notice that something externally # has sent an exception in and that it should # finish up and reraise. 
memory.failures.append(failure.Failure()) closed = True else: if try_suspend: self.suspend() except Exception: with excutils.save_and_reraise_exception(): LOG.exception("Engine execution has failed, something" " bad must have happened (last" " %s machine transitions were %s)", last_transitions.maxlen, list(last_transitions)) self._change_state(states.FAILURE) else: if last_transitions: _prior_state, new_state = last_transitions[-1] if new_state not in self.IGNORABLE_STATES: self._change_state(new_state) if new_state not in self.NO_RERAISING_STATES: e_failures = self.storage.get_execute_failures() r_failures = self.storage.get_revert_failures() er_failures = itertools.chain( six.itervalues(e_failures), six.itervalues(r_failures)) failure.Failure.reraise_if_any(er_failures) finally: if w is not None: w.stop() self._statistics['active_for'] = w.elapsed() @staticmethod def _check_compilation(compilation): """Performs post compilation validation/checks.""" seen = set() dups = set() execution_graph = compilation.execution_graph for node, node_attrs in execution_graph.nodes(data=True): if node_attrs['kind'] in compiler.ATOMS: atom_name = node.name if atom_name in seen: dups.add(atom_name) else: seen.add(atom_name) if dups: raise exc.Duplicate( "Atoms with duplicate names found: %s" % (sorted(dups))) return compilation def _change_state(self, state): moved, old_state = self.storage.change_flow_state(state) if moved: details = { 'engine': self, 'flow_name': self.storage.flow_name, 'flow_uuid': self.storage.flow_uuid, 'old_state': old_state, } self.notifier.notify(state, details) def _ensure_storage(self): """Ensure all contained atoms exist in the storage unit.""" self.storage.ensure_atoms( self._runtime.iterate_nodes(compiler.ATOMS)) for atom in self._runtime.iterate_nodes(compiler.ATOMS): if atom.inject: self.storage.inject_atom_args(atom.name, atom.inject, transient=self._inject_transient) @fasteners.locked @_pre_check(check_validated=False) def validate(self): # At this point we can check to ensure all dependencies are either # flow/task provided or storage provided, if there are still missing # dependencies then this flow will fail at runtime (which we can avoid # by failing at validation time). if LOG.isEnabledFor(logging.TRACE): execution_graph = self._compilation.execution_graph LOG.trace("Validating scoping and argument visibility for" " execution graph with %s nodes and %s edges with" " density %0.3f", execution_graph.number_of_nodes(), execution_graph.number_of_edges(), nx.density(execution_graph)) missing = set() # Attempt to retain a chain of what was missing (so that the final # raised exception for the flow has the nodes that had missing # dependencies). last_cause = None last_node = None missing_nodes = 0 for atom in self._runtime.iterate_nodes(compiler.ATOMS): exec_missing = self.storage.fetch_unsatisfied_args( atom.name, atom.rebind, optional_args=atom.optional) revert_missing = self.storage.fetch_unsatisfied_args( atom.name, atom.revert_rebind, optional_args=atom.revert_optional) atom_missing = (('execute', exec_missing), ('revert', revert_missing)) for method, method_missing in atom_missing: if method_missing: cause = exc.MissingDependencies(atom, sorted(method_missing), cause=last_cause, method=method) last_cause = cause last_node = atom missing_nodes += 1 missing.update(method_missing) if missing: # For when a task is provided (instead of a flow) and that # task is the only item in the graph and its missing deps, avoid # re-wrapping it in yet another exception... 
if missing_nodes == 1 and last_node is self._flow: raise last_cause else: raise exc.MissingDependencies(self._flow, sorted(missing), cause=last_cause) self._validated = True @fasteners.locked @_pre_check(check_storage_ensured=False, check_validated=False) def prepare(self): if not self._storage_ensured: # Set our own state to resuming -> (ensure atoms exist # in storage) -> suspended in the storage unit and notify any # attached listeners of these changes. self._change_state(states.RESUMING) self._ensure_storage() self._change_state(states.SUSPENDED) self._storage_ensured = True # Reset everything back to pending (if we were previously reverted). if self.storage.get_flow_state() == states.REVERTED: self.reset() @fasteners.locked @_pre_check(check_validated=False) def reset(self): # This transitions *all* contained atoms back into the PENDING state # with an intention to EXECUTE (or dies trying to do that) and then # changes the state of the flow to PENDING so that it can then run... self._runtime.reset_all() self._change_state(states.PENDING) @fasteners.locked def compile(self): if self._compiled: return self._compilation = self._check_compilation(self._compiler.compile()) self._runtime = runtime.Runtime(self._compilation, self.storage, self.atom_notifier, self._task_executor, self._retry_executor, options=self._options) self._runtime.compile() self._compiled = True class SerialActionEngine(ActionEngine): """Engine that runs tasks in serial manner.""" def __init__(self, flow, flow_detail, backend, options): super(SerialActionEngine, self).__init__(flow, flow_detail, backend, options) self._task_executor = executor.SerialTaskExecutor() class _ExecutorTypeMatch(collections.namedtuple('_ExecutorTypeMatch', ['types', 'executor_cls'])): def matches(self, executor): return isinstance(executor, self.types) class _ExecutorTextMatch(collections.namedtuple('_ExecutorTextMatch', ['strings', 'executor_cls'])): def matches(self, text): return text.lower() in self.strings class ParallelActionEngine(ActionEngine): """Engine that runs tasks in parallel manner. **Additional engine options:** * ``executor``: a object that implements a :pep:`3148` compatible executor interface; it will be used for scheduling tasks. The following type are applicable (other unknown types passed will cause a type error to be raised). ========================= =============================================== Type provided Executor used ========================= =============================================== |cft|.ThreadPoolExecutor :class:`~.executor.ParallelThreadTaskExecutor` |cfp|.ProcessPoolExecutor :class:`~.|pe|.ParallelProcessTaskExecutor` |cf|._base.Executor :class:`~.executor.ParallelThreadTaskExecutor` ========================= =============================================== * ``executor``: a string that will be used to select a :pep:`3148` compatible executor; it will be used for scheduling tasks. The following string are applicable (other unknown strings passed will cause a value error to be raised). 
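# Illustrative sketch (tasks are made up): selecting this engine and one of
# the executor strings from the table that follows; max_workers bounds the
# pool that gets created for dispatching tasks.
import taskflow.engines
from taskflow.patterns import unordered_flow
from taskflow import task


class Work(task.Task):
    def execute(self):
        return self.name


flow = unordered_flow.Flow('uf').add(
    *[Work(name='w%d' % i) for i in range(4)])
taskflow.engines.run(flow, engine='parallel',
                     executor='threads', max_workers=2)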
=========================== =============================================== String (case insensitive) Executor used =========================== =============================================== ``process`` :class:`~.|pe|.ParallelProcessTaskExecutor` ``processes`` :class:`~.|pe|.ParallelProcessTaskExecutor` ``thread`` :class:`~.executor.ParallelThreadTaskExecutor` ``threaded`` :class:`~.executor.ParallelThreadTaskExecutor` ``threads`` :class:`~.executor.ParallelThreadTaskExecutor` ``greenthread`` :class:`~.executor.ParallelThreadTaskExecutor` (greened version) ``greedthreaded`` :class:`~.executor.ParallelThreadTaskExecutor` (greened version) ``greenthreads`` :class:`~.executor.ParallelThreadTaskExecutor` (greened version) =========================== =============================================== * ``max_workers``: a integer that will affect the number of parallel workers that are used to dispatch tasks into (this number is bounded by the maximum parallelization your workflow can support). * ``wait_timeout``: a float (in seconds) that will affect the parallel process task executor (and therefore is **only** applicable when the executor provided above is of the process variant). This number affects how much time the process task executor waits for messages from child processes (typically indicating they have finished or failed). A lower number will have high granularity but *currently* involves more polling while a higher number will involve less polling but a slower time for an engine to notice a task has completed. .. |pe| replace:: process_executor .. |cfp| replace:: concurrent.futures.process .. |cft| replace:: concurrent.futures.thread .. |cf| replace:: concurrent.futures """ # One of these types should match when a object (non-string) is provided # for the 'executor' option. # # NOTE(harlowja): the reason we use the library/built-in futures is to # allow for instances of that to be detected and handled correctly, instead # of forcing everyone to use our derivatives (futurist or other)... _executor_cls_matchers = [ _ExecutorTypeMatch((futures.ThreadPoolExecutor,), executor.ParallelThreadTaskExecutor), _ExecutorTypeMatch((futures.ProcessPoolExecutor,), process_executor.ParallelProcessTaskExecutor), _ExecutorTypeMatch((futures.Executor,), executor.ParallelThreadTaskExecutor), ] # One of these should match when a string/text is provided for the # 'executor' option (a mixed case equivalent is allowed since the match # will be lower-cased before checking). _executor_str_matchers = [ _ExecutorTextMatch(frozenset(['processes', 'process']), process_executor.ParallelProcessTaskExecutor), _ExecutorTextMatch(frozenset(['thread', 'threads', 'threaded']), executor.ParallelThreadTaskExecutor), _ExecutorTextMatch(frozenset(['greenthread', 'greenthreads', 'greenthreaded']), executor.ParallelGreenThreadTaskExecutor), ] # Used when no executor is provided (either a string or object)... _default_executor_cls = executor.ParallelThreadTaskExecutor def __init__(self, flow, flow_detail, backend, options): super(ParallelActionEngine, self).__init__(flow, flow_detail, backend, options) # This ensures that any provided executor will be validated before # we get to far in the compilation/execution pipeline... self._task_executor = self._fetch_task_executor(self._options) @classmethod def _fetch_task_executor(cls, options): kwargs = {} executor_cls = cls._default_executor_cls # Match the desired executor to a class that will work with it... 
desired_executor = options.get('executor') if isinstance(desired_executor, six.string_types): matched_executor_cls = None for m in cls._executor_str_matchers: if m.matches(desired_executor): matched_executor_cls = m.executor_cls break if matched_executor_cls is None: expected = set() for m in cls._executor_str_matchers: expected.update(m.strings) raise ValueError("Unknown executor string '%s' expected" " one of %s (or mixed case equivalent)" % (desired_executor, list(expected))) else: executor_cls = matched_executor_cls elif desired_executor is not None: matched_executor_cls = None for m in cls._executor_cls_matchers: if m.matches(desired_executor): matched_executor_cls = m.executor_cls break if matched_executor_cls is None: expected = set() for m in cls._executor_cls_matchers: expected.update(m.types) raise TypeError("Unknown executor '%s' (%s) expected an" " instance of %s" % (desired_executor, type(desired_executor), list(expected))) else: executor_cls = matched_executor_cls kwargs['executor'] = desired_executor try: for (k, value_converter) in executor_cls.constructor_options: try: kwargs[k] = value_converter(options[k]) except KeyError: pass except AttributeError: pass return executor_cls(**kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/executor.py0000664000175000017500000001713100000000000023701 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import futurist import six from taskflow import task as ta from taskflow.types import failure from taskflow.types import notifier # Execution and reversion outcomes. EXECUTED = 'executed' REVERTED = 'reverted' def _execute_retry(retry, arguments): try: result = retry.execute(**arguments) except Exception: result = failure.Failure() return (EXECUTED, result) def _revert_retry(retry, arguments): try: result = retry.revert(**arguments) except Exception: result = failure.Failure() return (REVERTED, result) def _execute_task(task, arguments, progress_callback=None): with notifier.register_deregister(task.notifier, ta.EVENT_UPDATE_PROGRESS, callback=progress_callback): try: task.pre_execute() result = task.execute(**arguments) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. result = failure.Failure() finally: task.post_execute() return (EXECUTED, result) def _revert_task(task, arguments, result, failures, progress_callback=None): arguments = arguments.copy() arguments[ta.REVERT_RESULT] = result arguments[ta.REVERT_FLOW_FAILURES] = failures with notifier.register_deregister(task.notifier, ta.EVENT_UPDATE_PROGRESS, callback=progress_callback): try: task.pre_revert() result = task.revert(**arguments) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. 
result = failure.Failure() finally: task.post_revert() return (REVERTED, result) class SerialRetryExecutor(object): """Executes and reverts retries.""" def __init__(self): self._executor = futurist.SynchronousExecutor() def start(self): """Prepare to execute retries.""" self._executor.restart() def stop(self): """Finalize retry executor.""" self._executor.shutdown() def execute_retry(self, retry, arguments): """Schedules retry execution.""" fut = self._executor.submit(_execute_retry, retry, arguments) fut.atom = retry return fut def revert_retry(self, retry, arguments): """Schedules retry reversion.""" fut = self._executor.submit(_revert_retry, retry, arguments) fut.atom = retry return fut @six.add_metaclass(abc.ABCMeta) class TaskExecutor(object): """Executes and reverts tasks. This class takes task and its arguments and executes or reverts it. It encapsulates knowledge on how task should be executed or reverted: right now, on separate thread, on another machine, etc. """ @abc.abstractmethod def execute_task(self, task, task_uuid, arguments, progress_callback=None): """Schedules task execution.""" @abc.abstractmethod def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): """Schedules task reversion.""" def start(self): """Prepare to execute tasks.""" def stop(self): """Finalize task executor.""" class SerialTaskExecutor(TaskExecutor): """Executes tasks one after another.""" def __init__(self): self._executor = futurist.SynchronousExecutor() def start(self): self._executor.restart() def stop(self): self._executor.shutdown() def execute_task(self, task, task_uuid, arguments, progress_callback=None): fut = self._executor.submit(_execute_task, task, arguments, progress_callback=progress_callback) fut.atom = task return fut def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): fut = self._executor.submit(_revert_task, task, arguments, result, failures, progress_callback=progress_callback) fut.atom = task return fut class ParallelTaskExecutor(TaskExecutor): """Executes tasks in parallel. Submits tasks to an executor which should provide an interface similar to concurrent.Futures.Executor. """ constructor_options = [ ('max_workers', lambda v: v if v is None else int(v)), ] """ Optional constructor keyword arguments this executor supports. These will typically be passed via engine options (by a engine user) and converted into the correct type before being sent into this classes ``__init__`` method. 
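# Illustrative sketch (flow and tasks are made up): passing an existing
# concurrent.futures executor instead of a max_workers count; because the
# executor was supplied by the caller, stop() will not shut it down.
from concurrent import futures

import taskflow.engines
from taskflow.patterns import unordered_flow
from taskflow import task


class Ping(task.Task):
    def execute(self):
        return 'pong'


flow = unordered_flow.Flow('uf').add(Ping(name='p1'), Ping(name='p2'))
with futures.ThreadPoolExecutor(max_workers=2) as pool:
    taskflow.engines.run(flow, engine='parallel', executor=pool)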
""" def __init__(self, executor=None, max_workers=None): self._executor = executor self._max_workers = max_workers self._own_executor = executor is None @abc.abstractmethod def _create_executor(self, max_workers=None): """Called when an executor has not been provided to make one.""" def _submit_task(self, func, task, *args, **kwargs): fut = self._executor.submit(func, task, *args, **kwargs) fut.atom = task return fut def execute_task(self, task, task_uuid, arguments, progress_callback=None): return self._submit_task(_execute_task, task, arguments, progress_callback=progress_callback) def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): return self._submit_task(_revert_task, task, arguments, result, failures, progress_callback=progress_callback) def start(self): if self._own_executor: self._executor = self._create_executor( max_workers=self._max_workers) def stop(self): if self._own_executor: self._executor.shutdown(wait=True) self._executor = None class ParallelThreadTaskExecutor(ParallelTaskExecutor): """Executes tasks in parallel using a thread pool executor.""" def _create_executor(self, max_workers=None): return futurist.ThreadPoolExecutor(max_workers=max_workers) class ParallelGreenThreadTaskExecutor(ParallelThreadTaskExecutor): """Executes tasks in parallel using a greenthread pool executor.""" DEFAULT_WORKERS = 1000 """ Default number of workers when ``None`` is passed; being that greenthreads don't map to native threads or processors very well this is more of a guess/somewhat arbitrary, but it does match what the eventlet greenpool default size is (so at least it's consistent with what eventlet does). """ def _create_executor(self, max_workers=None): if max_workers is None: max_workers = self.DEFAULT_WORKERS return futurist.GreenThreadPoolExecutor(max_workers=max_workers) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/process_executor.py0000664000175000017500000006746700000000000025460 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import asyncore import binascii import collections import errno import functools import hashlib import hmac import math import os import pickle import socket import struct import time import futurist from oslo_utils import excutils import six from taskflow.engines.action_engine import executor as base from taskflow import logging from taskflow import task as ta from taskflow.types import notifier as nt from taskflow.utils import iter_utils from taskflow.utils import misc from taskflow.utils import schema_utils as su from taskflow.utils import threading_utils LOG = logging.getLogger(__name__) # Internal parent <-> child process protocol schema, message constants... 
MAGIC_HEADER = 0xDECAF CHALLENGE = 'identify_yourself' CHALLENGE_RESPONSE = 'worker_reporting_in' ACK = 'ack' EVENT = 'event' SCHEMAS = { # Basic jsonschemas for verifying that the data we get back and # forth from parent <-> child observes at least a basic expected # format. CHALLENGE: { "type": "string", "minLength": 1, }, ACK: { "type": "string", "minLength": 1, }, CHALLENGE_RESPONSE: { "type": "string", "minLength": 1, }, EVENT: { "type": "object", "properties": { 'event_type': { "type": "string", }, 'sent_on': { "type": "number", }, }, "required": ['event_type', 'sent_on'], "additionalProperties": True, }, } # See http://bugs.python.org/issue1457119 for why this is so complex... _DECODE_ENCODE_ERRORS = [pickle.PickleError, TypeError] try: import cPickle _DECODE_ENCODE_ERRORS.append(cPickle.PickleError) del cPickle except (ImportError, AttributeError): pass _DECODE_ENCODE_ERRORS = tuple(_DECODE_ENCODE_ERRORS) # Use the best pickle from here on out... from six.moves import cPickle as pickle class UnknownSender(Exception): """Exception raised when message from unknown sender is recvd.""" class ChallengeIgnored(Exception): """Exception raised when challenge has not been responded to.""" class Reader(object): """Reader machine that streams & parses messages that it then dispatches. TODO(harlowja): Use python-suitcase in the future when the following are addressed/resolved and released: - https://github.com/digidotcom/python-suitcase/issues/28 - https://github.com/digidotcom/python-suitcase/issues/29 Binary format format is the following (no newlines in actual format):: (4 bytes) (4 bytes) (1 or more variable bytes) (4 bytes) (1 or more variable bytes) (4 bytes) (1 or more variable bytes) """ #: Per state memory initializers. _INITIALIZERS = { 'magic_header_left': 4, 'mac_header_left': 4, 'identity_header_left': 4, 'msg_header_left': 4, } #: Linear steps/transitions (order matters here). _TRANSITIONS = tuple([ 'magic_header_left', 'mac_header_left', 'mac_left', 'identity_header_left', 'identity_left', 'msg_header_left', 'msg_left', ]) def __init__(self, auth_key, dispatch_func, msg_limit=-1): if not six.callable(dispatch_func): raise ValueError("Expected provided dispatch function" " to be callable") self.auth_key = auth_key self.dispatch_func = dispatch_func msg_limiter = iter_utils.iter_forever(msg_limit) self.msg_count = six.next(msg_limiter) self._msg_limiter = msg_limiter self._buffer = misc.BytesIO() self._state = None # Local machine variables and such are stored in here. self._memory = {} self._transitions = collections.deque(self._TRANSITIONS) # This is the per state callback handler set. The first entry reads # the data and the second entry is called after reading is completed, # typically to save that data into object memory, or to validate # it. self._handlers = { 'magic_header_left': (self._read_field_data, self._save_and_validate_magic), 'mac_header_left': (self._read_field_data, functools.partial(self._save_pos_integer, 'mac_left')), 'mac_left': (functools.partial(self._read_data, 'mac'), functools.partial(self._save_data, 'mac')), 'identity_header_left': (self._read_field_data, functools.partial(self._save_pos_integer, 'identity_left')), 'identity_left': (functools.partial(self._read_data, 'identity'), functools.partial(self._save_data, 'identity')), 'msg_header_left': (self._read_field_data, functools.partial(self._save_pos_integer, 'msg_left')), 'msg_left': (functools.partial(self._read_data, 'msg'), self._dispatch_and_reset), } # Force transition into first state... 
self._transition() def _save_pos_integer(self, key_name, data): key_val = struct.unpack("!i", data)[0] if key_val <= 0: raise IOError("Invalid %s length received for key '%s', expected" " greater than zero length" % (key_val, key_name)) self._memory[key_name] = key_val return True def _save_data(self, key_name, data): self._memory[key_name] = data return True def _dispatch_and_reset(self, data): self.dispatch_func( self._memory['identity'], # Lazy evaluate so the message can be thrown out as needed # (instead of the receiver discarding it after the fact)... functools.partial(_decode_message, self.auth_key, data, self._memory['mac'])) self.msg_count = six.next(self._msg_limiter) self._memory.clear() def _transition(self): try: self._state = self._transitions.popleft() except IndexError: self._transitions.extend(self._TRANSITIONS) self._state = self._transitions.popleft() try: self._memory[self._state] = self._INITIALIZERS[self._state] except KeyError: pass self._handle_func, self._post_handle_func = self._handlers[self._state] def _save_and_validate_magic(self, data): magic_header = struct.unpack("!i", data)[0] if magic_header != MAGIC_HEADER: raise IOError("Invalid magic header received, expected 0x%x but" " got 0x%x for message %s" % (MAGIC_HEADER, magic_header, self.msg_count + 1)) self._memory['magic'] = magic_header return True def _read_data(self, save_key_name, data): data_len_left = self._memory[self._state] self._buffer.write(data[0:data_len_left]) if len(data) < data_len_left: data_len_left -= len(data) self._memory[self._state] = data_len_left return '' else: self._memory[self._state] = 0 buf_data = self._buffer.getvalue() self._buffer.reset() self._post_handle_func(buf_data) self._transition() return data[data_len_left:] def _read_field_data(self, data): return self._read_data(self._state, data) @property def bytes_needed(self): return self._memory.get(self._state, 0) def feed(self, data): while len(data): data = self._handle_func(data) class BadHmacValueError(ValueError): """Value error raised when an invalid hmac is discovered.""" def _create_random_string(desired_length): if desired_length <= 0: return b'' data_length = int(math.ceil(desired_length / 2.0)) data = os.urandom(data_length) hex_data = binascii.hexlify(data) return hex_data[0:desired_length] def _calculate_hmac(auth_key, body): mac = hmac.new(auth_key, body, hashlib.md5).hexdigest() if isinstance(mac, six.text_type): mac = mac.encode("ascii") return mac def _encode_message(auth_key, message, identity, reverse=False): message = pickle.dumps(message, 2) message_mac = _calculate_hmac(auth_key, message) pieces = [ struct.pack("!i", MAGIC_HEADER), struct.pack("!i", len(message_mac)), message_mac, struct.pack("!i", len(identity)), identity, struct.pack("!i", len(message)), message, ] if reverse: pieces.reverse() return tuple(pieces) def _decode_message(auth_key, message, message_mac): tmp_message_mac = _calculate_hmac(auth_key, message) if tmp_message_mac != message_mac: raise BadHmacValueError('Invalid message hmac') return pickle.loads(message) class Channel(object): """Object that workers use to communicate back to their creator.""" def __init__(self, port, identity, auth_key): self.identity = identity self.port = port self.auth_key = auth_key self.dead = False self._sent = self._received = 0 self._socket = None self._read_pipe = None self._write_pipe = None def close(self): if self._socket is not None: self._socket.close() self._socket = None self._read_pipe = None self._write_pipe = None def 
_ensure_connected(self): if self._socket is None: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setblocking(1) try: s.connect(("", self.port)) except socket.error as e: with excutils.save_and_reraise_exception(): s.close() if e.errno in (errno.ECONNREFUSED, errno.ENOTCONN, errno.ECONNRESET): # Don't bother with further connections... self.dead = True read_pipe = s.makefile("rb", 0) write_pipe = s.makefile("wb", 0) try: msg = self._do_recv(read_pipe=read_pipe) su.schema_validate(msg, SCHEMAS[CHALLENGE]) if msg != CHALLENGE: raise IOError("Challenge expected not received") else: pieces = _encode_message(self.auth_key, CHALLENGE_RESPONSE, self.identity) self._do_send_and_ack(pieces, write_pipe=write_pipe, read_pipe=read_pipe) except Exception: with excutils.save_and_reraise_exception(): s.close() else: self._socket = s self._read_pipe = read_pipe self._write_pipe = write_pipe def recv(self): self._ensure_connected() return self._do_recv() def _do_recv(self, read_pipe=None): if read_pipe is None: read_pipe = self._read_pipe msg_capture = collections.deque(maxlen=1) msg_capture_func = (lambda _from_who, msg_decoder_func: msg_capture.append(msg_decoder_func())) reader = Reader(self.auth_key, msg_capture_func, msg_limit=1) try: maybe_msg_num = self._received + 1 bytes_needed = reader.bytes_needed while True: blob = read_pipe.read(bytes_needed) if len(blob) != bytes_needed: raise EOFError("Read pipe closed while reading %s" " bytes for potential message %s" % (bytes_needed, maybe_msg_num)) reader.feed(blob) bytes_needed = reader.bytes_needed except StopIteration: pass msg = msg_capture[0] self._received += 1 return msg def _do_send(self, pieces, write_pipe=None): if write_pipe is None: write_pipe = self._write_pipe for piece in pieces: write_pipe.write(piece) write_pipe.flush() def _do_send_and_ack(self, pieces, write_pipe=None, read_pipe=None): self._do_send(pieces, write_pipe=write_pipe) self._sent += 1 msg = self._do_recv(read_pipe=read_pipe) su.schema_validate(msg, SCHEMAS[ACK]) if msg != ACK: raise IOError("Failed receiving ack for sent" " message %s" % self._metrics['sent']) def send(self, message): self._ensure_connected() self._do_send_and_ack(_encode_message(self.auth_key, message, self.identity)) class EventSender(object): """Sends event information from a child worker process to its creator.""" def __init__(self, channel): self._channel = channel self._pid = None def __call__(self, event_type, details): if not self._channel.dead: if self._pid is None: self._pid = os.getpid() message = { 'event_type': event_type, 'details': details, 'sent_on': time.time(), } LOG.trace("Sending %s (from child %s)", message, self._pid) self._channel.send(message) class DispatcherHandler(asyncore.dispatcher): """Dispatches from a single connection into a target.""" #: Read/write chunk size. 
CHUNK_SIZE = 8192 def __init__(self, sock, addr, dispatcher): if six.PY2: asyncore.dispatcher.__init__(self, map=dispatcher.map, sock=sock) else: super(DispatcherHandler, self).__init__(map=dispatcher.map, sock=sock) self.blobs_to_write = list(dispatcher.challenge_pieces) self.reader = Reader(dispatcher.auth_key, self._dispatch) self.targets = dispatcher.targets self.tied_to = None self.challenge_responded = False self.ack_pieces = _encode_message(dispatcher.auth_key, ACK, dispatcher.identity, reverse=True) self.addr = addr def handle_close(self): self.close() def writable(self): return bool(self.blobs_to_write) def handle_write(self): try: blob = self.blobs_to_write.pop() except IndexError: pass else: sent = self.send(blob[0:self.CHUNK_SIZE]) if sent < len(blob): self.blobs_to_write.append(blob[sent:]) def _send_ack(self): self.blobs_to_write.extend(self.ack_pieces) def _dispatch(self, from_who, msg_decoder_func): if not self.challenge_responded: msg = msg_decoder_func() su.schema_validate(msg, SCHEMAS[CHALLENGE_RESPONSE]) if msg != CHALLENGE_RESPONSE: raise ChallengeIgnored("Discarding connection from %s" " challenge was not responded to" % self.addr) else: LOG.trace("Peer %s (%s) has passed challenge sequence", self.addr, from_who) self.challenge_responded = True self.tied_to = from_who self._send_ack() else: if self.tied_to != from_who: raise UnknownSender("Sender %s previously identified as %s" " changed there identity to %s after" " challenge sequence" % (self.addr, self.tied_to, from_who)) try: task = self.targets[from_who] except KeyError: raise UnknownSender("Unknown message from %s (%s) not matched" " to any known target" % (self.addr, from_who)) msg = msg_decoder_func() su.schema_validate(msg, SCHEMAS[EVENT]) if LOG.isEnabledFor(logging.TRACE): msg_delay = max(0, time.time() - msg['sent_on']) LOG.trace("Dispatching message from %s (%s) (it took %0.3f" " seconds for it to arrive for processing after" " being sent)", self.addr, from_who, msg_delay) task.notifier.notify(msg['event_type'], msg.get('details')) self._send_ack() def handle_read(self): data = self.recv(self.CHUNK_SIZE) if len(data) == 0: self.handle_close() else: try: self.reader.feed(data) except (IOError, UnknownSender): LOG.warning("Invalid received message", exc_info=True) self.handle_close() except _DECODE_ENCODE_ERRORS: LOG.warning("Badly formatted message", exc_info=True) self.handle_close() except (ValueError, su.ValidationError): LOG.warning("Failed validating message", exc_info=True) self.handle_close() except ChallengeIgnored: LOG.warning("Failed challenge sequence", exc_info=True) self.handle_close() class Dispatcher(asyncore.dispatcher): """Accepts messages received from child worker processes.""" #: See https://docs.python.org/2/library/socket.html#socket.socket.listen MAX_BACKLOG = 5 def __init__(self, map, auth_key, identity): if six.PY2: asyncore.dispatcher.__init__(self, map=map) else: super(Dispatcher, self).__init__(map=map) self.identity = identity self.challenge_pieces = _encode_message(auth_key, CHALLENGE, identity, reverse=True) self.auth_key = auth_key self.targets = {} @property def port(self): if self.socket is not None: return self.socket.getsockname()[1] else: return None def setup(self): self.targets.clear() self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind(("", 0)) LOG.trace("Accepting dispatch requests on port %s", self.port) self.listen(self.MAX_BACKLOG) def writable(self): return False @property def map(self): return self._map def handle_close(self): if self.socket is not 
None: self.close() def handle_accept(self): pair = self.accept() if pair is not None: sock, addr = pair addr = "%s:%s" % (addr[0], addr[1]) LOG.trace("Potentially accepted new connection from %s", addr) DispatcherHandler(sock, addr, self) class ParallelProcessTaskExecutor(base.ParallelTaskExecutor): """Executes tasks in parallel using a process pool executor. NOTE(harlowja): this executor executes tasks in external processes, so that implies that tasks that are sent to that external process are pickleable since this is how the multiprocessing works (sending pickled objects back and forth) and that the bound handlers (for progress updating in particular) are proxied correctly from that external process to the one that is alive in the parent process to ensure that callbacks registered in the parent are executed on events in the child. """ #: Default timeout used by asyncore io loop (and eventually select/poll). WAIT_TIMEOUT = 0.01 constructor_options = [ ('max_workers', lambda v: v if v is None else int(v)), ('wait_timeout', lambda v: v if v is None else float(v)), ] """ Optional constructor keyword arguments this executor supports. These will typically be passed via engine options (by a engine user) and converted into the correct type before being sent into this classes ``__init__`` method. """ def __init__(self, executor=None, max_workers=None, wait_timeout=None): super(ParallelProcessTaskExecutor, self).__init__( executor=executor, max_workers=max_workers) self._auth_key = _create_random_string(32) self._dispatcher = Dispatcher({}, self._auth_key, _create_random_string(32)) if wait_timeout is None: self._wait_timeout = self.WAIT_TIMEOUT else: if wait_timeout <= 0: raise ValueError("Provided wait timeout must be greater" " than zero and not '%s'" % wait_timeout) self._wait_timeout = wait_timeout # Only created after starting... self._worker = None def _create_executor(self, max_workers=None): return futurist.ProcessPoolExecutor(max_workers=max_workers) def start(self): if threading_utils.is_alive(self._worker): raise RuntimeError("Worker thread must be stopped via stop()" " before starting/restarting") super(ParallelProcessTaskExecutor, self).start() self._dispatcher.setup() self._worker = threading_utils.daemon_thread( asyncore.loop, map=self._dispatcher.map, timeout=self._wait_timeout) self._worker.start() def stop(self): super(ParallelProcessTaskExecutor, self).stop() self._dispatcher.close() if threading_utils.is_alive(self._worker): self._worker.join() self._worker = None def _submit_task(self, func, task, *args, **kwargs): """Submit a function to run the given task (with given args/kwargs). NOTE(harlowja): Adjust all events to be proxies instead since we want those callbacks to be activated in this process, not in the child, also since typically callbacks are functors (or callables) we can not pickle those in the first place... To make sure people understand how this works, the following is a lengthy description of what is going on here, read at will: So to ensure that we are proxying task triggered events that occur in the executed subprocess (which will be created and used by the thing using the multiprocessing based executor) we need to establish a link between that process and this process that ensures that when a event is triggered in that task in that process that a corresponding event is triggered on the original task that was requested to be ran in this process. 
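A minimal sketch of how this proxying surfaces to a user of the library: a task emits progress updates from the child process and a callback registered on the original task instance in the parent receives them. The 'parallel' engine entrypoint and the 'processes' executor option string are assumptions here (not defined in this module), as is running the snippet as a plain script so the task class remains pickleable.

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow

class EmitProgress(task.Task):
    def execute(self):
        # update_progress() fires EVENT_UPDATE_PROGRESS notifications; with a
        # process based executor these fire in the child process and are
        # proxied back to the parent process as described above.
        for progress in (0.25, 0.5, 0.75, 1.0):
            self.update_progress(progress)

def on_progress(event_type, details):
    print("progress ->", details.get('progress'))

emitter = EmitProgress('emitter')
# Register on the *original* task; the clone running in the child forwards
# its events back here through the dispatcher thread described above.
emitter.notifier.register(task.EVENT_UPDATE_PROGRESS, on_progress)
flow = linear_flow.Flow('demo').add(emitter)
# The engine/executor option strings below are assumptions, not taken from
# this file.
engine = engines.load(flow, engine='parallel', executor='processes',
                      max_workers=2)
engine.run()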
To accomplish this we have to create a copy of the task (without any listeners) and then reattach a new set of listeners that will now instead of calling the desired listeners just place messages for this process (a dispatcher thread that is created in this class) to dispatch to the original task (using a common accepting socket and per task sender socket that is used and associated to know which task to proxy back too, since it is possible that there many be *many* subprocess running at the same time). Once the subprocess task has finished execution, the executor will then trigger a callback that will remove the task + target from the dispatcher (which will stop any further proxying back to the original task). """ progress_callback = kwargs.pop('progress_callback', None) clone = task.copy(retain_listeners=False) identity = _create_random_string(32) channel = Channel(self._dispatcher.port, identity, self._auth_key) def rebind_task(): # Creates and binds proxies for all events the task could receive # so that when the clone runs in another process that this task # can receive the same notifications (thus making it look like the # the notifications are transparently happening in this process). proxy_event_types = set() for (event_type, listeners) in task.notifier.listeners_iter(): if listeners: proxy_event_types.add(event_type) if progress_callback is not None: proxy_event_types.add(ta.EVENT_UPDATE_PROGRESS) if nt.Notifier.ANY in proxy_event_types: # NOTE(harlowja): If ANY is present, just have it be # the **only** event registered, as all other events will be # sent if ANY is registered (due to the nature of ANY sending # all the things); if we also include the other event types # in this set if ANY is present we will receive duplicate # messages in this process (the one where the local # task callbacks are being triggered). For example the # emissions of the tasks notifier (that is running out # of process) will for specific events send messages for # its ANY event type **and** the specific event # type (2 messages, when we just want one) which will # cause > 1 notify() call on the local tasks notifier, which # causes more local callback triggering than we want # to actually happen. proxy_event_types = set([nt.Notifier.ANY]) if proxy_event_types: # This sender acts as our forwarding proxy target, it # will be sent pickled to the process that will execute # the needed task and it will do the work of using the # channel object to send back messages to this process for # dispatch into the local task. 
sender = EventSender(channel) for event_type in proxy_event_types: clone.notifier.register(event_type, sender) return bool(proxy_event_types) def register(): if progress_callback is not None: task.notifier.register(ta.EVENT_UPDATE_PROGRESS, progress_callback) self._dispatcher.targets[identity] = task def deregister(fut=None): if progress_callback is not None: task.notifier.deregister(ta.EVENT_UPDATE_PROGRESS, progress_callback) self._dispatcher.targets.pop(identity, None) should_register = rebind_task() if should_register: register() try: fut = self._executor.submit(func, clone, *args, **kwargs) except RuntimeError: with excutils.save_and_reraise_exception(): if should_register: deregister() fut.atom = task if should_register: fut.add_done_callback(deregister) return fut ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/runtime.py0000664000175000017500000003302500000000000023526 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import functools from futurist import waiters from taskflow import deciders as de from taskflow.engines.action_engine.actions import retry as ra from taskflow.engines.action_engine.actions import task as ta from taskflow.engines.action_engine import builder as bu from taskflow.engines.action_engine import compiler as com from taskflow.engines.action_engine import completer as co from taskflow.engines.action_engine import scheduler as sched from taskflow.engines.action_engine import scopes as sc from taskflow.engines.action_engine import selector as se from taskflow.engines.action_engine import traversal as tr from taskflow import exceptions as exc from taskflow import logging from taskflow import states as st from taskflow.utils import misc from taskflow.flow import (LINK_DECIDER, LINK_DECIDER_DEPTH) # noqa # Small helper to make the edge decider tuples more easily useable... _EdgeDecider = collections.namedtuple('_EdgeDecider', 'from_node,kind,decider,depth') LOG = logging.getLogger(__name__) class Runtime(object): """A aggregate of runtime objects, properties, ... used during execution. This object contains various utility methods and properties that represent the collection of runtime components and functionality needed for an action engine to run to completion. """ def __init__(self, compilation, storage, atom_notifier, task_executor, retry_executor, options=None): self._atom_notifier = atom_notifier self._task_executor = task_executor self._retry_executor = retry_executor self._storage = storage self._compilation = compilation self._atom_cache = {} self._options = misc.safe_copy_dict(options) def _walk_edge_deciders(self, graph, atom): """Iterates through all nodes, deciders that alter atoms execution.""" # This is basically a reverse breadth first exploration, with # special logic to further traverse down flow nodes as needed... 
predecessors_iter = graph.predecessors nodes = collections.deque((u_node, atom) for u_node in predecessors_iter(atom)) visited = set() while nodes: u_node, v_node = nodes.popleft() u_node_kind = graph.nodes[u_node]['kind'] u_v_data = graph.adj[u_node][v_node] try: decider = u_v_data[LINK_DECIDER] decider_depth = u_v_data.get(LINK_DECIDER_DEPTH) if decider_depth is None: decider_depth = de.Depth.ALL yield _EdgeDecider(u_node, u_node_kind, decider, decider_depth) except KeyError: pass if u_node_kind == com.FLOW and u_node not in visited: # Avoid re-exploring the same flow if we get to this same # flow by a different *future* path... visited.add(u_node) # Since we *currently* jump over flow node(s), we need to make # sure that any prior decider that was directed at this flow # node also gets used during future decisions about this # atom node. nodes.extend((u_u_node, u_node) for u_u_node in predecessors_iter(u_node)) def compile(self): """Compiles & caches frequently used execution helper objects. Build out a cache of commonly used item that are associated with the contained atoms (by name), and are useful to have for quick lookup on (for example, the change state handler function for each atom, the scope walker object for each atom, the task or retry specific scheduler and so-on). """ change_state_handlers = { com.TASK: functools.partial(self.task_action.change_state, progress=0.0), com.RETRY: self.retry_action.change_state, } schedulers = { com.RETRY: self.retry_scheduler, com.TASK: self.task_scheduler, } check_transition_handlers = { com.TASK: st.check_task_transition, com.RETRY: st.check_retry_transition, } actions = { com.TASK: self.task_action, com.RETRY: self.retry_action, } graph = self._compilation.execution_graph for node, node_data in graph.nodes(data=True): node_kind = node_data['kind'] if node_kind in com.FLOWS: continue elif node_kind in com.ATOMS: check_transition_handler = check_transition_handlers[node_kind] change_state_handler = change_state_handlers[node_kind] scheduler = schedulers[node_kind] action = actions[node_kind] else: raise exc.CompilationFailure("Unknown node kind '%s'" " encountered" % node_kind) metadata = {} deciders_it = self._walk_edge_deciders(graph, node) walker = sc.ScopeWalker(self.compilation, node, names_only=True) metadata['scope_walker'] = walker metadata['check_transition_handler'] = check_transition_handler metadata['change_state_handler'] = change_state_handler metadata['scheduler'] = scheduler metadata['edge_deciders'] = tuple(deciders_it) metadata['action'] = action LOG.trace("Compiled %s metadata for node %s (%s)", metadata, node.name, node_kind) self._atom_cache[node.name] = metadata # TODO(harlowja): optimize the different decider depths to avoid # repeated full successor searching; this can be done by searching # for the widest depth of parent(s), and limiting the search of # children by the that depth. 
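The edge deciders cached above originate from links created when building a flow; a short sketch of how such a decider (and its depth) can be attached to a graph flow follows. The task and flow names are illustrative only.

from taskflow import deciders
from taskflow import engines
from taskflow import task
from taskflow.patterns import graph_flow

class Provider(task.Task):
    default_provides = 'value'

    def execute(self):
        return 3

class Consumer(task.Task):
    def execute(self, value):
        print("consuming", value)

provider = Provider('provider')
consumer = Consumer('consumer')
flow = graph_flow.Flow('decided').add(provider, consumer)
# The decider receives the providing atom's execution history and returns a
# boolean; decider_depth controls how far an 'ignore' verdict cascades (this
# is the LINK_DECIDER / LINK_DECIDER_DEPTH metadata walked above).
flow.link(provider, consumer,
          decider=lambda history: history['provider'] == 3,
          decider_depth=deciders.Depth.ALL)
engines.run(flow)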
@property def compilation(self): return self._compilation @property def storage(self): return self._storage @property def options(self): return self._options @misc.cachedproperty def selector(self): return se.Selector(self) @misc.cachedproperty def builder(self): return bu.MachineBuilder(self, waiters.wait_for_any) @misc.cachedproperty def completer(self): return co.Completer(self) @misc.cachedproperty def scheduler(self): return sched.Scheduler(self) @misc.cachedproperty def task_scheduler(self): return sched.TaskScheduler(self) @misc.cachedproperty def retry_scheduler(self): return sched.RetryScheduler(self) @misc.cachedproperty def retry_action(self): return ra.RetryAction(self._storage, self._atom_notifier, self._retry_executor) @misc.cachedproperty def task_action(self): return ta.TaskAction(self._storage, self._atom_notifier, self._task_executor) def _fetch_atom_metadata_entry(self, atom_name, metadata_key): return self._atom_cache[atom_name][metadata_key] def check_atom_transition(self, atom, current_state, target_state): """Checks if the atom can transition to the provided target state.""" # This does not check if the name exists (since this is only used # internally to the engine, and is not exposed to atoms that will # not exist and therefore doesn't need to handle that case). check_transition_handler = self._fetch_atom_metadata_entry( atom.name, 'check_transition_handler') return check_transition_handler(current_state, target_state) def fetch_edge_deciders(self, atom): """Fetches the edge deciders for the given atom.""" # This does not check if the name exists (since this is only used # internally to the engine, and is not exposed to atoms that will # not exist and therefore doesn't need to handle that case). return self._fetch_atom_metadata_entry(atom.name, 'edge_deciders') def fetch_scheduler(self, atom): """Fetches the cached specific scheduler for the given atom.""" # This does not check if the name exists (since this is only used # internally to the engine, and is not exposed to atoms that will # not exist and therefore doesn't need to handle that case). return self._fetch_atom_metadata_entry(atom.name, 'scheduler') def fetch_action(self, atom): """Fetches the cached action handler for the given atom.""" metadata = self._atom_cache[atom.name] return metadata['action'] def fetch_scopes_for(self, atom_name): """Fetches a walker of the visible scopes for the given atom.""" try: return self._fetch_atom_metadata_entry(atom_name, 'scope_walker') except KeyError: # This signals to the caller that there is no walker for whatever # atom name was given that doesn't really have any associated atom # known to be named with that name; this is done since the storage # layer will call into this layer to fetch a scope for a named # atom and users can provide random names that do not actually # exist... return None # Various helper methods used by the runtime components; not for public # consumption... def iterate_retries(self, state=None): """Iterates retry atoms that match the provided state. If no state is provided it will yield back all retry atoms. 
""" if state: atoms = list(self.iterate_nodes((com.RETRY,))) atom_states = self._storage.get_atoms_states(atom.name for atom in atoms) for atom in atoms: atom_state, _atom_intention = atom_states[atom.name] if atom_state == state: yield atom else: for atom in self.iterate_nodes((com.RETRY,)): yield atom def iterate_nodes(self, allowed_kinds): """Yields back all nodes of specified kinds in the execution graph.""" graph = self._compilation.execution_graph for node, node_data in graph.nodes(data=True): if node_data['kind'] in allowed_kinds: yield node def is_success(self): """Checks if all atoms in the execution graph are in 'happy' state.""" atoms = list(self.iterate_nodes(com.ATOMS)) atom_states = self._storage.get_atoms_states(atom.name for atom in atoms) for atom in atoms: atom_state, _atom_intention = atom_states[atom.name] if atom_state == st.IGNORE: continue if atom_state != st.SUCCESS: return False return True def find_retry(self, node): """Returns the retry atom associated to the given node (or none).""" graph = self._compilation.execution_graph return graph.nodes[node].get(com.RETRY) def reset_atoms(self, atoms, state=st.PENDING, intention=st.EXECUTE): """Resets all the provided atoms to the given state and intention.""" tweaked = [] for atom in atoms: if state or intention: tweaked.append((atom, state, intention)) if state: change_state_handler = self._fetch_atom_metadata_entry( atom.name, 'change_state_handler') change_state_handler(atom, state) if intention: self.storage.set_atom_intention(atom.name, intention) return tweaked def reset_all(self, state=st.PENDING, intention=st.EXECUTE): """Resets all atoms to the given state and intention.""" return self.reset_atoms(self.iterate_nodes(com.ATOMS), state=state, intention=intention) def reset_subgraph(self, atom, state=st.PENDING, intention=st.EXECUTE): """Resets a atoms subgraph to the given state and intention. The subgraph is contained of **all** of the atoms successors. """ execution_graph = self._compilation.execution_graph atoms_it = tr.depth_first_iterate(execution_graph, atom, tr.Direction.FORWARD) return self.reset_atoms(atoms_it, state=state, intention=intention) def retry_subflow(self, retry): """Prepares a retrys + its subgraph for execution. This sets the retrys intention to ``EXECUTE`` and resets all of its subgraph (its successors) to the ``PENDING`` state with an ``EXECUTE`` intention. """ tweaked = self.reset_atoms([retry], state=None, intention=st.EXECUTE) tweaked.extend(self.reset_subgraph(retry)) return tweaked ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/scheduler.py0000664000175000017500000000772000000000000024024 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import weakref from taskflow import exceptions as excp from taskflow import states as st from taskflow.types import failure class RetryScheduler(object): """Schedules retry atoms.""" def __init__(self, runtime): self._runtime = weakref.proxy(runtime) self._retry_action = runtime.retry_action self._storage = runtime.storage def schedule(self, retry): """Schedules the given retry atom for *future* completion. Depending on the atoms stored intention this may schedule the retry atom for reversion or execution. """ intention = self._storage.get_atom_intention(retry.name) if intention == st.EXECUTE: return self._retry_action.schedule_execution(retry) elif intention == st.REVERT: return self._retry_action.schedule_reversion(retry) elif intention == st.RETRY: self._retry_action.change_state(retry, st.RETRYING) # This will force the subflow to start processing right *after* # this retry atom executes (since they will be blocked on their # predecessor getting out of the RETRYING/RUNNING state). self._runtime.retry_subflow(retry) return self._retry_action.schedule_execution(retry) else: raise excp.ExecutionFailure("Unknown how to schedule retry with" " intention: %s" % intention) class TaskScheduler(object): """Schedules task atoms.""" def __init__(self, runtime): self._storage = runtime.storage self._task_action = runtime.task_action def schedule(self, task): """Schedules the given task atom for *future* completion. Depending on the atoms stored intention this may schedule the task atom for reversion or execution. """ intention = self._storage.get_atom_intention(task.name) if intention == st.EXECUTE: return self._task_action.schedule_execution(task) elif intention == st.REVERT: return self._task_action.schedule_reversion(task) else: raise excp.ExecutionFailure("Unknown how to schedule task with" " intention: %s" % intention) class Scheduler(object): """Safely schedules atoms using a runtime ``fetch_scheduler`` routine.""" def __init__(self, runtime): self._runtime = weakref.proxy(runtime) def schedule(self, atoms): """Schedules the provided atoms for *future* completion. This method should schedule a future for each atom provided and return a set of those futures to be waited on (or used for other similar purposes). It should also return any failure objects that represented scheduling failures that may have occurred during this scheduling process. """ futures = set() for atom in atoms: scheduler = self._runtime.fetch_scheduler(atom) try: futures.add(scheduler.schedule(atom)) except Exception: # Immediately stop scheduling future work so that we can # exit execution early (rather than later) if a single atom # fails to schedule correctly. return (futures, [failure.Failure()]) return (futures, []) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/scopes.py0000664000175000017500000001247000000000000023340 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from taskflow.engines.action_engine import compiler as co from taskflow.engines.action_engine import traversal as tr from taskflow import logging LOG = logging.getLogger(__name__) class ScopeWalker(object): """Walks through the scopes of an atom using an engine's compilation. NOTE(harlowja): for internal usage only. This will walk the visible scopes that are accessible for the given atom, which can be used by some external entity in some meaningful way, for example to find dependent values... """ def __init__(self, compilation, atom, names_only=False): self._node = compilation.hierarchy.find(atom) if self._node is None: raise ValueError("Unable to find atom '%s' in compilation" " hierarchy" % atom) self._level_cache = {} self._atom = atom self._execution_graph = compilation.execution_graph self._names_only = names_only self._predecessors = None def __iter__(self): """Iterates over the visible scopes. How this works is the following: We first grab all the predecessors of the given atom (let's call it ``Y``) by using the :py:class:`~.compiler.Compilation` execution graph (and doing a reverse breadth-first expansion to gather its predecessors); this is useful since we know they will *always* exist (and execute) before this atom, but it does not tell us the corresponding scope *level* (flow, nested flow...) that each predecessor was created in, so we need to find this information. For that information we consult the location of the atom ``Y`` in the :py:class:`~.compiler.Compilation` hierarchy/tree. We look up, in reverse order, the parent ``X`` of ``Y`` and traverse backwards from the index in the parent where ``Y`` exists to all siblings (and children of those siblings) in ``X`` that we encounter in this backwards search (if a sibling is a flow itself, its atom(s) will be recursively expanded and included). This collection will then be assumed to be at the same scope. This is what is called a *potential* single scope; to make an *actual* scope we remove the items from the *potential* scope that are **not** predecessors of ``Y``, forming the *actual* scope which we then yield back. Then for additional scopes we continue up the tree, by finding the parent of ``X`` (let's call it ``Z``) and performing the same operation, going through the children in a reverse manner from the index in parent ``Z`` where ``X`` was located. This forms another *potential* scope which we provide back as an *actual* scope after reducing the potential set to only include predecessors previously gathered. We then repeat this process until we no longer have any parent nodes (aka we have reached the top of the tree) or we run out of predecessors.
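A small illustration of what this scope walking enables for users: a value produced in an outer flow is visible to a task inside a nested flow, because the walker yields the producer while scanning the consumer's enclosing scopes. Names here are illustrative.

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow

class Produce(task.Task):
    default_provides = 'token'

    def execute(self):
        return 'abc123'

class Consume(task.Task):
    def execute(self, token):
        return token.upper()

inner = linear_flow.Flow('inner').add(Consume('consume', provides='loud'))
outer = linear_flow.Flow('outer').add(Produce('produce'), inner)
# 'token' is produced one scope above the consumer; resolution across the
# flow hierarchy is what the scope walker described above provides.
print(engines.run(outer)['loud'])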
""" graph = self._execution_graph if self._predecessors is None: predecessors = set( node for node in graph.bfs_predecessors_iter(self._atom) if graph.nodes[node]['kind'] in co.ATOMS) self._predecessors = predecessors.copy() else: predecessors = self._predecessors.copy() last = self._node for lvl, parent in enumerate(self._node.path_iter(include_self=False)): if not predecessors: break last_idx = parent.index(last.item) try: visible, removals = self._level_cache[lvl] predecessors = predecessors - removals except KeyError: visible = [] removals = set() atom_it = tr.depth_first_reverse_iterate( parent, start_from_idx=last_idx) for atom in atom_it: if atom in predecessors: predecessors.remove(atom) removals.add(atom) visible.append(atom) if not predecessors: break self._level_cache[lvl] = (visible, removals) if LOG.isEnabledFor(logging.TRACE): visible_names = [a.name for a in visible] LOG.trace("Scope visible to '%s' (limited by parent '%s'" " index < %s) is: %s", self._atom, parent.item.name, last_idx, visible_names) if self._names_only: yield [a.name for a in visible] else: yield visible last = parent ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/selector.py0000664000175000017500000002644600000000000023674 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator import weakref from taskflow.engines.action_engine import compiler as co from taskflow.engines.action_engine import deciders from taskflow.engines.action_engine import traversal from taskflow import logging from taskflow import states as st from taskflow.utils import iter_utils LOG = logging.getLogger(__name__) class Selector(object): """Selector that uses a compilation and aids in execution processes. Its primary purpose is to get the next atoms for execution or reversion by utilizing the compilations underlying structures (graphs, nodes and edge relations...) and using this information along with the atom state/states stored in storage to provide other useful functionality to the rest of the runtime system. 
""" def __init__(self, runtime): self._runtime = weakref.proxy(runtime) self._storage = runtime.storage self._execution_graph = runtime.compilation.execution_graph def iter_next_atoms(self, atom=None): """Iterate next atoms to run (originating from atom or all atoms).""" if atom is None: return iter_utils.unique_seen((self._browse_atoms_for_execute(), self._browse_atoms_for_revert()), seen_selector=operator.itemgetter(0)) state = self._storage.get_atom_state(atom.name) intention = self._storage.get_atom_intention(atom.name) if state == st.SUCCESS: if intention == st.REVERT: return iter([ (atom, deciders.NoOpDecider()), ]) elif intention == st.EXECUTE: return self._browse_atoms_for_execute(atom=atom) else: return iter([]) elif state == st.REVERTED: return self._browse_atoms_for_revert(atom=atom) elif state == st.FAILURE: return self._browse_atoms_for_revert() else: return iter([]) def _browse_atoms_for_execute(self, atom=None): """Browse next atoms to execute. This returns a iterator of atoms that *may* be ready to be executed, if given a specific atom, it will only examine the successors of that atom, otherwise it will examine the whole graph. """ if atom is None: atom_it = self._runtime.iterate_nodes(co.ATOMS) else: # NOTE(harlowja): the reason this uses breadth first is so that # when deciders are applied that those deciders can be applied # from top levels to lower levels since lower levels *may* be # able to run even if top levels have deciders that decide to # ignore some atoms... (going deeper first would make this # problematic to determine as top levels can have their deciders # applied **after** going deeper). atom_it = traversal.breadth_first_iterate( self._execution_graph, atom, traversal.Direction.FORWARD) for atom in atom_it: is_ready, late_decider = self._get_maybe_ready_for_execute(atom) if is_ready: yield (atom, late_decider) def _browse_atoms_for_revert(self, atom=None): """Browse next atoms to revert. This returns a iterator of atoms that *may* be ready to be be reverted, if given a specific atom it will only examine the predecessors of that atom, otherwise it will examine the whole graph. """ if atom is None: atom_it = self._runtime.iterate_nodes(co.ATOMS) else: atom_it = traversal.breadth_first_iterate( self._execution_graph, atom, traversal.Direction.BACKWARD, # Stop at the retry boundary (as retries 'control' there # surronding atoms, and we don't want to back track over # them so that they can correctly affect there associated # atoms); we do though need to jump through all tasks since # if a predecessor Y was ignored and a predecessor Z before Y # was not it should be eligible to now revert... through_retries=False) for atom in atom_it: is_ready, late_decider = self._get_maybe_ready_for_revert(atom) if is_ready: yield (atom, late_decider) def _get_maybe_ready(self, atom, transition_to, allowed_intentions, connected_fetcher, ready_checker, decider_fetcher, for_what="?"): def iter_connected_states(): # Lazily iterate over connected states so that ready checkers # can stop early (vs having to consume and check all the # things...) for atom in connected_fetcher(): # TODO(harlowja): make this storage api better, its not # especially clear what the following is doing (mainly # to avoid two calls into storage). atom_states = self._storage.get_atoms_states([atom.name]) yield (atom, atom_states[atom.name]) # NOTE(harlowja): How this works is the following... # # 1. 
First check if the current atom can even transition to the # desired state, if not this atom is definitely not ready to # execute or revert. # 2. Check if the actual atoms intention is in one of the desired/ok # intentions, if it is not there we are still not ready to execute # or revert. # 3. Iterate over (atom, atom_state, atom_intention) for all the # atoms the 'connected_fetcher' callback yields from underlying # storage and direct that iterator into the 'ready_checker' # callback, that callback should then iterate over these entries # and determine if it is ok to execute or revert. # 4. If (and only if) 'ready_checker' returns true, then # the 'decider_fetcher' callback is called to get a late decider # which can (if it desires) affect this ready result (but does # so right before the atom is about to be scheduled). state = self._storage.get_atom_state(atom.name) ok_to_transition = self._runtime.check_atom_transition(atom, state, transition_to) if not ok_to_transition: LOG.trace("Atom '%s' is not ready to %s since it can not" " transition to %s from its current state %s", atom, for_what, transition_to, state) return (False, None) intention = self._storage.get_atom_intention(atom.name) if intention not in allowed_intentions: LOG.trace("Atom '%s' is not ready to %s since its current" " intention %s is not in allowed intentions %s", atom, for_what, intention, allowed_intentions) return (False, None) ok_to_run = ready_checker(iter_connected_states()) if not ok_to_run: return (False, None) else: return (True, decider_fetcher()) def _get_maybe_ready_for_execute(self, atom): """Returns if an atom is *likely* ready to be executed.""" def ready_checker(pred_connected_it): for pred in pred_connected_it: pred_atom, (pred_atom_state, pred_atom_intention) = pred if (pred_atom_state in (st.SUCCESS, st.IGNORE) and pred_atom_intention in (st.EXECUTE, st.IGNORE)): continue LOG.trace("Unable to begin to execute since predecessor" " atom '%s' is in state %s with intention %s", pred_atom, pred_atom_state, pred_atom_intention) return False LOG.trace("Able to let '%s' execute", atom) return True decider_fetcher = lambda: \ deciders.IgnoreDecider( atom, self._runtime.fetch_edge_deciders(atom)) connected_fetcher = lambda: \ traversal.depth_first_iterate(self._execution_graph, atom, # Whether the desired atom # can execute is dependent on its # predecessors outcomes (thus why # we look backwards). traversal.Direction.BACKWARD) # If this atoms current state is able to be transitioned to RUNNING # and its intention is to EXECUTE and all of its predecessors executed # successfully or were ignored then this atom is ready to execute. 
LOG.trace("Checking if '%s' is ready to execute", atom) return self._get_maybe_ready(atom, st.RUNNING, [st.EXECUTE], connected_fetcher, ready_checker, decider_fetcher, for_what='execute') def _get_maybe_ready_for_revert(self, atom): """Returns if an atom is *likely* ready to be reverted.""" def ready_checker(succ_connected_it): for succ in succ_connected_it: succ_atom, (succ_atom_state, _succ_atom_intention) = succ if succ_atom_state not in (st.PENDING, st.REVERTED, st.IGNORE): LOG.trace("Unable to begin to revert since successor" " atom '%s' is in state %s", succ_atom, succ_atom_state) return False LOG.trace("Able to let '%s' revert", atom) return True noop_decider = deciders.NoOpDecider() connected_fetcher = lambda: \ traversal.depth_first_iterate(self._execution_graph, atom, # Whether the desired atom # can revert is dependent on its # successors states (thus why we # look forwards). traversal.Direction.FORWARD) decider_fetcher = lambda: noop_decider # If this atoms current state is able to be transitioned to REVERTING # and its intention is either REVERT or RETRY and all of its # successors are either PENDING or REVERTED then this atom is ready # to revert. LOG.trace("Checking if '%s' is ready to revert", atom) return self._get_maybe_ready(atom, st.REVERTING, [st.REVERT, st.RETRY], connected_fetcher, ready_checker, decider_fetcher, for_what='revert') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/action_engine/traversal.py0000664000175000017500000001071000000000000024042 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import enum from taskflow.engines.action_engine import compiler as co class Direction(enum.Enum): """Traversal direction enum.""" #: Go through successors. FORWARD = 1 #: Go through predecessors. BACKWARD = 2 def _extract_connectors(execution_graph, starting_node, direction, through_flows=True, through_retries=True, through_tasks=True): if direction == Direction.FORWARD: connected_iter = execution_graph.successors else: connected_iter = execution_graph.predecessors connected_to_functors = {} if through_flows: connected_to_functors[co.FLOW] = connected_iter connected_to_functors[co.FLOW_END] = connected_iter if through_retries: connected_to_functors[co.RETRY] = connected_iter if through_tasks: connected_to_functors[co.TASK] = connected_iter return connected_iter(starting_node), connected_to_functors def breadth_first_iterate(execution_graph, starting_node, direction, through_flows=True, through_retries=True, through_tasks=True): """Iterates connected nodes in execution graph (from starting node). Does so in a breadth first manner. Jumps over nodes with ``noop`` attribute (does not yield them back). 
""" initial_nodes_iter, connected_to_functors = _extract_connectors( execution_graph, starting_node, direction, through_flows=through_flows, through_retries=through_retries, through_tasks=through_tasks) q = collections.deque(initial_nodes_iter) while q: node = q.popleft() node_attrs = execution_graph.nodes[node] if not node_attrs.get('noop'): yield node try: node_kind = node_attrs['kind'] connected_to_functor = connected_to_functors[node_kind] except KeyError: pass else: q.extend(connected_to_functor(node)) def depth_first_iterate(execution_graph, starting_node, direction, through_flows=True, through_retries=True, through_tasks=True): """Iterates connected nodes in execution graph (from starting node). Does so in a depth first manner. Jumps over nodes with ``noop`` attribute (does not yield them back). """ initial_nodes_iter, connected_to_functors = _extract_connectors( execution_graph, starting_node, direction, through_flows=through_flows, through_retries=through_retries, through_tasks=through_tasks) stack = list(initial_nodes_iter) while stack: node = stack.pop() node_attrs = execution_graph.nodes[node] if not node_attrs.get('noop'): yield node try: node_kind = node_attrs['kind'] connected_to_functor = connected_to_functors[node_kind] except KeyError: pass else: stack.extend(connected_to_functor(node)) def depth_first_reverse_iterate(node, start_from_idx=-1): """Iterates connected (in reverse) **tree** nodes (from starting node). Jumps through nodes with ``noop`` attribute (does not yield them back). """ # Always go left to right, since right to left is the pattern order # and we want to go backwards and not forwards through that ordering... if start_from_idx == -1: # All of them... children_iter = node.reverse_iter() else: children_iter = reversed(node[0:start_from_idx]) for child in children_iter: if child.metadata.get('noop'): # Jump through these... for grand_child in child.dfs_iter(right_to_left=False): if grand_child.metadata['kind'] in co.ATOMS: yield grand_child.item else: yield child.item ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/base.py0000664000175000017500000001137600000000000020160 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from taskflow.types import notifier from taskflow.utils import misc @six.add_metaclass(abc.ABCMeta) class Engine(object): """Base for all engines implementations. :ivar Engine.notifier: A notification object that will dispatch events that occur related to the flow the engine contains. :ivar atom_notifier: A notification object that will dispatch events that occur related to the atoms the engine contains. 
""" def __init__(self, flow, flow_detail, backend, options): self._flow = flow self._flow_detail = flow_detail self._backend = backend self._options = misc.safe_copy_dict(options) self._notifier = notifier.Notifier() self._atom_notifier = notifier.Notifier() @property def notifier(self): """The flow notifier.""" return self._notifier @property def atom_notifier(self): """The atom notifier.""" return self._atom_notifier @property def options(self): """The options that were passed to this engine on construction.""" return self._options @abc.abstractproperty def storage(self): """The storage unit for this engine.""" @abc.abstractproperty def statistics(self): """A dictionary of runtime statistics this engine has gathered. This dictionary will be empty when the engine has never been ran. When it is running or has ran previously it should have (but may not) have useful and/or informational keys and values when running is underway and/or completed. .. warning:: The keys in this dictionary **should** be some what stable (not changing), but there existence **may** change between major releases as new statistics are gathered or removed so before accessing keys ensure that they actually exist and handle when they do not. """ @abc.abstractmethod def compile(self): """Compiles the contained flow into a internal representation. This internal representation is what the engine will *actually* use to run. If this compilation can not be accomplished then an exception is expected to be thrown with a message indicating why the compilation could not be achieved. """ @abc.abstractmethod def reset(self): """Reset back to the ``PENDING`` state. If a flow had previously ended up (from a prior engine :py:func:`.run`) in the ``FAILURE``, ``SUCCESS`` or ``REVERTED`` states (or for some reason it ended up in an intermediary state) it can be desirable to make it possible to run it again. Calling this method enables that to occur (without causing a state transition failure, which would typically occur if :py:meth:`.run` is called directly without doing a reset). """ @abc.abstractmethod def prepare(self): """Performs any pre-run, but post-compilation actions. NOTE(harlowja): During preparation it is currently assumed that the underlying storage will be initialized, the atoms will be reset and the engine will enter the ``PENDING`` state. """ @abc.abstractmethod def validate(self): """Performs any pre-run, post-prepare validation actions. NOTE(harlowja): During validation all final dependencies will be verified and ensured. This will by default check that all atoms have satisfiable requirements (satisfied by some other provider). """ @abc.abstractmethod def run(self): """Runs the flow in the engine to completion (or die trying).""" @abc.abstractmethod def suspend(self): """Attempts to suspend the engine. If the engine is currently running atoms then this will attempt to suspend future work from being started (currently active atoms can not currently be preempted) and move the engine into a suspend state which can then later be resumed from. """ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/helpers.py0000664000175000017500000002534000000000000020704 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from oslo_utils import importutils from oslo_utils import reflection import six import stevedore.driver from taskflow import exceptions as exc from taskflow import logging from taskflow.persistence import backends as p_backends from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils LOG = logging.getLogger(__name__) # NOTE(imelnikov): this is the entrypoint namespace, not the module namespace. ENGINES_NAMESPACE = 'taskflow.engines' # The default entrypoint engine type looked for when it is not provided. ENGINE_DEFAULT = 'default' def _extract_engine(engine, **kwargs): """Extracts the engine kind and any associated options.""" kind = engine if not kind: kind = ENGINE_DEFAULT # See if it's a URI and if so, extract any further options... options = {} try: uri = misc.parse_uri(kind) except (TypeError, ValueError): pass else: kind = uri.scheme options = misc.merge_uri(uri, options.copy()) # Merge in any leftover **kwargs into the options, this makes it so # that the provided **kwargs override any URI/engine specific # options. options.update(kwargs) return (kind, options) def _fetch_factory(factory_name): try: return importutils.import_class(factory_name) except (ImportError, ValueError) as e: raise ImportError("Could not import factory %r: %s" % (factory_name, e)) def _fetch_validate_factory(flow_factory): if isinstance(flow_factory, six.string_types): factory_fun = _fetch_factory(flow_factory) factory_name = flow_factory else: factory_fun = flow_factory factory_name = reflection.get_callable_name(flow_factory) try: reimported = _fetch_factory(factory_name) assert reimported == factory_fun except (ImportError, AssertionError): raise ValueError('Flow factory %r is not reimportable by name %s' % (factory_fun, factory_name)) return (factory_name, factory_fun) def load(flow, store=None, flow_detail=None, book=None, backend=None, namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, **kwargs): """Load a flow into an engine. This function creates and prepares an engine to run the provided flow. All that is left after this returns is to run the engine with the engines :py:meth:`~taskflow.engines.base.Engine.run` method. Which engine to load is specified via the ``engine`` parameter. It can be a string that names the engine type to use, or a string that is a URI with a scheme that names the engine type to use and further options contained in the URI's host, port, and query parameters... Which storage backend to use is defined by the backend parameter. It can be backend itself, or a dictionary that is passed to :py:func:`~taskflow.persistence.backends.fetch` to obtain a viable backend. 
:param flow: flow to load :param store: dict -- data to put to storage to satisfy flow requirements :param flow_detail: FlowDetail that holds the state of the flow (if one is not provided then one will be created for you in the provided backend) :param book: LogBook to create flow detail in if flow_detail is None :param backend: storage backend to use or configuration that defines it :param namespace: driver namespace for stevedore (or empty for default) :param engine: string engine type or URI string with scheme that contains the engine type and any URI specific components that will become part of the engine options. :param kwargs: arbitrary keyword arguments passed as options (merged with any extracted ``engine``), typically used for any engine specific options that do not fit as any of the existing arguments. :returns: engine """ kind, options = _extract_engine(engine, **kwargs) if isinstance(backend, dict): backend = p_backends.fetch(backend) if flow_detail is None: flow_detail = p_utils.create_flow_detail(flow, book=book, backend=backend) LOG.debug('Looking for %r engine driver in %r', kind, namespace) try: mgr = stevedore.driver.DriverManager( namespace, kind, invoke_on_load=True, invoke_args=(flow, flow_detail, backend, options)) engine = mgr.driver except RuntimeError as e: raise exc.NotFound("Could not find engine '%s'" % (kind), e) else: if store: engine.storage.inject(store) return engine def run(flow, store=None, flow_detail=None, book=None, backend=None, namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, **kwargs): """Run the flow. This function loads the flow into an engine (with the :func:`load() ` function) and runs the engine. The arguments are interpreted as for :func:`load() `. :returns: dictionary of all named results (see :py:meth:`~.taskflow.storage.Storage.fetch_all`) """ engine = load(flow, store=store, flow_detail=flow_detail, book=book, backend=backend, namespace=namespace, engine=engine, **kwargs) engine.run() return engine.storage.fetch_all() def save_factory_details(flow_detail, flow_factory, factory_args, factory_kwargs, backend=None): """Saves the given factories reimportable attributes into the flow detail. This function saves the factory name, arguments, and keyword arguments into the given flow details object and if a backend is provided it will also ensure that the backend saves the flow details after being updated. :param flow_detail: FlowDetail that holds state of the flow to load :param flow_factory: function or string: function that creates the flow :param factory_args: list or tuple of factory positional arguments :param factory_kwargs: dict of factory keyword arguments :param backend: storage backend to use or configuration """ if not factory_args: factory_args = [] if not factory_kwargs: factory_kwargs = {} factory_name, _factory_fun = _fetch_validate_factory(flow_factory) factory_data = { 'factory': { 'name': factory_name, 'args': factory_args, 'kwargs': factory_kwargs, }, } if not flow_detail.meta: flow_detail.meta = factory_data else: flow_detail.meta.update(factory_data) if backend is not None: if isinstance(backend, dict): backend = p_backends.fetch(backend) with contextlib.closing(backend.get_connection()) as conn: conn.update_flow_details(flow_detail) def load_from_factory(flow_factory, factory_args=None, factory_kwargs=None, store=None, book=None, backend=None, namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, **kwargs): """Loads a flow from a factory function into an engine. 
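A compact sketch of these helpers: run() for one-shot execution and load_from_factory() to record the factory path for later resumption. It assumes the factory is defined at module level so it is re-importable by name; all other names are illustrative.

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow

class AddOne(task.Task):
    default_provides = 'total'

    def execute(self, number):
        return number + 1

def make_flow():
    return linear_flow.Flow('adder').add(AddOne('add_one'))

# One-shot helper: load + run + fetch all named results.
results = engines.run(make_flow(), store={'number': 41})
print(results['total'])  # 42

# Factory based loading saves the factory name/args into the flow detail
# metadata so flow_from_detail()/load_from_detail() can rebuild it later.
engine = engines.load_from_factory(make_flow, store={'number': 1})
engine.run()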
Gets flow factory function (or name of it) and creates flow with it. Then, the flow is loaded into an engine with the :func:`load() ` function, and the factory function fully qualified name is saved to flow metadata so that it can be later resumed. :param flow_factory: function or string: function that creates the flow :param factory_args: list or tuple of factory positional arguments :param factory_kwargs: dict of factory keyword arguments Further arguments are interpreted as for :func:`load() `. :returns: engine """ _factory_name, factory_fun = _fetch_validate_factory(flow_factory) if not factory_args: factory_args = [] if not factory_kwargs: factory_kwargs = {} flow = factory_fun(*factory_args, **factory_kwargs) if isinstance(backend, dict): backend = p_backends.fetch(backend) flow_detail = p_utils.create_flow_detail(flow, book=book, backend=backend) save_factory_details(flow_detail, flow_factory, factory_args, factory_kwargs, backend=backend) return load(flow=flow, store=store, flow_detail=flow_detail, book=book, backend=backend, namespace=namespace, engine=engine, **kwargs) def flow_from_detail(flow_detail): """Reloads a flow previously saved. Gets the flow factories name and any arguments and keyword arguments from the flow details metadata, and then calls that factory to recreate the flow. :param flow_detail: FlowDetail that holds state of the flow to load """ try: factory_data = flow_detail.meta['factory'] except (KeyError, AttributeError, TypeError): raise ValueError('Cannot reconstruct flow %s %s: ' 'no factory information saved.' % (flow_detail.name, flow_detail.uuid)) try: factory_fun = _fetch_factory(factory_data['name']) except (KeyError, ImportError): raise ImportError('Could not import factory for flow %s %s' % (flow_detail.name, flow_detail.uuid)) args = factory_data.get('args', ()) kwargs = factory_data.get('kwargs', {}) return factory_fun(*args, **kwargs) def load_from_detail(flow_detail, store=None, backend=None, namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, **kwargs): """Reloads an engine previously saved. This reloads the flow using the :func:`flow_from_detail() ` function and then calls into the :func:`load() ` function to create an engine from that flow. :param flow_detail: FlowDetail that holds state of the flow to load Further arguments are interpreted as for :func:`load() `. :returns: engine """ flow = flow_from_detail(flow_detail) return load(flow, flow_detail=flow_detail, store=store, backend=backend, namespace=namespace, engine=engine, **kwargs) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6120412 taskflow-4.6.4/taskflow/engines/worker_based/0000775000175000017500000000000000000000000021333 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/__init__.py0000664000175000017500000000000000000000000023432 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/dispatcher.py0000664000175000017500000001575100000000000024044 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from kombu import exceptions as kombu_exc from taskflow import exceptions as excp from taskflow import logging from taskflow.utils import kombu_utils as ku LOG = logging.getLogger(__name__) class Handler(object): """Component(s) that will be called on reception of messages.""" __slots__ = ['_process_message', '_validator'] def __init__(self, process_message, validator=None): self._process_message = process_message self._validator = validator @property def process_message(self): """Main callback that is called to process a received message. This is only called after the format has been validated (using the ``validator`` callback if applicable) and only after the message has been acknowledged. """ return self._process_message @property def validator(self): """Optional callback that will be activated before processing. This callback if present is expected to validate the message and raise :py:class:`~taskflow.exceptions.InvalidFormat` if the message is not valid. """ return self._validator class TypeDispatcher(object): """Receives messages and dispatches to type specific handlers.""" def __init__(self, type_handlers=None, requeue_filters=None): if type_handlers is not None: self._type_handlers = dict(type_handlers) else: self._type_handlers = {} if requeue_filters is not None: self._requeue_filters = list(requeue_filters) else: self._requeue_filters = [] @property def type_handlers(self): """Dictionary of message type -> callback to handle that message. The callback(s) will be activated by looking for a message property 'type' and locating a callback in this dictionary that maps to that type; if one is found it is expected to be a callback that accepts two positional parameters; the first being the message data and the second being the message object. If a callback is not found then the message is rejected and it will be up to the underlying message transport to determine what this means/implies... """ return self._type_handlers @property def requeue_filters(self): """List of filters (callbacks) to request a message to be requeued. The callback(s) will be activated before the message has been acked and it can be used to instruct the dispatcher to requeue the message instead of processing it. The callback, when called, will be provided two positional parameters; the first being the message data and the second being the message object. Using these provided parameters the filter should return a truthy object if the message should be requeued and a falsey object if it should not. """ return self._requeue_filters def _collect_requeue_votes(self, data, message): # Returns how many of the filters asked for the message to be requeued. requeue_votes = 0 for i, cb in enumerate(self._requeue_filters): try: if cb(data, message): requeue_votes += 1 except Exception: LOG.exception("Failed calling requeue filter %s '%s' to" " determine if message %r should be requeued.", i + 1, cb, message.delivery_tag) return requeue_votes def _requeue_log_error(self, message, errors): # TODO(harlowja): Remove when http://github.com/celery/kombu/pull/372 # is merged and a version is released with this change... 
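A rough sketch of wiring a TypeDispatcher with a single handler and a requeue filter. The 'NOTIFY' message type mirrors the worker-based protocol module and is an assumption here; on_message() would normally be registered as the kombu consumer callback rather than being called directly.

from taskflow.engines.worker_based import dispatcher

def handle_notify(data, message):
    print("notify received:", data)

type_dispatcher = dispatcher.TypeDispatcher(
    type_handlers={
        'NOTIFY': dispatcher.Handler(handle_notify),
    },
    # Any filter returning a truthy value causes the message to be requeued
    # instead of being acknowledged and processed.
    requeue_filters=[lambda data, message: data.get('defer', False)],
)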
try: message.requeue() except errors as exc: # This was taken from how kombu is formatting its messages # when its reject_log_error or ack_log_error functions are # used so that we have a similar error format for requeuing. LOG.critical("Couldn't requeue %r, reason:%r", message.delivery_tag, exc, exc_info=True) else: LOG.debug("Message '%s' was requeued.", ku.DelayedPretty(message)) def _process_message(self, data, message, message_type): handler = self._type_handlers.get(message_type) if handler is None: message.reject_log_error(logger=LOG, errors=(kombu_exc.MessageStateError,)) LOG.warning("Unexpected message type: '%s' in message" " '%s'", message_type, ku.DelayedPretty(message)) else: if handler.validator is not None: try: handler.validator(data) except excp.InvalidFormat as e: message.reject_log_error( logger=LOG, errors=(kombu_exc.MessageStateError,)) LOG.warning("Message '%s' (%s) was rejected due to it" " being in an invalid format: %s", ku.DelayedPretty(message), message_type, e) return message.ack_log_error(logger=LOG, errors=(kombu_exc.MessageStateError,)) if message.acknowledged: LOG.debug("Message '%s' was acknowledged.", ku.DelayedPretty(message)) handler.process_message(data, message) else: message.reject_log_error(logger=LOG, errors=(kombu_exc.MessageStateError,)) def on_message(self, data, message): """This method is called on incoming messages.""" LOG.debug("Received message '%s'", ku.DelayedPretty(message)) if self._collect_requeue_votes(data, message): self._requeue_log_error(message, errors=(kombu_exc.MessageStateError,)) else: try: message_type = message.properties['type'] except KeyError: message.reject_log_error( logger=LOG, errors=(kombu_exc.MessageStateError,)) LOG.warning("The 'type' message property is missing" " in message '%s'", ku.DelayedPretty(message)) else: self._process_message(data, message, message_type) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/endpoint.py0000664000175000017500000000326300000000000023531 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import reflection from taskflow.engines.action_engine import executor class Endpoint(object): """Represents a single task with execute/revert methods.""" def __init__(self, task_cls): self._task_cls = task_cls self._task_cls_name = reflection.get_class_name(task_cls) self._executor = executor.SerialTaskExecutor() def __str__(self): return self._task_cls_name @property def name(self): return self._task_cls_name def generate(self, name=None): # NOTE(skudriashev): Note that task is created here with the `name` # argument passed to its constructor. This will be a problem when # task's constructor requires any other arguments. 
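# For example (hypothetical), a task whose constructor is
# ``def __init__(self, name, db_url)`` could not be generated by this
# endpoint, since only ``name`` is supplied here.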
return self._task_cls(name=name) def execute(self, task, **kwargs): event, result = self._executor.execute_task(task, **kwargs).result() return result def revert(self, task, **kwargs): event, result = self._executor.revert_task(task, **kwargs).result() return result ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/engine.py0000664000175000017500000001043300000000000023153 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.engines.action_engine import engine from taskflow.engines.worker_based import executor from taskflow.engines.worker_based import protocol as pr class WorkerBasedActionEngine(engine.ActionEngine): """Worker based action engine. Specific backend options (extracted from provided engine options): :param exchange: broker exchange name in which executor / worker communication is performed :param url: broker connection url (see format in kombu documentation) :param topics: list of worker topics to communicate with (this will also be learned by listening to the notifications that workers emit). :param transport: transport to be used (e.g. amqp, memory, etc.) :param transition_timeout: numeric value (or None for infinite) to wait for submitted remote requests to transition out of the (PENDING, WAITING) request states. When expired, the associated task the request was made for will have its result become a :py:class:`~taskflow.exceptions.RequestTimeout` exception instead of its normally returned value (or raised exception). :param transport_options: transport specific options (see: http://kombu.readthedocs.org/ for what these options imply and are expected to be) :param retry_options: retry specific options (see: :py:attr:`~.proxy.Proxy.DEFAULT_RETRY_OPTIONS`) :param worker_expiry: numeric value (or negative/zero/None for infinite) that defines the number of seconds to continue to send messages to workers that have **not** responded back to a prior notification/ping request (this defaults to 60 seconds). """ def __init__(self, flow, flow_detail, backend, options): super(WorkerBasedActionEngine, self).__init__(flow, flow_detail, backend, options) # This ensures that any provided executor will be validated before # we get too far in the compilation/execution pipeline...
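# Either reuse a validated, user-provided 'executor' option or build a
# new WorkerTaskExecutor from the remaining options (see
# _fetch_task_executor below).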
self._task_executor = self._fetch_task_executor(self._options, self._flow_detail) @classmethod def _fetch_task_executor(cls, options, flow_detail): try: e = options['executor'] if not isinstance(e, executor.WorkerTaskExecutor): raise TypeError("Expected an instance of type '%s' instead of" " type '%s' for 'executor' option" % (executor.WorkerTaskExecutor, type(e))) return e except KeyError: return executor.WorkerTaskExecutor( uuid=flow_detail.uuid, url=options.get('url'), exchange=options.get('exchange', 'default'), retry_options=options.get('retry_options'), topics=options.get('topics', []), transport=options.get('transport'), transport_options=options.get('transport_options'), transition_timeout=options.get('transition_timeout', pr.REQUEST_TIMEOUT), worker_expiry=options.get('worker_expiry', pr.EXPIRES_AFTER), ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/executor.py0000664000175000017500000003300500000000000023544 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import threading from oslo_utils import timeutils import six from taskflow.engines.action_engine import executor from taskflow.engines.worker_based import dispatcher from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy from taskflow.engines.worker_based import types as wt from taskflow import exceptions as exc from taskflow import logging from taskflow.task import EVENT_UPDATE_PROGRESS # noqa from taskflow.utils import kombu_utils as ku from taskflow.utils import misc from taskflow.utils import threading_utils as tu LOG = logging.getLogger(__name__) class WorkerTaskExecutor(executor.TaskExecutor): """Executes tasks on remote workers.""" def __init__(self, uuid, exchange, topics, transition_timeout=pr.REQUEST_TIMEOUT, url=None, transport=None, transport_options=None, retry_options=None, worker_expiry=pr.EXPIRES_AFTER): self._uuid = uuid self._ongoing_requests = {} self._ongoing_requests_lock = threading.RLock() self._transition_timeout = transition_timeout self._proxy = proxy.Proxy(uuid, exchange, on_wait=self._on_wait, url=url, transport=transport, transport_options=transport_options, retry_options=retry_options) # NOTE(harlowja): This is the most simplest finder impl. that # doesn't have external dependencies (outside of what this engine # already requires); it though does create periodic 'polling' traffic # to workers to 'learn' of the tasks they can perform (and requires # pre-existing knowledge of the topics those workers are on to gather # and update this information). 
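# The finder periodically publishes pr.Notify messages to the given
# topics and records which workers (and the task names they advertise)
# respond; get_worker_for_task() later consults that mapping when a
# request needs to be published.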
self._finder = wt.ProxyWorkerFinder(uuid, self._proxy, topics, worker_expiry=worker_expiry) self._proxy.dispatcher.type_handlers.update({ pr.RESPONSE: dispatcher.Handler(self._process_response, validator=pr.Response.validate), pr.NOTIFY: dispatcher.Handler( self._finder.process_response, validator=functools.partial(pr.Notify.validate, response=True)), }) # Thread that will run the message dispatching (and periodically # call the on_wait callback to do various things) loop... self._helper = None self._messages_processed = { 'finder': self._finder.messages_processed, } def _process_response(self, response, message): """Process response from remote side.""" LOG.debug("Started processing response message '%s'", ku.DelayedPretty(message)) try: request_uuid = message.properties['correlation_id'] except KeyError: LOG.warning("The 'correlation_id' message property is" " missing in message '%s'", ku.DelayedPretty(message)) else: request = self._ongoing_requests.get(request_uuid) if request is not None: response = pr.Response.from_dict(response) LOG.debug("Extracted response '%s' and matched it to" " request '%s'", response, request) if response.state == pr.RUNNING: request.transition_and_log_error(pr.RUNNING, logger=LOG) elif response.state == pr.EVENT: # Proxy the event + details to the task notifier so # that it shows up in the local process (and activates # any local callbacks...); thus making it look like # the task is running locally (in some regards). event_type = response.data['event_type'] details = response.data['details'] request.task.notifier.notify(event_type, details) elif response.state in (pr.FAILURE, pr.SUCCESS): if request.transition_and_log_error(response.state, logger=LOG): with self._ongoing_requests_lock: del self._ongoing_requests[request.uuid] request.set_result(result=response.data['result']) else: LOG.warning("Unexpected response status '%s'", response.state) else: LOG.debug("Request with id='%s' not found", request_uuid) @staticmethod def _handle_expired_request(request): """Handle a expired request. When a request has expired it is removed from the ongoing requests dictionary and a ``RequestTimeout`` exception is set as a request result. """ if request.transition_and_log_error(pr.FAILURE, logger=LOG): # Raise an exception (and then catch it) so we get a nice # traceback that the request will get instead of it getting # just an exception with no traceback... try: request_age = timeutils.now() - request.created_on raise exc.RequestTimeout( "Request '%s' has expired after waiting for %0.2f" " seconds for it to transition out of (%s) states" % (request, request_age, ", ".join(pr.WAITING_STATES))) except exc.RequestTimeout: with misc.capture_failure() as failure: LOG.debug(failure.exception_str) request.set_result(failure) return True return False def _clean(self): if not self._ongoing_requests: return with self._ongoing_requests_lock: ongoing_requests_uuids = set(six.iterkeys(self._ongoing_requests)) waiting_requests = {} expired_requests = {} for request_uuid in ongoing_requests_uuids: try: request = self._ongoing_requests[request_uuid] except KeyError: # Guess it got removed before we got to it... 
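# (entries may be added or removed by other threads between taking the
# snapshot of keys above and this lookup, hence the defensive lookup).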
pass else: if request.expired: expired_requests[request_uuid] = request elif request.current_state == pr.WAITING: waiting_requests[request_uuid] = request if expired_requests: with self._ongoing_requests_lock: while expired_requests: request_uuid, request = expired_requests.popitem() if self._handle_expired_request(request): del self._ongoing_requests[request_uuid] if waiting_requests: finder = self._finder new_messages_processed = finder.messages_processed last_messages_processed = self._messages_processed['finder'] if new_messages_processed > last_messages_processed: # Some new message got to the finder, so we can see # if any new workers match (if no new messages have been # processed we might as well not do anything). while waiting_requests: _request_uuid, request = waiting_requests.popitem() worker = finder.get_worker_for_task(request.task) if (worker is not None and request.transition_and_log_error(pr.PENDING, logger=LOG)): self._publish_request(request, worker) self._messages_processed['finder'] = new_messages_processed def _on_wait(self): """This function is called cyclically between draining events.""" # Publish any finding messages (used to locate workers). self._finder.maybe_publish() # If the finder hasn't heard from workers in a given amount # of time, then those workers are likely dead, so clean them out... self._finder.clean() # Process any expired requests or requests that have no current # worker located (publish messages for those if we now do have # a worker located). self._clean() def _submit_task(self, task, task_uuid, action, arguments, progress_callback=None, result=pr.NO_RESULT, failures=None): """Submit task request to a worker.""" request = pr.Request(task, task_uuid, action, arguments, timeout=self._transition_timeout, result=result, failures=failures) # Register the callback, so that we can proxy the progress correctly. if (progress_callback is not None and task.notifier.can_be_registered(EVENT_UPDATE_PROGRESS)): task.notifier.register(EVENT_UPDATE_PROGRESS, progress_callback) request.future.add_done_callback( lambda _fut: task.notifier.deregister(EVENT_UPDATE_PROGRESS, progress_callback)) # Get task's worker and publish request if worker was found. 
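# If no known worker currently advertises this task, the request stays
# in the WAITING state and will be published later by _clean() once the
# finder learns about a matching worker.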
worker = self._finder.get_worker_for_task(task) if worker is not None: if request.transition_and_log_error(pr.PENDING, logger=LOG): with self._ongoing_requests_lock: self._ongoing_requests[request.uuid] = request self._publish_request(request, worker) else: LOG.debug("Delaying submission of '%s', no currently known" " worker/s available to process it", request) with self._ongoing_requests_lock: self._ongoing_requests[request.uuid] = request return request.future def _publish_request(self, request, worker): """Publish request to a given topic.""" LOG.debug("Submitting execution of '%s' to worker '%s' (expecting" " response identified by reply_to=%s and" " correlation_id=%s) - waited %0.3f seconds to" " get published", request, worker, self._uuid, request.uuid, timeutils.now() - request.created_on) try: self._proxy.publish(request, worker.topic, reply_to=self._uuid, correlation_id=request.uuid) except Exception: with misc.capture_failure() as failure: LOG.critical("Failed to submit '%s' (transitioning it to" " %s)", request, pr.FAILURE, exc_info=True) if request.transition_and_log_error(pr.FAILURE, logger=LOG): with self._ongoing_requests_lock: del self._ongoing_requests[request.uuid] request.set_result(failure) def execute_task(self, task, task_uuid, arguments, progress_callback=None): return self._submit_task(task, task_uuid, pr.EXECUTE, arguments, progress_callback=progress_callback) def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): return self._submit_task(task, task_uuid, pr.REVERT, arguments, result=result, failures=failures, progress_callback=progress_callback) def wait_for_workers(self, workers=1, timeout=None): """Waits for geq workers to notify they are ready to do work. NOTE(harlowja): if a timeout is provided this function will wait until that timeout expires, if the amount of workers does not reach the desired amount of workers before the timeout expires then this will return how many workers are still needed, otherwise it will return zero. """ return self._finder.wait_for_workers(workers=workers, timeout=timeout) def start(self): """Starts message processing thread.""" if self._helper is not None: raise RuntimeError("Worker executor must be stopped before" " it can be started") self._helper = tu.daemon_thread(self._proxy.start) self._helper.start() self._proxy.wait() def stop(self): """Stops message processing thread.""" if self._helper is not None: self._proxy.stop() self._helper.join() self._helper = None with self._ongoing_requests_lock: while self._ongoing_requests: _request_uuid, request = self._ongoing_requests.popitem() self._handle_expired_request(request) self._finder.reset() self._messages_processed['finder'] = self._finder.messages_processed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/protocol.py0000664000175000017500000004747600000000000023570 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import abc import collections import threading from automaton import exceptions as machine_excp from automaton import machines import fasteners import futurist from oslo_serialization import jsonutils from oslo_utils import reflection from oslo_utils import timeutils import six from taskflow.engines.action_engine import executor from taskflow import exceptions as excp from taskflow import logging from taskflow.types import failure as ft from taskflow.utils import schema_utils as su # NOTE(skudriashev): This is protocol states and events, which are not # related to task states. WAITING = 'WAITING' PENDING = 'PENDING' RUNNING = 'RUNNING' SUCCESS = 'SUCCESS' FAILURE = 'FAILURE' EVENT = 'EVENT' # During these states the expiry is active (once out of these states the expiry # no longer matters, since we have no way of knowing how long a task will run # for). WAITING_STATES = (WAITING, PENDING) # Once these states have been entered a request can no longer be # automatically expired. STOP_TIMER_STATES = (RUNNING, SUCCESS, FAILURE) # Remote task actions. EXECUTE = 'execute' REVERT = 'revert' # Remote task action to event map. ACTION_TO_EVENT = { EXECUTE: executor.EXECUTED, REVERT: executor.REVERTED } # NOTE(skudriashev): A timeout which specifies request expiration period. REQUEST_TIMEOUT = 60 # NOTE(skudriashev): A timeout which controls for how long a queue can be # unused before it is automatically deleted. Unused means the queue has no # consumers, the queue has not been redeclared, the `queue.get` has not been # invoked for a duration of at least the expiration period. In our case this # period is equal to the request timeout, once request is expired - queue is # no longer needed. QUEUE_EXPIRE_TIMEOUT = REQUEST_TIMEOUT # Workers notify period. NOTIFY_PERIOD = 5 # When a worker hasn't notified in this many seconds, it will get expired from # being used/targeted for further work. EXPIRES_AFTER = 60 # Message types. NOTIFY = 'NOTIFY' REQUEST = 'REQUEST' RESPONSE = 'RESPONSE' # Object that denotes nothing (none can actually be valid). NO_RESULT = object() LOG = logging.getLogger(__name__) def make_an_event(new_state): """Turns a new/target state into an event name.""" return ('on_%s' % new_state).lower() def build_a_machine(freeze=True): """Builds a state machine that requests are allowed to go through.""" m = machines.FiniteMachine() for st in (WAITING, PENDING, RUNNING): m.add_state(st) for st in (SUCCESS, FAILURE): m.add_state(st, terminal=True) # When a executor starts to publish a request to a selected worker but the # executor has not recved confirmation from that worker that anything has # happened yet. m.default_start_state = WAITING m.add_transition(WAITING, PENDING, make_an_event(PENDING)) # When a request expires (isn't able to be processed by any worker). m.add_transition(WAITING, FAILURE, make_an_event(FAILURE)) # Worker has started executing a request. m.add_transition(PENDING, RUNNING, make_an_event(RUNNING)) # Worker failed to construct/process a request to run (either the worker # did not transition to RUNNING in the given timeout or the worker itself # had some type of failure before RUNNING started). # # Also used by the executor if the request was attempted to be published # but that did publishing process did not work out. m.add_transition(PENDING, FAILURE, make_an_event(FAILURE)) # Execution failed due to some type of remote failure. 
m.add_transition(RUNNING, FAILURE, make_an_event(FAILURE)) # Execution succeeded & has completed. m.add_transition(RUNNING, SUCCESS, make_an_event(SUCCESS)) # No further changes allowed. if freeze: m.freeze() return m def failure_to_dict(failure): """Attempts to convert a failure object into a jsonifyable dictionary.""" failure_dict = failure.to_dict() try: # it's possible the exc_args can't be serialized as JSON # if that's the case, just get the failure without them jsonutils.dumps(failure_dict) return failure_dict except (TypeError, ValueError): return failure.to_dict(include_args=False) @six.add_metaclass(abc.ABCMeta) class Message(object): """Base class for all message types.""" def __repr__(self): return ("<%s object at 0x%x with contents %s>" % (reflection.get_class_name(self, fully_qualified=False), id(self), self.to_dict())) @abc.abstractmethod def to_dict(self): """Return json-serializable message representation.""" class Notify(Message): """Represents notify message type.""" #: String constant representing this message type. TYPE = NOTIFY # NOTE(harlowja): the executor (the entity who initially requests a worker # to send back a notification response) schema is different than the # worker response schema (that's why there are two schemas here). #: Expected notify *response* message schema (in json schema format). RESPONSE_SCHEMA = { "type": "object", 'properties': { 'topic': { "type": "string", }, 'tasks': { "type": "array", "items": { "type": "string", }, } }, "required": ["topic", 'tasks'], "additionalProperties": False, } #: Expected *sender* request message schema (in json schema format). SENDER_SCHEMA = { "type": "object", "additionalProperties": False, } def __init__(self, **data): self._data = data @property def topic(self): return self._data.get('topic') @property def tasks(self): return self._data.get('tasks') def to_dict(self): return self._data @classmethod def validate(cls, data, response): if response: schema = cls.RESPONSE_SCHEMA else: schema = cls.SENDER_SCHEMA try: su.schema_validate(data, schema) except su.ValidationError as e: cls_name = reflection.get_class_name(cls, fully_qualified=False) if response: excp.raise_with_cause(excp.InvalidFormat, "%s message response data not of the" " expected format: %s" % (cls_name, e.message), cause=e) else: excp.raise_with_cause(excp.InvalidFormat, "%s message sender data not of the" " expected format: %s" % (cls_name, e.message), cause=e) _WorkUnit = collections.namedtuple('_WorkUnit', ['task_cls', 'task_name', 'action', 'arguments']) class Request(Message): """Represents request with execution results. Every request is created in the WAITING state and is expired within the given timeout if it does not transition out of the (WAITING, PENDING) states. State machine a request goes through as it progresses (or expires):: +------------+------------+---------+----------+---------+ | Start | Event | End | On Enter | On Exit | +------------+------------+---------+----------+---------+ | FAILURE[$] | . | . | . | . | | PENDING | on_failure | FAILURE | . | . | | PENDING | on_running | RUNNING | . | . | | RUNNING | on_failure | FAILURE | . | . | | RUNNING | on_success | SUCCESS | . | . | | SUCCESS[$] | . | . | . | . | | WAITING[^] | on_failure | FAILURE | . | . | | WAITING[^] | on_pending | PENDING | . | . | +------------+------------+---------+----------+---------+ """ #: String constant representing this message type. TYPE = REQUEST #: Expected message schema (in json schema format). 
SCHEMA = { "type": "object", 'properties': { # These two are typically only sent on revert actions (that is # why are are not including them in the required section). 'result': {}, 'failures': { "type": "object", }, 'task_cls': { 'type': 'string', }, 'task_name': { 'type': 'string', }, 'task_version': { "oneOf": [ { "type": "string", }, { "type": "array", }, ], }, 'action': { "type": "string", "enum": list(six.iterkeys(ACTION_TO_EVENT)), }, # Keyword arguments that end up in the revert() or execute() # method of the remote task. 'arguments': { "type": "object", }, }, 'required': ['task_cls', 'task_name', 'task_version', 'action'], } def __init__(self, task, uuid, action, arguments, timeout=REQUEST_TIMEOUT, result=NO_RESULT, failures=None): self._action = action self._event = ACTION_TO_EVENT[action] self._arguments = arguments self._result = result self._failures = failures self._watch = timeutils.StopWatch(duration=timeout).start() self._lock = threading.Lock() self._machine = build_a_machine() self._machine.initialize() self.task = task self.uuid = uuid self.created_on = timeutils.now() self.future = futurist.Future() self.future.atom = task @property def current_state(self): """Current state the request is in.""" return self._machine.current_state def set_result(self, result): """Sets the responses futures result.""" self.future.set_result((self._event, result)) @property def expired(self): """Check if request has expired. When new request is created its state is set to the WAITING, creation time is stored and timeout is given via constructor arguments. Request is considered to be expired when it is in the WAITING/PENDING state for more then the given timeout (it is not considered to be expired in any other state). """ if self._machine.current_state in WAITING_STATES: return self._watch.expired() return False def to_dict(self): """Return json-serializable request. To convert requests that have failed due to some exception this will convert all `failure.Failure` objects into dictionaries (which will then be reconstituted by the receiver). """ request = { 'task_cls': reflection.get_class_name(self.task), 'task_name': self.task.name, 'task_version': self.task.version, 'action': self._action, 'arguments': self._arguments, } if self._result is not NO_RESULT: result = self._result if isinstance(result, ft.Failure): request['result'] = ('failure', failure_to_dict(result)) else: request['result'] = ('success', result) if self._failures: request['failures'] = {} for atom_name, failure in six.iteritems(self._failures): request['failures'][atom_name] = failure_to_dict(failure) return request def transition_and_log_error(self, new_state, logger=None): """Transitions *and* logs an error if that transitioning raises. This overlays the transition function and performs nearly the same functionality but instead of raising if the transition was not valid it logs a warning to the provided logger and returns False to indicate that the transition was not performed (note that this is *different* from the transition function where False means ignored). """ if logger is None: logger = LOG moved = False try: moved = self.transition(new_state) except excp.InvalidState: logger.warn("Failed to transition '%s' to %s state.", self, new_state, exc_info=True) return moved @fasteners.locked def transition(self, new_state): """Transitions the request to a new state. If transition was performed, it returns True. If transition was ignored, it returns False. 
If transition was not valid (and will not be performed), it raises an InvalidState exception. """ old_state = self._machine.current_state if old_state == new_state: return False try: self._machine.process_event(make_an_event(new_state)) except (machine_excp.NotFound, machine_excp.InvalidState) as e: raise excp.InvalidState("Request transition from %s to %s is" " not allowed: %s" % (old_state, new_state, e)) else: if new_state in STOP_TIMER_STATES: self._watch.stop() LOG.debug("Transitioned '%s' from %s state to %s state", self, old_state, new_state) return True @classmethod def validate(cls, data): try: su.schema_validate(data, cls.SCHEMA) except su.ValidationError as e: cls_name = reflection.get_class_name(cls, fully_qualified=False) excp.raise_with_cause(excp.InvalidFormat, "%s message response data not of the" " expected format: %s" % (cls_name, e.message), cause=e) else: # Validate all failure dictionaries that *may* be present... failures = [] if 'failures' in data: failures.extend(six.itervalues(data['failures'])) result = data.get('result') if result is not None: result_data_type, result_data = result if result_data_type == 'failure': failures.append(result_data) for fail_data in failures: ft.Failure.validate(fail_data) @staticmethod def from_dict(data, task_uuid=None): """Parses **validated** data into a work unit. All :py:class:`~taskflow.types.failure.Failure` objects that have been converted to dict(s) on the remote side will now converted back to py:class:`~taskflow.types.failure.Failure` objects. """ task_cls = data['task_cls'] task_name = data['task_name'] action = data['action'] arguments = data.get('arguments', {}) result = data.get('result') failures = data.get('failures') # These arguments will eventually be given to the task executor # so they need to be in a format it will accept (and using keyword # argument names that it accepts)... arguments = { 'arguments': arguments, } if task_uuid is not None: arguments['task_uuid'] = task_uuid if result is not None: result_data_type, result_data = result if result_data_type == 'failure': arguments['result'] = ft.Failure.from_dict(result_data) else: arguments['result'] = result_data if failures is not None: arguments['failures'] = {} for task, fail_data in six.iteritems(failures): arguments['failures'][task] = ft.Failure.from_dict(fail_data) return _WorkUnit(task_cls, task_name, action, arguments) class Response(Message): """Represents response message type.""" #: String constant representing this message type. TYPE = RESPONSE #: Expected message schema (in json schema format). SCHEMA = { "type": "object", 'properties': { 'state': { "type": "string", "enum": list(build_a_machine().states) + [EVENT], }, 'data': { "anyOf": [ { "$ref": "#/definitions/event", }, { "$ref": "#/definitions/completion", }, { "$ref": "#/definitions/empty", }, ], }, }, "required": ["state", 'data'], "additionalProperties": False, "definitions": { "event": { "type": "object", "properties": { 'event_type': { 'type': 'string', }, 'details': { 'type': 'object', }, }, "required": ["event_type", 'details'], "additionalProperties": False, }, # Used when sending *only* request state changes (and no data is # expected). "empty": { "type": "object", "additionalProperties": False, }, "completion": { "type": "object", "properties": { # This can be any arbitrary type that a task returns, so # thats why we can't be strict about what type it is since # any of the json serializable types are allowed. 
"result": {}, }, "required": ["result"], "additionalProperties": False, }, }, } def __init__(self, state, **data): self.state = state self.data = data @classmethod def from_dict(cls, data): state = data['state'] data = data['data'] if state == FAILURE and 'result' in data: data['result'] = ft.Failure.from_dict(data['result']) return cls(state, **data) def to_dict(self): return dict(state=self.state, data=self.data) @classmethod def validate(cls, data): try: su.schema_validate(data, cls.SCHEMA) except su.ValidationError as e: cls_name = reflection.get_class_name(cls, fully_qualified=False) excp.raise_with_cause(excp.InvalidFormat, "%s message response data not of the" " expected format: %s" % (cls_name, e.message), cause=e) else: state = data['state'] if state == FAILURE and 'result' in data: ft.Failure.validate(data['result']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/proxy.py0000664000175000017500000002252400000000000023073 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import threading import kombu from kombu import exceptions as kombu_exceptions import six from taskflow.engines.worker_based import dispatcher from taskflow import logging LOG = logging.getLogger(__name__) # NOTE(skudriashev): A timeout of 1 is often used in environments where # the socket can get "stuck", and is a best practice for Kombu consumers. DRAIN_EVENTS_PERIOD = 1 # Helper objects returned when requested to get connection details, used # instead of returning the raw results from the kombu connection objects # themselves so that a person can not mutate those objects (which would be # bad). _ConnectionDetails = collections.namedtuple('_ConnectionDetails', ['uri', 'transport']) _TransportDetails = collections.namedtuple('_TransportDetails', ['options', 'driver_type', 'driver_name', 'driver_version']) class Proxy(object): """A proxy processes messages from/to the named exchange. For **internal** usage only (not for public consumption). """ DEFAULT_RETRY_OPTIONS = { # The number of seconds we start sleeping for. 'interval_start': 1, # How many seconds added to the interval for each retry. 'interval_step': 1, # Maximum number of seconds to sleep between each retry. 'interval_max': 1, # Maximum number of times to retry. 'max_retries': 3, } """Settings used (by default) to reconnect under transient failures. See: http://kombu.readthedocs.org/ (and connection ``ensure_options``) for what these values imply/mean... """ # This is the only provided option that should be an int, the others # are allowed to be floats; used when we check that the user-provided # value is valid... 
_RETRY_INT_OPTS = frozenset(['max_retries']) def __init__(self, topic, exchange, type_handlers=None, on_wait=None, url=None, transport=None, transport_options=None, retry_options=None): self._topic = topic self._exchange_name = exchange self._on_wait = on_wait self._running = threading.Event() self._dispatcher = dispatcher.TypeDispatcher( # NOTE(skudriashev): Process all incoming messages only if proxy is # running, otherwise requeue them. requeue_filters=[lambda data, message: not self.is_running], type_handlers=type_handlers) ensure_options = self.DEFAULT_RETRY_OPTIONS.copy() if retry_options is not None: # Override the defaults with any user provided values... for k in set(six.iterkeys(ensure_options)): if k in retry_options: # Ensure that the right type is passed in... val = retry_options[k] if k in self._RETRY_INT_OPTS: tmp_val = int(val) else: tmp_val = float(val) if tmp_val < 0: raise ValueError("Expected value greater or equal to" " zero for 'retry_options' %s; got" " %s instead" % (k, val)) ensure_options[k] = tmp_val self._ensure_options = ensure_options self._drain_events_timeout = DRAIN_EVENTS_PERIOD if transport == 'memory' and transport_options: polling_interval = transport_options.get('polling_interval') if polling_interval is not None: self._drain_events_timeout = polling_interval # create connection self._conn = kombu.Connection(url, transport=transport, transport_options=transport_options) # create exchange self._exchange = kombu.Exchange(name=self._exchange_name, durable=False, auto_delete=True) @property def dispatcher(self): """Dispatcher internally used to dispatch message(s) that match.""" return self._dispatcher @property def connection_details(self): """Details about the connection (read-only).""" # The kombu drivers seem to use 'N/A' when they don't have a version... driver_version = self._conn.transport.driver_version() if driver_version and driver_version.lower() == 'n/a': driver_version = None if self._conn.transport_options: transport_options = self._conn.transport_options.copy() else: transport_options = {} transport = _TransportDetails( options=transport_options, driver_type=self._conn.transport.driver_type, driver_name=self._conn.transport.driver_name, driver_version=driver_version) return _ConnectionDetails( uri=self._conn.as_uri(include_password=False), transport=transport) @property def is_running(self): """Return whether the proxy is running.""" return self._running.is_set() def _make_queue(self, routing_key, exchange, channel=None): """Make a named queue for the given exchange.""" queue_name = "%s_%s" % (self._exchange_name, routing_key) return kombu.Queue(name=queue_name, routing_key=routing_key, durable=False, exchange=exchange, auto_delete=True, channel=channel) def publish(self, msg, routing_key, reply_to=None, correlation_id=None): """Publish message to the named exchange with given routing key.""" if isinstance(routing_key, six.string_types): routing_keys = [routing_key] else: routing_keys = routing_key # Filter out any empty keys... 
routing_keys = [r_k for r_k in routing_keys if r_k] if not routing_keys: LOG.warning("No routing key/s specified; unable to send '%s'" " to any target queue on exchange '%s'", msg, self._exchange_name) return def _publish(producer, routing_key): queue = self._make_queue(routing_key, self._exchange) producer.publish(body=msg.to_dict(), routing_key=routing_key, exchange=self._exchange, declare=[queue], type=msg.TYPE, reply_to=reply_to, correlation_id=correlation_id) def _publish_errback(exc, interval): LOG.exception('Publishing error: %s', exc) LOG.info('Retry triggering in %s seconds', interval) LOG.debug("Sending '%s' message using routing keys %s", msg, routing_keys) with kombu.connections[self._conn].acquire(block=True) as conn: with conn.Producer() as producer: ensure_kwargs = self._ensure_options.copy() ensure_kwargs['errback'] = _publish_errback safe_publish = conn.ensure(producer, _publish, **ensure_kwargs) for routing_key in routing_keys: safe_publish(producer, routing_key) def start(self): """Start proxy.""" def _drain(conn, timeout): try: conn.drain_events(timeout=timeout) except kombu_exceptions.TimeoutError: pass def _drain_errback(exc, interval): LOG.exception('Draining error: %s', exc) LOG.info('Retry triggering in %s seconds', interval) LOG.info("Starting to consume from the '%s' exchange.", self._exchange_name) with kombu.connections[self._conn].acquire(block=True) as conn: queue = self._make_queue(self._topic, self._exchange, channel=conn) callbacks = [self._dispatcher.on_message] with conn.Consumer(queues=queue, callbacks=callbacks) as consumer: ensure_kwargs = self._ensure_options.copy() ensure_kwargs['errback'] = _drain_errback safe_drain = conn.ensure(consumer, _drain, **ensure_kwargs) self._running.set() try: while self._running.is_set(): safe_drain(conn, self._drain_events_timeout) if self._on_wait is not None: self._on_wait() finally: self._running.clear() def wait(self): """Wait until proxy is started.""" self._running.wait() def stop(self): """Stop proxy.""" self._running.clear() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/server.py0000664000175000017500000002644700000000000023230 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
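# A server binds a queue for its topic on the shared exchange, hands
# incoming NOTIFY/REQUEST messages to its executor for processing and
# publishes NOTIFY/RESPONSE replies back to the 'reply_to' queue named
# in each received message.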
import functools from oslo_utils import reflection from oslo_utils import timeutils from taskflow.engines.worker_based import dispatcher from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy from taskflow import logging from taskflow.types import failure as ft from taskflow.types import notifier as nt from taskflow.utils import kombu_utils as ku from taskflow.utils import misc LOG = logging.getLogger(__name__) class Server(object): """Server implementation that waits for incoming tasks requests.""" def __init__(self, topic, exchange, executor, endpoints, url=None, transport=None, transport_options=None, retry_options=None): type_handlers = { pr.NOTIFY: dispatcher.Handler( self._delayed_process(self._process_notify), validator=functools.partial(pr.Notify.validate, response=False)), pr.REQUEST: dispatcher.Handler( self._delayed_process(self._process_request), validator=pr.Request.validate), } self._executor = executor self._proxy = proxy.Proxy(topic, exchange, type_handlers=type_handlers, url=url, transport=transport, transport_options=transport_options, retry_options=retry_options) self._topic = topic self._endpoints = dict([(endpoint.name, endpoint) for endpoint in endpoints]) def _delayed_process(self, func): """Runs the function using the instances executor (eventually). This adds a *nice* benefit on showing how long it took for the function to finally be executed from when the message was received to when it was finally ran (which can be a nice thing to know to determine bottle-necks...). """ func_name = reflection.get_callable_name(func) def _on_run(watch, content, message): LOG.trace("It took %s seconds to get around to running" " function/method '%s' with" " message '%s'", watch.elapsed(), func_name, ku.DelayedPretty(message)) return func(content, message) def _on_receive(content, message): LOG.debug("Submitting message '%s' for execution in the" " future to '%s'", ku.DelayedPretty(message), func_name) watch = timeutils.StopWatch() watch.start() try: self._executor.submit(_on_run, watch, content, message) except RuntimeError: LOG.error("Unable to continue processing message '%s'," " submission to instance executor (with later" " execution by '%s') was unsuccessful", ku.DelayedPretty(message), func_name, exc_info=True) return _on_receive @property def connection_details(self): return self._proxy.connection_details @staticmethod def _parse_message(message): """Extracts required attributes out of the messages properties. This extracts the `reply_to` and the `correlation_id` properties. If any of these required properties are missing a `ValueError` is raised. """ properties = [] for prop in ('reply_to', 'correlation_id'): try: properties.append(message.properties[prop]) except KeyError: raise ValueError("The '%s' message property is missing" % prop) return properties def _reply(self, capture, reply_to, task_uuid, state=pr.FAILURE, **kwargs): """Send a reply to the `reply_to` queue with the given information. Can capture failures to publish and if capturing will log associated critical errors on behalf of the caller, and then returns whether the publish worked out or did not. 
""" response = pr.Response(state, **kwargs) published = False try: self._proxy.publish(response, reply_to, correlation_id=task_uuid) published = True except Exception: if not capture: raise LOG.critical("Failed to send reply to '%s' for task '%s' with" " response %s", reply_to, task_uuid, response, exc_info=True) return published def _on_event(self, reply_to, task_uuid, event_type, details): """Send out a task event notification.""" # NOTE(harlowja): the executor that will trigger this using the # task notification/listener mechanism will handle logging if this # fails, so thats why capture is 'False' is used here. self._reply(False, reply_to, task_uuid, pr.EVENT, event_type=event_type, details=details) def _process_notify(self, notify, message): """Process notify message and reply back.""" try: reply_to = message.properties['reply_to'] except KeyError: LOG.warning("The 'reply_to' message property is missing" " in received notify message '%s'", ku.DelayedPretty(message), exc_info=True) else: response = pr.Notify(topic=self._topic, tasks=list(self._endpoints.keys())) try: self._proxy.publish(response, routing_key=reply_to) except Exception: LOG.critical("Failed to send reply to '%s' with notify" " response '%s'", reply_to, response, exc_info=True) def _process_request(self, request, message): """Process request message and reply back.""" try: # NOTE(skudriashev): parse broker message first to get # the `reply_to` and the `task_uuid` parameters to have # possibility to reply back (if we can't parse, we can't respond # in the first place...). reply_to, task_uuid = self._parse_message(message) except ValueError: LOG.warning("Failed to parse request attributes from message '%s'", ku.DelayedPretty(message), exc_info=True) return else: # prepare reply callback reply_callback = functools.partial(self._reply, True, reply_to, task_uuid) # Parse the request to get the activity/work to perform. try: work = pr.Request.from_dict(request, task_uuid=task_uuid) except ValueError: with misc.capture_failure() as failure: LOG.warning("Failed to parse request contents" " from message '%s'", ku.DelayedPretty(message), exc_info=True) reply_callback(result=pr.failure_to_dict(failure)) return # Now fetch the task endpoint (and action handler on it). try: endpoint = self._endpoints[work.task_cls] except KeyError: with misc.capture_failure() as failure: LOG.warning("The '%s' task endpoint does not exist, unable" " to continue processing request message '%s'", work.task_cls, ku.DelayedPretty(message), exc_info=True) reply_callback(result=pr.failure_to_dict(failure)) return else: try: handler = getattr(endpoint, work.action) except AttributeError: with misc.capture_failure() as failure: LOG.warning("The '%s' handler does not exist on task" " endpoint '%s', unable to continue processing" " request message '%s'", work.action, endpoint, ku.DelayedPretty(message), exc_info=True) reply_callback(result=pr.failure_to_dict(failure)) return else: try: task = endpoint.generate(name=work.task_name) except Exception: with misc.capture_failure() as failure: LOG.warning("The '%s' task '%s' generation for request" " message '%s' failed", endpoint, work.action, ku.DelayedPretty(message), exc_info=True) reply_callback(result=pr.failure_to_dict(failure)) return else: if not reply_callback(state=pr.RUNNING): return # Associate *any* events this task emits with a proxy that will # emit them back to the engine... for handling at the engine side # of things... 
if task.notifier.can_be_registered(nt.Notifier.ANY): task.notifier.register(nt.Notifier.ANY, functools.partial(self._on_event, reply_to, task_uuid)) elif isinstance(task.notifier, nt.RestrictedNotifier): # Only proxy the allowable events then... for event_type in task.notifier.events_iter(): task.notifier.register(event_type, functools.partial(self._on_event, reply_to, task_uuid)) # Perform the task action. try: result = handler(task, **work.arguments) except Exception: with misc.capture_failure() as failure: LOG.warning("The '%s' endpoint '%s' execution for request" " message '%s' failed", endpoint, work.action, ku.DelayedPretty(message), exc_info=True) reply_callback(result=pr.failure_to_dict(failure)) else: # And be done with it! if isinstance(result, ft.Failure): reply_callback(result=result.to_dict()) else: reply_callback(state=pr.SUCCESS, result=result) def start(self): """Start processing incoming requests.""" self._proxy.start() def wait(self): """Wait until server is started.""" self._proxy.wait() def stop(self): """Stop processing incoming requests.""" self._proxy.stop() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/types.py0000664000175000017500000002334000000000000023053 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import random import threading from oslo_utils import reflection from oslo_utils import timeutils import six from taskflow.engines.worker_based import protocol as pr from taskflow import logging from taskflow.utils import kombu_utils as ku LOG = logging.getLogger(__name__) # TODO(harlowja): this needs to be made better, once # https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-info is finally # implemented we can go about using that instead. class TopicWorker(object): """A (read-only) worker and its relevant information + useful methods.""" _NO_IDENTITY = object() def __init__(self, topic, tasks, identity=_NO_IDENTITY): self.tasks = [] for task in tasks: if not isinstance(task, six.string_types): task = reflection.get_class_name(task) self.tasks.append(task) self.topic = topic self.identity = identity self.last_seen = None def performs(self, task): if not isinstance(task, six.string_types): task = reflection.get_class_name(task) return task in self.tasks def __eq__(self, other): if not isinstance(other, TopicWorker): return NotImplemented if len(other.tasks) != len(self.tasks): return False if other.topic != self.topic: return False for task in other.tasks: if not self.performs(task): return False # If one of the identity equals _NO_IDENTITY, then allow it to match... 
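# (a worker created without an explicit identity therefore acts as a
# wildcard when compared against one that does have an identity).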
if self._NO_IDENTITY in (self.identity, other.identity): return True else: return other.identity == self.identity def __ne__(self, other): return not self.__eq__(other) def __repr__(self): r = reflection.get_class_name(self, fully_qualified=False) if self.identity is not self._NO_IDENTITY: r += "(identity=%s, tasks=%s, topic=%s)" % (self.identity, self.tasks, self.topic) else: r += "(identity=*, tasks=%s, topic=%s)" % (self.tasks, self.topic) return r class ProxyWorkerFinder(object): """Requests and receives responses about workers topic+task details.""" def __init__(self, uuid, proxy, topics, beat_periodicity=pr.NOTIFY_PERIOD, worker_expiry=pr.EXPIRES_AFTER): self._cond = threading.Condition() self._proxy = proxy self._topics = topics self._workers = {} self._uuid = uuid self._seen_workers = 0 self._messages_processed = 0 self._messages_published = 0 self._worker_expiry = worker_expiry self._watch = timeutils.StopWatch(duration=beat_periodicity) @property def total_workers(self): """Number of workers currently known.""" return len(self._workers) def wait_for_workers(self, workers=1, timeout=None): """Waits for geq workers to notify they are ready to do work. NOTE(harlowja): if a timeout is provided this function will wait until that timeout expires, if the amount of workers does not reach the desired amount of workers before the timeout expires then this will return how many workers are still needed, otherwise it will return zero. """ if workers <= 0: raise ValueError("Worker amount must be greater than zero") watch = timeutils.StopWatch(duration=timeout) watch.start() with self._cond: while self.total_workers < workers: if watch.expired(): return max(0, workers - self.total_workers) self._cond.wait(watch.leftover(return_none=True)) return 0 @staticmethod def _match_worker(task, available_workers): """Select a worker (from geq 1 workers) that can best perform the task. NOTE(harlowja): this method will be activated when there exists more than one potential worker that can perform a task; the arguments provided will be the potential workers located and the task that is being requested to perform, and the result should be one of those workers selected using whatever best-fit algorithm is possible (or random at the least). """ if len(available_workers) == 1: return available_workers[0] else: return random.choice(available_workers) @property def messages_processed(self): """How many notify response messages have been processed.""" return self._messages_processed def _next_worker(self, topic, tasks, temporary=False): if not temporary: w = TopicWorker(topic, tasks, identity=self._seen_workers) self._seen_workers += 1 return w else: return TopicWorker(topic, tasks) def maybe_publish(self): """Periodically called to publish a notify message to each topic. These messages (especially the responses) are how this finder learns about workers and what tasks they can perform (so that we can then match workers to tasks to run). """ if self._messages_published == 0: self._proxy.publish(pr.Notify(), self._topics, reply_to=self._uuid) self._messages_published += 1 self._watch.restart() else: if self._watch.expired(): self._proxy.publish(pr.Notify(), self._topics, reply_to=self._uuid) self._messages_published += 1 self._watch.restart() def _add(self, topic, tasks): """Adds/updates a worker for the topic for the given tasks.""" try: worker = self._workers[topic] # Check if we already have an equivalent worker, if so just # return it...
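# (the temporary TopicWorker built below is used purely for this
# equality check; it is never stored and never receives an identity).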
if worker == self._next_worker(topic, tasks, temporary=True): return (worker, False) # This *fall through* is done so that if someone is using an # active worker object that already exists that we just create # a new one; so that the existing object doesn't get # affected (workers objects are supposed to be immutable). except KeyError: pass worker = self._next_worker(topic, tasks) self._workers[topic] = worker return (worker, True) def process_response(self, data, message): """Process notify message sent from remote side.""" LOG.debug("Started processing notify response message '%s'", ku.DelayedPretty(message)) response = pr.Notify(**data) LOG.debug("Extracted notify response '%s'", response) with self._cond: worker, new_or_updated = self._add(response.topic, response.tasks) if new_or_updated: LOG.debug("Updated worker '%s' (%s total workers are" " currently known)", worker, self.total_workers) self._cond.notify_all() worker.last_seen = timeutils.now() self._messages_processed += 1 def clean(self): """Cleans out any dead/expired/not responding workers. Returns how many workers were removed. """ if (not self._workers or (self._worker_expiry is None or self._worker_expiry <= 0)): return 0 dead_workers = {} with self._cond: now = timeutils.now() for topic, worker in six.iteritems(self._workers): if worker.last_seen is None: continue secs_since_last_seen = max(0, now - worker.last_seen) if secs_since_last_seen >= self._worker_expiry: dead_workers[topic] = (worker, secs_since_last_seen) for topic in six.iterkeys(dead_workers): self._workers.pop(topic) if dead_workers: self._cond.notify_all() if dead_workers and LOG.isEnabledFor(logging.INFO): for worker, secs_since_last_seen in six.itervalues(dead_workers): LOG.info("Removed worker '%s' as it has not responded to" " notification requests in %0.3f seconds", worker, secs_since_last_seen) return len(dead_workers) def reset(self): """Resets finders internal state.""" with self._cond: self._workers.clear() self._messages_processed = 0 self._messages_published = 0 self._seen_workers = 0 self._cond.notify_all() def get_worker_for_task(self, task): """Gets a worker that can perform a given task.""" available_workers = [] with self._cond: for worker in six.itervalues(self._workers): if worker.performs(task): available_workers.append(worker) if available_workers: return self._match_worker(task, available_workers) else: return None ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/engines/worker_based/worker.py0000664000175000017500000001427300000000000023225 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
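# Example usage -- a minimal sketch only; ``MyTask`` and the broker URL
# below are illustrative placeholders, not part of this module:
#
#   from taskflow.engines.worker_based import worker
#
#   w = worker.Worker('my-exchange', 'my-topic', [MyTask],
#                     url='amqp://guest:guest@localhost:5672//')
#   w.run()   # blocks, processing task requests until stop() is called
#
# A similar command line entry point is also provided at the bottom of
# this module.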
import os import platform import socket import sys import futurist from oslo_utils import reflection from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import server from taskflow import logging from taskflow import task as t_task from taskflow.utils import banner from taskflow.utils import misc from taskflow.utils import threading_utils as tu LOG = logging.getLogger(__name__) class Worker(object): """Worker that can be started on a remote host for handling task requests. :param url: broker url :param exchange: broker exchange name :param topic: topic name under which the worker is started :param tasks: task list that the worker is capable of performing, items in the list can be one of the following types: 1, a string naming the python module name to search for tasks in or the task class name; 2, a python module to search for tasks in; 3, a task class object that will be used to create tasks from. :param executor: custom executor object that can be used for processing requests in separate threads (if not provided one will be created) :param threads_count: threads count to be passed to the default executor (used only if an executor is not passed in) :param transport: transport to be used (e.g. amqp, memory, etc.) :param transport_options: transport specific options (see: http://kombu.readthedocs.org/ for what these options imply and are expected to be) :param retry_options: retry specific options (see: :py:attr:`~.proxy.Proxy.DEFAULT_RETRY_OPTIONS`) """ def __init__(self, exchange, topic, tasks, executor=None, threads_count=None, url=None, transport=None, transport_options=None, retry_options=None): self._topic = topic self._executor = executor self._owns_executor = False if self._executor is None: self._executor = futurist.ThreadPoolExecutor( max_workers=threads_count) self._owns_executor = True self._endpoints = self._derive_endpoints(tasks) self._exchange = exchange self._server = server.Server(topic, exchange, self._executor, self._endpoints, url=url, transport=transport, transport_options=transport_options, retry_options=retry_options) @staticmethod def _derive_endpoints(tasks): """Derive endpoints from list of strings, classes or packages.""" derived_tasks = misc.find_subclasses(tasks, t_task.Task) return [endpoint.Endpoint(task) for task in derived_tasks] @misc.cachedproperty def banner(self): """A banner that can be useful to display before running.""" connection_details = self._server.connection_details transport = connection_details.transport if transport.driver_version: transport_driver = "%s v%s" % (transport.driver_name, transport.driver_version) else: transport_driver = transport.driver_name try: hostname = socket.getfqdn() except socket.error: hostname = "???" try: pid = os.getpid() except OSError: pid = "???"
chapters = { 'Connection details': { 'Driver': transport_driver, 'Exchange': self._exchange, 'Topic': self._topic, 'Transport': transport.driver_type, 'Uri': connection_details.uri, }, 'Powered by': { 'Executor': reflection.get_class_name(self._executor), 'Thread count': getattr(self._executor, 'max_workers', "???"), }, 'Supported endpoints': [str(ep) for ep in self._endpoints], 'System details': { 'Hostname': hostname, 'Pid': pid, 'Platform': platform.platform(), 'Python': sys.version.split("\n", 1)[0].strip(), 'Thread id': tu.get_ident(), }, } return banner.make_banner('WBE worker', chapters) def run(self, display_banner=True, banner_writer=None): """Runs the worker.""" if display_banner: if banner_writer is None: for line in self.banner.splitlines(): LOG.info(line) else: banner_writer(self.banner) self._server.start() def wait(self): """Wait until worker is started.""" self._server.wait() def stop(self): """Stop worker.""" self._server.stop() if self._owns_executor: self._executor.shutdown() if __name__ == '__main__': import argparse import logging as log parser = argparse.ArgumentParser() parser.add_argument("--exchange", required=True) parser.add_argument("--connection-url", required=True) parser.add_argument("--topic", required=True) parser.add_argument("--task", action='append', metavar="TASK", default=[]) parser.add_argument("-v", "--verbose", action='store_true') args = parser.parse_args() if args.verbose: log.basicConfig(level=logging.DEBUG, format="") w = Worker(args.exchange, args.topic, args.task, url=args.connection_url) w.run() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6240416 taskflow-4.6.4/taskflow/examples/0000775000175000017500000000000000000000000017052 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/99_bottles.py0000664000175000017500000002127500000000000021430 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import functools import logging import os import sys import time import traceback from kazoo import client top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow.conductors import backends as conductor_backends from taskflow import engines from taskflow.jobs import backends as job_backends from taskflow import logging as taskflow_logging from taskflow.patterns import linear_flow as lf from taskflow.persistence import backends as persistence_backends from taskflow.persistence import models from taskflow import task from oslo_utils import timeutils from oslo_utils import uuidutils # Instructions! # # 1. Install zookeeper (or change host listed below) # 2. Download this example, place in file '99_bottles.py' # 3. Run `python 99_bottles.py p` to place a song request onto the jobboard # 4. 
Run `python 99_bottles.py c` a few times (in different shells) # 5. On demand kill previously listed processes created in (4) and watch # the work resume on another process (and repeat) # 6. Keep enough workers alive to eventually finish the song (if desired). ME = os.getpid() ZK_HOST = "localhost:2181" JB_CONF = { 'hosts': ZK_HOST, 'board': 'zookeeper', 'path': '/taskflow/99-bottles-demo', } PERSISTENCE_URI = r"sqlite:////tmp/bottles.db" TAKE_DOWN_DELAY = 1.0 PASS_AROUND_DELAY = 3.0 HOW_MANY_BOTTLES = 99 class TakeABottleDown(task.Task): def execute(self, bottles_left): sys.stdout.write('Take one down, ') sys.stdout.flush() time.sleep(TAKE_DOWN_DELAY) return bottles_left - 1 class PassItAround(task.Task): def execute(self): sys.stdout.write('pass it around, ') sys.stdout.flush() time.sleep(PASS_AROUND_DELAY) class Conclusion(task.Task): def execute(self, bottles_left): sys.stdout.write('%s bottles of beer on the wall...\n' % bottles_left) sys.stdout.flush() def make_bottles(count): # This is the function that will be called to generate the workflow # and will also be called to regenerate it on resumption so that work # can continue from where it last left off... s = lf.Flow("bottle-song") take_bottle = TakeABottleDown("take-bottle-%s" % count, inject={'bottles_left': count}, provides='bottles_left') pass_it = PassItAround("pass-%s-around" % count) next_bottles = Conclusion("next-bottles-%s" % (count - 1)) s.add(take_bottle, pass_it, next_bottles) for bottle in reversed(list(range(1, count))): take_bottle = TakeABottleDown("take-bottle-%s" % bottle, provides='bottles_left') pass_it = PassItAround("pass-%s-around" % bottle) next_bottles = Conclusion("next-bottles-%s" % (bottle - 1)) s.add(take_bottle, pass_it, next_bottles) return s def run_conductor(only_run_once=False): # This continuously runs consumers until its stopped via ctrl-c or other # kill signal... event_watches = {} # This will be triggered by the conductor doing various activities # with engines, and is quite nice to be able to see the various timing # segments (which is useful for debugging, or watching, or figuring out # where to optimize). def on_conductor_event(cond, event, details): print("Event '%s' has been received..." % event) print("Details = %s" % details) if event.endswith("_start"): w = timeutils.StopWatch() w.start() base_event = event[0:-len("_start")] event_watches[base_event] = w if event.endswith("_end"): base_event = event[0:-len("_end")] try: w = event_watches.pop(base_event) w.stop() print("It took %0.3f seconds for event '%s' to finish" % (w.elapsed(), base_event)) except KeyError: pass if event == 'running_end' and only_run_once: cond.stop() print("Starting conductor with pid: %s" % ME) my_name = "conductor-%s" % ME persist_backend = persistence_backends.fetch(PERSISTENCE_URI) with contextlib.closing(persist_backend): with contextlib.closing(persist_backend.get_connection()) as conn: conn.upgrade() job_backend = job_backends.fetch(my_name, JB_CONF, persistence=persist_backend) job_backend.connect() with contextlib.closing(job_backend): cond = conductor_backends.fetch('blocking', my_name, job_backend, persistence=persist_backend) on_conductor_event = functools.partial(on_conductor_event, cond) cond.notifier.register(cond.notifier.ANY, on_conductor_event) # Run forever, and kill -9 or ctrl-c me... try: cond.run() finally: cond.stop() cond.wait() def run_poster(): # This just posts a single job and then ends... 
print("Starting poster with pid: %s" % ME) my_name = "poster-%s" % ME persist_backend = persistence_backends.fetch(PERSISTENCE_URI) with contextlib.closing(persist_backend): with contextlib.closing(persist_backend.get_connection()) as conn: conn.upgrade() job_backend = job_backends.fetch(my_name, JB_CONF, persistence=persist_backend) job_backend.connect() with contextlib.closing(job_backend): # Create information in the persistence backend about the # unit of work we want to complete and the factory that # can be called to create the tasks that the work unit needs # to be done. lb = models.LogBook("post-from-%s" % my_name) fd = models.FlowDetail("song-from-%s" % my_name, uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(persist_backend.get_connection()) as conn: conn.save_logbook(lb) engines.save_factory_details(fd, make_bottles, [HOW_MANY_BOTTLES], {}, backend=persist_backend) # Post, and be done with it! jb = job_backend.post("song-from-%s" % my_name, book=lb) print("Posted: %s" % jb) print("Goodbye...") def main_local(): # Run locally typically this is activating during unit testing when all # the examples are made sure to still function correctly... global TAKE_DOWN_DELAY global PASS_AROUND_DELAY global JB_CONF # Make everything go much faster (so that this finishes quickly). PASS_AROUND_DELAY = 0.01 TAKE_DOWN_DELAY = 0.01 JB_CONF['path'] = JB_CONF['path'] + "-" + uuidutils.generate_uuid() run_poster() run_conductor(only_run_once=True) def check_for_zookeeper(timeout=1): sys.stderr.write("Testing for the existence of a zookeeper server...\n") sys.stderr.write("Please wait....\n") with contextlib.closing(client.KazooClient()) as test_client: try: test_client.start(timeout=timeout) except test_client.handler.timeout_exception: sys.stderr.write("Zookeeper is needed for running this example!\n") traceback.print_exc() return False else: test_client.stop() return True def main(): if not check_for_zookeeper(): return if len(sys.argv) == 1: main_local() elif sys.argv[1] in ('p', 'c'): if sys.argv[-1] == "v": logging.basicConfig(level=taskflow_logging.TRACE) else: logging.basicConfig(level=logging.ERROR) if sys.argv[1] == 'p': run_poster() else: run_conductor() else: sys.stderr.write("%s p|c (v?)\n" % os.path.basename(sys.argv[0])) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/alphabet_soup.py0000664000175000017500000000626700000000000022265 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import fractions import functools import logging import os import string import sys import time logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow import exceptions from taskflow.patterns import linear_flow from taskflow import task # In this example we show how a simple linear set of tasks can be executed # using local processes (and not threads or remote workers) with minimal (if # any) modification to those tasks to make them safe to run in this mode. # # This is useful since it allows further scaling up your workflows when thread # execution starts to become a bottleneck (which it can start to be due to the # GIL in python). It also offers a intermediary scalable runner that can be # used when the scale and/or setup of remote workers is not desirable. def progress_printer(task, event_type, details): # This callback, attached to each task will be called in the local # process (not the child processes)... progress = details.pop('progress') progress = int(progress * 100.0) print("Task '%s' reached %d%% completion" % (task.name, progress)) class AlphabetTask(task.Task): # Second delay between each progress part. _DELAY = 0.1 # This task will run in X main stages (each with a different progress # report that will be delivered back to the running process...). The # initial 0% and 100% are triggered automatically by the engine when # a task is started and finished (so that's why those are not emitted # here). _PROGRESS_PARTS = [fractions.Fraction("%s/5" % x) for x in range(1, 5)] def execute(self): for p in self._PROGRESS_PARTS: self.update_progress(p) time.sleep(self._DELAY) print("Constructing...") soup = linear_flow.Flow("alphabet-soup") for letter in string.ascii_lowercase: abc = AlphabetTask(letter) abc.notifier.register(task.EVENT_UPDATE_PROGRESS, functools.partial(progress_printer, abc)) soup.add(abc) try: print("Loading...") e = engines.load(soup, engine='parallel', executor='processes') print("Compiling...") e.compile() print("Preparing...") e.prepare() print("Running...") e.run() print("Done: %s" % e.statistics) except exceptions.NotImplementedError as e: print(e) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/build_a_car.py0000664000175000017500000001463200000000000021656 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.types import notifier ANY = notifier.Notifier.ANY import example_utils as eu # noqa # INTRO: This example shows how a graph flow and linear flow can be used # together to execute dependent & non-dependent tasks by going through the # steps required to build a simplistic car (an assembly line if you will). It # also shows how raw functions can be wrapped into a task object instead of # being forced to use the more *heavy* task base class. This is useful in # scenarios where pre-existing code has functions that you easily want to # plug-in to taskflow, without requiring a large amount of code changes. def build_frame(): return 'steel' def build_engine(): return 'honda' def build_doors(): return '2' def build_wheels(): return '4' # These just return true to indiciate success, they would in the real work # do more than just that. def install_engine(frame, engine): return True def install_doors(frame, windows_installed, doors): return True def install_windows(frame, doors): return True def install_wheels(frame, engine, engine_installed, wheels): return True def trash(**kwargs): eu.print_wrapped("Throwing away pieces of car!") def startup(**kwargs): # If you want to see the rollback function being activated try uncommenting # the following line. # # raise ValueError("Car not verified") return True def verify(spec, **kwargs): # If the car is not what we ordered throw away the car (trigger reversion). for key, value in kwargs.items(): if spec[key] != value: raise Exception("Car doesn't match spec!") return True # These two functions connect into the state transition notification emission # points that the engine outputs, they can be used to log state transitions # that are occurring, or they can be used to suspend the engine (or perform # other useful activities). def flow_watch(state, details): print('Flow => %s' % state) def task_watch(state, details): print('Task %s => %s' % (details.get('task_name'), state)) flow = lf.Flow("make-auto").add( task.FunctorTask(startup, revert=trash, provides='ran'), # A graph flow allows automatic dependency based ordering, the ordering # is determined by analyzing the symbols required and provided and ordering # execution based on a functioning order (if one exists). gf.Flow("install-parts").add( task.FunctorTask(build_frame, provides='frame'), task.FunctorTask(build_engine, provides='engine'), task.FunctorTask(build_doors, provides='doors'), task.FunctorTask(build_wheels, provides='wheels'), # These *_installed outputs allow for other tasks to depend on certain # actions being performed (aka the components were installed), another # way to do this is to link() the tasks manually instead of creating # an 'artificial' data dependency that accomplishes the same goal the # manual linking would result in. 
task.FunctorTask(install_engine, provides='engine_installed'), task.FunctorTask(install_doors, provides='doors_installed'), task.FunctorTask(install_windows, provides='windows_installed'), task.FunctorTask(install_wheels, provides='wheels_installed')), task.FunctorTask(verify, requires=['frame', 'engine', 'doors', 'wheels', 'engine_installed', 'doors_installed', 'windows_installed', 'wheels_installed'])) # This dictionary will be provided to the tasks as a specification for what # the tasks should produce, in this example this specification will influence # what those tasks do and what output they create. Different tasks depend on # different information from this specification, all of which will be provided # automatically by the engine to those tasks. spec = { "frame": 'steel', "engine": 'honda', "doors": '2', "wheels": '4', # These are used to compare the result product, a car without the pieces # installed is not a car after all. "engine_installed": True, "doors_installed": True, "windows_installed": True, "wheels_installed": True, } engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) # This registers all (ANY) state transitions to trigger a call to the # flow_watch function for flow state transitions, and registers the # same all (ANY) state transitions for task state transitions. engine.notifier.register(ANY, flow_watch) engine.atom_notifier.register(ANY, task_watch) eu.print_wrapped("Building a car") engine.run() # Alter the specification and ensure that the reverting logic gets triggered # since the resultant car that will be built by the build_wheels function will # build a car with 4 doors only (not 5), this will cause the verification # task to mark the car that is produced as not matching the desired spec. spec['doors'] = 5 engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) engine.notifier.register(ANY, flow_watch) engine.atom_notifier.register(ANY, task_watch) eu.print_wrapped("Building a wrong car that doesn't match specification") try: engine.run() except Exception as e: eu.print_wrapped("Flow failed: %s" % e) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/buildsystem.py0000664000175000017500000000750500000000000021777 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow import task import example_utils as eu # noqa # In this example we demonstrate use of a target flow (a flow that only # executes up to a specified target) to make an *oversimplified* pseudo # build system. It pretends to compile all sources to object files and # link them into an executable. 
It also can build docs, but this can be # "switched off" via targeted flow special power -- ability to ignore # all tasks not needed by its target. class CompileTask(task.Task): """Pretends to take a source and make object file.""" default_provides = 'object_filename' def execute(self, source_filename): object_filename = '%s.o' % os.path.splitext(source_filename)[0] print('Compiling %s into %s' % (source_filename, object_filename)) return object_filename class LinkTask(task.Task): """Pretends to link executable form several object files.""" default_provides = 'executable' def __init__(self, executable_path, *args, **kwargs): super(LinkTask, self).__init__(*args, **kwargs) self._executable_path = executable_path def execute(self, **kwargs): object_filenames = list(kwargs.values()) print('Linking executable %s from files %s' % (self._executable_path, ', '.join(object_filenames))) return self._executable_path class BuildDocsTask(task.Task): """Pretends to build docs from sources.""" default_provides = 'docs' def execute(self, **kwargs): for source_filename in kwargs.values(): print("Building docs for %s" % source_filename) return 'docs' def make_flow_and_store(source_files, executable_only=False): flow = gf.TargetedFlow('build-flow') object_targets = [] store = {} for source in source_files: source_stored = '%s-source' % source object_stored = '%s-object' % source store[source_stored] = source object_targets.append(object_stored) flow.add(CompileTask(name='compile-%s' % source, rebind={'source_filename': source_stored}, provides=object_stored)) flow.add(BuildDocsTask(requires=list(store.keys()))) # Try this to see executable_only switch broken: object_targets.append('docs') link_task = LinkTask('build/executable', requires=object_targets) flow.add(link_task) if executable_only: flow.set_target(link_task) return flow, store if __name__ == "__main__": SOURCE_FILES = ['first.c', 'second.cpp', 'main.cpp'] eu.print_wrapped('Running all tasks:') flow, store = make_flow_and_store(SOURCE_FILES) taskflow.engines.run(flow, store=store) eu.print_wrapped('Building executable, no docs:') flow, store = make_flow_and_store(SOURCE_FILES, executable_only=True) taskflow.engines.run(flow, store=store) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/calculate_in_parallel.py0000664000175000017500000000751500000000000023733 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import task # INTRO: These examples show how a linear flow and an unordered flow can be # used together to execute calculations in parallel and then use the # result for the next task/s. The adder task is used for all calculations # and argument bindings are used to set correct parameters for each task. # This task provides some values as a result of execution; this can be # useful when you want to provide values from a static set to other tasks that # depend on those values existing before those tasks can run. # # NOTE(harlowja): this usage is *deprecated* in favor of a simpler mechanism # that provides those values on engine running by prepopulating the storage # backend before your tasks are run (which accomplishes a similar goal in a # more uniform manner). class Provider(task.Task): def __init__(self, name, *args, **kwargs): super(Provider, self).__init__(name=name, **kwargs) self._provide = args def execute(self): return self._provide # This task adds two input variables and returns the result of that addition. # # Note that since this task does not have a revert() function (since addition # is a stateless operation) there are no side-effects that this function needs # to undo if some later operation fails. class Adder(task.Task): def execute(self, x, y): return x + y flow = lf.Flow('root').add( # Provide the initial values for other tasks to depend on. # # x1 = 2, y1 = 3, x2 = 5, y2 = 8 Provider("provide-adder", 2, 3, 5, 8, provides=('x1', 'y1', 'x2', 'y2')), # Note here that we define the flow that contains the 2 adders to be an # unordered flow since the order in which these execute does not matter, # another way to solve this would be to use a graph_flow pattern, which # also can run in parallel (since they have no ordering dependencies). uf.Flow('adders').add( # Calculate 'z1 = x1+y1 = 5' # # Rebind here means that the execute() function x argument will be # satisfied from a previous output named 'x1', and the y argument # of execute() will be populated from the previous output named 'y1' # # The output (result of adding) will be mapped into a variable named # 'z1' which can then be referred to and depended on by other tasks. Adder(name="add", provides='z1', rebind=['x1', 'y1']), # z2 = x2+y2 = 13 Adder(name="add-2", provides='z2', rebind=['x2', 'y2']), ), # r = z1+z2 = 18 Adder(name="sum-1", provides='r', rebind=['z1', 'z2'])) # The result here will be all results (from all tasks) which is stored in an # in-memory storage location that backs this engine since it is not configured # with persistence storage. result = taskflow.engines.run(flow, engine='parallel') print(result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/calculate_linear.py0000664000175000017500000001100600000000000022711 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example a linear flow is used to group four tasks to calculate # a value. A single added task is used twice, showing how this can be done # and the twice added task takes in different bound values. In the first case # it uses default parameters ('x' and 'y') and in the second case arguments # are bound with ('z', 'd') keys from the engines internal storage mechanism. # # A multiplier task uses a binding that another task also provides, but this # example explicitly shows that 'z' parameter is bound with 'a' key # This shows that if a task depends on a key named the same as a key provided # from another task the name can be remapped to take the desired key from a # different origin. # This task provides some values from as a result of execution, this can be # useful when you want to provide values from a static set to other tasks that # depend on those values existing before those tasks can run. # # NOTE(harlowja): this usage is *depreciated* in favor of a simpler mechanism # that just provides those values on engine running by prepopulating the # storage backend before your tasks are ran (which accomplishes a similar goal # in a more uniform manner). class Provider(task.Task): def __init__(self, name, *args, **kwargs): super(Provider, self).__init__(name=name, **kwargs) self._provide = args def execute(self): return self._provide # This task adds two input variables and returns the result. # # Note that since this task does not have a revert() function (since addition # is a stateless operation) there are no side-effects that this function needs # to undo if some later operation fails. class Adder(task.Task): def execute(self, x, y): return x + y # This task multiplies an input variable by a multiplier and returns the # result. # # Note that since this task does not have a revert() function (since # multiplication is a stateless operation) and there are no side-effects that # this function needs to undo if some later operation fails. class Multiplier(task.Task): def __init__(self, name, multiplier, provides=None, rebind=None): super(Multiplier, self).__init__(name=name, provides=provides, rebind=rebind) self._multiplier = multiplier def execute(self, z): return z * self._multiplier # Note here that the ordering is established so that the correct sequences # of operations occurs where the adding and multiplying is done according # to the expected and typical mathematical model. A graph flow could also be # used here to automatically infer & ensure the correct ordering. flow = lf.Flow('root').add( # Provide the initial values for other tasks to depend on. 
# # x = 2, y = 3, d = 5 Provider("provide-adder", 2, 3, 5, provides=('x', 'y', 'd')), # z = x+y = 5 Adder("add-1", provides='z'), # a = z+d = 10 Adder("add-2", provides='a', rebind=['z', 'd']), # Calculate 'r = a*3 = 30' # # Note here that the 'z' argument of the execute() function will not be # bound to the 'z' variable provided from the above 'provider' object but # instead the 'z' argument will be taken from the 'a' variable provided # by the second add-2 listed above. Multiplier("multi", 3, provides='r', rebind={'z': 'a'}) ) # The result here will be all results (from all tasks) which is stored in an # in-memory storage location that backs this engine since it is not configured # with persistence storage. results = taskflow.engines.run(flow) print(results) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/create_parallel_volume.py0000664000175000017500000001036000000000000024132 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from oslo_utils import reflection from taskflow import engines from taskflow.listeners import printing from taskflow.patterns import unordered_flow as uf from taskflow import task # INTRO: These examples show how unordered_flow can be used to create a large # number of fake volumes in parallel (or serially, depending on a constant that # can be easily changed). @contextlib.contextmanager def show_time(name): start = time.time() yield end = time.time() print(" -- %s took %0.3f seconds" % (name, end - start)) # This affects how many volumes to create and how much time to *simulate* # passing for that volume to be created. MAX_CREATE_TIME = 3 VOLUME_COUNT = 5 # This will be used to determine if all the volumes are created in parallel # or whether the volumes are created serially (in an undefined ordered since # a unordered flow is used). Note that there is a disconnection between the # ordering and the concept of parallelism (since unordered items can still be # ran in a serial ordering). A typical use-case for offering both is to allow # for debugging using a serial approach, while when running at a larger scale # one would likely want to use the parallel approach. # # If you switch this flag from serial to parallel you can see the overall # time difference that this causes. 
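# As a rough back-of-the-envelope comparison (using the constants above,
# VOLUME_COUNT = 5 and MAX_CREATE_TIME = 3, and assuming the parallel engine
# has enough executor workers to run every volume at once): the serial engine
# sleeps through the *sum* of the per-volume delays (up to ~15 seconds in
# total), while the parallel engine is only bounded by the *slowest* single
# volume (up to ~3 seconds).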
SERIAL = False if SERIAL: engine = 'serial' else: engine = 'parallel' class VolumeCreator(task.Task): def __init__(self, volume_id): # Note here that the volume name is composed of the name of the class # along with the volume id that is being created, since a name of a # task uniquely identifies that task in storage it is important that # the name be relevant and identifiable if the task is recreated for # subsequent resumption (if applicable). # # UUIDs are *not* used as they can not be tied back to a previous tasks # state on resumption (since they are unique and will vary for each # task that is created). A name based off the volume id that is to be # created is more easily tied back to the original task so that the # volume create can be resumed/revert, and is much easier to use for # audit and tracking purposes. base_name = reflection.get_callable_name(self) super(VolumeCreator, self).__init__(name="%s-%s" % (base_name, volume_id)) self._volume_id = volume_id def execute(self): print("Making volume %s" % (self._volume_id)) time.sleep(random.random() * MAX_CREATE_TIME) print("Finished making volume %s" % (self._volume_id)) # Assume there is no ordering dependency between volumes. flow = uf.Flow("volume-maker") for i in range(0, VOLUME_COUNT): flow.add(VolumeCreator(volume_id="vol-%s" % (i))) # Show how much time the overall engine loading and running takes. with show_time(name=flow.name.title()): eng = engines.load(flow, engine=engine) # This context manager automatically adds (and automatically removes) a # helpful set of state transition notification printing helper utilities # that show you exactly what transitions the engine is going through # while running the various volume create tasks. with printing.PrintingListener(eng): eng.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/delayed_return.py0000664000175000017500000000543600000000000022442 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys from concurrent import futures logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) # INTRO: in this example linear_flow we will attach a listener to an engine # and delay the return from a function until after the result of a task has # occurred in that engine. The engine will continue running (in the background) # while the function will have returned. 
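# Boiled down, the wiring below amounts to roughly the following (a sketch of
# the return_from_flow() function defined later in this file; 'hi' is the name
# of the task whose result we wait on):
#
#   f = futures.Future()
#   PokeFutureListener(eng, f, 'hi').register()
#   pool.submit(eng.run)       # the engine keeps running in the background
#   hi_result = f.result()     # returns as soon as task 'hi' has a result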
import taskflow.engines from taskflow.listeners import base from taskflow.patterns import linear_flow as lf from taskflow import states from taskflow import task from taskflow.types import notifier class PokeFutureListener(base.Listener): def __init__(self, engine, future, task_name): super(PokeFutureListener, self).__init__( engine, task_listen_for=(notifier.Notifier.ANY,), flow_listen_for=[]) self._future = future self._task_name = task_name def _task_receiver(self, state, details): if state in (states.SUCCESS, states.FAILURE): if details.get('task_name') == self._task_name: if state == states.SUCCESS: self._future.set_result(details['result']) else: failure = details['result'] self._future.set_exception(failure.exception) class Hi(task.Task): def execute(self): # raise IOError("I broken") return 'hi' class Bye(task.Task): def execute(self): return 'bye' def return_from_flow(pool): wf = lf.Flow("root").add(Hi("hi"), Bye("bye")) eng = taskflow.engines.load(wf, engine='serial') f = futures.Future() watcher = PokeFutureListener(eng, f, 'hi') watcher.register() pool.submit(eng.run) return (eng, f.result()) with futures.ThreadPoolExecutor(1) as pool: engine, hi_result = return_from_flow(pool) print(hi_result) print(engine.storage.get_flow_state()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/distance_calculator.py0000664000175000017500000001054500000000000023434 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import math import os import sys top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.patterns import linear_flow from taskflow import task # INTRO: This shows how to use a tasks/atoms ability to take requirements from # its execute functions default parameters and shows how to provide those # via different methods when needed, to influence those parameters to in # this case calculate the distance between two points in 2D space. # A 2D point. Point = collections.namedtuple("Point", "x,y") def is_near(val, expected, tolerance=0.001): # Floats don't really provide equality... if val > (expected + tolerance): return False if val < (expected - tolerance): return False return True class DistanceTask(task.Task): # See: http://en.wikipedia.org/wiki/Distance#Distance_in_Euclidean_space default_provides = 'distance' def execute(self, a=Point(0, 0), b=Point(0, 0)): return math.sqrt(math.pow(b.x - a.x, 2) + math.pow(b.y - a.y, 2)) if __name__ == '__main__': # For these we rely on the execute() methods points by default being # at the origin (and we override it with store values when we want) at # execution time (which then influences what is calculated). 
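# For example, with a = Point(1, 1) and the default b = Point(0, 0) the task
# computes sqrt((0 - 1)**2 + (0 - 1)**2) = sqrt(2) ~= 1.4142, which is exactly
# the value the second is_near() check below compares against.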
any_distance = linear_flow.Flow("origin").add(DistanceTask()) results = engines.run(any_distance) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 0.0, is_near(results['distance'], 0.0))) results = engines.run(any_distance, store={'a': Point(1, 1)}) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 1.4142, is_near(results['distance'], 1.4142))) results = engines.run(any_distance, store={'a': Point(10, 10)}) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 14.14199, is_near(results['distance'], 14.14199))) results = engines.run(any_distance, store={'a': Point(5, 5), 'b': Point(10, 10)}) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 7.07106, is_near(results['distance'], 7.07106))) # For this we use the ability to override at task creation time the # optional arguments so that we don't need to continue to send them # in via the 'store' argument like in the above (and we fix the new # starting point 'a' at (10, 10) instead of (0, 0)... ten_distance = linear_flow.Flow("ten") ten_distance.add(DistanceTask(inject={'a': Point(10, 10)})) results = engines.run(ten_distance, store={'b': Point(10, 10)}) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 0.0, is_near(results['distance'], 0.0))) results = engines.run(ten_distance) print(results) print("%s is near-enough to %s: %s" % (results['distance'], 14.14199, is_near(results['distance'], 14.14199))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/dump_memory_backend.py0000664000175000017500000000411300000000000023427 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: in this example we create a dummy flow with a dummy task, and run # it using a in-memory backend and pre/post run we dump out the contents # of the in-memory backends tree structure (which can be quite useful to # look at for debugging or other analysis). class PrintTask(task.Task): def execute(self): print("Running '%s'" % self.name) # Make a little flow and run it... f = lf.Flow('root') for alpha in ['a', 'b', 'c']: f.add(PrintTask(alpha)) e = engines.load(f) e.compile() e.prepare() # After prepare the storage layer + backend can now be accessed safely... 
backend = e.storage.backend print("----------") print("Before run") print("----------") print(backend.memory.pformat()) print("----------") e.run() print("---------") print("After run") print("---------") for path in backend.memory.ls_r(backend.memory.root_path, absolute=True): value = backend.memory[path] if value: print("%s -> %s" % (path, value)) else: print("%s" % (path)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/echo_listener.py0000664000175000017500000000356400000000000022257 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.DEBUG) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.listeners import logging as logging_listener from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: This example walks through a miniature workflow which will do a # simple echo operation; during this execution a listener is associated with # the engine to receive all notifications about what the flow has performed, # this example dumps that output to the stdout for viewing (at debug level # to show all the information which is possible). class Echo(task.Task): def execute(self): print(self.name) # Generate the work to be done (but don't do it yet). wf = lf.Flow('abc') wf.add(Echo('a')) wf.add(Echo('b')) wf.add(Echo('c')) # This will associate the listener with the engine (the listener # will automatically register for notifications with the engine and deregister # when the context is exited). e = engines.load(wf) with logging_listener.DynamicLoggingListener(e): e.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/example_utils.py0000664000175000017500000000635200000000000022305 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib import logging import os import shutil import sys import tempfile from six.moves import urllib_parse from taskflow import exceptions from taskflow.persistence import backends LOG = logging.getLogger(__name__) try: import sqlalchemy as _sa # noqa SQLALCHEMY_AVAILABLE = True except ImportError: SQLALCHEMY_AVAILABLE = False def print_wrapped(text): print("-" * (len(text))) print(text) print("-" * (len(text))) def rm_path(persist_path): if not os.path.exists(persist_path): return if os.path.isdir(persist_path): rm_func = shutil.rmtree elif os.path.isfile(persist_path): rm_func = os.unlink else: raise ValueError("Unknown how to `rm` path: %s" % (persist_path)) try: rm_func(persist_path) except (IOError, OSError): pass def _make_conf(backend_uri): parsed_url = urllib_parse.urlparse(backend_uri) backend_type = parsed_url.scheme.lower() if not backend_type: raise ValueError("Unknown backend type for uri: %s" % (backend_type)) if backend_type in ('file', 'dir'): conf = { 'path': parsed_url.path, 'connection': backend_uri, } elif backend_type in ('zookeeper',): conf = { 'path': parsed_url.path, 'hosts': parsed_url.netloc, 'connection': backend_uri, } else: conf = { 'connection': backend_uri, } return conf @contextlib.contextmanager def get_backend(backend_uri=None): tmp_dir = None if not backend_uri: if len(sys.argv) > 1: backend_uri = str(sys.argv[1]) if not backend_uri: tmp_dir = tempfile.mkdtemp() backend_uri = "file:///%s" % tmp_dir try: backend = backends.fetch(_make_conf(backend_uri)) except exceptions.NotFound as e: # Fallback to one that will work if the provided backend is not found. if not tmp_dir: tmp_dir = tempfile.mkdtemp() backend_uri = "file:///%s" % tmp_dir LOG.exception("Falling back to file backend using temporary" " directory located at: %s", tmp_dir) backend = backends.fetch(_make_conf(backend_uri)) else: raise e try: # Ensure schema upgraded before we continue working. with contextlib.closing(backend.get_connection()) as conn: conn.upgrade() yield backend finally: # Make sure to cleanup the temporary path if one was created for us. if tmp_dir: rm_path(tmp_dir) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/fake_billing.py0000664000175000017500000001526000000000000022036 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import json import logging import os import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from oslo_utils import uuidutils from taskflow import engines from taskflow.listeners import printing from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.utils import misc # INTRO: This example walks through a miniature workflow which simulates # the reception of an API request, creation of a database entry, driver # activation (which invokes a 'fake' webservice) and final completion. # # This example also shows how a function/object (in this case the url sending) # that occurs during driver activation can update the progress of a task # without being aware of the internals of how to do this by associating a # callback that the url sending can update as the sending progresses from 0.0% # complete to 100% complete. class DB(object): def query(self, sql): print("Querying with: %s" % (sql)) class UrlCaller(object): def __init__(self): self._send_time = 0.5 self._chunks = 25 def send(self, url, data, status_cb=None): sleep_time = float(self._send_time) / self._chunks for i in range(0, len(data)): time.sleep(sleep_time) # As we send the data, each chunk we 'fake' send will progress # the sending progress that much further to 100%. if status_cb: status_cb(float(i) / len(data)) # Since engines save the output of tasks to an optional persistent storage # backend, resources have to be dealt with in a slightly different manner since # resources are transient and can *not* be persisted (or serialized). For tasks # that require access to a set of resources it is a common pattern to provide # an object (in this case this object) on construction of those tasks via the # task constructor. class ResourceFetcher(object): def __init__(self): self._db_handle = None self._url_handle = None @property def db_handle(self): if self._db_handle is None: self._db_handle = DB() return self._db_handle @property def url_handle(self): if self._url_handle is None: self._url_handle = UrlCaller() return self._url_handle class ExtractInputRequest(task.Task): def __init__(self, resources): super(ExtractInputRequest, self).__init__(provides="parsed_request") self._resources = resources def execute(self, request): return { 'user': request.user, 'user_id': misc.as_int(request.id), 'request_id': uuidutils.generate_uuid(), } class MakeDBEntry(task.Task): def __init__(self, resources): super(MakeDBEntry, self).__init__() self._resources = resources def execute(self, parsed_request): db_handle = self._resources.db_handle db_handle.query("INSERT %s INTO mydb" % (parsed_request)) def revert(self, result, parsed_request): db_handle = self._resources.db_handle db_handle.query("DELETE %s FROM mydb IF EXISTS" % (parsed_request)) class ActivateDriver(task.Task): def __init__(self, resources): super(ActivateDriver, self).__init__(provides='sent_to') self._resources = resources self._url = "http://blahblah.com" def execute(self, parsed_request): print("Sending billing data to %s" % (self._url)) url_sender = self._resources.url_handle # Note that here we attach our update_progress function (which is a # function that the engine also 'binds' to) to the progress function # that the url sending helper class uses.
This allows the task progress # to be tied to the url sending progress, which is very useful for # downstream systems to be aware of what a task is doing at any time. url_sender.send(self._url, json.dumps(parsed_request), status_cb=self.update_progress) return self._url def update_progress(self, progress, **kwargs): # Override the parent method to also print out the status. super(ActivateDriver, self).update_progress(progress, **kwargs) print("%s is %0.2f%% done" % (self.name, progress * 100)) class DeclareSuccess(task.Task): def execute(self, sent_to): print("Done!") print("All data processed and sent to %s" % (sent_to)) class DummyUser(object): def __init__(self, user, id_): self.user = user self.id = id_ # Resources (db handles and similar) of course can *not* be persisted so we # need to make sure that we pass this resource fetcher to the tasks constructor # so that the tasks have access to any needed resources (the resources are # lazily loaded so that they are only created when they are used). resources = ResourceFetcher() flow = lf.Flow("initialize-me") # 1. First we extract the api request into a usable format. # 2. Then we go ahead and make a database entry for our request. flow.add(ExtractInputRequest(resources), MakeDBEntry(resources)) # 3. Then we activate our payment method and finally declare success. sub_flow = gf.Flow("after-initialize") sub_flow.add(ActivateDriver(resources), DeclareSuccess()) flow.add(sub_flow) # Initially populate the storage with the following request object, # prepopulating this allows the tasks that dependent on the 'request' variable # to start processing (in this case this is the ExtractInputRequest task). store = { 'request': DummyUser(user="bob", id_="1.35"), } eng = engines.load(flow, engine='serial', store=store) # This context manager automatically adds (and automatically removes) a # helpful set of state transition notification printing helper utilities # that show you exactly what transitions the engine is going through # while running the various billing related tasks. with printing.PrintingListener(eng): eng.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/graph_flow.py0000664000175000017500000000660500000000000021563 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task # In this example there are complex *inferred* dependencies between tasks that # are used to perform a simple set of linear equations. # # As you will see below the tasks just define what they require as input # and produce as output (named values). 
Then the user doesn't care about # ordering the tasks (in this case the tasks calculate pieces of the overall # equation). # # As you will notice a graph flow resolves dependencies automatically using the # tasks symbol requirements and provided symbol values and no orderin # dependency has to be manually created. # # Also notice that flows of any types can be nested into a graph flow; showing # that subflow dependencies (and associated ordering) will be inferred too. class Adder(task.Task): def execute(self, x, y): return x + y flow = gf.Flow('root').add( lf.Flow('nested_linear').add( # x2 = y3+y4 = 12 Adder("add2", provides='x2', rebind=['y3', 'y4']), # x1 = y1+y2 = 4 Adder("add1", provides='x1', rebind=['y1', 'y2']) ), # x5 = x1+x3 = 20 Adder("add5", provides='x5', rebind=['x1', 'x3']), # x3 = x1+x2 = 16 Adder("add3", provides='x3', rebind=['x1', 'x2']), # x4 = x2+y5 = 21 Adder("add4", provides='x4', rebind=['x2', 'y5']), # x6 = x5+x4 = 41 Adder("add6", provides='x6', rebind=['x5', 'x4']), # x7 = x6+x6 = 82 Adder("add7", provides='x7', rebind=['x6', 'x6'])) # Provide the initial variable inputs using a storage dictionary. store = { "y1": 1, "y2": 3, "y3": 5, "y4": 7, "y5": 9, } # This is the expected values that should be created. unexpected = 0 expected = [ ('x1', 4), ('x2', 12), ('x3', 16), ('x4', 21), ('x5', 20), ('x6', 41), ('x7', 82), ] result = taskflow.engines.run( flow, engine='serial', store=store) print("Single threaded engine result %s" % result) for (name, value) in expected: actual = result.get(name) if actual != value: sys.stderr.write("%s != %s\n" % (actual, value)) unexpected += 1 result = taskflow.engines.run( flow, engine='parallel', store=store) print("Multi threaded engine result %s" % result) for (name, value) in expected: actual = result.get(name) if actual != value: sys.stderr.write("%s != %s\n" % (actual, value)) unexpected += 1 if unexpected: sys.exit(1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/hello_world.py0000664000175000017500000001017000000000000021735 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import task # INTRO: This is the defacto hello world equivalent for taskflow; it shows how # an overly simplistic workflow can be created that runs using different # engines using different styles of execution (all can be used to run in # parallel if a workflow is provided that is parallelizable). 
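# As a rough sketch (using nothing beyond the imports above), the smallest
# possible version of what follows is a single task added to a linear flow
# and handed to an engine to run:
#
#   class HelloTask(task.Task):
#       def execute(self):
#           print("hello world")
#
#   engines.run(lf.Flow("hello").add(HelloTask()))
#
# The rest of this example builds a larger "song" out of those same pieces
# and runs it on several different engine/executor combinations.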
class PrinterTask(task.Task): def __init__(self, name, show_name=True, inject=None): super(PrinterTask, self).__init__(name, inject=inject) self._show_name = show_name def execute(self, output): if self._show_name: print("%s: %s" % (self.name, output)) else: print(output) # This will be the work that we want done, which for this example is just to # print 'hello world' (like a song) using different tasks and different # execution models. song = lf.Flow("beats") # Unordered flows when ran can be ran in parallel; and a chorus is everyone # singing at once of course! hi_chorus = uf.Flow('hello') world_chorus = uf.Flow('world') for (name, hello, world) in [('bob', 'hello', 'world'), ('joe', 'hellooo', 'worllllld'), ('sue', "helloooooo!", 'wooorllld!')]: hi_chorus.add(PrinterTask("%s@hello" % name, # This will show up to the execute() method of # the task as the argument named 'output' (which # will allow us to print the character we want). inject={'output': hello})) world_chorus.add(PrinterTask("%s@world" % name, inject={'output': world})) # The composition starts with the conductor and then runs in sequence with # the chorus running in parallel, but no matter what the 'hello' chorus must # always run before the 'world' chorus (otherwise the world will fall apart). song.add(PrinterTask("conductor@begin", show_name=False, inject={'output': "*ding*"}), hi_chorus, world_chorus, PrinterTask("conductor@end", show_name=False, inject={'output': "*dong*"})) # Run in parallel using eventlet green threads... try: import eventlet as _eventlet # noqa except ImportError: # No eventlet currently active, skip running with it... pass else: print("-- Running in parallel using eventlet --") e = engines.load(song, executor='greenthreaded', engine='parallel', max_workers=1) e.run() # Run in parallel using real threads... print("-- Running in parallel using threads --") e = engines.load(song, executor='threaded', engine='parallel', max_workers=1) e.run() # Run in parallel using external processes... print("-- Running in parallel using processes --") e = engines.load(song, executor='processes', engine='parallel', max_workers=1) e.run() # Run serially (aka, if the workflow could have been ran in parallel, it will # not be when ran in this mode)... print("-- Running serially --") e = engines.load(song, engine='serial') e.run() print("-- Statistics gathered --") print(e.statistics) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/jobboard_produce_consume_colors.py0000664000175000017500000001543400000000000026050 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import contextlib import logging import os import random import sys import threading import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import six from six.moves import range as compat_range from zake import fake_client from taskflow import exceptions as excp from taskflow.jobs import backends from taskflow.utils import threading_utils # In this example we show how a jobboard can be used to post work for other # entities to work on. This example creates a set of jobs using one producer # thread (typically this would be split across many machines) and then having # other worker threads with their own jobboards select work using a given # filters [red/blue] and then perform that work (and consuming or abandoning # the job after it has been completed or failed). # Things to note: # - No persistence layer is used (or logbook), just the job details are used # to determine if a job should be selected by a worker or not. # - This example runs in a single process (this is expected to be atypical # but this example shows that it can be done if needed, for testing...) # - The iterjobs(), claim(), consume()/abandon() worker workflow. # - The post() producer workflow. SHARED_CONF = { 'path': "/taskflow/jobs", 'board': 'zookeeper', } # How many workers and producers of work will be created (as threads). PRODUCERS = 3 WORKERS = 5 # How many units of work each producer will create. PRODUCER_UNITS = 10 # How many units of work are expected to be produced (used so workers can # know when to stop running and shutdown, typically this would not be a # a value but we have to limit this example's execution time to be less than # infinity). EXPECTED_UNITS = PRODUCER_UNITS * PRODUCERS # Delay between producing/consuming more work. WORKER_DELAY, PRODUCER_DELAY = (0.5, 0.5) # To ensure threads don't trample other threads output. STDOUT_LOCK = threading.Lock() def dispatch_work(job): # This is where the jobs contained work *would* be done time.sleep(1.0) def safe_print(name, message, prefix=""): with STDOUT_LOCK: if prefix: print("%s %s: %s" % (prefix, name, message)) else: print("%s: %s" % (name, message)) def worker(ident, client, consumed): # Create a personal board (using the same client so that it works in # the same process) and start looking for jobs on the board that we want # to perform. name = "W-%s" % (ident) safe_print(name, "started") claimed_jobs = 0 consumed_jobs = 0 abandoned_jobs = 0 with backends.backend(name, SHARED_CONF.copy(), client=client) as board: while len(consumed) != EXPECTED_UNITS: favorite_color = random.choice(['blue', 'red']) for job in board.iterjobs(ensure_fresh=True, only_unclaimed=True): # See if we should even bother with it... 
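# (Each job posted by the producer threads carries a 'color' detail;
# this worker only claims jobs whose color matches the favorite color
# it randomly picked for this pass over the board.)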
if job.details.get('color') != favorite_color: continue safe_print(name, "'%s' [attempting claim]" % (job)) try: board.claim(job, name) claimed_jobs += 1 safe_print(name, "'%s' [claimed]" % (job)) except (excp.NotFound, excp.UnclaimableJob): safe_print(name, "'%s' [claim unsuccessful]" % (job)) else: try: dispatch_work(job) board.consume(job, name) safe_print(name, "'%s' [consumed]" % (job)) consumed_jobs += 1 consumed.append(job) except Exception: board.abandon(job, name) abandoned_jobs += 1 safe_print(name, "'%s' [abandoned]" % (job)) time.sleep(WORKER_DELAY) safe_print(name, "finished (claimed %s jobs, consumed %s jobs," " abandoned %s jobs)" % (claimed_jobs, consumed_jobs, abandoned_jobs), prefix=">>>") def producer(ident, client): # Create a personal board (using the same client so that it works in # the same process) and start posting jobs on the board that we want # some entity to perform. name = "P-%s" % (ident) safe_print(name, "started") with backends.backend(name, SHARED_CONF.copy(), client=client) as board: for i in compat_range(0, PRODUCER_UNITS): job_name = "%s-%s" % (name, i) details = { 'color': random.choice(['red', 'blue']), } job = board.post(job_name, book=None, details=details) safe_print(name, "'%s' [posted]" % (job)) time.sleep(PRODUCER_DELAY) safe_print(name, "finished", prefix=">>>") def main(): if six.PY3: # TODO(harlowja): Hack to make eventlet work right, remove when the # following is fixed: https://github.com/eventlet/eventlet/issues/230 from taskflow.utils import eventlet_utils as _eu # noqa try: import eventlet as _eventlet # noqa except ImportError: pass with contextlib.closing(fake_client.FakeClient()) as c: created = [] for i in compat_range(0, PRODUCERS): p = threading_utils.daemon_thread(producer, i + 1, c) created.append(p) p.start() consumed = collections.deque() for i in compat_range(0, WORKERS): w = threading_utils.daemon_thread(worker, i + 1, c, consumed) created.append(w) w.start() while created: t = created.pop() t.join() # At the end there should be nothing leftover, let's verify that. board = backends.fetch('verifier', SHARED_CONF.copy(), client=c) board.connect() with contextlib.closing(board): if board.job_count != 0 or len(consumed) != EXPECTED_UNITS: return 1 return 0 if __name__ == "__main__": sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/parallel_table_multiply.py0000664000175000017500000001036500000000000024333 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import csv import logging import os import random import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import futurist from six.moves import range as compat_range from taskflow import engines from taskflow.patterns import unordered_flow as uf from taskflow import task # INTRO: This example walks through a miniature workflow which does a parallel # table modification where each row in the table gets adjusted by a thread, or # green thread (if eventlet is available) in parallel and then the result # is reformed into a new table and some verifications are performed on it # to ensure everything went as expected. MULTIPLER = 10 class RowMultiplier(task.Task): """Performs a modification of an input row, creating a output row.""" def __init__(self, name, index, row, multiplier): super(RowMultiplier, self).__init__(name=name) self.index = index self.multiplier = multiplier self.row = row def execute(self): return [r * self.multiplier for r in self.row] def make_flow(table): # This creation will allow for parallel computation (since the flow here # is specifically unordered; and when things are unordered they have # no dependencies and when things have no dependencies they can just be # ran at the same time, limited in concurrency by the executor or max # workers of that executor...) f = uf.Flow("root") for i, row in enumerate(table): f.add(RowMultiplier("m-%s" % i, i, row, MULTIPLER)) # NOTE(harlowja): at this point nothing has ran, the above is just # defining what should be done (but not actually doing it) and associating # an ordering dependencies that should be enforced (the flow pattern used # forces this), the engine in the later main() function will actually # perform this work... return f def main(): if len(sys.argv) == 2: tbl = [] with open(sys.argv[1], 'rb') as fh: reader = csv.reader(fh) for row in reader: tbl.append([float(r) if r else 0.0 for r in row]) else: # Make some random table out of thin air... tbl = [] cols = random.randint(1, 100) rows = random.randint(1, 100) for _i in compat_range(0, rows): row = [] for _j in compat_range(0, cols): row.append(random.random()) tbl.append(row) # Generate the work to be done. f = make_flow(tbl) # Now run it (using the specified executor)... try: executor = futurist.GreenThreadPoolExecutor(max_workers=5) except RuntimeError: # No eventlet currently active, use real threads instead. executor = futurist.ThreadPoolExecutor(max_workers=5) try: e = engines.load(f, engine='parallel', executor=executor) for st in e.run_iter(): print(st) finally: executor.shutdown() # Find the old rows and put them into place... # # TODO(harlowja): probably easier just to sort instead of search... computed_tbl = [] for i in compat_range(0, len(tbl)): for t in f: if t.index == i: computed_tbl.append(e.storage.get(t.name)) # Do some basic validation (which causes the return code of this process # to be different if things were not as expected...) if len(computed_tbl) != len(tbl): return 1 else: return 0 if __name__ == "__main__": sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/persistence_example.py0000664000175000017500000000750200000000000023467 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys import tempfile import traceback logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow.persistence import models from taskflow import task import example_utils as eu # noqa # INTRO: In this example we create two tasks, one that will say hi and one # that will say bye with optional capability to raise an error while # executing. During execution if a later task fails, the reverting that will # occur in the hi task will undo this (in a ~funny~ way). # # To also show the effect of task persistence we create a temporary database # that will track the state transitions of this hi + bye workflow, this # persistence allows for you to examine what is stored (using a sqlite client) # as well as shows you what happens during reversion and what happens to # the database during both of these modes (failing or not failing). class HiTask(task.Task): def execute(self): print("Hi!") def revert(self, **kwargs): print("Whooops, said hi too early, take that back!") class ByeTask(task.Task): def __init__(self, blowup): super(ByeTask, self).__init__() self._blowup = blowup def execute(self): if self._blowup: raise Exception("Fail!") print("Bye!") # This generates your flow structure (at this stage nothing is run). def make_flow(blowup=False): flow = lf.Flow("hello-world") flow.add(HiTask(), ByeTask(blowup)) return flow # Persist the flow and task state here, if the file/dir exists already blow up # if not don't blow up, this allows a user to see both the modes and to see # what is stored in each case. if eu.SQLALCHEMY_AVAILABLE: persist_path = os.path.join(tempfile.gettempdir(), "persisting.db") backend_uri = "sqlite:///%s" % (persist_path) else: persist_path = os.path.join(tempfile.gettempdir(), "persisting") backend_uri = "file:///%s" % (persist_path) if os.path.exists(persist_path): blowup = False else: blowup = True with eu.get_backend(backend_uri) as backend: # Make a flow that will blow up if the file didn't exist previously, if it # did exist, assume we won't blow up (and therefore this shows the undo # and redo that a flow will go through). book = models.LogBook("my-test") flow = make_flow(blowup=blowup) eu.print_wrapped("Running") try: eng = engines.load(flow, engine='serial', backend=backend, book=book) eng.run() if not blowup: eu.rm_path(persist_path) except Exception: # NOTE(harlowja): don't exit with non-zero status code, so that we can # print the book contents, as well as avoiding exiting also makes the # unit tests (which also runs these examples) pass. 
traceback.print_exc(file=sys.stdout) eu.print_wrapped("Book contents") print(book.pformat()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/pseudo_scoping.out.txt0000664000175000017500000000035600000000000023446 0ustar00zuulzuul00000000000000Running simple flow: Fetching number for Josh. Calling Josh 777. Calling many people using prefixed factory: Fetching number for Jim. Calling Jim 444. Fetching number for Joe. Calling Joe 555. Fetching number for Josh. Calling Josh 777. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/pseudo_scoping.py0000664000175000017500000000665100000000000022455 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Ivan Melnikov # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: pseudo-scoping by adding prefixes # Sometimes you need scoping -- e.g. for adding several # similar subflows to one flow to do same stuff for different # data. But current version of TaskFlow does not allow that # directly, so you have to resort to some kind of trickery. # One (and more or less recommended, if not the only) way of # solving the problem is to transform every task name, it's # provides and requires values -- e.g. by adding prefix to them. # This example shows how this could be done. # The example task is simple: for each specified person, fetch # his or her phone number from phone book and call. PHONE_BOOK = { 'jim': '444', 'joe': '555', 'iv_m': '666', 'josh': '777' } class FetchNumberTask(task.Task): """Task that fetches number from phone book.""" default_provides = 'number' def execute(self, person): print('Fetching number for %s.' % person) return PHONE_BOOK[person.lower()] class CallTask(task.Task): """Task that calls person by number.""" def execute(self, person, number): print('Calling %s %s.' % (person, number)) # This is how it works for one person: simple_flow = lf.Flow('simple one').add( FetchNumberTask(), CallTask()) print('Running simple flow:') taskflow.engines.run(simple_flow, store={'person': 'Josh'}) # To call several people you'll need a factory function that will # make a flow with given prefix for you. We need to add prefix # to task names, their provides and requires values. For requires, # we use `rebind` argument of task constructor. def subflow_factory(prefix): def pr(what): return '%s-%s' % (prefix, what) return lf.Flow(pr('flow')).add( FetchNumberTask(pr('fetch'), provides=pr('number'), rebind=[pr('person')]), CallTask(pr('call'), rebind=[pr('person'), pr('number')]) ) def call_them_all(): # Let's call them all. 
We need a flow: flow = lf.Flow('call-them-prefixed') # We'll also need to inject person names with prefixed argument # name to storage to satisfy task requirements. persons = {} for person in ('Jim', 'Joe', 'Josh'): prefix = person.lower() persons['%s-person' % prefix] = person flow.add(subflow_factory(prefix)) taskflow.engines.run(flow, store=persons) print('\nCalling many people using prefixed factory:') call_them_all() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_from_backend.out.txt0000664000175000017500000000124600000000000024416 0ustar00zuulzuul00000000000000----------------------------------- At the beginning, there is no state ----------------------------------- Flow 'resume from backend example' state: None ------- Running ------- executing first==1.0 ------------- After running ------------- Flow 'resume from backend example' state: SUSPENDED boom==1.0: SUCCESS, result=None first==1.0: SUCCESS, result=ok second==1.0: PENDING, result=None -------------------------- Resuming and running again -------------------------- executing second==1.0 ---------- At the end ---------- Flow 'resume from backend example' state: SUCCESS boom==1.0: SUCCESS, result=None first==1.0: SUCCESS, result=ok second==1.0: SUCCESS, result=ok ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_from_backend.py0000664000175000017500000001160000000000000023414 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from oslo_utils import uuidutils import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow.persistence import models from taskflow import task import example_utils as eu # noqa # INTRO: In this example linear_flow is used to group three tasks, one which # will suspend the future work the engine may do. This suspend engine is then # discarded and the workflow is reloaded from the persisted data and then the # workflow is resumed from where it was suspended. This allows you to see how # to start an engine, have a task stop the engine from doing future work (if # a multi-threaded engine is being used, then the currently active work is not # preempted) and then resume the work later. 
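# In rough outline, the suspend/resume cycle demonstrated below is:
#
#   engine.run()       # InterruptTask calls engine.suspend() mid-run, so
#                      # run() returns with the flow left SUSPENDED
#   engine2 = taskflow.engines.load(flow_factory(), flow_detail=flow_detail,
#                                   backend=backend, book=book)
#   engine2.run()      # resumes from the persisted state and finishes
#
# (All of these calls appear further down in this file.)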
# # Usage: # # With a filesystem directory as backend # # python taskflow/examples/resume_from_backend.py # # With ZooKeeper as backend # # python taskflow/examples/resume_from_backend.py \ # zookeeper://127.0.0.1:2181/taskflow/resume_from_backend/ # UTILITY FUNCTIONS ######################################### def print_task_states(flowdetail, msg): eu.print_wrapped(msg) print("Flow '%s' state: %s" % (flowdetail.name, flowdetail.state)) # Sort by these so that our test validation doesn't get confused by the # order in which the items in the flow detail can be in. items = sorted((td.name, td.version, td.state, td.results) for td in flowdetail) for item in items: print(" %s==%s: %s, result=%s" % item) def find_flow_detail(backend, lb_id, fd_id): conn = backend.get_connection() lb = conn.get_logbook(lb_id) return lb.find(fd_id) # CREATE FLOW ############################################### class InterruptTask(task.Task): def execute(self): # DO NOT TRY THIS AT HOME engine.suspend() class TestTask(task.Task): def execute(self): print('executing %s' % self) return 'ok' def flow_factory(): return lf.Flow('resume from backend example').add( TestTask(name='first'), InterruptTask(name='boom'), TestTask(name='second')) # INITIALIZE PERSISTENCE #################################### with eu.get_backend() as backend: # Create a place where the persistence information will be stored. book = models.LogBook("example") flow_detail = models.FlowDetail("resume from backend example", uuid=uuidutils.generate_uuid()) book.add(flow_detail) with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) # CREATE AND RUN THE FLOW: FIRST ATTEMPT #################### flow = flow_factory() engine = taskflow.engines.load(flow, flow_detail=flow_detail, book=book, backend=backend) print_task_states(flow_detail, "At the beginning, there is no state") eu.print_wrapped("Running") engine.run() print_task_states(flow_detail, "After running") # RE-CREATE, RESUME, RUN #################################### eu.print_wrapped("Resuming and running again") # NOTE(harlowja): reload the flow detail from backend, this will allow us # to resume the flow from its suspended state, but first we need to search # for the right flow details in the correct logbook where things are # stored. # # We could avoid re-loading the engine and just do engine.run() again, but # this example shows how another process may unsuspend a given flow and # start it again for situations where this is useful to-do (say the process # running the above flow crashes). flow2 = flow_factory() flow_detail_2 = find_flow_detail(backend, book.uuid, flow_detail.uuid) engine2 = taskflow.engines.load(flow2, flow_detail=flow_detail_2, backend=backend, book=book) engine2.run() print_task_states(flow_detail_2, "At the end") ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6240416 taskflow-4.6.4/taskflow/examples/resume_many_flows/0000775000175000017500000000000000000000000022610 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_many_flows/my_flows.py0000664000175000017500000000241000000000000025016 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from taskflow.patterns import linear_flow as lf from taskflow import task class UnfortunateTask(task.Task): def execute(self): print('executing %s' % self) boom = os.environ.get('BOOM') if boom: print('> Critical error: boom = %s' % boom) raise SystemExit() else: print('> this time not exiting') class TestTask(task.Task): def execute(self): print('executing %s' % self) def flow_factory(): return lf.Flow('example').add( TestTask(name='first'), UnfortunateTask(name='boom'), TestTask(name='second')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_many_flows/resume_all.py0000664000175000017500000000330100000000000025307 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath( os.path.join(self_dir, os.pardir, os.pardir, os.pardir)) example_dir = os.path.abspath(os.path.join(self_dir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, example_dir) import taskflow.engines from taskflow import states import example_utils # noqa FINISHED_STATES = (states.SUCCESS, states.FAILURE, states.REVERTED) def resume(flowdetail, backend): print('Resuming flow %s %s' % (flowdetail.name, flowdetail.uuid)) engine = taskflow.engines.load_from_detail(flow_detail=flowdetail, backend=backend) engine.run() def main(): with example_utils.get_backend() as backend: logbooks = list(backend.get_connection().get_logbooks()) for lb in logbooks: for fd in lb: if fd.state not in FINISHED_STATES: resume(fd, backend) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_many_flows/run_flow.py0000664000175000017500000000263100000000000025017 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath( os.path.join(self_dir, os.pardir, os.pardir, os.pardir)) example_dir = os.path.abspath(os.path.join(self_dir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) sys.path.insert(0, example_dir) import taskflow.engines import example_utils # noqa import my_flows # noqa with example_utils.get_backend() as backend: engine = taskflow.engines.load_from_factory(my_flows.flow_factory, backend=backend) print('Running flow %s %s' % (engine.storage.flow_name, engine.storage.flow_uuid)) engine.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_many_flows.out.txt0000664000175000017500000000140600000000000024160 0ustar00zuulzuul00000000000000Run flow: Running flow example 18995b55-aaad-49fa-938f-006ac21ea4c7 executing first==1.0 executing boom==1.0 > this time not exiting executing second==1.0 Run flow, something happens: Running flow example f8f62ea6-1c9b-4e81-9ff9-1acaa299a648 executing first==1.0 executing boom==1.0 > Critical error: boom = exit please Run flow, something happens again: Running flow example 16f11c15-4d8a-4552-b422-399565c873c4 executing first==1.0 executing boom==1.0 > Critical error: boom = exit please Resuming all failed flows Resuming flow example f8f62ea6-1c9b-4e81-9ff9-1acaa299a648 executing boom==1.0 > this time not exiting executing second==1.0 Resuming flow example 16f11c15-4d8a-4552-b422-399565c873c4 executing boom==1.0 > this time not exiting executing second==1.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_many_flows.py0000664000175000017500000000617000000000000023166 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import subprocess import sys import tempfile self_dir = os.path.abspath(os.path.dirname(__file__)) sys.path.insert(0, self_dir) import example_utils # noqa # INTRO: In this example we create a common persistence database (sqlite based) # and then we run a few set of processes which themselves use this persistence # database, those processes 'crash' (in a simulated way) by exiting with a # system error exception. After this occurs a few times we then activate a # script which doesn't 'crash' and it will resume all the given engines flows # that did not complete and run them to completion (instead of crashing). 
# # This shows how a set of tasks can be finished even after repeatedly being # crashed, *crash resistance* if you may call it, due to the engine concept as # well as the persistence layer which keeps track of the state a flow # transitions through and persists the intermediary inputs and outputs and # overall flow state. def _exec(cmd, add_env=None): env = None if add_env: env = os.environ.copy() env.update(add_env) proc = subprocess.Popen(cmd, env=env, stdin=None, stdout=subprocess.PIPE, stderr=sys.stderr) stdout, _stderr = proc.communicate() rc = proc.returncode if rc != 0: raise RuntimeError("Could not run %s [%s]", cmd, rc) print(stdout.decode()) def _path_to(name): return os.path.abspath(os.path.join(os.path.dirname(__file__), 'resume_many_flows', name)) def main(): backend_uri = None tmp_path = None try: if example_utils.SQLALCHEMY_AVAILABLE: tmp_path = tempfile.mktemp(prefix='tf-resume-example') backend_uri = "sqlite:///%s" % (tmp_path) else: tmp_path = tempfile.mkdtemp(prefix='tf-resume-example') backend_uri = 'file:///%s' % (tmp_path) def run_example(name, add_env=None): _exec([sys.executable, _path_to(name), backend_uri], add_env) print('Run flow:') run_example('run_flow.py') print('\nRun flow, something happens:') run_example('run_flow.py', {'BOOM': 'exit please'}) print('\nRun flow, something happens again:') run_example('run_flow.py', {'BOOM': 'exit please'}) print('\nResuming all failed flows') run_example('resume_all.py') finally: if tmp_path: example_utils.rm_path(tmp_path) if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_vm_boot.py0000664000175000017500000002264400000000000022461 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import hashlib import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) import futurist from oslo_utils import uuidutils from taskflow import engines from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.persistence import models from taskflow import task import example_utils as eu # noqa # INTRO: These examples show how a hierarchy of flows can be used to create a # vm in a reliable & resumable manner using taskflow + a miniature version of # what nova does while booting a vm. @contextlib.contextmanager def slow_down(how_long=0.5): try: yield how_long finally: if len(sys.argv) > 1: # Only both to do this if user input provided. print("** Ctrl-c me please!!! 
**") time.sleep(how_long) class PrintText(task.Task): """Just inserts some text print outs in a workflow.""" def __init__(self, print_what, no_slow=False): content_hash = hashlib.md5(print_what.encode('utf-8')).hexdigest()[0:8] super(PrintText, self).__init__(name="Print: %s" % (content_hash)) self._text = print_what self._no_slow = no_slow def execute(self): if self._no_slow: eu.print_wrapped(self._text) else: with slow_down(): eu.print_wrapped(self._text) class DefineVMSpec(task.Task): """Defines a vm specification to be.""" def __init__(self, name): super(DefineVMSpec, self).__init__(provides='vm_spec', name=name) def execute(self): return { 'type': 'kvm', 'disks': 2, 'vcpu': 1, 'ips': 1, 'volumes': 3, } class LocateImages(task.Task): """Locates where the vm images are.""" def __init__(self, name): super(LocateImages, self).__init__(provides='image_locations', name=name) def execute(self, vm_spec): image_locations = {} for i in range(0, vm_spec['disks']): url = "http://www.yahoo.com/images/%s" % (i) image_locations[url] = "/tmp/%s.img" % (i) return image_locations class DownloadImages(task.Task): """Downloads all the vm images.""" def __init__(self, name): super(DownloadImages, self).__init__(provides='download_paths', name=name) def execute(self, image_locations): for src, loc in image_locations.items(): with slow_down(1): print("Downloading from %s => %s" % (src, loc)) return sorted(image_locations.values()) class CreateNetworkTpl(task.Task): """Generates the network settings file to be placed in the images.""" SYSCONFIG_CONTENTS = """DEVICE=eth%s BOOTPROTO=static IPADDR=%s ONBOOT=yes""" def __init__(self, name): super(CreateNetworkTpl, self).__init__(provides='network_settings', name=name) def execute(self, ips): settings = [] for i, ip in enumerate(ips): settings.append(self.SYSCONFIG_CONTENTS % (i, ip)) return settings class AllocateIP(task.Task): """Allocates the ips for the given vm.""" def __init__(self, name): super(AllocateIP, self).__init__(provides='ips', name=name) def execute(self, vm_spec): ips = [] for _i in range(0, vm_spec.get('ips', 0)): ips.append("192.168.0.%s" % (random.randint(1, 254))) return ips class WriteNetworkSettings(task.Task): """Writes all the network settings into the downloaded images.""" def execute(self, download_paths, network_settings): for j, path in enumerate(download_paths): with slow_down(1): print("Mounting %s to /tmp/%s" % (path, j)) for i, setting in enumerate(network_settings): filename = ("/tmp/etc/sysconfig/network-scripts/" "ifcfg-eth%s" % (i)) with slow_down(1): print("Writing to %s" % (filename)) print(setting) class BootVM(task.Task): """Fires off the vm boot operation.""" def execute(self, vm_spec): print("Starting vm!") with slow_down(1): print("Created: %s" % (vm_spec)) class AllocateVolumes(task.Task): """Allocates the volumes for the vm.""" def execute(self, vm_spec): volumes = [] for i in range(0, vm_spec['volumes']): with slow_down(1): volumes.append("/dev/vda%s" % (i + 1)) print("Allocated volume %s" % volumes[-1]) return volumes class FormatVolumes(task.Task): """Formats the volumes for the vm.""" def execute(self, volumes): for v in volumes: print("Formatting volume %s" % v) with slow_down(1): pass print("Formatted volume %s" % v) def create_flow(): # Setup the set of things to do (mini-nova). flow = lf.Flow("root").add( PrintText("Starting vm creation.", no_slow=True), lf.Flow('vm-maker').add( # First create a specification for the final vm to-be. DefineVMSpec("define_spec"), # This does all the image stuff. 
gf.Flow("img-maker").add( LocateImages("locate_images"), DownloadImages("download_images"), ), # This does all the network stuff. gf.Flow("net-maker").add( AllocateIP("get_my_ips"), CreateNetworkTpl("fetch_net_settings"), WriteNetworkSettings("write_net_settings"), ), # This does all the volume stuff. gf.Flow("volume-maker").add( AllocateVolumes("allocate_my_volumes", provides='volumes'), FormatVolumes("volume_formatter"), ), # Finally boot it all. BootVM("boot-it"), ), # Ya it worked! PrintText("Finished vm create.", no_slow=True), PrintText("Instance is running!", no_slow=True)) return flow eu.print_wrapped("Initializing") # Setup the persistence & resumption layer. with eu.get_backend() as backend: # Try to find a previously passed in tracking id... try: book_id, flow_id = sys.argv[2].split("+", 1) if not uuidutils.is_uuid_like(book_id): book_id = None if not uuidutils.is_uuid_like(flow_id): flow_id = None except (IndexError, ValueError): book_id = None flow_id = None # Set up how we want our engine to run, serial, parallel... try: executor = futurist.GreenThreadPoolExecutor(max_workers=5) except RuntimeError: # No eventlet installed, just let the default be used instead. executor = None # Create/fetch a logbook that will track the workflows work. book = None flow_detail = None if all([book_id, flow_id]): # Try to find in a prior logbook and flow detail... with contextlib.closing(backend.get_connection()) as conn: try: book = conn.get_logbook(book_id) flow_detail = book.find(flow_id) except exc.NotFound: pass if book is None and flow_detail is None: book = models.LogBook("vm-boot") with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) engine = engines.load_from_factory(create_flow, backend=backend, book=book, engine='parallel', executor=executor) print("!! Your tracking id is: '%s+%s'" % (book.uuid, engine.storage.flow_uuid)) print("!! Please submit this on later runs for tracking purposes") else: # Attempt to load from a previously partially completed flow. engine = engines.load_from_detail(flow_detail, backend=backend, engine='parallel', executor=executor) # Make me my vm please! eu.print_wrapped('Running') engine.run() # How to use. # # 1. $ python me.py "sqlite:////tmp/nova.db" # 2. ctrl-c before this finishes # 3. Find the tracking id (search for 'Your tracking id is') # 4. $ python me.py "sqlite:////tmp/cinder.db" "$tracking_id" # 5. Watch it pick up where it left off. # 6. Profit! ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/resume_volume_create.py0000664000175000017500000001340100000000000023635 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib import hashlib import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from oslo_utils import uuidutils from taskflow import engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.persistence import models from taskflow import task import example_utils # noqa # INTRO: These examples show how a hierarchy of flows can be used to create a # pseudo-volume in a reliable & resumable manner using taskflow + a miniature # version of what cinder does while creating a volume (very miniature). @contextlib.contextmanager def slow_down(how_long=0.5): try: yield how_long finally: print("** Ctrl-c me please!!! **") time.sleep(how_long) def find_flow_detail(backend, book_id, flow_id): # NOTE(harlowja): this is used to attempt to find a given logbook with # a given id and a given flow details inside that logbook, we need this # reference so that we can resume the correct flow (as a logbook tracks # flows and a flow detail tracks a individual flow). # # Without a reference to the logbook and the flow details in that logbook # we will not know exactly what we should resume and that would mean we # can't resume what we don't know. with contextlib.closing(backend.get_connection()) as conn: lb = conn.get_logbook(book_id) return lb.find(flow_id) class PrintText(task.Task): def __init__(self, print_what, no_slow=False): content_hash = hashlib.md5(print_what.encode('utf-8')).hexdigest()[0:8] super(PrintText, self).__init__(name="Print: %s" % (content_hash)) self._text = print_what self._no_slow = no_slow def execute(self): if self._no_slow: print("-" * (len(self._text))) print(self._text) print("-" * (len(self._text))) else: with slow_down(): print("-" * (len(self._text))) print(self._text) print("-" * (len(self._text))) class CreateSpecForVolumes(task.Task): def execute(self): volumes = [] for i in range(0, random.randint(1, 10)): volumes.append({ 'type': 'disk', 'location': "/dev/vda%s" % (i + 1), }) return volumes class PrepareVolumes(task.Task): def execute(self, volume_specs): for v in volume_specs: with slow_down(): print("Dusting off your hard drive %s" % (v)) with slow_down(): print("Taking a well deserved break.") print("Your drive %s has been certified." % (v)) # Setup the set of things to do (mini-cinder). flow = lf.Flow("root").add( PrintText("Starting volume create", no_slow=True), gf.Flow('maker').add( CreateSpecForVolumes("volume_specs", provides='volume_specs'), PrintText("I need a nap, it took me a while to build those specs."), PrepareVolumes(), ), PrintText("Finished volume create", no_slow=True)) # Setup the persistence & resumption layer. with example_utils.get_backend() as backend: try: book_id, flow_id = sys.argv[2].split("+", 1) except (IndexError, ValueError): book_id = None flow_id = None if not all([book_id, flow_id]): # If no 'tracking id' (think a fedex or ups tracking id) is provided # then we create one by creating a logbook (where flow details are # stored) and creating a flow detail (where flow and task state is # stored). 
The combination of these 2 objects unique ids (uuids) allows # the users of taskflow to reassociate the workflows that were # potentially running (and which may have partially completed) back # with taskflow so that those workflows can be resumed (or reverted) # after a process/thread/engine has failed in someway. book = models.LogBook('resume-volume-create') flow_detail = models.FlowDetail("root", uuid=uuidutils.generate_uuid()) book.add(flow_detail) with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) print("!! Your tracking id is: '%s+%s'" % (book.uuid, flow_detail.uuid)) print("!! Please submit this on later runs for tracking purposes") else: flow_detail = find_flow_detail(backend, book_id, flow_id) # Load and run. engine = engines.load(flow, flow_detail=flow_detail, backend=backend, engine='serial') engine.run() # How to use. # # 1. $ python me.py "sqlite:////tmp/cinder.db" # 2. ctrl-c before this finishes # 3. Find the tracking id (search for 'Your tracking id is') # 4. $ python me.py "sqlite:////tmp/cinder.db" "$tracking_id" # 5. Profit! ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/retry_flow.out.txt0000664000175000017500000000016400000000000022616 0ustar00zuulzuul00000000000000Calling jim 333. Wrong number, apologizing. Calling jim 444. Wrong number, apologizing. Calling jim 555. Hello Jim! ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/retry_flow.py0000664000175000017500000000433400000000000021624 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import retry from taskflow import task # INTRO: In this example we create a retry controller that receives a phone # directory and tries different phone numbers. The next task tries to call Jim # using the given number. If it is not a Jim's number, the task raises an # exception and retry controller takes the next number from the phone # directory and retries the call. # # This example shows a basic usage of retry controllers in a flow. # Retry controllers allows to revert and retry a failed subflow with new # parameters. class CallJim(task.Task): def execute(self, jim_number): print("Calling jim %s." % jim_number) if jim_number != 555: raise Exception("Wrong number!") else: print("Hello Jim!") def revert(self, jim_number, **kwargs): print("Wrong number, apologizing.") # Create your flow and associated tasks (the work to be done). 
flow = lf.Flow('retrying-linear', retry=retry.ParameterizedForEach( rebind=['phone_directory'], provides='jim_number')).add(CallJim()) # Now run that flow using the provided initial data (store below). taskflow.engines.run(flow, store={'phone_directory': [333, 444, 555, 666]}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/reverting_linear.out.txt0000664000175000017500000000020400000000000023754 0ustar00zuulzuul00000000000000Calling jim 555. Calling joe 444. Calling 444 and apologizing. Calling 555 and apologizing. Flow failed: Suzzie not home right now. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/reverting_linear.py0000664000175000017500000000672700000000000022777 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example we create three tasks, each of which ~calls~ a given # number (provided as a function input), one of those tasks *fails* calling a # given number (the suzzie calling); this causes the workflow to enter the # reverting process, which activates the revert methods of the previous two # phone ~calls~. # # This simulated calling makes it appear like all three calls occur or all # three don't occur (transaction-like capabilities). No persistence layer is # used here so reverting and executing will *not* be tolerant of process # failure. class CallJim(task.Task): def execute(self, jim_number, *args, **kwargs): print("Calling jim %s." % jim_number) def revert(self, jim_number, *args, **kwargs): print("Calling %s and apologizing." % jim_number) class CallJoe(task.Task): def execute(self, joe_number, *args, **kwargs): print("Calling joe %s." % joe_number) def revert(self, joe_number, *args, **kwargs): print("Calling %s and apologizing." % joe_number) class CallSuzzie(task.Task): def execute(self, suzzie_number, *args, **kwargs): raise IOError("Suzzie not home right now.") # Create your flow and associated tasks (the work to be done). flow = lf.Flow('simple-linear').add( CallJim(), CallJoe(), CallSuzzie() ) try: # Now run that flow using the provided initial data (store below). taskflow.engines.run(flow, store=dict(joe_number=444, jim_number=555, suzzie_number=666)) except Exception as e: # NOTE(harlowja): This exception will be the exception that came out of the # 'CallSuzzie' task instead of a different exception, this is useful since # typically surrounding code wants to handle the original exception and not # a wrapped or altered one. 
# # *WARNING* If this flow was multi-threaded and multiple active tasks threw # exceptions then the above exception would be wrapped into a combined # exception (the object has methods to iterate over the contained # exceptions). See: exceptions.py and the class 'WrappedFailure' to look at # how to deal with multiple tasks failing while running. # # You will also note that this is not a problem in this case since no # parallelism is involved; this is ensured by the usage of a linear flow # and the default engine type which is 'serial' vs being 'parallel'. print("Flow failed: %s" % e) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/run_by_iter.out.txt0000664000175000017500000000146700000000000022752 0ustar00zuulzuul00000000000000RESUMING SCHEDULING A WAITING ANALYZING SCHEDULING B WAITING ANALYZING SCHEDULING C WAITING ANALYZING SCHEDULING D WAITING ANALYZING SCHEDULING E WAITING ANALYZING SCHEDULING F WAITING ANALYZING SCHEDULING G WAITING ANALYZING SCHEDULING H WAITING ANALYZING SCHEDULING I WAITING ANALYZING SCHEDULING J WAITING ANALYZING SCHEDULING K WAITING ANALYZING SCHEDULING L WAITING ANALYZING SCHEDULING M WAITING ANALYZING SCHEDULING N WAITING ANALYZING SCHEDULING O WAITING ANALYZING SCHEDULING P WAITING ANALYZING SCHEDULING Q WAITING ANALYZING SCHEDULING R WAITING ANALYZING SCHEDULING S WAITING ANALYZING SCHEDULING T WAITING ANALYZING SCHEDULING U WAITING ANALYZING SCHEDULING V WAITING ANALYZING SCHEDULING W WAITING ANALYZING SCHEDULING X WAITING ANALYZING SCHEDULING Y WAITING ANALYZING SCHEDULING Z WAITING ANALYZING SUCCESS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/run_by_iter.py0000664000175000017500000000511400000000000021746 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys import six logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: This example shows how to run a set of engines at the same time, each # running in different engines using a single thread of control to iterate over # each engine (which causes that engine to advanced to its next state during # each iteration). 
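# Supplementary sketch (standalone and illustrative, not part of the original
# example): a single engine's run_iter() is simply a generator of state
# names, so one engine can be stepped through on its own before attempting
# the multi-engine interleaving shown in the rest of this example.


class HelloTask(task.Task):
    def execute(self):
        print("hello")


hello_engine = engines.load(lf.Flow("single").add(HelloTask("hello")))
hello_engine.compile()
hello_engine.prepare()
for hello_state in hello_engine.run_iter():
    # Each yielded value is a state name such as 'RESUMING', 'SCHEDULING',
    # 'WAITING', 'ANALYZING' and finally 'SUCCESS'.
    print("engine is now: %s" % hello_state)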
class EchoTask(task.Task): def execute(self, value): print(value) return chr(ord(value) + 1) def make_alphabet_flow(i): f = lf.Flow("alphabet_%s" % (i)) start_value = 'A' end_value = 'Z' curr_value = start_value while ord(curr_value) <= ord(end_value): next_value = chr(ord(curr_value) + 1) if curr_value != end_value: f.add(EchoTask(name="echoer_%s" % curr_value, rebind={'value': curr_value}, provides=next_value)) else: f.add(EchoTask(name="echoer_%s" % curr_value, rebind={'value': curr_value})) curr_value = next_value return f # Adjust this number to change how many engines/flows run at once. flow_count = 1 flows = [] for i in range(0, flow_count): f = make_alphabet_flow(i + 1) flows.append(make_alphabet_flow(i + 1)) engine_iters = [] for f in flows: e = engines.load(f) e.compile() e.storage.inject({'A': 'A'}) e.prepare() engine_iters.append(e.run_iter()) while engine_iters: for it in list(engine_iters): try: print(six.next(it)) except StopIteration: engine_iters.remove(it) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/run_by_iter_enumerate.out.txt0000664000175000017500000000152100000000000025006 0ustar00zuulzuul00000000000000Transition 1: RESUMING Transition 2: SCHEDULING echo_1 Transition 3: WAITING Transition 4: ANALYZING Transition 5: SCHEDULING echo_2 Transition 6: WAITING Transition 7: ANALYZING Transition 8: SCHEDULING echo_3 Transition 9: WAITING Transition 10: ANALYZING Transition 11: SCHEDULING echo_4 Transition 12: WAITING Transition 13: ANALYZING Transition 14: SCHEDULING echo_5 Transition 15: WAITING Transition 16: ANALYZING Transition 17: SCHEDULING echo_6 Transition 18: WAITING Transition 19: ANALYZING Transition 20: SCHEDULING echo_7 Transition 21: WAITING Transition 22: ANALYZING Transition 23: SCHEDULING echo_8 Transition 24: WAITING Transition 25: ANALYZING Transition 26: SCHEDULING echo_9 Transition 27: WAITING Transition 28: ANALYZING Transition 29: SCHEDULING echo_10 Transition 30: WAITING Transition 31: ANALYZING Transition 32: SUCCESS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/run_by_iter_enumerate.py0000664000175000017500000000327000000000000024014 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: This example shows how to run an engine using the engine iteration # capability; in between iterations other activities occur (in this case a # value is output to stdout); but more complicated actions can occur at the # boundary when an engine yields its current state back to the caller. class EchoNameTask(task.Task): def execute(self): print(self.name) f = lf.Flow("counter") for i in range(0, 10): f.add(EchoNameTask("echo_%s" % (i + 1))) e = engines.load(f) e.compile() e.prepare() for i, st in enumerate(e.run_iter(), 1): print("Transition %s: %s" % (i, st)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/share_engine_thread.py0000664000175000017500000000530500000000000023405 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import futurist import six from taskflow import engines from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.utils import threading_utils as tu # INTRO: in this example we create 2 dummy flows, each with 2 dummy tasks, and # run them using a shared thread pool executor to show how a single executor can # be used with more than one engine (sharing the execution thread pool between # them); this allows for saving resources and reusing threads in situations # where this is beneficial. class DelayedTask(task.Task): def __init__(self, name): super(DelayedTask, self).__init__(name=name) self._wait_for = random.random() def execute(self): print("Running '%s' in thread '%s'" % (self.name, tu.get_ident())) time.sleep(self._wait_for) f1 = uf.Flow("f1") f1.add(DelayedTask("f1-1")) f1.add(DelayedTask("f1-2")) f2 = uf.Flow("f2") f2.add(DelayedTask("f2-1")) f2.add(DelayedTask("f2-2")) # Run them all using the same futures (thread-pool based) executor... with futurist.ThreadPoolExecutor() as ex: e1 = engines.load(f1, engine='parallel', executor=ex) e2 = engines.load(f2, engine='parallel', executor=ex) iters = [e1.run_iter(), e2.run_iter()] # Iterate over a copy (so we can remove from the source list). cloned_iters = list(iters) while iters: # Run a single 'step' of each iterator, forcing each engine to perform # some work, then yield, and repeat until each iterator is consumed # and there is no more engine work to be done.
for it in cloned_iters: try: six.next(it) except StopIteration: try: iters.remove(it) except ValueError: pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear.out.txt0000664000175000017500000000004200000000000023240 0ustar00zuulzuul00000000000000Calling jim 555. Calling joe 444. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear.py0000664000175000017500000000471300000000000022254 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: In this example we create two tasks, each of which ~calls~ a given # ~phone~ number (provided as a function input) in a linear fashion (one after # the other). For a workflow which is serial this shows a extremely simple way # of structuring your tasks (the code that does the work) into a linear # sequence (the flow) and then passing the work off to an engine, with some # initial data to be ran in a reliable manner. # # NOTE(harlowja): This example shows a basic usage of the taskflow structures # without involving the complexity of persistence. Using the structures that # taskflow provides via tasks and flows makes it possible for you to easily at # a later time hook in a persistence layer (and then gain the functionality # that offers) when you decide the complexity of adding that layer in # is 'worth it' for your application's usage pattern (which certain # applications may not need). class CallJim(task.Task): def execute(self, jim_number, *args, **kwargs): print("Calling jim %s." % jim_number) class CallJoe(task.Task): def execute(self, joe_number, *args, **kwargs): print("Calling joe %s." % joe_number) # Create your flow and associated tasks (the work to be done). flow = lf.Flow('simple-linear').add( CallJim(), CallJoe() ) # Now run that flow using the provided initial data (store below). taskflow.engines.run(flow, store=dict(joe_number=444, jim_number=555)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear_listening.out.txt0000664000175000017500000000045400000000000025323 0ustar00zuulzuul00000000000000Flow => RUNNING Task __main__.call_jim => RUNNING Calling jim. Context = [('jim_number', 555), ('joe_number', 444)] Task __main__.call_jim => SUCCESS Task __main__.call_joe => RUNNING Calling joe. 
Context = [('jim_number', 555), ('joe_number', 444)] Task __main__.call_joe => SUCCESS Flow => SUCCESS ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear_listening.py0000664000175000017500000000756600000000000024341 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.types import notifier ANY = notifier.Notifier.ANY # INTRO: In this example we create two tasks (this time as functions instead # of task subclasses as in the simple_linear.py example), each of which ~calls~ # a given ~phone~ number (provided as a function input) in a linear fashion # (one after the other). # # For a workflow which is serial this shows an extremely simple way # of structuring your tasks (the code that does the work) into a linear # sequence (the flow) and then passing the work off to an engine, with some # initial data to be ran in a reliable manner. # # This example shows a basic usage of the taskflow structures without involving # the complexity of persistence. Using the structures that taskflow provides # via tasks and flows makes it possible for you to easily at a later time # hook in a persistence layer (and then gain the functionality that offers) # when you decide the complexity of adding that layer in is 'worth it' for your # applications usage pattern (which some applications may not need). # # It **also** adds on to the simple_linear.py example by adding a set of # callback functions which the engine will call when a flow state transition # or task state transition occurs. These types of functions are useful for # updating task or flow progress, or for debugging, sending notifications to # external systems, or for other yet unknown future usage that you may create! def call_jim(context): print("Calling jim.") print("Context = %s" % (sorted(context.items(), key=lambda x: x[0]))) def call_joe(context): print("Calling joe.") print("Context = %s" % (sorted(context.items(), key=lambda x: x[0]))) def flow_watch(state, details): print('Flow => %s' % state) def task_watch(state, details): print('Task %s => %s' % (details.get('task_name'), state)) # Wrap your functions into a task type that knows how to treat your functions # as tasks. There was previous work done to just allow a function to be # directly passed, but in python 3.0 there is no easy way to capture an # instance method, so this wrapping approach was decided upon instead which # can attach to instance methods (if that's desired). 
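# Supplementary sketch (illustrative only, not part of the original example;
# the phone book and names used here are made up): the same FunctorTask
# wrapper can also attach 'provides' metadata, so a wrapped plain function's
# return value is saved into storage under a chosen name and becomes
# available to any later tasks.


def lookup_number(person):
    # A pretend phone book used only by this sketch.
    return {'jim': 555, 'joe': 444}.get(person)


lookup_flow = lf.Flow('lookup-sketch')
lookup_flow.add(task.FunctorTask(execute=lookup_number,
                                 name='lookup-number',
                                 provides='found_number'))
lookup_engine = taskflow.engines.load(lookup_flow, store={'person': 'jim'})
lookup_engine.run()
print("Sketch found number: %s" % lookup_engine.storage.fetch('found_number'))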
flow = lf.Flow("Call-them") flow.add(task.FunctorTask(execute=call_jim)) flow.add(task.FunctorTask(execute=call_joe)) # Now load (but do not run) the flow using the provided initial data. engine = taskflow.engines.load(flow, store={ 'context': { "joe_number": 444, "jim_number": 555, } }) # This is where we attach our callback functions to the 2 different # notification objects that an engine exposes. The usage of a ANY (kleene star) # here means that we want to be notified on all state changes, if you want to # restrict to a specific state change, just register that instead. engine.notifier.register(ANY, flow_watch) engine.atom_notifier.register(ANY, task_watch) # And now run! engine.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear_pass.out.txt0000664000175000017500000000016200000000000024271 0ustar00zuulzuul00000000000000Constructing... Loading... Compiling... Preparing... Running... Executing 'a' Executing 'b' Got input 'a' Done... ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_linear_pass.py0000664000175000017500000000340000000000000023272 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) from taskflow import engines from taskflow.patterns import linear_flow from taskflow import task # INTRO: This example shows how a task (in a linear/serial workflow) can # produce an output that can be then consumed/used by a downstream task. class TaskA(task.Task): default_provides = 'a' def execute(self): print("Executing '%s'" % (self.name)) return 'a' class TaskB(task.Task): def execute(self, a): print("Executing '%s'" % (self.name)) print("Got input '%s'" % (a)) print("Constructing...") wf = linear_flow.Flow("pass-from-to") wf.add(TaskA('a'), TaskB('b')) print("Loading...") e = engines.load(wf) print("Compiling...") e.compile() print("Preparing...") e.prepare() print("Running...") e.run() print("Done...") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/simple_map_reduce.py0000664000175000017500000000730500000000000023106 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) self_dir = os.path.abspath(os.path.dirname(__file__)) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) # INTRO: These examples show a simplistic map/reduce implementation where # a set of mapper(s) will sum a series of input numbers (in parallel) and # return their individual summed result. A reducer will then use those # produced values and perform a final summation and this result will then be # printed (and verified to ensure the calculation was as expected). import six from taskflow import engines from taskflow.patterns import linear_flow from taskflow.patterns import unordered_flow from taskflow import task class SumMapper(task.Task): def execute(self, inputs): # Sums some set of provided inputs. return sum(inputs) class TotalReducer(task.Task): def execute(self, *args, **kwargs): # Reduces all mapped summed outputs into a single value. total = 0 for (k, v) in six.iteritems(kwargs): # If any other kwargs was passed in, we don't want to use those # in the calculation of the total... if k.startswith('reduction_'): total += v return total def chunk_iter(chunk_size, upperbound): """Yields back chunk size pieces from zero to upperbound - 1.""" chunk = [] for i in range(0, upperbound): chunk.append(i) if len(chunk) == chunk_size: yield chunk chunk = [] # Upper bound of numbers to sum for example purposes... UPPER_BOUND = 10000 # How many mappers we want to have. SPLIT = 10 # How big of a chunk we want to give each mapper. CHUNK_SIZE = UPPER_BOUND // SPLIT # This will be the workflow we will compose and run. w = linear_flow.Flow("root") # The mappers will run in parallel. store = {} provided = [] mappers = unordered_flow.Flow('map') for i, chunk in enumerate(chunk_iter(CHUNK_SIZE, UPPER_BOUND)): mapper_name = 'mapper_%s' % i # Give that mapper some information to compute. store[mapper_name] = chunk # The reducer uses all of the outputs of the mappers, so it needs # to be recorded that it needs access to them (under a specific name). provided.append("reduction_%s" % i) mappers.add(SumMapper(name=mapper_name, rebind={'inputs': mapper_name}, provides=provided[-1])) w.add(mappers) # The reducer will run last (after all the mappers). w.add(TotalReducer('reducer', requires=provided)) # Now go! e = engines.load(w, engine='parallel', store=store, max_workers=4) print("Running a parallel engine with options: %s" % e.options) e.run() # Now get the result the reducer created. total = e.storage.get('reducer') print("Calculated result = %s" % total) # Calculate it manually to verify that it worked... calc_total = sum(range(0, UPPER_BOUND)) if calc_total != total: sys.exit(1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/switch_graph_flow.py0000664000175000017500000000356000000000000023141 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.patterns import graph_flow as gf from taskflow import task class DummyTask(task.Task): def execute(self): print("Running %s" % self.name) def allow(history): print(history) return False # Declare our work to be done... r = gf.Flow("root") r_a = DummyTask('r-a') r_b = DummyTask('r-b') r.add(r_a, r_b) r.link(r_a, r_b, decider=allow) # Setup and run the engine layer. e = engines.load(r) e.compile() e.prepare() e.run() print("---------") print("After run") print("---------") backend = e.storage.backend entries = [os.path.join(backend.memory.root_path, child) for child in backend.memory.ls(backend.memory.root_path)] while entries: path = entries.pop() value = backend.memory[path] if value: print("%s -> %s" % (path, value)) else: print("%s" % (path)) entries.extend(os.path.join(path, child) for child in backend.memory.ls(path)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/timing_listener.py0000664000175000017500000000376200000000000022630 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import random import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.listeners import timing from taskflow.patterns import linear_flow as lf from taskflow import task # INTRO: in this example we will attach a listener to an engine # and have variable run time tasks run and show how the listener will print # out how long those tasks took (when they started and when they finished). # # This shows how timing metrics can be gathered (or attached onto an engine) # after a workflow has been constructed, making it easy to gather metrics # dynamically for situations where this kind of information is applicable (or # even adding this information on at a later point in the future when your # application starts to slow down). 
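# Supplementary sketch (an aside, not part of the original example; the flow
# and task names are illustrative): the same context-manager attachment style
# works for other listeners as well, for example the logging listener, which
# reports task/flow state transitions through the standard 'logging' module
# (by default at a low/debug level) instead of printing durations.

from taskflow.listeners import logging as logging_listeners


class QuickTask(task.Task):
    def execute(self):
        pass


quick_flow = lf.Flow('logged-root').add(QuickTask('q-a'), QuickTask('q-b'))
quick_engine = engines.load(quick_flow)
with logging_listeners.LoggingListener(quick_engine):
    quick_engine.run()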
class VariableTask(task.Task): def __init__(self, name): super(VariableTask, self).__init__(name) self._sleepy_time = random.random() def execute(self): time.sleep(self._sleepy_time) f = lf.Flow('root') f.add(VariableTask('a'), VariableTask('b'), VariableTask('c')) e = engines.load(f) with timing.PrintingDurationListener(e): e.run() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/tox_conductor.py0000664000175000017500000002045300000000000022322 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import itertools import logging import os import shutil import socket import sys import tempfile import threading import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from oslo_utils import timeutils from oslo_utils import uuidutils import six from zake import fake_client from taskflow.conductors import backends as conductors from taskflow import engines from taskflow.jobs import backends as boards from taskflow.patterns import linear_flow from taskflow.persistence import backends as persistence from taskflow.persistence import models from taskflow import task from taskflow.utils import threading_utils # INTRO: This examples shows how a worker/producer can post desired work (jobs) # to a jobboard and a conductor can consume that work (jobs) from that jobboard # and execute those jobs in a reliable & async manner (for example, if the # conductor were to crash then the job will be released back onto the jobboard # and another conductor can attempt to finish it, from wherever that job last # left off). # # In this example a in-memory jobboard (and in-memory storage) is created and # used that simulates how this would be done at a larger scale (it is an # example after all). # Restrict how long this example runs for... RUN_TIME = 5 REVIEW_CREATION_DELAY = 0.5 SCAN_DELAY = 0.1 NAME = "%s_%s" % (socket.getfqdn(), os.getpid()) # This won't really use zookeeper but will use a local version of it using # the zake library that mimics an actual zookeeper cluster using threads and # an in-memory data structure. JOBBOARD_CONF = { 'board': 'zookeeper://localhost?path=/taskflow/tox/jobs', } class RunReview(task.Task): # A dummy task that clones the review and runs tox... def _clone_review(self, review, temp_dir): print("Cloning review '%s' into %s" % (review['id'], temp_dir)) def _run_tox(self, temp_dir): print("Running tox in %s" % temp_dir) def execute(self, review, temp_dir): self._clone_review(review, temp_dir) self._run_tox(temp_dir) class MakeTempDir(task.Task): # A task that creates and destroys a temporary dir (on failure). # # It provides the location of the temporary dir for other tasks to use # as they see fit. 
default_provides = 'temp_dir' def execute(self): return tempfile.mkdtemp() def revert(self, *args, **kwargs): temp_dir = kwargs.get(task.REVERT_RESULT) if temp_dir: shutil.rmtree(temp_dir) class CleanResources(task.Task): # A task that cleans up any workflow resources. def execute(self, temp_dir): print("Removing %s" % temp_dir) shutil.rmtree(temp_dir) def review_iter(): """Makes reviews (never-ending iterator/generator).""" review_id_gen = itertools.count(0) while True: review_id = six.next(review_id_gen) review = { 'id': review_id, } yield review # The reason this is at the module namespace level is important, since it must # be accessible from a conductor dispatching an engine, if it was a lambda # function for example, it would not be reimportable and the conductor would # be unable to reference it when creating the workflow to run. def create_review_workflow(): """Factory method used to create a review workflow to run.""" f = linear_flow.Flow("tester") f.add( MakeTempDir(name="maker"), RunReview(name="runner"), CleanResources(name="cleaner") ) return f def generate_reviewer(client, saver, name=NAME): """Creates a review producer thread with the given name prefix.""" real_name = "%s_reviewer" % name no_more = threading.Event() jb = boards.fetch(real_name, JOBBOARD_CONF, client=client, persistence=saver) def make_save_book(saver, review_id): # Record what we want to happen (sometime in the future). book = models.LogBook("book_%s" % review_id) detail = models.FlowDetail("flow_%s" % review_id, uuidutils.generate_uuid()) book.add(detail) # Associate the factory method we want to be called (in the future) # with the book, so that the conductor will be able to call into # that factory to retrieve the workflow objects that represent the # work. # # These args and kwargs *can* be used to save any specific parameters # into the factory when it is being called to create the workflow # objects (typically used to tell a factory how to create a unique # workflow that represents this review). factory_args = () factory_kwargs = {} engines.save_factory_details(detail, create_review_workflow, factory_args, factory_kwargs) with contextlib.closing(saver.get_connection()) as conn: conn.save_logbook(book) return book def run(): """Periodically publishes 'fake' reviews to analyze.""" jb.connect() review_generator = review_iter() with contextlib.closing(jb): while not no_more.is_set(): review = six.next(review_generator) details = { 'store': { 'review': review, }, } job_name = "%s_%s" % (real_name, review['id']) print("Posting review '%s'" % review['id']) jb.post(job_name, book=make_save_book(saver, review['id']), details=details) time.sleep(REVIEW_CREATION_DELAY) # Return the unstarted thread, and a callback that can be used # shutdown that thread (to avoid running forever). return (threading_utils.daemon_thread(target=run), no_more.set) def generate_conductor(client, saver, name=NAME): """Creates a conductor thread with the given name prefix.""" real_name = "%s_conductor" % name jb = boards.fetch(name, JOBBOARD_CONF, client=client, persistence=saver) conductor = conductors.fetch("blocking", real_name, jb, engine='parallel', wait_timeout=SCAN_DELAY) def run(): jb.connect() with contextlib.closing(jb): conductor.run() # Return the unstarted thread, and a callback that can be used # shutdown that thread (to avoid running forever). return (threading_utils.daemon_thread(target=run), conductor.stop) def main(): # Need to share the same backend, so that data can be shared... 
persistence_conf = { 'connection': 'memory', } saver = persistence.fetch(persistence_conf) with contextlib.closing(saver.get_connection()) as conn: # This ensures that the needed backend setup/data directories/schema # upgrades and so on... exist before they are attempted to be used... conn.upgrade() fc1 = fake_client.FakeClient() # Done like this to share the same client storage location so the correct # zookeeper features work across clients... fc2 = fake_client.FakeClient(storage=fc1.storage) entities = [ generate_reviewer(fc1, saver), generate_conductor(fc2, saver), ] for t, stopper in entities: t.start() try: watch = timeutils.StopWatch(duration=RUN_TIME) watch.start() while not watch.expired(): time.sleep(0.1) finally: for t, stopper in reversed(entities): stopper() t.join() if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wbe_event_sender.py0000664000175000017500000001273700000000000022754 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import string import sys import time top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from six.moves import range as compat_range from taskflow import engines from taskflow.engines.worker_based import worker from taskflow.patterns import linear_flow as lf from taskflow import task from taskflow.types import notifier from taskflow.utils import threading_utils ANY = notifier.Notifier.ANY # INTRO: These examples show how to use a remote worker's event notification # attribute to proxy back task event notifications to the controlling process. # # In this case a simple set of events is triggered by a worker running a # task (simulated to be remote by using a kombu memory transport and threads). # Those events that the 'remote worker' produces will then be proxied back to # the task that the engine is running 'remotely', and then they will be emitted # back to the original callbacks that exist in the originating engine # process/thread. This creates a one-way *notification* channel that can # transparently be used in-process, outside-of-process using remote workers and # so-on that allows tasks to signal to its controlling process some sort of # action that has occurred that the task may need to tell others about (for # example to trigger some type of response when the task reaches 50% done...). 
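# Supplementary sketch (standalone and illustrative, not part of the original
# example): the notifier type that backs this proxying is just a registry of
# callbacks keyed by event name, which can be seen directly without any
# workers involved.


def print_progress(event_type, details):
    print("%s -> %s" % (event_type, details))


demo_notifier = notifier.Notifier()
demo_notifier.register('progress', print_progress)
demo_notifier.notify('progress', {'percent': 50})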
def event_receiver(event_type, details): """This is the callback that (in this example) doesn't do much...""" print("Recieved event '%s'" % event_type) print("Details = %s" % details) class EventReporter(task.Task): """This is the task that will be running 'remotely' (not really remote).""" EVENTS = tuple(string.ascii_uppercase) EVENT_DELAY = 0.1 def execute(self): for i, e in enumerate(self.EVENTS): details = { 'leftover': self.EVENTS[i:], } self.notifier.notify(e, details) time.sleep(self.EVENT_DELAY) BASE_SHARED_CONF = { 'exchange': 'taskflow', 'transport': 'memory', 'transport_options': { 'polling_interval': 0.1, }, } # Until https://github.com/celery/kombu/issues/398 is resolved it is not # recommended to run many worker threads in this example due to the types # of errors mentioned in that issue. MEMORY_WORKERS = 1 WORKER_CONF = { 'tasks': [ # Used to locate which tasks we can run (we don't want to allow # arbitrary code/tasks to be ran by any worker since that would # open up a variety of vulnerabilities). '%s:EventReporter' % (__name__), ], } def run(engine_options): reporter = EventReporter() reporter.notifier.register(ANY, event_receiver) flow = lf.Flow('event-reporter').add(reporter) eng = engines.load(flow, engine='worker-based', **engine_options) eng.run() if __name__ == "__main__": logging.basicConfig(level=logging.ERROR) # Setup our transport configuration and merge it into the worker and # engine configuration so that both of those objects use it correctly. worker_conf = dict(WORKER_CONF) worker_conf.update(BASE_SHARED_CONF) engine_options = dict(BASE_SHARED_CONF) workers = [] # These topics will be used to request worker information on; those # workers will respond with their capabilities which the executing engine # will use to match pending tasks to a matched worker, this will cause # the task to be sent for execution, and the engine will wait until it # is finished (a response is received) and then the engine will either # continue with other tasks, do some retry/failure resolution logic or # stop (and potentially re-raise the remote workers failure)... worker_topics = [] try: # Create a set of worker threads to simulate actual remote workers... print('Running %s workers.' % (MEMORY_WORKERS)) for i in compat_range(0, MEMORY_WORKERS): # Give each one its own unique topic name so that they can # correctly communicate with the engine (they will all share the # same exchange). worker_conf['topic'] = 'worker-%s' % (i + 1) worker_topics.append(worker_conf['topic']) w = worker.Worker(**worker_conf) runner = threading_utils.daemon_thread(w.run) runner.start() w.wait() workers.append((runner, w.stop)) # Now use those workers to do something. print('Executing some work.') engine_options['topics'] = worker_topics result = run(engine_options) print('Execution finished.') finally: # And cleanup. print('Stopping workers.') while workers: r, stopper = workers.pop() stopper() r.join() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wbe_mandelbrot.out.txt0000664000175000017500000000036300000000000023407 0ustar00zuulzuul00000000000000Calculating your mandelbrot fractal of size 512x512. Running 2 workers. Execution finished. Stopping workers. Writing image... Gathered 262144 results that represents a mandelbrot image (using 8 chunks that are computed jointly by 2 workers). 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wbe_mandelbrot.py0000664000175000017500000002133200000000000022411 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import math import os import sys top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from six.moves import range as compat_range from taskflow import engines from taskflow.engines.worker_based import worker from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.utils import threading_utils # INTRO: This example walks through a workflow that will in parallel compute # a mandelbrot result set (using X 'remote' workers) and then combine their # results together to form a final mandelbrot fractal image. It shows a usage # of taskflow to perform a well-known embarrassingly parallel problem that has # the added benefit of also being an elegant visualization. # # NOTE(harlowja): this example simulates the expected larger number of workers # by using a set of threads (which in this example simulate the remote workers # that would typically be running on other external machines). # # NOTE(harlowja): to have it produce an image run (after installing pillow): # # $ python taskflow/examples/wbe_mandelbrot.py output.png BASE_SHARED_CONF = { 'exchange': 'taskflow', } WORKERS = 2 WORKER_CONF = { # These are the tasks the worker can execute, they *must* be importable, # typically this list is used to restrict what workers may execute to # a smaller set of *allowed* tasks that are known to be safe (one would # not want to allow all python code to be executed). 'tasks': [ '%s:MandelCalculator' % (__name__), ], } ENGINE_CONF = { 'engine': 'worker-based', } # Mandelbrot & image settings... IMAGE_SIZE = (512, 512) CHUNK_COUNT = 8 MAX_ITERATIONS = 25 class MandelCalculator(task.Task): def execute(self, image_config, mandelbrot_config, chunk): """Returns the number of iterations before the computation "escapes". Given the real and imaginary parts of a complex number, determine if it is a candidate for membership in the mandelbrot set given a fixed number of iterations. """ # Parts borrowed from (credit to mark harris and benoît mandelbrot). 
# # http://nbviewer.ipython.org/gist/harrism/f5707335f40af9463c43 def mandelbrot(x, y, max_iters): c = complex(x, y) z = 0.0j for i in compat_range(max_iters): z = z * z + c if (z.real * z.real + z.imag * z.imag) >= 4: return i return max_iters min_x, max_x, min_y, max_y, max_iters = mandelbrot_config height, width = image_config['size'] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height block = [] for y in compat_range(chunk[0], chunk[1]): row = [] imag = min_y + y * pixel_size_y for x in compat_range(0, width): real = min_x + x * pixel_size_x row.append(mandelbrot(real, imag, max_iters)) block.append(row) return block def calculate(engine_conf): # Subdivide the work into X pieces, then request each worker to calculate # one of those chunks and then later we will write these chunks out to # an image bitmap file. # And unordered flow is used here since the mandelbrot calculation is an # example of an embarrassingly parallel computation that we can scatter # across as many workers as possible. flow = uf.Flow("mandelbrot") # These symbols will be automatically given to tasks as input to their # execute method, in this case these are constants used in the mandelbrot # calculation. store = { 'mandelbrot_config': [-2.0, 1.0, -1.0, 1.0, MAX_ITERATIONS], 'image_config': { 'size': IMAGE_SIZE, } } # We need the task names to be in the right order so that we can extract # the final results in the right order (we don't care about the order when # executing). task_names = [] # Compose our workflow. height, _width = IMAGE_SIZE chunk_size = int(math.ceil(height / float(CHUNK_COUNT))) for i in compat_range(0, CHUNK_COUNT): chunk_name = 'chunk_%s' % i task_name = "calculation_%s" % i # Break the calculation up into chunk size pieces. rows = [i * chunk_size, i * chunk_size + chunk_size] flow.add( MandelCalculator(task_name, # This ensures the storage symbol with name # 'chunk_name' is sent into the tasks local # symbol 'chunk'. This is how we give each # calculator its own correct sequence of rows # to work on. rebind={'chunk': chunk_name})) store[chunk_name] = rows task_names.append(task_name) # Now execute it. eng = engines.load(flow, store=store, engine_conf=engine_conf) eng.run() # Gather all the results and order them for further processing. gather = [] for name in task_names: gather.extend(eng.storage.get(name)) points = [] for y, row in enumerate(gather): for x, color in enumerate(row): points.append(((x, y), color)) return points def write_image(results, output_filename=None): print("Gathered %s results that represents a mandelbrot" " image (using %s chunks that are computed jointly" " by %s workers)." % (len(results), CHUNK_COUNT, WORKERS)) if not output_filename: return # Pillow (the PIL fork) saves us from writing our own image writer... try: from PIL import Image except ImportError as e: # To currently get this (may change in the future), # $ pip install Pillow raise RuntimeError("Pillow is required to write image files: %s" % e) # Limit to 255, find the max and normalize to that... color_max = 0 for _point, color in results: color_max = max(color, color_max) # Use gray scale since we don't really have other colors. 
img = Image.new('L', IMAGE_SIZE, "black") pixels = img.load() for (x, y), color in results: if color_max == 0: color = 0 else: color = int((float(color) / color_max) * 255.0) pixels[x, y] = color img.save(output_filename) def create_fractal(): logging.basicConfig(level=logging.ERROR) # Setup our transport configuration and merge it into the worker and # engine configuration so that both of those use it correctly. shared_conf = dict(BASE_SHARED_CONF) shared_conf.update({ 'transport': 'memory', 'transport_options': { 'polling_interval': 0.1, }, }) if len(sys.argv) >= 2: output_filename = sys.argv[1] else: output_filename = None worker_conf = dict(WORKER_CONF) worker_conf.update(shared_conf) engine_conf = dict(ENGINE_CONF) engine_conf.update(shared_conf) workers = [] worker_topics = [] print('Calculating your mandelbrot fractal of size %sx%s.' % IMAGE_SIZE) try: # Create a set of workers to simulate actual remote workers. print('Running %s workers.' % (WORKERS)) for i in compat_range(0, WORKERS): worker_conf['topic'] = 'calculator_%s' % (i + 1) worker_topics.append(worker_conf['topic']) w = worker.Worker(**worker_conf) runner = threading_utils.daemon_thread(w.run) runner.start() w.wait() workers.append((runner, w.stop)) # Now use those workers to do something. engine_conf['topics'] = worker_topics results = calculate(engine_conf) print('Execution finished.') finally: # And cleanup. print('Stopping workers.') while workers: r, stopper = workers.pop() stopper() r.join() print("Writing image...") write_image(results, output_filename=output_filename) if __name__ == "__main__": create_fractal() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wbe_simple_linear.out.txt0000664000175000017500000000022400000000000024077 0ustar00zuulzuul00000000000000Running 2 workers. Executing some work. Execution finished. Result = {"result1": 1, "result2": 666, "x": 111, "y": 222, "z": 333} Stopping workers. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wbe_simple_linear.py0000664000175000017500000001244500000000000023112 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import json import logging import os import sys import tempfile top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) from taskflow import engines from taskflow.engines.worker_based import worker from taskflow.patterns import linear_flow as lf from taskflow.tests import utils from taskflow.utils import threading_utils import example_utils # noqa # INTRO: This example walks through a miniature workflow which shows how to # start up a number of workers (these workers will process task execution and # reversion requests using any provided input data) and then use an engine # that creates a set of *capable* tasks and flows (the engine can not create # tasks that the workers are not able to run, this will end in failure) that # those workers will run and then executes that workflow seamlessly using the # workers to perform the actual execution. # # NOTE(harlowja): this example simulates the expected larger number of workers # by using a set of threads (which in this example simulate the remote workers # that would typically be running on other external machines). # A filesystem can also be used as the queue transport (useful as simple # transport type that does not involve setting up a larger mq system). If this # is false then the memory transport is used instead, both work in standalone # setups. USE_FILESYSTEM = False BASE_SHARED_CONF = { 'exchange': 'taskflow', } # Until https://github.com/celery/kombu/issues/398 is resolved it is not # recommended to run many worker threads in this example due to the types # of errors mentioned in that issue. MEMORY_WORKERS = 2 FILE_WORKERS = 1 WORKER_CONF = { # These are the tasks the worker can execute, they *must* be importable, # typically this list is used to restrict what workers may execute to # a smaller set of *allowed* tasks that are known to be safe (one would # not want to allow all python code to be executed). 'tasks': [ 'taskflow.tests.utils:TaskOneArgOneReturn', 'taskflow.tests.utils:TaskMultiArgOneReturn' ], } def run(engine_options): flow = lf.Flow('simple-linear').add( utils.TaskOneArgOneReturn(provides='result1'), utils.TaskMultiArgOneReturn(provides='result2') ) eng = engines.load(flow, store=dict(x=111, y=222, z=333), engine='worker-based', **engine_options) eng.run() return eng.storage.fetch_all() if __name__ == "__main__": logging.basicConfig(level=logging.ERROR) # Setup our transport configuration and merge it into the worker and # engine configuration so that both of those use it correctly. shared_conf = dict(BASE_SHARED_CONF) tmp_path = None if USE_FILESYSTEM: worker_count = FILE_WORKERS tmp_path = tempfile.mkdtemp(prefix='wbe-example-') shared_conf.update({ 'transport': 'filesystem', 'transport_options': { 'data_folder_in': tmp_path, 'data_folder_out': tmp_path, 'polling_interval': 0.1, }, }) else: worker_count = MEMORY_WORKERS shared_conf.update({ 'transport': 'memory', 'transport_options': { 'polling_interval': 0.1, }, }) worker_conf = dict(WORKER_CONF) worker_conf.update(shared_conf) engine_options = dict(shared_conf) workers = [] worker_topics = [] try: # Create a set of workers to simulate actual remote workers. print('Running %s workers.' % (worker_count)) for i in range(0, worker_count): worker_conf['topic'] = 'worker-%s' % (i + 1) worker_topics.append(worker_conf['topic']) w = worker.Worker(**worker_conf) runner = threading_utils.daemon_thread(w.run) runner.start() w.wait() workers.append((runner, w.stop)) # Now use those workers to do something. 
print('Executing some work.') engine_options['topics'] = worker_topics result = run(engine_options) print('Execution finished.') # This is done so that the test examples can work correctly # even when the keys change order (which will happen in various # python versions). print("Result = %s" % json.dumps(result, sort_keys=True)) finally: # And cleanup. print('Stopping workers.') while workers: r, stopper = workers.pop() stopper() r.join() if tmp_path: example_utils.rm_path(tmp_path) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/examples/wrapped_exception.py0000664000175000017500000001117400000000000023150 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import sys import time logging.basicConfig(level=logging.ERROR) top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir)) sys.path.insert(0, top_dir) import taskflow.engines from taskflow import exceptions from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.tests import utils from taskflow.types import failure import example_utils as eu # noqa # INTRO: In this example we create two tasks which can trigger exceptions # based on various inputs to show how to analyze the thrown exceptions for # which types were thrown and handle the different types in different ways. # # This is especially important if a set of tasks run in parallel and each of # those tasks may fail while running. This creates a scenario where multiple # exceptions have been thrown and those exceptions need to be handled in a # unified manner. Since an engine does not currently know how to resolve # those exceptions (someday it could) the code using that engine and activating # the flows and tasks using that engine will currently have to deal with # catching those exceptions (and silencing them if this is desired). # # NOTE(harlowja): The engine *will* trigger rollback even under multiple # exceptions being thrown, but at the end of that rollback the engine will # rethrow these exceptions to the code that called the run() method; allowing # that code to do further cleanups (if desired). 
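# Supplementary sketch (standalone and illustrative, not part of the original
# example): a taskflow Failure is a serializable capture of an exception and
# its traceback; the WrappedFailure handled below is simply an iterable
# container of such captures.
try:
    raise ValueError("boom")
except ValueError:
    captured = failure.Failure()
print("Captured: %s" % captured.exception_str)
if captured.check(ValueError):
    print("Yes, it was a ValueError")
# captured.reraise() would re-raise the original ValueError (with traceback).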
class FirstException(Exception): """Exception that first task raises.""" class SecondException(Exception): """Exception that second task raises.""" class FirstTask(task.Task): def execute(self, sleep1, raise1): time.sleep(sleep1) if not isinstance(raise1, bool): raise TypeError('Bad raise1 value: %r' % raise1) if raise1: raise FirstException('First task failed') class SecondTask(task.Task): def execute(self, sleep2, raise2): time.sleep(sleep2) if not isinstance(raise2, bool): raise TypeError('Bad raise2 value: %r' % raise2) if raise2: raise SecondException('Second task failed') def run(**store): # Creates a flow, each task in the flow will examine the kwargs passed in # here and based on those kwargs it will behave in a different manner # while executing; this allows for the calling code (see below) to show # different usages of the failure catching and handling mechanism. flow = uf.Flow('flow').add( FirstTask(), SecondTask() ) try: with utils.wrap_all_failures(): taskflow.engines.run(flow, store=store, engine='parallel') except exceptions.WrappedFailure as ex: unknown_failures = [] for a_failure in ex: if a_failure.check(FirstException): print("Got FirstException: %s" % a_failure.exception_str) elif a_failure.check(SecondException): print("Got SecondException: %s" % a_failure.exception_str) else: print("Unknown failure: %s" % a_failure) unknown_failures.append(a_failure) failure.Failure.reraise_if_any(unknown_failures) eu.print_wrapped("Raise and catch first exception only") run(sleep1=0.0, raise1=True, sleep2=0.0, raise2=False) # NOTE(imelnikov): in general, sleeping does not guarantee that we'll have both # task running before one of them fails, but with current implementation this # works most of times, which is enough for our purposes here (as an example). eu.print_wrapped("Raise and catch both exceptions") run(sleep1=1.0, raise1=True, sleep2=1.0, raise2=True) eu.print_wrapped("Handle one exception, and re-raise another") try: run(sleep1=1.0, raise1=True, sleep2=1.0, raise2='boom') except TypeError as ex: print("As expected, TypeError is here: %s" % ex) else: assert False, "TypeError expected" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/exceptions.py0000664000175000017500000002657400000000000020005 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import traceback from oslo_utils import excutils from oslo_utils import reflection import six from taskflow.utils import mixins def raise_with_cause(exc_cls, message, *args, **kwargs): """Helper to raise + chain exceptions (when able) and associate a *cause*. NOTE(harlowja): Since in py3.x exceptions can be chained (due to :pep:`3134`) we should try to raise the desired exception with the given *cause* (or extract a *cause* from the current stack if able) so that the exception formats nicely in old and new versions of python. 
Since py2.x does **not** support exception chaining (or formatting) our root exception class has a :py:meth:`~taskflow.exceptions.TaskFlowException.pformat` method that can be used to get *similar* information instead (and this function makes sure to retain the *cause* in that case as well so that the :py:meth:`~taskflow.exceptions.TaskFlowException.pformat` method shows them). :param exc_cls: the :py:class:`~taskflow.exceptions.TaskFlowException` class to raise. :param message: the text/str message that will be passed to the exceptions constructor as its first positional argument. :param args: any additional positional arguments to pass to the exceptions constructor. :param kwargs: any additional keyword arguments to pass to the exceptions constructor. """ if not issubclass(exc_cls, TaskFlowException): raise ValueError("Subclass of taskflow exception is required") excutils.raise_with_cause(exc_cls, message, *args, **kwargs) class TaskFlowException(Exception): """Base class for *most* exceptions emitted from this library. NOTE(harlowja): in later versions of python we can likely remove the need to have a ``cause`` here as PY3+ have implemented :pep:`3134` which handles chaining in a much more elegant manner. :param message: the exception message, typically some string that is useful for consumers to view when debugging or analyzing failures. :param cause: the cause of the exception being raised, when provided this should itself be an exception instance, this is useful for creating a chain of exceptions for versions of python where this is not yet implemented/supported natively. """ def __init__(self, message, cause=None): super(TaskFlowException, self).__init__(message) self._cause = cause @property def cause(self): return self._cause def __str__(self): return self.pformat() def _get_message(self): # We must *not* call into the __str__ method as that will reactivate # the pformat method, which will end up badly (and doesn't look # pretty at all); so be careful... return self.args[0] def pformat(self, indent=2, indent_text=" ", show_root_class=False): """Pretty formats a taskflow exception + any connected causes.""" if indent < 0: raise ValueError("Provided 'indent' must be greater than" " or equal to zero instead of %s" % indent) buf = six.StringIO() if show_root_class: buf.write(reflection.get_class_name(self, fully_qualified=False)) buf.write(": ") buf.write(self._get_message()) active_indent = indent next_up = self.cause seen = [] while next_up is not None and next_up not in seen: seen.append(next_up) buf.write(os.linesep) if isinstance(next_up, TaskFlowException): buf.write(indent_text * active_indent) buf.write(reflection.get_class_name(next_up, fully_qualified=False)) buf.write(": ") buf.write(next_up._get_message()) else: lines = traceback.format_exception_only(type(next_up), next_up) for i, line in enumerate(lines): buf.write(indent_text * active_indent) if line.endswith("\n"): # We'll add our own newlines on... line = line[0:-1] buf.write(line) if i + 1 != len(lines): buf.write(os.linesep) if not isinstance(next_up, TaskFlowException): # Don't go deeper into non-taskflow exceptions... as we # don't know if there exception 'cause' attributes are even # useable objects... break active_indent += indent next_up = getattr(next_up, 'cause', None) return buf.getvalue() # Errors related to storage or operations on storage units. class StorageFailure(TaskFlowException): """Raised when storage backends can not be read/saved/deleted.""" # Conductor related errors. 
class ConductorFailure(TaskFlowException): """Errors related to conducting activities.""" # Job related errors. class JobFailure(TaskFlowException): """Errors related to jobs or operations on jobs.""" class UnclaimableJob(JobFailure): """Raised when a job can not be claimed.""" # Engine/ during execution related errors. class ExecutionFailure(TaskFlowException): """Errors related to engine execution.""" class RequestTimeout(ExecutionFailure): """Raised when a worker request was not finished within allotted time.""" class InvalidState(ExecutionFailure): """Raised when a invalid state transition is attempted while executing.""" # Other errors that do not fit the above categories (at the current time). class DependencyFailure(TaskFlowException): """Raised when some type of dependency problem occurs.""" class AmbiguousDependency(DependencyFailure): """Raised when some type of ambiguous dependency problem occurs.""" class MissingDependencies(DependencyFailure): """Raised when a entity has dependencies that can not be satisfied. :param who: the entity that caused the missing dependency to be triggered. :param requirements: the dependency which were not satisfied. Further arguments are interpreted as for in :py:class:`~taskflow.exceptions.TaskFlowException`. """ #: Exception message template used when creating an actual message. MESSAGE_TPL = ("'%(who)s' requires %(requirements)s but no other entity" " produces said requirements") METHOD_TPL = "'%(method)s' method on " def __init__(self, who, requirements, cause=None, method=None): message = self.MESSAGE_TPL % {'who': who, 'requirements': requirements} if method: message = (self.METHOD_TPL % {'method': method}) + message super(MissingDependencies, self).__init__(message, cause=cause) self.missing_requirements = requirements class CompilationFailure(TaskFlowException): """Raised when some type of compilation issue is found.""" class IncompatibleVersion(TaskFlowException): """Raised when some type of version incompatibility is found.""" class Duplicate(TaskFlowException): """Raised when a duplicate entry is found.""" class NotFound(TaskFlowException): """Raised when some entry in some object doesn't exist.""" class Empty(TaskFlowException): """Raised when some object is empty when it shouldn't be.""" class MultipleChoices(TaskFlowException): """Raised when some decision can't be made due to many possible choices.""" class InvalidFormat(TaskFlowException): """Raised when some object/entity is not in the expected format.""" class DisallowedAccess(TaskFlowException): """Raised when storage access is not possible due to state limitations.""" def __init__(self, message, cause=None, state=None): super(DisallowedAccess, self).__init__(message, cause=cause) self.state = state # Others. class NotImplementedError(NotImplementedError): """Exception for when some functionality really isn't implemented. This is typically useful when the library itself needs to distinguish internal features not being made available from users features not being made available/implemented (and to avoid misinterpreting the two). """ class WrappedFailure(mixins.StrMixin, Exception): """Wraps one or several failure objects. 
When exception/s cannot be re-raised (for example, because the value and traceback are lost in serialization) or there are several exceptions active at the same time (due to more than one thread raising exceptions), we will wrap the corresponding failure objects into this exception class and *may* reraise this exception type to allow users to handle the contained failures/causes as they see fit... See the failure class documentation for a more comprehensive set of reasons why this object *may* be reraised instead of the original exception. :param causes: the :py:class:`~taskflow.types.failure.Failure` objects that caused this exception to be raised. """ def __init__(self, causes): super(WrappedFailure, self).__init__() self._causes = [] for cause in causes: if cause.check(type(self)) and cause.exception: # NOTE(imelnikov): flatten wrapped failures. self._causes.extend(cause.exception) else: self._causes.append(cause) def __iter__(self): """Iterate over failures that caused the exception.""" return iter(self._causes) def __len__(self): """Return number of wrapped failures.""" return len(self._causes) def check(self, *exc_classes): """Check if any of exception classes caused the failure/s. :param exc_classes: exception types/exception type names to search for. If any of the contained failures were caused by an exception of a given type, the corresponding argument that matched is returned. If not then none is returned. """ if not exc_classes: return None for cause in self: result = cause.check(*exc_classes) if result is not None: return result return None def __bytes__(self): buf = six.BytesIO() buf.write(b'WrappedFailure: [') causes_gen = (six.binary_type(cause) for cause in self._causes) buf.write(b", ".join(causes_gen)) buf.write(b']') return buf.getvalue() def __unicode__(self): buf = six.StringIO() buf.write(u'WrappedFailure: [') causes_gen = (six.text_type(cause) for cause in self._causes) buf.write(u", ".join(causes_gen)) buf.write(u']') return buf.getvalue() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/flow.py0000664000175000017500000001136200000000000016560 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc from oslo_utils import reflection import six # Link metadata keys that have inherent/special meaning. # # This key denotes the link is an invariant that ensures the order is # correctly preserved. LINK_INVARIANT = 'invariant' # This key denotes the link is a manually/user-specified. LINK_MANUAL = 'manual' # This key denotes the link was created when resolving/compiling retries. LINK_RETRY = 'retry' # This key denotes the link was created due to symbol constraints and the # value will be a set of names that the constraint ensures are satisfied. LINK_REASONS = 'reasons' # # This key denotes a callable that will determine if the target is visited. 
LINK_DECIDER = 'decider' # Chop off full module names of patterns that are built-in to taskflow... _CHOP_PAT = "taskflow.patterns." _CHOP_PAT_LEN = len(_CHOP_PAT) # This key denotes the depth the decider will apply (defaulting to all). LINK_DECIDER_DEPTH = 'decider_depth' @six.add_metaclass(abc.ABCMeta) class Flow(object): """The base abstract class of all flow implementations. A flow is a structure that defines relationships between tasks. You can add tasks and other flows (as subflows) to the flow, and the flow provides a way to implicitly or explicitly define how they are interdependent. Exact structure of the relationships is defined by concrete implementation, while this class defines common interface and adds human-readable (not necessary unique) name. NOTE(harlowja): if a flow is placed in another flow as a subflow, a desired way to compose flows together, then it is valid and permissible that during compilation the subflow & parent flow *may* be flattened into a new flow. """ def __init__(self, name, retry=None): self._name = six.text_type(name) self._retry = retry # NOTE(akarpinska): if retry doesn't have a name, # the name of its owner will be assigned if self._retry is not None and self._retry.name is None: self._retry.name = self.name + "_retry" @property def name(self): """A non-unique name for this flow (human readable).""" return self._name @property def retry(self): """The associated flow retry controller. This retry controller object will affect & control how (and if) this flow and its contained components retry when execution is underway and a failure occurs. """ return self._retry @abc.abstractmethod def add(self, *items): """Adds a given item/items to this flow.""" @abc.abstractmethod def __len__(self): """Returns how many items are in this flow.""" @abc.abstractmethod def __iter__(self): """Iterates over the children of the flow.""" @abc.abstractmethod def iter_links(self): """Iterates over dependency links between children of the flow. Iterates over 3-tuples ``(A, B, meta)``, where * ``A`` is a child (atom or subflow) link starts from; * ``B`` is a child (atom or subflow) link points to; it is said that ``B`` depends on ``A`` or ``B`` requires ``A``; * ``meta`` is link metadata, a dictionary. """ @abc.abstractmethod def iter_nodes(self): """Iterate over nodes of the flow. Iterates over 2-tuples ``(A, meta)``, where * ``A`` is a child (atom or subflow) of current flow; * ``meta`` is link metadata, a dictionary. """ def __str__(self): cls_name = reflection.get_class_name(self) if cls_name.startswith(_CHOP_PAT): cls_name = cls_name[_CHOP_PAT_LEN:] return "%s: %s(len=%d)" % (cls_name, self.name, len(self)) @property def provides(self): """Set of symbol names provided by the flow.""" provides = set() if self._retry is not None: provides.update(self._retry.provides) for item in self: provides.update(item.provides) return frozenset(provides) @abc.abstractproperty def requires(self): """Set of *unsatisfied* symbol names required by the flow.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/formatters.py0000664000175000017500000001642100000000000020000 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from taskflow.engines.action_engine import compiler from taskflow import exceptions as exc from taskflow import states from taskflow.types import tree from taskflow.utils import misc def _cached_get(cache, cache_key, atom_name, fetch_func, *args, **kwargs): """Tries to get a previously saved value or fetches it and caches it.""" value, value_found = None, False try: value, value_found = cache[cache_key][atom_name] except KeyError: try: value = fetch_func(*args, **kwargs) value_found = True except (exc.StorageFailure, exc.NotFound): pass cache[cache_key][atom_name] = value, value_found return value, value_found def _fetch_predecessor_tree(graph, atom): """Creates a tree of predecessors, rooted at given atom.""" root = tree.Node(atom) stack = [(root, atom)] while stack: parent, node = stack.pop() for pred_node in graph.predecessors(node): pred_node_data = graph.nodes[pred_node] if pred_node_data['kind'] == compiler.FLOW_END: # Jump over and/or don't show flow end nodes... for pred_pred_node in graph.predecessors(pred_node): stack.append((parent, pred_pred_node)) else: child = tree.Node(pred_node, **pred_node_data) parent.add(child) # And go further backwards... stack.append((child, pred_node)) return root class FailureFormatter(object): """Formats a failure and connects it to associated atoms & engine.""" _BUILDERS = { states.EXECUTE: (_fetch_predecessor_tree, 'predecessors'), } def __init__(self, engine, hide_inputs_outputs_of=()): self._hide_inputs_outputs_of = hide_inputs_outputs_of self._engine = engine def _format_node(self, storage, cache, node): """Formats a single tree node into a string version.""" if node.metadata['kind'] == compiler.FLOW: flow = node.item flow_name = flow.name return "Flow '%s'" % (flow_name) elif node.metadata['kind'] in compiler.ATOMS: atom = node.item atom_name = atom.name atom_attrs = {} intention, intention_found = _cached_get( cache, 'intentions', atom_name, storage.get_atom_intention, atom_name) if intention_found: atom_attrs['intention'] = intention state, state_found = _cached_get(cache, 'states', atom_name, storage.get_atom_state, atom_name) if state_found: atom_attrs['state'] = state if atom_name not in self._hide_inputs_outputs_of: # When the cache does not exist for this atom this # will be called with the rest of these arguments # used to populate the cache. fetch_mapped_args = functools.partial( storage.fetch_mapped_args, atom.rebind, atom_name=atom_name, optional_args=atom.optional) requires, requires_found = _cached_get(cache, 'requires', atom_name, fetch_mapped_args) if requires_found: atom_attrs['requires'] = requires provides, provides_found = _cached_get( cache, 'provides', atom_name, storage.get_execute_result, atom_name) if provides_found: atom_attrs['provides'] = provides if atom_attrs: return "Atom '%s' %s" % (atom_name, atom_attrs) else: return "Atom '%s'" % (atom_name) else: raise TypeError("Unable to format node, unknown node" " kind '%s' encountered" % node.metadata['kind']) def format(self, fail, atom_matcher): """Returns a (exc_info, details) tuple about the failure. 
The ``exc_info`` tuple should be a standard three element (exctype, value, traceback) tuple that will be used for further logging. A non-empty string is typically returned for ``details``; it should contain any string info about the failure (with any specific details the ``exc_info`` may not have/contain). """ buff = misc.StringIO() storage = self._engine.storage compilation = self._engine.compilation if fail.exc_info is None: # Remote failures will not have a 'exc_info' tuple, so just use # the captured traceback that was captured by the creator when it # failed... buff.write_nl(fail.pformat(traceback=True)) if storage is None or compilation is None: # Somehow we got called before prepared and/or compiled; ok # that's weird, skip doing the rest... return (fail.exc_info, buff.getvalue()) hierarchy = compilation.hierarchy graph = compilation.execution_graph atom_node = hierarchy.find_first_match(atom_matcher) atom = None atom_intention = None if atom_node is not None: atom = atom_node.item atom_intention = storage.get_atom_intention(atom.name) if atom is not None and atom_intention in self._BUILDERS: # Cache as much as we can, since the path of various atoms # may cause the same atom to be seen repeatedly depending on # the graph structure... cache = { 'intentions': {}, 'provides': {}, 'requires': {}, 'states': {}, } builder, kind = self._BUILDERS[atom_intention] rooted_tree = builder(graph, atom) child_count = rooted_tree.child_count(only_direct=False) buff.write_nl('%s %s (most recent first):' % (child_count, kind)) formatter = functools.partial(self._format_node, storage, cache) direct_child_count = rooted_tree.child_count(only_direct=True) for i, child in enumerate(rooted_tree, 1): if i == direct_child_count: buff.write(child.pformat(stringify_node=formatter, starting_prefix=" ")) else: buff.write_nl(child.pformat(stringify_node=formatter, starting_prefix=" ")) return (fail.exc_info, buff.getvalue()) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6240416 taskflow-4.6.4/taskflow/jobs/0000775000175000017500000000000000000000000016171 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/jobs/__init__.py0000664000175000017500000000000000000000000020270 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6240416 taskflow-4.6.4/taskflow/jobs/backends/0000775000175000017500000000000000000000000017743 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/jobs/backends/__init__.py0000664000175000017500000000552600000000000022064 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib from stevedore import driver from taskflow import exceptions as exc from taskflow import logging from taskflow.utils import misc # NOTE(harlowja): this is the entrypoint namespace, not the module namespace. BACKEND_NAMESPACE = 'taskflow.jobboards' LOG = logging.getLogger(__name__) def fetch(name, conf, namespace=BACKEND_NAMESPACE, **kwargs): """Fetch a jobboard backend with the given configuration. This fetch method will look for the entrypoint name in the entrypoint namespace, and then attempt to instantiate that entrypoint using the provided name, configuration and any board specific kwargs. NOTE(harlowja): to aid in making it easy to specify configuration and options to a board the configuration (which is typical just a dictionary) can also be a URI string that identifies the entrypoint name and any configuration specific to that board. For example, given the following configuration URI:: zookeeper:///?a=b&c=d This will look for the entrypoint named 'zookeeper' and will provide a configuration object composed of the URI's components, in this case that is ``{'a': 'b', 'c': 'd'}`` to the constructor of that board instance (also including the name specified). """ board, conf = misc.extract_driver_and_conf(conf, 'board') LOG.debug('Looking for %r jobboard driver in %r', board, namespace) try: mgr = driver.DriverManager(namespace, board, invoke_on_load=True, invoke_args=(name, conf), invoke_kwds=kwargs) return mgr.driver except RuntimeError as e: raise exc.NotFound("Could not find jobboard %s" % (board), e) @contextlib.contextmanager def backend(name, conf, namespace=BACKEND_NAMESPACE, **kwargs): """Fetches a jobboard, connects to it and closes it on completion. This allows a board instance to fetched, connected to, and then used in a context manager statement with the board being closed upon context manager exit. """ jb = fetch(name, conf, namespace=namespace, **kwargs) jb.connect() with contextlib.closing(jb): yield jb ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/jobs/backends/impl_redis.py0000664000175000017500000012511000000000000022444 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
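# NOTE: a minimal usage sketch (not part of the original module) showing how
# a redis backed jobboard is typically obtained via the ``backend()`` helper
# from ``taskflow.jobs.backends`` shown above; the connection values and the
# board/job/owner names used here are hypothetical placeholders:
#
#     from taskflow.jobs import backends as job_backends
#
#     conf = {
#         'board': 'redis',
#         'host': '127.0.0.1',
#         'port': 6379,
#         'db': 0,
#     }
#     with job_backends.backend('my-board', conf) as board:
#         job = board.post('convert-files', details={'frobnicate': True})
#         for found in board.iterjobs(only_unclaimed=True):
#             board.claim(found, 'worker-1', expiry=30)
#             ...  # perform whatever work the job details describe
#             board.consume(found, 'worker-1')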
import contextlib import datetime import functools import string import threading import time import fasteners import msgpack from oslo_serialization import msgpackutils from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils from redis import exceptions as redis_exceptions from redis import sentinel import six from six.moves import range as compat_range from taskflow import exceptions as exc from taskflow.jobs import base from taskflow import logging from taskflow import states from taskflow.utils import misc from taskflow.utils import redis_utils as ru LOG = logging.getLogger(__name__) @contextlib.contextmanager def _translate_failures(): """Translates common redis exceptions into taskflow exceptions.""" try: yield except redis_exceptions.ConnectionError: exc.raise_with_cause(exc.JobFailure, "Failed to connect to redis") except redis_exceptions.TimeoutError: exc.raise_with_cause(exc.JobFailure, "Failed to communicate with redis, connection" " timed out") except redis_exceptions.RedisError: exc.raise_with_cause(exc.JobFailure, "Failed to communicate with redis," " internal error") @functools.total_ordering class RedisJob(base.Job): """A redis job.""" def __init__(self, board, name, sequence, key, uuid=None, details=None, created_on=None, backend=None, book=None, book_data=None, priority=base.JobPriority.NORMAL): super(RedisJob, self).__init__(board, name, uuid=uuid, details=details, backend=backend, book=book, book_data=book_data) self._created_on = created_on self._client = board._client self._redis_version = board._redis_version self._sequence = sequence self._key = key self._last_modified_key = board.join(key + board.LAST_MODIFIED_POSTFIX) self._owner_key = board.join(key + board.OWNED_POSTFIX) self._priority = priority @property def key(self): """Key (in board listings/trash hash) the job data is stored under.""" return self._key @property def priority(self): return self._priority @property def last_modified_key(self): """Key the job last modified data is stored under.""" return self._last_modified_key @property def owner_key(self): """Key the job claim + data of the owner is stored under.""" return self._owner_key @property def sequence(self): """Sequence number of the current job.""" return self._sequence def expires_in(self): """How many seconds until the claim expires. Returns the number of seconds until the ownership entry expires or :attr:`~taskflow.utils.redis_utils.UnknownExpire.DOES_NOT_EXPIRE` or :attr:`~taskflow.utils.redis_utils.UnknownExpire.KEY_NOT_FOUND` if it does not expire or if the expiry can not be determined (perhaps the :attr:`.owner_key` expired at/before time of inquiry?). """ with _translate_failures(): return ru.get_expiry(self._client, self._owner_key, prior_version=self._redis_version) def extend_expiry(self, expiry): """Extends the owner key (aka the claim) expiry for this job. NOTE(harlowja): if the claim for this job did **not** previously have an expiry associated with it, calling this method will create one (and after that time elapses the claim on this job will cease to exist). Returns ``True`` if the expiry request was performed otherwise ``False``. 
""" with _translate_failures(): return ru.apply_expiry(self._client, self._owner_key, expiry, prior_version=self._redis_version) def __lt__(self, other): if not isinstance(other, RedisJob): return NotImplemented if self.board.listings_key == other.board.listings_key: if self.priority == other.priority: return self.sequence < other.sequence else: ordered = base.JobPriority.reorder( (self.priority, self), (other.priority, other)) if ordered[0] is self: return False return True else: # Different jobboards with different listing keys... return self.board.listings_key < other.board.listings_key def __eq__(self, other): if not isinstance(other, RedisJob): return NotImplemented return ((self.board.listings_key, self.priority, self.sequence) == (other.board.listings_key, other.priority, other.sequence)) def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self.board.listings_key, self.priority, self.sequence)) @property def created_on(self): return self._created_on @property def last_modified(self): with _translate_failures(): raw_last_modified = self._client.get(self._last_modified_key) last_modified = None if raw_last_modified: last_modified = self._board._loads( raw_last_modified, root_types=(datetime.datetime,)) # NOTE(harlowja): just incase this is somehow busted (due to time # sync issues/other), give back the most recent one (since redis # does not maintain clock information; we could have this happen # due to now clients who mutate jobs also send the time in). last_modified = max(last_modified, self._created_on) return last_modified @property def state(self): listings_key = self._board.listings_key owner_key = self._owner_key listings_sub_key = self._key def _do_fetch(p): # NOTE(harlowja): state of a job in redis is not set into any # explicit 'state' field, but is maintained by what nodes exist in # redis instead (ie if a owner key exists, then we know a owner # is active, if no job data exists and no owner, then we know that # the job is unclaimed, and so-on)... p.multi() p.hexists(listings_key, listings_sub_key) p.exists(owner_key) job_exists, owner_exists = p.execute() if not job_exists: if owner_exists: # This should **not** be possible due to lua code ordering # but let's log an INFO statement if it does happen (so # that it can be investigated)... LOG.info("Unexpected owner key found at '%s' when job" " key '%s[%s]' was not found", owner_key, listings_key, listings_sub_key) return states.COMPLETE else: if owner_exists: return states.CLAIMED else: return states.UNCLAIMED with _translate_failures(): return self._client.transaction(_do_fetch, listings_key, owner_key, value_from_callable=True) class RedisJobBoard(base.JobBoard): """A jobboard backed by `redis`_. Powered by the `redis-py `_ library. This jobboard creates job entries by listing jobs in a redis `hash`_. This hash contains jobs that can be actively worked on by (and examined/claimed by) some set of eligible consumers. Job posting is typically performed using the :meth:`.post` method (this creates a hash entry with job contents/details encoded in `msgpack`_). The users of these jobboard(s) (potentially on disjoint sets of machines) can then iterate over the available jobs and decide if they want to attempt to claim one of the jobs they have iterated over. If so they will then attempt to contact redis and they will attempt to create a key in redis (using a embedded lua script to perform this atomically) to claim a desired job. 
If the entity trying to use the jobboard to :meth:`.claim` the job is able to create that lock/owner key then it will be allowed (and expected) to perform whatever *work* the contents of that job described. Once the claiming entity is finished the lock/owner key and the `hash`_ entry will be deleted (if successfully completed) in a single request (also using a embedded lua script to perform this atomically). If the claiming entity is not successful (or the entity that claimed the job dies) the lock/owner key can be released automatically (by **optional** usage of a claim expiry) or by using :meth:`.abandon` to manually abandon the job so that it can be consumed/worked on by others. NOTE(harlowja): by default the :meth:`.claim` has no expiry (which means claims will be persistent, even under claiming entity failure). To ensure a expiry occurs pass a numeric value for the ``expiry`` keyword argument to the :meth:`.claim` method that defines how many seconds the claim should be retained for. When an expiry is used ensure that that claim is kept alive while it is being worked on by using the :py:meth:`~.RedisJob.extend_expiry` method periodically. .. _msgpack: https://msgpack.org/ .. _redis: https://redis.io/ .. _hash: https://redis.io/topics/data-types#hashes """ CLIENT_CONF_TRANSFERS = tuple([ # Host config... ('host', str), ('port', int), # See: http://redis.io/commands/auth ('password', str), # Data encoding/decoding + error handling ('encoding', str), ('encoding_errors', str), # Connection settings. ('socket_timeout', float), ('socket_connect_timeout', float), # This one negates the usage of host, port, socket connection # settings as it doesn't use the same kind of underlying socket... ('unix_socket_path', str), # Do u want ssl??? ('ssl', strutils.bool_from_string), ('ssl_keyfile', str), ('ssl_certfile', str), ('ssl_cert_reqs', str), ('ssl_ca_certs', str), # See: http://www.rediscookbook.org/multiple_databases.html ('db', int), ]) """ Keys (and value type converters) that we allow to proxy from the jobboard configuration into the redis client (used to configure the redis client internals if no explicit client is provided via the ``client`` keyword argument). See: http://redis-py.readthedocs.org/en/latest/#redis.Redis See: https://github.com/andymccurdy/redis-py/blob/2.10.3/redis/client.py """ #: Postfix (combined with job key) used to make a jobs owner key. OWNED_POSTFIX = b".owned" #: Postfix (combined with job key) used to make a jobs last modified key. LAST_MODIFIED_POSTFIX = b".last_modified" #: Default namespace for keys when none is provided. DEFAULT_NAMESPACE = b'taskflow' MIN_REDIS_VERSION = (2, 6) """ Minimum redis version this backend requires. This version is required since we need the built-in server-side lua scripting support that is included in 2.6 and newer. """ NAMESPACE_SEP = b':' """ Separator that is used to combine a key with the namespace (to get the **actual** key that will be used). """ KEY_PIECE_SEP = b'.' """ Separator that is used to combine a bunch of key pieces together (to get the **actual** key that will be used). """ #: Expected lua response status field when call is ok. SCRIPT_STATUS_OK = "ok" #: Expected lua response status field when call is **not** ok. SCRIPT_STATUS_ERROR = "error" #: Expected lua script error response when the owner is not as expected. SCRIPT_NOT_EXPECTED_OWNER = "Not expected owner!" #: Expected lua script error response when the owner is not findable. SCRIPT_UNKNOWN_OWNER = "Unknown owner!" 
#: Expected lua script error response when the job is not findable. SCRIPT_UNKNOWN_JOB = "Unknown job!" #: Expected lua script error response when the job is already claimed. SCRIPT_ALREADY_CLAIMED = "Job already claimed!" SCRIPT_TEMPLATES = { 'consume': """ -- Extract *all* the variables (so we can easily know what they are)... local owner_key = KEYS[1] local listings_key = KEYS[2] local last_modified_key = KEYS[3] local expected_owner = ARGV[1] local job_key = ARGV[2] local result = {} if redis.call("hexists", listings_key, job_key) == 1 then if redis.call("exists", owner_key) == 1 then local owner = redis.call("get", owner_key) if owner ~= expected_owner then result["status"] = "${error}" result["reason"] = "${not_expected_owner}" result["owner"] = owner else -- The order is important here, delete the owner first (and if -- that blows up, the job data will still exist so it can be -- worked on again, instead of the reverse)... redis.call("del", owner_key, last_modified_key) redis.call("hdel", listings_key, job_key) result["status"] = "${ok}" end else result["status"] = "${error}" result["reason"] = "${unknown_owner}" end else result["status"] = "${error}" result["reason"] = "${unknown_job}" end return cmsgpack.pack(result) """, 'claim': """ local function apply_ttl(key, ms_expiry) if ms_expiry ~= nil then redis.call("pexpire", key, ms_expiry) end end -- Extract *all* the variables (so we can easily know what they are)... local owner_key = KEYS[1] local listings_key = KEYS[2] local last_modified_key = KEYS[3] local expected_owner = ARGV[1] local job_key = ARGV[2] local last_modified_blob = ARGV[3] -- If this is non-numeric (which it may be) this becomes nil local ms_expiry = nil if ARGV[4] ~= "none" then ms_expiry = tonumber(ARGV[4]) end local result = {} if redis.call("hexists", listings_key, job_key) == 1 then if redis.call("exists", owner_key) == 1 then local owner = redis.call("get", owner_key) if owner == expected_owner then -- Owner is the same, leave it alone... redis.call("set", last_modified_key, last_modified_blob) apply_ttl(owner_key, ms_expiry) end result["status"] = "${error}" result["reason"] = "${already_claimed}" result["owner"] = owner else redis.call("set", owner_key, expected_owner) redis.call("set", last_modified_key, last_modified_blob) apply_ttl(owner_key, ms_expiry) result["status"] = "${ok}" end else result["status"] = "${error}" result["reason"] = "${unknown_job}" end return cmsgpack.pack(result) """, 'abandon': """ -- Extract *all* the variables (so we can easily know what they are)... local owner_key = KEYS[1] local listings_key = KEYS[2] local last_modified_key = KEYS[3] local expected_owner = ARGV[1] local job_key = ARGV[2] local last_modified_blob = ARGV[3] local result = {} if redis.call("hexists", listings_key, job_key) == 1 then if redis.call("exists", owner_key) == 1 then local owner = redis.call("get", owner_key) if owner ~= expected_owner then result["status"] = "${error}" result["reason"] = "${not_expected_owner}" result["owner"] = owner else redis.call("del", owner_key) redis.call("set", last_modified_key, last_modified_blob) result["status"] = "${ok}" end else result["status"] = "${error}" result["reason"] = "${unknown_owner}" end else result["status"] = "${error}" result["reason"] = "${unknown_job}" end return cmsgpack.pack(result) """, 'trash': """ -- Extract *all* the variables (so we can easily know what they are)... 
local owner_key = KEYS[1] local listings_key = KEYS[2] local last_modified_key = KEYS[3] local trash_listings_key = KEYS[4] local expected_owner = ARGV[1] local job_key = ARGV[2] local last_modified_blob = ARGV[3] local result = {} if redis.call("hexists", listings_key, job_key) == 1 then local raw_posting = redis.call("hget", listings_key, job_key) if redis.call("exists", owner_key) == 1 then local owner = redis.call("get", owner_key) if owner ~= expected_owner then result["status"] = "${error}" result["reason"] = "${not_expected_owner}" result["owner"] = owner else -- This ordering is important (try to first move the value -- and only if that works do we try to do any deletions)... redis.call("hset", trash_listings_key, job_key, raw_posting) redis.call("set", last_modified_key, last_modified_blob) redis.call("del", owner_key) redis.call("hdel", listings_key, job_key) result["status"] = "${ok}" end else result["status"] = "${error}" result["reason"] = "${unknown_owner}" end else result["status"] = "${error}" result["reason"] = "${unknown_job}" end return cmsgpack.pack(result) """, } """`Lua`_ **template** scripts that will be used by various methods (they are turned into real scripts and loaded on call into the :func:`.connect` method). Some things to note: - The lua script is ran serially, so when this runs no other command will be mutating the backend (and redis also ensures that no other script will be running) so atomicity of these scripts are guaranteed by redis. - Transactions were considered (and even mostly implemented) but ultimately rejected since redis does not support rollbacks and transactions can **not** be interdependent (later operations can **not** depend on the results of earlier operations). Both of these issues limit our ability to correctly report errors (with useful messages) and to maintain consistency under failure/contention (due to the inability to rollback). A third and final blow to using transactions was to correctly use them we would have to set a watch on a *very* contentious key (the listings key) which would under load cause clients to retry more often then would be desired (this also increases network load, CPU cycles used, transactions failures triggered and so on). - Partial transaction execution is possible due to pre/post ``EXEC`` failures (and the lack of rollback makes this worse). So overall after thinking, it seemed like having little lua scripts was not that bad (even if it is somewhat convoluted) due to the above and public mentioned issues with transactions. In general using lua scripts for this purpose seems to be somewhat common practice and it solves the issues that came up when transactions were considered & implemented. Some links about redis (and redis + lua) that may be useful to look over: - `Atomicity of scripts`_ - `Scripting and transactions`_ - `Why redis does not support rollbacks`_ - `Intro to lua for redis programmers`_ - `Five key takeaways for developing with redis`_ - `Everything you always wanted to know about redis`_ (slides) .. _Lua: http://www.lua.org/ .. _Atomicity of scripts: http://redis.io/commands/eval#atomicity-of-\ scripts .. _Scripting and transactions: http://redis.io/topics/transactions#redis-\ scripting-and-transactions .. _Why redis does not support rollbacks: http://redis.io/topics/transa\ ctions#why-redis-does-not-suppo\ rt-roll-backs .. _Intro to lua for redis programmers: http://www.redisgreen.net/blog/int\ ro-to-lua-for-redis-programmers .. 
_Five key takeaways for developing with redis: https://redislabs.com/bl\ og/5-key-takeaways-fo\ r-developing-with-redis .. _Everything you always wanted to know about redis: http://www.slidesh are.net/carlosabal\ de/everything-you-a\ lways-wanted-to-\ know-about-redis-b\ ut-were-afraid-to-ask """ @classmethod def _make_client(cls, conf): client_conf = {} for key, value_type_converter in cls.CLIENT_CONF_TRANSFERS: if key in conf: if value_type_converter is not None: client_conf[key] = value_type_converter(conf[key]) else: client_conf[key] = conf[key] if conf.get('sentinel') is not None: sentinel_conf = {} # sentinel do not have ssl kwargs for key in client_conf: if 'ssl' not in key: sentinel_conf[key] = client_conf[key] s = sentinel.Sentinel([(sentinel_conf.pop('host'), sentinel_conf.pop('port'))], sentinel_kwargs=conf.get('sentinel_kwargs'), **sentinel_conf) return s.master_for(conf['sentinel']) else: return ru.RedisClient(**client_conf) def __init__(self, name, conf, client=None, persistence=None): super(RedisJobBoard, self).__init__(name, conf) self._closed = True if client is not None: self._client = client self._owns_client = False else: self._client = self._make_client(self._conf) # NOTE(harlowja): This client should not work until connected... self._client.close() self._owns_client = True self._namespace = self._conf.get('namespace', self.DEFAULT_NAMESPACE) self._open_close_lock = threading.RLock() # Redis server version connected to + scripts (populated on connect). self._redis_version = None self._scripts = {} # The backend to load the full logbooks from, since what is sent over # the data connection is only the logbook uuid and name, and not the # full logbook. self._persistence = persistence def join(self, key_piece, *more_key_pieces): """Create and return a namespaced key from many segments. NOTE(harlowja): all pieces that are text/unicode are converted into their binary equivalent (if they are already binary no conversion takes place) before being joined (as redis expects binary keys and not unicode/text ones). 
""" namespace_pieces = [] if self._namespace is not None: namespace_pieces = [self._namespace, self.NAMESPACE_SEP] else: namespace_pieces = [] key_pieces = [key_piece] if more_key_pieces: key_pieces.extend(more_key_pieces) for i in compat_range(0, len(namespace_pieces)): namespace_pieces[i] = misc.binary_encode(namespace_pieces[i]) for i in compat_range(0, len(key_pieces)): key_pieces[i] = misc.binary_encode(key_pieces[i]) namespace = b"".join(namespace_pieces) key = self.KEY_PIECE_SEP.join(key_pieces) return namespace + key @property def namespace(self): """The namespace all keys will be prefixed with (or none).""" return self._namespace @misc.cachedproperty def trash_key(self): """Key where a hash will be stored with trashed jobs in it.""" return self.join(b"trash") @misc.cachedproperty def sequence_key(self): """Key where a integer will be stored (used to sequence jobs).""" return self.join(b"sequence") @misc.cachedproperty def listings_key(self): """Key where a hash will be stored with active jobs in it.""" return self.join(b"listings") @property def job_count(self): with _translate_failures(): return self._client.hlen(self.listings_key) @property def connected(self): return not self._closed @fasteners.locked(lock='_open_close_lock') def connect(self): self.close() if self._owns_client: self._client = self._make_client(self._conf) with _translate_failures(): # The client maintains a connection pool, so do a ping and # if that works then assume the connection works, which may or # may not be continuously maintained (if the server dies # at a later time, we will become aware of that when the next # op occurs). self._client.ping() is_new_enough, redis_version = ru.is_server_new_enough( self._client, self.MIN_REDIS_VERSION) if not is_new_enough: wanted_version = ".".join([str(p) for p in self.MIN_REDIS_VERSION]) if redis_version: raise exc.JobFailure("Redis version %s or greater is" " required (version %s is to" " old)" % (wanted_version, redis_version)) else: raise exc.JobFailure("Redis version %s or greater is" " required" % (wanted_version)) else: self._redis_version = redis_version script_params = { # Status field values. 'ok': self.SCRIPT_STATUS_OK, 'error': self.SCRIPT_STATUS_ERROR, # Known error reasons (when status field is error). 'not_expected_owner': self.SCRIPT_NOT_EXPECTED_OWNER, 'unknown_owner': self.SCRIPT_UNKNOWN_OWNER, 'unknown_job': self.SCRIPT_UNKNOWN_JOB, 'already_claimed': self.SCRIPT_ALREADY_CLAIMED, } prepared_scripts = {} for n, raw_script_tpl in six.iteritems(self.SCRIPT_TEMPLATES): script_tpl = string.Template(raw_script_tpl) script_blob = script_tpl.substitute(**script_params) script = self._client.register_script(script_blob) prepared_scripts[n] = script self._scripts.update(prepared_scripts) self._closed = False @fasteners.locked(lock='_open_close_lock') def close(self): if self._owns_client: self._client.close() self._scripts.clear() self._redis_version = None self._closed = True @staticmethod def _dumps(obj): try: return msgpackutils.dumps(obj) except (msgpack.PackException, ValueError): # TODO(harlowja): remove direct msgpack exception access when # oslo.utils provides easy access to the underlying msgpack # pack/unpack exceptions.. 
exc.raise_with_cause(exc.JobFailure, "Failed to serialize object to" " msgpack blob") @staticmethod def _loads(blob, root_types=(dict,)): try: return misc.decode_msgpack(blob, root_types=root_types) except (msgpack.UnpackException, ValueError): # TODO(harlowja): remove direct msgpack exception access when # oslo.utils provides easy access to the underlying msgpack # pack/unpack exceptions.. exc.raise_with_cause(exc.JobFailure, "Failed to deserialize object from" " msgpack blob (of length %s)" % len(blob)) _decode_owner = staticmethod(misc.binary_decode) _encode_owner = staticmethod(misc.binary_encode) def find_owner(self, job): owner_key = self.join(job.key + self.OWNED_POSTFIX) with _translate_failures(): raw_owner = self._client.get(owner_key) return self._decode_owner(raw_owner) def post(self, name, book=None, details=None, priority=base.JobPriority.NORMAL): job_uuid = uuidutils.generate_uuid() job_priority = base.JobPriority.convert(priority) posting = base.format_posting(job_uuid, name, created_on=timeutils.utcnow(), book=book, details=details, priority=job_priority) with _translate_failures(): sequence = self._client.incr(self.sequence_key) posting.update({ 'sequence': sequence, }) with _translate_failures(): raw_posting = self._dumps(posting) raw_job_uuid = six.b(job_uuid) was_posted = bool(self._client.hsetnx(self.listings_key, raw_job_uuid, raw_posting)) if not was_posted: raise exc.JobFailure("New job located at '%s[%s]' could not" " be posted" % (self.listings_key, raw_job_uuid)) else: return RedisJob(self, name, sequence, raw_job_uuid, uuid=job_uuid, details=details, created_on=posting['created_on'], book=book, book_data=posting.get('book'), backend=self._persistence, priority=job_priority) def wait(self, timeout=None, initial_delay=0.005, max_delay=1.0, sleep_func=time.sleep): if initial_delay > max_delay: raise ValueError("Initial delay %s must be less than or equal" " to the provided max delay %s" % (initial_delay, max_delay)) # This does a spin-loop that backs off by doubling the delay # up to the provided max-delay. In the future we could try having # a secondary client connected into redis pubsub and use that # instead, but for now this is simpler. 
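# For illustration only (hypothetical numbers, not part of the original
# code): each empty poll doubles the sleep duration (starting from
# initial_delay), capping it at max_delay and at whatever portion of
# ``timeout`` remains; with the defaults of initial_delay=0.005 and
# max_delay=1.0 this yields sleeps of roughly 0.01, 0.02, 0.04, ...
# up to 1.0 seconds between polls of the listings hash.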
w = timeutils.StopWatch(duration=timeout) w.start() delay = initial_delay while True: jc = self.job_count if jc > 0: curr_jobs = self._fetch_jobs() if curr_jobs: return base.JobBoardIterator( self, LOG, board_fetch_func=lambda ensure_fresh: curr_jobs) if w.expired(): raise exc.NotFound("Expired waiting for jobs to" " arrive; waited %s seconds" % w.elapsed()) else: remaining = w.leftover(return_none=True) if remaining is not None: delay = min(delay * 2, remaining, max_delay) else: delay = min(delay * 2, max_delay) sleep_func(delay) def _fetch_jobs(self): with _translate_failures(): raw_postings = self._client.hgetall(self.listings_key) postings = [] for raw_job_key, raw_posting in six.iteritems(raw_postings): try: job_data = self._loads(raw_posting) try: job_priority = job_data['priority'] job_priority = base.JobPriority.convert(job_priority) except KeyError: job_priority = base.JobPriority.NORMAL job_created_on = job_data['created_on'] job_uuid = job_data['uuid'] job_name = job_data['name'] job_sequence_id = job_data['sequence'] job_details = job_data.get('details', {}) except (ValueError, TypeError, KeyError, exc.JobFailure): with excutils.save_and_reraise_exception(): LOG.warning("Incorrectly formatted job data found at" " key: %s[%s]", self.listings_key, raw_job_key, exc_info=True) LOG.info("Deleting invalid job data at key: %s[%s]", self.listings_key, raw_job_key) self._client.hdel(self.listings_key, raw_job_key) else: postings.append(RedisJob(self, job_name, job_sequence_id, raw_job_key, uuid=job_uuid, details=job_details, created_on=job_created_on, book_data=job_data.get('book'), backend=self._persistence, priority=job_priority)) return sorted(postings, reverse=True) def iterjobs(self, only_unclaimed=False, ensure_fresh=False): return base.JobBoardIterator( self, LOG, only_unclaimed=only_unclaimed, ensure_fresh=ensure_fresh, board_fetch_func=lambda ensure_fresh: self._fetch_jobs()) def register_entity(self, entity): # Will implement a redis jobboard conductor register later pass @base.check_who def consume(self, job, who): script = self._get_script('consume') with _translate_failures(): raw_who = self._encode_owner(who) raw_result = script(keys=[job.owner_key, self.listings_key, job.last_modified_key], args=[raw_who, job.key]) result = self._loads(raw_result) status = result['status'] if status != self.SCRIPT_STATUS_OK: reason = result.get('reason') if reason == self.SCRIPT_UNKNOWN_JOB: raise exc.NotFound("Job %s not found to be" " consumed" % (job.uuid)) elif reason == self.SCRIPT_UNKNOWN_OWNER: raise exc.NotFound("Can not consume job %s" " which we can not determine" " the owner of" % (job.uuid)) elif reason == self.SCRIPT_NOT_EXPECTED_OWNER: raw_owner = result.get('owner') if raw_owner: owner = self._decode_owner(raw_owner) raise exc.JobFailure("Can not consume job %s" " which is not owned by %s (it is" " actively owned by %s)" % (job.uuid, who, owner)) else: raise exc.JobFailure("Can not consume job %s" " which is not owned by %s" % (job.uuid, who)) else: raise exc.JobFailure("Failure to consume job %s," " unknown internal error (reason=%s)" % (job.uuid, reason)) @base.check_who def claim(self, job, who, expiry=None): if expiry is None: # On the lua side none doesn't translate to nil so we have # do to this string conversion to make sure that we can tell # the difference. 
ms_expiry = "none" else: ms_expiry = int(expiry * 1000.0) if ms_expiry <= 0: raise ValueError("Provided expiry (when converted to" " milliseconds) must be greater" " than zero instead of %s" % (expiry)) script = self._get_script('claim') with _translate_failures(): raw_who = self._encode_owner(who) raw_result = script(keys=[job.owner_key, self.listings_key, job.last_modified_key], args=[raw_who, job.key, # NOTE(harlowja): we need to send this # in as a blob (even if it's not # set/used), since the format can not # currently be created in lua... self._dumps(timeutils.utcnow()), ms_expiry]) result = self._loads(raw_result) status = result['status'] if status != self.SCRIPT_STATUS_OK: reason = result.get('reason') if reason == self.SCRIPT_UNKNOWN_JOB: raise exc.NotFound("Job %s not found to be" " claimed" % (job.uuid)) elif reason == self.SCRIPT_ALREADY_CLAIMED: raw_owner = result.get('owner') if raw_owner: owner = self._decode_owner(raw_owner) raise exc.UnclaimableJob("Job %s already" " claimed by %s" % (job.uuid, owner)) else: raise exc.UnclaimableJob("Job %s already" " claimed" % (job.uuid)) else: raise exc.JobFailure("Failure to claim job %s," " unknown internal error (reason=%s)" % (job.uuid, reason)) @base.check_who def abandon(self, job, who): script = self._get_script('abandon') with _translate_failures(): raw_who = self._encode_owner(who) raw_result = script(keys=[job.owner_key, self.listings_key, job.last_modified_key], args=[raw_who, job.key, self._dumps(timeutils.utcnow())]) result = self._loads(raw_result) status = result.get('status') if status != self.SCRIPT_STATUS_OK: reason = result.get('reason') if reason == self.SCRIPT_UNKNOWN_JOB: raise exc.NotFound("Job %s not found to be" " abandoned" % (job.uuid)) elif reason == self.SCRIPT_UNKNOWN_OWNER: raise exc.NotFound("Can not abandon job %s" " which we can not determine" " the owner of" % (job.uuid)) elif reason == self.SCRIPT_NOT_EXPECTED_OWNER: raw_owner = result.get('owner') if raw_owner: owner = self._decode_owner(raw_owner) raise exc.JobFailure("Can not abandon job %s" " which is not owned by %s (it is" " actively owned by %s)" % (job.uuid, who, owner)) else: raise exc.JobFailure("Can not abandon job %s" " which is not owned by %s" % (job.uuid, who)) else: raise exc.JobFailure("Failure to abandon job %s," " unknown internal" " error (status=%s, reason=%s)" % (job.uuid, status, reason)) def _get_script(self, name): try: return self._scripts[name] except KeyError: exc.raise_with_cause(exc.NotFound, "Can not access %s script (has this" " board been connected?)" % name) @base.check_who def trash(self, job, who): script = self._get_script('trash') with _translate_failures(): raw_who = self._encode_owner(who) raw_result = script(keys=[job.owner_key, self.listings_key, job.last_modified_key, self.trash_key], args=[raw_who, job.key, self._dumps(timeutils.utcnow())]) result = self._loads(raw_result) status = result['status'] if status != self.SCRIPT_STATUS_OK: reason = result.get('reason') if reason == self.SCRIPT_UNKNOWN_JOB: raise exc.NotFound("Job %s not found to be" " trashed" % (job.uuid)) elif reason == self.SCRIPT_UNKNOWN_OWNER: raise exc.NotFound("Can not trash job %s" " which we can not determine" " the owner of" % (job.uuid)) elif reason == self.SCRIPT_NOT_EXPECTED_OWNER: raw_owner = result.get('owner') if raw_owner: owner = self._decode_owner(raw_owner) raise exc.JobFailure("Can not trash job %s" " which is not owned by %s (it is" " actively owned by %s)" % (job.uuid, who, owner)) else: raise exc.JobFailure("Can not trash 
job %s" " which is not owned by %s" % (job.uuid, who)) else: raise exc.JobFailure("Failure to trash job %s," " unknown internal error (reason=%s)" % (job.uuid, reason)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/jobs/backends/impl_zookeeper.py0000664000175000017500000011131400000000000023342 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import functools import sys import threading import fasteners import futurist from kazoo import exceptions as k_exceptions from kazoo.protocol import paths as k_paths from kazoo.protocol import states as k_states from kazoo.recipe import watchers from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import timeutils from oslo_utils import uuidutils import six from taskflow.conductors import base as c_base from taskflow import exceptions as excp from taskflow.jobs import base from taskflow import logging from taskflow import states from taskflow.utils import kazoo_utils from taskflow.utils import misc LOG = logging.getLogger(__name__) @functools.total_ordering class ZookeeperJob(base.Job): """A zookeeper job.""" def __init__(self, board, name, client, path, uuid=None, details=None, book=None, book_data=None, created_on=None, backend=None, priority=base.JobPriority.NORMAL): super(ZookeeperJob, self).__init__(board, name, uuid=uuid, details=details, backend=backend, book=book, book_data=book_data) self._client = client self._path = k_paths.normpath(path) self._lock_path = self._path + board.LOCK_POSTFIX self._created_on = created_on self._node_not_found = False basename = k_paths.basename(self._path) self._root = self._path[0:-len(basename)] self._sequence = int(basename[len(board.JOB_PREFIX):]) self._priority = priority @property def lock_path(self): """Path the job lock/claim and owner znode is stored.""" return self._lock_path @property def priority(self): return self._priority @property def path(self): """Path the job data znode is stored.""" return self._path @property def sequence(self): """Sequence number of the current job.""" return self._sequence @property def root(self): """The parent path of the job in zookeeper.""" return self._root def _get_node_attr(self, path, attr_name, trans_func=None): try: _data, node_stat = self._client.get(path) attr = getattr(node_stat, attr_name) if trans_func is not None: return trans_func(attr) else: return attr except k_exceptions.NoNodeError: excp.raise_with_cause( excp.NotFound, "Can not fetch the %r attribute of job %s (%s)," " path %s not found" % (attr_name, self.uuid, self.path, path)) except self._client.handler.timeout_exception: excp.raise_with_cause( excp.JobFailure, "Can not fetch the %r attribute of job %s (%s)," " operation timed out" % (attr_name, self.uuid, self.path)) except k_exceptions.SessionExpiredError: excp.raise_with_cause( excp.JobFailure, 
"Can not fetch the %r attribute of job %s (%s)," " session expired" % (attr_name, self.uuid, self.path)) except (AttributeError, k_exceptions.KazooException): excp.raise_with_cause( excp.JobFailure, "Can not fetch the %r attribute of job %s (%s)," " internal error" % (attr_name, self.uuid, self.path)) @property def last_modified(self): modified_on = None try: if not self._node_not_found: modified_on = self._get_node_attr( self.path, 'mtime', trans_func=misc.millis_to_datetime) except excp.NotFound: self._node_not_found = True return modified_on @property def created_on(self): # This one we can cache (since it won't change after creation). if self._node_not_found: return None if self._created_on is None: try: self._created_on = self._get_node_attr( self.path, 'ctime', trans_func=misc.millis_to_datetime) except excp.NotFound: self._node_not_found = True return self._created_on @property def state(self): owner = self.board.find_owner(self) job_data = {} try: raw_data, _data_stat = self._client.get(self.path) job_data = misc.decode_json(raw_data) except k_exceptions.NoNodeError: pass except k_exceptions.SessionExpiredError: excp.raise_with_cause( excp.JobFailure, "Can not fetch the state of %s," " session expired" % (self.uuid)) except self._client.handler.timeout_exception: excp.raise_with_cause( excp.JobFailure, "Can not fetch the state of %s," " operation timed out" % (self.uuid)) except k_exceptions.KazooException: excp.raise_with_cause( excp.JobFailure, "Can not fetch the state of %s," " internal error" % (self.uuid)) if not job_data: # No data this job has been completed (the owner that we might have # fetched will not be able to be fetched again, since the job node # is a parent node of the owner/lock node). return states.COMPLETE if not owner: # No owner, but data, still work to be done. return states.UNCLAIMED return states.CLAIMED def __lt__(self, other): if not isinstance(other, ZookeeperJob): return NotImplemented if self.root == other.root: if self.priority == other.priority: return self.sequence < other.sequence else: ordered = base.JobPriority.reorder( (self.priority, self), (other.priority, other)) if ordered[0] is self: return False return True else: # Different jobboards with different roots... return self.root < other.root def __eq__(self, other): if not isinstance(other, ZookeeperJob): return NotImplemented return ((self.root, self.sequence, self.priority) == (other.root, other.sequence, other.priority)) def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash(self.path) class ZookeeperJobBoard(base.NotifyingJobBoard): """A jobboard backed by `zookeeper`_. Powered by the `kazoo `_ library. This jobboard creates *sequenced* persistent znodes in a directory in zookeeper and uses zookeeper watches to notify other jobboards of jobs which were posted using the :meth:`.post` method (this creates a znode with job contents/details encoded in `json`_). The users of these jobboard(s) (potentially on disjoint sets of machines) can then iterate over the available jobs and decide if they want to attempt to claim one of the jobs they have iterated over. If so they will then attempt to contact zookeeper and they will attempt to create a ephemeral znode using the name of the persistent znode + ".lock" as a postfix. If the entity trying to use the jobboard to :meth:`.claim` the job is able to create a ephemeral znode with that name then it will be allowed (and expected) to perform whatever *work* the contents of that job described. 
Once the claiming entity is finished the ephemeral znode and persistent znode will be deleted (if successfully completed) in a single transaction. If the claiming entity is not successful (or the entity that claimed the znode dies) the ephemeral znode will be released (either manually by using :meth:`.abandon` or automatically by zookeeper when the ephemeral node and associated session is deemed to have been lost). Do note that the creation of a kazoo client is achieved by :py:func:`~taskflow.utils.kazoo_utils.make_client` and the transfer of this jobboard configuration to that function to make a client may happen at ``__init__`` time. This implies that certain parameters from this jobboard configuration may be provided to :py:func:`~taskflow.utils.kazoo_utils.make_client` such that if a client was not provided by the caller one will be created according to :py:func:`~taskflow.utils.kazoo_utils.make_client`'s specification .. _zookeeper: http://zookeeper.apache.org/ .. _json: https://json.org/ """ #: Transaction support was added in 3.4.0 so we need at least that version. MIN_ZK_VERSION = (3, 4, 0) #: Znode **postfix** that lock entries have. LOCK_POSTFIX = ".lock" #: Znode child path created under root path that contains trashed jobs. TRASH_FOLDER = ".trash" #: Znode child path created under root path that contains registered #: entities. ENTITY_FOLDER = ".entities" #: Znode **prefix** that job entries have. JOB_PREFIX = 'job' #: Default znode path used for jobs (data, locks...). DEFAULT_PATH = "/taskflow/jobs" STATE_HISTORY_LENGTH = 2 """ Number of prior state changes to keep a history of, mainly useful for history tracking and debugging connectivity issues. """ NO_FETCH_STATES = (k_states.KazooState.LOST, k_states.KazooState.SUSPENDED) """ Client states underwhich we return empty lists from fetching routines, during these states the underlying connection either is being recovered or may be recovered (aka, it has not full disconnected). """ def __init__(self, name, conf, client=None, persistence=None, emit_notifications=True): super(ZookeeperJobBoard, self).__init__(name, conf) if client is not None: self._client = client self._owned = False else: self._client = kazoo_utils.make_client(self._conf) self._owned = True path = str(conf.get("path", self.DEFAULT_PATH)) if not path: raise ValueError("Empty zookeeper path is disallowed") if not k_paths.isabs(path): raise ValueError("Zookeeper path must be absolute") self._path = path self._trash_path = self._path.replace(k_paths.basename(self._path), self.TRASH_FOLDER) self._entity_path = self._path.replace( k_paths.basename(self._path), self.ENTITY_FOLDER) # The backend to load the full logbooks from, since what is sent over # the data connection is only the logbook uuid and name, and not the # full logbook. self._persistence = persistence # Misc. internal details self._known_jobs = {} self._job_cond = threading.Condition() self._open_close_lock = threading.RLock() self._client.add_listener(self._state_change_listener) self._bad_paths = frozenset([path]) self._job_watcher = None # Since we use sequenced ids this will be the path that the sequences # are prefixed with, for example, job0000000001, job0000000002, ... 
self._job_base = k_paths.join(path, self.JOB_PREFIX) self._worker = None self._emit_notifications = bool(emit_notifications) self._connected = False self._suspended = False self._closing = False self._last_states = collections.deque(maxlen=self.STATE_HISTORY_LENGTH) def _try_emit(self, state, details): # Submit the work to the executor to avoid blocking the kazoo threads # and queue(s)... worker = self._worker if worker is None or not self._emit_notifications: # Worker has been destroyed or we aren't supposed to emit anything # in the first place... return try: worker.submit(self.notifier.notify, state, details) except RuntimeError: # Notification thread is/was shutdown just skip submitting a # notification... pass @property def path(self): """Path where all job znodes will be stored.""" return self._path @property def trash_path(self): """Path where all trashed job znodes will be stored.""" return self._trash_path @property def entity_path(self): """Path where all conductor info znodes will be stored.""" return self._entity_path @property def job_count(self): return len(self._known_jobs) def _fetch_jobs(self, ensure_fresh=False): try: last_state = self._last_states[0] except IndexError: last_state = None if last_state in self.NO_FETCH_STATES: # NOTE(harlowja): on lost clear out all known jobs (from the # in-memory mapping) as we can not safely assume there are any # jobs to continue working on in this state. if last_state == k_states.KazooState.LOST and self._known_jobs: # This will force the jobboard to drop all (in-memory) jobs # that are not in this list (pretty much simulating what # would happen if a jobboard data directory was emptied). self._on_job_posting([], delayed=False) return [] else: if ensure_fresh: self._force_refresh() with self._job_cond: return sorted(six.itervalues(self._known_jobs)) def _force_refresh(self): try: maybe_children = self._client.get_children(self.path) self._on_job_posting(maybe_children, delayed=False) except self._client.handler.timeout_exception: excp.raise_with_cause(excp.JobFailure, "Refreshing failure, operation timed out") except k_exceptions.SessionExpiredError: excp.raise_with_cause(excp.JobFailure, "Refreshing failure, session expired") except k_exceptions.NoNodeError: pass except k_exceptions.KazooException: excp.raise_with_cause(excp.JobFailure, "Refreshing failure, internal error") def iterjobs(self, only_unclaimed=False, ensure_fresh=False): board_removal_func = lambda job: self._remove_job(job.path) return base.JobBoardIterator( self, LOG, only_unclaimed=only_unclaimed, ensure_fresh=ensure_fresh, board_fetch_func=self._fetch_jobs, board_removal_func=board_removal_func) def _remove_job(self, path): if path not in self._known_jobs: return False with self._job_cond: job = self._known_jobs.pop(path, None) if job is not None: LOG.debug("Removed job that was at path '%s'", path) self._try_emit(base.REMOVAL, details={'job': job}) return True else: return False def _process_child(self, path, request, quiet=True): """Receives the result of a child data fetch request.""" job = None try: raw_data, node_stat = request.get() job_data = misc.decode_json(raw_data) job_created_on = misc.millis_to_datetime(node_stat.ctime) try: job_priority = job_data['priority'] job_priority = base.JobPriority.convert(job_priority) except KeyError: job_priority = base.JobPriority.NORMAL job_uuid = job_data['uuid'] job_name = job_data['name'] except (ValueError, TypeError, KeyError): with excutils.save_and_reraise_exception(reraise=not quiet): LOG.warning("Incorrectly 
formatted job data found at path: %s", path, exc_info=True) except self._client.handler.timeout_exception: with excutils.save_and_reraise_exception(reraise=not quiet): LOG.warning("Operation timed out fetching job data from" " from path: %s", path, exc_info=True) except k_exceptions.SessionExpiredError: with excutils.save_and_reraise_exception(reraise=not quiet): LOG.warning("Session expired fetching job data from path: %s", path, exc_info=True) except k_exceptions.NoNodeError: LOG.debug("No job node found at path: %s, it must have" " disappeared or was removed", path) except k_exceptions.KazooException: with excutils.save_and_reraise_exception(reraise=not quiet): LOG.warning("Internal error fetching job data from path: %s", path, exc_info=True) else: with self._job_cond: # Now we can officially check if someone already placed this # jobs information into the known job set (if it's already # existing then just leave it alone). if path not in self._known_jobs: job = ZookeeperJob(self, job_name, self._client, path, backend=self._persistence, uuid=job_uuid, book_data=job_data.get("book"), details=job_data.get("details", {}), created_on=job_created_on, priority=job_priority) self._known_jobs[path] = job self._job_cond.notify_all() if job is not None: self._try_emit(base.POSTED, details={'job': job}) def _on_job_posting(self, children, delayed=True): LOG.debug("Got children %s under path %s", children, self.path) child_paths = [] for c in children: if (c.endswith(self.LOCK_POSTFIX) or not c.startswith(self.JOB_PREFIX)): # Skip lock paths or non-job-paths (these are not valid jobs) continue child_paths.append(k_paths.join(self.path, c)) # Figure out what we really should be investigating and what we # shouldn't (remove jobs that exist in our local version, but don't # exist in the children anymore) and accumulate all paths that we # need to trigger population of (without holding the job lock). investigate_paths = [] pending_removals = [] with self._job_cond: for path in six.iterkeys(self._known_jobs): if path not in child_paths: pending_removals.append(path) for path in child_paths: if path in self._bad_paths: continue # This pre-check will *not* guarantee that we will not already # have the job (if it's being populated elsewhere) but it will # reduce the amount of duplicated requests in general; later when # the job information has been populated we will ensure that we # are not adding duplicates into the currently known jobs... if path in self._known_jobs: continue if path not in investigate_paths: investigate_paths.append(path) if pending_removals: with self._job_cond: am_removed = 0 try: for path in pending_removals: am_removed += int(self._remove_job(path)) finally: if am_removed: self._job_cond.notify_all() for path in investigate_paths: # Fire off the request to populate this job. # # This method is *usually* called from a asynchronous handler so # it's better to exit from this quickly to allow other asynchronous # handlers to be executed. request = self._client.get_async(path) if delayed: request.rawlink(functools.partial(self._process_child, path)) else: self._process_child(path, request, quiet=False) def post(self, name, book=None, details=None, priority=base.JobPriority.NORMAL): # NOTE(harlowja): Jobs are not ephemeral, they will persist until they # are consumed (this may change later, but seems safer to do this until # further notice). 
job_priority = base.JobPriority.convert(priority) job_uuid = uuidutils.generate_uuid() job_posting = base.format_posting(job_uuid, name, book=book, details=details, priority=job_priority) raw_job_posting = misc.binary_encode(jsonutils.dumps(job_posting)) with self._wrap(job_uuid, None, fail_msg_tpl="Posting failure: %s", ensure_known=False): job_path = self._client.create(self._job_base, value=raw_job_posting, sequence=True, ephemeral=False) job = ZookeeperJob(self, name, self._client, job_path, backend=self._persistence, book=book, details=details, uuid=job_uuid, book_data=job_posting.get('book'), priority=job_priority) with self._job_cond: self._known_jobs[job_path] = job self._job_cond.notify_all() self._try_emit(base.POSTED, details={'job': job}) return job @base.check_who def claim(self, job, who): def _unclaimable_try_find_owner(cause): try: owner = self.find_owner(job) except Exception: owner = None if owner: message = "Job %s already claimed by '%s'" % (job.uuid, owner) else: message = "Job %s already claimed" % (job.uuid) excp.raise_with_cause(excp.UnclaimableJob, message, cause=cause) with self._wrap(job.uuid, job.path, fail_msg_tpl="Claiming failure: %s"): # NOTE(harlowja): post as json which will allow for future changes # more easily than a raw string/text. value = jsonutils.dumps({ 'owner': who, }) # Ensure the target job is still existent (at the right version). job_data, job_stat = self._client.get(job.path) txn = self._client.transaction() # This will abort (and not create the lock) if the job has been # removed (somehow...) or updated by someone else to a different # version... txn.check(job.path, version=job_stat.version) txn.create(job.lock_path, value=misc.binary_encode(value), ephemeral=True) try: kazoo_utils.checked_commit(txn) except k_exceptions.NodeExistsError as e: _unclaimable_try_find_owner(e) except kazoo_utils.KazooTransactionException as e: if len(e.failures) < 2: raise else: if isinstance(e.failures[0], k_exceptions.NoNodeError): excp.raise_with_cause( excp.NotFound, "Job %s not found to be claimed" % job.uuid, cause=e.failures[0]) if isinstance(e.failures[1], k_exceptions.NodeExistsError): _unclaimable_try_find_owner(e.failures[1]) else: excp.raise_with_cause( excp.UnclaimableJob, "Job %s claim failed due to transaction" " not succeeding" % (job.uuid), cause=e) @contextlib.contextmanager def _wrap(self, job_uuid, job_path, fail_msg_tpl="Failure: %s", ensure_known=True): if job_path: fail_msg_tpl += " (%s)" % (job_path) if ensure_known: if not job_path: raise ValueError("Unable to check if %r is a known path" % (job_path)) if job_path not in self._known_jobs: fail_msg_tpl += ", unknown job" raise excp.NotFound(fail_msg_tpl % (job_uuid)) try: yield except self._client.handler.timeout_exception: fail_msg_tpl += ", operation timed out" excp.raise_with_cause(excp.JobFailure, fail_msg_tpl % (job_uuid)) except k_exceptions.SessionExpiredError: fail_msg_tpl += ", session expired" excp.raise_with_cause(excp.JobFailure, fail_msg_tpl % (job_uuid)) except k_exceptions.NoNodeError: fail_msg_tpl += ", unknown job" excp.raise_with_cause(excp.NotFound, fail_msg_tpl % (job_uuid)) except k_exceptions.KazooException: fail_msg_tpl += ", internal error" excp.raise_with_cause(excp.JobFailure, fail_msg_tpl % (job_uuid)) def find_owner(self, job): with self._wrap(job.uuid, job.path, fail_msg_tpl="Owner query failure: %s", ensure_known=False): try: self._client.sync(job.lock_path) raw_data, _lock_stat = self._client.get(job.lock_path) data = misc.decode_json(raw_data) owner = 
data.get("owner") except k_exceptions.NoNodeError: owner = None return owner def _get_owner_and_data(self, job): lock_data, lock_stat = self._client.get(job.lock_path) job_data, job_stat = self._client.get(job.path) return (misc.decode_json(lock_data), lock_stat, misc.decode_json(job_data), job_stat) def register_entity(self, entity): entity_type = entity.kind if entity_type == c_base.Conductor.ENTITY_KIND: entity_path = k_paths.join(self.entity_path, entity_type) try: self._client.ensure_path(entity_path) self._client.create(k_paths.join(entity_path, entity.name), value=misc.binary_encode( jsonutils.dumps(entity.to_dict())), ephemeral=True) except k_exceptions.NodeExistsError: pass except self._client.handler.timeout_exception: excp.raise_with_cause( excp.JobFailure, "Can not register entity %s under %s, operation" " timed out" % (entity.name, entity_path)) except k_exceptions.SessionExpiredError: excp.raise_with_cause( excp.JobFailure, "Can not register entity %s under %s, session" " expired" % (entity.name, entity_path)) except k_exceptions.KazooException: excp.raise_with_cause( excp.JobFailure, "Can not register entity %s under %s, internal" " error" % (entity.name, entity_path)) else: raise excp.NotImplementedError( "Not implemented for other entity type '%s'" % entity_type) @base.check_who def consume(self, job, who): with self._wrap(job.uuid, job.path, fail_msg_tpl="Consumption failure: %s"): try: owner_data = self._get_owner_and_data(job) lock_data, lock_stat, data, data_stat = owner_data except k_exceptions.NoNodeError: excp.raise_with_cause(excp.NotFound, "Can not consume a job %s" " which we can not determine" " the owner of" % (job.uuid)) if lock_data.get("owner") != who: raise excp.JobFailure("Can not consume a job %s" " which is not owned by %s" % (job.uuid, who)) txn = self._client.transaction() txn.delete(job.lock_path, version=lock_stat.version) txn.delete(job.path, version=data_stat.version) kazoo_utils.checked_commit(txn) self._remove_job(job.path) @base.check_who def abandon(self, job, who): with self._wrap(job.uuid, job.path, fail_msg_tpl="Abandonment failure: %s"): try: owner_data = self._get_owner_and_data(job) lock_data, lock_stat, data, data_stat = owner_data except k_exceptions.NoNodeError: excp.raise_with_cause(excp.NotFound, "Can not abandon a job %s" " which we can not determine" " the owner of" % (job.uuid)) if lock_data.get("owner") != who: raise excp.JobFailure("Can not abandon a job %s" " which is not owned by %s" % (job.uuid, who)) txn = self._client.transaction() txn.delete(job.lock_path, version=lock_stat.version) kazoo_utils.checked_commit(txn) @base.check_who def trash(self, job, who): with self._wrap(job.uuid, job.path, fail_msg_tpl="Trash failure: %s"): try: owner_data = self._get_owner_and_data(job) lock_data, lock_stat, data, data_stat = owner_data except k_exceptions.NoNodeError: excp.raise_with_cause(excp.NotFound, "Can not trash a job %s" " which we can not determine" " the owner of" % (job.uuid)) if lock_data.get("owner") != who: raise excp.JobFailure("Can not trash a job %s" " which is not owned by %s" % (job.uuid, who)) trash_path = job.path.replace(self.path, self.trash_path) value = misc.binary_encode(jsonutils.dumps(data)) txn = self._client.transaction() txn.create(trash_path, value=value) txn.delete(job.lock_path, version=lock_stat.version) txn.delete(job.path, version=data_stat.version) kazoo_utils.checked_commit(txn) def _state_change_listener(self, state): if self._last_states: LOG.debug("Kazoo client has changed to" " state '%s' from 
prior states '%s'", state, self._last_states) else: LOG.debug("Kazoo client has changed to state '%s' (from" " its initial/uninitialized state)", state) self._last_states.appendleft(state) if state == k_states.KazooState.LOST: self._connected = False # When the client is itself closing itself down this will be # triggered, but in that case we expect it, so we don't need # to emit a warning message. if not self._closing: LOG.warning("Connection to zookeeper has been lost") elif state == k_states.KazooState.SUSPENDED: LOG.warning("Connection to zookeeper has been suspended") self._suspended = True else: # Must be CONNECTED then (as there are only 3 enums) if self._suspended: self._suspended = False def wait(self, timeout=None): # Wait until timeout expires (or forever) for jobs to appear. watch = timeutils.StopWatch(duration=timeout) watch.start() with self._job_cond: while True: if not self._known_jobs: if watch.expired(): raise excp.NotFound("Expired waiting for jobs to" " arrive; waited %s seconds" % watch.elapsed()) # This is done since the given timeout can not be provided # to the condition variable, since we can not ensure that # when we acquire the condition that there will actually # be jobs (especially if we are spuriously awaken), so we # must recalculate the amount of time we really have left. self._job_cond.wait(watch.leftover(return_none=True)) else: curr_jobs = self._fetch_jobs() fetch_func = lambda ensure_fresh: curr_jobs removal_func = lambda a_job: self._remove_job(a_job.path) return base.JobBoardIterator( self, LOG, board_fetch_func=fetch_func, board_removal_func=removal_func) @property def connected(self): return self._connected and self._client.connected @fasteners.locked(lock='_open_close_lock') def close(self): if self._owned: LOG.debug("Stopping client") self._closing = True kazoo_utils.finalize_client(self._client) if self._worker is not None: LOG.debug("Shutting down the notifier") self._worker.shutdown() self._worker = None with self._job_cond: self._known_jobs.clear() LOG.debug("Stopped & cleared local state") self._connected = False self._last_states.clear() @fasteners.locked(lock='_open_close_lock') def connect(self, timeout=10.0): def try_clean(): # Attempt to do the needed cleanup if post-connection setup does # not succeed (maybe the connection is lost right after it is # obtained). 
try: self.close() except k_exceptions.KazooException: LOG.exception("Failed cleaning-up after post-connection" " initialization failed") try: if timeout is not None: timeout = float(timeout) self._client.start(timeout=timeout) self._closing = False except (self._client.handler.timeout_exception, k_exceptions.KazooException): excp.raise_with_cause(excp.JobFailure, "Failed to connect to zookeeper") try: if self._conf.get('check_compatible', True): kazoo_utils.check_compatible(self._client, self.MIN_ZK_VERSION) if self._worker is None and self._emit_notifications: self._worker = futurist.ThreadPoolExecutor(max_workers=1) self._client.ensure_path(self.path) self._client.ensure_path(self.trash_path) if self._job_watcher is None: self._job_watcher = watchers.ChildrenWatch( self._client, self.path, func=self._on_job_posting, allow_session_lost=True) self._connected = True except excp.IncompatibleVersion: with excutils.save_and_reraise_exception(): try_clean() except (self._client.handler.timeout_exception, k_exceptions.KazooException): exc_type, exc, exc_tb = sys.exc_info() try: try_clean() excp.raise_with_cause(excp.JobFailure, "Failed to do post-connection" " initialization", cause=exc) finally: del(exc_type, exc, exc_tb) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/jobs/base.py0000664000175000017500000005307000000000000017462 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import collections import contextlib import time import enum from oslo_utils import timeutils from oslo_utils import uuidutils import six from taskflow import exceptions as excp from taskflow import states from taskflow.types import notifier from taskflow.utils import iter_utils class JobPriority(enum.Enum): """Enum of job priorities (modeled after hadoop job priorities).""" #: Extremely urgent job priority. VERY_HIGH = 'VERY_HIGH' #: Mildly urgent job priority. HIGH = 'HIGH' #: Default job priority. NORMAL = 'NORMAL' #: Not needed anytime soon job priority. LOW = 'LOW' #: Very much not needed anytime soon job priority. 
VERY_LOW = 'VERY_LOW' @classmethod def convert(cls, value): if isinstance(value, cls): return value try: return cls(value.upper()) except (ValueError, AttributeError): valids = [cls.VERY_HIGH, cls.HIGH, cls.NORMAL, cls.LOW, cls.VERY_LOW] valids = [p.value for p in valids] raise ValueError("'%s' is not a valid priority, valid" " priorities are %s" % (value, valids)) @classmethod def reorder(cls, *values): """Reorders (priority, value) tuples -> priority ordered values.""" if len(values) == 0: raise ValueError("At least one (priority, value) pair is" " required") elif len(values) == 1: v1 = values[0] # Even though this isn't used, we do the conversion because # all the other branches in this function do it so we do it # to be consistent (this also will raise on bad values, which # we want to do)... p1 = cls.convert(v1[0]) return v1[1] else: # Order very very much matters in this tuple... priority_ordering = (cls.VERY_HIGH, cls.HIGH, cls.NORMAL, cls.LOW, cls.VERY_LOW) if len(values) == 2: # It's common to use this in a 2 tuple situation, so # make it avoid all the needed complexity that is done # for greater than 2 tuples. v1 = values[0] v2 = values[1] p1 = cls.convert(v1[0]) p2 = cls.convert(v2[0]) p1_i = priority_ordering.index(p1) p2_i = priority_ordering.index(p2) if p1_i <= p2_i: return v1[1], v2[1] else: return v2[1], v1[1] else: buckets = collections.defaultdict(list) for (p, v) in values: p = cls.convert(p) buckets[p].append(v) values = [] for p in priority_ordering: values.extend(buckets[p]) return tuple(values) @six.add_metaclass(abc.ABCMeta) class Job(object): """A abstraction that represents a named and trackable unit of work. A job connects a logbook, a owner, a priority, last modified and created on dates and any associated state that the job has. Since it is a connected to a logbook, which are each associated with a set of factories that can create set of flows, it is the current top-level container for a piece of work that can be owned by an entity (typically that entity will read those logbooks and run any contained flows). Only one entity will be allowed to own and operate on the flows contained in a job at a given time (for the foreseeable future). NOTE(harlowja): It is the object that will be transferred to another entity on failure so that the contained flows ownership can be transferred to the secondary entity/owner for resumption, continuation, reverting... """ def __init__(self, board, name, uuid=None, details=None, backend=None, book=None, book_data=None): if uuid: self._uuid = uuid else: self._uuid = uuidutils.generate_uuid() self._name = name if not details: details = {} self._details = details self._backend = backend self._board = board self._book = book if not book_data: book_data = {} self._book_data = book_data @abc.abstractproperty def last_modified(self): """The datetime the job was last modified.""" @abc.abstractproperty def created_on(self): """The datetime the job was created on.""" @property def board(self): """The board this job was posted on or was created from.""" return self._board @abc.abstractproperty def state(self): """Access the current state of this job.""" @abc.abstractproperty def priority(self): """The :py:class:`~.JobPriority` of this job.""" def wait(self, timeout=None, delay=0.01, delay_multiplier=2.0, max_delay=60.0, sleep_func=time.sleep): """Wait for job to enter completion state. 
If the job has not completed in the given timeout, then return false, otherwise return true (a job failure exception may also be raised if the job information can not be read, for whatever reason). Periodic state checks will happen every ``delay`` seconds where ``delay`` will be multiplied by the given multipler after a state is found that is **not** complete. Note that if no timeout is given this is equivalent to blocking until the job has completed. Also note that if a jobboard backend can optimize this method then its implementation may not use delays (and backoffs) at all. In general though no matter what optimizations are applied implementations must **always** respect the given timeout value. """ if timeout is not None: w = timeutils.StopWatch(duration=timeout) w.start() else: w = None delay_gen = iter_utils.generate_delays(delay, max_delay, multiplier=delay_multiplier) while True: if w is not None and w.expired(): return False if self.state == states.COMPLETE: return True sleepy_secs = six.next(delay_gen) if w is not None: sleepy_secs = min(w.leftover(), sleepy_secs) sleep_func(sleepy_secs) return False @property def book(self): """Logbook associated with this job. If no logbook is associated with this job, this property is None. """ if self._book is None: self._book = self._load_book() return self._book @property def book_uuid(self): """UUID of logbook associated with this job. If no logbook is associated with this job, this property is None. """ if self._book is not None: return self._book.uuid else: return self._book_data.get('uuid') @property def book_name(self): """Name of logbook associated with this job. If no logbook is associated with this job, this property is None. """ if self._book is not None: return self._book.name else: return self._book_data.get('name') @property def uuid(self): """The uuid of this job.""" return self._uuid @property def details(self): """A dictionary of any details associated with this job.""" return self._details @property def name(self): """The non-uniquely identifying name of this job.""" return self._name def _load_book(self): book_uuid = self.book_uuid if self._backend is not None and book_uuid is not None: # TODO(harlowja): we are currently limited by assuming that the # job posted has the same backend as this loader (to start this # seems to be a ok assumption, and can be adjusted in the future # if we determine there is a use-case for multi-backend loaders, # aka a registry of loaders). with contextlib.closing(self._backend.get_connection()) as conn: return conn.get_logbook(book_uuid) # No backend to fetch from or no uuid specified return None def __str__(self): """Pretty formats the job into something *more* meaningful.""" cls_name = type(self).__name__ return "%s: %s (priority=%s, uuid=%s, details=%s)" % ( cls_name, self.name, self.priority, self.uuid, self.details) class JobBoardIterator(six.Iterator): """Iterator over a jobboard that iterates over potential jobs. 
It provides the following attributes: * ``only_unclaimed``: boolean that indicates whether to only iterate over unclaimed jobs * ``ensure_fresh``: boolean that requests that during every fetch of a new set of jobs this will cause the iterator to force the backend to refresh (ensuring that the jobboard has the most recent job listings) * ``board``: the board this iterator was created from """ _UNCLAIMED_JOB_STATES = (states.UNCLAIMED,) _JOB_STATES = (states.UNCLAIMED, states.COMPLETE, states.CLAIMED) def __init__(self, board, logger, board_fetch_func=None, board_removal_func=None, only_unclaimed=False, ensure_fresh=False): self._board = board self._logger = logger self._board_removal_func = board_removal_func self._board_fetch_func = board_fetch_func self._fetched = False self._jobs = collections.deque() self.only_unclaimed = only_unclaimed self.ensure_fresh = ensure_fresh @property def board(self): """The board this iterator was created from.""" return self._board def __iter__(self): return self def _next_job(self): if self.only_unclaimed: allowed_states = self._UNCLAIMED_JOB_STATES else: allowed_states = self._JOB_STATES job = None while self._jobs and job is None: maybe_job = self._jobs.popleft() try: if maybe_job.state in allowed_states: job = maybe_job except excp.JobFailure: self._logger.warn("Failed determining the state of" " job '%s'", maybe_job, exc_info=True) except excp.NotFound: # Attempt to clean this off the board now that we found # it wasn't really there (this **must** gracefully handle # removal already having happened). if self._board_removal_func is not None: self._board_removal_func(maybe_job) return job def __next__(self): if not self._jobs: if not self._fetched: if self._board_fetch_func is not None: self._jobs.extend( self._board_fetch_func( ensure_fresh=self.ensure_fresh)) self._fetched = True job = self._next_job() if job is None: raise StopIteration else: return job @six.add_metaclass(abc.ABCMeta) class JobBoard(object): """A place where jobs can be posted, reposted, claimed and transferred. There can be multiple implementations of this job board, depending on the desired semantics and capabilities of the underlying jobboard implementation. NOTE(harlowja): the name is meant to be an analogous to a board/posting system that is used in newspapers, or elsewhere to solicit jobs that people can interview and apply for (and then work on & complete). """ def __init__(self, name, conf): self._name = name self._conf = conf @abc.abstractmethod def iterjobs(self, only_unclaimed=False, ensure_fresh=False): """Returns an iterator of jobs that are currently on this board. NOTE(harlowja): the ordering of this iteration should be by posting order (oldest to newest) with higher priority jobs being provided before lower priority jobs, but it is left up to the backing implementation to provide the order that best suits it.. NOTE(harlowja): the iterator that is returned may support other attributes which can be used to further customize how iteration can be accomplished; check with the backends iterator object to determine what other attributes are supported. :param only_unclaimed: boolean that indicates whether to only iteration over unclaimed jobs. :param ensure_fresh: boolean that requests to only iterate over the most recent jobs available, where the definition of what is recent is backend specific. It is allowable that a backend may ignore this value if the backends internal semantics/capabilities can not support this argument. 
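        A minimal sketch (assuming ``board`` is a jobboard that has already
        been connected) of iterating over the currently known jobs::

            for job in board.iterjobs(only_unclaimed=True, ensure_fresh=True):
                print(job.name, job.priority, job.uuid)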
""" @abc.abstractmethod def wait(self, timeout=None): """Waits a given amount of time for **any** jobs to be posted. When jobs are found then an iterator will be returned that can be used to iterate over those jobs. NOTE(harlowja): since a jobboard can be mutated on by multiple external entities at the **same** time the iterator that can be returned **may** still be empty due to other entities removing those jobs after the iterator has been created (be aware of this when using it). :param timeout: float that indicates how long to wait for a job to appear (if None then waits forever). """ @abc.abstractproperty def job_count(self): """Returns how many jobs are on this jobboard. NOTE(harlowja): this count may change as jobs appear or are removed so the accuracy of this count should not be used in a way that requires it to be exact & absolute. """ @abc.abstractmethod def find_owner(self, job): """Gets the owner of the job if one exists.""" @property def name(self): """The non-uniquely identifying name of this jobboard.""" return self._name @abc.abstractmethod def consume(self, job, who): """Permanently (and atomically) removes a job from the jobboard. Consumption signals to the board (and any others examining the board) that this job has been completed by the entity that previously claimed that job. Only the entity that has claimed that job is able to consume the job. A job that has been consumed can not be reclaimed or reposted by another entity (job postings are immutable). Any entity consuming a unclaimed job (or a job they do not have a claim on) will cause an exception. :param job: a job on this jobboard that can be consumed (if it does not exist then a NotFound exception will be raised). :param who: string that names the entity performing the consumption, this must be the same name that was used for claiming this job. """ @abc.abstractmethod def post(self, name, book=None, details=None, priority=JobPriority.NORMAL): """Atomically creates and posts a job to the jobboard. This posting allowing others to attempt to claim that job (and subsequently work on that job). The contents of the provided logbook, details dictionary, or name (or a mix of these) must provide *enough* information for consumers to reference to construct and perform that jobs contained work (whatever it may be). Once a job has been posted it can only be removed by consuming that job (after that job is claimed). Any entity can post/propose jobs to the jobboard (in the future this may be restricted). Returns a job object representing the information that was posted. """ @abc.abstractmethod def claim(self, job, who): """Atomically attempts to claim the provided job. If a job is claimed it is expected that the entity that claims that job will at sometime in the future work on that jobs contents and either fail at completing them (resulting in a reposting) or consume that job from the jobboard (signaling its completion). If claiming fails then a corresponding exception will be raised to signal this to the claim attempter. :param job: a job on this jobboard that can be claimed (if it does not exist then a NotFound exception will be raised). :param who: string that names the claiming entity. """ @abc.abstractmethod def abandon(self, job, who): """Atomically attempts to abandon the provided job. This abandonment signals to others that the job may now be reclaimed. This would typically occur if the entity that has claimed the job has failed or is unable to complete the job or jobs it had previously claimed. 
Only the entity that has claimed that job can abandon a job. Any entity abandoning a unclaimed job (or a job they do not own) will cause an exception. :param job: a job on this jobboard that can be abandoned (if it does not exist then a NotFound exception will be raised). :param who: string that names the entity performing the abandoning, this must be the same name that was used for claiming this job. """ @abc.abstractmethod def trash(self, job, who): """Trash the provided job. Trashing a job signals to others that the job is broken and should not be reclaimed. This is provided as an option for users to be able to remove jobs from the board externally. The trashed job details should be kept around in an alternate location to be reviewed, if desired. Only the entity that has claimed that job can trash a job. Any entity trashing a unclaimed job (or a job they do not own) will cause an exception. :param job: a job on this jobboard that can be trashed (if it does not exist then a NotFound exception will be raised). :param who: string that names the entity performing the trashing, this must be the same name that was used for claiming this job. """ @abc.abstractmethod def register_entity(self, entity): """Register an entity to the jobboard('s backend), e.g: a conductor. :param entity: entity to register as being associated with the jobboard('s backend) :type entity: :py:class:`~taskflow.types.entity.Entity` """ @abc.abstractproperty def connected(self): """Returns if this jobboard is connected.""" @abc.abstractmethod def connect(self): """Opens the connection to any backend system.""" @abc.abstractmethod def close(self): """Close the connection to any backend system. Once closed the jobboard can no longer be used (unless reconnection occurs). """ # Jobboard events POSTED = 'POSTED' # new job is/has been posted REMOVAL = 'REMOVAL' # existing job is/has been removed class NotifyingJobBoard(JobBoard): """A jobboard subclass that can notify others about board events. Implementers are expected to notify *at least* about jobs being posted and removed. NOTE(harlowja): notifications that are emitted *may* be emitted on a separate dedicated thread when they occur, so ensure that all callbacks registered are thread safe (and block for as little time as possible). """ def __init__(self, name, conf): super(NotifyingJobBoard, self).__init__(name, conf) self.notifier = notifier.Notifier() # Internal helpers for usage by board implementations... 
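# A rough sketch of how a concrete jobboard implementation might use these
# helpers (the class and bodies below are hypothetical placeholders, not part
# of this module):
#
#     class MyJobBoard(JobBoard):
#
#         @check_who
#         def claim(self, job, who):
#             # 'who' has been validated as a non-empty string by the
#             # decorator before this body runs...
#             ...
#
#         def post(self, name, book=None, details=None,
#                  priority=JobPriority.NORMAL):
#             posting = format_posting(uuidutils.generate_uuid(), name,
#                                      book=book, details=details,
#                                      priority=JobPriority.convert(priority))
#             # ... persist 'posting' into the backing store ...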
def check_who(meth): @six.wraps(meth) def wrapper(self, job, who, *args, **kwargs): if not isinstance(who, six.string_types): raise TypeError("Job applicant must be a string type") if len(who) == 0: raise ValueError("Job applicant must be non-empty") return meth(self, job, who, *args, **kwargs) return wrapper def format_posting(uuid, name, created_on=None, last_modified=None, details=None, book=None, priority=JobPriority.NORMAL): posting = { 'uuid': uuid, 'name': name, 'priority': priority.value, } if created_on is not None: posting['created_on'] = created_on if last_modified is not None: posting['last_modified'] = last_modified if details: posting['details'] = details else: posting['details'] = {} if book is not None: posting['book'] = { 'name': book.name, 'uuid': book.uuid, } return posting ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6280417 taskflow-4.6.4/taskflow/listeners/0000775000175000017500000000000000000000000017244 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/__init__.py0000664000175000017500000000000000000000000021343 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/base.py0000664000175000017500000001636300000000000020541 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc from oslo_utils import excutils import six from taskflow import logging from taskflow import states from taskflow.types import failure from taskflow.types import notifier LOG = logging.getLogger(__name__) #: These states will results be usable, other states do not produce results. FINISH_STATES = (states.FAILURE, states.SUCCESS, states.REVERTED, states.REVERT_FAILURE) #: What is listened for by default... 
DEFAULT_LISTEN_FOR = (notifier.Notifier.ANY,) def _task_matcher(details): """Matches task details emitted.""" if not details: return False if 'task_name' in details and 'task_uuid' in details: return True return False def _retry_matcher(details): """Matches retry details emitted.""" if not details: return False if 'retry_name' in details and 'retry_uuid' in details: return True return False def _bulk_deregister(notifier, registered, details_filter=None): """Bulk deregisters callbacks associated with many states.""" while registered: state, cb = registered.pop() notifier.deregister(state, cb, details_filter=details_filter) def _bulk_register(watch_states, notifier, cb, details_filter=None): """Bulk registers a callback associated with many states.""" registered = [] try: for state in watch_states: if not notifier.is_registered(state, cb, details_filter=details_filter): notifier.register(state, cb, details_filter=details_filter) registered.append((state, cb)) except ValueError: with excutils.save_and_reraise_exception(): _bulk_deregister(notifier, registered, details_filter=details_filter) else: return registered class Listener(object): """Base class for listeners. A listener can be attached to an engine to do various actions on flow and atom state transitions. It implements the context manager protocol to be able to register and unregister with a given engine automatically when a context is entered and when it is exited. To implement a listener, derive from this class and override ``_flow_receiver`` and/or ``_task_receiver`` and/or ``_retry_receiver`` methods (in this class, they do nothing). """ def __init__(self, engine, task_listen_for=DEFAULT_LISTEN_FOR, flow_listen_for=DEFAULT_LISTEN_FOR, retry_listen_for=DEFAULT_LISTEN_FOR): if not task_listen_for: task_listen_for = [] if not retry_listen_for: retry_listen_for = [] if not flow_listen_for: flow_listen_for = [] self._listen_for = { 'task': list(task_listen_for), 'retry': list(retry_listen_for), 'flow': list(flow_listen_for), } self._engine = engine self._registered = {} def _flow_receiver(self, state, details): pass def _task_receiver(self, state, details): pass def _retry_receiver(self, state, details): pass def deregister(self): if 'task' in self._registered: _bulk_deregister(self._engine.atom_notifier, self._registered['task'], details_filter=_task_matcher) del self._registered['task'] if 'retry' in self._registered: _bulk_deregister(self._engine.atom_notifier, self._registered['retry'], details_filter=_retry_matcher) del self._registered['retry'] if 'flow' in self._registered: _bulk_deregister(self._engine.notifier, self._registered['flow']) del self._registered['flow'] def register(self): if 'task' not in self._registered: self._registered['task'] = _bulk_register( self._listen_for['task'], self._engine.atom_notifier, self._task_receiver, details_filter=_task_matcher) if 'retry' not in self._registered: self._registered['retry'] = _bulk_register( self._listen_for['retry'], self._engine.atom_notifier, self._retry_receiver, details_filter=_retry_matcher) if 'flow' not in self._registered: self._registered['flow'] = _bulk_register( self._listen_for['flow'], self._engine.notifier, self._flow_receiver) def __enter__(self): self.register() return self def __exit__(self, type, value, tb): try: self.deregister() except Exception: # Don't let deregistering throw exceptions LOG.warning("Failed deregistering listeners from engine %s", self._engine, exc_info=True) @six.add_metaclass(abc.ABCMeta) class DumpingListener(Listener): """Abstract 
base class for dumping listeners. This provides a simple listener that can be attached to an engine which can be derived from to dump task and/or flow state transitions to some target backend. To implement your own dumping listener derive from this class and override the ``_dump`` method. """ @abc.abstractmethod def _dump(self, message, *args, **kwargs): """Dumps the provided *templated* message to some output.""" def _flow_receiver(self, state, details): self._dump("%s has moved flow '%s' (%s) into state '%s'" " from state '%s'", self._engine, details['flow_name'], details['flow_uuid'], state, details['old_state']) def _task_receiver(self, state, details): if state in FINISH_STATES: result = details.get('result') exc_info = None was_failure = False if isinstance(result, failure.Failure): if result.exc_info: exc_info = tuple(result.exc_info) was_failure = True self._dump("%s has moved task '%s' (%s) into state '%s'" " from state '%s' with result '%s' (failure=%s)", self._engine, details['task_name'], details['task_uuid'], state, details['old_state'], result, was_failure, exc_info=exc_info) else: self._dump("%s has moved task '%s' (%s) into state '%s'" " from state '%s'", self._engine, details['task_name'], details['task_uuid'], state, details['old_state']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/capturing.py0000664000175000017500000001035100000000000021612 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.listeners import base def _freeze_it(values): """Freezes a set of values (handling none/empty nicely).""" if not values: return frozenset() else: return frozenset(values) class CaptureListener(base.Listener): """A listener that captures transitions and saves them locally. NOTE(harlowja): this listener is *mainly* useful for testing (where it is useful to test the appropriate/expected transitions, produced results... occurred after engine running) but it could have other usages as well. :ivar values: Captured transitions + details (the result of the :py:meth:`._format_capture` method) are stored into this list (a previous list to append to may be provided using the constructor keyword argument of the same name); by default this stores tuples of the format ``(kind, state, details)``. """ # Constant 'kind' strings used in the default capture formatting (to # identify what was captured); these are saved into the accumulated # values as the first index (so that your code may differentiate between # what was captured). #: Kind that denotes a 'flow' capture. FLOW = 'flow' #: Kind that denotes a 'task' capture. TASK = 'task' #: Kind that denotes a 'retry' capture. 
RETRY = 'retry' def __init__(self, engine, task_listen_for=base.DEFAULT_LISTEN_FOR, flow_listen_for=base.DEFAULT_LISTEN_FOR, retry_listen_for=base.DEFAULT_LISTEN_FOR, # Easily override what you want captured and where it # should save into and what should be skipped... capture_flow=True, capture_task=True, capture_retry=True, # Skip capturing *all* tasks, all retries, all flows... skip_tasks=None, skip_retries=None, skip_flows=None, # Provide your own list (or previous list) to accumulate # into... values=None): super(CaptureListener, self).__init__( engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for) self._capture_flow = capture_flow self._capture_task = capture_task self._capture_retry = capture_retry self._skip_tasks = _freeze_it(skip_tasks) self._skip_flows = _freeze_it(skip_flows) self._skip_retries = _freeze_it(skip_retries) if values is None: self.values = [] else: self.values = values @staticmethod def _format_capture(kind, state, details): """Tweak what is saved according to your desire(s).""" return (kind, state, details) def _task_receiver(self, state, details): if self._capture_task: if details['task_name'] not in self._skip_tasks: self.values.append(self._format_capture(self.TASK, state, details)) def _retry_receiver(self, state, details): if self._capture_retry: if details['retry_name'] not in self._skip_retries: self.values.append(self._format_capture(self.RETRY, state, details)) def _flow_receiver(self, state, details): if self._capture_flow: if details['flow_name'] not in self._skip_flows: self.values.append(self._format_capture(self.FLOW, state, details)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/claims.py0000664000175000017500000001004300000000000021064 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import os import six from taskflow import exceptions from taskflow.listeners import base from taskflow import states LOG = logging.getLogger(__name__) class CheckingClaimListener(base.Listener): """Listener that interacts [engine, job, jobboard]; ensures claim is valid. This listener (or a derivative) can be associated with an engines notification system after the job has been claimed (so that the jobs work can be worked on by that engine). This listener (after associated) will check that the job is still claimed *whenever* the engine notifies of a task or flow state change. If the job is not claimed when a state change occurs, a associated handler (or the default) will be activated to determine how to react to this *hopefully* exceptional case. 
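    A rough usage sketch (assuming ``engine``, ``job`` and ``board`` were
    created elsewhere and the job was claimed by the owner named here)::

        with CheckingClaimListener(engine, job, board, 'worker-1'):
            engine.run()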
NOTE(harlowja): this may create more traffic than desired to the jobboard backend (zookeeper or other), since the amount of state change per task and flow is non-zero (and checking during each state change will result in quite a few calls to that management system to check the jobs claim status); this could be later optimized to check less (or only check on a smaller set of states) NOTE(harlowja): if a custom ``on_job_loss`` callback is provided it must accept three positional arguments, the first being the current engine being ran, the second being the 'task/flow' state and the third being the details that were sent from the engine to listeners for inspection. """ def __init__(self, engine, job, board, owner, on_job_loss=None): super(CheckingClaimListener, self).__init__(engine) self._job = job self._board = board self._owner = owner if on_job_loss is None: self._on_job_loss = self._suspend_engine_on_loss else: if not six.callable(on_job_loss): raise ValueError("Custom 'on_job_loss' handler must be" " callable") self._on_job_loss = on_job_loss def _suspend_engine_on_loss(self, engine, state, details): """The default strategy for handling claims being lost.""" try: engine.suspend() except exceptions.TaskFlowException as e: LOG.warning("Failed suspending engine '%s', (previously owned by" " '%s'):%s%s", engine, self._owner, os.linesep, e.pformat()) def _flow_receiver(self, state, details): self._claim_checker(state, details) def _task_receiver(self, state, details): self._claim_checker(state, details) def _has_been_lost(self): try: job_state = self._job.state job_owner = self._board.find_owner(self._job) except (exceptions.NotFound, exceptions.JobFailure): return True else: if job_state == states.UNCLAIMED or self._owner != job_owner: return True else: return False def _claim_checker(self, state, details): if not self._has_been_lost(): LOG.debug("Job '%s' is still claimed (actively owned by '%s')", self._job, self._owner) else: LOG.warning("Job '%s' has lost its claim" " (previously owned by '%s')", self._job, self._owner) self._on_job_loss(self._engine, state, details) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/logging.py0000664000175000017500000002050400000000000021245 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from taskflow import formatters from taskflow.listeners import base from taskflow import logging from taskflow import states from taskflow import task from taskflow.types import failure from taskflow.utils import misc LOG = logging.getLogger(__name__) class LoggingListener(base.DumpingListener): """Listener that logs notifications it receives. It listens for task and flow notifications and writes those notifications to a provided logger, or logger of its module (``taskflow.listeners.logging``) if none is provided (and no class attribute is overridden). 
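    For example (a minimal sketch, assuming ``engine`` and ``my_logger`` have
    been created/configured elsewhere), the listener can be attached for the
    duration of a run using the context manager protocol::

        with LoggingListener(engine, log=my_logger):
            engine.run()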
The log level can also be configured, ``logging.DEBUG`` is used by default when none is provided. """ #: Default logger to use if one is not provided on construction. _LOGGER = None def __init__(self, engine, task_listen_for=base.DEFAULT_LISTEN_FOR, flow_listen_for=base.DEFAULT_LISTEN_FOR, retry_listen_for=base.DEFAULT_LISTEN_FOR, log=None, level=logging.DEBUG): super(LoggingListener, self).__init__( engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for) self._logger = misc.pick_first_not_none(log, self._LOGGER, LOG) self._level = level def _dump(self, message, *args, **kwargs): self._logger.log(self._level, message, *args, **kwargs) def _make_matcher(task_name): """Returns a function that matches a node with task item with same name.""" def _task_matcher(node): item = node.item return isinstance(item, task.Task) and item.name == task_name return _task_matcher class DynamicLoggingListener(base.Listener): """Listener that logs notifications it receives. It listens for task and flow notifications and writes those notifications to a provided logger, or logger of its module (``taskflow.listeners.logging``) if none is provided (and no class attribute is overridden). The log level can *slightly* be configured and ``logging.DEBUG`` or ``logging.WARNING`` (unless overridden via a constructor parameter) will be selected automatically based on the execution state and results produced. The following flow states cause ``logging.WARNING`` (or provided level) to be used: * ``states.FAILURE`` * ``states.REVERTED`` The following task states cause ``logging.WARNING`` (or provided level) to be used: * ``states.FAILURE`` * ``states.RETRYING`` * ``states.REVERTING`` * ``states.REVERT_FAILURE`` When a task produces a :py:class:`~taskflow.types.failure.Failure` object as its result (typically this happens when a task raises an exception) this will **always** switch the logger to use ``logging.WARNING`` (if the failure object contains a ``exc_info`` tuple this will also be logged to provide a meaningful traceback). """ #: Default logger to use if one is not provided on construction. _LOGGER = None #: States which are triggered under some type of failure. 
_FAILURE_STATES = (states.FAILURE, states.REVERT_FAILURE) def __init__(self, engine, task_listen_for=base.DEFAULT_LISTEN_FOR, flow_listen_for=base.DEFAULT_LISTEN_FOR, retry_listen_for=base.DEFAULT_LISTEN_FOR, log=None, failure_level=logging.WARNING, level=logging.DEBUG, hide_inputs_outputs_of=(), fail_formatter=None): super(DynamicLoggingListener, self).__init__( engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for) self._failure_level = failure_level self._level = level self._task_log_levels = { states.FAILURE: self._failure_level, states.REVERTED: self._failure_level, states.RETRYING: self._failure_level, states.REVERT_FAILURE: self._failure_level, } self._flow_log_levels = { states.FAILURE: self._failure_level, states.REVERTED: self._failure_level, } self._hide_inputs_outputs_of = frozenset(hide_inputs_outputs_of) self._logger = misc.pick_first_not_none(log, self._LOGGER, LOG) if fail_formatter is None: self._fail_formatter = formatters.FailureFormatter( self._engine, hide_inputs_outputs_of=self._hide_inputs_outputs_of) else: self._fail_formatter = fail_formatter def _flow_receiver(self, state, details): """Gets called on flow state changes.""" level = self._flow_log_levels.get(state, self._level) self._logger.log(level, "Flow '%s' (%s) transitioned into state '%s'" " from state '%s'", details['flow_name'], details['flow_uuid'], state, details.get('old_state')) def _task_receiver(self, state, details): """Gets called on task state changes.""" task_name = details['task_name'] task_uuid = details['task_uuid'] if 'result' in details and state in base.FINISH_STATES: # If the task failed, it's useful to show the exception traceback # and any other available exception information. result = details.get('result') if isinstance(result, failure.Failure): exc_info, fail_details = self._fail_formatter.format( result, _make_matcher(task_name)) if fail_details: self._logger.log(self._failure_level, "Task '%s' (%s) transitioned into state" " '%s' from state '%s'%s%s", task_name, task_uuid, state, details['old_state'], os.linesep, fail_details, exc_info=exc_info) else: self._logger.log(self._failure_level, "Task '%s' (%s) transitioned into state" " '%s' from state '%s'", task_name, task_uuid, state, details['old_state'], exc_info=exc_info) else: # Otherwise, depending on the enabled logging level/state we # will show or hide results that the task may have produced # during execution. level = self._task_log_levels.get(state, self._level) show_result = (self._logger.isEnabledFor(self._level) or state == states.FAILURE) if show_result and \ task_name not in self._hide_inputs_outputs_of: self._logger.log(level, "Task '%s' (%s) transitioned into" " state '%s' from state '%s' with" " result '%s'", task_name, task_uuid, state, details['old_state'], result) else: self._logger.log(level, "Task '%s' (%s) transitioned into" " state '%s' from state '%s'", task_name, task_uuid, state, details['old_state']) else: # Just a intermediary state, carry on! level = self._task_log_levels.get(state, self._level) self._logger.log(level, "Task '%s' (%s) transitioned into state" " '%s' from state '%s'", task_name, task_uuid, state, details['old_state']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/printing.py0000664000175000017500000000321300000000000021447 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import traceback from taskflow.listeners import base class PrintingListener(base.DumpingListener): """Writes the task and flow notifications messages to stdout or stderr.""" def __init__(self, engine, task_listen_for=base.DEFAULT_LISTEN_FOR, flow_listen_for=base.DEFAULT_LISTEN_FOR, retry_listen_for=base.DEFAULT_LISTEN_FOR, stderr=False): super(PrintingListener, self).__init__( engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for) if stderr: self._file = sys.stderr else: self._file = sys.stdout def _dump(self, message, *args, **kwargs): print(message % args, file=self._file) exc_info = kwargs.get('exc_info') if exc_info is not None: traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], file=self._file) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/listeners/timing.py0000664000175000017500000001546100000000000021114 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import six import time from oslo_utils import timeutils from taskflow.engines.action_engine import compiler as co from taskflow import exceptions as exc from taskflow.listeners import base from taskflow import logging from taskflow import states STARTING_STATES = frozenset((states.RUNNING, states.REVERTING)) FINISHED_STATES = frozenset((base.FINISH_STATES + (states.REVERTED,))) WATCH_STATES = frozenset(itertools.chain(FINISHED_STATES, STARTING_STATES, [states.PENDING])) LOG = logging.getLogger(__name__) # TODO(harlowja): get rid of this when we can just support python 3.x and use # its print function directly instead of having to wrap it in a helper function # due to how python 2.x print is a language built-in and not a function... def _printer(message): print(message) class DurationListener(base.Listener): """Listener that captures task duration. It records how long a task took to execute (or fail) to storage. It saves the duration in seconds as float value to task metadata with key ``'duration'``. 
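    A minimal, hedged usage sketch (engine construction is assumed to have
    happened elsewhere); after the run completes the measured value appears
    under the ``'duration'`` key of each task's stored metadata::

        with DurationListener(engine):
            engine.run()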
""" def __init__(self, engine): super(DurationListener, self).__init__(engine, task_listen_for=WATCH_STATES, flow_listen_for=WATCH_STATES) self._timers = {co.TASK: {}, co.FLOW: {}} def deregister(self): super(DurationListener, self).deregister() # There should be none that still exist at deregistering time, so log a # warning if there were any that somehow still got left behind... for item_type, timers in six.iteritems(self._timers): leftover_timers = len(timers) if leftover_timers: LOG.warning("%s %s(s) did not enter %s states", leftover_timers, item_type, FINISHED_STATES) timers.clear() def _record_ending(self, timer, item_type, item_name, state): meta_update = { 'duration': timer.elapsed(), } try: storage = self._engine.storage # Don't let storage failures throw exceptions in a listener method. if item_type == co.FLOW: storage.update_flow_metadata(meta_update) else: storage.update_atom_metadata(item_name, meta_update) except exc.StorageFailure: LOG.warning("Failure to store duration update %s for %s %s", meta_update, item_type, item_name, exc_info=True) def _task_receiver(self, state, details): task_name = details['task_name'] self._receiver(co.TASK, task_name, state) def _flow_receiver(self, state, details): flow_name = details['flow_name'] self._receiver(co.FLOW, flow_name, state) def _receiver(self, item_type, item_name, state): if state == states.PENDING: self._timers[item_type].pop(item_name, None) elif state in STARTING_STATES: self._timers[item_type][item_name] = timeutils.StopWatch().start() elif state in FINISHED_STATES: timer = self._timers[item_type].pop(item_name, None) if timer is not None: timer.stop() self._record_ending(timer, item_type, item_name, state) class PrintingDurationListener(DurationListener): """Listener that prints the duration as well as recording it.""" def __init__(self, engine, printer=None): super(PrintingDurationListener, self).__init__(engine) if printer is None: self._printer = _printer else: self._printer = printer def _record_ending(self, timer, item_type, item_name, state): super(PrintingDurationListener, self)._record_ending( timer, item_type, item_name, state) self._printer("It took %s '%s' %0.2f seconds to" " finish." % (item_type, item_name, timer.elapsed())) def _receiver(self, item_type, item_name, state): super(PrintingDurationListener, self)._receiver(item_type, item_name, state) if state in STARTING_STATES: self._printer("'%s' %s started." % (item_name, item_type)) class EventTimeListener(base.Listener): """Listener that captures task, flow, and retry event timestamps. It records how when an event is received (using unix time) to storage. It saves the timestamps under keys (in atom or flow details metadata) of the format ``{event}-timestamp`` where ``event`` is the state/event name that has been received. This information can be later extracted/examined to derive durations... """ def __init__(self, engine, task_listen_for=base.DEFAULT_LISTEN_FOR, flow_listen_for=base.DEFAULT_LISTEN_FOR, retry_listen_for=base.DEFAULT_LISTEN_FOR): super(EventTimeListener, self).__init__( engine, task_listen_for=task_listen_for, flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for) def _record_atom_event(self, state, atom_name): meta_update = {'%s-timestamp' % state: time.time()} try: # Don't let storage failures throw exceptions in a listener method. 
self._engine.storage.update_atom_metadata(atom_name, meta_update) except exc.StorageFailure: LOG.warning("Failure to store timestamp %s for atom %s", meta_update, atom_name, exc_info=True) def _flow_receiver(self, state, details): meta_update = {'%s-timestamp' % state: time.time()} try: # Don't let storage failures throw exceptions in a listener method. self._engine.storage.update_flow_metadata(meta_update) except exc.StorageFailure: LOG.warning("Failure to store timestamp %s for flow %s", meta_update, details['flow_name'], exc_info=True) def _task_receiver(self, state, details): self._record_atom_event(state, details['task_name']) def _retry_receiver(self, state, details): self._record_atom_event(state, details['retry_name']) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/logging.py0000664000175000017500000000351600000000000017241 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging _BASE = __name__.split(".", 1)[0] # Add a BLATHER/TRACE level, this matches the multiprocessing # utils.py module (and oslo.log, kazoo and others) that declares a similar # level, this level is for information that is even lower level than regular # DEBUG and gives out so much runtime information that it is only # useful by low-level/certain users... BLATHER = 5 TRACE = BLATHER # Copy over *select* attributes to make it easy to use this module. CRITICAL = logging.CRITICAL DEBUG = logging.DEBUG ERROR = logging.ERROR FATAL = logging.FATAL INFO = logging.INFO NOTSET = logging.NOTSET WARN = logging.WARN WARNING = logging.WARNING class _TraceLoggerAdapter(logging.LoggerAdapter): def trace(self, msg, *args, **kwargs): """Delegate a trace call to the underlying logger.""" self.log(TRACE, msg, *args, **kwargs) def warn(self, msg, *args, **kwargs): """Delegate a warning call to the underlying logger.""" self.warning(msg, *args, **kwargs) def getLogger(name=_BASE, extra=None): logger = logging.getLogger(name) if not logger.handlers: logger.addHandler(logging.NullHandler()) return _TraceLoggerAdapter(logger, extra=extra) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6280417 taskflow-4.6.4/taskflow/patterns/0000775000175000017500000000000000000000000017074 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/patterns/__init__.py0000664000175000017500000000000000000000000021173 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/patterns/graph_flow.py0000664000175000017500000003566500000000000021615 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import six from taskflow import deciders as de from taskflow import exceptions as exc from taskflow import flow from taskflow.types import graph as gr def _unsatisfied_requires(node, graph, *additional_provided): requires = set(node.requires) if not requires: return requires for provided in additional_provided: # This is using the difference() method vs the - # operator since the latter doesn't work with frozen # or regular sets (when used in combination with ordered # sets). # # If this is not done the following happens... # # TypeError: unsupported operand type(s) # for -: 'set' and 'OrderedSet' requires = requires.difference(provided) if not requires: return requires for pred in graph.bfs_predecessors_iter(node): requires = requires.difference(pred.provides) if not requires: return requires return requires class Flow(flow.Flow): """Graph flow pattern. Contained *flows/tasks* will be executed according to their dependencies which will be resolved by using the *flows/tasks* provides and requires mappings or by following manually created dependency links. From dependencies a `directed graph`_ is built. If it has edge ``A -> B``, this means ``B`` depends on ``A`` (and that the execution of ``B`` must wait until ``A`` has finished executing, on reverting this means that the reverting of ``A`` must wait until ``B`` has finished reverting). Note: `cyclic`_ dependencies are not allowed. .. _directed graph: https://en.wikipedia.org/wiki/Directed_graph .. _cyclic: https://en.wikipedia.org/wiki/Cycle_graph """ def __init__(self, name, retry=None): super(Flow, self).__init__(name, retry) self._graph = gr.DiGraph(name=name) self._graph.freeze() #: Extracts the unsatisified symbol requirements of a single node. _unsatisfied_requires = staticmethod(_unsatisfied_requires) def link(self, u, v, decider=None, decider_depth=None): """Link existing node u as a runtime dependency of existing node v. Note that if the addition of these edges creates a `cyclic`_ graph then a :class:`~taskflow.exceptions.DependencyFailure` will be raised and the provided changes will be discarded. If the nodes that are being requested to link do not exist in this graph than a :class:`ValueError` will be raised. :param u: task or flow to create a link from (must exist already) :param v: task or flow to create a link to (must exist already) :param decider: A callback function that will be expected to decide at runtime whether ``v`` should be allowed to execute (or whether the execution of ``v`` should be ignored, and therefore not executed). It is expected to take as single keyword argument ``history`` which will be the execution results of all ``u`` decidable links that have ``v`` as a target. It is expected to return a single boolean (``True`` to allow ``v`` execution or ``False`` to not). 
:param decider_depth: One of the :py:class:`~taskflow.deciders.Depth` enumerations (or a string version of) that will be used to influence what atoms are ignored when the decider provided results false. If not provided (and a valid decider is provided then this defaults to :py:attr:`~taskflow.deciders.Depth.ALL`). .. _cyclic: https://en.wikipedia.org/wiki/Cycle_graph """ if not self._graph.has_node(u): raise ValueError("Node '%s' not found to link from" % (u)) if not self._graph.has_node(v): raise ValueError("Node '%s' not found to link to" % (v)) if decider is not None: if not six.callable(decider): raise ValueError("Decider boolean callback must be callable") self._swap(self._link(u, v, manual=True, decider=decider, decider_depth=decider_depth)) return self def _link(self, u, v, graph=None, reason=None, manual=False, decider=None, decider_depth=None): mutable_graph = True if graph is None: graph = self._graph mutable_graph = False # NOTE(harlowja): Add an edge to a temporary copy and only if that # copy is valid then do we swap with the underlying graph. attrs = graph.get_edge_data(u, v) if not attrs: attrs = {} if decider is not None: attrs[flow.LINK_DECIDER] = decider try: # Remove existing decider depth, if one existed. del attrs[flow.LINK_DECIDER_DEPTH] except KeyError: pass if decider_depth is not None: if decider is None: raise ValueError("Decider depth requires a decider to be" " provided along with it") else: decider_depth = de.Depth.translate(decider_depth) attrs[flow.LINK_DECIDER_DEPTH] = decider_depth if manual: attrs[flow.LINK_MANUAL] = True if reason is not None: if flow.LINK_REASONS not in attrs: attrs[flow.LINK_REASONS] = set() attrs[flow.LINK_REASONS].add(reason) if not mutable_graph: graph = gr.DiGraph(graph) graph.add_edge(u, v, **attrs) return graph def _swap(self, graph): """Validates the replacement graph and then swaps the underlying graph. After swapping occurs the underlying graph will be frozen so that the immutability invariant is maintained (we may be able to relax this constraint in the future since our exposed public api does not allow direct access to the underlying graph). """ if not graph.is_directed_acyclic(): raise exc.DependencyFailure("No path through the node(s) in the" " graph produces an ordering that" " will allow for logical" " edge traversal") self._graph = graph.freeze() def add(self, *nodes, **kwargs): """Adds a given task/tasks/flow/flows to this flow. Note that if the addition of these nodes (and any edges) creates a `cyclic`_ graph then a :class:`~taskflow.exceptions.DependencyFailure` will be raised and the applied changes will be discarded. :param nodes: node(s) to add to the flow :param kwargs: keyword arguments, the two keyword arguments currently processed are: * ``resolve_requires`` a boolean that when true (the default) implies that when node(s) are added their symbol requirements will be matched to existing node(s) and links will be automatically made to those providers. If multiple possible providers exist then a :class:`~taskflow.exceptions.AmbiguousDependency` exception will be raised and the provided additions will be discarded. * ``resolve_existing``, a boolean that when true (the default) implies that on addition of a new node that existing node(s) will have their requirements scanned for symbols that this newly added node can provide. If a match is found a link is automatically created from the newly added node to the requiree. .. 
_cyclic: https://en.wikipedia.org/wiki/Cycle_graph """ # Let's try to avoid doing any work if we can; since the below code # after this filter can create more temporary graphs that aren't needed # if the nodes already exist... nodes = [i for i in nodes if not self._graph.has_node(i)] if not nodes: return self # This syntax will *hopefully* be better in future versions of python. # # See: http://legacy.python.org/dev/peps/pep-3102/ (python 3.0+) resolve_requires = bool(kwargs.get('resolve_requires', True)) resolve_existing = bool(kwargs.get('resolve_existing', True)) # Figure out what the existing nodes *still* require and what they # provide so we can do this lookup later when inferring. required = collections.defaultdict(list) provided = collections.defaultdict(list) retry_provides = set() if self._retry is not None: for value in self._retry.requires: required[value].append(self._retry) for value in self._retry.provides: retry_provides.add(value) provided[value].append(self._retry) for node in self._graph.nodes: for value in self._unsatisfied_requires(node, self._graph, retry_provides): required[value].append(node) for value in node.provides: provided[value].append(node) # NOTE(harlowja): Add node(s) and edge(s) to a temporary copy of the # underlying graph and only if that is successful added to do we then # swap with the underlying graph. tmp_graph = gr.DiGraph(self._graph) for node in nodes: tmp_graph.add_node(node) # Try to find a valid provider. if resolve_requires: for value in self._unsatisfied_requires(node, tmp_graph, retry_provides): if value in provided: providers = provided[value] if len(providers) > 1: provider_names = [n.name for n in providers] raise exc.AmbiguousDependency( "Resolution error detected when" " adding '%(node)s', multiple" " providers %(providers)s found for" " required symbol '%(value)s'" % dict(node=node.name, providers=sorted(provider_names), value=value)) else: self._link(providers[0], node, graph=tmp_graph, reason=value) else: required[value].append(node) for value in node.provides: provided[value].append(node) # See if what we provide fulfills any existing requiree. if resolve_existing: for value in node.provides: if value in required: for requiree in list(required[value]): if requiree is not node: self._link(node, requiree, graph=tmp_graph, reason=value) required[value].remove(requiree) self._swap(tmp_graph) return self def _get_subgraph(self): """Get the active subgraph of _graph. Descendants may override this to make only part of self._graph visible. """ return self._graph def __len__(self): return self._get_subgraph().number_of_nodes() def __iter__(self): for n, _n_data in self.iter_nodes(): yield n def iter_links(self): return self._get_subgraph().edges(data=True) def iter_nodes(self): g = self._get_subgraph() for n in g.topological_sort(): yield n, g.nodes[n] @property def requires(self): requires = set() retry_provides = set() if self._retry is not None: requires.update(self._retry.requires) retry_provides.update(self._retry.provides) g = self._get_subgraph() for node in g.nodes: requires.update(self._unsatisfied_requires(node, g, retry_provides)) return frozenset(requires) def _reset_cached_subgraph(func): """Resets cached subgraph after execution, in case it was affected.""" @six.wraps(func) def wrapper(self, *args, **kwargs): result = func(self, *args, **kwargs) self._subgraph = None return result return wrapper class TargetedFlow(Flow): """Graph flow with a target. Adds possibility to execute a flow up to certain graph node (task or subflow). 
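    A small, hedged sketch (``task_a`` and ``task_b`` stand in for already
    constructed task objects; the link shown is purely illustrative)::

        flow = TargetedFlow("example")
        flow.add(task_a, task_b)
        flow.link(task_a, task_b)   # task_b depends on task_a
        flow.set_target(task_a)     # task_b will not be executed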
""" def __init__(self, *args, **kwargs): super(TargetedFlow, self).__init__(*args, **kwargs) self._subgraph = None self._target = None def set_target(self, target_node): """Set target for the flow. Any node(s) (tasks or subflows) not needed for the target node will not be executed. """ if not self._graph.has_node(target_node): raise ValueError("Node '%s' not found" % target_node) self._target = target_node self._subgraph = None def reset_target(self): """Reset target for the flow. All node(s) of the flow will be executed. """ self._target = None self._subgraph = None add = _reset_cached_subgraph(Flow.add) link = _reset_cached_subgraph(Flow.link) def _get_subgraph(self): if self._subgraph is not None: return self._subgraph if self._target is None: return self._graph nodes = [self._target] nodes.extend(self._graph.bfs_predecessors_iter(self._target)) self._subgraph = gr.DiGraph( incoming_graph_data=self._graph.subgraph(nodes)) self._subgraph.freeze() return self._subgraph ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/patterns/linear_flow.py0000664000175000017500000000536400000000000021757 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import flow from taskflow.types import graph as gr class Flow(flow.Flow): """Linear flow pattern. A linear (potentially nested) flow of *tasks/flows* that can be applied in order as one unit and rolled back as one unit using the reverse order that the *tasks/flows* have been applied in. """ _no_last_item = object() """Sentinel object used to denote no last item has been assigned. This is used to track no last item being added, since at creation there is no last item, but since the :meth:`.add` routine can take any object including none, we have to use a different object to be able to distinguish the lack of any last item... 
""" def __init__(self, name, retry=None): super(Flow, self).__init__(name, retry) self._graph = gr.OrderedDiGraph(name=name) self._last_item = self._no_last_item def add(self, *items): """Adds a given task/tasks/flow/flows to this flow.""" for item in items: if not self._graph.has_node(item): self._graph.add_node(item) if self._last_item is not self._no_last_item: self._graph.add_edge(self._last_item, item, attr_dict={flow.LINK_INVARIANT: True}) self._last_item = item return self def __len__(self): return len(self._graph) def __iter__(self): for item in self._graph.nodes: yield item @property def requires(self): requires = set() prior_provides = set() if self._retry is not None: requires.update(self._retry.requires) prior_provides.update(self._retry.provides) for item in self: requires.update(item.requires - prior_provides) prior_provides.update(item.provides) return frozenset(requires) def iter_nodes(self): for (n, n_data) in self._graph.nodes(data=True): yield (n, n_data) def iter_links(self): for (u, v, e_data) in self._graph.edges(data=True): yield (u, v, e_data) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/patterns/unordered_flow.py0000664000175000017500000000400700000000000022465 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import flow from taskflow.types import graph as gr class Flow(flow.Flow): """Unordered flow pattern. A unordered (potentially nested) flow of *tasks/flows* that can be executed in any order as one unit and rolled back as one unit. 
""" def __init__(self, name, retry=None): super(Flow, self).__init__(name, retry) self._graph = gr.Graph(name=name) def add(self, *items): """Adds a given task/tasks/flow/flows to this flow.""" for item in items: if not self._graph.has_node(item): self._graph.add_node(item) return self def __len__(self): return len(self._graph) def __iter__(self): for item in self._graph: yield item def iter_links(self): for (u, v, e_data) in self._graph.edges(data=True): yield (u, v, e_data) def iter_nodes(self): for n, n_data in self._graph.nodes(data=True): yield (n, n_data) @property def requires(self): requires = set() retry_provides = set() if self._retry is not None: requires.update(self._retry.requires) retry_provides.update(self._retry.provides) for item in self: item_requires = item.requires - retry_provides requires.update(item_requires) return frozenset(requires) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6280417 taskflow-4.6.4/taskflow/persistence/0000775000175000017500000000000000000000000017560 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/__init__.py0000664000175000017500000000000000000000000021657 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.632042 taskflow-4.6.4/taskflow/persistence/backends/0000775000175000017500000000000000000000000021332 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/__init__.py0000664000175000017500000000643000000000000023446 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from stevedore import driver from taskflow import exceptions as exc from taskflow import logging from taskflow.utils import misc # NOTE(harlowja): this is the entrypoint namespace, not the module namespace. BACKEND_NAMESPACE = 'taskflow.persistence' LOG = logging.getLogger(__name__) def fetch(conf, namespace=BACKEND_NAMESPACE, **kwargs): """Fetch a persistence backend with the given configuration. This fetch method will look for the entrypoint name in the entrypoint namespace, and then attempt to instantiate that entrypoint using the provided configuration and any persistence backend specific kwargs. NOTE(harlowja): to aid in making it easy to specify configuration and options to a backend the configuration (which is typical just a dictionary) can also be a URI string that identifies the entrypoint name and any configuration specific to that backend. 
For example, given the following configuration URI:: mysql:///?a=b&c=d This will look for the entrypoint named 'mysql' and will provide a configuration object composed of the URI's components, in this case that is ``{'a': 'b', 'c': 'd'}`` to the constructor of that persistence backend instance. """ backend, conf = misc.extract_driver_and_conf(conf, 'connection') # If the backend is like 'mysql+pymysql://...' which informs the # backend to use a dialect (supported by sqlalchemy at least) we just want # to look at the first component to find our entrypoint backend name... if backend.find("+") != -1: backend = backend.split("+", 1)[0] LOG.debug('Looking for %r backend driver in %r', backend, namespace) try: mgr = driver.DriverManager(namespace, backend, invoke_on_load=True, invoke_args=(conf,), invoke_kwds=kwargs) return mgr.driver except RuntimeError as e: raise exc.NotFound("Could not find backend %s: %s" % (backend, e)) @contextlib.contextmanager def backend(conf, namespace=BACKEND_NAMESPACE, **kwargs): """Fetches a backend, connects, upgrades, then closes it on completion. This allows a backend instance to be fetched, connected to, have its schema upgraded (if the schema is already up to date this is a no-op) and then used in a context manager statement with the backend being closed upon context manager exit. """ with contextlib.closing(fetch(conf, namespace=namespace, **kwargs)) as be: with contextlib.closing(be.get_connection()) as conn: conn.upgrade() yield be ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/impl_dir.py0000664000175000017500000001370500000000000023511 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import errno import io import os import shutil import cachetools import fasteners from oslo_serialization import jsonutils from oslo_utils import fileutils from taskflow import exceptions as exc from taskflow.persistence import path_based from taskflow.utils import misc @contextlib.contextmanager def _storagefailure_wrapper(): try: yield except exc.TaskFlowException: raise except Exception as e: if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT: exc.raise_with_cause(exc.NotFound, 'Item not found: %s' % e.filename, cause=e) else: exc.raise_with_cause(exc.StorageFailure, "Storage backend internal error", cause=e) class DirBackend(path_based.PathBasedBackend): """A directory and file based backend. This backend does *not* provide true transactional semantics. It does guarantee that there will be no interprocess race conditions when writing and reading by using a consistent hierarchy of file based locks. 
Example configuration:: conf = { "path": "/tmp/taskflow", # save data to this root directory "max_cache_size": 1024, # keep up-to 1024 entries in memory } """ DEFAULT_FILE_ENCODING = 'utf-8' """ Default encoding used when decoding or encoding files into or from text/unicode into binary or binary into text/unicode. """ def __init__(self, conf): super(DirBackend, self).__init__(conf) max_cache_size = self._conf.get('max_cache_size') if max_cache_size is not None: max_cache_size = int(max_cache_size) if max_cache_size < 1: raise ValueError("Maximum cache size must be greater than" " or equal to one") self.file_cache = cachetools.LRUCache(max_cache_size) else: self.file_cache = {} self.encoding = self._conf.get('encoding', self.DEFAULT_FILE_ENCODING) if not self._path: raise ValueError("Empty path is disallowed") self._path = os.path.abspath(self._path) self.lock = fasteners.ReaderWriterLock() def get_connection(self): return Connection(self) def close(self): pass class Connection(path_based.PathBasedConnection): def _read_from(self, filename): # This is very similar to the oslo-incubator fileutils module, but # tweaked to not depend on a global cache, as well as tweaked to not # pull-in the oslo logging module (which is a huge pile of code). mtime = os.path.getmtime(filename) cache_info = self.backend.file_cache.setdefault(filename, {}) if not cache_info or mtime > cache_info.get('mtime', 0): with io.open(filename, 'r', encoding=self.backend.encoding) as fp: cache_info['data'] = fp.read() cache_info['mtime'] = mtime return cache_info['data'] def _write_to(self, filename, contents): contents = misc.binary_encode(contents, encoding=self.backend.encoding) with io.open(filename, 'wb') as fp: fp.write(contents) self.backend.file_cache.pop(filename, None) @contextlib.contextmanager def _path_lock(self, path): lockfile = self._join_path(path, 'lock') with fasteners.InterProcessLock(lockfile) as lock: with _storagefailure_wrapper(): yield lock def _join_path(self, *parts): return os.path.join(*parts) def _get_item(self, path): with self._path_lock(path): item_path = self._join_path(path, 'metadata') return misc.decode_json(self._read_from(item_path)) def _set_item(self, path, value, transaction): with self._path_lock(path): item_path = self._join_path(path, 'metadata') self._write_to(item_path, jsonutils.dumps(value)) def _del_tree(self, path, transaction): with self._path_lock(path): shutil.rmtree(path) def _get_children(self, path): if path == self.book_path: filter_func = os.path.isdir else: filter_func = os.path.islink with _storagefailure_wrapper(): return [child for child in os.listdir(path) if filter_func(self._join_path(path, child))] def _ensure_path(self, path): with _storagefailure_wrapper(): fileutils.ensure_tree(path) def _create_link(self, src_path, dest_path, transaction): with _storagefailure_wrapper(): try: os.symlink(src_path, dest_path) except OSError as e: if e.errno != errno.EEXIST: raise @contextlib.contextmanager def _transaction(self): """This just wraps a global write-lock.""" lock = self.backend.lock.write_lock with lock(): yield def validate(self): with _storagefailure_wrapper(): for p in (self.flow_path, self.atom_path, self.book_path): if not os.path.isdir(p): raise RuntimeError("Missing required directory: %s" % (p)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/impl_memory.py0000664000175000017500000003214300000000000024240 0ustar00zuulzuul00000000000000# -*- 
coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import copy import itertools import posixpath as pp import fasteners import six from taskflow import exceptions as exc from taskflow.persistence import path_based from taskflow.types import tree class FakeInode(tree.Node): """A in-memory filesystem inode-like object.""" def __init__(self, item, path, value=None): super(FakeInode, self).__init__(item, path=path, value=value) class FakeFilesystem(object): """An in-memory filesystem-like structure. This filesystem uses posix style paths **only** so users must be careful to use the ``posixpath`` module instead of the ``os.path`` one which will vary depending on the operating system which the active python is running in (the decision to use ``posixpath`` was to avoid the path variations which are not relevant in an implementation of a in-memory fake filesystem). **Not** thread-safe when a single filesystem is mutated at the same time by multiple threads. For example having multiple threads call into :meth:`~taskflow.persistence.backends.impl_memory.FakeFilesystem.clear` at the same time could potentially end badly. It is thread-safe when only :meth:`~taskflow.persistence.backends.impl_memory.FakeFilesystem.get` or other read-only actions (like calling into :meth:`~taskflow.persistence.backends.impl_memory.FakeFilesystem.ls`) are occurring at the same time. Example usage: >>> from taskflow.persistence.backends import impl_memory >>> fs = impl_memory.FakeFilesystem() >>> fs.ensure_path('/a/b/c') >>> fs['/a/b/c'] = 'd' >>> print(fs['/a/b/c']) d >>> del fs['/a/b/c'] >>> fs.ls("/a/b") [] >>> fs.get("/a/b/c", 'blob') 'blob' """ #: Root path of the in-memory filesystem. root_path = pp.sep @classmethod def normpath(cls, path): """Return a normalized absolutized version of the pathname path.""" if not path: raise ValueError("This filesystem can only normalize paths" " that are not empty") if not path.startswith(cls.root_path): raise ValueError("This filesystem can only normalize" " paths that start with %s: '%s' is not" " valid" % (cls.root_path, path)) return pp.normpath(path) #: Split a pathname into a tuple of ``(head, tail)``. split = staticmethod(pp.split) @staticmethod def join(*pieces): """Join many path segments together.""" return pp.sep.join(pieces) def __init__(self, deep_copy=True): self._root = FakeInode(self.root_path, self.root_path) self._reverse_mapping = { self.root_path: self._root, } if deep_copy: self._copier = copy.deepcopy else: self._copier = copy.copy def ensure_path(self, path): """Ensure the path (and parents) exists.""" path = self.normpath(path) # Ignore the root path as we already checked for that; and it # will always exist/can't be removed anyway... 
if path == self._root.item: return node = self._root for piece in self._iter_pieces(path): child_node = node.find(piece, only_direct=True, include_self=False) if child_node is None: child_node = self._insert_child(node, piece) node = child_node def _insert_child(self, parent_node, basename, value=None): child_path = self.join(parent_node.metadata['path'], basename) # This avoids getting '//a/b' (duplicated sep at start)... # # Which can happen easily if something like the following is given. # >>> x = ['/', 'b'] # >>> pp.sep.join(x) # '//b' if child_path.startswith(pp.sep * 2): child_path = child_path[1:] child_node = FakeInode(basename, child_path, value=value) parent_node.add(child_node) self._reverse_mapping[child_path] = child_node return child_node def _fetch_node(self, path, normalized=False): if not normalized: normed_path = self.normpath(path) else: normed_path = path try: return self._reverse_mapping[normed_path] except KeyError: raise exc.NotFound("Path '%s' not found" % path) def get(self, path, default=None): """Fetch the value of given path (and return default if not found).""" try: return self._get_item(self.normpath(path)) except exc.NotFound: return default def _get_item(self, path, links=None): node = self._fetch_node(path, normalized=True) if 'target' in node.metadata: # Follow the link (and watch out for loops)... path = node.metadata['target'] if links is None: links = [] if path in links: raise ValueError("Recursive link following not" " allowed (loop %s detected)" % (links + [path])) else: links.append(path) return self._get_item(path, links=links) else: return self._copier(node.metadata['value']) def _up_to_root_selector(self, root_node, child_node): # Build the path from the child to the root and stop at the # root, and then form a path string... 
path_pieces = [child_node.item] for parent_node in child_node.path_iter(include_self=False): if parent_node is root_node: break path_pieces.append(parent_node.item) if len(path_pieces) > 1: path_pieces.reverse() return self.join(*path_pieces) @staticmethod def _metadata_path_selector(root_node, child_node): return child_node.metadata['path'] def ls_r(self, path, absolute=False): """Return list of all children of the given path (recursively).""" node = self._fetch_node(path) if absolute: selector_func = self._metadata_path_selector else: selector_func = self._up_to_root_selector return [selector_func(node, child_node) for child_node in node.bfs_iter()] def ls(self, path, absolute=False): """Return list of all children of the given path (not recursive).""" node = self._fetch_node(path) if absolute: selector_func = self._metadata_path_selector else: selector_func = self._up_to_root_selector child_node_it = iter(node) return [selector_func(node, child_node) for child_node in child_node_it] def clear(self): """Remove all nodes (except the root) from this filesystem.""" self._reverse_mapping = { self.root_path: self._root, } for node in list(self._root.reverse_iter()): node.disassociate() def delete(self, path, recursive=False): """Deletes a node (optionally its children) from this filesystem.""" path = self.normpath(path) node = self._fetch_node(path, normalized=True) if node is self._root and not recursive: raise ValueError("Can not delete '%s'" % self._root.item) if recursive: child_paths = (child.metadata['path'] for child in node.bfs_iter()) else: node_child_count = node.child_count() if node_child_count: raise ValueError("Can not delete '%s', it has %s children" % (path, node_child_count)) child_paths = [] if node is self._root: # Don't drop/pop the root... 
paths = child_paths drop_nodes = [] else: paths = itertools.chain([path], child_paths) drop_nodes = [node] for path in paths: self._reverse_mapping.pop(path, None) for node in drop_nodes: node.disassociate() def _iter_pieces(self, path, include_root=False): if path == self._root.item: # Check for this directly as the following doesn't work with # split correctly: # # >>> path = "/" # path.split(pp.sep) # ['', ''] parts = [] else: parts = path.split(pp.sep)[1:] if include_root: parts.insert(0, self._root.item) for piece in parts: yield piece def __delitem__(self, path): self.delete(path, recursive=True) @staticmethod def _stringify_node(node): if 'target' in node.metadata: return "%s (link to %s)" % (node.item, node.metadata['target']) else: return six.text_type(node.item) def pformat(self): """Pretty format this in-memory filesystem.""" return self._root.pformat(stringify_node=self._stringify_node) def symlink(self, src_path, dest_path): """Link the destionation path to the source path.""" dest_path = self.normpath(dest_path) src_path = self.normpath(src_path) try: dest_node = self._fetch_node(dest_path, normalized=True) except exc.NotFound: parent_path, basename = self.split(dest_path) parent_node = self._fetch_node(parent_path, normalized=True) dest_node = self._insert_child(parent_node, basename) dest_node.metadata['target'] = src_path def __getitem__(self, path): return self._get_item(self.normpath(path)) def __setitem__(self, path, value): path = self.normpath(path) value = self._copier(value) try: node = self._fetch_node(path, normalized=True) node.metadata.update(value=value) except exc.NotFound: parent_path, basename = self.split(path) parent_node = self._fetch_node(parent_path, normalized=True) self._insert_child(parent_node, basename, value=value) class MemoryBackend(path_based.PathBasedBackend): """A in-memory (non-persistent) backend. This backend writes logbooks, flow details, and atom details to a in-memory filesystem-like structure (rooted by the ``memory`` instance variable). This backend does *not* provide true transactional semantics. It does guarantee that there will be no inter-thread race conditions when writing and reading by using a read/write locks. """ #: Default path used when none is provided. 
DEFAULT_PATH = pp.sep def __init__(self, conf=None): super(MemoryBackend, self).__init__(conf) self.memory = FakeFilesystem(deep_copy=self._conf.get('deep_copy', True)) self.lock = fasteners.ReaderWriterLock() def get_connection(self): return Connection(self) def close(self): pass class Connection(path_based.PathBasedConnection): def __init__(self, backend): super(Connection, self).__init__(backend) self.upgrade() @contextlib.contextmanager def _memory_lock(self, write=False): if write: lock = self.backend.lock.write_lock else: lock = self.backend.lock.read_lock with lock(): try: yield except exc.TaskFlowException: raise except Exception: exc.raise_with_cause(exc.StorageFailure, "Storage backend internal error") def _join_path(self, *parts): return pp.join(*parts) def _get_item(self, path): with self._memory_lock(): return self.backend.memory[path] def _set_item(self, path, value, transaction): self.backend.memory[path] = value def _del_tree(self, path, transaction): del self.backend.memory[path] def _get_children(self, path): with self._memory_lock(): return self.backend.memory.ls(path) def _ensure_path(self, path): with self._memory_lock(write=True): self.backend.memory.ensure_path(path) def _create_link(self, src_path, dest_path, transaction): self.backend.memory.symlink(src_path, dest_path) @contextlib.contextmanager def _transaction(self): """This just wraps a global write-lock.""" with self._memory_lock(write=True): yield def validate(self): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/impl_sqlalchemy.py0000664000175000017500000006160100000000000025073 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import copy import functools import threading import time from oslo_utils import strutils import six import sqlalchemy as sa from sqlalchemy import exc as sa_exc from sqlalchemy import pool as sa_pool from sqlalchemy import sql import tenacity from taskflow import exceptions as exc from taskflow import logging from taskflow.persistence.backends.sqlalchemy import migration from taskflow.persistence.backends.sqlalchemy import tables from taskflow.persistence import base from taskflow.persistence import models from taskflow.utils import eventlet_utils from taskflow.utils import misc LOG = logging.getLogger(__name__) # NOTE(harlowja): This is all very similar to what oslo-incubator uses but is # not based on using oslo.cfg and its global configuration (which should not be # used in libraries such as taskflow). # # TODO(harlowja): once oslo.db appears we should be able to use that instead # since it's not supposed to have any usage of oslo.cfg in it when it # materializes as a library. 
# See: http://dev.mysql.com/doc/refman/5.0/en/error-messages-client.html MY_SQL_CONN_ERRORS = ( # Lost connection to MySQL server at '%s', system error: %d '2006', # Can't connect to MySQL server on '%s' (%d) '2003', # Can't connect to local MySQL server through socket '%s' (%d) '2002', ) MY_SQL_GONE_WAY_AWAY_ERRORS = ( # Lost connection to MySQL server at '%s', system error: %d '2006', # Lost connection to MySQL server during query '2013', # Commands out of sync; you can't run this command now '2014', # Can't open shared memory; no answer from server (%lu) '2045', # Lost connection to MySQL server at '%s', system error: %d '2055', ) # See: http://www.postgresql.org/docs/9.1/static/errcodes-appendix.html POSTGRES_CONN_ERRORS = ( # connection_exception '08000', # connection_does_not_exist '08003', # connection_failure '08006', # sqlclient_unable_to_establish_sqlconnection '08001', # sqlserver_rejected_establishment_of_sqlconnection '08004', # Just couldn't connect (postgres errors are pretty weird) 'could not connect to server', ) POSTGRES_GONE_WAY_AWAY_ERRORS = ( # Server terminated while in progress (postgres errors are pretty weird). 'server closed the connection unexpectedly', 'terminating connection due to administrator command', ) # These connection urls mean sqlite is being used as an in-memory DB. SQLITE_IN_MEMORY = ('sqlite://', 'sqlite:///', 'sqlite:///:memory:') # Transacation isolation levels that will be automatically applied, we prefer # strong read committed isolation levels to avoid merging and using dirty # data... # # See: http://en.wikipedia.org/wiki/Isolation_(database_systems) DEFAULT_TXN_ISOLATION_LEVELS = { 'mysql': 'READ COMMITTED', 'postgresql': 'READ COMMITTED', 'postgres': 'READ COMMITTED', } def _log_statements(log_level, conn, cursor, statement, parameters, *args): if LOG.isEnabledFor(log_level): LOG.log(log_level, "Running statement '%s' with parameters %s", statement, parameters) def _in_any(reason, err_haystack): """Checks if any elements of the haystack are in the given reason.""" for err in err_haystack: if reason.find(six.text_type(err)) != -1: return True return False def _is_db_connection_error(reason): return _in_any(reason, list(MY_SQL_CONN_ERRORS + POSTGRES_CONN_ERRORS)) def _as_bool(value): if isinstance(value, bool): return value # This is different than strutils, but imho is an acceptable difference. if value is None: return False # NOTE(harlowja): prefer strictness to avoid users getting accustomed # to passing bad values in and this *just working* (which imho is a bad # habit to encourage). return strutils.bool_from_string(value, strict=True) def _thread_yield(dbapi_con, con_record): """Ensure other greenthreads get a chance to be executed. If we use eventlet.monkey_patch(), eventlet.greenthread.sleep(0) will execute instead of time.sleep(0). Force a context switch. With common database backends (eg MySQLdb and sqlite), there is no implicit yield caused by network I/O since they are implemented by C libraries that eventlet cannot monkey patch. """ time.sleep(0) def _set_sql_mode(sql_mode, dbapi_con, connection_rec): """Set the sql_mode session variable. MySQL supports several server modes. The default is None, but sessions may choose to enable server modes like TRADITIONAL, ANSI, several STRICT_* modes and others. Note: passing in '' (empty string) for sql_mode clears the SQL mode for the session, overriding a potentially set server default. 
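    A hedged configuration sketch (``mysql_sql_mode`` is the backend
    configuration key consumed later in ``_create_engine``; the connection
    URL shown is only an example)::

        conf = {
            "connection": "mysql+pymysql://user:password@localhost/taskflow",
            "mysql_sql_mode": "TRADITIONAL",
        }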
""" cursor = dbapi_con.cursor() cursor.execute("SET SESSION sql_mode = %s", [sql_mode]) def _ping_listener(dbapi_conn, connection_rec, connection_proxy): """Ensures that MySQL connections checked out of the pool are alive. Modified + borrowed from: http://bit.ly/14BYaW6. """ try: dbapi_conn.cursor().execute('select 1') except dbapi_conn.OperationalError as ex: if _in_any(six.text_type(ex.args[0]), MY_SQL_GONE_WAY_AWAY_ERRORS): LOG.warning('Got mysql server has gone away', exc_info=True) raise sa_exc.DisconnectionError("Database server went away") elif _in_any(six.text_type(ex.args[0]), POSTGRES_GONE_WAY_AWAY_ERRORS): LOG.warning('Got postgres server has gone away', exc_info=True) raise sa_exc.DisconnectionError("Database server went away") else: raise class _Alchemist(object): """Internal <-> external row <-> objects + other helper functions. NOTE(harlowja): for internal usage only. """ def __init__(self, tables): self._tables = tables @staticmethod def convert_flow_detail(row): return models.FlowDetail.from_dict(dict(row.items())) @staticmethod def convert_book(row): return models.LogBook.from_dict(dict(row.items())) @staticmethod def convert_atom_detail(row): row = dict(row.items()) atom_cls = models.atom_detail_class(row.pop('atom_type')) return atom_cls.from_dict(row) def atom_query_iter(self, conn, parent_uuid): q = (sql.select([self._tables.atomdetails]). where(self._tables.atomdetails.c.parent_uuid == parent_uuid)) for row in conn.execute(q): yield self.convert_atom_detail(row) def flow_query_iter(self, conn, parent_uuid): q = (sql.select([self._tables.flowdetails]). where(self._tables.flowdetails.c.parent_uuid == parent_uuid)) for row in conn.execute(q): yield self.convert_flow_detail(row) def populate_book(self, conn, book): for fd in self.flow_query_iter(conn, book.uuid): book.add(fd) self.populate_flow_detail(conn, fd) def populate_flow_detail(self, conn, fd): for ad in self.atom_query_iter(conn, fd.uuid): fd.add(ad) class SQLAlchemyBackend(base.Backend): """A sqlalchemy backend. Example configuration:: conf = { "connection": "sqlite:////tmp/test.db", } """ def __init__(self, conf, engine=None): super(SQLAlchemyBackend, self).__init__(conf) if engine is not None: self._engine = engine self._owns_engine = False else: self._engine = self._create_engine(self._conf) self._owns_engine = True self._validated = False self._upgrade_lock = threading.Lock() try: self._max_retries = misc.as_int(self._conf.get('max_retries')) except TypeError: self._max_retries = 0 @staticmethod def _create_engine(conf): # NOTE(harlowja): copy the internal one so that we don't modify it via # all the popping that will happen below. conf = copy.deepcopy(conf) engine_args = { 'echo': _as_bool(conf.pop('echo', False)), 'convert_unicode': _as_bool(conf.pop('convert_unicode', True)), 'pool_recycle': 3600, } if 'idle_timeout' in conf: idle_timeout = misc.as_int(conf.pop('idle_timeout')) engine_args['pool_recycle'] = idle_timeout sql_connection = conf.pop('connection') e_url = sa.engine.url.make_url(sql_connection) if 'sqlite' in e_url.drivername: engine_args["poolclass"] = sa_pool.NullPool # Adjustments for in-memory sqlite usage. 
if sql_connection.lower().strip() in SQLITE_IN_MEMORY: engine_args["poolclass"] = sa_pool.StaticPool engine_args["connect_args"] = {'check_same_thread': False} else: for (k, lookup_key) in [('pool_size', 'max_pool_size'), ('max_overflow', 'max_overflow'), ('pool_timeout', 'pool_timeout')]: if lookup_key in conf: engine_args[k] = misc.as_int(conf.pop(lookup_key)) if 'isolation_level' not in conf: # Check driver name exact matches first, then try driver name # partial matches... txn_isolation_levels = conf.pop('isolation_levels', DEFAULT_TXN_ISOLATION_LEVELS) level_applied = False for (driver, level) in six.iteritems(txn_isolation_levels): if driver == e_url.drivername: engine_args['isolation_level'] = level level_applied = True break if not level_applied: for (driver, level) in six.iteritems(txn_isolation_levels): if e_url.drivername.find(driver) != -1: engine_args['isolation_level'] = level break else: engine_args['isolation_level'] = conf.pop('isolation_level') # If the configuration dict specifies any additional engine args # or engine arg overrides make sure we merge them in. engine_args.update(conf.pop('engine_args', {})) engine = sa.create_engine(sql_connection, **engine_args) log_statements = conf.pop('log_statements', False) if _as_bool(log_statements): log_statements_level = conf.pop("log_statements_level", logging.TRACE) sa.event.listen(engine, "before_cursor_execute", functools.partial(_log_statements, log_statements_level)) checkin_yield = conf.pop('checkin_yield', eventlet_utils.EVENTLET_AVAILABLE) if _as_bool(checkin_yield): sa.event.listen(engine, 'checkin', _thread_yield) if 'mysql' in e_url.drivername: if _as_bool(conf.pop('checkout_ping', True)): sa.event.listen(engine, 'checkout', _ping_listener) mode = None if 'mysql_sql_mode' in conf: mode = conf.pop('mysql_sql_mode') if mode is not None: sa.event.listen(engine, 'connect', functools.partial(_set_sql_mode, mode)) return engine @property def engine(self): return self._engine def get_connection(self): conn = Connection(self, upgrade_lock=self._upgrade_lock) if not self._validated: conn.validate(max_retries=self._max_retries) self._validated = True return conn def close(self): # NOTE(harlowja): Only dispose of the engine if we actually own the # engine in the first place. If the user passed in their own engine # we should not be disposing it on their behalf... if self._owns_engine: self._engine.dispose() self._validated = False class Connection(base.Connection): def __init__(self, backend, upgrade_lock): self._backend = backend self._upgrade_lock = upgrade_lock self._engine = backend.engine self._metadata = sa.MetaData() self._tables = tables.fetch(self._metadata) self._converter = _Alchemist(self._tables) @property def backend(self): return self._backend def validate(self, max_retries=0): """Performs basic **connection** validation of a sqlalchemy engine.""" def _retry_on_exception(exc): LOG.warning("Engine connection (validate) failed due to '%s'", exc) if isinstance(exc, sa_exc.OperationalError) and \ _is_db_connection_error(six.text_type(exc.args[0])): # We may be able to fix this by retrying... return True if isinstance(exc, (sa_exc.TimeoutError, sa_exc.ResourceClosedError, sa_exc.DisconnectionError)): # We may be able to fix this by retrying... return True # Other failures we likely can't fix by retrying... 
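# (Added note: anything else -- for example sqlalchemy's IntegrityError
# or ProgrammingError -- is not a transient connection problem, so it is
# surfaced to the caller instead of being retried.)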
return False @tenacity.retry( stop=tenacity.stop_after_attempt(max(0, int(max_retries))), wait=tenacity.wait_exponential(), reraise=True, retry=tenacity.retry_if_exception(_retry_on_exception) ) def _try_connect(engine): # See if we can make a connection happen. # # NOTE(harlowja): note that even though we are connecting # once it does not mean that we will be able to connect in # the future, so this is more of a sanity test and is not # complete connection insurance. with contextlib.closing(engine.connect()): pass _try_connect(self._engine) def upgrade(self): try: with self._upgrade_lock: with contextlib.closing(self._engine.connect()) as conn: # NOTE(imelnikov): Alembic does not support SQLite, # and we don't recommend to use SQLite in production # deployments, so migrations are rarely needed # for SQLite. So we don't bother about working around # SQLite limitations, and create the database directly # from the tables when it is in use... if 'sqlite' in self._engine.url.drivername: self._metadata.create_all(bind=conn) else: migration.db_sync(conn) except sa_exc.SQLAlchemyError: exc.raise_with_cause(exc.StorageFailure, "Failed upgrading database version") def clear_all(self): try: logbooks = self._tables.logbooks with self._engine.begin() as conn: conn.execute(logbooks.delete()) except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed clearing all entries") def update_atom_details(self, atom_detail): try: atomdetails = self._tables.atomdetails with self._engine.begin() as conn: q = (sql.select([atomdetails]). where(atomdetails.c.uuid == atom_detail.uuid)) row = conn.execute(q).first() if not row: raise exc.NotFound("No atom details found with uuid" " '%s'" % atom_detail.uuid) e_ad = self._converter.convert_atom_detail(row) self._update_atom_details(conn, atom_detail, e_ad) return e_ad except sa_exc.SQLAlchemyError: exc.raise_with_cause(exc.StorageFailure, "Failed updating atom details" " with uuid '%s'" % atom_detail.uuid) def _insert_flow_details(self, conn, fd, parent_uuid): value = fd.to_dict() value['parent_uuid'] = parent_uuid conn.execute(sql.insert(self._tables.flowdetails, value)) for ad in fd: self._insert_atom_details(conn, ad, fd.uuid) def _insert_atom_details(self, conn, ad, parent_uuid): value = ad.to_dict() value['parent_uuid'] = parent_uuid value['atom_type'] = models.atom_detail_type(ad) conn.execute(sql.insert(self._tables.atomdetails, value)) def _update_atom_details(self, conn, ad, e_ad): e_ad.merge(ad) conn.execute(sql.update(self._tables.atomdetails) .where(self._tables.atomdetails.c.uuid == e_ad.uuid) .values(e_ad.to_dict())) def _update_flow_details(self, conn, fd, e_fd): e_fd.merge(fd) conn.execute(sql.update(self._tables.flowdetails) .where(self._tables.flowdetails.c.uuid == e_fd.uuid) .values(e_fd.to_dict())) for ad in fd: e_ad = e_fd.find(ad.uuid) if e_ad is None: e_fd.add(ad) self._insert_atom_details(conn, ad, fd.uuid) else: self._update_atom_details(conn, ad, e_ad) def update_flow_details(self, flow_detail): try: flowdetails = self._tables.flowdetails with self._engine.begin() as conn: q = (sql.select([flowdetails]). 
where(flowdetails.c.uuid == flow_detail.uuid)) row = conn.execute(q).first() if not row: raise exc.NotFound("No flow details found with" " uuid '%s'" % flow_detail.uuid) e_fd = self._converter.convert_flow_detail(row) self._converter.populate_flow_detail(conn, e_fd) self._update_flow_details(conn, flow_detail, e_fd) return e_fd except sa_exc.SQLAlchemyError: exc.raise_with_cause(exc.StorageFailure, "Failed updating flow details with" " uuid '%s'" % flow_detail.uuid) def destroy_logbook(self, book_uuid): try: logbooks = self._tables.logbooks with self._engine.begin() as conn: q = logbooks.delete().where(logbooks.c.uuid == book_uuid) r = conn.execute(q) if r.rowcount == 0: raise exc.NotFound("No logbook found with" " uuid '%s'" % book_uuid) except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed destroying logbook '%s'" % book_uuid) def save_logbook(self, book): try: logbooks = self._tables.logbooks with self._engine.begin() as conn: q = (sql.select([logbooks]). where(logbooks.c.uuid == book.uuid)) row = conn.execute(q).first() if row: e_lb = self._converter.convert_book(row) self._converter.populate_book(conn, e_lb) e_lb.merge(book) conn.execute(sql.update(logbooks) .where(logbooks.c.uuid == e_lb.uuid) .values(e_lb.to_dict())) for fd in book: e_fd = e_lb.find(fd.uuid) if e_fd is None: e_lb.add(fd) self._insert_flow_details(conn, fd, e_lb.uuid) else: self._update_flow_details(conn, fd, e_fd) return e_lb else: conn.execute(sql.insert(logbooks, book.to_dict())) for fd in book: self._insert_flow_details(conn, fd, book.uuid) return book except sa_exc.DBAPIError: exc.raise_with_cause( exc.StorageFailure, "Failed saving logbook '%s'" % book.uuid) def get_logbook(self, book_uuid, lazy=False): try: logbooks = self._tables.logbooks with contextlib.closing(self._engine.connect()) as conn: q = (sql.select([logbooks]). where(logbooks.c.uuid == book_uuid)) row = conn.execute(q).first() if not row: raise exc.NotFound("No logbook found with" " uuid '%s'" % book_uuid) book = self._converter.convert_book(row) if not lazy: self._converter.populate_book(conn, book) return book except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed getting logbook '%s'" % book_uuid) def get_logbooks(self, lazy=False): gathered = [] try: with contextlib.closing(self._engine.connect()) as conn: q = sql.select([self._tables.logbooks]) for row in conn.execute(q): book = self._converter.convert_book(row) if not lazy: self._converter.populate_book(conn, book) gathered.append(book) except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed getting logbooks") for book in gathered: yield book def get_flows_for_book(self, book_uuid, lazy=False): gathered = [] try: with contextlib.closing(self._engine.connect()) as conn: for fd in self._converter.flow_query_iter(conn, book_uuid): if not lazy: self._converter.populate_flow_detail(conn, fd) gathered.append(fd) except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed getting flow details in" " logbook '%s'" % book_uuid) for flow_details in gathered: yield flow_details def get_flow_details(self, fd_uuid, lazy=False): try: flowdetails = self._tables.flowdetails with self._engine.begin() as conn: q = (sql.select([flowdetails]). 
where(flowdetails.c.uuid == fd_uuid)) row = conn.execute(q).first() if not row: raise exc.NotFound("No flow details found with uuid" " '%s'" % fd_uuid) fd = self._converter.convert_flow_detail(row) if not lazy: self._converter.populate_flow_detail(conn, fd) return fd except sa_exc.SQLAlchemyError: exc.raise_with_cause(exc.StorageFailure, "Failed getting flow details with" " uuid '%s'" % fd_uuid) def get_atom_details(self, ad_uuid): try: atomdetails = self._tables.atomdetails with self._engine.begin() as conn: q = (sql.select([atomdetails]). where(atomdetails.c.uuid == ad_uuid)) row = conn.execute(q).first() if not row: raise exc.NotFound("No atom details found with uuid" " '%s'" % ad_uuid) return self._converter.convert_atom_detail(row) except sa_exc.SQLAlchemyError: exc.raise_with_cause(exc.StorageFailure, "Failed getting atom details with" " uuid '%s'" % ad_uuid) def get_atoms_for_flow(self, fd_uuid): gathered = [] try: with contextlib.closing(self._engine.connect()) as conn: for ad in self._converter.atom_query_iter(conn, fd_uuid): gathered.append(ad) except sa_exc.DBAPIError: exc.raise_with_cause(exc.StorageFailure, "Failed getting atom details in flow" " detail '%s'" % fd_uuid) for atom_details in gathered: yield atom_details def close(self): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/impl_zookeeper.py0000664000175000017500000001372000000000000024733 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 AT&T Labs All Rights Reserved. # Copyright (C) 2015 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from kazoo import exceptions as k_exc from kazoo.protocol import paths from oslo_serialization import jsonutils from taskflow import exceptions as exc from taskflow.persistence import path_based from taskflow.utils import kazoo_utils as k_utils from taskflow.utils import misc MIN_ZK_VERSION = (3, 4, 0) class ZkBackend(path_based.PathBasedBackend): """A zookeeper-backed backend. Example configuration:: conf = { "hosts": "192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181", "path": "/taskflow", } Do note that the creation of a kazoo client is achieved by :py:func:`~taskflow.utils.kazoo_utils.make_client` and the transfer of this backend configuration to that function to make a client may happen at ``__init__`` time. This implies that certain parameters from this backend configuration may be provided to :py:func:`~taskflow.utils.kazoo_utils.make_client` such that if a client was not provided by the caller one will be created according to :py:func:`~taskflow.utils.kazoo_utils.make_client`'s specification """ #: Default path used when none is provided. 
DEFAULT_PATH = '/taskflow' def __init__(self, conf, client=None): super(ZkBackend, self).__init__(conf) if not paths.isabs(self._path): raise ValueError("Zookeeper path must be absolute") if client is not None: self._client = client self._owned = False else: self._client = k_utils.make_client(self._conf) self._owned = True self._validated = False def get_connection(self): conn = ZkConnection(self, self._client, self._conf) if not self._validated: conn.validate() self._validated = True return conn def close(self): self._validated = False if not self._owned: return try: k_utils.finalize_client(self._client) except (k_exc.KazooException, k_exc.ZookeeperError): exc.raise_with_cause(exc.StorageFailure, "Unable to finalize client") class ZkConnection(path_based.PathBasedConnection): def __init__(self, backend, client, conf): super(ZkConnection, self).__init__(backend) self._conf = conf self._client = client with self._exc_wrapper(): # NOOP if already started. self._client.start() @contextlib.contextmanager def _exc_wrapper(self): """Exception context-manager which wraps kazoo exceptions. This is used to capture and wrap any kazoo specific exceptions and then group them into corresponding taskflow exceptions (not doing that would expose the underlying kazoo exception model). """ try: yield except self._client.handler.timeout_exception: exc.raise_with_cause(exc.StorageFailure, "Storage backend timeout") except k_exc.SessionExpiredError: exc.raise_with_cause(exc.StorageFailure, "Storage backend session has expired") except k_exc.NoNodeError: exc.raise_with_cause(exc.NotFound, "Storage backend node not found") except k_exc.NodeExistsError: exc.raise_with_cause(exc.Duplicate, "Storage backend duplicate node") except (k_exc.KazooException, k_exc.ZookeeperError): exc.raise_with_cause(exc.StorageFailure, "Storage backend internal error") def _join_path(self, *parts): return paths.join(*parts) def _get_item(self, path): with self._exc_wrapper(): data, _ = self._client.get(path) return misc.decode_json(data) def _set_item(self, path, value, transaction): data = misc.binary_encode(jsonutils.dumps(value)) if not self._client.exists(path): transaction.create(path, data) else: transaction.set_data(path, data) def _del_tree(self, path, transaction): for child in self._get_children(path): self._del_tree(self._join_path(path, child), transaction) transaction.delete(path) def _get_children(self, path): with self._exc_wrapper(): return self._client.get_children(path) def _ensure_path(self, path): with self._exc_wrapper(): self._client.ensure_path(path) def _create_link(self, src_path, dest_path, transaction): if not self._client.exists(dest_path): transaction.create(dest_path) @contextlib.contextmanager def _transaction(self): transaction = self._client.transaction() with self._exc_wrapper(): yield transaction k_utils.checked_commit(transaction) def validate(self): with self._exc_wrapper(): try: if self._conf.get('check_compatible', True): k_utils.check_compatible(self._client, MIN_ZK_VERSION) except exc.IncompatibleVersion: exc.raise_with_cause(exc.StorageFailure, "Backend storage is" " not a compatible version") ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.632042 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/0000775000175000017500000000000000000000000023474 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 
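# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original modules): rough usage of the
# zookeeper backend defined in impl_zookeeper.py above.  The ensemble address
# below is an assumption made for the example; any reachable zookeeper
# deployment at or above MIN_ZK_VERSION (3.4.0) would do.
import contextlib

from taskflow.persistence.backends import impl_zookeeper


def _zookeeper_backend_example():
    conf = {
        "hosts": "localhost:2181",  # assumed local ensemble (example only)
        "path": "/taskflow",
    }
    backend = impl_zookeeper.ZkBackend(conf)
    try:
        with contextlib.closing(backend.get_connection()) as conn:
            conn.upgrade()  # ensure the root book/flow/atom paths exist
    finally:
        backend.close()
# ---------------------------------------------------------------------------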
taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/__init__.py0000664000175000017500000000000000000000000025573 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.632042 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/0000775000175000017500000000000000000000000025070 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/README0000664000175000017500000000063300000000000025752 0ustar00zuulzuul00000000000000Please see https://alembic.readthedocs.org/en/latest/index.html for general documentation To create alembic migrations you need to have alembic installed and available in PATH: # pip install alembic $ cd ./taskflow/persistence/backends/sqlalchemy/alembic $ alembic revision -m "migration_description" See Operation Reference https://alembic.readthedocs.org/en/latest/ops.html#ops for a short list of commands ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/alembic.ini0000664000175000017500000000064400000000000027171 0ustar00zuulzuul00000000000000# A generic, single database configuration. [alembic] # path to migration scripts script_location = %(here)s # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # This is set inside of migration script # sqlalchemy.url = driver://user:pass@localhost/dbname ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/env.py0000664000175000017500000000464300000000000026241 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from alembic import context from sqlalchemy import engine_from_config, pool # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel # target_metadata = mymodel.Base.metadata target_metadata = None # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. 
""" url = config.get_main_option("sqlalchemy.url") context.configure(url=url) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ connectable = config.attributes.get('connection', None) if connectable is None: connectable = engine_from_config( config.get_section(config.config_ini_section), prefix='sqlalchemy.', poolclass=pool.NullPool) with connectable.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata) with context.begin_transaction(): context.run_migrations() if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/script.py.mako0000664000175000017500000000063400000000000027677 0ustar00zuulzuul00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.636042 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000026740 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022300000000000011452 xustar0000000000000000125 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/0bc3e1a3c135_set_result_meduimtext_type.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/0bc3e1a3c135_set_result_med0000664000175000017500000000244300000000000033561 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """set_result_meduimtext_type Revision ID: 0bc3e1a3c135 Revises: 2ad4984f2864 Create Date: 2019-08-08 16:11:36.221164 """ # revision identifiers, used by Alembic. 
revision = '0bc3e1a3c135' down_revision = '2ad4984f2864' from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import mysql def upgrade(): bind = op.get_bind() engine = bind.engine if engine.name == 'mysql': op.alter_column('atomdetails', 'results', type_=mysql.LONGTEXT, existing_nullable=True) def downgrade(): bind = op.get_bind() engine = bind.engine if engine.name == 'mysql': op.alter_column('atomdetails', 'results', type_=sa.Text(), existing_nullable=True) ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/14b227d79a87_add_intention_column.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/14b227d79a87_add_intention_0000664000175000017500000000234700000000000033414 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # revision identifiers, used by Alembic. revision = '14b227d79a87' down_revision = '84d6e888850' from alembic import op import sqlalchemy as sa from taskflow import states def upgrade(): bind = op.get_bind() intention_type = sa.Enum(*states.INTENTIONS, name='intention_type') column = sa.Column('intention', intention_type, server_default=states.EXECUTE) impl = intention_type.dialect_impl(bind.dialect) impl.create(bind, checkfirst=True) op.add_column('taskdetails', column) def downgrade(): op.drop_column('taskdetails', 'intention') ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_exception_an.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_except0000664000175000017500000000263500000000000033415 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Replace exception and stacktrace with failure column Revision ID: 1c783c0c2875 Revises: 1cea328f0f65 Create Date: 2013-09-26 12:33:30.970122 """ # revision identifiers, used by Alembic. 
revision = '1c783c0c2875' down_revision = '1cea328f0f65' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('taskdetails', sa.Column('failure', sa.Text(), nullable=True)) op.drop_column('taskdetails', 'exception') op.drop_column('taskdetails', 'stacktrace') def downgrade(): op.drop_column('taskdetails', 'failure') op.add_column('taskdetails', sa.Column('stacktrace', sa.Text(), nullable=True)) op.add_column('taskdetails', sa.Column('exception', sa.Text(), nullable=True)) ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logbook_deta.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logboo0000664000175000017500000001252300000000000033555 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """initial_logbook_details_tables Revision ID: 1cea328f0f65 Revises: None Create Date: 2013-08-23 11:41:49.207087 """ # revision identifiers, used by Alembic. revision = '1cea328f0f65' down_revision = None import logging from alembic import op import sqlalchemy as sa from taskflow.persistence.backends.sqlalchemy import tables LOG = logging.getLogger(__name__) def _get_indexes(): # Ensure all uuids are indexed since they are what is typically looked # up and fetched, so attempt to ensure that that is done quickly. 
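# (Added illustration: each descriptor below is expanded with
# ``op.create_index(**index_descriptor)`` in upgrade(), so the first entry
# is roughly equivalent to
# ``CREATE INDEX logbook_uuid_idx ON logbooks (uuid)``.)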
indexes = [ { 'index_name': 'logbook_uuid_idx', 'table_name': 'logbooks', 'columns': ['uuid'], }, { 'index_name': 'flowdetails_uuid_idx', 'table_name': 'flowdetails', 'columns': ['uuid'], }, { 'index_name': 'taskdetails_uuid_idx', 'table_name': 'taskdetails', 'columns': ['uuid'], }, ] return indexes def _get_foreign_keys(): f_keys = [ # Flow details uuid -> logbook parent uuid { 'constraint_name': 'flowdetails_ibfk_1', 'source_table': 'flowdetails', 'referent_table': 'logbooks', 'local_cols': ['parent_uuid'], 'remote_cols': ['uuid'], 'ondelete': 'CASCADE', }, # Task details uuid -> flow details parent uuid { 'constraint_name': 'taskdetails_ibfk_1', 'source_table': 'taskdetails', 'referent_table': 'flowdetails', 'local_cols': ['parent_uuid'], 'remote_cols': ['uuid'], 'ondelete': 'CASCADE', }, ] return f_keys def upgrade(): op.create_table('logbooks', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('meta', sa.Text(), nullable=True), sa.Column('name', sa.String(length=tables.NAME_LENGTH), nullable=True), sa.Column('uuid', sa.String(length=tables.UUID_LENGTH), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table('flowdetails', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('parent_uuid', sa.String(length=tables.UUID_LENGTH)), sa.Column('meta', sa.Text(), nullable=True), sa.Column('state', sa.String(length=tables.STATE_LENGTH), nullable=True), sa.Column('name', sa.String(length=tables.NAME_LENGTH), nullable=True), sa.Column('uuid', sa.String(length=tables.UUID_LENGTH), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table('taskdetails', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('parent_uuid', sa.String(length=tables.UUID_LENGTH)), sa.Column('meta', sa.Text(), nullable=True), sa.Column('name', sa.String(length=tables.NAME_LENGTH), nullable=True), sa.Column('results', sa.Text(), nullable=True), sa.Column('version', sa.String(length=tables.VERSION_LENGTH), nullable=True), sa.Column('stacktrace', sa.Text(), nullable=True), sa.Column('exception', sa.Text(), nullable=True), sa.Column('state', sa.String(length=tables.STATE_LENGTH), nullable=True), sa.Column('uuid', sa.String(length=tables.UUID_LENGTH), primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') try: for fkey_descriptor in _get_foreign_keys(): op.create_foreign_key(**fkey_descriptor) except NotImplementedError as e: LOG.warning("Foreign keys are not supported: %s", e) try: for index_descriptor in _get_indexes(): op.create_index(**index_descriptor) except NotImplementedError as e: LOG.warning("Indexes are not supported: %s", e) def downgrade(): for table in ['logbooks', 'flowdetails', 'taskdetails']: op.drop_table(table) ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/2ad4984f2864_switch_postgres_to_json_native.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/2ad4984f2864_switch_postgre0000664000175000017500000000336200000000000033504 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Switch postgres to json native type. Revision ID: 2ad4984f2864 Revises: 3162c0f3f8e4 Create Date: 2015-06-04 13:08:36.667948 """ # revision identifiers, used by Alembic. revision = '2ad4984f2864' down_revision = '3162c0f3f8e4' from alembic import op _ALTER_TO_JSON_TPL = 'ALTER TABLE %s ALTER COLUMN %s TYPE JSON USING %s::JSON' _TABLES_COLS = tuple([ ('logbooks', 'meta'), ('flowdetails', 'meta'), ('atomdetails', 'meta'), ('atomdetails', 'failure'), ('atomdetails', 'revert_failure'), ('atomdetails', 'results'), ('atomdetails', 'revert_results'), ]) _ALTER_TO_TEXT_TPL = 'ALTER TABLE %s ALTER COLUMN %s TYPE TEXT' def upgrade(): b = op.get_bind() if b.dialect.name.startswith('postgresql'): for (table_name, col_name) in _TABLES_COLS: q = _ALTER_TO_JSON_TPL % (table_name, col_name, col_name) op.execute(q) def downgrade(): b = op.get_bind() if b.dialect.name.startswith('postgresql'): for (table_name, col_name) in _TABLES_COLS: q = _ALTER_TO_TEXT_TPL % (table_name, col_name) op.execute(q) ././@PaxHeader0000000000000000000000000000023700000000000011457 xustar0000000000000000137 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/3162c0f3f8e4_add_revert_results_and_revert_failure_.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/3162c0f3f8e4_add_revert_res0000664000175000017500000000244300000000000033474 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add 'revert_results' and 'revert_failure' atom detail column. Revision ID: 3162c0f3f8e4 Revises: 589dccdf2b6e Create Date: 2015-06-17 15:52:56.575245 """ # revision identifiers, used by Alembic. revision = '3162c0f3f8e4' down_revision = '589dccdf2b6e' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('atomdetails', sa.Column('revert_results', sa.Text(), nullable=True)) op.add_column('atomdetails', sa.Column('revert_failure', sa.Text(), nullable=True)) def downgrade(): op.drop_column('atomdetails', 'revert_results') op.drop_column('atomdetails', 'revert_failure') ././@PaxHeader0000000000000000000000000000023200000000000011452 xustar0000000000000000132 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/589dccdf2b6e_rename_taskdetails_to_atomdetails.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/589dccdf2b6e_rename_taskdet0000664000175000017500000000202700000000000033722 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Rename taskdetails to atomdetails Revision ID: 589dccdf2b6e Revises: 14b227d79a87 Create Date: 2014-03-19 11:49:16.533227 """ # revision identifiers, used by Alembic. revision = '589dccdf2b6e' down_revision = '14b227d79a87' from alembic import op def upgrade(): op.rename_table("taskdetails", "atomdetails") def downgrade(): op.rename_table("atomdetails", "taskdetails") ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/6df9422fcb43_fix_flowdetails_meta_size.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/6df9422fcb43_fix_flowdetail0000664000175000017500000000206000000000000033563 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """fix flowdetails meta size Revision ID: 6df9422fcb43 Revises: 0bc3e1a3c135 Create Date: 2021-04-27 14:51:53.618249 """ # revision identifiers, used by Alembic. revision = '6df9422fcb43' down_revision = '0bc3e1a3c135' from alembic import op from sqlalchemy.dialects import mysql def upgrade(): bind = op.get_bind() engine = bind.engine if engine.name == 'mysql': op.alter_column('flowdetails', 'meta', type_=mysql.LONGTEXT, existing_nullable=True) ././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/84d6e888850_add_task_detail_type.py 22 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/84d6e888850_add_task_detail0000664000175000017500000000244300000000000033401 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add task detail type Revision ID: 84d6e888850 Revises: 1c783c0c2875 Create Date: 2014-01-20 18:12:42.503267 """ # revision identifiers, used by Alembic. 
revision = '84d6e888850' down_revision = '1c783c0c2875' from alembic import op import sqlalchemy as sa from taskflow.persistence import models def upgrade(): atom_types = sa.Enum(*models.ATOM_TYPES, name='atom_types') column = sa.Column('atom_type', atom_types) bind = op.get_bind() impl = atom_types.dialect_impl(bind.dialect) impl.create(bind, checkfirst=True) op.add_column('taskdetails', column) def downgrade(): op.drop_column('taskdetails', 'atom_type') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/alembic/versions/README0000664000175000017500000000004600000000000027620 0ustar00zuulzuul00000000000000Directory for alembic migration files ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/migration.py0000664000175000017500000000216400000000000026042 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" import os from alembic import command from alembic import config def _make_alembic_config(): path = os.path.join(os.path.dirname(__file__), 'alembic', 'alembic.ini') return config.Config(path) def db_sync(connection, revision='head'): cfg = _make_alembic_config() cfg.attributes['connection'] = connection command.upgrade(cfg, revision) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/backends/sqlalchemy/tables.py0000664000175000017500000001132300000000000025320 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils from sqlalchemy import Table, Column, String, ForeignKey, DateTime, Enum from sqlalchemy_utils.types import json as json_type from taskflow.persistence import models from taskflow import states Tables = collections.namedtuple('Tables', ['logbooks', 'flowdetails', 'atomdetails']) # Column length limits... 
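# (Added note: these limits feed the ``String(length=...)`` column
# declarations in fetch() below; UUID_LENGTH of 64 comfortably holds the
# 36-character uuids produced by ``uuidutils.generate_uuid``.)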
NAME_LENGTH = 255 UUID_LENGTH = 64 STATE_LENGTH = 255 VERSION_LENGTH = 64 class JSONType(json_type.JSONType): """Customized JSONType using oslo.serialization for json operations""" def process_bind_param(self, value, dialect): if dialect.name == 'postgresql' and json_type.has_postgres_json: return value if value is not None: value = jsonutils.dumps(value) return value def process_result_value(self, value, dialect): if dialect.name == 'postgresql': return value if value is not None: value = jsonutils.loads(value) return value def fetch(metadata): """Returns the master set of table objects (which is also there schema).""" logbooks = Table('logbooks', metadata, Column('created_at', DateTime, default=timeutils.utcnow), Column('updated_at', DateTime, onupdate=timeutils.utcnow), Column('meta', JSONType), Column('name', String(length=NAME_LENGTH)), Column('uuid', String(length=UUID_LENGTH), primary_key=True, nullable=False, unique=True, default=uuidutils.generate_uuid)) flowdetails = Table('flowdetails', metadata, Column('created_at', DateTime, default=timeutils.utcnow), Column('updated_at', DateTime, onupdate=timeutils.utcnow), Column('parent_uuid', String(length=UUID_LENGTH), ForeignKey('logbooks.uuid', ondelete='CASCADE')), Column('meta', JSONType), Column('name', String(length=NAME_LENGTH)), Column('state', String(length=STATE_LENGTH)), Column('uuid', String(length=UUID_LENGTH), primary_key=True, nullable=False, unique=True, default=uuidutils.generate_uuid)) atomdetails = Table('atomdetails', metadata, Column('created_at', DateTime, default=timeutils.utcnow), Column('updated_at', DateTime, onupdate=timeutils.utcnow), Column('meta', JSONType), Column('parent_uuid', String(length=UUID_LENGTH), ForeignKey('flowdetails.uuid', ondelete='CASCADE')), Column('name', String(length=NAME_LENGTH)), Column('version', String(length=VERSION_LENGTH)), Column('state', String(length=STATE_LENGTH)), Column('uuid', String(length=UUID_LENGTH), primary_key=True, nullable=False, unique=True, default=uuidutils.generate_uuid), Column('failure', JSONType), Column('results', JSONType), Column('revert_results', JSONType), Column('revert_failure', JSONType), Column('atom_type', Enum(*models.ATOM_TYPES, name='atom_types')), Column('intention', Enum(*states.INTENTIONS, name='intentions'))) return Tables(logbooks, flowdetails, atomdetails) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/base.py0000664000175000017500000000777200000000000021061 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
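# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): binding the table
# definitions from tables.py above to a MetaData and creating them on a
# throwaway engine, which mirrors what the sqlalchemy Connection.upgrade()
# shown earlier does for sqlite.
import sqlalchemy as sa

from taskflow.persistence.backends.sqlalchemy import tables


def _create_schema_example():
    metadata = sa.MetaData()
    tbls = tables.fetch(metadata)     # namedtuple: logbooks, flowdetails, atomdetails
    engine = sa.create_engine("sqlite://")
    metadata.create_all(bind=engine)  # emits CREATE TABLE for all three tables
    return tbls
# ---------------------------------------------------------------------------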
import abc import six from taskflow.persistence import models @six.add_metaclass(abc.ABCMeta) class Backend(object): """Base class for persistence backends.""" def __init__(self, conf): if not conf: conf = {} if not isinstance(conf, dict): raise TypeError("Configuration dictionary expected not '%s' (%s)" % (conf, type(conf))) self._conf = conf @abc.abstractmethod def get_connection(self): """Return a Connection instance based on the configuration settings.""" @abc.abstractmethod def close(self): """Closes any resources this backend has open.""" @six.add_metaclass(abc.ABCMeta) class Connection(object): """Base class for backend connections.""" @abc.abstractproperty def backend(self): """Returns the backend this connection is associated with.""" @abc.abstractmethod def close(self): """Closes any resources this connection has open.""" @abc.abstractmethod def upgrade(self): """Migrate the persistence backend to the most recent version.""" @abc.abstractmethod def clear_all(self): """Clear all entries from this backend.""" @abc.abstractmethod def validate(self): """Validates that a backend is still ok to be used. The semantics of this *may* vary depending on the backend. On failure a backend specific exception should be raised that will indicate why the failure occurred. """ @abc.abstractmethod def update_atom_details(self, atom_detail): """Updates a given atom details and returns the updated version. NOTE(harlowja): the details that is to be updated must already have been created by saving a flow details with the given atom detail inside of it. """ @abc.abstractmethod def update_flow_details(self, flow_detail): """Updates a given flow details and returns the updated version. NOTE(harlowja): the details that is to be updated must already have been created by saving a logbook with the given flow detail inside of it. """ @abc.abstractmethod def save_logbook(self, book): """Saves a logbook, and all its contained information.""" @abc.abstractmethod def destroy_logbook(self, book_uuid): """Deletes/destroys a logbook matching the given uuid.""" @abc.abstractmethod def get_logbook(self, book_uuid, lazy=False): """Fetches a logbook object matching the given uuid.""" @abc.abstractmethod def get_logbooks(self, lazy=False): """Return an iterable of logbook objects.""" @abc.abstractmethod def get_flows_for_book(self, book_uuid): """Return an iterable of flowdetails for a given logbook uuid.""" @abc.abstractmethod def get_flow_details(self, fd_uuid, lazy=False): """Fetches a flowdetails object matching the given uuid.""" @abc.abstractmethod def get_atom_details(self, ad_uuid): """Fetches a atomdetails object matching the given uuid.""" @abc.abstractmethod def get_atoms_for_flow(self, fd_uuid): """Return an iterable of atomdetails for a given flowdetails uuid.""" def _format_atom(atom_detail): return { 'atom': atom_detail.to_dict(), 'type': models.atom_detail_type(atom_detail), } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/models.py0000664000175000017500000011760600000000000021430 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import copy import os from oslo_utils import timeutils from oslo_utils import uuidutils import six from taskflow import exceptions as exc from taskflow import states from taskflow.types import failure as ft from taskflow.utils import misc # Internal helpers... def _format_meta(metadata, indent): """Format the common metadata dictionary in the same manner.""" if not metadata: return [] lines = [ '%s- metadata:' % (" " * indent), ] for (k, v) in metadata.items(): # Progress for now is a special snowflake and will be formatted # in percent format. if k == 'progress' and isinstance(v, misc.NUMERIC_TYPES): v = "%0.2f%%" % (v * 100.0) lines.append("%s+ %s = %s" % (" " * (indent + 2), k, v)) return lines def _format_shared(obj, indent): """Format the common shared attributes in the same manner.""" if obj is None: return [] lines = [] for attr_name in ("uuid", "state"): if not hasattr(obj, attr_name): continue lines.append("%s- %s = %s" % (" " * indent, attr_name, getattr(obj, attr_name))) return lines def _is_all_none(arg, *args): if arg is not None: return False for more_arg in args: if more_arg is not None: return False return True def _copy_function(deep_copy): if deep_copy: return copy.deepcopy else: return lambda x: x def _safe_marshal_time(when): if not when: return None return timeutils.marshall_now(now=when) def _safe_unmarshal_time(when): if not when: return None return timeutils.unmarshall_time(when) def _fix_meta(data): # Handle the case where older schemas allowed this to be non-dict by # correcting this case by replacing it with a dictionary when a non-dict # is found. meta = data.get('meta') if not isinstance(meta, dict): meta = {} return meta class LogBook(object): """A collection of flow details and associated metadata. Typically this class contains a collection of flow detail entries for a given engine (or job) so that those entities can track what 'work' has been completed for resumption, reverting and miscellaneous tracking purposes. The data contained within this class need **not** be persisted to the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save occurs via some backend connection. NOTE(harlowja): the naming of this class is analogous to a ship's log or a similar type of record used in detailing work that has been completed (or work that has not been completed). :ivar created_at: A ``datetime.datetime`` object of when this logbook was created. :ivar updated_at: A ``datetime.datetime`` object of when this logbook was last updated at. :ivar meta: A dictionary of meta-data associated with this logbook. """ def __init__(self, name, uuid=None): if uuid: self._uuid = uuid else: self._uuid = uuidutils.generate_uuid() self._name = name self._flowdetails_by_id = {} self.created_at = timeutils.utcnow() self.updated_at = None self.meta = {} def pformat(self, indent=0, linesep=os.linesep): """Pretty formats this logbook into a string. >>> from taskflow.persistence import models >>> tmp = models.LogBook("example") >>> print(tmp.pformat()) LogBook: 'example' - uuid = ... - created_at = ... 
""" cls_name = self.__class__.__name__ lines = ["%s%s: '%s'" % (" " * indent, cls_name, self.name)] lines.extend(_format_shared(self, indent=indent + 1)) lines.extend(_format_meta(self.meta, indent=indent + 1)) if self.created_at is not None: lines.append("%s- created_at = %s" % (" " * (indent + 1), self.created_at.isoformat())) if self.updated_at is not None: lines.append("%s- updated_at = %s" % (" " * (indent + 1), self.updated_at.isoformat())) for flow_detail in self: lines.append(flow_detail.pformat(indent=indent + 1, linesep=linesep)) return linesep.join(lines) def add(self, fd): """Adds a new flow detail into this logbook. NOTE(harlowja): if an existing flow detail exists with the same uuid the existing one will be overwritten with the newly provided one. Does not *guarantee* that the details will be immediately saved. """ self._flowdetails_by_id[fd.uuid] = fd self.updated_at = timeutils.utcnow() def find(self, flow_uuid): """Locate the flow detail corresponding to the given uuid. :returns: the flow detail with that uuid :rtype: :py:class:`.FlowDetail` (or ``None`` if not found) """ return self._flowdetails_by_id.get(flow_uuid, None) def merge(self, lb, deep_copy=False): """Merges the current object state with the given ones state. If ``deep_copy`` is provided as truthy then the local object will use ``copy.deepcopy`` to replace this objects local attributes with the provided objects attributes (**only** if there is a difference between this objects attributes and the provided attributes). If ``deep_copy`` is falsey (the default) then a reference copy will occur instead when a difference is detected. NOTE(harlowja): If the provided object is this object itself then **no** merging is done. Also note that this does **not** merge the flow details contained in either. :returns: this logbook (freshly merged with the incoming object) :rtype: :py:class:`.LogBook` """ if lb is self: return self copy_fn = _copy_function(deep_copy) if self.meta != lb.meta: self.meta = copy_fn(lb.meta) if lb.created_at != self.created_at: self.created_at = copy_fn(lb.created_at) if lb.updated_at != self.updated_at: self.updated_at = copy_fn(lb.updated_at) return self def to_dict(self, marshal_time=False): """Translates the internal state of this object to a ``dict``. NOTE(harlowja): The returned ``dict`` does **not** include any contained flow details. :returns: this logbook in ``dict`` form """ if not marshal_time: marshal_fn = lambda x: x else: marshal_fn = _safe_marshal_time return { 'name': self.name, 'meta': self.meta, 'uuid': self.uuid, 'updated_at': marshal_fn(self.updated_at), 'created_at': marshal_fn(self.created_at), } @classmethod def from_dict(cls, data, unmarshal_time=False): """Translates the given ``dict`` into an instance of this class. NOTE(harlowja): the ``dict`` provided should come from a prior call to :meth:`.to_dict`. 
:returns: a new logbook :rtype: :py:class:`.LogBook` """ if not unmarshal_time: unmarshal_fn = lambda x: x else: unmarshal_fn = _safe_unmarshal_time obj = cls(data['name'], uuid=data['uuid']) obj.updated_at = unmarshal_fn(data['updated_at']) obj.created_at = unmarshal_fn(data['created_at']) obj.meta = _fix_meta(data) return obj @property def uuid(self): """The unique identifer of this logbook.""" return self._uuid @property def name(self): """The name of this logbook.""" return self._name def __iter__(self): for fd in six.itervalues(self._flowdetails_by_id): yield fd def __len__(self): return len(self._flowdetails_by_id) def copy(self, retain_contents=True): """Copies this logbook. Creates a shallow copy of this logbook. If this logbook contains flow details and ``retain_contents`` is truthy (the default) then the flow details container will be shallow copied (the flow details contained there-in will **not** be copied). If ``retain_contents`` is falsey then the copied logbook will have **no** contained flow details (but it will have the rest of the local objects attributes copied). :returns: a new logbook :rtype: :py:class:`.LogBook` """ clone = copy.copy(self) if not retain_contents: clone._flowdetails_by_id = {} else: clone._flowdetails_by_id = self._flowdetails_by_id.copy() if self.meta: clone.meta = self.meta.copy() return clone class FlowDetail(object): """A collection of atom details and associated metadata. Typically this class contains a collection of atom detail entries that represent the atoms in a given flow structure (along with any other needed metadata relevant to that flow). The data contained within this class need **not** be persisted to the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save (or update) occurs via some backend connection. :ivar meta: A dictionary of meta-data associated with this flow detail. """ def __init__(self, name, uuid): self._uuid = uuid self._name = name self._atomdetails_by_id = {} # TODO(bnemec): This should be documented as an ivar, but can't be due # to https://github.com/sphinx-doc/sphinx/issues/2549 #: The state of the flow associated with this flow detail. self.state = None self.meta = {} def update(self, fd): """Updates the objects state to be the same as the given one. This will assign the private and public attributes of the given flow detail directly to this object (replacing any existing attributes in this object; even if they are the **same**). NOTE(harlowja): If the provided object is this object itself then **no** update is done. :returns: this flow detail :rtype: :py:class:`.FlowDetail` """ if fd is self: return self self._atomdetails_by_id = fd._atomdetails_by_id self.state = fd.state self.meta = fd.meta return self def pformat(self, indent=0, linesep=os.linesep): """Pretty formats this flow detail into a string. >>> from oslo_utils import uuidutils >>> from taskflow.persistence import models >>> flow_detail = models.FlowDetail("example", ... uuid=uuidutils.generate_uuid()) >>> print(flow_detail.pformat()) FlowDetail: 'example' - uuid = ... - state = ... 
""" cls_name = self.__class__.__name__ lines = ["%s%s: '%s'" % (" " * indent, cls_name, self.name)] lines.extend(_format_shared(self, indent=indent + 1)) lines.extend(_format_meta(self.meta, indent=indent + 1)) for atom_detail in self: lines.append(atom_detail.pformat(indent=indent + 1, linesep=linesep)) return linesep.join(lines) def merge(self, fd, deep_copy=False): """Merges the current object state with the given one's state. If ``deep_copy`` is provided as truthy then the local object will use ``copy.deepcopy`` to replace this objects local attributes with the provided objects attributes (**only** if there is a difference between this objects attributes and the provided attributes). If ``deep_copy`` is falsey (the default) then a reference copy will occur instead when a difference is detected. NOTE(harlowja): If the provided object is this object itself then **no** merging is done. Also this does **not** merge the atom details contained in either. :returns: this flow detail (freshly merged with the incoming object) :rtype: :py:class:`.FlowDetail` """ if fd is self: return self copy_fn = _copy_function(deep_copy) if self.meta != fd.meta: self.meta = copy_fn(fd.meta) if self.state != fd.state: # NOTE(imelnikov): states are just strings, no need to copy. self.state = fd.state return self def copy(self, retain_contents=True): """Copies this flow detail. Creates a shallow copy of this flow detail. If this detail contains flow details and ``retain_contents`` is truthy (the default) then the atom details container will be shallow copied (the atom details contained there-in will **not** be copied). If ``retain_contents`` is falsey then the copied flow detail will have **no** contained atom details (but it will have the rest of the local objects attributes copied). :returns: a new flow detail :rtype: :py:class:`.FlowDetail` """ clone = copy.copy(self) if not retain_contents: clone._atomdetails_by_id = {} else: clone._atomdetails_by_id = self._atomdetails_by_id.copy() if self.meta: clone.meta = self.meta.copy() return clone def to_dict(self): """Translates the internal state of this object to a ``dict``. NOTE(harlowja): The returned ``dict`` does **not** include any contained atom details. :returns: this flow detail in ``dict`` form """ return { 'name': self.name, 'meta': self.meta, 'state': self.state, 'uuid': self.uuid, } @classmethod def from_dict(cls, data): """Translates the given ``dict`` into an instance of this class. NOTE(harlowja): the ``dict`` provided should come from a prior call to :meth:`.to_dict`. :returns: a new flow detail :rtype: :py:class:`.FlowDetail` """ obj = cls(data['name'], data['uuid']) obj.state = data.get('state') obj.meta = _fix_meta(data) return obj def add(self, ad): """Adds a new atom detail into this flow detail. NOTE(harlowja): if an existing atom detail exists with the same uuid the existing one will be overwritten with the newly provided one. Does not *guarantee* that the details will be immediately saved. """ self._atomdetails_by_id[ad.uuid] = ad def find(self, ad_uuid): """Locate the atom detail corresponding to the given uuid. 
:returns: the atom detail with that uuid :rtype: :py:class:`.AtomDetail` (or ``None`` if not found) """ return self._atomdetails_by_id.get(ad_uuid) @property def uuid(self): """The unique identifier of this flow detail.""" return self._uuid @property def name(self): """The name of this flow detail.""" return self._name def __iter__(self): for ad in six.itervalues(self._atomdetails_by_id): yield ad def __len__(self): return len(self._atomdetails_by_id) @six.add_metaclass(abc.ABCMeta) class AtomDetail(object): """A collection of atom specific runtime information and metadata. This is a base **abstract** class that contains attributes that are used to connect an atom to the persistence layer before, during, or after it is running. It includes any results it may have produced, any state that it may be in (for example ``FAILURE``), any exception that occurred when running, and any associated stacktrace that may have been captured when an exception was thrown. It may also contain any other metadata that should also be stored alongside the details about the connected atom. The data contained within this class need **not** be persisted to the backend storage in real time. The data in this class will only be guaranteed to be persisted when a save (or update) occurs via some backend connection. :ivar intention: The execution strategy of the atom associated with this atom detail (used by an engine/others to determine if the associated atom needs to be executed, reverted, retried and so-on). :ivar meta: A dictionary of meta-data associated with this atom detail. :ivar version: A version tuple or string that represents the atom version this atom detail is associated with (typically used for introspection and any data migration strategies). :ivar results: Any results the atom produced from either its ``execute`` method or from other sources. :ivar revert_results: Any results the atom produced from either its ``revert`` method or from other sources. :ivar AtomDetail.failure: If the atom failed (due to its ``execute`` method raising) this will be a :py:class:`~taskflow.types.failure.Failure` object that represents that failure (if there was no failure this will be set to none). :ivar revert_failure: If the atom failed (possibly due to its ``revert`` method raising) this will be a :py:class:`~taskflow.types.failure.Failure` object that represents that failure (if there was no failure this will be set to none). """ def __init__(self, name, uuid): self._uuid = uuid self._name = name # TODO(bnemec): This should be documented as an ivar, but can't be due # to https://github.com/sphinx-doc/sphinx/issues/2549 #: The state of the atom associated with this atom detail. self.state = None self.intention = states.EXECUTE self.results = None self.failure = None self.revert_results = None self.revert_failure = None self.meta = {} self.version = None @property def last_results(self): """Gets the atom's last result. If the atom has produced many results (for example if it has been retried, reverted, executed and ...) this returns the last one of many results. """ return self.results def update(self, ad): """Updates the object's state to be the same as the given one. This will assign the private and public attributes of the given atom detail directly to this object (replacing any existing attributes in this object; even if they are the **same**). NOTE(harlowja): If the provided object is this object itself then **no** update is done. 
:returns: this atom detail :rtype: :py:class:`.AtomDetail` """ if ad is self: return self self.state = ad.state self.intention = ad.intention self.meta = ad.meta self.failure = ad.failure self.results = ad.results self.revert_results = ad.revert_results self.revert_failure = ad.revert_failure self.version = ad.version return self @abc.abstractmethod def merge(self, other, deep_copy=False): """Merges the current object state with the given ones state. If ``deep_copy`` is provided as truthy then the local object will use ``copy.deepcopy`` to replace this objects local attributes with the provided objects attributes (**only** if there is a difference between this objects attributes and the provided attributes). If ``deep_copy`` is falsey (the default) then a reference copy will occur instead when a difference is detected. NOTE(harlowja): If the provided object is this object itself then **no** merging is done. Do note that **no** results are merged in this method. That operation **must** to be the responsibilty of subclasses to implement and override this abstract method and provide that merging themselves as they see fit. :returns: this atom detail (freshly merged with the incoming object) :rtype: :py:class:`.AtomDetail` """ copy_fn = _copy_function(deep_copy) # NOTE(imelnikov): states and intentions are just strings, # so there is no need to copy them (strings are immutable in python). self.state = other.state self.intention = other.intention if self.failure != other.failure: # NOTE(imelnikov): we can't just deep copy Failures, as they # contain tracebacks, which are not copyable. if other.failure: if deep_copy: self.failure = other.failure.copy() else: self.failure = other.failure else: self.failure = None if self.revert_failure != other.revert_failure: # NOTE(imelnikov): we can't just deep copy Failures, as they # contain tracebacks, which are not copyable. if other.revert_failure: if deep_copy: self.revert_failure = other.revert_failure.copy() else: self.revert_failure = other.revert_failure else: self.revert_failure = None if self.meta != other.meta: self.meta = copy_fn(other.meta) if self.version != other.version: self.version = copy_fn(other.version) return self @abc.abstractmethod def put(self, state, result): """Puts a result (acquired in the given state) into this detail.""" def to_dict(self): """Translates the internal state of this object to a ``dict``. :returns: this atom detail in ``dict`` form """ if self.failure: failure = self.failure.to_dict() else: failure = None if self.revert_failure: revert_failure = self.revert_failure.to_dict() else: revert_failure = None return { 'failure': failure, 'revert_failure': revert_failure, 'meta': self.meta, 'name': self.name, 'results': self.results, 'revert_results': self.revert_results, 'state': self.state, 'version': self.version, 'intention': self.intention, 'uuid': self.uuid, } @classmethod def from_dict(cls, data): """Translates the given ``dict`` into an instance of this class. NOTE(harlowja): the ``dict`` provided should come from a prior call to :meth:`.to_dict`. 
:returns: a new atom detail :rtype: :py:class:`.AtomDetail` """ obj = cls(data['name'], data['uuid']) obj.state = data.get('state') obj.intention = data.get('intention') obj.results = data.get('results') obj.revert_results = data.get('revert_results') obj.version = data.get('version') obj.meta = _fix_meta(data) failure = data.get('failure') if failure: obj.failure = ft.Failure.from_dict(failure) revert_failure = data.get('revert_failure') if revert_failure: obj.revert_failure = ft.Failure.from_dict(revert_failure) return obj @property def uuid(self): """The unique identifier of this atom detail.""" return self._uuid @property def name(self): """The name of this atom detail.""" return self._name @abc.abstractmethod def reset(self, state): """Resets this atom detail and sets ``state`` attribute value.""" @abc.abstractmethod def copy(self): """Copies this atom detail.""" def pformat(self, indent=0, linesep=os.linesep): """Pretty formats this atom detail into a string.""" cls_name = self.__class__.__name__ lines = ["%s%s: '%s'" % (" " * (indent), cls_name, self.name)] lines.extend(_format_shared(self, indent=indent + 1)) lines.append("%s- version = %s" % (" " * (indent + 1), misc.get_version_string(self))) lines.append("%s- results = %s" % (" " * (indent + 1), self.results)) lines.append("%s- failure = %s" % (" " * (indent + 1), bool(self.failure))) lines.extend(_format_meta(self.meta, indent=indent + 1)) return linesep.join(lines) class TaskDetail(AtomDetail): """A task detail (an atom detail typically associated with a |tt| atom). .. |tt| replace:: :py:class:`~taskflow.task.Task` """ def reset(self, state): """Resets this task detail and sets ``state`` attribute value. This sets any previously set ``results``, ``failure``, ``revert_results``, and ``revert_failure`` attributes back to ``None`` and sets the state to the provided one, as well as setting this task detail's ``intention`` attribute to ``EXECUTE``. """ self.results = None self.failure = None self.revert_results = None self.revert_failure = None self.state = state self.intention = states.EXECUTE def put(self, state, result): """Puts a result (acquired in the given state) into this detail. Returns whether this object was modified (or whether it was not). """ was_altered = False if state != self.state: self.state = state was_altered = True if state == states.REVERT_FAILURE: if self.revert_failure != result: self.revert_failure = result was_altered = True if not _is_all_none(self.results, self.revert_results): self.results = None self.revert_results = None was_altered = True elif state == states.FAILURE: if self.failure != result: self.failure = result was_altered = True if not _is_all_none(self.results, self.revert_results, self.revert_failure): self.results = None self.revert_results = None self.revert_failure = None was_altered = True elif state == states.SUCCESS: if not _is_all_none(self.revert_results, self.revert_failure, self.failure): self.revert_results = None self.revert_failure = None self.failure = None was_altered = True # We don't really have the ability to determine equality of # task (user) results at the current time, without making # potentially bad guesses, so assume the task detail always needs # to be saved if they are not exactly equivalent... 
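# (An identity check -- ``is`` rather than ``==`` -- is therefore used below; supplying an equal-but-distinct result object still marks this detail as altered so callers know it needs saving.)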
if result is not self.results: self.results = result was_altered = True elif state == states.REVERTED: if not _is_all_none(self.revert_failure): self.revert_failure = None was_altered = True if result is not self.revert_results: self.revert_results = result was_altered = True return was_altered def merge(self, other, deep_copy=False): """Merges the current task detail with the given one. NOTE(harlowja): This merge does **not** copy and replace the ``results`` or ``revert_results`` if it differs. Instead the current objects ``results`` and ``revert_results`` attributes directly becomes (via assignment) the other objects attributes. Also note that if the provided object is this object itself then **no** merging is done. See: https://bugs.launchpad.net/taskflow/+bug/1452978 for what happens if this is copied at a deeper level (for example by using ``copy.deepcopy`` or by using ``copy.copy``). :returns: this task detail (freshly merged with the incoming object) :rtype: :py:class:`.TaskDetail` """ if not isinstance(other, TaskDetail): raise exc.NotImplementedError("Can only merge with other" " task details") if other is self: return self super(TaskDetail, self).merge(other, deep_copy=deep_copy) self.results = other.results self.revert_results = other.revert_results return self def copy(self): """Copies this task detail. Creates a shallow copy of this task detail (any meta-data and version information that this object maintains is shallow copied via ``copy.copy``). NOTE(harlowja): This copy does **not** copy and replace the ``results`` or ``revert_results`` attribute if it differs. Instead the current objects ``results`` and ``revert_results`` attributes directly becomes (via assignment) the cloned objects attributes. See: https://bugs.launchpad.net/taskflow/+bug/1452978 for what happens if this is copied at a deeper level (for example by using ``copy.deepcopy`` or by using ``copy.copy``). :returns: a new task detail :rtype: :py:class:`.TaskDetail` """ clone = copy.copy(self) clone.results = self.results clone.revert_results = self.revert_results if self.meta: clone.meta = self.meta.copy() if self.version: clone.version = copy.copy(self.version) return clone class RetryDetail(AtomDetail): """A retry detail (an atom detail typically associated with a |rt| atom). .. |rt| replace:: :py:class:`~taskflow.retry.Retry` """ def __init__(self, name, uuid): super(RetryDetail, self).__init__(name, uuid) self.results = [] def reset(self, state): """Resets this retry detail and sets ``state`` attribute value. This sets any previously added ``results`` back to an empty list and resets the ``failure`` and ``revert_failure`` and ``revert_results`` attributes back to ``None`` and sets the state to the provided one, as well as setting this retry details ``intention`` attribute to ``EXECUTE``. """ self.results = [] self.revert_results = None self.failure = None self.revert_failure = None self.state = state self.intention = states.EXECUTE def copy(self): """Copies this retry detail. Creates a shallow copy of this retry detail (any meta-data and version information that this object maintains is shallow copied via ``copy.copy``). NOTE(harlowja): This copy does **not** copy the incoming objects ``results`` or ``revert_results`` attributes. 
Instead this objects ``results`` attribute list is iterated over and a new list is constructed with each ``(data, failures)`` element in that list having its ``failures`` (a dictionary of each named :py:class:`~taskflow.types.failure.Failure` object that occured) copied but its ``data`` is left untouched. After this is done that new list becomes (via assignment) the cloned objects ``results`` attribute. The ``revert_results`` is directly assigned to the cloned objects ``revert_results`` attribute. See: https://bugs.launchpad.net/taskflow/+bug/1452978 for what happens if the ``data`` in ``results`` is copied at a deeper level (for example by using ``copy.deepcopy`` or by using ``copy.copy``). :returns: a new retry detail :rtype: :py:class:`.RetryDetail` """ clone = copy.copy(self) results = [] # NOTE(imelnikov): we can't just deep copy Failures, as they # contain tracebacks, which are not copyable. for (data, failures) in self.results: copied_failures = {} for (key, failure) in six.iteritems(failures): copied_failures[key] = failure results.append((data, copied_failures)) clone.results = results clone.revert_results = self.revert_results if self.meta: clone.meta = self.meta.copy() if self.version: clone.version = copy.copy(self.version) return clone @property def last_results(self): """The last result that was produced.""" try: return self.results[-1][0] except IndexError: exc.raise_with_cause(exc.NotFound, "Last results not found") @property def last_failures(self): """The last failure dictionary that was produced. NOTE(harlowja): This is **not** the same as the local ``failure`` attribute as the obtained failure dictionary in the ``results`` attribute (which is what this returns) is from associated atom failures (which is different from the directly related failure of the retry unit associated with this atom detail). """ try: return self.results[-1][1] except IndexError: exc.raise_with_cause(exc.NotFound, "Last failures not found") def put(self, state, result): """Puts a result (acquired in the given state) into this detail. Returns whether this object was modified (or whether it was not). """ # Do not clean retry history (only on reset does this happen). was_altered = False if state != self.state: self.state = state was_altered = True if state == states.REVERT_FAILURE: if result != self.revert_failure: self.revert_failure = result was_altered = True if not _is_all_none(self.revert_results): self.revert_results = None was_altered = True elif state == states.FAILURE: if result != self.failure: self.failure = result was_altered = True if not _is_all_none(self.revert_results, self.revert_failure): self.revert_results = None self.revert_failure = None was_altered = True elif state == states.SUCCESS: if not _is_all_none(self.failure, self.revert_failure, self.revert_results): self.failure = None self.revert_failure = None self.revert_results = None # Track what we produced, so that we can examine it (or avoid # using it again). self.results.append((result, {})) was_altered = True elif state == states.REVERTED: # We don't really have the ability to determine equality of # task (user) results at the current time, without making # potentially bad guesses, so assume the retry detail always needs # to be saved if they are not exactly equivalent... 
if result is not self.revert_results: self.revert_results = result was_altered = True if not _is_all_none(self.revert_failure): self.revert_failure = None was_altered = True return was_altered @classmethod def from_dict(cls, data): """Translates the given ``dict`` into an instance of this class.""" def decode_results(results): if not results: return [] new_results = [] for (data, failures) in results: new_failures = {} for (key, data) in six.iteritems(failures): new_failures[key] = ft.Failure.from_dict(data) new_results.append((data, new_failures)) return new_results obj = super(RetryDetail, cls).from_dict(data) obj.results = decode_results(obj.results) return obj def to_dict(self): """Translates the internal state of this object to a ``dict``.""" def encode_results(results): if not results: return [] new_results = [] for (data, failures) in results: new_failures = {} for (key, failure) in six.iteritems(failures): new_failures[key] = failure.to_dict() new_results.append((data, new_failures)) return new_results base = super(RetryDetail, self).to_dict() base['results'] = encode_results(base.get('results')) return base def merge(self, other, deep_copy=False): """Merges the current retry detail with the given one. NOTE(harlowja): This merge does **not** deep copy the incoming objects ``results`` attribute (if it differs). Instead the incoming objects ``results`` attribute list is **always** iterated over and a new list is constructed with each ``(data, failures)`` element in that list having its ``failures`` (a dictionary of each named :py:class:`~taskflow.types.failure.Failure` objects that occurred) copied but its ``data`` is left untouched. After this is done that new list becomes (via assignment) this objects ``results`` attribute. Also note that if the provided object is this object itself then **no** merging is done. See: https://bugs.launchpad.net/taskflow/+bug/1452978 for what happens if the ``data`` in ``results`` is copied at a deeper level (for example by using ``copy.deepcopy`` or by using ``copy.copy``). :returns: this retry detail (freshly merged with the incoming object) :rtype: :py:class:`.RetryDetail` """ if not isinstance(other, RetryDetail): raise exc.NotImplementedError("Can only merge with other" " retry details") if other is self: return self super(RetryDetail, self).merge(other, deep_copy=deep_copy) results = [] # NOTE(imelnikov): we can't just deep copy Failures, as they # contain tracebacks, which are not copyable. 
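# Each failure is instead re-referenced as-is (or copied via ``Failure.copy()`` when ``deep_copy`` is requested), while the associated ``data`` element is carried over untouched.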
for (data, failures) in other.results: copied_failures = {} for (key, failure) in six.iteritems(failures): if deep_copy: copied_failures[key] = failure.copy() else: copied_failures[key] = failure results.append((data, copied_failures)) self.results = results return self _DETAIL_TO_NAME = { RetryDetail: 'RETRY_DETAIL', TaskDetail: 'TASK_DETAIL', } _NAME_TO_DETAIL = dict((name, cls) for (cls, name) in six.iteritems(_DETAIL_TO_NAME)) ATOM_TYPES = list(six.iterkeys(_NAME_TO_DETAIL)) def atom_detail_class(atom_type): try: return _NAME_TO_DETAIL[atom_type] except KeyError: raise TypeError("Unknown atom type '%s'" % (atom_type)) def atom_detail_type(atom_detail): try: return _DETAIL_TO_NAME[type(atom_detail)] except KeyError: raise TypeError("Unknown atom '%s' (%s)" % (atom_detail, type(atom_detail))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/persistence/path_based.py0000664000175000017500000002254600000000000022235 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import six from taskflow import exceptions as exc from taskflow.persistence import base from taskflow.persistence import models @six.add_metaclass(abc.ABCMeta) class PathBasedBackend(base.Backend): """Base class for persistence backends that address data by path Subclasses of this backend write logbooks, flow details, and atom details to a provided base path in some filesystem-like storage. They will create and store those objects in three key directories (one for logbooks, one for flow details and one for atom details). They create those associated directories and then create files inside those directories that represent the contents of those objects for later reading and writing. """ #: Default path used when none is provided. 
DEFAULT_PATH = None def __init__(self, conf): super(PathBasedBackend, self).__init__(conf) self._path = self._conf.get('path', None) if not self._path: self._path = self.DEFAULT_PATH @property def path(self): return self._path @six.add_metaclass(abc.ABCMeta) class PathBasedConnection(base.Connection): """Base class for path based backend connections.""" def __init__(self, backend): self._backend = backend self._book_path = self._join_path(backend.path, "books") self._flow_path = self._join_path(backend.path, "flow_details") self._atom_path = self._join_path(backend.path, "atom_details") @staticmethod def _serialize(obj): if isinstance(obj, models.LogBook): return obj.to_dict(marshal_time=True) elif isinstance(obj, models.FlowDetail): return obj.to_dict() elif isinstance(obj, models.AtomDetail): return base._format_atom(obj) else: raise exc.StorageFailure("Invalid storage class %s" % type(obj)) @staticmethod def _deserialize(cls, data): if issubclass(cls, models.LogBook): return cls.from_dict(data, unmarshal_time=True) elif issubclass(cls, models.FlowDetail): return cls.from_dict(data) elif issubclass(cls, models.AtomDetail): atom_class = models.atom_detail_class(data['type']) return atom_class.from_dict(data['atom']) else: raise exc.StorageFailure("Invalid storage class %s" % cls) @property def backend(self): return self._backend @property def book_path(self): return self._book_path @property def flow_path(self): return self._flow_path @property def atom_path(self): return self._atom_path @abc.abstractmethod def _join_path(self, *parts): """Accept path parts, and return a joined path""" @abc.abstractmethod def _get_item(self, path): """Fetch a single item from the backend""" @abc.abstractmethod def _set_item(self, path, value, transaction): """Write a single item to the backend""" @abc.abstractmethod def _del_tree(self, path, transaction): """Recursively deletes a folder from the backend.""" @abc.abstractmethod def _get_children(self, path): """Get a list of child items of a path""" @abc.abstractmethod def _ensure_path(self, path): """Recursively ensure that a path (folder) in the backend exists""" @abc.abstractmethod def _create_link(self, src_path, dest_path, transaction): """Create a symlink-like link between two paths""" @abc.abstractmethod def _transaction(self): """Context manager that yields a transaction""" def _get_obj_path(self, obj): if isinstance(obj, models.LogBook): path = self.book_path elif isinstance(obj, models.FlowDetail): path = self.flow_path elif isinstance(obj, models.AtomDetail): path = self.atom_path else: raise exc.StorageFailure("Invalid storage class %s" % type(obj)) return self._join_path(path, obj.uuid) def _update_object(self, obj, transaction, ignore_missing=False): path = self._get_obj_path(obj) try: item_data = self._get_item(path) existing_obj = self._deserialize(type(obj), item_data) obj = existing_obj.merge(obj) except exc.NotFound: if not ignore_missing: raise self._set_item(path, self._serialize(obj), transaction) return obj def get_logbooks(self, lazy=False): for book_uuid in self._get_children(self.book_path): yield self.get_logbook(book_uuid, lazy=lazy) def get_logbook(self, book_uuid, lazy=False): book_path = self._join_path(self.book_path, book_uuid) book_data = self._get_item(book_path) book = self._deserialize(models.LogBook, book_data) if not lazy: for flow_details in self.get_flows_for_book(book_uuid): book.add(flow_details) return book def save_logbook(self, book): book_path = self._get_obj_path(book) with self._transaction() as transaction: 
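# Within a single transaction the logbook record is merged/written first, then each contained flow detail (and its atom details) is written and linked back under the book's path so that get_flows_for_book() can later locate it.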
self._update_object(book, transaction, ignore_missing=True) for flow_details in book: flow_path = self._get_obj_path(flow_details) link_path = self._join_path(book_path, flow_details.uuid) self._do_update_flow_details(flow_details, transaction, ignore_missing=True) self._create_link(flow_path, link_path, transaction) return book def get_flows_for_book(self, book_uuid, lazy=False): book_path = self._join_path(self.book_path, book_uuid) for flow_uuid in self._get_children(book_path): yield self.get_flow_details(flow_uuid, lazy) def get_flow_details(self, flow_uuid, lazy=False): flow_path = self._join_path(self.flow_path, flow_uuid) flow_data = self._get_item(flow_path) flow_details = self._deserialize(models.FlowDetail, flow_data) if not lazy: for atom_details in self.get_atoms_for_flow(flow_uuid): flow_details.add(atom_details) return flow_details def _do_update_flow_details(self, flow_detail, transaction, ignore_missing=False): flow_path = self._get_obj_path(flow_detail) self._update_object(flow_detail, transaction, ignore_missing=ignore_missing) for atom_details in flow_detail: atom_path = self._get_obj_path(atom_details) link_path = self._join_path(flow_path, atom_details.uuid) self._create_link(atom_path, link_path, transaction) self._update_object(atom_details, transaction, ignore_missing=True) return flow_detail def update_flow_details(self, flow_detail, ignore_missing=False): with self._transaction() as transaction: return self._do_update_flow_details(flow_detail, transaction, ignore_missing=ignore_missing) def get_atoms_for_flow(self, flow_uuid): flow_path = self._join_path(self.flow_path, flow_uuid) for atom_uuid in self._get_children(flow_path): yield self.get_atom_details(atom_uuid) def get_atom_details(self, atom_uuid): atom_path = self._join_path(self.atom_path, atom_uuid) atom_data = self._get_item(atom_path) return self._deserialize(models.AtomDetail, atom_data) def update_atom_details(self, atom_detail, ignore_missing=False): with self._transaction() as transaction: return self._update_object(atom_detail, transaction, ignore_missing=ignore_missing) def _do_destroy_logbook(self, book_uuid, transaction): book_path = self._join_path(self.book_path, book_uuid) for flow_uuid in self._get_children(book_path): flow_path = self._join_path(self.flow_path, flow_uuid) for atom_uuid in self._get_children(flow_path): atom_path = self._join_path(self.atom_path, atom_uuid) self._del_tree(atom_path, transaction) self._del_tree(flow_path, transaction) self._del_tree(book_path, transaction) def destroy_logbook(self, book_uuid): with self._transaction() as transaction: return self._do_destroy_logbook(book_uuid, transaction) def clear_all(self): with self._transaction() as transaction: for path in (self.book_path, self.flow_path, self.atom_path): self._del_tree(path, transaction) def upgrade(self): for path in (self.book_path, self.flow_path, self.atom_path): self._ensure_path(path) def close(self): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/retry.py0000664000175000017500000003434400000000000016763 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import enum import six from taskflow import atom from taskflow import exceptions as exc from taskflow.utils import misc @enum.unique class Decision(misc.StrEnum): """Decision results/strategy enumeration.""" REVERT = "REVERT" """Reverts only the surrounding/associated subflow. This strategy first consults the parent atom before reverting the associated subflow to determine if the parent retry object provides a different reconciliation strategy. This allows for safe nesting of flows with different retry strategies. If the parent flow has no retry strategy, the default behavior is to just revert the atoms in the associated subflow. This is generally not the desired behavior, but is left as the default in order to keep backwards-compatibility. The ``defer_reverts`` engine option will let you change this behavior. If that is set to True, a REVERT will always defer to the parent, meaning that if the parent has no retry strategy, it will be reverted as well. """ REVERT_ALL = "REVERT_ALL" """Reverts the entire flow, regardless of parent strategy. This strategy will revert every atom that has executed thus far, regardless of whether the parent flow has a separate retry strategy associated with it. """ #: Retries the surrounding/associated subflow again. RETRY = "RETRY" # Retain these aliases for a number of releases... REVERT = Decision.REVERT REVERT_ALL = Decision.REVERT_ALL RETRY = Decision.RETRY # Constants passed into revert/execute kwargs. # # Contains information about the past decisions and outcomes that have # occurred (if available). EXECUTE_REVERT_HISTORY = 'history' # # The cause of the flow failure/s REVERT_FLOW_FAILURES = 'flow_failures' class History(object): """Helper that simplifies interactions with retry historical contents.""" def __init__(self, contents, failure=None): self._contents = contents self._failure = failure @property def failure(self): """Returns the retries own failure or none if not existent.""" return self._failure def outcomes_iter(self, index=None): """Iterates over the contained failure outcomes. If the index is not provided, then all outcomes are iterated over. NOTE(harlowja): if the retry itself failed, this will **not** include those types of failures. Use the :py:attr:`.failure` attribute to access that instead (if it exists, aka, non-none). """ if index is None: contents = self._contents else: contents = [ self._contents[index], ] for (provided, outcomes) in contents: for (owner, outcome) in six.iteritems(outcomes): yield (owner, outcome) def __len__(self): return len(self._contents) def provided_iter(self): """Iterates over all the values the retry has attempted (in order).""" for (provided, outcomes) in self._contents: yield provided def __getitem__(self, index): return self._contents[index] def caused_by(self, exception_cls, index=None, include_retry=False): """Checks if the exception class provided caused the failures. If the index is not provided, then all outcomes are iterated over. NOTE(harlowja): only if ``include_retry`` is provided as true (defaults to false) will the potential retries own failure be checked against as well. 
""" for (name, failure) in self.outcomes_iter(index=index): if failure.check(exception_cls): return True if include_retry and self._failure is not None: if self._failure.check(exception_cls): return True return False def __iter__(self): """Iterates over the raw contents of this history object.""" return iter(self._contents) @six.add_metaclass(abc.ABCMeta) class Retry(atom.Atom): """A class that can decide how to resolve execution failures. This abstract base class is used to inherit from and provide different strategies that will be activated upon execution failures. Since a retry object is an atom it may also provide :meth:`~taskflow.retry.Retry.execute` and :meth:`~taskflow.retry.Retry.revert` methods to alter the inputs of connected atoms (depending on the desired strategy to be used this can be quite useful). NOTE(harlowja): the :meth:`~taskflow.retry.Retry.execute` and :meth:`~taskflow.retry.Retry.revert` and :meth:`~taskflow.retry.Retry.on_failure` will automatically be given a ``history`` parameter, which contains information about the past decisions and outcomes that have occurred (if available). """ def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None): super(Retry, self).__init__(name=name, provides=provides, requires=requires, rebind=rebind, auto_extract=auto_extract, ignore_list=[EXECUTE_REVERT_HISTORY]) @property def name(self): return self._name @name.setter def name(self, name): self._name = name @abc.abstractmethod def execute(self, history, *args, **kwargs): """Executes the given retry. This execution activates a given retry which will typically produce data required to start or restart a connected component using previously provided values and a ``history`` of prior failures from previous runs. The historical data can be analyzed to alter the resolution strategy that this retry controller will use. For example, a retry can provide the same values multiple times (after each run), the latest value or some other variation. Old values will be saved to the history of the retry atom automatically, that is a list of tuples (result, failures) are persisted where failures is a dictionary of failures indexed by task names and the result is the execution result returned by this retry during that failure resolution attempt. :param args: positional arguments that retry requires to execute. :param kwargs: any keyword arguments that retry requires to execute. """ def revert(self, history, *args, **kwargs): """Reverts this retry. On revert call all results that had been provided by previous tries and all errors caused during reversion are provided. This method will be called *only* if a subflow must be reverted without the retry (that is to say that the controller has ran out of resolution options and has either given up resolution or has failed to handle a execution failure). :param args: positional arguments that the retry required to execute. :param kwargs: any keyword arguments that the retry required to execute. """ @abc.abstractmethod def on_failure(self, history, *args, **kwargs): """Makes a decision about the future. This method will typically use information about prior failures (if this historical failure information is not available or was not persisted the provided history will be empty). Returns a retry constant (one of): * ``RETRY``: when the controlling flow must be reverted and restarted again (for example with new parameters). 
* ``REVERT``: when this controlling flow must be completely reverted and the parent flow (if any) should make a decision about further flow execution. * ``REVERT_ALL``: when this controlling flow and the parent flow (if any) must be reverted and marked as a ``FAILURE``. """ class AlwaysRevert(Retry): """Retry that always reverts subflow.""" def on_failure(self, *args, **kwargs): return REVERT def execute(self, *args, **kwargs): pass class AlwaysRevertAll(Retry): """Retry that always reverts a whole flow.""" def on_failure(self, **kwargs): return REVERT_ALL def execute(self, **kwargs): pass class Times(Retry): """Retries subflow given number of times. Returns attempt number. :param attempts: number of attempts to retry the associated subflow before giving up :type attempts: int :param revert_all: when provided this will cause the full flow to revert when the number of attempts that have been tried has been reached (when false, it will only locally revert the associated subflow) :type revert_all: bool Further arguments are interpreted as defined in the :py:class:`~taskflow.atom.Atom` constructor. """ def __init__(self, attempts=1, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert_all=False): super(Times, self).__init__(name, provides, requires, auto_extract, rebind) self._attempts = attempts if revert_all: self._revert_action = REVERT_ALL else: self._revert_action = REVERT def on_failure(self, history, *args, **kwargs): if len(history) < self._attempts: return RETRY return self._revert_action def execute(self, history, *args, **kwargs): return len(history) + 1 class ForEachBase(Retry): """Base class for retries that iterate over a given collection.""" def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert_all=False): super(ForEachBase, self).__init__(name, provides, requires, auto_extract, rebind) if revert_all: self._revert_action = REVERT_ALL else: self._revert_action = REVERT def _get_next_value(self, values, history): # Fetches the next resolution result to try, removes overlapping # entries with what has already been tried and then returns the first # resolution strategy remaining. remaining = misc.sequence_minus(values, history.provided_iter()) if not remaining: raise exc.NotFound("No elements left in collection of iterable " "retry controller %s" % self.name) return remaining[0] def _on_failure(self, values, history): try: self._get_next_value(values, history) except exc.NotFound: return self._revert_action else: return RETRY class ForEach(ForEachBase): """Applies a statically provided collection of strategies. Accepts a collection of decision strategies on construction and returns the next element of the collection on each try. :param values: values collection to iterate over and provide to atoms other in the flow as a result of this functions :py:meth:`~taskflow.atom.Atom.execute` method, which other dependent atoms can consume (for example, to alter their own behavior) :type values: list :param revert_all: when provided this will cause the full flow to revert when the number of attempts that have been tried has been reached (when false, it will only locally revert the associated subflow) :type revert_all: bool Further arguments are interpreted as defined in the :py:class:`~taskflow.atom.Atom` constructor. 
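A purely illustrative example: ``ForEach([1, 2, 3], provides='value')`` would provide ``1`` to dependent atoms on the first attempt, ``2`` on the second and ``3`` on the third; once the collection is exhausted the associated subflow (or the full flow when ``revert_all`` is set) is reverted.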
""" def __init__(self, values, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert_all=False): super(ForEach, self).__init__(name, provides, requires, auto_extract, rebind, revert_all) self._values = values def on_failure(self, history, *args, **kwargs): return self._on_failure(self._values, history) def execute(self, history, *args, **kwargs): # NOTE(harlowja): This allows any connected components to know the # current resolution strategy being attempted. return self._get_next_value(self._values, history) class ParameterizedForEach(ForEachBase): """Applies a dynamically provided collection of strategies. Accepts a collection of decision strategies from a predecessor (or from storage) as a parameter and returns the next element of that collection on each try. :param revert_all: when provided this will cause the full flow to revert when the number of attempts that have been tried has been reached (when false, it will only locally revert the associated subflow) :type revert_all: bool Further arguments are interpreted as defined in the :py:class:`~taskflow.atom.Atom` constructor. """ def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert_all=False): super(ParameterizedForEach, self).__init__(name, provides, requires, auto_extract, rebind, revert_all) def on_failure(self, values, history, *args, **kwargs): return self._on_failure(values, history) def execute(self, values, history, *args, **kwargs): return self._get_next_value(values, history) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/states.py0000664000175000017500000001536500000000000017123 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import exceptions as exc # Job states. CLAIMED = 'CLAIMED' COMPLETE = 'COMPLETE' UNCLAIMED = 'UNCLAIMED' # Flow states. FAILURE = 'FAILURE' PENDING = 'PENDING' REVERTING = 'REVERTING' REVERTED = 'REVERTED' RUNNING = 'RUNNING' SUCCESS = 'SUCCESS' SUSPENDING = 'SUSPENDING' SUSPENDED = 'SUSPENDED' RESUMING = 'RESUMING' # Task states (mainly a subset of the flow states). FAILURE = FAILURE PENDING = PENDING REVERTED = REVERTED REVERTING = REVERTING SUCCESS = SUCCESS RUNNING = RUNNING RETRYING = 'RETRYING' IGNORE = 'IGNORE' REVERT_FAILURE = 'REVERT_FAILURE' # Atom intentions. EXECUTE = 'EXECUTE' IGNORE = IGNORE REVERT = 'REVERT' RETRY = 'RETRY' INTENTIONS = (EXECUTE, IGNORE, REVERT, RETRY) # Additional engine states SCHEDULING = 'SCHEDULING' WAITING = 'WAITING' ANALYZING = 'ANALYZING' # Job state transitions # See: https://docs.openstack.org/taskflow/latest/user/states.html _ALLOWED_JOB_TRANSITIONS = frozenset(( # Job is being claimed. (UNCLAIMED, CLAIMED), # Job has been lost (or manually unclaimed/abandoned). (CLAIMED, UNCLAIMED), # Job has been finished. 
(CLAIMED, COMPLETE), )) def check_job_transition(old_state, new_state): """Check that job can transition from from ``old_state`` to ``new_state``. If transition can be performed, it returns true. If transition should be ignored, it returns false. If transition is not valid, it raises an InvalidState exception. """ if old_state == new_state: return False pair = (old_state, new_state) if pair in _ALLOWED_JOB_TRANSITIONS: return True raise exc.InvalidState("Job transition from '%s' to '%s' is not allowed" % pair) # Flow state transitions # See: https://docs.openstack.org/taskflow/latest/user/states.html#flow _ALLOWED_FLOW_TRANSITIONS = frozenset(( (PENDING, RUNNING), # run it! (RUNNING, SUCCESS), # all tasks finished successfully (RUNNING, FAILURE), # some of task failed (RUNNING, REVERTED), # some of task failed and flow has been reverted (RUNNING, SUSPENDING), # engine.suspend was called (RUNNING, RESUMING), # resuming from a previous running (SUCCESS, RUNNING), # see note below (FAILURE, RUNNING), # see note below (REVERTED, PENDING), # try again (SUCCESS, PENDING), # run it again (SUSPENDING, SUSPENDED), # suspend finished (SUSPENDING, SUCCESS), # all tasks finished while we were waiting (SUSPENDING, FAILURE), # some tasks failed while we were waiting (SUSPENDING, REVERTED), # all tasks were reverted while we were waiting (SUSPENDING, RESUMING), # resuming from a previous suspending (SUSPENDED, RUNNING), # restart from suspended (RESUMING, SUSPENDED), # after flow resumed, it is suspended )) # NOTE(imelnikov) SUCCESS->RUNNING and FAILURE->RUNNING transitions are # useful when flow or flowdetails backing it were altered after the flow # was finished; then, client code may want to run through flow again # to ensure all tasks from updated flow had a chance to run. # NOTE(imelnikov): Engine cannot transition flow from SUSPENDING to # SUSPENDED while some tasks from the flow are running and some results # from them are not retrieved and saved properly, so while flow is # in SUSPENDING state it may wait for some of the tasks to stop. Then, # flow can go to SUSPENDED, SUCCESS, FAILURE or REVERTED state depending # of actual state of the tasks -- e.g. if all tasks were finished # successfully while we were waiting, flow can be transitioned from # SUSPENDING to SUCCESS state. _IGNORED_FLOW_TRANSITIONS = frozenset( (a, b) for a in (PENDING, FAILURE, SUCCESS, SUSPENDED, REVERTED) for b in (SUSPENDING, SUSPENDED, RESUMING) if a != b ) def check_flow_transition(old_state, new_state): """Check that flow can transition from ``old_state`` to ``new_state``. If transition can be performed, it returns true. If transition should be ignored, it returns false. If transition is not valid, it raises an InvalidState exception. """ if old_state == new_state: return False pair = (old_state, new_state) if pair in _ALLOWED_FLOW_TRANSITIONS: return True if pair in _IGNORED_FLOW_TRANSITIONS: return False raise exc.InvalidState("Flow transition from '%s' to '%s' is not allowed" % pair) # Task state transitions # See: https://docs.openstack.org/taskflow/latest/user/states.html#task _ALLOWED_TASK_TRANSITIONS = frozenset(( (PENDING, RUNNING), # run it! (PENDING, IGNORE), # skip it! (RUNNING, SUCCESS), # the task executed successfully (RUNNING, FAILURE), # the task execution failed (FAILURE, REVERTING), # task execution failed, try reverting... (SUCCESS, REVERTING), # some other task failed, try reverting... 
(REVERTING, REVERTED), # the task reverted successfully (REVERTING, REVERT_FAILURE), # the task failed reverting (terminal!) (REVERTED, PENDING), # try again (IGNORE, PENDING), # try again )) def check_task_transition(old_state, new_state): """Check that task can transition from ``old_state`` to ``new_state``. If transition can be performed, it returns true, false otherwise. """ pair = (old_state, new_state) if pair in _ALLOWED_TASK_TRANSITIONS: return True return False # Retry state transitions # See: https://docs.openstack.org/taskflow/latest/user/states.html#retry _ALLOWED_RETRY_TRANSITIONS = list(_ALLOWED_TASK_TRANSITIONS) _ALLOWED_RETRY_TRANSITIONS.extend([ (SUCCESS, RETRYING), # retrying retry controller (RETRYING, RUNNING), # run retry controller that has been retrying ]) _ALLOWED_RETRY_TRANSITIONS = frozenset(_ALLOWED_RETRY_TRANSITIONS) def check_retry_transition(old_state, new_state): """Check that retry can transition from ``old_state`` to ``new_state``. If transition can be performed, it returns true, false otherwise. """ pair = (old_state, new_state) if pair in _ALLOWED_RETRY_TRANSITIONS: return True return False ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/storage.py0000664000175000017500000014564600000000000017272 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import functools import fasteners from oslo_utils import reflection from oslo_utils import uuidutils import six import tenacity from taskflow import exceptions from taskflow import logging from taskflow.persistence.backends import impl_memory from taskflow.persistence import models from taskflow import retry from taskflow import states from taskflow import task from taskflow.utils import misc LOG = logging.getLogger(__name__) RETRY_ATTEMPTS = 3 RETRY_WAIT_TIMEOUT = 5 _EXECUTE_STATES_WITH_RESULTS = ( # The atom ``execute`` worked out :) states.SUCCESS, # The atom ``execute`` didn't work out :( states.FAILURE, # In this state we will still have access to prior SUCCESS (or FAILURE) # results, so make sure extraction is still allowed in this state... states.REVERTING, ) _REVERT_STATES_WITH_RESULTS = ( # The atom ``revert`` worked out :) states.REVERTED, # The atom ``revert`` didn't work out :( states.REVERT_FAILURE, # In this state we will still have access to prior SUCCESS (or FAILURE) # results, so make sure extraction is still allowed in this state... states.REVERTING, ) # Atom states that may have results... STATES_WITH_RESULTS = set() STATES_WITH_RESULTS.update(_REVERT_STATES_WITH_RESULTS) STATES_WITH_RESULTS.update(_EXECUTE_STATES_WITH_RESULTS) STATES_WITH_RESULTS = tuple(sorted(STATES_WITH_RESULTS)) # TODO(harlowja): do this better (via a singleton or something else...) _TRANSIENT_PROVIDER = object() # Only for these intentions will we cache any failures that happened... 
_SAVE_FAILURE_INTENTIONS = (states.EXECUTE, states.REVERT) # NOTE(harlowja): Perhaps the container is a dictionary-like object and that # key does not exist (key error), or the container is a tuple/list and a # non-numeric key is being requested (index error), or there was no container # and an attempt to index into none/other unsubscriptable type is being # requested (type error). # # Overall this (along with the item_from* functions) try to handle the vast # majority of wrong indexing operations on the wrong/invalid types so that we # can fail extraction during lookup or emit warning on result reception... _EXTRACTION_EXCEPTIONS = (IndexError, KeyError, ValueError, TypeError) # Atom detail metadata key used to inject atom non-transient injected args. META_INJECTED = 'injected' # Atom detail metadata key(s) used to set atom progress (with any details). META_PROGRESS = 'progress' META_PROGRESS_DETAILS = 'progress_details' class _ProviderLocator(object): """Helper to start to better decouple the finding logic from storage. WIP: part of the larger effort to cleanup/refactor the finding of named arguments so that the code can be more unified and easy to follow... """ def __init__(self, transient_results, providers_fetcher, result_fetcher): self.result_fetcher = result_fetcher self.providers_fetcher = providers_fetcher self.transient_results = transient_results def _try_get_results(self, looking_for, provider, look_into_results=True, find_potentials=False): if provider.name is _TRANSIENT_PROVIDER: # TODO(harlowja): This 'is' check still sucks, do this # better in the future... results = self.transient_results else: try: results = self.result_fetcher(provider.name) except (exceptions.NotFound, exceptions.DisallowedAccess): if not find_potentials: raise else: # Ok, likely hasn't produced a result yet, but # at a future point it hopefully will, so stub # out the *expected* result. results = {} if look_into_results: _item_from_single(provider, results, looking_for) return results def _find(self, looking_for, scope_walker=None, short_circuit=True, find_potentials=False): if scope_walker is None: scope_walker = [] default_providers, atom_providers = self.providers_fetcher(looking_for) searched_providers = set() providers_and_results = [] if default_providers: for p in default_providers: searched_providers.add(p) try: provider_results = self._try_get_results( looking_for, p, find_potentials=find_potentials, # For default providers always look into there # results as default providers are statically setup # and therefore looking into there provided results # should fail early. look_into_results=True) except exceptions.NotFound: if not find_potentials: raise else: providers_and_results.append((p, provider_results)) if short_circuit: return (searched_providers, providers_and_results) if not atom_providers: return (searched_providers, providers_and_results) atom_providers_by_name = dict((p.name, p) for p in atom_providers) for accessible_atom_names in iter(scope_walker): # *Always* retain the scope ordering (if any matches # happen); instead of retaining the possible provider match # order (which isn't that important and may be different from # the scope requested ordering). 
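# The comprehension below walks the scope's visible atom names in their scoped order, keeping only names that actually map to a known provider, which is what preserves that ordering.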
maybe_atom_providers = [atom_providers_by_name[atom_name] for atom_name in accessible_atom_names if atom_name in atom_providers_by_name] tmp_providers_and_results = [] if find_potentials: for p in maybe_atom_providers: searched_providers.add(p) tmp_providers_and_results.append((p, {})) else: for p in maybe_atom_providers: searched_providers.add(p) try: # Don't at this point look into the provider results # as calling code will grab all providers, and then # get the result from the *first* provider that # actually provided it (or die). provider_results = self._try_get_results( looking_for, p, find_potentials=find_potentials, look_into_results=False) except exceptions.DisallowedAccess as e: if e.state != states.IGNORE: exceptions.raise_with_cause( exceptions.NotFound, "Expected to be able to find output %r" " produced by %s but was unable to get at" " that providers results" % (looking_for, p)) else: LOG.trace("Avoiding using the results of" " %r (from %s) for name %r because" " it was ignored", p.name, p, looking_for) else: tmp_providers_and_results.append((p, provider_results)) if tmp_providers_and_results and short_circuit: return (searched_providers, tmp_providers_and_results) else: providers_and_results.extend(tmp_providers_and_results) return (searched_providers, providers_and_results) def find_potentials(self, looking_for, scope_walker=None): """Returns the accessible **potential** providers.""" _searched_providers, providers_and_results = self._find( looking_for, scope_walker=scope_walker, short_circuit=False, find_potentials=True) return set(p for (p, _provider_results) in providers_and_results) def find(self, looking_for, scope_walker=None, short_circuit=True): """Returns the accessible providers.""" return self._find(looking_for, scope_walker=scope_walker, short_circuit=short_circuit, find_potentials=False) class _Provider(object): """A named symbol provider that produces a output at the given index.""" def __init__(self, name, index): self.name = name self.index = index def __repr__(self): # TODO(harlowja): clean this up... if self.name is _TRANSIENT_PROVIDER: base = " failure mapping. self._failures = {} for ad in self._flowdetail: fail_cache = {} if ad.failure is not None: fail_cache[states.EXECUTE] = ad.failure if ad.revert_failure is not None: fail_cache[states.REVERT] = ad.revert_failure self._failures[ad.name] = fail_cache self._atom_name_to_uuid = dict((ad.name, ad.uuid) for ad in self._flowdetail) try: source, _clone = self._atomdetail_by_name( self.injector_name, expected_type=models.TaskDetail) except exceptions.NotFound: pass else: names_iter = six.iterkeys(source.results) self._set_result_mapping(source.name, dict((name, name) for name in names_iter)) def _with_connection(self, functor, *args, **kwargs): # Run the given functor with a backend connection as its first # argument (providing the additional positional arguments and keyword # arguments as subsequent arguments). with contextlib.closing(self._backend.get_connection()) as conn: return functor(conn, *args, **kwargs) @staticmethod def _create_atom_detail(atom_name, atom_detail_cls, atom_version=None, atom_state=states.PENDING): ad = atom_detail_cls(atom_name, uuidutils.generate_uuid()) ad.state = atom_state if atom_version is not None: ad.version = atom_version return ad @fasteners.write_locked def ensure_atoms(self, atoms): """Ensure there is an atomdetail for **each** of the given atoms. Returns list of atomdetail uuids for each atom processed. 
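        A rough usage sketch (``storage`` is an already-created storage unit
        and ``task_a``/``task_b`` are illustrative atoms, not names defined
        in this module)::

            uuids = storage.ensure_atoms([task_a, task_b])
            # One atomdetail uuid is returned per atom, in the same order
            # the atoms were provided.
            assert len(uuids) == 2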
""" atom_ids = [] missing_ads = [] for i, atom in enumerate(atoms): match = misc.match_type(atom, self._ensure_matchers) if not match: raise TypeError("Unknown atom '%s' (%s) requested to ensure" % (atom, type(atom))) atom_detail_cls, kind = match atom_name = atom.name if not atom_name: raise ValueError("%s name must be non-empty" % (kind)) try: atom_id = self._atom_name_to_uuid[atom_name] except KeyError: missing_ads.append((i, atom, atom_detail_cls)) # This will be later replaced with the uuid that is created... atom_ids.append(None) else: ad = self._flowdetail.find(atom_id) if not isinstance(ad, atom_detail_cls): raise exceptions.Duplicate( "Atom detail '%s' already exists in flow" " detail '%s'" % (atom_name, self._flowdetail.name)) else: atom_ids.append(ad.uuid) self._set_result_mapping(atom_name, atom.save_as) if missing_ads: needs_to_be_created_ads = [] for (i, atom, atom_detail_cls) in missing_ads: ad = self._create_atom_detail( atom.name, atom_detail_cls, atom_version=misc.get_version_string(atom)) needs_to_be_created_ads.append((i, atom, ad)) # Add the atom detail(s) to a clone, which upon success will be # updated into the contained flow detail; if it does not get saved # then no update will happen. source, clone = self._fetch_flowdetail(clone=True) for (_i, _atom, ad) in needs_to_be_created_ads: clone.add(ad) self._with_connection(self._save_flow_detail, source, clone) # Insert the needed data, and get outta here... for (i, atom, ad) in needs_to_be_created_ads: atom_name = atom.name atom_ids[i] = ad.uuid self._atom_name_to_uuid[atom_name] = ad.uuid self._set_result_mapping(atom_name, atom.save_as) self._failures.setdefault(atom_name, {}) return atom_ids @property def lock(self): """Reader/writer lock used to ensure multi-thread safety. This does **not** protect against the **same** storage objects being used by multiple engines/users across multiple processes (or different machines); certain backends handle that situation better than others (for example by using sequence identifiers) and it's a ongoing work in progress to make that better). """ return self._lock def ensure_atom(self, atom): """Ensure there is an atomdetail for the **given** atom. Returns the uuid for the atomdetail that corresponds to the given atom. """ return self.ensure_atoms([atom])[0] @property def flow_name(self): """The flow detail name this storage unit is associated with.""" # This never changes (so no read locking needed). return self._flowdetail.name @property def flow_uuid(self): """The flow detail uuid this storage unit is associated with.""" # This never changes (so no read locking needed). return self._flowdetail.uuid @property def flow_meta(self): """The flow detail metadata this storage unit is associated with.""" return self._flowdetail.meta @property def backend(self): """The backend this storage unit is associated with.""" # This never changes (so no read locking needed). return self._backend @tenacity.retry(retry=tenacity.retry_if_exception_type( exception_types=exceptions.StorageFailure), stop=tenacity.stop_after_attempt(RETRY_ATTEMPTS), wait=tenacity.wait_fixed(RETRY_WAIT_TIMEOUT)) def _save_flow_detail(self, conn, original_flow_detail, flow_detail): # NOTE(harlowja): we need to update our contained flow detail if # the result of the update actually added more (aka another process # added item to the flow detail). 
original_flow_detail.update(conn.update_flow_details(flow_detail)) return original_flow_detail def _fetch_flowdetail(self, clone=False): source = self._flowdetail if clone: return (source, source.copy()) else: return (source, source) def _atomdetail_by_name(self, atom_name, expected_type=None, clone=False): try: ad = self._flowdetail.find(self._atom_name_to_uuid[atom_name]) except KeyError: exceptions.raise_with_cause(exceptions.NotFound, "Unknown atom name '%s'" % atom_name) else: # TODO(harlowja): we need to figure out how to get away from doing # these kinds of type checks in general (since they likely mean # we aren't doing something right). if expected_type and not isinstance(ad, expected_type): raise TypeError("Atom '%s' is not of the expected type: %s" % (atom_name, reflection.get_class_name(expected_type))) if clone: return (ad, ad.copy()) else: return (ad, ad) @tenacity.retry(retry=tenacity.retry_if_exception_type( exception_types=exceptions.StorageFailure), stop=tenacity.stop_after_attempt(RETRY_ATTEMPTS), wait=tenacity.wait_fixed(RETRY_WAIT_TIMEOUT)) def _save_atom_detail(self, conn, original_atom_detail, atom_detail): # NOTE(harlowja): we need to update our contained atom detail if # the result of the update actually added more (aka another process # is also modifying the task detail), since python is by reference # and the contained atom detail will reflect the old state if we don't # do this update. original_atom_detail.update(conn.update_atom_details(atom_detail)) return original_atom_detail @fasteners.read_locked def get_atom_uuid(self, atom_name): """Gets an atoms uuid given a atoms name.""" source, _clone = self._atomdetail_by_name(atom_name) return source.uuid @fasteners.write_locked def set_atom_state(self, atom_name, state): """Sets an atoms state.""" source, clone = self._atomdetail_by_name(atom_name, clone=True) if source.state != state: clone.state = state self._with_connection(self._save_atom_detail, source, clone) @fasteners.read_locked def get_atom_state(self, atom_name): """Gets the state of an atom given an atoms name.""" source, _clone = self._atomdetail_by_name(atom_name) return source.state @fasteners.write_locked def set_atom_intention(self, atom_name, intention): """Sets the intention of an atom given an atoms name.""" source, clone = self._atomdetail_by_name(atom_name, clone=True) if source.intention != intention: clone.intention = intention self._with_connection(self._save_atom_detail, source, clone) @fasteners.read_locked def get_atom_intention(self, atom_name): """Gets the intention of an atom given an atoms name.""" source, _clone = self._atomdetail_by_name(atom_name) return source.intention @fasteners.read_locked def get_atoms_states(self, atom_names): """Gets a dict of atom name => (state, intention) given atom names.""" details = {} for name in set(atom_names): source, _clone = self._atomdetail_by_name(name) details[name] = (source.state, source.intention) return details @fasteners.write_locked def _update_atom_metadata(self, atom_name, update_with, expected_type=None): source, clone = self._atomdetail_by_name(atom_name, expected_type=expected_type, clone=True) if update_with: clone.meta.update(update_with) self._with_connection(self._save_atom_detail, source, clone) def update_atom_metadata(self, atom_name, update_with): """Updates a atoms associated metadata. 
This update will take a provided dictionary or a list of (key, value) pairs to include in the updated metadata (newer keys will overwrite older keys) and after merging saves the updated data into the underlying persistence layer. """ self._update_atom_metadata(atom_name, update_with) def set_task_progress(self, task_name, progress, details=None): """Set a tasks progress. :param task_name: task name :param progress: tasks progress (0.0 <-> 1.0) :param details: any task specific progress details """ update_with = { META_PROGRESS: progress, } if details is not None: # NOTE(imelnikov): as we can update progress without # updating details (e.g. automatically from engine) # we save progress value with details, too. if details: update_with[META_PROGRESS_DETAILS] = { 'at_progress': progress, 'details': details, } else: update_with[META_PROGRESS_DETAILS] = None self._update_atom_metadata(task_name, update_with, expected_type=models.TaskDetail) @fasteners.read_locked def get_task_progress(self, task_name): """Get the progress of a task given a tasks name. :param task_name: tasks name :returns: current task progress value """ source, _clone = self._atomdetail_by_name( task_name, expected_type=models.TaskDetail) try: return source.meta[META_PROGRESS] except KeyError: return 0.0 @fasteners.read_locked def get_task_progress_details(self, task_name): """Get the progress details of a task given a tasks name. :param task_name: task name :returns: None if progress_details not defined, else progress_details dict """ source, _clone = self._atomdetail_by_name( task_name, expected_type=models.TaskDetail) try: return source.meta[META_PROGRESS_DETAILS] except KeyError: return None def _check_all_results_provided(self, atom_name, container): """Warn if an atom did not provide some of its expected results. This may happen if atom returns shorter tuple or list or dict without all needed keys. It may also happen if atom returns result of wrong type. """ result_mapping = self._result_mappings.get(atom_name) if not result_mapping: return for name, index in six.iteritems(result_mapping): try: _item_from(container, index) except _EXTRACTION_EXCEPTIONS: LOG.warning("Atom '%s' did not supply result " "with index %r (name '%s')", atom_name, index, name) @fasteners.write_locked def save(self, atom_name, result, state=states.SUCCESS): """Put result for atom with provided name to storage.""" source, clone = self._atomdetail_by_name(atom_name, clone=True) if clone.put(state, result): self._with_connection(self._save_atom_detail, source, clone) # We need to somehow place more of this responsibility on the atom # detail class itself, vs doing it here; since it ties those two # together (which is bad)... if state in (states.FAILURE, states.REVERT_FAILURE): # NOTE(imelnikov): failure serialization looses information, # so we cache failures here, in atom name -> failure mapping so # that we can later use the better version on fetch/get. 
if clone.intention in _SAVE_FAILURE_INTENTIONS: fail_cache = self._failures[clone.name] fail_cache[clone.intention] = result if state == states.SUCCESS and clone.intention == states.EXECUTE: self._check_all_results_provided(clone.name, result) @fasteners.write_locked def save_retry_failure(self, retry_name, failed_atom_name, failure): """Save subflow failure to retry controller history.""" source, clone = self._atomdetail_by_name( retry_name, expected_type=models.RetryDetail, clone=True) try: failures = clone.last_failures except exceptions.NotFound: exceptions.raise_with_cause(exceptions.StorageFailure, "Unable to fetch most recent retry" " failures so new retry failure can" " be inserted") else: if failed_atom_name not in failures: failures[failed_atom_name] = failure self._with_connection(self._save_atom_detail, source, clone) @fasteners.write_locked def cleanup_retry_history(self, retry_name, state): """Cleanup history of retry atom with given name.""" source, clone = self._atomdetail_by_name( retry_name, expected_type=models.RetryDetail, clone=True) clone.state = state clone.results = [] self._with_connection(self._save_atom_detail, source, clone) @fasteners.read_locked def _get(self, atom_name, results_attr_name, fail_attr_name, allowed_states, fail_cache_key): source, _clone = self._atomdetail_by_name(atom_name) failure = getattr(source, fail_attr_name) if failure is not None: fail_cache = self._failures[atom_name] try: fail = fail_cache[fail_cache_key] if failure.matches(fail): # Try to give the version back that should have the # backtrace instead of one that has it # stripped (since backtraces are not serializable). failure = fail except KeyError: pass return failure else: if source.state not in allowed_states: raise exceptions.DisallowedAccess( "Result for atom '%s' is not known/accessible" " due to it being in %s state when result access" " is restricted to %s states" % (atom_name, source.state, allowed_states), state=source.state) return getattr(source, results_attr_name) def get_execute_result(self, atom_name): """Gets the ``execute`` results for an atom from storage.""" try: results = self._get(atom_name, 'results', 'failure', _EXECUTE_STATES_WITH_RESULTS, states.EXECUTE) except exceptions.DisallowedAccess as e: if e.state == states.IGNORE: exceptions.raise_with_cause(exceptions.NotFound, "Result for atom '%s' execution" " is not known (as it was" " ignored)" % atom_name) else: exceptions.raise_with_cause(exceptions.NotFound, "Result for atom '%s' execution" " is not known" % atom_name) else: return results @fasteners.read_locked def _get_failures(self, fail_cache_key): failures = {} for atom_name, fail_cache in six.iteritems(self._failures): try: failures[atom_name] = fail_cache[fail_cache_key] except KeyError: pass return failures def get_execute_failures(self): """Get all ``execute`` failures that happened with this flow.""" return self._get_failures(states.EXECUTE) # TODO(harlowja): remove these in the future? 
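    # NOTE: the assignments below are legacy aliases retained for backwards
    # compatibility; they behave exactly like the ``get_execute_*`` methods
    # they point at. A tiny sketch (assuming a populated ``storage``
    # instance; 'my_task' is an illustrative atom name):
    #
    #   storage.get('my_task')    # same as storage.get_execute_result(...)
    #   storage.get_failures()    # same as storage.get_execute_failures()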
get = get_execute_result get_failures = get_execute_failures def get_revert_result(self, atom_name): """Gets the ``revert`` results for an atom from storage.""" try: results = self._get(atom_name, 'revert_results', 'revert_failure', _REVERT_STATES_WITH_RESULTS, states.REVERT) except exceptions.DisallowedAccess as e: if e.state == states.IGNORE: exceptions.raise_with_cause(exceptions.NotFound, "Result for atom '%s' revert is" " not known (as it was" " ignored)" % atom_name) else: exceptions.raise_with_cause(exceptions.NotFound, "Result for atom '%s' revert is" " not known" % atom_name) else: return results def get_revert_failures(self): """Get all ``revert`` failures that happened with this flow.""" return self._get_failures(states.REVERT) @fasteners.read_locked def has_failures(self): """Returns true if there are **any** failures in storage.""" for fail_cache in six.itervalues(self._failures): if fail_cache: return True return False @fasteners.write_locked def reset(self, atom_name, state=states.PENDING): """Reset atom with given name (if the atom is not in a given state).""" if atom_name == self.injector_name: return source, clone = self._atomdetail_by_name(atom_name, clone=True) if source.state == state: return clone.reset(state) self._with_connection(self._save_atom_detail, source, clone) self._failures[clone.name].clear() def inject_atom_args(self, atom_name, pairs, transient=True): """Add values into storage for a specific atom only. :param transient: save the data in-memory only instead of persisting the data to backend storage (useful for resource-like objects or similar objects which can **not** be persisted) This method injects a dictionary/pairs of arguments for an atom so that when that atom is scheduled for execution it will have immediate access to these arguments. .. note:: Injected atom arguments take precedence over arguments provided by predecessor atoms or arguments provided by injecting into the flow scope (using the :py:meth:`~taskflow.storage.Storage.inject` method). .. warning:: It should be noted that injected atom arguments (that are scoped to the atom with the given name) *should* be serializable whenever possible. This is a **requirement** for the :doc:`worker based engine ` which **must** serialize (typically using ``json``) all atom :py:meth:`~taskflow.atom.Atom.execute` and :py:meth:`~taskflow.atom.Atom.revert` arguments to be able to transmit those arguments to the target worker(s). If the use-case being applied/desired is to later use the worker based engine then it is highly recommended to ensure all injected atoms (even transient ones) are serializable to avoid issues that *may* appear later (when a object turned out to not actually be serializable). """ if atom_name not in self._atom_name_to_uuid: raise exceptions.NotFound("Unknown atom name '%s'" % atom_name) def save_transient(): self._injected_args.setdefault(atom_name, {}) self._injected_args[atom_name].update(pairs) def save_persistent(): source, clone = self._atomdetail_by_name(atom_name, clone=True) injected = source.meta.get(META_INJECTED) if not injected: injected = {} injected.update(pairs) clone.meta[META_INJECTED] = injected self._with_connection(self._save_atom_detail, source, clone) with self._lock.write_lock(): if transient: save_transient() else: save_persistent() @fasteners.write_locked def inject(self, pairs, transient=False): """Add values into storage. This method should be used to put flow parameters (requirements that are not satisfied by any atom in the flow) into storage. 
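        A minimal sketch (``storage`` here is an engine's storage unit; the
        key names are illustrative only)::

            storage.inject({'image_id': 'abc', 'retry_limit': 3})
            storage.fetch('image_id')  # -> 'abc'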
:param transient: save the data in-memory only instead of persisting the data to backend storage (useful for resource-like objects or similar objects which can **not** be persisted) .. warning:: It should be noted that injected flow arguments (that are scoped to all atoms in this flow) *should* be serializable whenever possible. This is a **requirement** for the :doc:`worker based engine ` which **must** serialize (typically using ``json``) all atom :py:meth:`~taskflow.atom.Atom.execute` and :py:meth:`~taskflow.atom.Atom.revert` arguments to be able to transmit those arguments to the target worker(s). If the use-case being applied/desired is to later use the worker based engine then it is highly recommended to ensure all injected atoms (even transient ones) are serializable to avoid issues that *may* appear later (when a object turned out to not actually be serializable). """ def save_persistent(): try: source, clone = self._atomdetail_by_name( self.injector_name, expected_type=models.TaskDetail, clone=True) except exceptions.NotFound: # Ensure we have our special task detail... # # TODO(harlowja): get this removed when # https://review.openstack.org/#/c/165645/ merges. source = self._create_atom_detail(self.injector_name, models.TaskDetail, atom_state=None) fd_source, fd_clone = self._fetch_flowdetail(clone=True) fd_clone.add(source) self._with_connection(self._save_flow_detail, fd_source, fd_clone) self._atom_name_to_uuid[source.name] = source.uuid clone = source clone.results = dict(pairs) clone.state = states.SUCCESS else: clone.results.update(pairs) result = self._with_connection(self._save_atom_detail, source, clone) return (self.injector_name, six.iterkeys(result.results)) def save_transient(): self._transients.update(pairs) return (_TRANSIENT_PROVIDER, six.iterkeys(self._transients)) if transient: provider_name, names = save_transient() else: provider_name, names = save_persistent() self._set_result_mapping(provider_name, dict((name, name) for name in names)) def _fetch_providers(self, looking_for, providers=None): """Return pair of (default providers, atom providers).""" if providers is None: providers = self._reverse_mapping.get(looking_for, []) default_providers = [] atom_providers = [] for p in providers: if p.name in (_TRANSIENT_PROVIDER, self.injector_name): default_providers.append(p) else: atom_providers.append(p) return default_providers, atom_providers def _set_result_mapping(self, provider_name, mapping): """Sets the result mapping for a given producer. The result saved with given name would be accessible by names defined in mapping. Mapping is a dict name => index. If index is None, the whole result will have this name; else, only part of it, result[index]. """ provider_mapping = self._result_mappings.setdefault(provider_name, {}) if mapping: provider_mapping.update(mapping) # Ensure the reverse mapping/index is updated (for faster lookups). for name, index in six.iteritems(provider_mapping): entries = self._reverse_mapping.setdefault(name, []) provider = _Provider(provider_name, index) if provider not in entries: entries.append(provider) @fasteners.read_locked def fetch(self, name, many_handler=None): """Fetch a named ``execute`` result.""" def _many_handler(values): # By default we just return the first of many (unless provided # a different callback that can translate many results into # something more meaningful). 
return values[0] if many_handler is None: many_handler = _many_handler try: maybe_providers = self._reverse_mapping[name] except KeyError: raise exceptions.NotFound("Name %r is not mapped as a produced" " output by any providers" % name) locator = _ProviderLocator( self._transients, functools.partial(self._fetch_providers, providers=maybe_providers), lambda atom_name: self._get(atom_name, 'last_results', 'failure', _EXECUTE_STATES_WITH_RESULTS, states.EXECUTE)) values = [] searched_providers, providers = locator.find( name, short_circuit=False, # NOTE(harlowja): There are no scopes used here (as of now), so # we just return all known providers as if it was one large # scope. scope_walker=[[p.name for p in maybe_providers]]) for provider, results in providers: values.append(_item_from_single(provider, results, name)) if not values: raise exceptions.NotFound( "Unable to find result %r, searched %s providers" % (name, len(searched_providers))) else: return many_handler(values) @fasteners.read_locked def fetch_unsatisfied_args(self, atom_name, args_mapping, scope_walker=None, optional_args=None): """Fetch unsatisfied ``execute`` arguments using an atoms args mapping. NOTE(harlowja): this takes into account the provided scope walker atoms who should produce the required value at runtime, as well as the transient/persistent flow and atom specific injected arguments. It does **not** check if the providers actually have produced the needed values; it just checks that they are registered to produce it in the future. """ source, _clone = self._atomdetail_by_name(atom_name) if scope_walker is None: scope_walker = self._scope_fetcher(atom_name) if optional_args is None: optional_args = [] injected_sources = [ self._injected_args.get(atom_name, {}), source.meta.get(META_INJECTED, {}), ] missing = set(six.iterkeys(args_mapping)) locator = _ProviderLocator( self._transients, self._fetch_providers, lambda atom_name: self._get(atom_name, 'last_results', 'failure', _EXECUTE_STATES_WITH_RESULTS, states.EXECUTE)) for (bound_name, name) in six.iteritems(args_mapping): if LOG.isEnabledFor(logging.TRACE): LOG.trace("Looking for %r <= %r for atom '%s'", bound_name, name, atom_name) if bound_name in optional_args: LOG.trace("Argument %r is optional, skipping", bound_name) missing.discard(bound_name) continue maybe_providers = 0 for source in injected_sources: if not source: continue if name in source: maybe_providers += 1 maybe_providers += len( locator.find_potentials(name, scope_walker=scope_walker)) if maybe_providers: LOG.trace("Atom '%s' will have %s potential providers" " of %r <= %r", atom_name, maybe_providers, bound_name, name) missing.discard(bound_name) return missing @fasteners.read_locked def fetch_all(self, many_handler=None): """Fetch all named ``execute`` results known so far.""" def _many_handler(values): if len(values) > 1: return values return values[0] if many_handler is None: many_handler = _many_handler results = {} for name in six.iterkeys(self._reverse_mapping): try: results[name] = self.fetch(name, many_handler=many_handler) except exceptions.NotFound: pass return results @fasteners.read_locked def fetch_mapped_args(self, args_mapping, atom_name=None, scope_walker=None, optional_args=None): """Fetch ``execute`` arguments for an atom using its args mapping.""" def _extract_first_from(name, sources): """Extracts/returns first occurrence of key in list of dicts.""" for i, source in enumerate(sources): if not source: continue if name in source: return (i, source[name]) raise KeyError(name) if 
optional_args is None: optional_args = [] if atom_name: source, _clone = self._atomdetail_by_name(atom_name) injected_sources = [ self._injected_args.get(atom_name, {}), source.meta.get(META_INJECTED, {}), ] if scope_walker is None: scope_walker = self._scope_fetcher(atom_name) else: injected_sources = [] if not args_mapping: return {} get_results = lambda atom_name: \ self._get(atom_name, 'last_results', 'failure', _EXECUTE_STATES_WITH_RESULTS, states.EXECUTE) mapped_args = {} for (bound_name, name) in six.iteritems(args_mapping): if LOG.isEnabledFor(logging.TRACE): if atom_name: LOG.trace("Looking for %r <= %r for atom '%s'", bound_name, name, atom_name) else: LOG.trace("Looking for %r <= %r", bound_name, name) try: source_index, value = _extract_first_from( name, injected_sources) mapped_args[bound_name] = value if LOG.isEnabledFor(logging.TRACE): if source_index == 0: LOG.trace("Matched %r <= %r to %r (from injected" " atom-specific transient" " values)", bound_name, name, value) else: LOG.trace("Matched %r <= %r to %r (from injected" " atom-specific persistent" " values)", bound_name, name, value) except KeyError: try: maybe_providers = self._reverse_mapping[name] except KeyError: if bound_name in optional_args: LOG.trace("Argument %r is optional, skipping", bound_name) continue raise exceptions.NotFound("Name %r is not mapped as a" " produced output by any" " providers" % name) locator = _ProviderLocator( self._transients, functools.partial(self._fetch_providers, providers=maybe_providers), get_results) searched_providers, providers = locator.find( name, scope_walker=scope_walker) if not providers: raise exceptions.NotFound( "Mapped argument %r <= %r was not produced" " by any accessible provider (%s possible" " providers were scanned)" % (bound_name, name, len(searched_providers))) provider, value = _item_from_first_of(providers, name) mapped_args[bound_name] = value LOG.trace("Matched %r <= %r to %r (from %s)", bound_name, name, value, provider) return mapped_args @fasteners.write_locked def set_flow_state(self, state): """Set flow details state and save it.""" source, clone = self._fetch_flowdetail(clone=True) clone.state = state self._with_connection(self._save_flow_detail, source, clone) @fasteners.write_locked def update_flow_metadata(self, update_with): """Update flowdetails metadata and save it.""" if update_with: source, clone = self._fetch_flowdetail(clone=True) clone.meta.update(update_with) self._with_connection(self._save_flow_detail, source, clone) @fasteners.write_locked def change_flow_state(self, state): """Transition flow from old state to new state. Returns ``(True, old_state)`` if transition was performed, or ``(False, old_state)`` if it was ignored, or raises a :py:class:`~taskflow.exceptions.InvalidState` exception if transition is invalid. """ old_state = self.get_flow_state() if not states.check_flow_transition(old_state, state): return (False, old_state) self.set_flow_state(state) return (True, old_state) @fasteners.read_locked def get_flow_state(self): """Get state from flow details.""" source = self._flowdetail state = source.state if state is None: state = states.PENDING return state def _translate_into_history(self, ad): failure = None if ad.failure is not None: # NOTE(harlowja): Try to use our local cache to get a more # complete failure object that has a traceback (instead of the # one that is saved which will *typically* not have one)... 
failure = ad.failure fail_cache = self._failures[ad.name] try: fail = fail_cache[states.EXECUTE] if failure.matches(fail): failure = fail except KeyError: pass return retry.History(ad.results, failure=failure) @fasteners.read_locked def get_retry_history(self, retry_name): """Fetch a single retrys history.""" source, _clone = self._atomdetail_by_name( retry_name, expected_type=models.RetryDetail) return self._translate_into_history(source) @fasteners.read_locked def get_retry_histories(self): """Fetch all retrys histories.""" histories = [] for ad in self._flowdetail: if isinstance(ad, models.RetryDetail): histories.append((ad.name, self._translate_into_history(ad))) return histories ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/task.py0000664000175000017500000002366100000000000016560 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Hewlett-Packard Development Company, L.P. # Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import copy from oslo_utils import reflection import six from six.moves import map as compat_map from six.moves import reduce as compat_reduce from taskflow import atom from taskflow import logging from taskflow.types import notifier from taskflow.utils import misc LOG = logging.getLogger(__name__) # Constants passed into revert kwargs. # # Contain the execute() result (if any). REVERT_RESULT = 'result' # # The cause of the flow failure/s REVERT_FLOW_FAILURES = 'flow_failures' # Common events EVENT_UPDATE_PROGRESS = 'update_progress' @six.add_metaclass(abc.ABCMeta) class Task(atom.Atom): """An abstraction that defines a potential piece of work. This potential piece of work is expected to be able to contain functionality that defines what can be executed to accomplish that work as well as a way of defining what can be executed to reverted/undo that same piece of work. """ # Known internal events this task can have callbacks bound to (others that # are not in this set/tuple will not be able to be bound); this should be # updated and/or extended in subclasses as needed to enable or disable new # or existing internal events... TASK_EVENTS = (EVENT_UPDATE_PROGRESS,) def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None, inject=None, ignore_list=None, revert_rebind=None, revert_requires=None): if name is None: name = reflection.get_class_name(self) super(Task, self).__init__(name, provides=provides, requires=requires, auto_extract=auto_extract, rebind=rebind, inject=inject, revert_rebind=revert_rebind, revert_requires=revert_requires) self._notifier = notifier.RestrictedNotifier(self.TASK_EVENTS) @property def notifier(self): """Internal notification dispatcher/registry. 
A notification object that will dispatch events that occur related to *internal* notifications that the task internally emits to listeners (for example for progress status updates, telling others that a task has reached 50% completion...). """ return self._notifier def copy(self, retain_listeners=True): """Clone/copy this task. :param retain_listeners: retain the attached notification listeners when cloning, when false the listeners will be emptied, when true the listeners will be copied and retained :return: the copied task """ c = copy.copy(self) c._notifier = self._notifier.copy() if not retain_listeners: c._notifier.reset() return c def update_progress(self, progress): """Update task progress and notify all registered listeners. :param progress: task progress float value between 0.0 and 1.0 """ def on_clamped(): LOG.warning("Progress value must be greater or equal to 0.0 or" " less than or equal to 1.0 instead of being '%s'", progress) cleaned_progress = misc.clamp(progress, 0.0, 1.0, on_clamped=on_clamped) self._notifier.notify(EVENT_UPDATE_PROGRESS, {'progress': cleaned_progress}) class FunctorTask(Task): """Adaptor to make a task from a callable. Take any callable pair and make a task from it. NOTE(harlowja): If a name is not provided the function/method name of the ``execute`` callable will be used as the name instead (the name of the ``revert`` callable is not used). """ def __init__(self, execute, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert=None, version=None, inject=None): if not six.callable(execute): raise ValueError("Function to use for executing must be" " callable") if revert is not None: if not six.callable(revert): raise ValueError("Function to use for reverting must" " be callable") if name is None: name = reflection.get_callable_name(execute) super(FunctorTask, self).__init__(name, provides=provides, inject=inject) self._execute = execute self._revert = revert if version is not None: self.version = version mapping = self._build_arg_mapping(execute, requires, rebind, auto_extract) self.rebind, exec_requires, self.optional = mapping if revert: revert_mapping = self._build_arg_mapping(revert, requires, rebind, auto_extract) else: revert_mapping = (self.rebind, exec_requires, self.optional) (self.revert_rebind, revert_requires, self.revert_optional) = revert_mapping self.requires = exec_requires.union(revert_requires) def execute(self, *args, **kwargs): return self._execute(*args, **kwargs) def revert(self, *args, **kwargs): if self._revert: return self._revert(*args, **kwargs) else: return None class ReduceFunctorTask(Task): """General purpose Task to reduce a list by applying a function. This Task mimics the behavior of Python's built-in ``reduce`` function. The Task takes a functor (lambda or otherwise) and a list. The list is specified using the ``requires`` argument of the Task. When executed, this task calls ``reduce`` with the functor and list as arguments. The resulting value from the call to ``reduce`` is then returned after execution. """ def __init__(self, functor, requires, name=None, provides=None, auto_extract=True, rebind=None, inject=None): if not six.callable(functor): raise ValueError("Function to use for reduce must be callable") f_args = reflection.get_callable_args(functor) if len(f_args) != 2: raise ValueError("%s arguments were provided. Reduce functor " "must take exactly 2 arguments." % len(f_args)) if not misc.is_iterable(requires): raise TypeError("%s type was provided for requires. 
Requires " "must be an iterable." % type(requires)) if len(requires) < 2: raise ValueError("%s elements were provided. Requires must have " "at least 2 elements." % len(requires)) if name is None: name = reflection.get_callable_name(functor) super(ReduceFunctorTask, self).__init__(name=name, provides=provides, inject=inject, requires=requires, rebind=rebind, auto_extract=auto_extract) self._functor = functor def execute(self, *args, **kwargs): l = [kwargs[r] for r in self.requires] return compat_reduce(self._functor, l) class MapFunctorTask(Task): """General purpose Task to map a function to a list. This Task mimics the behavior of Python's built-in ``map`` function. The Task takes a functor (lambda or otherwise) and a list. The list is specified using the ``requires`` argument of the Task. When executed, this task calls ``map`` with the functor and list as arguments. The resulting list from the call to ``map`` is then returned after execution. Each value of the returned list can be bound to individual names using the ``provides`` argument, following taskflow standard behavior. Order is preserved in the returned list. """ def __init__(self, functor, requires, name=None, provides=None, auto_extract=True, rebind=None, inject=None): if not six.callable(functor): raise ValueError("Function to use for map must be callable") f_args = reflection.get_callable_args(functor) if len(f_args) != 1: raise ValueError("%s arguments were provided. Map functor must " "take exactly 1 argument." % len(f_args)) if not misc.is_iterable(requires): raise TypeError("%s type was provided for requires. Requires " "must be an iterable." % type(requires)) if name is None: name = reflection.get_callable_name(functor) super(MapFunctorTask, self).__init__(name=name, provides=provides, inject=inject, requires=requires, rebind=rebind, auto_extract=auto_extract) self._functor = functor def execute(self, *args, **kwargs): l = [kwargs[r] for r in self.requires] return list(compat_map(self._functor, l)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/test.py0000664000175000017500000002413700000000000016574 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import logging from unittest import mock import fixtures from oslotest import base import six from testtools import compat from testtools import matchers from testtools import testcase from taskflow import exceptions from taskflow.tests import utils from taskflow.utils import misc class GreaterThanEqual(object): """Matches if the item is geq than the matchers reference object.""" def __init__(self, source): self.source = source def match(self, other): if other >= self.source: return None return matchers.Mismatch("%s was not >= %s" % (other, self.source)) class FailureRegexpMatcher(object): """Matches if the failure was caused by the given exception and message. 
This will match if a given failure contains and exception of the given class type and if its string message matches to the given regular expression pattern. """ def __init__(self, exc_class, pattern): self.exc_class = exc_class self.pattern = pattern def match(self, failure): for cause in failure: if cause.check(self.exc_class) is not None: return matchers.MatchesRegex( self.pattern).match(cause.exception_str) return matchers.Mismatch("The `%s` wasn't caused by the `%s`" % (failure, self.exc_class)) class ItemsEqual(object): """Matches the items in two sequences. This matcher will validate that the provided sequence has the same elements as a reference sequence, regardless of the order. """ def __init__(self, seq): self._seq = seq self._list = list(seq) def match(self, other): other_list = list(other) extra = misc.sequence_minus(other_list, self._list) missing = misc.sequence_minus(self._list, other_list) if extra or missing: msg = ("Sequences %s and %s do not have same items." % (self._seq, other)) if missing: msg += " Extra items in first sequence: %s." % missing if extra: msg += " Extra items in second sequence: %s." % extra return matchers.Mismatch(msg) return None class TestCase(base.BaseTestCase): """Test case base class for all taskflow unit tests.""" def makeTmpDir(self): t_dir = self.useFixture(fixtures.TempDir()) return t_dir.path def assertDictEqual(self, expected, check): self.assertIsInstance(expected, dict, 'First argument is not a dictionary') self.assertIsInstance(check, dict, 'Second argument is not a dictionary') # Testtools seems to want equals objects instead of just keys? compare_dict = {} for k in list(six.iterkeys(expected)): if not isinstance(expected[k], matchers.Equals): compare_dict[k] = matchers.Equals(expected[k]) else: compare_dict[k] = expected[k] self.assertThat(matchee=check, matcher=matchers.MatchesDict(compare_dict)) def assertRaisesAttrAccess(self, exc_class, obj, attr_name): def access_func(): getattr(obj, attr_name) self.assertRaises(exc_class, access_func) def assertRaisesRegex(self, exc_class, pattern, callable_obj, *args, **kwargs): # TODO(harlowja): submit a pull/review request to testtools to add # this method to there codebase instead of having it exist in ours # since it really doesn't belong here. 
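        # The two helper matchers below work together: exceptions of an
        # unexpected type are re-raised as-is, while a matching exception is
        # captured so it can be returned to the caller after assertion.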
class ReRaiseOtherTypes(object): def match(self, matchee): if not issubclass(matchee[0], exc_class): compat.reraise(*matchee) class CaptureMatchee(object): def match(self, matchee): self.matchee = matchee[1] capture = CaptureMatchee() matcher = matchers.Raises(matchers.MatchesAll(ReRaiseOtherTypes(), matchers.MatchesException(exc_class, pattern), capture)) our_callable = testcase.Nullary(callable_obj, *args, **kwargs) self.assertThat(our_callable, matcher) return capture.matchee def assertGreater(self, first, second): matcher = matchers.GreaterThan(first) self.assertThat(second, matcher) def assertGreaterEqual(self, first, second): matcher = GreaterThanEqual(first) self.assertThat(second, matcher) def assertRegexpMatches(self, text, pattern): matcher = matchers.MatchesRegex(pattern) self.assertThat(text, matcher) def assertIsSuperAndSubsequence(self, super_seq, sub_seq, msg=None): super_seq = list(super_seq) sub_seq = list(sub_seq) current_tail = super_seq for sub_elem in sub_seq: try: super_index = current_tail.index(sub_elem) except ValueError: # element not found if msg is None: msg = ("%r is not subsequence of %r: " "element %r not found in tail %r" % (sub_seq, super_seq, sub_elem, current_tail)) self.fail(msg) else: current_tail = current_tail[super_index + 1:] def assertFailuresRegexp(self, exc_class, pattern, callable_obj, *args, **kwargs): """Asserts the callable failed with the given exception and message.""" try: with utils.wrap_all_failures(): callable_obj(*args, **kwargs) except exceptions.WrappedFailure as e: self.assertThat(e, FailureRegexpMatcher(exc_class, pattern)) def assertCountEqual(self, seq1, seq2, msg=None): matcher = ItemsEqual(seq1) self.assertThat(seq2, matcher) class MockTestCase(TestCase): def setUp(self): super(MockTestCase, self).setUp() self.master_mock = mock.Mock(name='master_mock') def patch(self, target, autospec=True, **kwargs): """Patch target and attach it to the master mock.""" f = self.useFixture(fixtures.MockPatch(target, autospec=autospec, **kwargs)) mocked = f.mock attach_as = kwargs.pop('attach_as', None) if attach_as is not None: self.master_mock.attach_mock(mocked, attach_as) return mocked def patchClass(self, module, name, autospec=True, attach_as=None): """Patches a modules class. This will create a class instance mock (using the provided name to find the class in the module) and attach a mock class the master mock to be cleaned up on test exit. 
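        A rough usage sketch (``some_module`` and ``'Worker'`` are
        illustrative names only)::

            class_mock, instance_mock = self.patchClass(some_module,
                                                        'Worker')
            obj = some_module.Worker()  # the patched class returns
                                        # ``instance_mock``
            obj.run()                   # recorded on the master mock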
""" if autospec: instance_mock = mock.Mock(spec_set=getattr(module, name)) else: instance_mock = mock.Mock() f = self.useFixture(fixtures.MockPatchObject(module, name, autospec=autospec)) class_mock = f.mock class_mock.return_value = instance_mock if attach_as is None: attach_class_as = name attach_instance_as = name.lower() else: attach_class_as = attach_as + '_class' attach_instance_as = attach_as self.master_mock.attach_mock(class_mock, attach_class_as) self.master_mock.attach_mock(instance_mock, attach_instance_as) return class_mock, instance_mock def resetMasterMock(self): self.master_mock.reset_mock() class CapturingLoggingHandler(logging.Handler): """A handler that saves record contents for post-test analysis.""" def __init__(self, level=logging.DEBUG): # It seems needed to use the old style of base class calling, we # can remove this old style when we only support py3.x logging.Handler.__init__(self, level=level) self._records = [] @property def counts(self): """Returns a dictionary with the number of records at each level.""" self.acquire() try: captured = collections.defaultdict(int) for r in self._records: captured[r.levelno] += 1 return captured finally: self.release() @property def messages(self): """Returns a dictionary with list of record messages at each level.""" self.acquire() try: captured = collections.defaultdict(list) for r in self._records: captured[r.levelno].append(r.getMessage()) return captured finally: self.release() @property def exc_infos(self): """Returns a list of all the record exc_info tuples captured.""" self.acquire() try: captured = [] for r in self._records: if r.exc_info: captured.append(r.exc_info) return captured finally: self.release() def emit(self, record): self.acquire() try: self._records.append(record) finally: self.release() def reset(self): """Resets *all* internally captured state.""" self.acquire() try: self._records = [] finally: self.release() def close(self): logging.Handler.close(self) self.reset() ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1644397810.636042 taskflow-4.6.4/taskflow/tests/0000775000175000017500000000000000000000000016376 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/__init__.py0000664000175000017500000000000000000000000020475 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/test_examples.py0000664000175000017500000001152200000000000021626 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Run examples as unit tests. This module executes examples as unit tests, thus ensuring they at least can be executed with current taskflow. 
For examples with deterministic output, the output can be put to file with same name and '.out.txt' extension; then it will be checked that output did not change. When this module is used as main module, output for all examples are generated. Please note that this will break tests as output for most examples is indeterministic (due to hash randomization for example). """ import keyword import os import re import subprocess import sys import six from taskflow import test ROOT_DIR = os.path.abspath( os.path.dirname( os.path.dirname( os.path.dirname(__file__)))) # This is used so that any uuid like data being output is removed (since it # will change per test run and will invalidate the deterministic output that # we expect to be able to check). UUID_RE = re.compile('XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX' .replace('X', '[0-9a-f]')) def safe_filename(filename): # Translates a filename into a method name, returns falsey if not # possible to perform this translation... name = re.sub("[^a-zA-Z0-9_]+", "_", filename) if not name or re.match(r"^[_]+$", name) or keyword.iskeyword(name): return False return name def root_path(*args): return os.path.join(ROOT_DIR, *args) def run_example(name): path = root_path('taskflow', 'examples', '%s.py' % name) obj = subprocess.Popen([sys.executable, path], stdout=subprocess.PIPE, stderr=subprocess.PIPE) output = obj.communicate() stdout = output[0].decode() stderr = output[1].decode() rc = obj.wait() if rc != 0: raise RuntimeError('Example %s failed, return code=%s\n' '<<>>\n%s' '<<>>\n' '<<>>\n%s' '<<>>' % (name, rc, stdout, stderr)) return stdout def expected_output_path(name): return root_path('taskflow', 'examples', '%s.out.txt' % name) def iter_examples(): examples_dir = root_path('taskflow', 'examples') for filename in os.listdir(examples_dir): path = os.path.join(examples_dir, filename) if not os.path.isfile(path): continue name, ext = os.path.splitext(filename) if ext != ".py": continue if not name.endswith('utils'): safe_name = safe_filename(name) if safe_name: yield name, safe_name class ExampleAdderMeta(type): """Translates examples into test cases/methods.""" def __new__(cls, name, parents, dct): def generate_test(example_name): def test_example(self): self._check_example(example_name) return test_example for example_name, safe_name in iter_examples(): test_name = 'test_%s' % safe_name dct[test_name] = generate_test(example_name) return type.__new__(cls, name, parents, dct) @six.add_metaclass(ExampleAdderMeta) class ExamplesTestCase(test.TestCase): """Runs the examples, and checks the outputs against expected outputs.""" def _check_example(self, name): output = run_example(name) eop = expected_output_path(name) if os.path.isfile(eop): with open(eop) as f: expected_output = f.read() # NOTE(imelnikov): on each run new uuid is generated, so we just # replace them with some constant string output = UUID_RE.sub('', output) expected_output = UUID_RE.sub('', expected_output) self.assertEqual(expected_output, output) def make_output_files(): """Generate output files for all examples.""" for example_name, _safe_name in iter_examples(): print("Running %s" % example_name) print("Please wait...") output = run_example(example_name) with open(expected_output_path(example_name), 'w') as f: f.write(output) if __name__ == '__main__': make_output_files() ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6440423 taskflow-4.6.4/taskflow/tests/unit/0000775000175000017500000000000000000000000017355 
5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/__init__.py0000664000175000017500000000000000000000000021454 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6440423 taskflow-4.6.4/taskflow/tests/unit/action_engine/0000775000175000017500000000000000000000000022157 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/__init__.py0000664000175000017500000000000000000000000024256 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/test_builder.py0000664000175000017500000003115100000000000025217 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from automaton import exceptions as excp from automaton import runners import six from taskflow.engines.action_engine import builder from taskflow.engines.action_engine import compiler from taskflow.engines.action_engine import executor from taskflow.engines.action_engine import runtime from taskflow.patterns import linear_flow as lf from taskflow import states as st from taskflow import storage from taskflow import test from taskflow.tests import utils as test_utils from taskflow.types import notifier from taskflow.utils import persistence_utils as pu class BuildersTest(test.TestCase): def _make_runtime(self, flow, initial_state=None): compilation = compiler.PatternCompiler(flow).compile() flow_detail = pu.create_flow_detail(flow) store = storage.Storage(flow_detail) nodes_iter = compilation.execution_graph.nodes(data=True) for node, node_attrs in nodes_iter: if node_attrs['kind'] in ('task', 'retry'): store.ensure_atom(node) if initial_state: store.set_flow_state(initial_state) atom_notifier = notifier.Notifier() task_executor = executor.SerialTaskExecutor() retry_executor = executor.SerialRetryExecutor() task_executor.start() self.addCleanup(task_executor.stop) r = runtime.Runtime(compilation, store, atom_notifier, task_executor, retry_executor) r.compile() return r def _make_machine(self, flow, initial_state=None): runtime = self._make_runtime(flow, initial_state=initial_state) machine, memory = runtime.builder.build({}) machine_runner = runners.FiniteRunner(machine) return (runtime, machine, memory, machine_runner) def test_run_iterations(self): flow = lf.Flow("root") tasks = test_utils.make_many( 1, task_cls=test_utils.TaskNoRequiresNoReturns) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) it = machine_runner.run_iter(builder.START) prior_state, new_state = six.next(it) self.assertEqual(st.RESUMING, new_state) 
self.assertEqual(0, len(memory.failures)) prior_state, new_state = six.next(it) self.assertEqual(st.SCHEDULING, new_state) self.assertEqual(0, len(memory.failures)) prior_state, new_state = six.next(it) self.assertEqual(st.WAITING, new_state) self.assertEqual(0, len(memory.failures)) prior_state, new_state = six.next(it) self.assertEqual(st.ANALYZING, new_state) self.assertEqual(0, len(memory.failures)) prior_state, new_state = six.next(it) self.assertEqual(builder.GAME_OVER, new_state) self.assertEqual(0, len(memory.failures)) prior_state, new_state = six.next(it) self.assertEqual(st.SUCCESS, new_state) self.assertEqual(0, len(memory.failures)) self.assertRaises(StopIteration, six.next, it) def test_run_iterations_reverted(self): flow = lf.Flow("root") tasks = test_utils.make_many( 1, task_cls=test_utils.TaskWithFailure) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) prior_state, new_state = transitions[-1] self.assertEqual(st.REVERTED, new_state) self.assertEqual([], memory.failures) self.assertEqual(st.REVERTED, runtime.storage.get_atom_state(tasks[0].name)) def test_run_iterations_failure(self): flow = lf.Flow("root") tasks = test_utils.make_many( 1, task_cls=test_utils.NastyFailingTask) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) prior_state, new_state = transitions[-1] self.assertEqual(st.FAILURE, new_state) self.assertEqual(1, len(memory.failures)) failure = memory.failures[0] self.assertTrue(failure.check(RuntimeError)) self.assertEqual(st.REVERT_FAILURE, runtime.storage.get_atom_state(tasks[0].name)) def test_run_iterations_suspended(self): flow = lf.Flow("root") tasks = test_utils.make_many( 2, task_cls=test_utils.TaskNoRequiresNoReturns) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = [] for prior_state, new_state in machine_runner.run_iter(builder.START): transitions.append((new_state, memory.failures)) if new_state == st.ANALYZING: runtime.storage.set_flow_state(st.SUSPENDED) state, failures = transitions[-1] self.assertEqual(st.SUSPENDED, state) self.assertEqual([], failures) self.assertEqual(st.SUCCESS, runtime.storage.get_atom_state(tasks[0].name)) self.assertEqual(st.PENDING, runtime.storage.get_atom_state(tasks[1].name)) def test_run_iterations_suspended_failure(self): flow = lf.Flow("root") sad_tasks = test_utils.make_many( 1, task_cls=test_utils.NastyFailingTask) flow.add(*sad_tasks) happy_tasks = test_utils.make_many( 1, task_cls=test_utils.TaskNoRequiresNoReturns, offset=1) flow.add(*happy_tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = [] for prior_state, new_state in machine_runner.run_iter(builder.START): transitions.append((new_state, memory.failures)) if new_state == st.ANALYZING: runtime.storage.set_flow_state(st.SUSPENDED) state, failures = transitions[-1] self.assertEqual(st.SUSPENDED, state) self.assertEqual([], failures) self.assertEqual(st.PENDING, runtime.storage.get_atom_state(happy_tasks[0].name)) self.assertEqual(st.FAILURE, runtime.storage.get_atom_state(sad_tasks[0].name)) def test_builder_manual_process(self): flow = lf.Flow("root") tasks = test_utils.make_many( 1, task_cls=test_utils.TaskNoRequiresNoReturns) flow.add(*tasks) runtime, machine, memory, 
machine_runner = self._make_machine( flow, initial_state=st.RUNNING) self.assertRaises(excp.NotInitialized, machine.process_event, 'poke') # Should now be pending... self.assertEqual(st.PENDING, runtime.storage.get_atom_state(tasks[0].name)) machine.initialize() self.assertEqual(builder.UNDEFINED, machine.current_state) self.assertFalse(machine.terminated) self.assertRaises(excp.NotFound, machine.process_event, 'poke') last_state = machine.current_state reaction, terminal = machine.process_event(builder.START) self.assertFalse(terminal) self.assertIsNotNone(reaction) self.assertEqual(st.RESUMING, machine.current_state) self.assertRaises(excp.NotFound, machine.process_event, 'poke') last_state = machine.current_state cb, args, kwargs = reaction next_event = cb(last_state, machine.current_state, builder.START, *args, **kwargs) reaction, terminal = machine.process_event(next_event) self.assertFalse(terminal) self.assertIsNotNone(reaction) self.assertEqual(st.SCHEDULING, machine.current_state) self.assertRaises(excp.NotFound, machine.process_event, 'poke') last_state = machine.current_state cb, args, kwargs = reaction next_event = cb(last_state, machine.current_state, next_event, *args, **kwargs) reaction, terminal = machine.process_event(next_event) self.assertFalse(terminal) self.assertEqual(st.WAITING, machine.current_state) self.assertRaises(excp.NotFound, machine.process_event, 'poke') # Should now be running... self.assertEqual(st.RUNNING, runtime.storage.get_atom_state(tasks[0].name)) last_state = machine.current_state cb, args, kwargs = reaction next_event = cb(last_state, machine.current_state, next_event, *args, **kwargs) reaction, terminal = machine.process_event(next_event) self.assertFalse(terminal) self.assertIsNotNone(reaction) self.assertEqual(st.ANALYZING, machine.current_state) self.assertRaises(excp.NotFound, machine.process_event, 'poke') last_state = machine.current_state cb, args, kwargs = reaction next_event = cb(last_state, machine.current_state, next_event, *args, **kwargs) reaction, terminal = machine.process_event(next_event) self.assertFalse(terminal) self.assertEqual(builder.GAME_OVER, machine.current_state) # Should now be done... 
self.assertEqual(st.SUCCESS, runtime.storage.get_atom_state(tasks[0].name)) def test_builder_automatic_process(self): flow = lf.Flow("root") tasks = test_utils.make_many( 1, task_cls=test_utils.TaskNoRequiresNoReturns) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) self.assertEqual((builder.UNDEFINED, st.RESUMING), transitions[0]) self.assertEqual((builder.GAME_OVER, st.SUCCESS), transitions[-1]) self.assertEqual(st.SUCCESS, runtime.storage.get_atom_state(tasks[0].name)) def test_builder_automatic_process_failure(self): flow = lf.Flow("root") tasks = test_utils.make_many(1, task_cls=test_utils.NastyFailingTask) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) self.assertEqual((builder.GAME_OVER, st.FAILURE), transitions[-1]) self.assertEqual(1, len(memory.failures)) def test_builder_automatic_process_reverted(self): flow = lf.Flow("root") tasks = test_utils.make_many(1, task_cls=test_utils.TaskWithFailure) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) self.assertEqual((builder.GAME_OVER, st.REVERTED), transitions[-1]) self.assertEqual(st.REVERTED, runtime.storage.get_atom_state(tasks[0].name)) def test_builder_expected_transition_occurrences(self): flow = lf.Flow("root") tasks = test_utils.make_many( 10, task_cls=test_utils.TaskNoRequiresNoReturns) flow.add(*tasks) runtime, machine, memory, machine_runner = self._make_machine( flow, initial_state=st.RUNNING) transitions = list(machine_runner.run_iter(builder.START)) occurrences = dict((t, transitions.count(t)) for t in transitions) self.assertEqual(10, occurrences.get((st.SCHEDULING, st.WAITING))) self.assertEqual(10, occurrences.get((st.WAITING, st.ANALYZING))) self.assertEqual(9, occurrences.get((st.ANALYZING, st.SCHEDULING))) self.assertEqual(1, occurrences.get((builder.GAME_OVER, st.SUCCESS))) self.assertEqual(1, occurrences.get((builder.UNDEFINED, st.RESUMING))) self.assertEqual(0, len(memory.next_up)) self.assertEqual(0, len(memory.not_done)) self.assertEqual(0, len(memory.failures)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/test_compile.py0000664000175000017500000005631700000000000025234 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
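# ---------------------------------------------------------------------------
# Hedged sketch (not part of the original test module): the compiler tests
# below inspect execution graphs built exactly like this.  The flow name and
# the use of the test task helpers are placeholders for illustration only.
from taskflow.engines.action_engine import compiler
from taskflow.patterns import linear_flow as lf
from taskflow.tests import utils as test_utils


def _compile_sketch():
    a, b = test_utils.make_many(2)
    flo = lf.Flow('sketch').add(a, b)
    compilation = compiler.PatternCompiler(flo).compile()
    g = compilation.execution_graph
    # Every flow contributes a start node ('sketch') and a terminal node
    # ('sketch[$]') in addition to its atoms, so 2 tasks -> 4 graph nodes
    # and 3 edges: sketch -> a -> b -> sketch[$].
    return len(g), g.number_of_edges()  # expected: (4, 3)
# ---------------------------------------------------------------------------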
from taskflow import engines from taskflow.engines.action_engine import compiler from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import retry from taskflow import test from taskflow.tests import utils as test_utils def _replicate_graph_with_names(compilation): # Turn a graph of nodes into a graph of names only so that # testing can use those names instead of having to use the exact # node objects themselves (which is problematic for any end nodes that # are added into the graph *dynamically*, and are not there in the # original/source flow). g = compilation.execution_graph n_g = g.__class__(name=g.name) for node, node_data in g.nodes(data=True): n_g.add_node(node.name, attr_dict=node_data) for u, v, u_v_data in g.edges(data=True): n_g.add_edge(u.name, v.name, attr_dict=u_v_data) return n_g class PatternCompileTest(test.TestCase): def test_task(self): task = test_utils.DummyTask(name='a') g = _replicate_graph_with_names( compiler.PatternCompiler(task).compile()) self.assertEqual(['a'], list(g.nodes())) self.assertEqual([], list(g.edges())) def test_retry(self): r = retry.AlwaysRevert('r1') self.assertRaises(TypeError, compiler.PatternCompiler(r).compile) def test_wrong_object(self): msg_regex = '^Unknown object .* requested to compile' self.assertRaisesRegex(TypeError, msg_regex, compiler.PatternCompiler(42).compile) def test_empty(self): flo = lf.Flow("test") compiler.PatternCompiler(flo).compile() def test_linear(self): a, b, c, d = test_utils.make_many(4) flo = lf.Flow("test") flo.add(a, b, c) inner_flo = lf.Flow("sub-test") inner_flo.add(d) flo.add(inner_flo) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(8, len(g)) order = list(g.topological_sort()) self.assertEqual(['test', 'a', 'b', 'c', "sub-test", 'd', "sub-test[$]", 'test[$]'], order) self.assertTrue(g.has_edge('c', "sub-test")) self.assertTrue(g.has_edge("sub-test", 'd')) self.assertEqual({'invariant': True}, g.get_edge_data("sub-test", 'd')) self.assertEqual(['test[$]'], list(g.no_successors_iter())) self.assertEqual(['test'], list(g.no_predecessors_iter())) def test_invalid(self): a, b, c = test_utils.make_many(3) flo = lf.Flow("test") flo.add(a, b, c) flo.add(flo) self.assertRaises(ValueError, compiler.PatternCompiler(flo).compile) def test_unordered(self): a, b, c, d = test_utils.make_many(4) flo = uf.Flow("test") flo.add(a, b, c, d) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(6, len(g)) self.assertCountEqual(g.edges(), [ ('test', 'a'), ('test', 'b'), ('test', 'c'), ('test', 'd'), ('a', 'test[$]'), ('b', 'test[$]'), ('c', 'test[$]'), ('d', 'test[$]'), ]) self.assertEqual(set(['test']), set(g.no_predecessors_iter())) def test_linear_nested(self): a, b, c, d = test_utils.make_many(4) flo = lf.Flow("test") flo.add(a, b) inner_flo = uf.Flow("test2") inner_flo.add(c, d) flo.add(inner_flo) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(8, len(g)) sub_g = g.subgraph(['a', 'b']) self.assertFalse(sub_g.has_edge('b', 'a')) self.assertTrue(sub_g.has_edge('a', 'b')) self.assertEqual({'invariant': True}, sub_g.get_edge_data("a", "b")) sub_g = g.subgraph(['c', 'd']) self.assertEqual(0, sub_g.number_of_edges()) # This ensures that c and d do not start executing until after b. 
self.assertTrue(g.has_edge('b', 'test2')) self.assertTrue(g.has_edge('test2', 'c')) self.assertTrue(g.has_edge('test2', 'd')) def test_unordered_nested(self): a, b, c, d = test_utils.make_many(4) flo = uf.Flow("test") flo.add(a, b) flo2 = lf.Flow("test2") flo2.add(c, d) flo.add(flo2) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(8, len(g)) self.assertCountEqual(g.edges(), [ ('test', 'a'), ('test', 'b'), ('test', 'test2'), ('test2', 'c'), ('c', 'd'), ('d', 'test2[$]'), ('test2[$]', 'test[$]'), ('a', 'test[$]'), ('b', 'test[$]'), ]) def test_unordered_nested_in_linear(self): a, b, c, d = test_utils.make_many(4) inner_flo = uf.Flow('ut').add(b, c) flo = lf.Flow('lt').add(a, inner_flo, d) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(8, len(g)) self.assertCountEqual(g.edges(), [ ('lt', 'a'), ('a', 'ut'), ('ut', 'b'), ('ut', 'c'), ('b', 'ut[$]'), ('c', 'ut[$]'), ('ut[$]', 'd'), ('d', 'lt[$]'), ]) def test_graph(self): a, b, c, d = test_utils.make_many(4) flo = gf.Flow("test") flo.add(a, b, c, d) compilation = compiler.PatternCompiler(flo).compile() self.assertEqual(6, len(compilation.execution_graph)) self.assertEqual(8, compilation.execution_graph.number_of_edges()) def test_graph_nested(self): a, b, c, d, e, f, g = test_utils.make_many(7) flo = gf.Flow("test") flo.add(a, b, c, d) flo2 = lf.Flow('test2') flo2.add(e, f, g) flo.add(flo2) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(11, len(g)) self.assertCountEqual(g.edges(), [ ('test', 'a'), ('test', 'b'), ('test', 'c'), ('test', 'd'), ('a', 'test[$]'), ('b', 'test[$]'), ('c', 'test[$]'), ('d', 'test[$]'), ('test', 'test2'), ('test2', 'e'), ('e', 'f'), ('f', 'g'), ('g', 'test2[$]'), ('test2[$]', 'test[$]'), ]) def test_graph_nested_graph(self): a, b, c, d, e, f, g = test_utils.make_many(7) flo = gf.Flow("test") flo.add(a, b, c, d) flo2 = gf.Flow('test2') flo2.add(e, f, g) flo.add(flo2) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(11, len(g)) self.assertCountEqual(g.edges(), [ ('test', 'a'), ('test', 'b'), ('test', 'c'), ('test', 'd'), ('test', 'test2'), ('test2', 'e'), ('test2', 'f'), ('test2', 'g'), ('e', 'test2[$]'), ('f', 'test2[$]'), ('g', 'test2[$]'), ('test2[$]', 'test[$]'), ('a', 'test[$]'), ('b', 'test[$]'), ('c', 'test[$]'), ('d', 'test[$]'), ]) def test_graph_links(self): a, b, c, d = test_utils.make_many(4) flo = gf.Flow("test") flo.add(a, b, c, d) flo.link(a, b) flo.link(b, c) flo.link(c, d) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(6, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'a', {'invariant': True}), ('a', 'b', {'manual': True}), ('b', 'c', {'manual': True}), ('c', 'd', {'manual': True}), ('d', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], g.no_predecessors_iter()) self.assertCountEqual(['test[$]'], g.no_successors_iter()) def test_graph_dependencies(self): a = test_utils.ProvidesRequiresTask('a', provides=['x'], requires=[]) b = test_utils.ProvidesRequiresTask('b', provides=[], requires=['x']) flo = gf.Flow("test").add(a, b) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(4, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'a', {'invariant': True}), ('a', 'b', {'reasons': set(['x'])}), ('b', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], g.no_predecessors_iter()) 
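# ---------------------------------------------------------------------------
# Hedged sketch (task and flow names invented for illustration): graph flows
# infer ordering from provides/requires, which is what the dependency
# assertions above verify.  The compiler records *why* an edge exists in the
# edge data ('reasons').
from taskflow.engines.action_engine import compiler
from taskflow.patterns import graph_flow as gf
from taskflow.tests import utils as test_utils


def _dependency_sketch():
    producer = test_utils.ProvidesRequiresTask(
        'producer', provides=['x'], requires=[])
    consumer = test_utils.ProvidesRequiresTask(
        'consumer', provides=[], requires=['x'])
    flo = gf.Flow('deps').add(producer, consumer)
    g = compiler.PatternCompiler(flo).compile().execution_graph
    # expected: {'reasons': {'x'}} -- the edge exists because of symbol 'x'
    return g.get_edge_data(producer, consumer)
# ---------------------------------------------------------------------------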
self.assertCountEqual(['test[$]'], g.no_successors_iter()) def test_graph_nested_requires(self): a = test_utils.ProvidesRequiresTask('a', provides=['x'], requires=[]) b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[]) c = test_utils.ProvidesRequiresTask('c', provides=[], requires=['x']) inner_flo = lf.Flow("test2").add(b, c) flo = gf.Flow("test").add(a, inner_flo) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(7, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'a', {'invariant': True}), ('test2', 'b', {'invariant': True}), ('a', 'test2', {'reasons': set(['x'])}), ('b', 'c', {'invariant': True}), ('c', 'test2[$]', {'invariant': True}), ('test2[$]', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], list(g.no_predecessors_iter())) self.assertCountEqual(['test[$]'], list(g.no_successors_iter())) def test_graph_nested_provides(self): a = test_utils.ProvidesRequiresTask('a', provides=[], requires=['x']) b = test_utils.ProvidesRequiresTask('b', provides=['x'], requires=[]) c = test_utils.ProvidesRequiresTask('c', provides=[], requires=[]) inner_flo = lf.Flow("test2").add(b, c) flo = gf.Flow("test").add(a, inner_flo) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(7, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'test2', {'invariant': True}), ('a', 'test[$]', {'invariant': True}), # The 'x' requirement is produced out of test2... ('test2[$]', 'a', {'reasons': set(['x'])}), ('test2', 'b', {'invariant': True}), ('b', 'c', {'invariant': True}), ('c', 'test2[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], g.no_predecessors_iter()) self.assertCountEqual(['test[$]'], g.no_successors_iter()) def test_empty_flow_in_linear_flow(self): flo = lf.Flow('lf') a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[]) b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[]) empty_flo = gf.Flow("empty") flo.add(a, empty_flo, b) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertCountEqual(g.edges(), [ ("lf", "a"), ("a", "empty"), ("empty", "empty[$]"), ("empty[$]", "b"), ("b", "lf[$]"), ]) def test_many_empty_in_graph_flow(self): flo = gf.Flow('root') a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[]) flo.add(a) b = lf.Flow('b') b_0 = test_utils.ProvidesRequiresTask('b.0', provides=[], requires=[]) b_1 = lf.Flow('b.1') b_2 = lf.Flow('b.2') b_3 = test_utils.ProvidesRequiresTask('b.3', provides=[], requires=[]) b.add(b_0, b_1, b_2, b_3) flo.add(b) c = lf.Flow('c') c_0 = lf.Flow('c.0') c_1 = lf.Flow('c.1') c_2 = lf.Flow('c.2') c.add(c_0, c_1, c_2) flo.add(c) d = test_utils.ProvidesRequiresTask('d', provides=[], requires=[]) flo.add(d) flo.link(b, d) flo.link(a, d) flo.link(c, d) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertTrue(g.has_edge('root', 'a')) self.assertTrue(g.has_edge('root', 'b')) self.assertTrue(g.has_edge('root', 'c')) self.assertTrue(g.has_edge('b.0', 'b.1')) self.assertTrue(g.has_edge('b.1[$]', 'b.2')) self.assertTrue(g.has_edge('b.2[$]', 'b.3')) self.assertTrue(g.has_edge('c.0[$]', 'c.1')) self.assertTrue(g.has_edge('c.1[$]', 'c.2')) self.assertTrue(g.has_edge('a', 'd')) self.assertTrue(g.has_edge('b[$]', 'd')) self.assertTrue(g.has_edge('c[$]', 'd')) self.assertEqual(20, len(g)) def test_empty_flow_in_nested_flow(self): flow = lf.Flow('lf') a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[]) b = 
test_utils.ProvidesRequiresTask('b', provides=[], requires=[]) flow2 = lf.Flow("lf-2") c = test_utils.ProvidesRequiresTask('c', provides=[], requires=[]) d = test_utils.ProvidesRequiresTask('d', provides=[], requires=[]) empty_flow = gf.Flow("empty") flow2.add(c, empty_flow, d) flow.add(a, flow2, b) g = _replicate_graph_with_names( compiler.PatternCompiler(flow).compile()) for u, v in [('lf', 'a'), ('a', 'lf-2'), ('lf-2', 'c'), ('c', 'empty'), ('empty[$]', 'd'), ('d', 'lf-2[$]'), ('lf-2[$]', 'b'), ('b', 'lf[$]')]: self.assertTrue(g.has_edge(u, v)) def test_empty_flow_in_graph_flow(self): flow = lf.Flow('lf') a = test_utils.ProvidesRequiresTask('a', provides=['a'], requires=[]) b = test_utils.ProvidesRequiresTask('b', provides=[], requires=['a']) empty_flow = lf.Flow("empty") flow.add(a, empty_flow, b) compilation = compiler.PatternCompiler(flow).compile() g = compilation.execution_graph self.assertTrue(g.has_edge(flow, a)) self.assertTrue(g.has_edge(a, empty_flow)) empty_flow_successors = list(g.successors(empty_flow)) self.assertEqual(1, len(empty_flow_successors)) empty_flow_terminal = empty_flow_successors[0] self.assertIs(empty_flow, empty_flow_terminal.flow) self.assertEqual(compiler.FLOW_END, g.nodes[empty_flow_terminal]['kind']) self.assertTrue(g.has_edge(empty_flow_terminal, b)) def test_empty_flow_in_graph_flow_linkage(self): flow = gf.Flow('lf') a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[]) b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[]) empty_flow = lf.Flow("empty") flow.add(a, empty_flow, b) flow.link(a, b) compilation = compiler.PatternCompiler(flow).compile() g = compilation.execution_graph self.assertTrue(g.has_edge(a, b)) self.assertTrue(g.has_edge(flow, a)) self.assertTrue(g.has_edge(flow, empty_flow)) def test_checks_for_dups(self): flo = gf.Flow("test").add( test_utils.DummyTask(name="a"), test_utils.DummyTask(name="a") ) e = engines.load(flo) self.assertRaisesRegex(exc.Duplicate, '^Atoms with duplicate names', e.compile) def test_checks_for_dups_globally(self): flo = gf.Flow("test").add( gf.Flow("int1").add(test_utils.DummyTask(name="a")), gf.Flow("int2").add(test_utils.DummyTask(name="a"))) e = engines.load(flo) self.assertRaisesRegex(exc.Duplicate, '^Atoms with duplicate names', e.compile) def test_retry_in_linear_flow(self): flo = lf.Flow("test", retry.AlwaysRevert("c")) compilation = compiler.PatternCompiler(flo).compile() self.assertEqual(3, len(compilation.execution_graph)) self.assertEqual(2, compilation.execution_graph.number_of_edges()) def test_retry_in_unordered_flow(self): flo = uf.Flow("test", retry.AlwaysRevert("c")) compilation = compiler.PatternCompiler(flo).compile() self.assertEqual(3, len(compilation.execution_graph)) self.assertEqual(2, compilation.execution_graph.number_of_edges()) def test_retry_in_graph_flow(self): flo = gf.Flow("test", retry.AlwaysRevert("c")) compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(3, len(g)) self.assertEqual(2, g.number_of_edges()) def test_retry_in_nested_flows(self): c1 = retry.AlwaysRevert("c1") c2 = retry.AlwaysRevert("c2") inner_flo = lf.Flow("test2", c2) flo = lf.Flow("test", c1).add(inner_flo) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(6, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'c1', {'invariant': True}), ('c1', 'test2', {'invariant': True, 'retry': True}), ('test2', 'c2', {'invariant': True}), ('c2', 'test2[$]', {'invariant': True}), ('test2[$]', 
'test[$]', {'invariant': True}), ]) self.assertIs(c1, g.nodes['c2']['retry']) self.assertCountEqual(['test'], list(g.no_predecessors_iter())) self.assertCountEqual(['test[$]'], list(g.no_successors_iter())) def test_retry_in_linear_flow_with_tasks(self): c = retry.AlwaysRevert("c") a, b = test_utils.make_many(2) flo = lf.Flow("test", c).add(a, b) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(5, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'c', {'invariant': True}), ('a', 'b', {'invariant': True}), ('c', 'a', {'invariant': True, 'retry': True}), ('b', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], g.no_predecessors_iter()) self.assertCountEqual(['test[$]'], g.no_successors_iter()) self.assertIs(c, g.nodes['a']['retry']) self.assertIs(c, g.nodes['b']['retry']) def test_retry_in_unordered_flow_with_tasks(self): c = retry.AlwaysRevert("c") a, b = test_utils.make_many(2) flo = uf.Flow("test", c).add(a, b) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(5, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'c', {'invariant': True}), ('c', 'a', {'invariant': True, 'retry': True}), ('c', 'b', {'invariant': True, 'retry': True}), ('b', 'test[$]', {'invariant': True}), ('a', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], list(g.no_predecessors_iter())) self.assertCountEqual(['test[$]'], list(g.no_successors_iter())) self.assertIs(c, g.nodes['a']['retry']) self.assertIs(c, g.nodes['b']['retry']) def test_retry_in_graph_flow_with_tasks(self): r = retry.AlwaysRevert("r") a, b, c = test_utils.make_many(3) flo = gf.Flow("test", r).add(a, b, c).link(b, c) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertCountEqual(g.edges(data=True), [ ('test', 'r', {'invariant': True}), ('r', 'a', {'invariant': True, 'retry': True}), ('r', 'b', {'invariant': True, 'retry': True}), ('b', 'c', {'manual': True}), ('a', 'test[$]', {'invariant': True}), ('c', 'test[$]', {'invariant': True}), ]) self.assertCountEqual(['test'], g.no_predecessors_iter()) self.assertCountEqual(['test[$]'], g.no_successors_iter()) self.assertIs(r, g.nodes['a']['retry']) self.assertIs(r, g.nodes['b']['retry']) self.assertIs(r, g.nodes['c']['retry']) def test_retries_hierarchy(self): c1 = retry.AlwaysRevert("c1") c2 = retry.AlwaysRevert("c2") a, b, c, d = test_utils.make_many(4) inner_flo = lf.Flow("test2", c2).add(b, c) flo = lf.Flow("test", c1).add(a, inner_flo, d) g = _replicate_graph_with_names( compiler.PatternCompiler(flo).compile()) self.assertEqual(10, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'c1', {'invariant': True}), ('c1', 'a', {'invariant': True, 'retry': True}), ('a', 'test2', {'invariant': True}), ('test2', 'c2', {'invariant': True}), ('c2', 'b', {'invariant': True, 'retry': True}), ('b', 'c', {'invariant': True}), ('c', 'test2[$]', {'invariant': True}), ('test2[$]', 'd', {'invariant': True}), ('d', 'test[$]', {'invariant': True}), ]) self.assertIs(c1, g.nodes['a']['retry']) self.assertIs(c1, g.nodes['d']['retry']) self.assertIs(c2, g.nodes['b']['retry']) self.assertIs(c2, g.nodes['c']['retry']) self.assertIs(c1, g.nodes['c2']['retry']) self.assertIsNone(g.nodes['c1'].get('retry')) def test_retry_subflows_hierarchy(self): c1 = retry.AlwaysRevert("c1") a, b, c, d = test_utils.make_many(4) inner_flo = lf.Flow("test2").add(b, c) flo = lf.Flow("test", c1).add(a, inner_flo, d) g = _replicate_graph_with_names( 
compiler.PatternCompiler(flo).compile()) self.assertEqual(9, len(g)) self.assertCountEqual(g.edges(data=True), [ ('test', 'c1', {'invariant': True}), ('c1', 'a', {'invariant': True, 'retry': True}), ('a', 'test2', {'invariant': True}), ('test2', 'b', {'invariant': True}), ('b', 'c', {'invariant': True}), ('c', 'test2[$]', {'invariant': True}), ('test2[$]', 'd', {'invariant': True}), ('d', 'test[$]', {'invariant': True}), ]) self.assertIs(c1, g.nodes['a']['retry']) self.assertIs(c1, g.nodes['d']['retry']) self.assertIs(c1, g.nodes['b']['retry']) self.assertIs(c1, g.nodes['c']['retry']) self.assertIsNone(g.nodes['c1'].get('retry')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/test_creation.py0000664000175000017500000000667500000000000025412 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import futurist import testtools from taskflow.engines.action_engine import engine from taskflow.engines.action_engine import executor from taskflow.engines.action_engine import process_executor from taskflow.patterns import linear_flow as lf from taskflow.persistence import backends from taskflow import test from taskflow.tests import utils from taskflow.utils import eventlet_utils as eu from taskflow.utils import persistence_utils as pu class ParallelCreationTest(test.TestCase): @staticmethod def _create_engine(**kwargs): flow = lf.Flow('test-flow').add(utils.DummyTask()) backend = backends.fetch({'connection': 'memory'}) flow_detail = pu.create_flow_detail(flow, backend=backend) options = kwargs.copy() return engine.ParallelActionEngine(flow, flow_detail, backend, options) def test_thread_string_creation(self): for s in ['threads', 'threaded', 'thread']: eng = self._create_engine(executor=s) self.assertIsInstance(eng._task_executor, executor.ParallelThreadTaskExecutor) def test_process_string_creation(self): for s in ['process', 'processes']: eng = self._create_engine(executor=s) self.assertIsInstance(eng._task_executor, process_executor.ParallelProcessTaskExecutor) def test_thread_executor_creation(self): with futurist.ThreadPoolExecutor(1) as e: eng = self._create_engine(executor=e) self.assertIsInstance(eng._task_executor, executor.ParallelThreadTaskExecutor) def test_process_executor_creation(self): with futurist.ProcessPoolExecutor(1) as e: eng = self._create_engine(executor=e) self.assertIsInstance(eng._task_executor, process_executor.ParallelProcessTaskExecutor) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') def test_green_executor_creation(self): with futurist.GreenThreadPoolExecutor(1) as e: eng = self._create_engine(executor=e) self.assertIsInstance(eng._task_executor, executor.ParallelThreadTaskExecutor) def test_sync_executor_creation(self): with futurist.SynchronousExecutor() as e: eng = self._create_engine(executor=e) self.assertIsInstance(eng._task_executor, 
executor.ParallelThreadTaskExecutor) def test_invalid_creation(self): self.assertRaises(ValueError, self._create_engine, executor='crap') self.assertRaises(TypeError, self._create_engine, executor=2) self.assertRaises(TypeError, self._create_engine, executor=object()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/test_process_executor.py0000664000175000017500000000657700000000000027203 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import asyncore import errno import socket import threading from taskflow.engines.action_engine import process_executor as pu from taskflow import task from taskflow import test from taskflow.test import mock from taskflow.tests import utils as test_utils class ProcessExecutorHelpersTest(test.TestCase): def test_reader(self): capture_buf = [] def do_capture(identity, message_capture_func): capture_buf.append(message_capture_func()) r = pu.Reader(b"secret", do_capture) for data in pu._encode_message(b"secret", ['hi'], b'me'): self.assertEqual(len(data), r.bytes_needed) r.feed(data) self.assertEqual(1, len(capture_buf)) self.assertEqual(['hi'], capture_buf[0]) def test_bad_hmac_reader(self): r = pu.Reader(b"secret-2", lambda ident, capture_func: capture_func()) in_data = b"".join(pu._encode_message(b"secret", ['hi'], b'me')) self.assertRaises(pu.BadHmacValueError, r.feed, in_data) @mock.patch("socket.socket") def test_no_connect_channel(self, mock_socket_factory): mock_sock = mock.MagicMock() mock_socket_factory.return_value = mock_sock mock_sock.connect.side_effect = socket.error(errno.ECONNREFUSED, 'broken') c = pu.Channel(2222, b"me", b"secret") self.assertRaises(socket.error, c.send, "hi") self.assertTrue(c.dead) self.assertTrue(mock_sock.close.called) def test_send_and_dispatch(self): details_capture = [] t = test_utils.DummyTask("rcver") t.notifier.register( task.EVENT_UPDATE_PROGRESS, lambda _event_type, details: details_capture.append(details)) d = pu.Dispatcher({}, b'secret', b'server-josh') d.setup() d.targets[b'child-josh'] = t s = threading.Thread(target=asyncore.loop, kwargs={'map': d.map}) s.start() self.addCleanup(s.join) c = pu.Channel(d.port, b'child-josh', b'secret') self.addCleanup(c.close) send_what = [ {'progress': 0.1}, {'progress': 0.2}, {'progress': 0.3}, {'progress': 0.4}, {'progress': 0.5}, {'progress': 0.6}, {'progress': 0.7}, {'progress': 0.8}, {'progress': 0.9}, ] e_s = pu.EventSender(c) for details in send_what: e_s(task.EVENT_UPDATE_PROGRESS, details) # This forces the thread to shutdown (since the asyncore loop # will exit when no more sockets exist to process...) 
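# ---------------------------------------------------------------------------
# Illustrative sketch (task name invented; not part of the original test
# module): the dispatcher/channel plumbing tested below forwards task
# progress events from a worker process back to the parent.  The public side
# of that contract is the task progress notifier, roughly as sketched here;
# it is assumed that Task.update_progress() emits EVENT_UPDATE_PROGRESS with
# a details dict containing 'progress', matching the payloads used in the
# send-and-dispatch test.
from taskflow import task
from taskflow.tests import utils as test_utils


def _progress_sketch():
    seen = []
    t = test_utils.DummyTask("progress-demo")
    t.notifier.register(
        task.EVENT_UPDATE_PROGRESS,
        lambda _event_type, details: seen.append(details['progress']))
    t.update_progress(0.5)
    return seen  # expected: [0.5]
# ---------------------------------------------------------------------------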
d.close() self.assertEqual(len(send_what), len(details_capture)) self.assertEqual(send_what, details_capture) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/action_engine/test_scoping.py0000664000175000017500000002607700000000000025246 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.engines.action_engine import compiler from taskflow.engines.action_engine import scopes as sc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import test from taskflow.tests import utils as test_utils def _get_scopes(compilation, atom, names_only=True): walker = sc.ScopeWalker(compilation, atom, names_only=names_only) return list(iter(walker)) class LinearScopingTest(test.TestCase): def test_unknown(self): r = lf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r.add(r_1) r_2 = test_utils.TaskOneReturn("root.2") c = compiler.PatternCompiler(r).compile() self.assertRaises(ValueError, _get_scopes, c, r_2) def test_empty(self): r = lf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r.add(r_1) c = compiler.PatternCompiler(r).compile() self.assertIn(r_1, c.execution_graph) self.assertIsNotNone(c.hierarchy.find(r_1)) walker = sc.ScopeWalker(c, r_1) scopes = list(walker) self.assertEqual([], scopes) def test_single_prior_linear(self): r = lf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r_2 = test_utils.TaskOneReturn("root.2") r.add(r_1, r_2) c = compiler.PatternCompiler(r).compile() for a in r: self.assertIn(a, c.execution_graph) self.assertIsNotNone(c.hierarchy.find(a)) self.assertEqual([], _get_scopes(c, r_1)) self.assertEqual([['root.1']], _get_scopes(c, r_2)) def test_nested_prior_linear(self): r = lf.Flow("root") r.add(test_utils.TaskOneReturn("root.1"), test_utils.TaskOneReturn("root.2")) sub_r = lf.Flow("subroot") sub_r_1 = test_utils.TaskOneReturn("subroot.1") sub_r.add(sub_r_1) r.add(sub_r) c = compiler.PatternCompiler(r).compile() self.assertEqual([[], ['root.2', 'root.1']], _get_scopes(c, sub_r_1)) def test_nested_prior_linear_begin_middle_end(self): r = lf.Flow("root") begin_r = test_utils.TaskOneReturn("root.1") r.add(begin_r, test_utils.TaskOneReturn("root.2")) middle_r = test_utils.TaskOneReturn("root.3") r.add(middle_r) sub_r = lf.Flow("subroot") sub_r.add(test_utils.TaskOneReturn("subroot.1"), test_utils.TaskOneReturn("subroot.2")) r.add(sub_r) end_r = test_utils.TaskOneReturn("root.4") r.add(end_r) c = compiler.PatternCompiler(r).compile() self.assertEqual([], _get_scopes(c, begin_r)) self.assertEqual([['root.2', 'root.1']], _get_scopes(c, middle_r)) self.assertEqual([['subroot.2', 'subroot.1', 'root.3', 'root.2', 'root.1']], _get_scopes(c, end_r)) class GraphScopingTest(test.TestCase): def test_dependent(self): r = gf.Flow("root") customer = 
test_utils.ProvidesRequiresTask("customer", provides=['dog'], requires=[]) washer = test_utils.ProvidesRequiresTask("washer", requires=['dog'], provides=['wash']) dryer = test_utils.ProvidesRequiresTask("dryer", requires=['dog', 'wash'], provides=['dry_dog']) shaved = test_utils.ProvidesRequiresTask("shaver", requires=['dry_dog'], provides=['shaved_dog']) happy_customer = test_utils.ProvidesRequiresTask( "happy_customer", requires=['shaved_dog'], provides=['happiness']) r.add(customer, washer, dryer, shaved, happy_customer) c = compiler.PatternCompiler(r).compile() self.assertEqual([], _get_scopes(c, customer)) self.assertEqual([['washer', 'customer']], _get_scopes(c, dryer)) self.assertEqual([['shaver', 'dryer', 'washer', 'customer']], _get_scopes(c, happy_customer)) def test_no_visible(self): r = gf.Flow("root") atoms = [] for i in range(0, 10): atoms.append(test_utils.TaskOneReturn("root.%s" % i)) r.add(*atoms) c = compiler.PatternCompiler(r).compile() for a in atoms: self.assertEqual([], _get_scopes(c, a)) def test_nested(self): r = gf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r_2 = test_utils.TaskOneReturn("root.2") r.add(r_1, r_2) r.link(r_1, r_2) subroot = gf.Flow("subroot") subroot_r_1 = test_utils.TaskOneReturn("subroot.1") subroot_r_2 = test_utils.TaskOneReturn("subroot.2") subroot.add(subroot_r_1, subroot_r_2) subroot.link(subroot_r_1, subroot_r_2) r.add(subroot) r_3 = test_utils.TaskOneReturn("root.3") r.add(r_3) r.link(r_2, r_3) c = compiler.PatternCompiler(r).compile() self.assertEqual([], _get_scopes(c, r_1)) self.assertEqual([['root.1']], _get_scopes(c, r_2)) self.assertEqual([['root.2', 'root.1']], _get_scopes(c, r_3)) self.assertEqual([], _get_scopes(c, subroot_r_1)) self.assertEqual([['subroot.1']], _get_scopes(c, subroot_r_2)) class UnorderedScopingTest(test.TestCase): def test_no_visible(self): r = uf.Flow("root") atoms = [] for i in range(0, 10): atoms.append(test_utils.TaskOneReturn("root.%s" % i)) r.add(*atoms) c = compiler.PatternCompiler(r).compile() for a in atoms: self.assertEqual([], _get_scopes(c, a)) class MixedPatternScopingTest(test.TestCase): def test_graph_linear_scope(self): r = gf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r_2 = test_utils.TaskOneReturn("root.2") r.add(r_1, r_2) r.link(r_1, r_2) s = lf.Flow("subroot") s_1 = test_utils.TaskOneReturn("subroot.1") s_2 = test_utils.TaskOneReturn("subroot.2") s.add(s_1, s_2) r.add(s) t = gf.Flow("subroot2") t_1 = test_utils.TaskOneReturn("subroot2.1") t_2 = test_utils.TaskOneReturn("subroot2.2") t.add(t_1, t_2) t.link(t_1, t_2) r.add(t) r.link(s, t) c = compiler.PatternCompiler(r).compile() self.assertEqual([], _get_scopes(c, r_1)) self.assertEqual([['root.1']], _get_scopes(c, r_2)) self.assertEqual([], _get_scopes(c, s_1)) self.assertEqual([['subroot.1']], _get_scopes(c, s_2)) self.assertEqual([[], ['subroot.2', 'subroot.1']], _get_scopes(c, t_1)) self.assertEqual([["subroot2.1"], ['subroot.2', 'subroot.1']], _get_scopes(c, t_2)) def test_linear_unordered_scope(self): r = lf.Flow("root") r_1 = test_utils.TaskOneReturn("root.1") r_2 = test_utils.TaskOneReturn("root.2") r.add(r_1, r_2) u = uf.Flow("subroot") atoms = [] for i in range(0, 5): atoms.append(test_utils.TaskOneReturn("subroot.%s" % i)) u.add(*atoms) r.add(u) r_3 = test_utils.TaskOneReturn("root.3") r.add(r_3) c = compiler.PatternCompiler(r).compile() self.assertEqual([], _get_scopes(c, r_1)) self.assertEqual([['root.1']], _get_scopes(c, r_2)) for a in atoms: self.assertEqual([[], ['root.2', 'root.1']], _get_scopes(c, a)) 
scope = _get_scopes(c, r_3) self.assertEqual(1, len(scope)) first_root = 0 for i, n in enumerate(scope[0]): if n.startswith('root.'): first_root = i break first_subroot = 0 for i, n in enumerate(scope[0]): if n.startswith('subroot.'): first_subroot = i break self.assertGreater(first_subroot, first_root) self.assertEqual(['root.2', 'root.1'], scope[0][-2:]) def test_shadow_graph(self): r = gf.Flow("root") customer = test_utils.ProvidesRequiresTask("customer", provides=['dog'], requires=[]) customer2 = test_utils.ProvidesRequiresTask("customer2", provides=['dog'], requires=[]) washer = test_utils.ProvidesRequiresTask("washer", requires=['dog'], provides=['wash']) r.add(customer, washer) r.add(customer2, resolve_requires=False) r.link(customer2, washer) c = compiler.PatternCompiler(r).compile() # The order currently is *not* guaranteed to be 'customer' before # 'customer2' or the reverse, since either can occur before the # washer; since *either* is a valid topological ordering of the # dependencies... # # This may be different after/if the following is resolved: # # https://github.com/networkx/networkx/issues/1181 (and a few others) self.assertEqual(set(['customer', 'customer2']), set(_get_scopes(c, washer)[0])) self.assertEqual([], _get_scopes(c, customer2)) self.assertEqual([], _get_scopes(c, customer)) def test_shadow_linear(self): r = lf.Flow("root") customer = test_utils.ProvidesRequiresTask("customer", provides=['dog'], requires=[]) customer2 = test_utils.ProvidesRequiresTask("customer2", provides=['dog'], requires=[]) washer = test_utils.ProvidesRequiresTask("washer", requires=['dog'], provides=['wash']) r.add(customer, customer2, washer) c = compiler.PatternCompiler(r).compile() # This order is guaranteed... self.assertEqual(['customer2', 'customer'], _get_scopes(c, washer)[0]) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6480424 taskflow-4.6.4/taskflow/tests/unit/jobs/0000775000175000017500000000000000000000000020312 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/jobs/__init__.py0000664000175000017500000000000000000000000022411 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/jobs/base.py0000664000175000017500000002037700000000000021607 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
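# ---------------------------------------------------------------------------
# Hedged sketch (names invented for illustration): when two atoms provide
# the same symbol, the provider closest in scope shadows the earlier one,
# which is what the shadow tests above demonstrate; in a linear flow that
# ordering is deterministic.
from taskflow.engines.action_engine import compiler
from taskflow.engines.action_engine import scopes as sc
from taskflow.patterns import linear_flow as lf
from taskflow.tests import utils as test_utils


def _shadow_sketch():
    first = test_utils.ProvidesRequiresTask(
        "first", provides=['dog'], requires=[])
    second = test_utils.ProvidesRequiresTask(
        "second", provides=['dog'], requires=[])
    user = test_utils.ProvidesRequiresTask(
        "user", provides=[], requires=['dog'])
    flo = lf.Flow("shadow-demo").add(first, second, user)
    c = compiler.PatternCompiler(flo).compile()
    # The closest provider ('second') appears before 'first' in the scope;
    # expected: [['second', 'first']]
    return list(sc.ScopeWalker(c, user, names_only=True))
# ---------------------------------------------------------------------------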
import contextlib import threading import time from taskflow import exceptions as excp from taskflow.persistence.backends import impl_dir from taskflow import states from taskflow.tests import utils as test_utils from taskflow.utils import persistence_utils as p_utils from taskflow.utils import threading_utils @contextlib.contextmanager def connect_close(*args): try: for a in args: a.connect() yield finally: for a in args: a.close() class BoardTestMixin(object): @contextlib.contextmanager def flush(self, client): yield def close_client(self, client): pass def test_connect(self): self.assertFalse(self.board.connected) with connect_close(self.board): self.assertTrue(self.board.connected) def test_board_iter_empty(self): with connect_close(self.board): jobs_found = list(self.board.iterjobs()) self.assertEqual([], jobs_found) def test_fresh_iter(self): with connect_close(self.board): book = p_utils.temporary_log_book() self.board.post('test', book) jobs = list(self.board.iterjobs(ensure_fresh=True)) self.assertEqual(1, len(jobs)) def test_wait_timeout(self): with connect_close(self.board): self.assertRaises(excp.NotFound, self.board.wait, timeout=0.1) def test_wait_arrival(self): ev = threading.Event() jobs = [] def poster(wait_post=0.2): if not ev.wait(test_utils.WAIT_TIMEOUT): raise RuntimeError("Waiter did not appear ready" " in %s seconds" % test_utils.WAIT_TIMEOUT) time.sleep(wait_post) self.board.post('test', p_utils.temporary_log_book()) def waiter(): ev.set() it = self.board.wait() jobs.extend(it) with connect_close(self.board): t1 = threading_utils.daemon_thread(poster) t1.start() t2 = threading_utils.daemon_thread(waiter) t2.start() for t in (t1, t2): t.join() self.assertEqual(1, len(jobs)) def test_posting_claim(self): with connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(1, self.board.job_count) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name) self.assertEqual(self.board.name, self.board.find_owner(j)) self.assertEqual(states.CLAIMED, j.state) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(0, len(possible_jobs)) self.close_client(self.client) self.assertRaisesAttrAccess(excp.JobFailure, j, 'state') def test_posting_claim_consume(self): with connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] with self.flush(self.client): self.board.claim(j, self.board.name) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(0, len(possible_jobs)) with self.flush(self.client): self.board.consume(j, self.board.name) self.assertEqual(0, len(list(self.board.iterjobs()))) self.assertRaises(excp.NotFound, self.board.consume, j, self.board.name) def test_posting_claim_abandon(self): with connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] with self.flush(self.client): self.board.claim(j, self.board.name) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(0, len(possible_jobs)) with 
self.flush(self.client): self.board.abandon(j, self.board.name) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) def test_posting_claim_diff_owner(self): with connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) with self.flush(self.client): self.board.claim(possible_jobs[0], self.board.name) possible_jobs = list(self.board.iterjobs()) self.assertEqual(1, len(possible_jobs)) self.assertRaises(excp.UnclaimableJob, self.board.claim, possible_jobs[0], self.board.name + "-1") possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(0, len(possible_jobs)) def test_posting_consume_wait(self): with connect_close(self.board): jb = self.board.post('test', p_utils.temporary_log_book()) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.board.claim(possible_jobs[0], self.board.name) self.board.consume(possible_jobs[0], self.board.name) self.assertTrue(jb.wait()) def test_posting_no_consume_wait(self): with connect_close(self.board): jb = self.board.post('test', p_utils.temporary_log_book()) self.assertFalse(jb.wait(0.1)) def test_posting_with_book(self): backend = impl_dir.DirBackend(conf={ 'path': self.makeTmpDir(), }) backend.get_connection().upgrade() book, flow_detail = p_utils.temporary_flow_detail(backend) self.assertEqual(1, len(book)) client, board = self.create_board(persistence=backend) with connect_close(board): with self.flush(client): board.post('test', book) possible_jobs = list(board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] self.assertEqual(1, len(j.book)) self.assertEqual(book.name, j.book.name) self.assertEqual(book.uuid, j.book.uuid) self.assertEqual(book.name, j.book_name) self.assertEqual(book.uuid, j.book_uuid) flow_details = list(j.book) self.assertEqual(flow_detail.uuid, flow_details[0].uuid) self.assertEqual(flow_detail.name, flow_details[0].name) def test_posting_abandon_no_owner(self): with connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(1, self.board.job_count) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] self.assertRaises(excp.NotFound, self.board.abandon, j, j.name) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/jobs/test_entrypoint.py0000664000175000017500000000425400000000000024143 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
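# ---------------------------------------------------------------------------
# Hedged sketch of the job lifecycle exercised by BoardTestMixin above
# (posting, claiming, consuming).  The board/client construction mirrors the
# zake-backed board used later in this test package; all names here are
# placeholders and no real ZooKeeper server is required.
from zake import fake_client

from taskflow.jobs.backends import impl_zookeeper
from taskflow.utils import persistence_utils as p_utils


def _job_lifecycle_sketch():
    client = fake_client.FakeClient()
    board = impl_zookeeper.ZookeeperJobBoard('demo-board', {}, client=client)
    board.connect()
    try:
        board.post('demo-job', p_utils.temporary_log_book())
        job = list(board.iterjobs(only_unclaimed=True, ensure_fresh=True))[0]
        board.claim(job, board.name)     # take ownership of the job
        board.consume(job, board.name)   # mark it done and remove it
        return board.job_count           # expected: 0
    finally:
        board.close()
# ---------------------------------------------------------------------------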
import contextlib from zake import fake_client from taskflow.jobs import backends from taskflow.jobs.backends import impl_redis from taskflow.jobs.backends import impl_zookeeper from taskflow import test class BackendFetchingTest(test.TestCase): def test_zk_entry_point_text(self): conf = 'zookeeper' with contextlib.closing(backends.fetch('test', conf)) as be: self.assertIsInstance(be, impl_zookeeper.ZookeeperJobBoard) def test_zk_entry_point(self): conf = { 'board': 'zookeeper', } with contextlib.closing(backends.fetch('test', conf)) as be: self.assertIsInstance(be, impl_zookeeper.ZookeeperJobBoard) def test_zk_entry_point_existing_client(self): existing_client = fake_client.FakeClient() conf = { 'board': 'zookeeper', } kwargs = { 'client': existing_client, } with contextlib.closing(backends.fetch('test', conf, **kwargs)) as be: self.assertIsInstance(be, impl_zookeeper.ZookeeperJobBoard) self.assertIs(existing_client, be._client) def test_redis_entry_point_text(self): conf = 'redis' with contextlib.closing(backends.fetch('test', conf)) as be: self.assertIsInstance(be, impl_redis.RedisJobBoard) def test_redis_entry_point(self): conf = { 'board': 'redis', } with contextlib.closing(backends.fetch('test', conf)) as be: self.assertIsInstance(be, impl_redis.RedisJobBoard) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/jobs/test_redis_job.py0000664000175000017500000001206400000000000023666 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
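# ---------------------------------------------------------------------------
# Hedged sketch (board name is a placeholder): fetching a jobboard through
# the entry-point loader, as the tests above do.  Fetching only instantiates
# the backend; no server connection is made until connect() is called.
import contextlib

from taskflow.jobs import backends


def _fetch_board_sketch():
    # A plain string ('zookeeper') or a {'board': ...} dict both select
    # the backend implementation.
    conf = {'board': 'zookeeper'}
    with contextlib.closing(backends.fetch('demo', conf)) as board:
        return type(board).__name__  # expected: 'ZookeeperJobBoard'
# ---------------------------------------------------------------------------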
import time from unittest import mock from oslo_utils import uuidutils import six import testtools from taskflow import exceptions as excp from taskflow.jobs.backends import impl_redis from taskflow import states from taskflow import test from taskflow.tests.unit.jobs import base from taskflow.tests import utils as test_utils from taskflow.utils import persistence_utils as p_utils from taskflow.utils import redis_utils as ru REDIS_AVAILABLE = test_utils.redis_available( impl_redis.RedisJobBoard.MIN_REDIS_VERSION) @testtools.skipIf(not REDIS_AVAILABLE, 'redis is not available') class RedisJobboardTest(test.TestCase, base.BoardTestMixin): def close_client(self, client): client.close() def create_board(self, persistence=None): namespace = uuidutils.generate_uuid() client = ru.RedisClient() config = { 'namespace': six.b("taskflow-%s" % namespace), } kwargs = { 'client': client, 'persistence': persistence, } board = impl_redis.RedisJobBoard('test-board', config, **kwargs) self.addCleanup(board.close) self.addCleanup(self.close_client, client) return (client, board) def test_posting_claim_expiry(self): with base.connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(1, self.board.job_count) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name, expiry=0.5) self.assertEqual(self.board.name, self.board.find_owner(j)) self.assertEqual(states.CLAIMED, j.state) time.sleep(0.6) self.assertEqual(states.UNCLAIMED, j.state) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) def test_posting_claim_same_owner(self): with base.connect_close(self.board): with self.flush(self.client): self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(1, self.board.job_count) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(1, len(possible_jobs)) j = possible_jobs[0] self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name) possible_jobs = list(self.board.iterjobs()) self.assertEqual(1, len(possible_jobs)) with self.flush(self.client): self.assertRaises(excp.UnclaimableJob, self.board.claim, possible_jobs[0], self.board.name) possible_jobs = list(self.board.iterjobs(only_unclaimed=True)) self.assertEqual(0, len(possible_jobs)) def setUp(self): super(RedisJobboardTest, self).setUp() self.client, self.board = self.create_board() def test__make_client(self): conf = {'host': '127.0.0.1', 'port': 6379, 'password': 'secret', 'namespace': 'test' } test_conf = { 'host': '127.0.0.1', 'port': 6379, 'password': 'secret', } with mock.patch('taskflow.utils.redis_utils.RedisClient') as mock_ru: impl_redis.RedisJobBoard('test-board', conf) mock_ru.assert_called_once_with(**test_conf) def test__make_client_sentinel(self): conf = {'host': '127.0.0.1', 'port': 26379, 'password': 'secret', 'namespace': 'test', 'sentinel': 'mymaster', 'sentinel_kwargs': {'password': 'senitelsecret'}} with mock.patch('redis.sentinel.Sentinel') as mock_sentinel: impl_redis.RedisJobBoard('test-board', conf) test_conf = { 'password': 'secret', } mock_sentinel.assert_called_once_with( [('127.0.0.1', 26379)], sentinel_kwargs={'password': 'senitelsecret'}, **test_conf) mock_sentinel().master_for.assert_called_once_with('mymaster') 
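# ---------------------------------------------------------------------------
# Illustrative sketch (requires a reachable Redis on the default host/port;
# names and values are placeholders): the expiring-claim behaviour checked
# above -- a claim made with expiry= lapses back to UNCLAIMED once its TTL
# passes, letting another conductor pick the job up.
import time

from taskflow.jobs.backends import impl_redis
from taskflow.utils import persistence_utils as p_utils


def _redis_claim_expiry_sketch():
    board = impl_redis.RedisJobBoard(
        'demo-board', {'namespace': b'taskflow-demo'})
    board.connect()
    try:
        board.post('demo-job', p_utils.temporary_log_book())
        job = list(board.iterjobs(only_unclaimed=True))[0]
        board.claim(job, board.name, expiry=0.5)
        time.sleep(0.6)
        return job.state  # expected: states.UNCLAIMED again
    finally:
        board.close()
# ---------------------------------------------------------------------------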
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/jobs/test_zk_job.py0000664000175000017500000002775700000000000023223 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import threading from kazoo.protocol import paths as k_paths from kazoo.recipe import watchers from oslo_serialization import jsonutils from oslo_utils import uuidutils import six import testtools from zake import fake_client from zake import utils as zake_utils from taskflow import exceptions as excp from taskflow.jobs.backends import impl_zookeeper from taskflow import states from taskflow import test from taskflow.test import mock from taskflow.tests.unit.jobs import base from taskflow.tests import utils as test_utils from taskflow.types import entity from taskflow.utils import kazoo_utils from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils FLUSH_PATH_TPL = '/taskflow/flush-test/%s' TEST_PATH_TPL = '/taskflow/board-test/%s' ZOOKEEPER_AVAILABLE = test_utils.zookeeper_available( impl_zookeeper.ZookeeperJobBoard.MIN_ZK_VERSION) TRASH_FOLDER = impl_zookeeper.ZookeeperJobBoard.TRASH_FOLDER LOCK_POSTFIX = impl_zookeeper.ZookeeperJobBoard.LOCK_POSTFIX class ZookeeperBoardTestMixin(base.BoardTestMixin): def close_client(self, client): kazoo_utils.finalize_client(client) @contextlib.contextmanager def flush(self, client, path=None): # This uses the linearity guarantee of zookeeper (and associated # libraries) to create a temporary node, wait until a watcher notifies # it's created, then yield back for more work, and then at the end of # that work delete the created node. This ensures that the operations # done in the yield of this context manager will be applied and all # watchers will have fired before this context manager exits. 
if not path: path = FLUSH_PATH_TPL % uuidutils.generate_uuid() created = threading.Event() deleted = threading.Event() def on_created(data, stat): if stat is not None: created.set() return False # cause this watcher to cease to exist def on_deleted(data, stat): if stat is None: deleted.set() return False # cause this watcher to cease to exist watchers.DataWatch(client, path, func=on_created) client.create(path, makepath=True) if not created.wait(test_utils.WAIT_TIMEOUT): raise RuntimeError("Could not receive creation of %s in" " the alloted timeout of %s seconds" % (path, test_utils.WAIT_TIMEOUT)) try: yield finally: watchers.DataWatch(client, path, func=on_deleted) client.delete(path, recursive=True) if not deleted.wait(test_utils.WAIT_TIMEOUT): raise RuntimeError("Could not receive deletion of %s in" " the alloted timeout of %s seconds" % (path, test_utils.WAIT_TIMEOUT)) def test_posting_no_post(self): with base.connect_close(self.board): with mock.patch.object(self.client, 'create') as create_func: create_func.side_effect = IOError("Unable to post") self.assertRaises(IOError, self.board.post, 'test', p_utils.temporary_log_book()) self.assertEqual(0, self.board.job_count) def test_board_iter(self): with base.connect_close(self.board): it = self.board.iterjobs() self.assertEqual(self.board, it.board) self.assertFalse(it.only_unclaimed) self.assertFalse(it.ensure_fresh) @mock.patch("taskflow.jobs.backends.impl_zookeeper.misc." "millis_to_datetime") def test_posting_dates(self, mock_dt): epoch = misc.millis_to_datetime(0) mock_dt.return_value = epoch with base.connect_close(self.board): j = self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(epoch, j.created_on) self.assertEqual(epoch, j.last_modified) self.assertTrue(mock_dt.called) @testtools.skipIf(not ZOOKEEPER_AVAILABLE, 'zookeeper is not available') class ZookeeperJobboardTest(test.TestCase, ZookeeperBoardTestMixin): def create_board(self, persistence=None): def cleanup_path(client, path): if not client.connected: return client.delete(path, recursive=True) client = kazoo_utils.make_client(test_utils.ZK_TEST_CONFIG.copy()) path = TEST_PATH_TPL % (uuidutils.generate_uuid()) board = impl_zookeeper.ZookeeperJobBoard('test-board', {'path': path}, client=client, persistence=persistence) self.addCleanup(self.close_client, client) self.addCleanup(cleanup_path, client, path) self.addCleanup(board.close) return (client, board) def setUp(self): super(ZookeeperJobboardTest, self).setUp() self.client, self.board = self.create_board() class ZakeJobboardTest(test.TestCase, ZookeeperBoardTestMixin): def create_board(self, persistence=None): client = fake_client.FakeClient() board = impl_zookeeper.ZookeeperJobBoard('test-board', {}, client=client, persistence=persistence) self.addCleanup(board.close) self.addCleanup(self.close_client, client) return (client, board) def setUp(self): super(ZakeJobboardTest, self).setUp() self.client, self.board = self.create_board() self.bad_paths = [self.board.path, self.board.trash_path] self.bad_paths.extend(zake_utils.partition_path(self.board.path)) def test_posting_owner_lost(self): with base.connect_close(self.board): with self.flush(self.client): j = self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name) self.assertEqual(states.CLAIMED, j.state) # Forcefully delete the owner from the backend storage to make # sure the job becomes unclaimed (this may happen if some admin # manually 
deletes the lock). paths = list(six.iteritems(self.client.storage.paths)) for (path, value) in paths: if path in self.bad_paths: continue if path.endswith('lock'): value['data'] = misc.binary_encode(jsonutils.dumps({})) self.assertEqual(states.UNCLAIMED, j.state) def test_posting_state_lock_lost(self): with base.connect_close(self.board): with self.flush(self.client): j = self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name) self.assertEqual(states.CLAIMED, j.state) # Forcefully delete the lock from the backend storage to make # sure the job becomes unclaimed (this may happen if some admin # manually deletes the lock). paths = list(six.iteritems(self.client.storage.paths)) for (path, value) in paths: if path in self.bad_paths: continue if path.endswith("lock"): self.client.storage.pop(path) self.assertEqual(states.UNCLAIMED, j.state) def test_trashing_claimed_job(self): with base.connect_close(self.board): with self.flush(self.client): j = self.board.post('test', p_utils.temporary_log_book()) self.assertEqual(states.UNCLAIMED, j.state) with self.flush(self.client): self.board.claim(j, self.board.name) self.assertEqual(states.CLAIMED, j.state) with self.flush(self.client): self.board.trash(j, self.board.name) trashed = [] jobs = [] paths = list(six.iteritems(self.client.storage.paths)) for (path, value) in paths: if path in self.bad_paths: continue if path.find(TRASH_FOLDER) > -1: trashed.append(path) elif (path.find(self.board._job_base) > -1 and not path.endswith(LOCK_POSTFIX)): jobs.append(path) self.assertEqual(1, len(trashed)) self.assertEqual(0, len(jobs)) def test_posting_received_raw(self): book = p_utils.temporary_log_book() with base.connect_close(self.board): self.assertTrue(self.board.connected) self.assertEqual(0, self.board.job_count) posted_job = self.board.post('test', book) self.assertEqual(self.board, posted_job.board) self.assertEqual(1, self.board.job_count) self.assertIn(posted_job.uuid, [j.uuid for j in self.board.iterjobs()]) # Remove paths that got created due to the running process that we are # not interested in... paths = {} for (path, data) in six.iteritems(self.client.storage.paths): if path in self.bad_paths: continue paths[path] = data # Check the actual data that was posted. 
self.assertEqual(1, len(paths)) path_key = list(six.iterkeys(paths))[0] self.assertTrue(len(paths[path_key]['data']) > 0) self.assertDictEqual({ 'uuid': posted_job.uuid, 'name': posted_job.name, 'book': { 'name': book.name, 'uuid': book.uuid, }, 'priority': 'NORMAL', 'details': {}, }, jsonutils.loads(misc.binary_decode(paths[path_key]['data']))) def test_register_entity(self): conductor_name = "conductor-abc@localhost:4123" entity_instance = entity.Entity("conductor", conductor_name, {}) with base.connect_close(self.board): self.board.register_entity(entity_instance) # Check '.entity' node has been created self.assertTrue(self.board.entity_path in self.client.storage.paths) conductor_entity_path = k_paths.join(self.board.entity_path, 'conductor', conductor_name) self.assertTrue(conductor_entity_path in self.client.storage.paths) conductor_data = ( self.client.storage.paths[conductor_entity_path]['data']) self.assertTrue(len(conductor_data) > 0) self.assertDictEqual({ 'name': conductor_name, 'kind': 'conductor', 'metadata': {}, }, jsonutils.loads(misc.binary_decode(conductor_data))) entity_instance_2 = entity.Entity("non-sense", "other_name", {}) with base.connect_close(self.board): self.assertRaises(excp.NotImplementedError, self.board.register_entity, entity_instance_2) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6480424 taskflow-4.6.4/taskflow/tests/unit/patterns/0000775000175000017500000000000000000000000021215 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/patterns/__init__.py0000664000175000017500000000000000000000000023314 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/patterns/test_graph_flow.py0000664000175000017500000003025400000000000024762 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow import retry from taskflow import test from taskflow.tests import utils def _task(name, provides=None, requires=None): return utils.ProvidesRequiresTask(name, provides, requires) class GraphFlowTest(test.TestCase): def test_invalid_decider_depth(self): g_1 = utils.ProgressingTask(name='g-1') g_2 = utils.ProgressingTask(name='g-2') for not_a_depth in ['not-a-depth', object(), 2, 3.4, False]: flow = gf.Flow('g') flow.add(g_1, g_2) self.assertRaises((ValueError, TypeError), flow.link, g_1, g_2, decider=lambda history: False, decider_depth=not_a_depth) def test_graph_flow_stringy(self): f = gf.Flow('test') expected = 'graph_flow.Flow: test(len=0)' self.assertEqual(expected, str(f)) task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = gf.Flow('test') f.add(task1, task2, task3) expected = 'graph_flow.Flow: test(len=3)' self.assertEqual(expected, str(f)) def test_graph_flow_starts_as_empty(self): f = gf.Flow('test') self.assertEqual(0, len(f)) self.assertEqual([], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(), f.requires) self.assertEqual(set(), f.provides) def test_graph_flow_add_nothing(self): f = gf.Flow('test') result = f.add() self.assertIs(f, result) self.assertEqual(0, len(f)) def test_graph_flow_one_task(self): f = gf.Flow('test') task = _task(name='task1', requires=['a', 'b'], provides=['c', 'd']) result = f.add(task) self.assertIs(f, result) self.assertEqual(1, len(f)) self.assertEqual([task], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(['a', 'b']), f.requires) self.assertEqual(set(['c', 'd']), f.provides) def test_graph_flow_two_independent_tasks(self): task1 = _task(name='task1') task2 = _task(name='task2') f = gf.Flow('test').add(task1, task2) self.assertEqual(2, len(f)) self.assertCountEqual(f, [task1, task2]) self.assertEqual([], list(f.iter_links())) def test_graph_flow_two_dependent_tasks(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = gf.Flow('test').add(task1, task2) self.assertEqual(2, len(f)) self.assertCountEqual(f, [task1, task2]) self.assertEqual([(task1, task2, {'reasons': set(['a'])})], list(f.iter_links())) self.assertEqual(set(), f.requires) self.assertEqual(set(['a']), f.provides) def test_graph_flow_two_dependent_tasks_two_different_calls(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = gf.Flow('test').add(task1).add(task2) self.assertEqual(2, len(f)) self.assertCountEqual(f, [task1, task2]) self.assertEqual([(task1, task2, {'reasons': set(['a'])})], list(f.iter_links())) def test_graph_flow_two_task_same_provide(self): task1 = _task(name='task1', provides=['a', 'b']) task2 = _task(name='task2', provides=['a', 'c']) f = gf.Flow('test') f.add(task2, task1) self.assertEqual(set(['a', 'b', 'c']), f.provides) def test_graph_flow_ambiguous_provides(self): task1 = _task(name='task1', provides=['a', 'b']) task2 = _task(name='task2', provides=['a']) f = gf.Flow('test') f.add(task1, task2) self.assertEqual(set(['a', 'b']), f.provides) task3 = _task(name='task3', requires=['a']) self.assertRaises(exc.AmbiguousDependency, f.add, task3) def test_graph_flow_no_resolve_requires(self): task1 = _task(name='task1', provides=['a', 'b', 'c']) task2 = _task(name='task2', requires=['a', 'b']) f = gf.Flow('test') f.add(task1, task2, resolve_requires=False) self.assertEqual(set(['a', 'b']), 
f.requires) def test_graph_flow_no_resolve_existing(self): task1 = _task(name='task1', requires=['a', 'b']) task2 = _task(name='task2', provides=['a', 'b']) f = gf.Flow('test') f.add(task1) f.add(task2, resolve_existing=False) self.assertEqual(set(['a', 'b']), f.requires) def test_graph_flow_resolve_existing(self): task1 = _task(name='task1', requires=['a', 'b']) task2 = _task(name='task2', provides=['a', 'b']) f = gf.Flow('test') f.add(task1) f.add(task2, resolve_existing=True) self.assertEqual(set([]), f.requires) def test_graph_flow_with_retry(self): ret = retry.AlwaysRevert(requires=['a'], provides=['b']) f = gf.Flow('test', ret) self.assertIs(f.retry, ret) self.assertEqual('test_retry', ret.name) self.assertEqual(set(['a']), f.requires) self.assertEqual(set(['b']), f.provides) def test_graph_flow_ordering(self): task1 = _task('task1', provides=set(['a', 'b'])) task2 = _task('task2', provides=['c'], requires=['a', 'b']) task3 = _task('task3', provides=[], requires=['c']) f = gf.Flow('test').add(task1, task2, task3) self.assertEqual(3, len(f)) self.assertCountEqual(list(f.iter_links()), [ (task1, task2, {'reasons': set(['a', 'b'])}), (task2, task3, {'reasons': set(['c'])}) ]) def test_graph_flow_links(self): task1 = _task('task1') task2 = _task('task2') f = gf.Flow('test').add(task1, task2) linked = f.link(task1, task2) self.assertIs(linked, f) self.assertCountEqual(list(f.iter_links()), [ (task1, task2, {'manual': True}) ]) def test_graph_flow_links_and_dependencies(self): task1 = _task('task1', provides=['a']) task2 = _task('task2', requires=['a']) f = gf.Flow('test').add(task1, task2) linked = f.link(task1, task2) self.assertIs(linked, f) expected_meta = { 'manual': True, 'reasons': set(['a']) } self.assertCountEqual(list(f.iter_links()), [ (task1, task2, expected_meta) ]) def test_graph_flow_link_from_unknown_node(self): task1 = _task('task1') task2 = _task('task2') f = gf.Flow('test').add(task2) self.assertRaisesRegex(ValueError, 'Node .* not found to link from', f.link, task1, task2) def test_graph_flow_link_to_unknown_node(self): task1 = _task('task1') task2 = _task('task2') f = gf.Flow('test').add(task1) self.assertRaisesRegex(ValueError, 'Node .* not found to link to', f.link, task1, task2) def test_graph_flow_link_raises_on_cycle(self): task1 = _task('task1', provides=['a']) task2 = _task('task2', requires=['a']) f = gf.Flow('test').add(task1, task2) self.assertRaises(exc.DependencyFailure, f.link, task2, task1) def test_graph_flow_link_raises_on_link_cycle(self): task1 = _task('task1') task2 = _task('task2') f = gf.Flow('test').add(task1, task2) f.link(task1, task2) self.assertRaises(exc.DependencyFailure, f.link, task2, task1) def test_graph_flow_dependency_cycle(self): task1 = _task('task1', provides=['a'], requires=['c']) task2 = _task('task2', provides=['b'], requires=['a']) task3 = _task('task3', provides=['c'], requires=['b']) f = gf.Flow('test').add(task1, task2) self.assertRaises(exc.DependencyFailure, f.add, task3) def test_iter_nodes(self): task1 = _task('task1', provides=['a'], requires=['c']) task2 = _task('task2', provides=['b'], requires=['a']) task3 = _task('task3', provides=['c']) f1 = gf.Flow('nested') f1.add(task3) tasks = set([task1, task2, f1]) f = gf.Flow('test').add(task1, task2, f1) for (n, data) in f.iter_nodes(): self.assertTrue(n in tasks) self.assertDictEqual({}, data) def test_iter_links(self): task1 = _task('task1') task2 = _task('task2') task3 = _task('task3') f1 = gf.Flow('nested') f1.add(task3) tasks = set([task1, task2, f1]) f = 
gf.Flow('test').add(task1, task2, f1) for (u, v, data) in f.iter_links(): self.assertTrue(u in tasks) self.assertTrue(v in tasks) self.assertDictEqual({}, data) class TargetedGraphFlowTest(test.TestCase): def test_targeted_flow_restricts(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=['a'], requires=[]) task2 = _task('task2', provides=['b'], requires=['a']) task3 = _task('task3', provides=[], requires=['b']) task4 = _task('task4', provides=[], requires=['b']) f.add(task1, task2, task3, task4) f.set_target(task3) self.assertEqual(3, len(f)) self.assertCountEqual(f, [task1, task2, task3]) self.assertNotIn('c', f.provides) def test_targeted_flow_reset(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=['a'], requires=[]) task2 = _task('task2', provides=['b'], requires=['a']) task3 = _task('task3', provides=[], requires=['b']) task4 = _task('task4', provides=['c'], requires=['b']) f.add(task1, task2, task3, task4) f.set_target(task3) f.reset_target() self.assertEqual(4, len(f)) self.assertCountEqual(f, [task1, task2, task3, task4]) self.assertIn('c', f.provides) def test_targeted_flow_bad_target(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=['a'], requires=[]) task2 = _task('task2', provides=['b'], requires=['a']) f.add(task1) self.assertRaisesRegex(ValueError, '^Node .* not found', f.set_target, task2) def test_targeted_flow_one_node(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=['a'], requires=[]) f.add(task1) f.set_target(task1) self.assertEqual(1, len(f)) self.assertCountEqual(f, [task1]) def test_recache_on_add(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=[], requires=['a']) f.add(task1) f.set_target(task1) self.assertEqual(1, len(f)) task2 = _task('task2', provides=['a'], requires=[]) f.add(task2) self.assertEqual(2, len(f)) def test_recache_on_add_no_deps(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=[], requires=[]) f.add(task1) f.set_target(task1) self.assertEqual(1, len(f)) task2 = _task('task2', provides=[], requires=[]) f.add(task2) self.assertEqual(1, len(f)) def test_recache_on_link(self): f = gf.TargetedFlow("test") task1 = _task('task1', provides=[], requires=[]) task2 = _task('task2', provides=[], requires=[]) f.add(task1, task2) f.set_target(task1) self.assertEqual(1, len(f)) f.link(task2, task1) self.assertEqual(2, len(f)) self.assertEqual([(task2, task1, {'manual': True})], list(f.iter_links()), ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/patterns/test_linear_flow.py0000664000175000017500000001202500000000000025127 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from taskflow.patterns import linear_flow as lf from taskflow import retry from taskflow import test from taskflow.tests import utils def _task(name, provides=None, requires=None): return utils.ProvidesRequiresTask(name, provides, requires) class LinearFlowTest(test.TestCase): def test_linear_flow_stringy(self): f = lf.Flow('test') expected = 'linear_flow.Flow: test(len=0)' self.assertEqual(expected, str(f)) task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = lf.Flow('test') f.add(task1, task2, task3) expected = 'linear_flow.Flow: test(len=3)' self.assertEqual(expected, str(f)) def test_linear_flow_starts_as_empty(self): f = lf.Flow('test') self.assertEqual(0, len(f)) self.assertEqual([], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(), f.requires) self.assertEqual(set(), f.provides) def test_linear_flow_add_nothing(self): f = lf.Flow('test') result = f.add() self.assertIs(f, result) self.assertEqual(0, len(f)) def test_linear_flow_one_task(self): f = lf.Flow('test') task = _task(name='task1', requires=['a', 'b'], provides=['c', 'd']) result = f.add(task) self.assertIs(f, result) self.assertEqual(1, len(f)) self.assertEqual([task], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(['a', 'b']), f.requires) self.assertEqual(set(['c', 'd']), f.provides) def test_linear_flow_two_independent_tasks(self): task1 = _task(name='task1') task2 = _task(name='task2') f = lf.Flow('test').add(task1, task2) self.assertEqual(2, len(f)) self.assertEqual([task1, task2], list(f)) self.assertEqual([(task1, task2, {'invariant': True})], list(f.iter_links())) def test_linear_flow_two_dependent_tasks(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = lf.Flow('test').add(task1, task2) self.assertEqual(2, len(f)) self.assertEqual([task1, task2], list(f)) self.assertEqual([(task1, task2, {'invariant': True})], list(f.iter_links())) self.assertEqual(set(), f.requires) self.assertEqual(set(['a']), f.provides) def test_linear_flow_two_dependent_tasks_two_different_calls(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = lf.Flow('test').add(task1).add(task2) self.assertEqual(2, len(f)) self.assertEqual([task1, task2], list(f)) self.assertEqual([(task1, task2, {'invariant': True})], list(f.iter_links()), ) def test_linear_flow_three_tasks(self): task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = lf.Flow('test').add(task1, task2, task3) self.assertEqual(3, len(f)) self.assertEqual([task1, task2, task3], list(f)) self.assertEqual([ (task1, task2, {'invariant': True}), (task2, task3, {'invariant': True}) ], list(f.iter_links())) def test_linear_flow_with_retry(self): ret = retry.AlwaysRevert(requires=['a'], provides=['b']) f = lf.Flow('test', ret) self.assertIs(f.retry, ret) self.assertEqual('test_retry', ret.name) self.assertEqual(set(['a']), f.requires) self.assertEqual(set(['b']), f.provides) def test_iter_nodes(self): task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = lf.Flow('test').add(task1, task2, task3) tasks = set([task1, task2, task3]) for (node, data) in f.iter_nodes(): self.assertTrue(node in tasks) self.assertDictEqual({}, data) def test_iter_links(self): task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = lf.Flow('test').add(task1, task2, task3) tasks = set([task1, task2, task3]) for (u, v, data) in f.iter_links(): 
self.assertTrue(u in tasks) self.assertTrue(v in tasks) self.assertDictEqual({'invariant': True}, data) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/patterns/test_unordered_flow.py0000664000175000017500000001156500000000000025654 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.patterns import unordered_flow as uf from taskflow import retry from taskflow import test from taskflow.tests import utils def _task(name, provides=None, requires=None): return utils.ProvidesRequiresTask(name, provides, requires) class UnorderedFlowTest(test.TestCase): def test_unordered_flow_stringy(self): f = uf.Flow('test') expected = 'unordered_flow.Flow: test(len=0)' self.assertEqual(expected, str(f)) task1 = _task(name='task1') task2 = _task(name='task2') task3 = _task(name='task3') f = uf.Flow('test') f.add(task1, task2, task3) expected = 'unordered_flow.Flow: test(len=3)' self.assertEqual(expected, str(f)) def test_unordered_flow_starts_as_empty(self): f = uf.Flow('test') self.assertEqual(0, len(f)) self.assertEqual([], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(), f.requires) self.assertEqual(set(), f.provides) def test_unordered_flow_add_nothing(self): f = uf.Flow('test') result = f.add() self.assertIs(f, result) self.assertEqual(0, len(f)) def test_unordered_flow_one_task(self): f = uf.Flow('test') task = _task(name='task1', requires=['a', 'b'], provides=['c', 'd']) result = f.add(task) self.assertIs(f, result) self.assertEqual(1, len(f)) self.assertEqual([task], list(f)) self.assertEqual([], list(f.iter_links())) self.assertEqual(set(['a', 'b']), f.requires) self.assertEqual(set(['c', 'd']), f.provides) def test_unordered_flow_two_tasks(self): task1 = _task(name='task1') task2 = _task(name='task2') f = uf.Flow('test').add(task1, task2) self.assertEqual(2, len(f)) self.assertEqual(set([task1, task2]), set(f)) self.assertEqual([], list(f.iter_links())) def test_unordered_flow_two_tasks_two_different_calls(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = uf.Flow('test').add(task1) f.add(task2) self.assertEqual(2, len(f)) self.assertEqual(set(['a']), f.requires) self.assertEqual(set(['a']), f.provides) def test_unordered_flow_two_tasks_reverse_order(self): task1 = _task(name='task1', provides=['a']) task2 = _task(name='task2', requires=['a']) f = uf.Flow('test').add(task2).add(task1) self.assertEqual(2, len(f)) self.assertEqual(set(['a']), f.requires) self.assertEqual(set(['a']), f.provides) def test_unordered_flow_two_task_same_provide(self): task1 = _task(name='task1', provides=['a', 'b']) task2 = _task(name='task2', provides=['a', 'c']) f = uf.Flow('test') f.add(task2, task1) self.assertEqual(2, len(f)) def test_unordered_flow_with_retry(self): ret = retry.AlwaysRevert(requires=['a'], provides=['b']) f = uf.Flow('test', ret) 
self.assertIs(f.retry, ret) self.assertEqual('test_retry', ret.name) self.assertEqual(set(['a']), f.requires) self.assertEqual(set(['b']), f.provides) def test_unordered_flow_with_retry_fully_satisfies(self): ret = retry.AlwaysRevert(provides=['b', 'a']) f = uf.Flow('test', ret) f.add(_task(name='task1', requires=['a'])) self.assertIs(f.retry, ret) self.assertEqual('test_retry', ret.name) self.assertEqual(set([]), f.requires) self.assertEqual(set(['b', 'a']), f.provides) def test_iter_nodes(self): task1 = _task(name='task1', provides=['a', 'b']) task2 = _task(name='task2', provides=['a', 'c']) tasks = set([task1, task2]) f = uf.Flow('test') f.add(task2, task1) for (node, data) in f.iter_nodes(): self.assertTrue(node in tasks) self.assertDictEqual({}, data) def test_iter_links(self): task1 = _task(name='task1', provides=['a', 'b']) task2 = _task(name='task2', provides=['a', 'c']) f = uf.Flow('test') f.add(task2, task1) for (u, v, data) in f.iter_links(): raise AssertionError('links iterator should be empty') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6480424 taskflow-4.6.4/taskflow/tests/unit/persistence/0000775000175000017500000000000000000000000021701 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/__init__.py0000664000175000017500000000000000000000000024000 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/base.py0000664000175000017500000004024300000000000023170 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib from oslo_utils import uuidutils from taskflow import exceptions as exc from taskflow.persistence import models from taskflow import states from taskflow.types import failure class PersistenceTestMixin(object): def _get_connection(self): raise NotImplementedError('_get_connection() implementation required') def test_task_detail_update_not_existing(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) fd.add(td) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) td2 = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) fd.add(td2) with contextlib.closing(self._get_connection()) as conn: conn.update_flow_details(fd) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb.uuid) fd2 = lb2.find(fd.uuid) self.assertIsNotNone(fd2.find(td.uuid)) self.assertIsNotNone(fd2.find(td2.uuid)) def test_flow_detail_update_not_existing(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) fd2 = models.FlowDetail('test-2', uuid=uuidutils.generate_uuid()) lb.add(fd2) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb.uuid) self.assertIsNotNone(lb2.find(fd.uuid)) self.assertIsNotNone(lb2.find(fd2.uuid)) def test_logbook_save_retrieve_many(self): lb_ids = {} for i in range(0, 10): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s-%s' % (i, lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) lb_ids[lb_id] = True # Should not already exist with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.get_logbook, lb_id) conn.save_logbook(lb) # Now fetch them all with contextlib.closing(self._get_connection()) as conn: lbs = conn.get_logbooks() for lb in lbs: self.assertIn(lb.uuid, lb_ids) lb_ids.pop(lb.uuid) self.assertEqual(0, len(lb_ids)) def test_logbook_save_retrieve(self): lb_id = uuidutils.generate_uuid() lb_meta = {'1': 2} lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) lb.meta = lb_meta # Should not already exist with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.get_logbook, lb_id) conn.save_logbook(lb) # Make sure we can reload it (and all of its attributes are what # we expect them to be). with contextlib.closing(self._get_connection()) as conn: lb = conn.get_logbook(lb_id) self.assertEqual(lb_name, lb.name) self.assertEqual(0, len(lb)) self.assertEqual(lb_meta, lb.meta) self.assertIsNone(lb.updated_at) self.assertIsNotNone(lb.created_at) def test_flow_detail_save(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) # Ensure we can't save it since its owning logbook hasn't been # saved (flow details can not exist on their own without a connection # to a logbook). 
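        # In other words (illustrative only), a parent object must always be
        # persisted before its children; the save order used throughout these
        # tests is:
        #
        #     conn.save_logbook(lb)           # logbook first
        #     conn.update_flow_details(fd)    # then its flow details
        #     conn.update_atom_details(td)    # then any atom (task/retry) details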
with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.get_logbook, lb_id) self.assertRaises(exc.NotFound, conn.update_flow_details, fd) # Ok now we should be able to save both. with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) def test_flow_detail_meta_update(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) fd.meta = {'test': 42} lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) fd.meta['test'] = 43 with contextlib.closing(self._get_connection()) as conn: conn.update_flow_details(fd) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) self.assertEqual(43, fd2.meta.get('test')) def test_flow_detail_lazy_fetch(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) td.version = '4.2' fd.add(td) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: fd2 = conn.get_flow_details(fd.uuid, lazy=True) self.assertEqual(0, len(fd2)) self.assertEqual(1, len(fd)) def test_task_detail_save(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) fd.add(td) # Ensure we can't save it since its owning logbook hasn't been # saved (flow details/task details can not exist on their own without # their parent existing). with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.update_flow_details, fd) self.assertRaises(exc.NotFound, conn.update_atom_details, td) # Ok now we should be able to save them. 
with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(td) def test_task_detail_meta_update(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) td.meta = {'test': 42} fd.add(td) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(td) td.meta['test'] = 43 with contextlib.closing(self._get_connection()) as conn: conn.update_atom_details(td) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) self.assertEqual(43, td2.meta.get('test')) self.assertIsInstance(td2, models.TaskDetail) def test_task_detail_with_failure(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) try: raise RuntimeError('Woot!') except Exception: td.failure = failure.Failure() fd.add(td) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(td) # Read failure back with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) self.assertEqual('Woot!', td2.failure.exception_str) self.assertIs(td2.failure.check(RuntimeError), RuntimeError) self.assertEqual(td.failure.traceback_str, td2.failure.traceback_str) self.assertIsInstance(td2, models.TaskDetail) def test_logbook_merge_flow_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) lb2 = models.LogBook(name=lb_name, uuid=lb_id) fd2 = models.FlowDetail('test2', uuid=uuidutils.generate_uuid()) lb2.add(fd2) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb2) with contextlib.closing(self._get_connection()) as conn: lb3 = conn.get_logbook(lb_id) self.assertEqual(2, len(lb3)) def test_logbook_add_flow_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertEqual(1, len(lb2)) self.assertEqual(1, len(lb)) self.assertEqual(fd.name, lb2.find(fd.uuid).name) def test_logbook_lazy_fetch(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id, lazy=True) self.assertEqual(0, len(lb2)) self.assertEqual(1, len(lb)) def test_logbook_add_task_detail(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, 
uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) td = models.TaskDetail("detail-1", uuid=uuidutils.generate_uuid()) td.version = '4.2' fd.add(td) lb.add(fd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertEqual(1, len(lb2)) tasks = 0 for fd in lb: tasks += len(fd) self.assertEqual(1, tasks) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) td2 = fd2.find(td.uuid) self.assertIsNotNone(td2) self.assertEqual('detail-1', td2.name) self.assertEqual('4.2', td2.version) self.assertEqual(states.EXECUTE, td2.intention) def test_logbook_delete(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) with contextlib.closing(self._get_connection()) as conn: self.assertRaises(exc.NotFound, conn.destroy_logbook, lb_id) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) self.assertIsNotNone(lb2) with contextlib.closing(self._get_connection()) as conn: conn.destroy_logbook(lb_id) self.assertRaises(exc.NotFound, conn.destroy_logbook, lb_id) def test_task_detail_retry_type_(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) rd = models.RetryDetail("detail-1", uuid=uuidutils.generate_uuid()) rd.intention = states.REVERT fd.add(rd) with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(rd) with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) rd2 = fd2.find(rd.uuid) self.assertEqual(states.REVERT, rd2.intention) self.assertIsInstance(rd2, models.RetryDetail) def test_retry_detail_save_with_task_failure(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) rd = models.RetryDetail("retry-1", uuid=uuidutils.generate_uuid()) fail = failure.Failure.from_exception(RuntimeError('fail')) rd.results.append((42, {'some-task': fail})) fd.add(rd) # save it with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(rd) # now read it back with contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) rd2 = fd2.find(rd.uuid) self.assertIsInstance(rd2, models.RetryDetail) fail2 = rd2.results[0][1].get('some-task') self.assertIsInstance(fail2, failure.Failure) self.assertTrue(fail.matches(fail2)) def test_retry_detail_save_intention(self): lb_id = uuidutils.generate_uuid() lb_name = 'lb-%s' % (lb_id) lb = models.LogBook(name=lb_name, uuid=lb_id) fd = models.FlowDetail('test', uuid=uuidutils.generate_uuid()) lb.add(fd) rd = models.RetryDetail("retry-1", uuid=uuidutils.generate_uuid()) fd.add(rd) # save it with contextlib.closing(self._get_connection()) as conn: conn.save_logbook(lb) conn.update_flow_details(fd) conn.update_atom_details(rd) # change intention and save rd.intention = states.REVERT with contextlib.closing(self._get_connection()) as conn: conn.update_atom_details(rd) # now read it back with 
contextlib.closing(self._get_connection()) as conn: lb2 = conn.get_logbook(lb_id) fd2 = lb2.find(fd.uuid) rd2 = fd2.find(rd.uuid) self.assertEqual(states.REVERT, rd2.intention) self.assertIsInstance(rd2, models.RetryDetail) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/test_dir_persistence.py0000664000175000017500000000753100000000000026502 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import os import shutil import tempfile from oslo_utils import uuidutils import testscenarios from taskflow import exceptions as exc from taskflow.persistence import backends from taskflow.persistence.backends import impl_dir from taskflow.persistence import models from taskflow import test from taskflow.tests.unit.persistence import base class DirPersistenceTest(testscenarios.TestWithScenarios, test.TestCase, base.PersistenceTestMixin): scenarios = [ ('no_cache', {'max_cache_size': None}), ('one', {'max_cache_size': 1}), ('tiny', {'max_cache_size': 256}), ('medimum', {'max_cache_size': 512}), ('large', {'max_cache_size': 1024}), ] def _get_connection(self): return self.backend.get_connection() def setUp(self): super(DirPersistenceTest, self).setUp() self.path = tempfile.mkdtemp() self.backend = impl_dir.DirBackend({ 'path': self.path, 'max_cache_size': self.max_cache_size, }) with contextlib.closing(self._get_connection()) as conn: conn.upgrade() def tearDown(self): super(DirPersistenceTest, self).tearDown() if self.path and os.path.isdir(self.path): shutil.rmtree(self.path) self.path = None self.backend = None def _check_backend(self, conf): with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_dir.DirBackend) def test_dir_backend_invalid_cache_size(self): for invalid_size in [-1024, 0, -1]: conf = { 'path': self.path, 'max_cache_size': invalid_size, } self.assertRaises(ValueError, impl_dir.DirBackend, conf) def test_dir_backend_cache_overfill(self): if self.max_cache_size is not None: # Ensure cache never goes past the desired max size... books_ids_made = [] with contextlib.closing(self._get_connection()) as conn: for i in range(0, int(1.5 * self.max_cache_size)): lb_name = 'book-%s' % (i) lb_id = uuidutils.generate_uuid() lb = models.LogBook(name=lb_name, uuid=lb_id) self.assertRaises(exc.NotFound, conn.get_logbook, lb_id) conn.save_logbook(lb) books_ids_made.append(lb_id) self.assertLessEqual(self.backend.file_cache.currsize, self.max_cache_size) # Also ensure that we can still read all created books... 
with contextlib.closing(self._get_connection()) as conn: for lb_id in books_ids_made: lb = conn.get_logbook(lb_id) self.assertIsNotNone(lb) def test_dir_backend_entry_point(self): self._check_backend(dict(connection='dir:', path=self.path)) def test_dir_backend_name(self): self._check_backend(dict(connection='dir', # no colon path=self.path)) def test_file_backend_entry_point(self): self._check_backend(dict(connection='file:', path=self.path)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/test_memory_persistence.py0000664000175000017500000001727700000000000027244 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from taskflow import exceptions as exc from taskflow.persistence import backends from taskflow.persistence.backends import impl_memory from taskflow import test from taskflow.tests.unit.persistence import base class MemoryPersistenceTest(test.TestCase, base.PersistenceTestMixin): def setUp(self): super(MemoryPersistenceTest, self).setUp() self._backend = impl_memory.MemoryBackend({}) def _get_connection(self): return self._backend.get_connection() def tearDown(self): conn = self._get_connection() conn.clear_all() self._backend = None super(MemoryPersistenceTest, self).tearDown() def test_memory_backend_entry_point(self): conf = {'connection': 'memory:'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_memory.MemoryBackend) def test_memory_backend_fetch_by_name(self): conf = {'connection': 'memory'} # note no colon with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_memory.MemoryBackend) class MemoryFilesystemTest(test.TestCase): @staticmethod def _get_item_path(fs, path): # TODO(harlowja): is there a better way to do this?? return fs[path] @staticmethod def _del_item_path(fs, path): # TODO(harlowja): is there a better way to do this?? 
del fs[path] def test_set_get_ls(self): fs = impl_memory.FakeFilesystem() fs['/d'] = 'd' fs['/c'] = 'c' fs['/d/b'] = 'db' self.assertEqual(2, len(fs.ls('/'))) self.assertEqual(1, len(fs.ls('/d'))) self.assertEqual('d', fs['/d']) self.assertEqual('c', fs['/c']) self.assertEqual('db', fs['/d/b']) def test_ls_recursive(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls_r("/", absolute=False) self.assertEqual([ 'a', 'b', 'c', 'd', 'a/b', 'b/c', 'c/d', 'a/b/c', 'b/c/d', 'a/b/c/d', ], contents) def test_ls_recursive_absolute(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls_r("/", absolute=True) self.assertEqual([ '/a', '/b', '/c', '/d', '/a/b', '/b/c', '/c/d', '/a/b/c', '/b/c/d', '/a/b/c/d', ], contents) def test_ls_recursive_targeted(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls_r("/a/b", absolute=False) self.assertEqual(['c', 'c/d'], contents) def test_ls_targeted(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls("/a/b", absolute=False) self.assertEqual(['c'], contents) def test_ls_targeted_absolute(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls("/a/b", absolute=True) self.assertEqual(['/a/b/c'], contents) def test_ls_recursive_targeted_absolute(self): fs = impl_memory.FakeFilesystem() fs.ensure_path("/d") fs.ensure_path("/c/d") fs.ensure_path("/b/c/d") fs.ensure_path("/a/b/c/d") contents = fs.ls_r("/a/b", absolute=True) self.assertEqual(['/a/b/c', '/a/b/c/d'], contents) def test_ensure_path(self): fs = impl_memory.FakeFilesystem() pieces = ['a', 'b', 'c'] path = "/" + "/".join(pieces) fs.ensure_path(path) path = fs.root_path for i, p in enumerate(pieces): if i == 0: path += p else: path += "/" + p self.assertIsNone(fs[path]) def test_clear(self): fs = impl_memory.FakeFilesystem() paths = ['/b', '/c', '/a/b/c'] for p in paths: fs.ensure_path(p) for p in paths: self.assertIsNone(self._get_item_path(fs, p)) fs.clear() for p in paths: self.assertRaises(exc.NotFound, self._get_item_path, fs, p) def test_not_found(self): fs = impl_memory.FakeFilesystem() self.assertRaises(exc.NotFound, self._get_item_path, fs, '/c') def test_bad_norms(self): fs = impl_memory.FakeFilesystem() self.assertRaises(ValueError, fs.normpath, '') self.assertRaises(ValueError, fs.normpath, 'abc/c') self.assertRaises(ValueError, fs.normpath, '../c') def test_del_root_not_allowed(self): fs = impl_memory.FakeFilesystem() self.assertRaises(ValueError, fs.delete, "/", recursive=False) def test_del_no_children_allowed(self): fs = impl_memory.FakeFilesystem() fs['/a'] = 'a' self.assertEqual(1, len(fs.ls_r("/"))) fs.delete("/a") self.assertEqual(0, len(fs.ls("/"))) def test_del_many_children_not_allowed(self): fs = impl_memory.FakeFilesystem() fs['/a'] = 'a' fs['/a/b'] = 'b' self.assertRaises(ValueError, fs.delete, "/", recursive=False) def test_del_with_children_not_allowed(self): fs = impl_memory.FakeFilesystem() fs['/a'] = 'a' fs['/a/b'] = 'b' self.assertRaises(ValueError, fs.delete, "/a", recursive=False) def test_del_many_children_allowed(self): fs = impl_memory.FakeFilesystem() fs['/a'] = 'a' fs['/a/b'] = 'b' 
self.assertEqual(2, len(fs.ls_r("/"))) fs.delete("/a", recursive=True) self.assertEqual(0, len(fs.ls("/"))) def test_del_many_children_allowed_not_recursive(self): fs = impl_memory.FakeFilesystem() fs['/a'] = 'a' fs['/a/b'] = 'b' self.assertEqual(2, len(fs.ls_r("/"))) fs.delete("/a/b", recursive=False) self.assertEqual(1, len(fs.ls("/"))) fs.delete("/a", recursive=False) self.assertEqual(0, len(fs.ls("/"))) def test_link_loop_raises(self): fs = impl_memory.FakeFilesystem() fs['/b'] = 'c' fs.symlink('/b', '/b') self.assertRaises(ValueError, self._get_item_path, fs, '/b') def test_ensure_linked_delete(self): fs = impl_memory.FakeFilesystem() fs['/b'] = 'd' fs.symlink('/b', '/c') self.assertEqual('d', fs['/b']) self.assertEqual('d', fs['/c']) del fs['/b'] self.assertRaises(exc.NotFound, self._get_item_path, fs, '/c') self.assertRaises(exc.NotFound, self._get_item_path, fs, '/b') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/test_sql_persistence.py0000664000175000017500000002400600000000000026517 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import contextlib import os import random import tempfile import six import testtools # NOTE(harlowja): by default this will test against sqlite using a temporary # sqlite file (this is done instead of in-memory to ensure thread safety, # in-memory sqlite is not thread safe). # # There are also "opportunistic" tests for both mysql and postgresql in here, # which allows testing against all 3 databases (sqlite, mysql, postgres) in # a properly configured unit test environment. For the opportunistic testing # you need to set up a db user 'openstack_citest' with password # 'openstack_citest' that has the permissions to create databases on # localhost. 
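# A minimal sketch (an assumption, not part of this module) of how such a
# test user might be created for the opportunistic tests; the exact
# statements and grants can vary per environment:
#
#     MySQL:      CREATE USER 'openstack_citest'@'localhost'
#                     IDENTIFIED BY 'openstack_citest';
#                 GRANT ALL PRIVILEGES ON *.*
#                     TO 'openstack_citest'@'localhost';
#     PostgreSQL: CREATE USER openstack_citest
#                     WITH PASSWORD 'openstack_citest' CREATEDB;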
USER = "openstack_citest" PASSWD = "openstack_citest" DATABASE = "tftest_" + ''.join(random.choice('0123456789') for _ in range(12)) import sqlalchemy as sa from taskflow.persistence import backends from taskflow.persistence.backends import impl_sqlalchemy from taskflow import test from taskflow.tests.unit.persistence import base def _get_connect_string(backend, user, passwd, database=None, variant=None): """Forms a sqlalchemy database uri string for the given values.""" if backend == "postgres": if not variant: variant = 'psycopg2' backend = "postgresql+%s" % (variant) elif backend == "mysql": if not variant: variant = 'pymysql' backend = "mysql+%s" % (variant) else: raise Exception("Unrecognized backend: '%s'" % backend) if not database: database = '' return "%s://%s:%s@localhost/%s" % (backend, user, passwd, database) def _mysql_exists(): engine = None try: db_uri = _get_connect_string('mysql', USER, PASSWD) engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()): return True except Exception: pass finally: if engine is not None: try: engine.dispose() except Exception: pass return False def _postgres_exists(): engine = None try: db_uri = _get_connect_string('postgres', USER, PASSWD, 'postgres') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()): return True except Exception: return False finally: if engine is not None: try: engine.dispose() except Exception: pass class SqlitePersistenceTest(test.TestCase, base.PersistenceTestMixin): """Inherits from the base test and sets up a sqlite temporary db.""" def _get_connection(self): conf = { 'connection': self.db_uri, } return impl_sqlalchemy.SQLAlchemyBackend(conf).get_connection() def setUp(self): super(SqlitePersistenceTest, self).setUp() self.db_location = tempfile.mktemp(suffix='.db') self.db_uri = "sqlite:///%s" % (self.db_location) # Ensure upgraded to the right schema with contextlib.closing(self._get_connection()) as conn: conn.upgrade() def tearDown(self): super(SqlitePersistenceTest, self).tearDown() if self.db_location and os.path.isfile(self.db_location): os.unlink(self.db_location) self.db_location = None @six.add_metaclass(abc.ABCMeta) class BackendPersistenceTestMixin(base.PersistenceTestMixin): """Specifies a backend type and does required setup and teardown.""" def _get_connection(self): return self.backend.get_connection() def test_entrypoint(self): # Test that the entrypoint fetching also works (even with dialects) # using the same configuration we used in setUp() but not using # the impl_sqlalchemy SQLAlchemyBackend class directly... with contextlib.closing(backends.fetch(self.db_conf)) as backend: with contextlib.closing(backend.get_connection()): pass @abc.abstractmethod def _init_db(self): """Sets up the database, and returns the uri to that database.""" @abc.abstractmethod def _remove_db(self): """Cleans up by removing the database once the tests are done.""" def setUp(self): super(BackendPersistenceTestMixin, self).setUp() self.backend = None try: self.db_uri = self._init_db() self.db_conf = { 'connection': self.db_uri } # Since we are using random database names, we need to make sure # and remove our random database when we are done testing. 
self.addCleanup(self._remove_db) except Exception as e: self.skipTest("Failed to create temporary database;" " testing being skipped due to: %s" % (e)) else: self.backend = impl_sqlalchemy.SQLAlchemyBackend(self.db_conf) self.addCleanup(self.backend.close) with contextlib.closing(self._get_connection()) as conn: conn.upgrade() @testtools.skipIf(not _mysql_exists(), 'mysql is not available') class MysqlPersistenceTest(BackendPersistenceTestMixin, test.TestCase): def _init_db(self): engine = None try: db_uri = _get_connect_string('mysql', USER, PASSWD) engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: conn.execute("CREATE DATABASE %s" % DATABASE) except Exception as e: raise Exception('Failed to initialize MySQL db: %s' % (e)) finally: if engine is not None: try: engine.dispose() except Exception: pass return _get_connect_string('mysql', USER, PASSWD, database=DATABASE) def _remove_db(self): engine = None try: engine = sa.create_engine(self.db_uri) with contextlib.closing(engine.connect()) as conn: conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) except Exception as e: raise Exception('Failed to remove temporary database: %s' % (e)) finally: if engine is not None: try: engine.dispose() except Exception: pass @testtools.skipIf(not _postgres_exists(), 'postgres is not available') class PostgresPersistenceTest(BackendPersistenceTestMixin, test.TestCase): def _init_db(self): engine = None try: # Postgres can't operate on the database it's connected to, that's # why we connect to the database 'postgres' and then create the # desired database. db_uri = _get_connect_string('postgres', USER, PASSWD, database='postgres') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: conn.connection.set_isolation_level(0) conn.execute("CREATE DATABASE %s" % DATABASE) conn.connection.set_isolation_level(1) except Exception as e: raise Exception('Failed to initialize PostgreSQL db: %s' % (e)) finally: if engine is not None: try: engine.dispose() except Exception: pass return _get_connect_string('postgres', USER, PASSWD, database=DATABASE) def _remove_db(self): engine = None try: # Postgres can't operate on the database it's connected to, that's # why we connect to the 'postgres' database and then drop the # database. 
db_uri = _get_connect_string('postgres', USER, PASSWD, database='postgres') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: conn.connection.set_isolation_level(0) conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) conn.connection.set_isolation_level(1) except Exception as e: raise Exception('Failed to remove temporary database: %s' % (e)) finally: if engine is not None: try: engine.dispose() except Exception: pass class SQLBackendFetchingTest(test.TestCase): def test_sqlite_persistence_entry_point(self): conf = {'connection': 'sqlite:///'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) @testtools.skipIf(not _mysql_exists(), 'mysql is not available') def test_mysql_persistence_entry_point(self): uri = _get_connect_string('mysql', USER, PASSWD, database=DATABASE) conf = {'connection': uri} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) @testtools.skipIf(not _postgres_exists(), 'postgres is not available') def test_postgres_persistence_entry_point(self): uri = _get_connect_string('postgres', USER, PASSWD, database=DATABASE) conf = {'connection': uri} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_sqlalchemy.SQLAlchemyBackend) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/persistence/test_zk_persistence.py0000664000175000017500000000710000000000000026340 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 AT&T Labs All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from kazoo import exceptions as kazoo_exceptions from oslo_utils import uuidutils import testtools from zake import fake_client from taskflow import exceptions as exc from taskflow.persistence import backends from taskflow.persistence.backends import impl_zookeeper from taskflow import test from taskflow.tests.unit.persistence import base from taskflow.tests import utils as test_utils from taskflow.utils import kazoo_utils TEST_PATH_TPL = '/taskflow/persistence-test/%s' _ZOOKEEPER_AVAILABLE = test_utils.zookeeper_available( impl_zookeeper.MIN_ZK_VERSION) def clean_backend(backend, conf): with contextlib.closing(backend.get_connection()) as conn: try: conn.clear_all() except exc.NotFound: pass client = kazoo_utils.make_client(conf) client.start() try: client.delete(conf['path'], recursive=True) except kazoo_exceptions.NoNodeError: pass finally: kazoo_utils.finalize_client(client) @testtools.skipIf(not _ZOOKEEPER_AVAILABLE, 'zookeeper is not available') class ZkPersistenceTest(test.TestCase, base.PersistenceTestMixin): def _get_connection(self): return self.backend.get_connection() def setUp(self): super(ZkPersistenceTest, self).setUp() conf = test_utils.ZK_TEST_CONFIG.copy() # Create a unique path just for this test (so that we don't overwrite # what other tests are doing). 
conf['path'] = TEST_PATH_TPL % (uuidutils.generate_uuid()) try: self.backend = impl_zookeeper.ZkBackend(conf) except Exception as e: self.skipTest("Failed creating backend created from configuration" " %s due to %s" % (conf, e)) else: self.addCleanup(self.backend.close) self.addCleanup(clean_backend, self.backend, conf) with contextlib.closing(self.backend.get_connection()) as conn: conn.upgrade() def test_zk_persistence_entry_point(self): conf = {'connection': 'zookeeper:'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_zookeeper.ZkBackend) @testtools.skipIf(_ZOOKEEPER_AVAILABLE, 'zookeeper is available') class ZakePersistenceTest(test.TestCase, base.PersistenceTestMixin): def _get_connection(self): return self._backend.get_connection() def setUp(self): super(ZakePersistenceTest, self).setUp() conf = { "path": "/taskflow", } self.client = fake_client.FakeClient() self.client.start() self._backend = impl_zookeeper.ZkBackend(conf, client=self.client) conn = self._backend.get_connection() conn.upgrade() def test_zk_persistence_entry_point(self): conf = {'connection': 'zookeeper:'} with contextlib.closing(backends.fetch(conf)) as be: self.assertIsInstance(be, impl_zookeeper.ZkBackend) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_arguments_passing.py0000664000175000017500000002176400000000000024531 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
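# Illustrative sketch of the pattern the persistence entry-point tests above
# exercise: a backend is fetched purely from a configuration dictionary and
# its schema is prepared with upgrade(). The in-memory sqlite URI below is an
# assumption for the example; any connection string understood by an
# installed backend behaves the same way.
import contextlib

from taskflow.persistence import backends as persistence_backends


def _example_fetch_backend():
    conf = {'connection': 'sqlite:///'}  # in-memory sqlite backend
    with contextlib.closing(persistence_backends.fetch(conf)) as backend:
        with contextlib.closing(backend.get_connection()) as conn:
            conn.upgrade()  # create/upgrade the schema before first use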
import futurist import testtools import taskflow.engines from taskflow import exceptions as exc from taskflow import test from taskflow.tests import utils from taskflow.utils import eventlet_utils as eu class ArgumentsPassingTest(utils.EngineTestBase): def test_save_as(self): flow = utils.TaskOneReturn(name='task1', provides='first_data') engine = self._make_engine(flow) engine.run() self.assertEqual({'first_data': 1}, engine.storage.fetch_all()) def test_save_all_in_one(self): flow = utils.TaskMultiReturn(provides='all_data') engine = self._make_engine(flow) engine.run() self.assertEqual({'all_data': (1, 3, 5)}, engine.storage.fetch_all()) def test_save_several_values(self): flow = utils.TaskMultiReturn(provides=('badger', 'mushroom', 'snake')) engine = self._make_engine(flow) engine.run() self.assertEqual({ 'badger': 1, 'mushroom': 3, 'snake': 5 }, engine.storage.fetch_all()) def test_save_dict(self): flow = utils.TaskMultiDict(provides=set(['badger', 'mushroom', 'snake'])) engine = self._make_engine(flow) engine.run() self.assertEqual({ 'badger': 0, 'mushroom': 1, 'snake': 2, }, engine.storage.fetch_all()) def test_bad_save_as_value(self): self.assertRaises(TypeError, utils.TaskOneReturn, name='task1', provides=object()) def test_arguments_passing(self): flow = utils.TaskMultiArgOneReturn(provides='result') engine = self._make_engine(flow) engine.storage.inject({'x': 1, 'y': 4, 'z': 9, 'a': 17}) engine.run() self.assertEqual({ 'x': 1, 'y': 4, 'z': 9, 'a': 17, 'result': 14, }, engine.storage.fetch_all()) def test_arguments_missing(self): flow = utils.TaskMultiArg() engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'b': 4, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_partial_arguments_mapping(self): flow = utils.TaskMultiArgOneReturn(provides='result', rebind={'x': 'a'}) engine = self._make_engine(flow) engine.storage.inject({'x': 1, 'y': 4, 'z': 9, 'a': 17}) engine.run() self.assertEqual({ 'x': 1, 'y': 4, 'z': 9, 'a': 17, 'result': 30, }, engine.storage.fetch_all()) def test_argument_injection(self): flow = utils.TaskMultiArgOneReturn(provides='result', inject={'x': 1, 'y': 4, 'z': 9}) engine = self._make_engine(flow) engine.run() self.assertEqual({ 'result': 14, }, engine.storage.fetch_all()) def test_argument_injection_rebind(self): flow = utils.TaskMultiArgOneReturn(provides='result', rebind=['a', 'b', 'c'], inject={'a': 1, 'b': 4, 'c': 9}) engine = self._make_engine(flow) engine.run() self.assertEqual({ 'result': 14, }, engine.storage.fetch_all()) def test_argument_injection_required(self): flow = utils.TaskMultiArgOneReturn(provides='result', requires=['a', 'b', 'c'], inject={'x': 1, 'y': 4, 'z': 9, 'a': 0, 'b': 0, 'c': 0}) engine = self._make_engine(flow) engine.run() self.assertEqual({ 'result': 14, }, engine.storage.fetch_all()) def test_all_arguments_mapping(self): flow = utils.TaskMultiArgOneReturn(provides='result', rebind=['a', 'b', 'c']) engine = self._make_engine(flow) engine.storage.inject({ 'a': 1, 'b': 2, 'c': 3, 'x': 4, 'y': 5, 'z': 6 }) engine.run() self.assertEqual({ 'a': 1, 'b': 2, 'c': 3, 'x': 4, 'y': 5, 'z': 6, 'result': 6, }, engine.storage.fetch_all()) def test_invalid_argument_name_map(self): flow = utils.TaskMultiArg(rebind={'z': 'b'}) engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'y': 4, 'c': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_invalid_argument_name_list(self): flow = utils.TaskMultiArg(rebind=['a', 'z', 'b']) engine = self._make_engine(flow) 
engine.storage.inject({'a': 1, 'b': 4, 'c': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_bad_rebind_args_value(self): self.assertRaises(TypeError, utils.TaskOneArg, rebind=object()) def test_long_arg_name(self): flow = utils.LongArgNameTask(requires='long_arg_name', provides='result') engine = self._make_engine(flow) engine.storage.inject({'long_arg_name': 1}) engine.run() self.assertEqual({ 'long_arg_name': 1, 'result': 1 }, engine.storage.fetch_all()) def test_revert_rebound_args_required(self): flow = utils.TaskMultiArg(revert_rebind={'z': 'b'}) engine = self._make_engine(flow) engine.storage.inject({'a': 1, 'y': 4, 'c': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_revert_required_args_required(self): flow = utils.TaskMultiArg(revert_requires=['a']) engine = self._make_engine(flow) engine.storage.inject({'y': 4, 'z': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) def test_derived_revert_args_required(self): flow = utils.TaskRevertExtraArgs() engine = self._make_engine(flow) engine.storage.inject({'y': 4, 'z': 9, 'x': 17}) self.assertRaises(exc.MissingDependencies, engine.run) engine.storage.inject({'revert_arg': None}) self.assertRaises(exc.ExecutionFailure, engine.run) class SerialEngineTest(ArgumentsPassingTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine='serial', backend=self.backend) class ParallelEngineWithThreadsTest(ArgumentsPassingTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = 'threads' return taskflow.engines.load(flow, flow_detail=flow_detail, engine='parallel', backend=self.backend, executor=executor, max_workers=self._EXECUTOR_WORKERS) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(ArgumentsPassingTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = futurist.GreenThreadPoolExecutor() self.addCleanup(executor.shutdown) return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor) class ParallelEngineWithProcessTest(ArgumentsPassingTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = 'processes' return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor, max_workers=self._EXECUTOR_WORKERS) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_check_transition.py0000664000175000017500000001403500000000000024320 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
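# Illustrative sketch of the provides/rebind/inject behaviour verified by the
# arguments-passing tests above, shown against the public engines.load()
# helper. The task and value names ('mul', 'product', 'a', ...) are made up
# for the example.
import taskflow.engines
from taskflow.patterns import linear_flow as lf
from taskflow import task


class _Multiply(task.Task):
    default_provides = 'product'

    def execute(self, x, y):
        return x * y


def _example_arguments_passing():
    # 'x' is rebound to the stored value named 'a'; 'y' is injected directly.
    flow = lf.Flow('demo').add(
        _Multiply('mul', rebind={'x': 'a'}, inject={'y': 10}))
    engine = taskflow.engines.load(flow, store={'a': 3}, engine='serial')
    engine.run()
    return engine.storage.fetch('product')  # -> 30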
from taskflow import exceptions as exc from taskflow import states from taskflow import test class TransitionTest(test.TestCase): _DISALLOWED_TPL = "Transition from '%s' to '%s' was found to be disallowed" _NOT_IGNORED_TPL = "Transition from '%s' to '%s' was not ignored" def assertTransitionAllowed(self, from_state, to_state): msg = self._DISALLOWED_TPL % (from_state, to_state) self.assertTrue(self.check_transition(from_state, to_state), msg=msg) def assertTransitionIgnored(self, from_state, to_state): msg = self._NOT_IGNORED_TPL % (from_state, to_state) self.assertFalse(self.check_transition(from_state, to_state), msg=msg) def assertTransitionForbidden(self, from_state, to_state): self.assertRaisesRegex(exc.InvalidState, self.transition_exc_regexp, self.check_transition, from_state, to_state) def assertTransitions(self, from_state, allowed=None, ignored=None, forbidden=None): for a in allowed or []: self.assertTransitionAllowed(from_state, a) for i in ignored or []: self.assertTransitionIgnored(from_state, i) for f in forbidden or []: self.assertTransitionForbidden(from_state, f) class CheckFlowTransitionTest(TransitionTest): def setUp(self): super(CheckFlowTransitionTest, self).setUp() self.check_transition = states.check_flow_transition self.transition_exc_regexp = '^Flow transition.*not allowed' def test_to_same_state(self): self.assertTransitionIgnored(states.SUCCESS, states.SUCCESS) def test_rerunning_allowed(self): self.assertTransitionAllowed(states.SUCCESS, states.RUNNING) def test_no_resuming_from_pending(self): self.assertTransitionIgnored(states.PENDING, states.RESUMING) def test_resuming_from_running(self): self.assertTransitionAllowed(states.RUNNING, states.RESUMING) def test_bad_transition_raises(self): self.assertTransitionForbidden(states.FAILURE, states.SUCCESS) class CheckTaskTransitionTest(TransitionTest): def setUp(self): super(CheckTaskTransitionTest, self).setUp() self.check_transition = states.check_task_transition self.transition_exc_regexp = '^Task transition.*not allowed' def test_from_pending_state(self): self.assertTransitions(from_state=states.PENDING, allowed=(states.RUNNING,), ignored=(states.PENDING, states.REVERTING, states.SUCCESS, states.FAILURE, states.REVERTED)) def test_from_running_state(self): self.assertTransitions(from_state=states.RUNNING, allowed=(states.SUCCESS, states.FAILURE,), ignored=(states.REVERTING, states.RUNNING, states.PENDING, states.REVERTED)) def test_from_success_state(self): self.assertTransitions(from_state=states.SUCCESS, allowed=(states.REVERTING,), ignored=(states.RUNNING, states.SUCCESS, states.PENDING, states.FAILURE, states.REVERTED)) def test_from_failure_state(self): self.assertTransitions(from_state=states.FAILURE, allowed=(states.REVERTING,), ignored=(states.FAILURE, states.RUNNING, states.PENDING, states.SUCCESS, states.REVERTED)) def test_from_reverting_state(self): self.assertTransitions(from_state=states.REVERTING, allowed=(states.REVERT_FAILURE, states.REVERTED), ignored=(states.RUNNING, states.REVERTING, states.PENDING, states.SUCCESS)) def test_from_reverted_state(self): self.assertTransitions(from_state=states.REVERTED, allowed=(states.PENDING,), ignored=(states.REVERTING, states.REVERTED, states.RUNNING, states.SUCCESS, states.FAILURE)) class CheckRetryTransitionTest(CheckTaskTransitionTest): def setUp(self): super(CheckRetryTransitionTest, self).setUp() self.check_transition = states.check_retry_transition self.transition_exc_regexp = '^Retry transition.*not allowed' def test_from_success_state(self): 
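# NOTE: a retry differs from a plain task here in that SUCCESS may also
# transition to RETRYING (in addition to REVERTING), which is what the
# assertion below verifies.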
self.assertTransitions(from_state=states.SUCCESS, allowed=(states.REVERTING, states.RETRYING), ignored=(states.RUNNING, states.SUCCESS, states.PENDING, states.FAILURE, states.REVERTED)) def test_from_retrying_state(self): self.assertTransitions(from_state=states.RETRYING, allowed=(states.RUNNING,), ignored=(states.RETRYING, states.SUCCESS, states.PENDING, states.FAILURE, states.REVERTED)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_conductors.py0000664000175000017500000004506400000000000023162 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import threading import futurist import testscenarios from zake import fake_client from taskflow.conductors import backends from taskflow import engines from taskflow.jobs.backends import impl_zookeeper from taskflow.jobs import base from taskflow.patterns import linear_flow as lf from taskflow.persistence.backends import impl_memory from taskflow import states as st from taskflow import test from taskflow.tests import utils as test_utils from taskflow.utils import persistence_utils as pu from taskflow.utils import threading_utils @contextlib.contextmanager def close_many(*closeables): try: yield finally: for c in closeables: c.close() def test_factory(blowup): f = lf.Flow("test") if not blowup: f.add(test_utils.ProgressingTask('test1')) else: f.add(test_utils.FailingTask("test1")) return f def sleep_factory(): f = lf.Flow("test") f.add(test_utils.SleepTask('test1')) f.add(test_utils.ProgressingTask('test2')) return f def test_store_factory(): f = lf.Flow("test") f.add(test_utils.TaskMultiArg('task1')) return f def single_factory(): return futurist.ThreadPoolExecutor(max_workers=1) ComponentBundle = collections.namedtuple('ComponentBundle', ['board', 'client', 'persistence', 'conductor']) class ManyConductorTest(testscenarios.TestWithScenarios, test_utils.EngineTestBase, test.TestCase): scenarios = [ ('blocking', {'kind': 'blocking', 'conductor_kwargs': {'wait_timeout': 0.1}}), ('nonblocking_many_thread', {'kind': 'nonblocking', 'conductor_kwargs': {'wait_timeout': 0.1}}), ('nonblocking_one_thread', {'kind': 'nonblocking', 'conductor_kwargs': { 'executor_factory': single_factory, 'wait_timeout': 0.1, }}) ] def make_components(self): client = fake_client.FakeClient() persistence = impl_memory.MemoryBackend() board = impl_zookeeper.ZookeeperJobBoard('testing', {}, client=client, persistence=persistence) conductor_kwargs = self.conductor_kwargs.copy() conductor_kwargs['persistence'] = persistence conductor = backends.fetch(self.kind, 'testing', board, **conductor_kwargs) return ComponentBundle(board, client, persistence, conductor) def test_connection(self): components = self.make_components() components.conductor.connect() with close_many(components.conductor, components.client): self.assertTrue(components.board.connected) 
self.assertTrue(components.client.connected) self.assertFalse(components.board.connected) self.assertFalse(components.client.connected) def test_run_empty(self): components = self.make_components() components.conductor.connect() with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() components.conductor.stop() self.assertTrue( components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) t.join() def test_run(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() job_consumed_event = threading.Event() job_abandoned_event = threading.Event() def on_consume(state, details): consumed_event.set() def on_job_consumed(event, details): if event == 'job_consumed': job_consumed_event.set() def on_job_abandoned(event, details): if event == 'job_abandoned': job_abandoned_event.set() components.board.notifier.register(base.REMOVAL, on_consume) components.conductor.notifier.register("job_consumed", on_job_consumed) components.conductor.notifier.register("job_abandoned", on_job_abandoned) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, test_factory, [False], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) self.assertTrue(job_consumed_event.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(job_abandoned_event.wait(1)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertEqual(st.SUCCESS, fd.state) def test_run_max_dispatches(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() def on_consume(state, details): consumed_event.set() components.board.notifier.register(base.REMOVAL, on_consume) with close_many(components.client, components.conductor): t = threading_utils.daemon_thread( lambda: components.conductor.run(max_dispatches=5)) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, test_factory, [False], {}, backend=components.persistence) for _ in range(5): components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) self.assertTrue(consumed_event.wait( test_utils.WAIT_TIMEOUT)) components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) def test_fail_run(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() job_consumed_event = threading.Event() job_abandoned_event = threading.Event() def on_consume(state, details): consumed_event.set() def on_job_consumed(event, details): if event == 'job_consumed': job_consumed_event.set() def on_job_abandoned(event, details): if event == 'job_abandoned': job_abandoned_event.set() components.board.notifier.register(base.REMOVAL, on_consume) components.conductor.notifier.register("job_consumed", on_job_consumed) 
components.conductor.notifier.register("job_abandoned", on_job_abandoned) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, test_factory, [True], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) self.assertTrue(job_consumed_event.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(job_abandoned_event.wait(1)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertEqual(st.REVERTED, fd.state) def test_missing_store(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() def on_consume(state, details): consumed_event.set() components.board.notifier.register(base.REMOVAL, on_consume) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, test_store_factory, [], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertIsNone(fd.state) def test_job_store(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() def on_consume(state, details): consumed_event.set() store = {'x': True, 'y': False, 'z': None} components.board.notifier.register(base.REMOVAL, on_consume) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, test_store_factory, [], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid, 'store': store}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertEqual(st.SUCCESS, fd.state) def test_flowdetails_store(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() def on_consume(state, details): consumed_event.set() store = {'x': True, 'y': False, 'z': None} components.board.notifier.register(base.REMOVAL, on_consume) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence, meta={'store': store}) 
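# NOTE: unlike test_job_store above, the shared store here is carried in
# the flow detail metadata (meta={'store': ...}) rather than in the posted
# job details.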
engines.save_factory_details(fd, test_store_factory, [], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertEqual(st.SUCCESS, fd.state) def test_combined_store(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() def on_consume(state, details): consumed_event.set() flow_store = {'x': True, 'y': False} job_store = {'z': None} components.board.notifier.register(base.REMOVAL, on_consume) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence, meta={'store': flow_store}) engines.save_factory_details(fd, test_store_factory, [], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid, 'store': job_store}) self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT)) components.conductor.stop() self.assertTrue(components.conductor.wait(test_utils.WAIT_TIMEOUT)) self.assertFalse(components.conductor.dispatching) persistence = components.persistence with contextlib.closing(persistence.get_connection()) as conn: lb = conn.get_logbook(lb.uuid) fd = lb.find(fd.uuid) self.assertIsNotNone(fd) self.assertEqual(st.SUCCESS, fd.state) def test_stop_aborts_engine(self): components = self.make_components() components.conductor.connect() consumed_event = threading.Event() job_consumed_event = threading.Event() job_abandoned_event = threading.Event() running_start_event = threading.Event() def on_running_start(event, details): running_start_event.set() def on_consume(state, details): consumed_event.set() def on_job_consumed(event, details): if event == 'job_consumed': job_consumed_event.set() def on_job_abandoned(event, details): if event == 'job_abandoned': job_abandoned_event.set() components.board.notifier.register(base.REMOVAL, on_consume) components.conductor.notifier.register("job_consumed", on_job_consumed) components.conductor.notifier.register("job_abandoned", on_job_abandoned) components.conductor.notifier.register("running_start", on_running_start) with close_many(components.conductor, components.client): t = threading_utils.daemon_thread(components.conductor.run) t.start() lb, fd = pu.temporary_flow_detail(components.persistence) engines.save_factory_details(fd, sleep_factory, [], {}, backend=components.persistence) components.board.post('poke', lb, details={'flow_uuid': fd.uuid, 'store': {'duration': 2}}) running_start_event.wait(test_utils.WAIT_TIMEOUT) components.conductor.stop() job_abandoned_event.wait(test_utils.WAIT_TIMEOUT) self.assertTrue(job_abandoned_event.is_set()) self.assertFalse(job_consumed_event.is_set()) self.assertFalse(consumed_event.is_set()) class NonBlockingExecutorTest(test.TestCase): def test_bad_wait_timeout(self): persistence = impl_memory.MemoryBackend() client = fake_client.FakeClient() board = impl_zookeeper.ZookeeperJobBoard('testing', {}, client=client, persistence=persistence) self.assertRaises(ValueError, backends.fetch, 'nonblocking', 'testing', board, persistence=persistence, 
wait_timeout='testing') def test_bad_factory(self): persistence = impl_memory.MemoryBackend() client = fake_client.FakeClient() board = impl_zookeeper.ZookeeperJobBoard('testing', {}, client=client, persistence=persistence) self.assertRaises(ValueError, backends.fetch, 'nonblocking', 'testing', board, persistence=persistence, executor_factory='testing') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_deciders.py0000664000175000017500000000525500000000000022557 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import deciders from taskflow import test class TestDeciders(test.TestCase): def test_translate(self): for val in ['all', 'ALL', 'aLL', deciders.Depth.ALL]: self.assertEqual(deciders.Depth.ALL, deciders.Depth.translate(val)) for val in ['atom', 'ATOM', 'atOM', deciders.Depth.ATOM]: self.assertEqual(deciders.Depth.ATOM, deciders.Depth.translate(val)) for val in ['neighbors', 'Neighbors', 'NEIGHBORS', deciders.Depth.NEIGHBORS]: self.assertEqual(deciders.Depth.NEIGHBORS, deciders.Depth.translate(val)) for val in ['flow', 'FLOW', 'flOW', deciders.Depth.FLOW]: self.assertEqual(deciders.Depth.FLOW, deciders.Depth.translate(val)) def test_bad_translate(self): self.assertRaises(TypeError, deciders.Depth.translate, 3) self.assertRaises(TypeError, deciders.Depth.translate, object()) self.assertRaises(ValueError, deciders.Depth.translate, "stuff") def test_pick_widest(self): choices = [deciders.Depth.ATOM, deciders.Depth.FLOW] self.assertEqual(deciders.Depth.FLOW, deciders.pick_widest(choices)) choices = [deciders.Depth.ATOM, deciders.Depth.FLOW, deciders.Depth.ALL] self.assertEqual(deciders.Depth.ALL, deciders.pick_widest(choices)) choices = [deciders.Depth.ATOM, deciders.Depth.FLOW, deciders.Depth.ALL, deciders.Depth.NEIGHBORS] self.assertEqual(deciders.Depth.ALL, deciders.pick_widest(choices)) choices = [deciders.Depth.ATOM, deciders.Depth.NEIGHBORS] self.assertEqual(deciders.Depth.NEIGHBORS, deciders.pick_widest(choices)) def test_bad_pick_widest(self): self.assertRaises(ValueError, deciders.pick_widest, []) self.assertRaises(ValueError, deciders.pick_widest, ["a"]) self.assertRaises(ValueError, deciders.pick_widest, set(['b'])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_engine_helpers.py0000664000175000017500000001242700000000000023763 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import taskflow.engines from taskflow import exceptions as exc from taskflow.patterns import linear_flow from taskflow import test from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.utils import persistence_utils as p_utils class EngineLoadingTestCase(test.TestCase): def _make_dummy_flow(self): f = linear_flow.Flow('test') f.add(test_utils.TaskOneReturn("run-1")) return f def test_default_load(self): f = self._make_dummy_flow() e = taskflow.engines.load(f) self.assertIsNotNone(e) def test_unknown_load(self): f = self._make_dummy_flow() self.assertRaises(exc.NotFound, taskflow.engines.load, f, engine='not_really_any_engine') def test_options_empty(self): f = self._make_dummy_flow() e = taskflow.engines.load(f) self.assertEqual({}, e.options) def test_options_passthrough(self): f = self._make_dummy_flow() e = taskflow.engines.load(f, pass_1=1, pass_2=2) self.assertEqual({'pass_1': 1, 'pass_2': 2}, e.options) class FlowFromDetailTestCase(test.TestCase): def test_no_meta(self): _lb, flow_detail = p_utils.temporary_flow_detail() self.assertEqual({}, flow_detail.meta) self.assertRaisesRegex(ValueError, '^Cannot .* no factory information saved.$', taskflow.engines.flow_from_detail, flow_detail) def test_no_factory_in_meta(self): _lb, flow_detail = p_utils.temporary_flow_detail() self.assertRaisesRegex(ValueError, '^Cannot .* no factory information saved.$', taskflow.engines.flow_from_detail, flow_detail) def test_no_importable_function(self): _lb, flow_detail = p_utils.temporary_flow_detail() flow_detail.meta = dict(factory=dict( name='you can not import me, i contain spaces' )) self.assertRaisesRegex(ImportError, '^Could not import factory', taskflow.engines.flow_from_detail, flow_detail) def test_no_arg_factory(self): name = 'some.test.factory' _lb, flow_detail = p_utils.temporary_flow_detail() flow_detail.meta = dict(factory=dict(name=name)) with mock.patch('oslo_utils.importutils.import_class', return_value=lambda: 'RESULT') as mock_import: result = taskflow.engines.flow_from_detail(flow_detail) mock_import.assert_called_once_with(name) self.assertEqual('RESULT', result) def test_factory_with_arg(self): name = 'some.test.factory' _lb, flow_detail = p_utils.temporary_flow_detail() flow_detail.meta = dict(factory=dict(name=name, args=['foo'])) with mock.patch('oslo_utils.importutils.import_class', return_value=lambda x: 'RESULT %s' % x) as mock_import: result = taskflow.engines.flow_from_detail(flow_detail) mock_import.assert_called_once_with(name) self.assertEqual('RESULT foo', result) def my_flow_factory(task_name): return test_utils.DummyTask(name=task_name) class LoadFromFactoryTestCase(test.TestCase): def test_non_reimportable(self): def factory(): pass self.assertRaisesRegex(ValueError, 'Flow factory .* is not reimportable', taskflow.engines.load_from_factory, factory) def test_it_works(self): engine = taskflow.engines.load_from_factory( my_flow_factory, factory_kwargs={'task_name': 'test1'}) self.assertIsInstance(engine._flow, test_utils.DummyTask) fd = engine.storage._flowdetail self.assertEqual('test1', fd.name) self.assertEqual({ 'name': 
'%s.my_flow_factory' % __name__, 'args': [], 'kwargs': {'task_name': 'test1'}, }, fd.meta.get('factory')) def test_it_works_by_name(self): factory_name = '%s.my_flow_factory' % __name__ engine = taskflow.engines.load_from_factory( factory_name, factory_kwargs={'task_name': 'test1'}) self.assertIsInstance(engine._flow, test_utils.DummyTask) fd = engine.storage._flowdetail self.assertEqual('test1', fd.name) self.assertEqual({ 'name': factory_name, 'args': [], 'kwargs': {'task_name': 'test1'}, }, fd.meta.get('factory')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_engines.py0000664000175000017500000017563700000000000022441 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import functools import threading import futurist import six import testtools import taskflow.engines from taskflow.engines.action_engine import engine as eng from taskflow.engines.worker_based import engine as w_eng from taskflow.engines.worker_based import worker as wkr from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow.persistence import models from taskflow import states from taskflow import task from taskflow import test from taskflow.tests import utils from taskflow.types import failure from taskflow.types import graph as gr from taskflow.utils import eventlet_utils as eu from taskflow.utils import persistence_utils as p_utils from taskflow.utils import threading_utils as tu # Expected engine transitions when empty workflows are ran... _EMPTY_TRANSITIONS = [ states.RESUMING, states.SCHEDULING, states.WAITING, states.ANALYZING, states.SUCCESS, ] class EngineTaskNotificationsTest(object): def test_run_capture_task_notifications(self): captured = collections.defaultdict(list) def do_capture(bound_name, event_type, details): progress_capture = captured[bound_name] progress_capture.append(details) flow = lf.Flow("flow") work_1 = utils.MultiProgressingTask('work-1') work_1.notifier.register(task.EVENT_UPDATE_PROGRESS, functools.partial(do_capture, 'work-1')) work_2 = utils.MultiProgressingTask('work-2') work_2.notifier.register(task.EVENT_UPDATE_PROGRESS, functools.partial(do_capture, 'work-2')) flow.add(work_1, work_2) # NOTE(harlowja): These were selected so that float comparison will # work vs not work... 
progress_chunks = tuple([0.2, 0.5, 0.8]) engine = self._make_engine( flow, store={'progress_chunks': progress_chunks}) engine.run() expected = [ {'progress': 0.0}, {'progress': 0.2}, {'progress': 0.5}, {'progress': 0.8}, {'progress': 1.0}, ] for name in ['work-1', 'work-2']: self.assertEqual(expected, captured[name]) class EngineTaskTest(object): def test_run_task_as_flow(self): flow = utils.ProgressingTask(name='task1') engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_run_task_with_flow_notifications(self): flow = utils.ProgressingTask(name='task1') engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_failing_task_with_flow_notifications(self): values = [] flow = utils.FailingTask('fail') engine = self._make_engine(flow) expected = ['fail.f RUNNING', 'fail.t RUNNING', 'fail.t FAILURE(Failure: RuntimeError: Woot!)', 'fail.t REVERTING', 'fail.t REVERTED(None)', 'fail.f REVERTED'] with utils.CaptureListener(engine, values=values) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(expected, capturer.values) self.assertEqual(states.REVERTED, engine.storage.get_flow_state()) with utils.CaptureListener(engine, values=values) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) now_expected = list(expected) now_expected.extend(['fail.t PENDING', 'fail.f PENDING']) now_expected.extend(expected) self.assertEqual(now_expected, values) self.assertEqual(states.REVERTED, engine.storage.get_flow_state()) def test_invalid_flow_raises(self): def compile_bad(value): engine = self._make_engine(value) engine.compile() value = 'i am string, not task/flow, sorry' err = self.assertRaises(TypeError, compile_bad, value) self.assertIn(value, str(err)) def test_invalid_flow_raises_from_run(self): def run_bad(value): engine = self._make_engine(value) engine.run() value = 'i am string, not task/flow, sorry' err = self.assertRaises(TypeError, run_bad, value) self.assertIn(value, str(err)) def test_nasty_failing_task_exception_reraised(self): flow = utils.NastyFailingTask() engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) class EngineOptionalRequirementsTest(utils.EngineTestBase): def test_expected_optional_multiplers(self): flow_no_inject = lf.Flow("flow") flow_no_inject.add(utils.OptionalTask(provides='result')) flow_inject_a = lf.Flow("flow") flow_inject_a.add(utils.OptionalTask(provides='result', inject={'a': 10})) flow_inject_b = lf.Flow("flow") flow_inject_b.add(utils.OptionalTask(provides='result', inject={'b': 1000})) engine = self._make_engine(flow_no_inject, store={'a': 3}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'result': 15}, result) engine = self._make_engine(flow_no_inject, store={'a': 3, 'b': 7}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'b': 7, 'result': 21}, result) engine = self._make_engine(flow_inject_a, store={'a': 3}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'result': 50}, result) engine = self._make_engine(flow_inject_a, store={'a': 3, 'b': 7}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'b': 7, 'result': 70}, 
result) engine = self._make_engine(flow_inject_b, store={'a': 3}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'result': 3000}, result) engine = self._make_engine(flow_inject_b, store={'a': 3, 'b': 7}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'a': 3, 'b': 7, 'result': 3000}, result) class EngineMultipleResultsTest(utils.EngineTestBase): def test_fetch_with_a_single_result(self): flow = lf.Flow("flow") flow.add(utils.TaskOneReturn(provides='x')) engine = self._make_engine(flow) engine.run() result = engine.storage.fetch('x') self.assertEqual(1, result) def test_many_results_visible_to(self): flow = lf.Flow("flow") flow.add(utils.AddOneSameProvidesRequires( 'a', rebind={'value': 'source'})) flow.add(utils.AddOneSameProvidesRequires('b')) flow.add(utils.AddOneSameProvidesRequires('c')) engine = self._make_engine(flow, store={'source': 0}) engine.run() # Check what each task in the prior should be seeing... atoms = list(flow) a = atoms[0] a_kwargs = engine.storage.fetch_mapped_args(a.rebind, atom_name='a') self.assertEqual({'value': 0}, a_kwargs) b = atoms[1] b_kwargs = engine.storage.fetch_mapped_args(b.rebind, atom_name='b') self.assertEqual({'value': 1}, b_kwargs) c = atoms[2] c_kwargs = engine.storage.fetch_mapped_args(c.rebind, atom_name='c') self.assertEqual({'value': 2}, c_kwargs) def test_many_results_storage_provided_visible_to(self): # This works as expected due to docs listed at # # https://docs.openstack.org/taskflow/latest/user/engines.html#scoping flow = lf.Flow("flow") flow.add(utils.AddOneSameProvidesRequires('a')) flow.add(utils.AddOneSameProvidesRequires('b')) flow.add(utils.AddOneSameProvidesRequires('c')) engine = self._make_engine(flow, store={'value': 0}) engine.run() # Check what each task in the prior should be seeing... 
atoms = list(flow) a = atoms[0] a_kwargs = engine.storage.fetch_mapped_args(a.rebind, atom_name='a') self.assertEqual({'value': 0}, a_kwargs) b = atoms[1] b_kwargs = engine.storage.fetch_mapped_args(b.rebind, atom_name='b') self.assertEqual({'value': 0}, b_kwargs) c = atoms[2] c_kwargs = engine.storage.fetch_mapped_args(c.rebind, atom_name='c') self.assertEqual({'value': 0}, c_kwargs) def test_fetch_with_two_results(self): flow = lf.Flow("flow") flow.add(utils.TaskOneReturn(provides='x')) engine = self._make_engine(flow, store={'x': 0}) engine.run() result = engine.storage.fetch('x') self.assertEqual(0, result) def test_fetch_all_with_a_single_result(self): flow = lf.Flow("flow") flow.add(utils.TaskOneReturn(provides='x')) engine = self._make_engine(flow) engine.run() result = engine.storage.fetch_all() self.assertEqual({'x': 1}, result) def test_fetch_all_with_two_results(self): flow = lf.Flow("flow") flow.add(utils.TaskOneReturn(provides='x')) engine = self._make_engine(flow, store={'x': 0}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'x': [0, 1]}, result) def test_task_can_update_value(self): flow = lf.Flow("flow") flow.add(utils.TaskOneArgOneReturn(requires='x', provides='x')) engine = self._make_engine(flow, store={'x': 0}) engine.run() result = engine.storage.fetch_all() self.assertEqual({'x': [0, 1]}, result) class EngineLinearFlowTest(utils.EngineTestBase): def test_run_empty_linear_flow(self): flow = lf.Flow('flow-1') engine = self._make_engine(flow) self.assertEqual(_EMPTY_TRANSITIONS, list(engine.run_iter())) def test_overlap_parent_sibling_expected_result(self): flow = lf.Flow('flow-1') flow.add(utils.ProgressingTask(provides='source')) flow.add(utils.TaskOneReturn(provides='source')) subflow = lf.Flow('flow-2') subflow.add(utils.AddOne()) flow.add(subflow) engine = self._make_engine(flow) engine.run() results = engine.storage.fetch_all() self.assertEqual(2, results['result']) def test_overlap_parent_expected_result(self): flow = lf.Flow('flow-1') flow.add(utils.ProgressingTask(provides='source')) subflow = lf.Flow('flow-2') subflow.add(utils.TaskOneReturn(provides='source')) subflow.add(utils.AddOne()) flow.add(subflow) engine = self._make_engine(flow) engine.run() results = engine.storage.fetch_all() self.assertEqual(2, results['result']) def test_overlap_sibling_expected_result(self): flow = lf.Flow('flow-1') flow.add(utils.ProgressingTask(provides='source')) flow.add(utils.TaskOneReturn(provides='source')) flow.add(utils.AddOne()) engine = self._make_engine(flow) engine.run() results = engine.storage.fetch_all() self.assertEqual(2, results['result']) def test_sequential_flow_interrupted_externally(self): flow = lf.Flow('flow-1').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2'), utils.ProgressingTask(name='task3'), ) engine = self._make_engine(flow) def _run_engine_and_raise(): engine_states = {} engine_it = engine.run_iter() while True: try: engine_state = six.next(engine_it) if engine_state not in engine_states: engine_states[engine_state] = 1 else: engine_states[engine_state] += 1 if engine_states.get(states.SCHEDULING) == 2: engine_state = engine_it.throw(IOError("I Broke")) if engine_state not in engine_states: engine_states[engine_state] = 1 else: engine_states[engine_state] += 1 except StopIteration: break self.assertRaises(IOError, _run_engine_and_raise) self.assertEqual(states.FAILURE, engine.storage.get_flow_state()) def test_sequential_flow_one_task(self): flow = lf.Flow('flow-1').add( 
utils.ProgressingTask(name='task1') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_sequential_flow_two_tasks(self): flow = lf.Flow('flow-2').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual(2, len(flow)) def test_sequential_flow_two_tasks_iter(self): flow = lf.Flow('flow-2').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: gathered_states = list(engine.run_iter()) self.assertTrue(len(gathered_states) > 0) expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual(2, len(flow)) def test_sequential_flow_iter_suspend_resume(self): flow = lf.Flow('flow-2').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2') ) lb, fd = p_utils.temporary_flow_detail(self.backend) engine = self._make_engine(flow, flow_detail=fd) with utils.CaptureListener(engine, capture_flow=False) as capturer: it = engine.run_iter() gathered_states = [] suspend_it = None while True: try: s = it.send(suspend_it) gathered_states.append(s) if s == states.WAITING: # Stop it before task2 runs/starts. suspend_it = True except StopIteration: break self.assertTrue(len(gathered_states) > 0) expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual(states.SUSPENDED, engine.storage.get_flow_state()) # Attempt to resume it and see what runs now... 
with utils.CaptureListener(engine, capture_flow=False) as capturer: gathered_states = list(engine.run_iter()) self.assertTrue(len(gathered_states) > 0) expected = ['task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual(states.SUCCESS, engine.storage.get_flow_state()) def test_revert_removes_data(self): flow = lf.Flow('revert-removes').add( utils.TaskOneReturn(provides='one'), utils.TaskMultiReturn(provides=('a', 'b', 'c')), utils.FailingTask(name='fail') ) engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual({}, engine.storage.fetch_all()) def test_revert_provided(self): flow = lf.Flow('revert').add( utils.GiveBackRevert('giver'), utils.FailingTask(name='fail') ) engine = self._make_engine(flow, store={'value': 0}) self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(2, engine.storage.get_revert_result('giver')) def test_nasty_revert(self): flow = lf.Flow('revert').add( utils.NastyTask('nasty'), utils.FailingTask(name='fail') ) engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) fail = engine.storage.get_revert_result('nasty') self.assertIsNotNone(fail.check(RuntimeError)) exec_failures = engine.storage.get_execute_failures() self.assertIn('fail', exec_failures) rev_failures = engine.storage.get_revert_failures() self.assertIn('nasty', rev_failures) def test_sequential_flow_nested_blocks(self): flow = lf.Flow('nested-1').add( utils.ProgressingTask('task1'), lf.Flow('inner-1').add( utils.ProgressingTask('task2') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_revert_exception_is_reraised(self): flow = lf.Flow('revert-1').add( utils.NastyTask(), utils.FailingTask(name='fail') ) engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) def test_revert_not_run_task_is_not_reverted(self): flow = lf.Flow('revert-not-run').add( utils.FailingTask('fail'), utils.NeverRunningTask(), ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) expected = ['fail.t RUNNING', 'fail.t FAILURE(Failure: RuntimeError: Woot!)', 'fail.t REVERTING', 'fail.t REVERTED(None)'] self.assertEqual(expected, capturer.values) def test_correctly_reverts_children(self): flow = lf.Flow('root-1').add( utils.ProgressingTask('task1'), lf.Flow('child-1').add( utils.ProgressingTask('task2'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'fail.t RUNNING', 'fail.t FAILURE(Failure: RuntimeError: Woot!)', 'fail.t REVERTING', 'fail.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)'] self.assertEqual(expected, capturer.values) class EngineParallelFlowTest(utils.EngineTestBase): def test_run_empty_unordered_flow(self): flow = uf.Flow('p-1') engine = self._make_engine(flow) self.assertEqual(_EMPTY_TRANSITIONS, list(engine.run_iter())) def test_parallel_flow_with_priority(self): flow = uf.Flow('p-1') for i in range(0, 
10): t = utils.ProgressingTask(name='task%s' % i) t.priority = i flow.add(t) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = [ 'task9.t RUNNING', 'task8.t RUNNING', 'task7.t RUNNING', 'task6.t RUNNING', 'task5.t RUNNING', 'task4.t RUNNING', 'task3.t RUNNING', 'task2.t RUNNING', 'task1.t RUNNING', 'task0.t RUNNING', ] # NOTE(harlowja): chop off the gathering of SUCCESS states, since we # don't care if thats in order... gotten = capturer.values[0:10] self.assertEqual(expected, gotten) def test_parallel_flow_one_task(self): flow = uf.Flow('p-1').add( utils.ProgressingTask(name='task1', provides='a') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual({'a': 5}, engine.storage.fetch_all()) def test_parallel_flow_two_tasks(self): flow = uf.Flow('p-2').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set(['task2.t SUCCESS(5)', 'task2.t RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)']) self.assertEqual(expected, set(capturer.values)) def test_parallel_revert(self): flow = uf.Flow('p-r-3').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.FailingTask(name='fail'), utils.TaskNoRequiresNoReturns(name='task2') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) self.assertIn('fail.t FAILURE(Failure: RuntimeError: Woot!)', capturer.values) def test_parallel_revert_exception_is_reraised(self): # NOTE(imelnikov): if we put NastyTask and FailingTask # into the same unordered flow, it is not guaranteed # that NastyTask execution would be attempted before # FailingTask fails. 
flow = lf.Flow('p-r-r-l').add( uf.Flow('p-r-r').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.NastyTask() ), utils.FailingTask() ) engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) def test_sequential_flow_two_tasks_with_resumption(self): flow = lf.Flow('lf-2-r').add( utils.ProgressingTask(name='task1', provides='x1'), utils.ProgressingTask(name='task2', provides='x2') ) # Create FlowDetail as if we already run task1 lb, fd = p_utils.temporary_flow_detail(self.backend) td = models.TaskDetail(name='task1', uuid='42') td.state = states.SUCCESS td.results = 17 fd.add(td) with contextlib.closing(self.backend.get_connection()) as conn: fd.update(conn.update_flow_details(fd)) td.update(conn.update_atom_details(td)) engine = self._make_engine(flow, fd) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) self.assertEqual({'x1': 17, 'x2': 5}, engine.storage.fetch_all()) class EngineLinearAndUnorderedExceptionsTest(utils.EngineTestBase): def test_revert_ok_for_unordered_in_linear(self): flow = lf.Flow('p-root').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2'), uf.Flow('p-inner').add( utils.ProgressingTask(name='task3'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) # NOTE(imelnikov): we don't know if task 3 was run, but if it was, # it should have been REVERTED(None) in correct order. possible_values_no_task3 = [ 'task1.t RUNNING', 'task2.t RUNNING', 'fail.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTED(None)', 'task1.t REVERTED(None)' ] self.assertIsSuperAndSubsequence(capturer.values, possible_values_no_task3) if 'task3' in capturer.values: possible_values_task3 = [ 'task1.t RUNNING', 'task2.t RUNNING', 'task3.t RUNNING', 'task3.t REVERTED(None)', 'task2.t REVERTED(None)', 'task1.t REVERTED(None)' ] self.assertIsSuperAndSubsequence(capturer.values, possible_values_task3) def test_revert_raises_for_unordered_in_linear(self): flow = lf.Flow('p-root').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2'), uf.Flow('p-inner').add( utils.ProgressingTask(name='task3'), utils.NastyFailingTask(name='nasty') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False, skip_tasks=['nasty']) as capturer: self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) # NOTE(imelnikov): we don't know if task 3 was run, but if it was, # it should have been REVERTED(None) in correct order. 
possible_values = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', 'task3.t REVERTING', 'task3.t REVERTED(None)'] self.assertIsSuperAndSubsequence(possible_values, capturer.values) possible_values_no_task3 = ['task1.t RUNNING', 'task2.t RUNNING'] self.assertIsSuperAndSubsequence(capturer.values, possible_values_no_task3) def test_revert_ok_for_linear_in_unordered(self): flow = uf.Flow('p-root').add( utils.ProgressingTask(name='task1'), lf.Flow('p-inner').add( utils.ProgressingTask(name='task2'), utils.FailingTask('fail') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) self.assertIn('fail.t FAILURE(Failure: RuntimeError: Woot!)', capturer.values) # NOTE(imelnikov): if task1 was run, it should have been reverted. if 'task1' in capturer.values: task1_story = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task1.t REVERTED(None)'] self.assertIsSuperAndSubsequence(capturer.values, task1_story) # NOTE(imelnikov): task2 should have been run and reverted task2_story = ['task2.t RUNNING', 'task2.t SUCCESS(5)', 'task2.t REVERTED(None)'] self.assertIsSuperAndSubsequence(capturer.values, task2_story) def test_revert_raises_for_linear_in_unordered(self): flow = uf.Flow('p-root').add( utils.ProgressingTask(name='task1'), lf.Flow('p-inner').add( utils.ProgressingTask(name='task2'), utils.NastyFailingTask() ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) self.assertNotIn('task2.t REVERTED(None)', capturer.values) class EngineDeciderDepthTest(utils.EngineTestBase): def test_run_graph_flow_decider_various_depths(self): sub_flow_1 = gf.Flow('g_1') g_1_1 = utils.ProgressingTask(name='g_1-1') sub_flow_1.add(g_1_1) g_1 = utils.ProgressingTask(name='g-1') g_2 = utils.ProgressingTask(name='g-2') g_3 = utils.ProgressingTask(name='g-3') g_4 = utils.ProgressingTask(name='g-4') for a_depth, ran_how_many in [('all', 1), ('atom', 4), ('flow', 2), ('neighbors', 3)]: flow = gf.Flow('g') flow.add(g_1, g_2, sub_flow_1, g_3, g_4) flow.link(g_1, g_2, decider=lambda history: False, decider_depth=a_depth) flow.link(g_2, sub_flow_1) flow.link(g_2, g_3) flow.link(g_3, g_4) flow.link(g_1, sub_flow_1, decider=lambda history: True, decider_depth=a_depth) e = self._make_engine(flow) with utils.CaptureListener(e, capture_flow=False) as capturer: e.run() ran_tasks = 0 for outcome in capturer.values: if outcome.endswith("RUNNING"): ran_tasks += 1 self.assertEqual(ran_how_many, ran_tasks) def test_run_graph_flow_decider_jump_over_atom(self): flow = gf.Flow('g') a = utils.AddOneSameProvidesRequires("a", inject={'value': 0}) b = utils.AddOneSameProvidesRequires("b") c = utils.AddOneSameProvidesRequires("c") flow.add(a, b, c, resolve_requires=False) flow.link(a, b, decider=lambda history: False, decider_depth='atom') flow.link(b, c) e = self._make_engine(flow) e.run() self.assertEqual(2, e.storage.get('c')) self.assertEqual(states.IGNORE, e.storage.get_atom_state('b')) def test_run_graph_flow_decider_jump_over_bad_atom(self): flow = gf.Flow('g') a = utils.NoopTask("a") b = utils.FailingTask("b") c = utils.NoopTask("c") flow.add(a, b, c) flow.link(a, b, decider=lambda history: False, decider_depth='atom') flow.link(b, c) e = self._make_engine(flow) e.run() def test_run_graph_flow_decider_revert(self): flow = gf.Flow('g') a = 
utils.NoopTask("a") b = utils.NoopTask("b") c = utils.FailingTask("c") flow.add(a, b, c) flow.link(a, b, decider=lambda history: False, decider_depth='atom') flow.link(b, c) e = self._make_engine(flow) with utils.CaptureListener(e, capture_flow=False) as capturer: # Wrapped failure here for WBE engine, make this better in # the future, perhaps via a custom testtools matcher?? self.assertRaises((RuntimeError, exc.WrappedFailure), e.run) expected = [ 'a.t RUNNING', 'a.t SUCCESS(None)', 'b.t IGNORE', 'c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'c.t REVERTING', 'c.t REVERTED(None)', 'a.t REVERTING', 'a.t REVERTED(None)', ] self.assertEqual(expected, capturer.values) class EngineGraphFlowTest(utils.EngineTestBase): def test_run_empty_graph_flow(self): flow = gf.Flow('g-1') engine = self._make_engine(flow) self.assertEqual(_EMPTY_TRANSITIONS, list(engine.run_iter())) def test_run_empty_nested_graph_flows(self): flow = gf.Flow('g-1').add(lf.Flow('l-1'), gf.Flow('g-2')) engine = self._make_engine(flow) self.assertEqual(_EMPTY_TRANSITIONS, list(engine.run_iter())) def test_graph_flow_one_task(self): flow = gf.Flow('g-1').add( utils.ProgressingTask(name='task1') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_graph_flow_two_independent_tasks(self): flow = gf.Flow('g-2').add( utils.ProgressingTask(name='task1'), utils.ProgressingTask(name='task2') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set(['task2.t SUCCESS(5)', 'task2.t RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)']) self.assertEqual(expected, set(capturer.values)) self.assertEqual(2, len(flow)) def test_graph_flow_two_tasks(self): flow = gf.Flow('g-1-1').add( utils.ProgressingTask(name='task2', requires=['a']), utils.ProgressingTask(name='task1', provides='a') ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_graph_flow_four_tasks_added_separately(self): flow = (gf.Flow('g-4') .add(utils.ProgressingTask(name='task4', provides='d', requires=['c'])) .add(utils.ProgressingTask(name='task2', provides='b', requires=['a'])) .add(utils.ProgressingTask(name='task3', provides='c', requires=['b'])) .add(utils.ProgressingTask(name='task1', provides='a')) ) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', 'task4.t RUNNING', 'task4.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_graph_flow_four_tasks_revert(self): flow = gf.Flow('g-4-failing').add( utils.ProgressingTask(name='task4', provides='d', requires=['c']), utils.ProgressingTask(name='task2', provides='b', requires=['a']), utils.FailingTask(name='task3', provides='c', requires=['b']), utils.ProgressingTask(name='task1', provides='a')) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t 
RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)'] self.assertEqual(expected, capturer.values) self.assertEqual(states.REVERTED, engine.storage.get_flow_state()) def test_graph_flow_four_tasks_revert_failure(self): flow = gf.Flow('g-3-nasty').add( utils.NastyTask(name='task2', provides='b', requires=['a']), utils.FailingTask(name='task3', requires=['b']), utils.ProgressingTask(name='task1', provides='a')) engine = self._make_engine(flow) self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run) self.assertEqual(states.FAILURE, engine.storage.get_flow_state()) def test_graph_flow_with_multireturn_and_multiargs_tasks(self): flow = gf.Flow('g-3-multi').add( utils.TaskMultiArgOneReturn(name='task1', rebind=['a', 'b', 'y'], provides='z'), utils.TaskMultiReturn(name='task2', provides=['a', 'b', 'c']), utils.TaskMultiArgOneReturn(name='task3', rebind=['c', 'b', 'x'], provides='y')) engine = self._make_engine(flow) engine.storage.inject({'x': 30}) engine.run() self.assertEqual({ 'a': 1, 'b': 3, 'c': 5, 'x': 30, 'y': 38, 'z': 42 }, engine.storage.fetch_all()) def test_task_graph_property(self): flow = gf.Flow('test').add( utils.TaskNoRequiresNoReturns(name='task1'), utils.TaskNoRequiresNoReturns(name='task2')) engine = self._make_engine(flow) engine.compile() graph = engine.compilation.execution_graph self.assertIsInstance(graph, gr.DiGraph) def test_task_graph_property_for_one_task(self): flow = utils.TaskNoRequiresNoReturns(name='task1') engine = self._make_engine(flow) engine.compile() graph = engine.compilation.execution_graph self.assertIsInstance(graph, gr.DiGraph) class EngineMissingDepsTest(utils.EngineTestBase): def test_missing_deps_deep(self): flow = gf.Flow('missing-many').add( utils.TaskOneReturn(name='task1', requires=['a', 'b', 'c']), utils.TaskMultiArgOneReturn(name='task2', rebind=['e', 'f', 'g'])) engine = self._make_engine(flow) engine.compile() engine.prepare() self.assertRaises(exc.MissingDependencies, engine.validate) c_e = None try: engine.validate() except exc.MissingDependencies as e: c_e = e self.assertIsNotNone(c_e) self.assertIsNotNone(c_e.cause) class EngineResetTests(utils.EngineTestBase): def test_completed_reset_run_again(self): task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') flow = lf.Flow('root') flow.add(task1, task2, task3) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', ] self.assertEqual(expected, capturer.values) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() self.assertEqual([], capturer.values) engine.reset() with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() self.assertEqual(expected, capturer.values) def test_failed_reset_run_again(self): task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.FailingTask(name='task3') flow = lf.Flow('root') flow.add(task1, task2, task3) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: # Also allow a WrappedFailure exception so that when this is used # with the WBE engine (as it can't re-raise the original # exception) that we will 
work correctly.... self.assertRaises((RuntimeError, exc.WrappedFailure), engine.run) expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', ] self.assertEqual(expected, capturer.values) engine.reset() with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertRaises((RuntimeError, exc.WrappedFailure), engine.run) self.assertEqual(expected, capturer.values) def test_suspended_reset_run_again(self): task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') flow = lf.Flow('root') flow.add(task1, task2, task3) engine = self._make_engine(flow) suspend_at = object() expected_states = [ states.RESUMING, states.SCHEDULING, states.WAITING, states.ANALYZING, states.SCHEDULING, states.WAITING, # Stop/suspend here... suspend_at, states.SUSPENDED, ] with utils.CaptureListener(engine, capture_flow=False) as capturer: for i, st in enumerate(engine.run_iter()): expected = expected_states[i] if expected is suspend_at: engine.suspend() else: self.assertEqual(expected, st) expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', ] self.assertEqual(expected, capturer.values) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = [ 'task3.t RUNNING', 'task3.t SUCCESS(5)', ] self.assertEqual(expected, capturer.values) engine.reset() with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', ] self.assertEqual(expected, capturer.values) class EngineGraphConditionalFlowTest(utils.EngineTestBase): def test_graph_flow_conditional_jumps_across_2(self): histories = [] def should_go(history): histories.append(history) return False task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') task4 = utils.ProgressingTask(name='task4') subflow = lf.Flow("more-work") subsub_flow = lf.Flow("more-more-work") subsub_flow.add(task3, task4) subflow.add(subsub_flow) flow = gf.Flow("main-work") flow.add(task1, task2) flow.link(task1, task2) flow.add(subflow) flow.link(task2, subflow, decider=should_go) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t IGNORE', 'task4.t IGNORE', ] self.assertEqual(expected, capturer.values) self.assertEqual(1, len(histories)) self.assertIn('task2', histories[0]) def test_graph_flow_conditional_jumps_across(self): histories = [] def should_go(history): histories.append(history) return False task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') task4 = utils.ProgressingTask(name='task4') subflow = lf.Flow("more-work") subflow.add(task3, task4) flow = gf.Flow("main-work") flow.add(task1, task2) flow.link(task1, task2) flow.add(subflow) flow.link(task2, subflow, decider=should_go) flow.link(task1, subflow, decider=should_go) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: 
engine.run() expected = [ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t IGNORE', 'task4.t IGNORE', ] self.assertEqual(expected, capturer.values) self.assertEqual(2, len(histories)) for i in range(0, 2): self.assertIn('task1', histories[i]) self.assertIn('task2', histories[i]) def test_graph_flow_conditional(self): flow = gf.Flow('root') task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task2_2 = utils.ProgressingTask(name='task2_2') task3 = utils.ProgressingTask(name='task3') flow.add(task1, task2, task2_2, task3) flow.link(task1, task2, decider=lambda history: False) flow.link(task2, task2_2) flow.link(task1, task3, decider=lambda history: True) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t IGNORE', 'task2_2.t IGNORE', 'task3.t RUNNING', 'task3.t SUCCESS(5)', ]) self.assertEqual(expected, set(capturer.values)) def test_graph_flow_conditional_ignore_reset(self): allow_execute = threading.Event() flow = gf.Flow('root') task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') flow.add(task1, task2, task3) flow.link(task1, task2) flow.link(task2, task3, decider=lambda history: allow_execute.is_set()) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t IGNORE', ]) self.assertEqual(expected, set(capturer.values)) self.assertEqual(states.IGNORE, engine.storage.get_atom_state('task3')) self.assertEqual(states.IGNORE, engine.storage.get_atom_intention('task3')) engine.reset() allow_execute.set() with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', ]) self.assertEqual(expected, set(capturer.values)) def test_graph_flow_diamond_ignored(self): flow = gf.Flow('root') task1 = utils.ProgressingTask(name='task1') task2 = utils.ProgressingTask(name='task2') task3 = utils.ProgressingTask(name='task3') task4 = utils.ProgressingTask(name='task4') flow.add(task1, task2, task3, task4) flow.link(task1, task2) flow.link(task2, task4, decider=lambda history: False) flow.link(task1, task3) flow.link(task3, task4, decider=lambda history: True) engine = self._make_engine(flow) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t SUCCESS(5)', 'task4.t IGNORE', ]) self.assertEqual(expected, set(capturer.values)) self.assertEqual(states.IGNORE, engine.storage.get_atom_state('task4')) self.assertEqual(states.IGNORE, engine.storage.get_atom_intention('task4')) def test_graph_flow_conditional_history(self): def even_odd_decider(history, allowed): total = sum(six.itervalues(history)) if total == allowed: return True return False flow = gf.Flow('root') task1 = utils.TaskMultiArgOneReturn(name='task1') task2 = utils.ProgressingTask(name='task2') task2_2 = utils.ProgressingTask(name='task2_2') task3 = utils.ProgressingTask(name='task3') task3_3 = utils.ProgressingTask(name='task3_3') flow.add(task1, task2, task2_2, task3, task3_3) flow.link(task1, task2, 
decider=functools.partial(even_odd_decider, allowed=2)) flow.link(task2, task2_2) flow.link(task1, task3, decider=functools.partial(even_odd_decider, allowed=1)) flow.link(task3, task3_3) engine = self._make_engine(flow) engine.storage.inject({'x': 0, 'y': 1, 'z': 1}) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(2)', 'task3.t IGNORE', 'task3_3.t IGNORE', 'task2.t RUNNING', 'task2.t SUCCESS(5)', 'task2_2.t RUNNING', 'task2_2.t SUCCESS(5)', ]) self.assertEqual(expected, set(capturer.values)) engine = self._make_engine(flow) engine.storage.inject({'x': 0, 'y': 0, 'z': 1}) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = set([ 'task1.t RUNNING', 'task1.t SUCCESS(1)', 'task2.t IGNORE', 'task2_2.t IGNORE', 'task3.t RUNNING', 'task3.t SUCCESS(5)', 'task3_3.t RUNNING', 'task3_3.t SUCCESS(5)', ]) self.assertEqual(expected, set(capturer.values)) class EngineCheckingTaskTest(utils.EngineTestBase): # FIXME: this test uses a inner class that workers/process engines can't # get to, so we need to do something better to make this test useful for # those engines... def test_flow_failures_are_passed_to_revert(self): class CheckingTask(task.Task): def execute(m_self): return 'RESULT' def revert(m_self, result, flow_failures): self.assertEqual('RESULT', result) self.assertEqual(['fail1'], list(flow_failures.keys())) fail = flow_failures['fail1'] self.assertIsInstance(fail, failure.Failure) self.assertEqual('Failure: RuntimeError: Woot!', str(fail)) flow = lf.Flow('test').add( CheckingTask(), utils.FailingTask('fail1') ) engine = self._make_engine(flow) self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) class SerialEngineTest(EngineTaskTest, EngineMultipleResultsTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineOptionalRequirementsTest, EngineGraphFlowTest, EngineMissingDepsTest, EngineResetTests, EngineGraphConditionalFlowTest, EngineCheckingTaskTest, EngineDeciderDepthTest, EngineTaskNotificationsTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, store=None, **kwargs): return taskflow.engines.load(flow, flow_detail=flow_detail, engine='serial', backend=self.backend, store=store, **kwargs) def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.SerialActionEngine) def test_singlethreaded_is_the_default(self): engine = taskflow.engines.load(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.SerialActionEngine) class ParallelEngineWithThreadsTest(EngineTaskTest, EngineMultipleResultsTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineOptionalRequirementsTest, EngineGraphFlowTest, EngineResetTests, EngineMissingDepsTest, EngineGraphConditionalFlowTest, EngineCheckingTaskTest, EngineDeciderDepthTest, EngineTaskNotificationsTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, flow_detail=None, executor=None, store=None, **kwargs): if executor is None: executor = 'threads' return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, executor=executor, engine='parallel', store=store, max_workers=self._EXECUTOR_WORKERS, **kwargs) def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.ParallelActionEngine) def test_using_common_executor(self): flow = 
utils.TaskNoRequiresNoReturns(name='task1') executor = futurist.ThreadPoolExecutor(self._EXECUTOR_WORKERS) try: e1 = self._make_engine(flow, executor=executor) e2 = self._make_engine(flow, executor=executor) self.assertIs(e1.options['executor'], e2.options['executor']) finally: executor.shutdown(wait=True) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(EngineTaskTest, EngineMultipleResultsTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineOptionalRequirementsTest, EngineGraphFlowTest, EngineResetTests, EngineMissingDepsTest, EngineGraphConditionalFlowTest, EngineCheckingTaskTest, EngineDeciderDepthTest, EngineTaskNotificationsTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None, store=None, **kwargs): if executor is None: executor = 'greenthreads' return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor, store=store, **kwargs) class ParallelEngineWithProcessTest(EngineTaskTest, EngineMultipleResultsTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineOptionalRequirementsTest, EngineGraphFlowTest, EngineResetTests, EngineMissingDepsTest, EngineGraphConditionalFlowTest, EngineDeciderDepthTest, EngineTaskNotificationsTest, test.TestCase): _EXECUTOR_WORKERS = 2 def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, eng.ParallelActionEngine) def _make_engine(self, flow, flow_detail=None, executor=None, store=None, **kwargs): if executor is None: executor = 'processes' return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor, store=store, max_workers=self._EXECUTOR_WORKERS, **kwargs) def test_update_progress_notifications_proxied(self): captured = collections.defaultdict(list) def notify_me(event_type, details): captured[event_type].append(details) a = utils.MultiProgressingTask('a') a.notifier.register(a.notifier.ANY, notify_me) progress_chunks = list(x / 10.0 for x in range(1, 10)) e = self._make_engine(a, store={'progress_chunks': progress_chunks}) e.run() self.assertEqual(11, len(captured[task.EVENT_UPDATE_PROGRESS])) def test_custom_notifications_proxied(self): captured = collections.defaultdict(list) def notify_me(event_type, details): captured[event_type].append(details) a = utils.EmittingTask('a') a.notifier.register(a.notifier.ANY, notify_me) e = self._make_engine(a) e.run() self.assertEqual(1, len(captured['hi'])) self.assertEqual(2, len(captured[task.EVENT_UPDATE_PROGRESS])) def test_just_custom_notifications_proxied(self): captured = collections.defaultdict(list) def notify_me(event_type, details): captured[event_type].append(details) a = utils.EmittingTask('a') a.notifier.register('hi', notify_me) e = self._make_engine(a) e.run() self.assertEqual(1, len(captured['hi'])) self.assertEqual(0, len(captured[task.EVENT_UPDATE_PROGRESS])) class WorkerBasedEngineTest(EngineTaskTest, EngineMultipleResultsTest, EngineLinearFlowTest, EngineParallelFlowTest, EngineLinearAndUnorderedExceptionsTest, EngineOptionalRequirementsTest, EngineGraphFlowTest, EngineResetTests, EngineMissingDepsTest, EngineGraphConditionalFlowTest, EngineDeciderDepthTest, EngineTaskNotificationsTest, test.TestCase): def setUp(self): super(WorkerBasedEngineTest, self).setUp() shared_conf = { 'exchange': 'test', 'transport': 'memory', 'transport_options': { # 
NOTE(imelnikov): I run tests several times for different # intervals. Reducing polling interval below 0.01 did not give # considerable win in tests run time; reducing polling interval # too much (AFAIR below 0.0005) affected stability -- I was # seeing timeouts. So, 0.01 looks like the most balanced for # local transports (for now). 'polling_interval': 0.01, }, } worker_conf = shared_conf.copy() worker_conf.update({ 'topic': 'my-topic', 'tasks': [ # This makes it possible for the worker to run/find any atoms # that are defined in the test.utils module (which are all # the task/atom types that this test uses)... utils.__name__, ], }) self.engine_conf = shared_conf.copy() self.engine_conf.update({ 'engine': 'worker-based', 'topics': tuple([worker_conf['topic']]), }) self.worker = wkr.Worker(**worker_conf) self.worker_thread = tu.daemon_thread(self.worker.run) self.worker_thread.start() # Ensure worker and thread is stopped when test is done; these are # called in reverse order, so make sure we signal the stop before # performing the join (because the reverse won't work). self.addCleanup(self.worker_thread.join) self.addCleanup(self.worker.stop) # Make sure the worker is started before we can continue... self.worker.wait() def _make_engine(self, flow, flow_detail=None, store=None, **kwargs): kwargs.update(self.engine_conf) return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, store=store, **kwargs) def test_correct_load(self): engine = self._make_engine(utils.TaskNoRequiresNoReturns) self.assertIsInstance(engine, w_eng.WorkerBasedActionEngine) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_exceptions.py0000664000175000017500000001054400000000000023153 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
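# Illustrative sketch (added commentary; not part of the original test
# module): the tests below exercise TaskFlowException cause chaining via
# exceptions.raise_with_cause(). The helper name here is ours, purely to
# show the pattern being tested.
def _example_raise_with_cause():
    from taskflow import exceptions as exc
    try:
        raise IOError("dead")
    except Exception:
        # Re-raises a TaskFlowException whose `cause` attribute (and, on
        # Python 3, its `__cause__`) points at the IOError that was active.
        exc.raise_with_cause(exc.TaskFlowException, "broken")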
import string import six import testtools from taskflow import exceptions as exc from taskflow import test class TestExceptions(test.TestCase): def test_cause(self): capture = None try: raise exc.TaskFlowException("broken", cause=IOError("dead")) except Exception as e: capture = e self.assertIsNotNone(capture) self.assertIsInstance(capture, exc.TaskFlowException) self.assertIsNotNone(capture.cause) self.assertIsInstance(capture.cause, IOError) def test_cause_pformat(self): capture = None try: raise exc.TaskFlowException("broken", cause=IOError("dead")) except Exception as e: capture = e self.assertIsNotNone(capture) self.assertGreater(0, len(capture.pformat())) def test_raise_with(self): capture = None try: raise IOError('broken') except Exception: try: exc.raise_with_cause(exc.TaskFlowException, 'broken') except Exception as e: capture = e self.assertIsNotNone(capture) self.assertIsInstance(capture, exc.TaskFlowException) self.assertIsNotNone(capture.cause) self.assertIsInstance(capture.cause, IOError) def test_no_looping(self): causes = [] for a in string.ascii_lowercase: try: cause = causes[-1] except IndexError: cause = None causes.append(exc.TaskFlowException('%s broken' % a, cause=cause)) e = causes[0] last_e = causes[-1] e._cause = last_e self.assertIsNotNone(e.pformat()) def test_pformat_str(self): ex = None try: try: try: raise IOError("Didn't work") except IOError: exc.raise_with_cause(exc.TaskFlowException, "It didn't go so well") except exc.TaskFlowException: exc.raise_with_cause(exc.TaskFlowException, "I Failed") except exc.TaskFlowException as e: ex = e self.assertIsNotNone(ex) self.assertIsInstance(ex, exc.TaskFlowException) self.assertIsInstance(ex.cause, exc.TaskFlowException) self.assertIsInstance(ex.cause.cause, IOError) p_msg = ex.pformat() p_str_msg = str(ex) for msg in ["I Failed", "It didn't go so well", "Didn't work"]: self.assertIn(msg, p_msg) self.assertIn(msg, p_str_msg) def test_pformat_root_class(self): ex = exc.TaskFlowException("Broken") self.assertIn("TaskFlowException", ex.pformat(show_root_class=True)) self.assertNotIn("TaskFlowException", ex.pformat(show_root_class=False)) self.assertIn("Broken", ex.pformat(show_root_class=True)) def test_invalid_pformat_indent(self): ex = exc.TaskFlowException("Broken") self.assertRaises(ValueError, ex.pformat, indent=-100) @testtools.skipIf(not six.PY3, 'py3.x is not available') def test_raise_with_cause(self): capture = None try: raise IOError('broken') except Exception: try: exc.raise_with_cause(exc.TaskFlowException, 'broken') except Exception as e: capture = e self.assertIsNotNone(capture) self.assertIsInstance(capture, exc.TaskFlowException) self.assertIsNotNone(capture.cause) self.assertIsInstance(capture.cause, IOError) self.assertIsNotNone(capture.__cause__) self.assertIsInstance(capture.__cause__, IOError) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_failure.py0000664000175000017500000004404100000000000022420 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from oslo_utils import encodeutils import six from six.moves import cPickle as pickle import testtools from taskflow import exceptions from taskflow import test from taskflow.tests import utils as test_utils from taskflow.types import failure def _captured_failure(msg): try: raise RuntimeError(msg) except Exception: return failure.Failure() def _make_exc_info(msg): try: raise RuntimeError(msg) except Exception: return sys.exc_info() class GeneralFailureObjTestsMixin(object): def test_captures_message(self): self.assertEqual('Woot!', self.fail_obj.exception_str) def test_str(self): self.assertEqual('Failure: RuntimeError: Woot!', str(self.fail_obj)) def test_exception_types(self): self.assertEqual(test_utils.RUNTIME_ERROR_CLASSES[:-2], list(self.fail_obj)) def test_pformat_no_traceback(self): text = self.fail_obj.pformat() self.assertNotIn("Traceback", text) def test_check_str(self): val = 'Exception' self.assertEqual(val, self.fail_obj.check(val)) def test_check_str_not_there(self): val = 'ValueError' self.assertIsNone(self.fail_obj.check(val)) def test_check_type(self): self.assertIs(self.fail_obj.check(RuntimeError), RuntimeError) def test_check_type_not_there(self): self.assertIsNone(self.fail_obj.check(ValueError)) class CaptureFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(CaptureFailureTestCase, self).setUp() self.fail_obj = _captured_failure('Woot!') def test_captures_value(self): self.assertIsInstance(self.fail_obj.exception, RuntimeError) def test_captures_exc_info(self): exc_info = self.fail_obj.exc_info self.assertEqual(3, len(exc_info)) self.assertEqual(RuntimeError, exc_info[0]) self.assertIs(exc_info[1], self.fail_obj.exception) def test_reraises(self): self.assertRaisesRegex(RuntimeError, '^Woot!$', self.fail_obj.reraise) class ReCreatedFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(ReCreatedFailureTestCase, self).setUp() fail_obj = _captured_failure('Woot!') self.fail_obj = failure.Failure(exception_str=fail_obj.exception_str, traceback_str=fail_obj.traceback_str, exc_type_names=list(fail_obj)) def test_value_lost(self): self.assertIsNone(self.fail_obj.exception) def test_no_exc_info(self): self.assertIsNone(self.fail_obj.exc_info) def test_pformat_traceback(self): text = self.fail_obj.pformat(traceback=True) self.assertIn("Traceback (most recent call last):", text) def test_reraises(self): exc = self.assertRaises(exceptions.WrappedFailure, self.fail_obj.reraise) self.assertIs(exc.check(RuntimeError), RuntimeError) def test_no_type_names(self): fail_obj = _captured_failure('Woot!') fail_obj = failure.Failure(exception_str=fail_obj.exception_str, traceback_str=fail_obj.traceback_str, exc_type_names=[]) self.assertEqual([], list(fail_obj)) self.assertEqual("Failure: Woot!", fail_obj.pformat()) class FromExceptionTestCase(test.TestCase, GeneralFailureObjTestsMixin): def setUp(self): super(FromExceptionTestCase, self).setUp() self.fail_obj = failure.Failure.from_exception(RuntimeError('Woot!')) def test_pformat_no_traceback(self): text = self.fail_obj.pformat(traceback=True) 
self.assertIn("Traceback not available", text) class FailureObjectTestCase(test.TestCase): def test_invalids(self): f = { 'exception_str': 'blah', 'traceback_str': 'blah', 'exc_type_names': [], } self.assertRaises(exceptions.InvalidFormat, failure.Failure.validate, f) f = { 'exception_str': 'blah', 'exc_type_names': ['Exception'], } self.assertRaises(exceptions.InvalidFormat, failure.Failure.validate, f) f = { 'exception_str': 'blah', 'traceback_str': 'blah', 'exc_type_names': ['Exception'], 'version': -1, } self.assertRaises(exceptions.InvalidFormat, failure.Failure.validate, f) def test_valid_from_dict_to_dict(self): f = _captured_failure('Woot!') d_f = f.to_dict() failure.Failure.validate(d_f) f2 = failure.Failure.from_dict(d_f) self.assertTrue(f.matches(f2)) def test_bad_root_exception(self): f = _captured_failure('Woot!') d_f = f.to_dict() d_f['exc_type_names'] = ['Junk'] self.assertRaises(exceptions.InvalidFormat, failure.Failure.validate, d_f) def test_valid_from_dict_to_dict_2(self): f = _captured_failure('Woot!') d_f = f.to_dict() d_f['exc_type_names'] = ['RuntimeError', 'Exception', 'BaseException'] failure.Failure.validate(d_f) def test_cause_exception_args(self): f = _captured_failure('Woot!') d_f = f.to_dict() self.assertEqual(1, len(d_f['exc_args'])) self.assertEqual(("Woot!",), d_f['exc_args']) f2 = failure.Failure.from_dict(d_f) self.assertEqual(f.exception_args, f2.exception_args) def test_dont_catch_base_exception(self): try: raise SystemExit() except BaseException: self.assertRaises(TypeError, failure.Failure) def test_unknown_argument(self): exc = self.assertRaises(TypeError, failure.Failure, exception_str='Woot!', traceback_str=None, exc_type_names=['Exception'], hi='hi there') expected = "Failure.__init__ got unexpected keyword argument(s): hi" self.assertEqual(expected, str(exc)) def test_empty_does_not_reraise(self): self.assertIsNone(failure.Failure.reraise_if_any([])) def test_reraises_one(self): fls = [_captured_failure('Woot!')] self.assertRaisesRegex(RuntimeError, '^Woot!$', failure.Failure.reraise_if_any, fls) def test_reraises_several(self): fls = [ _captured_failure('Woot!'), _captured_failure('Oh, not again!') ] exc = self.assertRaises(exceptions.WrappedFailure, failure.Failure.reraise_if_any, fls) self.assertEqual(fls, list(exc)) def test_failure_copy(self): fail_obj = _captured_failure('Woot!') copied = fail_obj.copy() self.assertIsNot(fail_obj, copied) self.assertEqual(fail_obj, copied) self.assertTrue(fail_obj.matches(copied)) def test_failure_copy_recaptured(self): captured = _captured_failure('Woot!') fail_obj = failure.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) copied = fail_obj.copy() self.assertIsNot(fail_obj, copied) self.assertEqual(fail_obj, copied) self.assertFalse(fail_obj != copied) self.assertTrue(fail_obj.matches(copied)) def test_recaptured_not_eq(self): captured = _captured_failure('Woot!') fail_obj = failure.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured), exc_args=list(captured.exception_args)) self.assertFalse(fail_obj == captured) self.assertTrue(fail_obj != captured) self.assertTrue(fail_obj.matches(captured)) def test_two_captured_eq(self): captured = _captured_failure('Woot!') captured2 = _captured_failure('Woot!') self.assertEqual(captured, captured2) def test_two_recaptured_neq(self): captured = _captured_failure('Woot!') fail_obj = failure.Failure(exception_str=captured.exception_str, 
traceback_str=captured.traceback_str, exc_type_names=list(captured)) new_exc_str = captured.exception_str.replace('Woot', 'w00t') fail_obj2 = failure.Failure(exception_str=new_exc_str, traceback_str=captured.traceback_str, exc_type_names=list(captured)) self.assertNotEqual(fail_obj, fail_obj2) self.assertFalse(fail_obj2.matches(fail_obj)) def test_compares_to_none(self): captured = _captured_failure('Woot!') self.assertIsNotNone(captured) self.assertFalse(captured.matches(None)) def test_pformat_traceback(self): captured = _captured_failure('Woot!') text = captured.pformat(traceback=True) self.assertIn("Traceback (most recent call last):", text) def test_pformat_traceback_captured_no_exc_info(self): captured = _captured_failure('Woot!') captured = failure.Failure.from_dict(captured.to_dict()) text = captured.pformat(traceback=True) self.assertIn("Traceback (most recent call last):", text) def test_no_capture_exc_args(self): captured = _captured_failure(Exception("I am not valid JSON")) fail_obj = failure.Failure(exception_str=captured.exception_str, traceback_str=captured.traceback_str, exc_type_names=list(captured), exc_args=list(captured.exception_args)) fail_json = fail_obj.to_dict(include_args=False) self.assertNotEqual(fail_obj.exception_args, fail_json['exc_args']) self.assertEqual(fail_json['exc_args'], tuple()) class WrappedFailureTestCase(test.TestCase): def test_simple_iter(self): fail_obj = _captured_failure('Woot!') wf = exceptions.WrappedFailure([fail_obj]) self.assertEqual(1, len(wf)) self.assertEqual([fail_obj], list(wf)) def test_simple_check(self): fail_obj = _captured_failure('Woot!') wf = exceptions.WrappedFailure([fail_obj]) self.assertEqual(RuntimeError, wf.check(RuntimeError)) self.assertIsNone(wf.check(ValueError)) def test_two_failures(self): fls = [ _captured_failure('Woot!'), _captured_failure('Oh, not again!') ] wf = exceptions.WrappedFailure(fls) self.assertEqual(2, len(wf)) self.assertEqual(fls, list(wf)) def test_flattening(self): f1 = _captured_failure('Wrap me') f2 = _captured_failure('Wrap me, too') f3 = _captured_failure('Woot!') try: raise exceptions.WrappedFailure([f1, f2]) except Exception: fail_obj = failure.Failure() wf = exceptions.WrappedFailure([fail_obj, f3]) self.assertEqual([f1, f2, f3], list(wf)) class NonAsciiExceptionsTestCase(test.TestCase): def test_exception_with_non_ascii_str(self): bad_string = chr(200) excp = ValueError(bad_string) fail = failure.Failure.from_exception(excp) self.assertEqual(encodeutils.exception_to_unicode(excp), fail.exception_str) # This is slightly different on py2 vs py3... due to how # __str__ or __unicode__ is called and what is expected from # both... 
if six.PY2: msg = encodeutils.exception_to_unicode(excp) expected = 'Failure: ValueError: %s' % msg.encode('utf-8') else: expected = u'Failure: ValueError: \xc8' self.assertEqual(expected, str(fail)) def test_exception_non_ascii_unicode(self): hi_ru = u'привет' fail = failure.Failure.from_exception(ValueError(hi_ru)) self.assertEqual(hi_ru, fail.exception_str) self.assertIsInstance(fail.exception_str, six.text_type) self.assertEqual(u'Failure: ValueError: %s' % hi_ru, six.text_type(fail)) def test_wrapped_failure_non_ascii_unicode(self): hi_cn = u'嗨' fail = ValueError(hi_cn) self.assertEqual(hi_cn, encodeutils.exception_to_unicode(fail)) fail = failure.Failure.from_exception(fail) wrapped_fail = exceptions.WrappedFailure([fail]) expected_result = (u"WrappedFailure: " "[Failure: ValueError: %s]" % (hi_cn)) self.assertEqual(expected_result, six.text_type(wrapped_fail)) def test_failure_equality_with_non_ascii_str(self): bad_string = chr(200) fail = failure.Failure.from_exception(ValueError(bad_string)) copied = fail.copy() self.assertEqual(fail, copied) def test_failure_equality_non_ascii_unicode(self): hi_ru = u'привет' fail = failure.Failure.from_exception(ValueError(hi_ru)) copied = fail.copy() self.assertEqual(fail, copied) @testtools.skipIf(not six.PY3, 'this test only works on python 3.x') class FailureCausesTest(test.TestCase): @classmethod def _raise_many(cls, messages): if not messages: return msg = messages.pop(0) e = RuntimeError(msg) try: cls._raise_many(messages) raise e except RuntimeError as e1: six.raise_from(e, e1) def test_causes(self): f = None try: self._raise_many(["Still still not working", "Still not working", "Not working"]) except RuntimeError: f = failure.Failure() self.assertIsNotNone(f) self.assertEqual(2, len(f.causes)) self.assertEqual("Still not working", f.causes[0].exception_str) self.assertEqual("Not working", f.causes[1].exception_str) f = f.causes[0] self.assertEqual(1, len(f.causes)) self.assertEqual("Not working", f.causes[0].exception_str) f = f.causes[0] self.assertEqual(0, len(f.causes)) def test_causes_to_from_dict(self): f = None try: self._raise_many(["Still still not working", "Still not working", "Not working"]) except RuntimeError: f = failure.Failure() self.assertIsNotNone(f) d_f = f.to_dict() failure.Failure.validate(d_f) f = failure.Failure.from_dict(d_f) self.assertEqual(2, len(f.causes)) self.assertEqual("Still not working", f.causes[0].exception_str) self.assertEqual("Not working", f.causes[1].exception_str) f = f.causes[0] self.assertEqual(1, len(f.causes)) self.assertEqual("Not working", f.causes[0].exception_str) f = f.causes[0] self.assertEqual(0, len(f.causes)) def test_causes_pickle(self): f = None try: self._raise_many(["Still still not working", "Still not working", "Not working"]) except RuntimeError: f = failure.Failure() self.assertIsNotNone(f) p_f = pickle.dumps(f) f = pickle.loads(p_f) self.assertEqual(2, len(f.causes)) self.assertEqual("Still not working", f.causes[0].exception_str) self.assertEqual("Not working", f.causes[1].exception_str) f = f.causes[0] self.assertEqual(1, len(f.causes)) self.assertEqual("Not working", f.causes[0].exception_str) f = f.causes[0] self.assertEqual(0, len(f.causes)) def test_causes_suppress_context(self): f = None try: try: self._raise_many(["Still still not working", "Still not working", "Not working"]) except RuntimeError as e: six.raise_from(e, None) except RuntimeError: f = failure.Failure() self.assertIsNotNone(f) self.assertEqual([], list(f.causes)) class ExcInfoUtilsTest(test.TestCase): 
    def test_copy_none(self):
        result = failure._copy_exc_info(None)
        self.assertIsNone(result)

    def test_copy_exc_info(self):
        exc_info = _make_exc_info("Woot!")
        result = failure._copy_exc_info(exc_info)
        self.assertIsNot(result, exc_info)
        self.assertIs(result[0], RuntimeError)
        self.assertIsNot(result[1], exc_info[1])
        self.assertIs(result[2], exc_info[2])

    def test_none_equals(self):
        self.assertTrue(failure._are_equal_exc_info_tuples(None, None))

    def test_none_ne_tuple(self):
        exc_info = _make_exc_info("Woot!")
        self.assertFalse(failure._are_equal_exc_info_tuples(None, exc_info))

    def test_tuple_nen_none(self):
        exc_info = _make_exc_info("Woot!")
        self.assertFalse(failure._are_equal_exc_info_tuples(exc_info, None))

    def test_tuple_equals_itself(self):
        exc_info = _make_exc_info("Woot!")
        self.assertTrue(failure._are_equal_exc_info_tuples(exc_info, exc_info))

    def test_typle_equals_copy(self):
        exc_info = _make_exc_info("Woot!")
        copied = failure._copy_exc_info(exc_info)
        self.assertTrue(failure._are_equal_exc_info_tuples(exc_info, copied))

taskflow-4.6.4/taskflow/tests/unit/test_flow_dependencies.py

# -*- coding: utf-8 -*-

# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
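# Illustrative sketch (added commentary; not part of the original test
# module): the tests below assert how a flow infers its `requires` and
# `provides` sets from the atoms added to it, roughly as follows. The
# class and flow names here are ours.
def _example_flow_dependencies():
    from taskflow.patterns import linear_flow as lf
    from taskflow import task

    class MultiplyTask(task.Task):
        # The arguments of execute() become the task's `requires` set;
        # the `provides` constructor argument names its result.
        def execute(self, x, y):
            return x * y

    flow = lf.Flow('example').add(MultiplyTask(provides='product'))
    # Expected: flow.requires == {'x', 'y'}, flow.provides == {'product'}
    return flow.requires, flow.provides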
from taskflow import exceptions from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import retry from taskflow import test from taskflow.tests import utils class FlowDependenciesTest(test.TestCase): def test_task_without_dependencies(self): flow = utils.TaskNoRequiresNoReturns() self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_task_requires_default_values(self): flow = utils.TaskMultiArg() self.assertEqual(set(['x', 'y', 'z']), flow.requires) self.assertEqual(set(), flow.provides, ) def test_task_requires_rebinded_mapped(self): flow = utils.TaskMultiArg(rebind={'x': 'a', 'y': 'b', 'z': 'c'}) self.assertEqual(set(['a', 'b', 'c']), flow.requires) self.assertEqual(set(), flow.provides) def test_task_requires_additional_values(self): flow = utils.TaskMultiArg(requires=['a', 'b']) self.assertEqual(set(['a', 'b', 'x', 'y', 'z']), flow.requires) self.assertEqual(set(), flow.provides) def test_task_provides_values(self): flow = utils.TaskMultiReturn(provides=['a', 'b', 'c']) self.assertEqual(set(), flow.requires) self.assertEqual(set(['a', 'b', 'c']), flow.provides) def test_task_provides_and_requires_values(self): flow = utils.TaskMultiArgMultiReturn(provides=['a', 'b', 'c']) self.assertEqual(set(['x', 'y', 'z']), flow.requires) self.assertEqual(set(['a', 'b', 'c']), flow.provides) def test_linear_flow_without_dependencies(self): flow = lf.Flow('lf').add( utils.TaskNoRequiresNoReturns('task1'), utils.TaskNoRequiresNoReturns('task2')) self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_linear_flow_requires_values(self): flow = lf.Flow('lf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z']), flow.requires) self.assertEqual(set(), flow.provides) def test_linear_flow_requires_rebind_values(self): flow = lf.Flow('lf').add( utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z', 'q']), flow.requires) self.assertEqual(set(), flow.provides) def test_linear_flow_provides_values(self): flow = lf.Flow('lf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'a', 'b', 'c']), flow.provides) def test_linear_flow_provides_required_values(self): flow = lf.Flow('lf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x']), flow.provides) def test_linear_flow_multi_provides_and_requires_values(self): flow = lf.Flow('lf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['x', 'y', 'q']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(set(['a', 'b', 'c', 'z']), flow.requires) self.assertEqual(set(['x', 'y', 'q', 'i', 'j', 'k']), flow.provides) def test_unordered_flow_without_dependencies(self): flow = uf.Flow('uf').add( utils.TaskNoRequiresNoReturns('task1'), utils.TaskNoRequiresNoReturns('task2')) self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_unordered_flow_requires_values(self): flow = uf.Flow('uf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z']), flow.requires) self.assertEqual(set(), flow.provides) def test_unordered_flow_requires_rebind_values(self): flow = uf.Flow('uf').add( 
utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z', 'q']), flow.requires) self.assertEqual(set(), flow.provides) def test_unordered_flow_provides_values(self): flow = uf.Flow('uf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'a', 'b', 'c']), flow.provides) def test_unordered_flow_provides_required_values(self): flow = uf.Flow('uf') flow.add(utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) flow.add(utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) self.assertEqual(set(['x']), flow.provides) self.assertEqual(set(['x']), flow.requires) def test_unordered_flow_requires_provided_value_other_call(self): flow = uf.Flow('uf') flow.add(utils.TaskOneReturn('task1', provides='x')) flow.add(utils.TaskOneArg('task2')) self.assertEqual(set(['x']), flow.provides) self.assertEqual(set(['x']), flow.requires) def test_unordered_flow_provides_required_value_other_call(self): flow = uf.Flow('uf') flow.add(utils.TaskOneArg('task2')) flow.add(utils.TaskOneReturn('task1', provides='x')) self.assertEqual(2, len(flow)) self.assertEqual(set(['x']), flow.provides) self.assertEqual(set(['x']), flow.requires) def test_unordered_flow_multi_provides_and_requires_values(self): flow = uf.Flow('uf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['d', 'e', 'f']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(set(['a', 'b', 'c', 'x', 'y', 'z']), flow.requires) self.assertEqual(set(['d', 'e', 'f', 'i', 'j', 'k']), flow.provides) def test_unordered_flow_provides_same_values(self): flow = uf.Flow('uf').add(utils.TaskOneReturn(provides='x')) flow.add(utils.TaskOneReturn(provides='x')) self.assertEqual(set(['x']), flow.provides) def test_unordered_flow_provides_same_values_one_add(self): flow = uf.Flow('uf') flow.add(utils.TaskOneReturn(provides='x'), utils.TaskOneReturn(provides='x')) self.assertEqual(set(['x']), flow.provides) def test_nested_flows_requirements(self): flow = uf.Flow('uf').add( lf.Flow('lf').add( utils.TaskOneArgOneReturn('task1', rebind=['a'], provides=['x']), utils.TaskOneArgOneReturn('task2', provides=['y'])), uf.Flow('uf').add( utils.TaskOneArgOneReturn('task3', rebind=['b'], provides=['z']), utils.TaskOneArgOneReturn('task4', rebind=['c'], provides=['q']))) self.assertEqual(set(['a', 'b', 'c']), flow.requires) self.assertEqual(set(['x', 'y', 'z', 'q']), flow.provides) def test_graph_flow_requires_values(self): flow = gf.Flow('gf').add( utils.TaskOneArg('task1'), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z']), flow.requires) self.assertEqual(set(), flow.provides) def test_graph_flow_requires_rebind_values(self): flow = gf.Flow('gf').add( utils.TaskOneArg('task1', rebind=['q']), utils.TaskMultiArg('task2')) self.assertEqual(set(['x', 'y', 'z', 'q']), flow.requires) self.assertEqual(set(), flow.provides) def test_graph_flow_provides_values(self): flow = gf.Flow('gf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskMultiReturn('task2', provides=['a', 'b', 'c'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'a', 'b', 'c']), flow.provides) def test_graph_flow_provides_required_values(self): flow = gf.Flow('gf').add( utils.TaskOneReturn('task1', provides='x'), utils.TaskOneArg('task2')) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x']), flow.provides) def 
test_graph_flow_provides_provided_value_other_call(self): flow = gf.Flow('gf') flow.add(utils.TaskOneReturn('task1', provides='x')) flow.add(utils.TaskOneReturn('task2', provides='x')) self.assertEqual(set(['x']), flow.provides) def test_graph_flow_multi_provides_and_requires_values(self): flow = gf.Flow('gf').add( utils.TaskMultiArgMultiReturn('task1', rebind=['a', 'b', 'c'], provides=['d', 'e', 'f']), utils.TaskMultiArgMultiReturn('task2', provides=['i', 'j', 'k'])) self.assertEqual(set(['a', 'b', 'c', 'x', 'y', 'z']), flow.requires) self.assertEqual(set(['d', 'e', 'f', 'i', 'j', 'k']), flow.provides) def test_graph_cyclic_dependency(self): flow = gf.Flow('g-3-cyclic') self.assertRaisesRegex(exceptions.DependencyFailure, '^No path', flow.add, utils.TaskOneArgOneReturn(provides='a', requires=['b']), utils.TaskOneArgOneReturn(provides='b', requires=['c']), utils.TaskOneArgOneReturn(provides='c', requires=['a'])) def test_task_requires_and_provides_same_values(self): flow = lf.Flow('lf', utils.TaskOneArgOneReturn('rt', requires='x', provides='x')) self.assertEqual(set('x'), flow.requires) self.assertEqual(set('x'), flow.provides) def test_retry_in_linear_flow_no_requirements_no_provides(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt')) self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_retry_in_linear_flow_with_requirements(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt', requires=['x', 'y'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(), flow.provides) def test_retry_in_linear_flow_with_provides(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt', provides=['x', 'y'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'y']), flow.provides) def test_retry_in_linear_flow_requires_and_provides(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(['a', 'b']), flow.provides) def test_retry_requires_and_provides_same_value(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['x', 'y'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(['x', 'y']), flow.provides) def test_retry_in_unordered_flow_no_requirements_no_provides(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt')) self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_retry_in_unordered_flow_with_requirements(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', requires=['x', 'y'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(), flow.provides) def test_retry_in_unordered_flow_with_provides(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', provides=['x', 'y'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'y']), flow.provides) def test_retry_in_unordered_flow_requires_and_provides(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(['a', 'b']), flow.provides) def test_retry_in_graph_flow_no_requirements_no_provides(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt')) self.assertEqual(set(), flow.requires) self.assertEqual(set(), flow.provides) def test_retry_in_graph_flow_with_requirements(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', requires=['x', 'y'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(), flow.provides) def 
test_retry_in_graph_flow_with_provides(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', provides=['x', 'y'])) self.assertEqual(set(), flow.requires) self.assertEqual(set(['x', 'y']), flow.provides) def test_retry_in_graph_flow_requires_and_provides(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) self.assertEqual(set(['x', 'y']), flow.requires) self.assertEqual(set(['a', 'b']), flow.provides) def test_linear_flow_retry_and_task(self): flow = lf.Flow('lf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) flow.add(utils.TaskMultiArgOneReturn(rebind=['a', 'x', 'c'], provides=['z'])) self.assertEqual(set(['x', 'y', 'c']), flow.requires) self.assertEqual(set(['a', 'b', 'z']), flow.provides) def test_unordered_flow_retry_and_task(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) flow.add(utils.TaskMultiArgOneReturn(rebind=['a', 'x', 'c'], provides=['z'])) self.assertEqual(set(['x', 'y', 'c']), flow.requires) self.assertEqual(set(['a', 'b', 'z']), flow.provides) def test_unordered_flow_retry_and_task_same_requires_provides(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', requires=['x'])) flow.add(utils.TaskOneReturn(provides=['x'])) self.assertEqual(set(['x']), flow.requires) self.assertEqual(set(['x']), flow.provides) def test_unordered_flow_retry_and_task_provide_same_value(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', provides=['x'])) flow.add(utils.TaskOneReturn('t1', provides=['x'])) self.assertEqual(set(['x']), flow.provides) def test_unordered_flow_retry_two_tasks_provide_same_value(self): flow = uf.Flow('uf', retry.AlwaysRevert('rt', provides=['y'])) flow.add(utils.TaskOneReturn('t1', provides=['x']), utils.TaskOneReturn('t2', provides=['x'])) self.assertEqual(set(['x', 'y']), flow.provides) def test_graph_flow_retry_and_task(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', requires=['x', 'y'], provides=['a', 'b'])) flow.add(utils.TaskMultiArgOneReturn(rebind=['a', 'x', 'c'], provides=['z'])) self.assertEqual(set(['x', 'y', 'c']), flow.requires) self.assertEqual(set(['a', 'b', 'z']), flow.provides) def test_graph_flow_retry_and_task_dependency_provide_require(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', requires=['x'])) flow.add(utils.TaskOneReturn(provides=['x'])) self.assertEqual(set(['x']), flow.provides) self.assertEqual(set(['x']), flow.requires) def test_graph_flow_retry_and_task_provide_same_value(self): flow = gf.Flow('gf', retry.AlwaysRevert('rt', provides=['x'])) flow.add(utils.TaskOneReturn('t1', provides=['x'])) self.assertEqual(set(['x']), flow.provides) def test_builtin_retry_args(self): class FullArgsRetry(retry.AlwaysRevert): def execute(self, history, **kwargs): pass def revert(self, history, **kwargs): pass flow = lf.Flow('lf', retry=FullArgsRetry(requires='a')) self.assertEqual(set(['a']), flow.requires) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_formatters.py0000664000175000017500000000752100000000000023161 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import engines from taskflow import formatters from taskflow.listeners import logging as logging_listener from taskflow.patterns import linear_flow from taskflow import states from taskflow import test from taskflow.test import mock from taskflow.test import utils as test_utils class FormattersTest(test.TestCase): @staticmethod def _broken_atom_matcher(node): return node.item.name == 'Broken' def _make_test_flow(self): b = test_utils.TaskWithFailure("Broken") h_1 = test_utils.ProgressingTask("Happy-1") h_2 = test_utils.ProgressingTask("Happy-2") flo = linear_flow.Flow("test") flo.add(h_1, h_2, b) return flo def test_exc_info_format(self): flo = self._make_test_flow() e = engines.load(flo) self.assertRaises(RuntimeError, e.run) fails = e.storage.get_execute_failures() self.assertEqual(1, len(fails)) self.assertIn('Broken', fails) fail = fails['Broken'] f = formatters.FailureFormatter(e) (exc_info, details) = f.format(fail, self._broken_atom_matcher) self.assertEqual(3, len(exc_info)) self.assertEqual("", details) @mock.patch('taskflow.formatters.FailureFormatter._format_node') def test_exc_info_with_details_format(self, mock_format_node): mock_format_node.return_value = 'A node' flo = self._make_test_flow() e = engines.load(flo) self.assertRaises(RuntimeError, e.run) fails = e.storage.get_execute_failures() self.assertEqual(1, len(fails)) self.assertIn('Broken', fails) fail = fails['Broken'] # Doing this allows the details to be shown... e.storage.set_atom_intention("Broken", states.EXECUTE) f = formatters.FailureFormatter(e) (exc_info, details) = f.format(fail, self._broken_atom_matcher) self.assertEqual(3, len(exc_info)) self.assertTrue(mock_format_node.called) @mock.patch('taskflow.storage.Storage.get_execute_result') def test_exc_info_with_details_format_hidden(self, mock_get_execute): flo = self._make_test_flow() e = engines.load(flo) self.assertRaises(RuntimeError, e.run) fails = e.storage.get_execute_failures() self.assertEqual(1, len(fails)) self.assertIn('Broken', fails) fail = fails['Broken'] # Doing this allows the details to be shown... e.storage.set_atom_intention("Broken", states.EXECUTE) hide_inputs_outputs_of = ['Broken', "Happy-1", "Happy-2"] f = formatters.FailureFormatter( e, hide_inputs_outputs_of=hide_inputs_outputs_of) (exc_info, details) = f.format(fail, self._broken_atom_matcher) self.assertEqual(3, len(exc_info)) self.assertFalse(mock_get_execute.called) @mock.patch('taskflow.formatters.FailureFormatter._format_node') def test_formatted_via_listener(self, mock_format_node): mock_format_node.return_value = 'A node' flo = self._make_test_flow() e = engines.load(flo) with logging_listener.DynamicLoggingListener(e): self.assertRaises(RuntimeError, e.run) self.assertTrue(mock_format_node.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_functor_task.py0000664000175000017500000000474000000000000023475 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import taskflow.engines from taskflow.patterns import linear_flow from taskflow import task as base from taskflow import test def add(a, b): return a + b class BunchOfFunctions(object): def __init__(self, values): self.values = values def run_one(self, *args, **kwargs): self.values.append('one') def revert_one(self, *args, **kwargs): self.values.append('revert one') def run_fail(self, *args, **kwargs): self.values.append('fail') raise RuntimeError('Woot!') five = lambda: 5 multiply = lambda x, y: x * y class FunctorTaskTest(test.TestCase): def test_simple(self): task = base.FunctorTask(add) self.assertEqual(__name__ + '.add', task.name) def test_other_name(self): task = base.FunctorTask(add, name='my task') self.assertEqual('my task', task.name) def test_it_runs(self): values = [] bof = BunchOfFunctions(values) t = base.FunctorTask flow = linear_flow.Flow('test') flow.add( t(bof.run_one, revert=bof.revert_one), t(bof.run_fail) ) self.assertRaisesRegex(RuntimeError, '^Woot', taskflow.engines.run, flow) self.assertEqual(['one', 'fail', 'revert one'], values) def test_lambda_functors(self): t = base.FunctorTask flow = linear_flow.Flow('test') flow.add( t(five, provides='five', name='five'), t(multiply, provides='product', name='product') ) flow_store = { 'x': 2, 'y': 3 } result = taskflow.engines.run(flow, store=flow_store) expected = flow_store.copy() expected.update({ 'five': 5, 'product': 6 }) self.assertDictEqual(expected, result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_listeners.py0000664000175000017500000003574100000000000023010 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
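# These tests cover the listener helpers: CheckingClaimListener (suspends the
# engine or invokes a handler when a jobboard claim is lost), DurationListener
# and EventTimeListener (record timing metadata on atoms and flows), the
# capturing listener, and the plain and dynamic logging listeners.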
import contextlib import logging import threading import time from oslo_serialization import jsonutils from oslo_utils import reflection import six from zake import fake_client import taskflow.engines from taskflow import exceptions as exc from taskflow.jobs import backends as jobs from taskflow.listeners import claims from taskflow.listeners import logging as logging_listeners from taskflow.listeners import timing from taskflow.patterns import linear_flow as lf from taskflow.persistence.backends import impl_memory from taskflow import states from taskflow import task from taskflow import test from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.utils import misc from taskflow.utils import persistence_utils _LOG_LEVELS = frozenset([ logging.CRITICAL, logging.DEBUG, logging.ERROR, logging.INFO, logging.NOTSET, logging.WARNING, ]) class SleepyTask(task.Task): def __init__(self, name, sleep_for=0.0): super(SleepyTask, self).__init__(name=name) self._sleep_for = float(sleep_for) def execute(self): if self._sleep_for <= 0: return else: time.sleep(self._sleep_for) class EngineMakerMixin(object): def _make_engine(self, flow, flow_detail=None, backend=None): e = taskflow.engines.load(flow, flow_detail=flow_detail, backend=backend) e.compile() e.prepare() return e class TestClaimListener(test.TestCase, EngineMakerMixin): def _make_dummy_flow(self, count): f = lf.Flow('root') for i in range(0, count): f.add(test_utils.ProvidesRequiresTask('%s_test' % i, [], [])) return f def setUp(self): super(TestClaimListener, self).setUp() self.client = fake_client.FakeClient() self.addCleanup(self.client.stop) self.board = jobs.fetch('test', 'zookeeper', client=self.client) self.addCleanup(self.board.close) self.board.connect() def _post_claim_job(self, job_name, book=None, details=None): arrived = threading.Event() def set_on_children(children): if children: arrived.set() self.client.ChildrenWatch("/taskflow", set_on_children) job = self.board.post('test-1') # Make sure it arrived and claimed before doing further work... 
self.assertTrue(arrived.wait(test_utils.WAIT_TIMEOUT)) arrived.clear() self.board.claim(job, self.board.name) self.assertTrue(arrived.wait(test_utils.WAIT_TIMEOUT)) self.assertEqual(states.CLAIMED, job.state) return job def _destroy_locks(self): children = self.client.storage.get_children("/taskflow", only_direct=False) removed = 0 for p, data in six.iteritems(children): if p.endswith(".lock"): self.client.storage.pop(p) removed += 1 return removed def _change_owner(self, new_owner): children = self.client.storage.get_children("/taskflow", only_direct=False) altered = 0 for p, data in six.iteritems(children): if p.endswith(".lock"): self.client.set(p, misc.binary_encode( jsonutils.dumps({'owner': new_owner}))) altered += 1 return altered def test_bad_create(self): job = self._post_claim_job('test') f = self._make_dummy_flow(10) e = self._make_engine(f) self.assertRaises(ValueError, claims.CheckingClaimListener, e, job, self.board, self.board.name, on_job_loss=1) def test_claim_lost_suspended(self): job = self._post_claim_job('test') f = self._make_dummy_flow(10) e = self._make_engine(f) try_destroy = True ran_states = [] with claims.CheckingClaimListener(e, job, self.board, self.board.name): for state in e.run_iter(): ran_states.append(state) if state == states.SCHEDULING and try_destroy: try_destroy = bool(self._destroy_locks()) self.assertEqual(states.SUSPENDED, e.storage.get_flow_state()) self.assertEqual(1, ran_states.count(states.ANALYZING)) self.assertEqual(1, ran_states.count(states.SCHEDULING)) self.assertEqual(1, ran_states.count(states.WAITING)) def test_claim_lost_custom_handler(self): job = self._post_claim_job('test') f = self._make_dummy_flow(10) e = self._make_engine(f) handler = mock.MagicMock() ran_states = [] try_destroy = True destroyed_at = -1 with claims.CheckingClaimListener(e, job, self.board, self.board.name, on_job_loss=handler): for i, state in enumerate(e.run_iter()): ran_states.append(state) if state == states.SCHEDULING and try_destroy: destroyed = bool(self._destroy_locks()) if destroyed: destroyed_at = i try_destroy = False self.assertTrue(handler.called) self.assertEqual(10, ran_states.count(states.SCHEDULING)) self.assertNotEqual(-1, destroyed_at) after_states = ran_states[destroyed_at:] self.assertGreater(0, len(after_states)) def test_claim_lost_new_owner(self): job = self._post_claim_job('test') f = self._make_dummy_flow(10) e = self._make_engine(f) change_owner = True ran_states = [] with claims.CheckingClaimListener(e, job, self.board, self.board.name): for state in e.run_iter(): ran_states.append(state) if state == states.SCHEDULING and change_owner: change_owner = bool(self._change_owner('test-2')) self.assertEqual(states.SUSPENDED, e.storage.get_flow_state()) self.assertEqual(1, ran_states.count(states.ANALYZING)) self.assertEqual(1, ran_states.count(states.SCHEDULING)) self.assertEqual(1, ran_states.count(states.WAITING)) class TestDurationListener(test.TestCase, EngineMakerMixin): def test_deregister(self): """Verify that register and deregister don't blow up""" with contextlib.closing(impl_memory.MemoryBackend()) as be: flow = lf.Flow("test") flow.add(SleepyTask("test-1", sleep_for=0.1)) (lb, fd) = persistence_utils.temporary_flow_detail(be) e = self._make_engine(flow, fd, be) l = timing.DurationListener(e) l.register() l.deregister() def test_task_duration(self): with contextlib.closing(impl_memory.MemoryBackend()) as be: flow = lf.Flow("test") flow.add(SleepyTask("test-1", sleep_for=0.1)) (lb, fd) = persistence_utils.temporary_flow_detail(be) e 
= self._make_engine(flow, fd, be) with timing.DurationListener(e): e.run() t_uuid = e.storage.get_atom_uuid("test-1") td = fd.find(t_uuid) self.assertIsNotNone(td) self.assertIsNotNone(td.meta) self.assertIn('duration', td.meta) self.assertGreaterEqual(0.1, td.meta['duration']) def test_flow_duration(self): with contextlib.closing(impl_memory.MemoryBackend()) as be: flow = lf.Flow("test") flow.add(SleepyTask("test-1", sleep_for=0.1)) (lb, fd) = persistence_utils.temporary_flow_detail(be) e = self._make_engine(flow, fd, be) with timing.DurationListener(e): e.run() self.assertIsNotNone(fd) self.assertIsNotNone(fd.meta) self.assertIn('duration', fd.meta) self.assertGreaterEqual(0.1, fd.meta['duration']) @mock.patch.object(timing.LOG, 'warning') def test_record_ending_exception(self, mocked_warning): with contextlib.closing(impl_memory.MemoryBackend()) as be: flow = lf.Flow("test") flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) (lb, fd) = persistence_utils.temporary_flow_detail(be) e = self._make_engine(flow, fd, be) duration_listener = timing.DurationListener(e) with mock.patch.object(duration_listener._engine.storage, 'update_atom_metadata') as mocked_uam: mocked_uam.side_effect = exc.StorageFailure('Woot!') with duration_listener: e.run() mocked_warning.assert_called_once_with(mock.ANY, mock.ANY, 'task', 'test-1', exc_info=True) class TestEventTimeListener(test.TestCase, EngineMakerMixin): def test_event_time(self): flow = lf.Flow('flow1').add(SleepyTask("task1", sleep_for=0.1)) engine = self._make_engine(flow) with timing.EventTimeListener(engine): engine.run() t_uuid = engine.storage.get_atom_uuid("task1") td = engine.storage._flowdetail.find(t_uuid) self.assertIsNotNone(td) self.assertIsNotNone(td.meta) running_field = '%s-timestamp' % states.RUNNING success_field = '%s-timestamp' % states.SUCCESS self.assertIn(running_field, td.meta) self.assertIn(success_field, td.meta) td_duration = td.meta[success_field] - td.meta[running_field] self.assertGreaterEqual(0.1, td_duration) fd_meta = engine.storage._flowdetail.meta self.assertIn(running_field, fd_meta) self.assertIn(success_field, fd_meta) fd_duration = fd_meta[success_field] - fd_meta[running_field] self.assertGreaterEqual(0.1, fd_duration) class TestCapturingListeners(test.TestCase, EngineMakerMixin): def test_basic_do_not_capture(self): flow = lf.Flow("test") flow.add(test_utils.ProgressingTask("task1")) e = self._make_engine(flow) with test_utils.CaptureListener(e, capture_task=False) as capturer: e.run() expected = ['test.f RUNNING', 'test.f SUCCESS'] self.assertEqual(expected, capturer.values) class TestLoggingListeners(test.TestCase, EngineMakerMixin): def _make_logger(self, level=logging.DEBUG): log = logging.getLogger( reflection.get_callable_name(self._get_test_method())) log.propagate = False for handler in reversed(log.handlers): log.removeHandler(handler) handler = test.CapturingLoggingHandler(level=level) log.addHandler(handler) log.setLevel(level) self.addCleanup(handler.reset) self.addCleanup(log.removeHandler, handler) return (log, handler) def test_basic(self): flow = lf.Flow("test") flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() with logging_listeners.LoggingListener(e, log=log): e.run() self.assertGreater(0, handler.counts[logging.DEBUG]) for levelno in _LOG_LEVELS - set([logging.DEBUG]): self.assertEqual(0, handler.counts[levelno]) self.assertEqual([], handler.exc_infos) def test_basic_customized(self): flow = lf.Flow("test") 
flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() listener = logging_listeners.LoggingListener( e, log=log, level=logging.INFO) with listener: e.run() self.assertGreater(0, handler.counts[logging.INFO]) for levelno in _LOG_LEVELS - set([logging.INFO]): self.assertEqual(0, handler.counts[levelno]) self.assertEqual([], handler.exc_infos) def test_basic_failure(self): flow = lf.Flow("test") flow.add(test_utils.TaskWithFailure("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() with logging_listeners.LoggingListener(e, log=log): self.assertRaises(RuntimeError, e.run) self.assertGreater(0, handler.counts[logging.DEBUG]) for levelno in _LOG_LEVELS - set([logging.DEBUG]): self.assertEqual(0, handler.counts[levelno]) self.assertEqual(1, len(handler.exc_infos)) def test_dynamic(self): flow = lf.Flow("test") flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() with logging_listeners.DynamicLoggingListener(e, log=log): e.run() self.assertGreater(0, handler.counts[logging.DEBUG]) for levelno in _LOG_LEVELS - set([logging.DEBUG]): self.assertEqual(0, handler.counts[levelno]) self.assertEqual([], handler.exc_infos) def test_dynamic_failure(self): flow = lf.Flow("test") flow.add(test_utils.TaskWithFailure("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() with logging_listeners.DynamicLoggingListener(e, log=log): self.assertRaises(RuntimeError, e.run) self.assertGreater(0, handler.counts[logging.WARNING]) self.assertGreater(0, handler.counts[logging.DEBUG]) self.assertEqual(1, len(handler.exc_infos)) for levelno in _LOG_LEVELS - set([logging.DEBUG, logging.WARNING]): self.assertEqual(0, handler.counts[levelno]) def test_dynamic_failure_customized_level(self): flow = lf.Flow("test") flow.add(test_utils.TaskWithFailure("test-1")) e = self._make_engine(flow) log, handler = self._make_logger() listener = logging_listeners.DynamicLoggingListener( e, log=log, failure_level=logging.ERROR) with listener: self.assertRaises(RuntimeError, e.run) self.assertGreater(0, handler.counts[logging.ERROR]) self.assertGreater(0, handler.counts[logging.DEBUG]) self.assertEqual(1, len(handler.exc_infos)) for levelno in _LOG_LEVELS - set([logging.DEBUG, logging.ERROR]): self.assertEqual(0, handler.counts[levelno]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_mapfunctor_task.py0000664000175000017500000000450000000000000024165 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
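# MapFunctorTask wraps a single callable and applies it to each required
# input, storing the results under the corresponding names in ``provides`` --
# the tests below map ``double`` and ``square`` over every value injected
# into the flow store.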
import taskflow.engines as engines from taskflow.patterns import linear_flow from taskflow import task as base from taskflow import test def double(x): return x * 2 square = lambda x: x * x class MapFunctorTaskTest(test.TestCase): def setUp(self): super(MapFunctorTaskTest, self).setUp() self.flow_store = { 'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, } def test_double_array(self): expected = self.flow_store.copy() expected.update({ 'double_a': 2, 'double_b': 4, 'double_c': 6, 'double_d': 8, 'double_e': 10, }) requires = self.flow_store.keys() provides = ["double_%s" % k for k in requires] flow = linear_flow.Flow("double array flow") flow.add(base.MapFunctorTask(double, requires=requires, provides=provides)) result = engines.run(flow, store=self.flow_store) self.assertDictEqual(expected, result) def test_square_array(self): expected = self.flow_store.copy() expected.update({ 'square_a': 1, 'square_b': 4, 'square_c': 9, 'square_d': 16, 'square_e': 25, }) requires = self.flow_store.keys() provides = ["square_%s" % k for k in requires] flow = linear_flow.Flow("square array flow") flow.add(base.MapFunctorTask(square, requires=requires, provides=provides)) result = engines.run(flow, store=self.flow_store) self.assertDictEqual(expected, result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_notifier.py0000664000175000017500000001732600000000000022616 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
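# Notifier callbacks are registered per event type (or for Notifier.ANY) and
# may be guarded by a details filter; notify() then invokes every matching
# callback. RestrictedNotifier additionally restricts which event types can
# be registered in the first place.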
import collections import functools from taskflow import states from taskflow import test from taskflow.types import notifier as nt class NotifierTest(test.TestCase): def test_notify_called(self): call_collector = [] def call_me(state, details): call_collector.append((state, details)) notifier = nt.Notifier() notifier.register(nt.Notifier.ANY, call_me) notifier.notify(states.SUCCESS, {}) notifier.notify(states.SUCCESS, {}) self.assertEqual(2, len(call_collector)) self.assertEqual(1, len(notifier)) def test_notify_not_called(self): call_collector = [] def call_me(state, details): call_collector.append((state, details)) notifier = nt.Notifier() notifier.register(nt.Notifier.ANY, call_me) notifier.notify(nt.Notifier.ANY, {}) self.assertFalse(notifier.can_trigger_notification(nt.Notifier.ANY)) self.assertEqual(0, len(call_collector)) self.assertEqual(1, len(notifier)) def test_notify_register_deregister(self): def call_me(state, details): pass class A(object): def call_me_too(self, state, details): pass notifier = nt.Notifier() notifier.register(nt.Notifier.ANY, call_me) a = A() notifier.register(nt.Notifier.ANY, a.call_me_too) self.assertEqual(2, len(notifier)) notifier.deregister(nt.Notifier.ANY, call_me) notifier.deregister(nt.Notifier.ANY, a.call_me_too) self.assertEqual(0, len(notifier)) def test_notify_reset(self): def call_me(state, details): pass notifier = nt.Notifier() notifier.register(nt.Notifier.ANY, call_me) self.assertEqual(1, len(notifier)) notifier.reset() self.assertEqual(0, len(notifier)) def test_bad_notify(self): def call_me(state, details): pass notifier = nt.Notifier() self.assertRaises(KeyError, notifier.register, nt.Notifier.ANY, call_me, kwargs={'details': 5}) def test_not_callable(self): notifier = nt.Notifier() self.assertRaises(ValueError, notifier.register, nt.Notifier.ANY, 2) def test_restricted_notifier(self): notifier = nt.RestrictedNotifier(['a', 'b']) self.assertRaises(ValueError, notifier.register, 'c', lambda *args, **kargs: None) notifier.register('b', lambda *args, **kargs: None) self.assertEqual(1, len(notifier)) def test_restricted_notifier_any(self): notifier = nt.RestrictedNotifier(['a', 'b']) self.assertRaises(ValueError, notifier.register, 'c', lambda *args, **kargs: None) notifier.register('b', lambda *args, **kargs: None) self.assertEqual(1, len(notifier)) notifier.register(nt.RestrictedNotifier.ANY, lambda *args, **kargs: None) self.assertEqual(2, len(notifier)) def test_restricted_notifier_no_any(self): notifier = nt.RestrictedNotifier(['a', 'b'], allow_any=False) self.assertRaises(ValueError, notifier.register, nt.RestrictedNotifier.ANY, lambda *args, **kargs: None) notifier.register('b', lambda *args, **kargs: None) self.assertEqual(1, len(notifier)) def test_selective_notify(self): call_counts = collections.defaultdict(list) def call_me_on(registered_state, state, details): call_counts[registered_state].append((state, details)) notifier = nt.Notifier() call_me_on_success = functools.partial(call_me_on, states.SUCCESS) notifier.register(states.SUCCESS, call_me_on_success) self.assertTrue(notifier.is_registered(states.SUCCESS, call_me_on_success)) call_me_on_any = functools.partial(call_me_on, nt.Notifier.ANY) notifier.register(nt.Notifier.ANY, call_me_on_any) self.assertTrue(notifier.is_registered(nt.Notifier.ANY, call_me_on_any)) self.assertEqual(2, len(notifier)) notifier.notify(states.SUCCESS, {}) self.assertEqual(1, len(call_counts[nt.Notifier.ANY])) self.assertEqual(1, len(call_counts[states.SUCCESS])) notifier.notify(states.FAILURE, {}) 
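        # Only the ANY-registered callback fires for FAILURE; the callback
        # registered solely for SUCCESS keeps its single recorded call.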
self.assertEqual(2, len(call_counts[nt.Notifier.ANY])) self.assertEqual(1, len(call_counts[states.SUCCESS])) self.assertEqual(2, len(call_counts)) def test_details_filter(self): call_counts = collections.defaultdict(list) def call_me_on(registered_state, state, details): call_counts[registered_state].append((state, details)) def when_red(details): return details.get('color') == 'red' notifier = nt.Notifier() call_me_on_success = functools.partial(call_me_on, states.SUCCESS) notifier.register(states.SUCCESS, call_me_on_success, details_filter=when_red) self.assertEqual(1, len(notifier)) self.assertTrue(notifier.is_registered( states.SUCCESS, call_me_on_success, details_filter=when_red)) notifier.notify(states.SUCCESS, {}) self.assertEqual(0, len(call_counts[states.SUCCESS])) notifier.notify(states.SUCCESS, {'color': 'red'}) self.assertEqual(1, len(call_counts[states.SUCCESS])) notifier.notify(states.SUCCESS, {'color': 'green'}) self.assertEqual(1, len(call_counts[states.SUCCESS])) def test_different_details_filter(self): call_counts = collections.defaultdict(list) def call_me_on(registered_state, state, details): call_counts[registered_state].append((state, details)) def when_red(details): return details.get('color') == 'red' def when_blue(details): return details.get('color') == 'blue' notifier = nt.Notifier() call_me_on_success = functools.partial(call_me_on, states.SUCCESS) notifier.register(states.SUCCESS, call_me_on_success, details_filter=when_red) notifier.register(states.SUCCESS, call_me_on_success, details_filter=when_blue) self.assertEqual(2, len(notifier)) self.assertTrue(notifier.is_registered( states.SUCCESS, call_me_on_success, details_filter=when_blue)) self.assertTrue(notifier.is_registered( states.SUCCESS, call_me_on_success, details_filter=when_red)) notifier.notify(states.SUCCESS, {}) self.assertEqual(0, len(call_counts[states.SUCCESS])) notifier.notify(states.SUCCESS, {'color': 'red'}) self.assertEqual(1, len(call_counts[states.SUCCESS])) notifier.notify(states.SUCCESS, {'color': 'blue'}) self.assertEqual(2, len(call_counts[states.SUCCESS])) notifier.notify(states.SUCCESS, {'color': 'green'}) self.assertEqual(2, len(call_counts[states.SUCCESS])) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_progress.py0000664000175000017500000001221300000000000022631 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
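# ProgressTask reports one intermediate update per internal segment boundary
# and the engine always adds the automatic 0.0 and 1.0 events, so
# ProgressTask('t', 5) yields six notifications and a zero-segment task still
# yields two; the final progress value is also persisted in the task's
# metadata.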
import contextlib import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow.persistence.backends import impl_memory from taskflow import task from taskflow import test from taskflow.utils import persistence_utils as p_utils class ProgressTask(task.Task): def __init__(self, name, segments): super(ProgressTask, self).__init__(name=name) self._segments = segments def execute(self): if self._segments <= 0: return for i in range(1, self._segments): progress = float(i) / self._segments self.update_progress(progress) class ProgressTaskWithDetails(task.Task): def execute(self): details = { 'progress': 0.5, 'test': 'test data', 'foo': 'bar', } self.notifier.notify(task.EVENT_UPDATE_PROGRESS, details) class TestProgress(test.TestCase): def _make_engine(self, flow, flow_detail=None, backend=None): e = taskflow.engines.load(flow, flow_detail=flow_detail, backend=backend) e.compile() e.prepare() return e def tearDown(self): super(TestProgress, self).tearDown() with contextlib.closing(impl_memory.MemoryBackend({})) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() def test_sanity_progress(self): fired_events = [] def notify_me(event_type, details): fired_events.append(details.pop('progress')) ev_count = 5 t = ProgressTask("test", ev_count) t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) e.run() self.assertEqual(ev_count + 1, len(fired_events)) self.assertEqual(1.0, fired_events[-1]) self.assertEqual(0.0, fired_events[0]) def test_no_segments_progress(self): fired_events = [] def notify_me(event_type, details): fired_events.append(details.pop('progress')) t = ProgressTask("test", 0) t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) e.run() # 0.0 and 1.0 should be automatically fired self.assertEqual(2, len(fired_events)) self.assertEqual(1.0, fired_events[-1]) self.assertEqual(0.0, fired_events[0]) def test_storage_progress(self): with contextlib.closing(impl_memory.MemoryBackend({})) as be: flo = lf.Flow("test") flo.add(ProgressTask("test", 3)) b, fd = p_utils.temporary_flow_detail(be) e = self._make_engine(flo, flow_detail=fd, backend=be) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) task_uuid = e.storage.get_atom_uuid("test") td = fd.find(task_uuid) self.assertEqual(1.0, td.meta['progress']) self.assertFalse(td.meta['progress_details']) def test_storage_progress_detail(self): flo = ProgressTaskWithDetails("test") e = self._make_engine(flo) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) end_details = e.storage.get_task_progress_details("test") self.assertEqual(0.5, end_details.get('at_progress')) self.assertEqual({ 'test': 'test data', 'foo': 'bar' }, end_details.get('details')) def test_dual_storage_progress(self): fired_events = [] def notify_me(event_type, details): fired_events.append(details.pop('progress')) with contextlib.closing(impl_memory.MemoryBackend({})) as be: t = ProgressTask("test", 5) t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) b, fd = p_utils.temporary_flow_detail(be) e = self._make_engine(flo, flow_detail=fd, backend=be) e.run() end_progress = e.storage.get_task_progress("test") self.assertEqual(1.0, end_progress) task_uuid = e.storage.get_atom_uuid("test") td = fd.find(task_uuid) self.assertEqual(1.0, td.meta['progress']) 
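            # ProgressTask only reports bare progress values (no details), so
            # the persisted progress_details metadata remains empty.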
self.assertFalse(td.meta['progress_details']) self.assertEqual(6, len(fired_events)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_reducefunctor_task.py0000664000175000017500000000407200000000000024663 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import taskflow.engines as engines from taskflow.patterns import linear_flow from taskflow import task as base from taskflow import test def sum(x, y): return x + y multiply = lambda x, y: x * y class ReduceFunctorTaskTest(test.TestCase): def setUp(self): super(ReduceFunctorTaskTest, self).setUp() self.flow_store = { 'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, } def test_sum_array(self): expected = self.flow_store.copy() expected.update({ 'sum': 15 }) requires = self.flow_store.keys() provides = 'sum' flow = linear_flow.Flow("sum array flow") flow.add(base.ReduceFunctorTask(sum, requires=requires, provides=provides)) result = engines.run(flow, store=self.flow_store) self.assertDictEqual(expected, result) def test_multiply_array(self): expected = self.flow_store.copy() expected.update({ 'product': 120 }) requires = self.flow_store.keys() provides = 'product' flow = linear_flow.Flow("square array flow") flow.add(base.ReduceFunctorTask(multiply, requires=requires, provides=provides)) result = engines.run(flow, store=self.flow_store) self.assertDictEqual(expected, result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_retries.py0000664000175000017500000015605500000000000022457 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
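# A retry controller attached to a flow decides what to do when an atom in
# its scope fails (retry the subflow, revert just its scope, or revert
# everything), and each attempt is recorded in the retry's history. The tests
# below exercise Times, ForEach, ParameterizedForEach and custom Retry
# subclasses across linear, unordered and graph flows, including resumption
# after simulated crashes and reverts under parallel execution.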
import testtools import taskflow.engines from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow.patterns import unordered_flow as uf from taskflow import retry from taskflow import states as st from taskflow import test from taskflow.tests import utils from taskflow.types import failure from taskflow.utils import eventlet_utils as eu class FailingRetry(retry.Retry): def execute(self, **kwargs): raise ValueError('OMG I FAILED') def revert(self, history, **kwargs): self.history = history def on_failure(self, **kwargs): return retry.REVERT class NastyFailingRetry(FailingRetry): def revert(self, history, **kwargs): raise ValueError('WOOT!') class RetryTest(utils.EngineTestBase): def test_run_empty_linear_flow(self): flow = lf.Flow('flow-1', utils.OneReturnRetry(provides='x')) engine = self._make_engine(flow) engine.run() self.assertEqual({'x': 1}, engine.storage.fetch_all()) def test_run_empty_unordered_flow(self): flow = uf.Flow('flow-1', utils.OneReturnRetry(provides='x')) engine = self._make_engine(flow) engine.run() self.assertEqual({'x': 1}, engine.storage.fetch_all()) def test_run_empty_graph_flow(self): flow = gf.Flow('flow-1', utils.OneReturnRetry(provides='x')) engine = self._make_engine(flow) engine.run() self.assertEqual({'x': 1}, engine.storage.fetch_all()) def test_states_retry_success_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(4, 'r1', provides='x')).add( utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r RETRYING', 'task1.t PENDING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_states_retry_reverted_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(2, 'r1', provides='x')).add( utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) engine.storage.inject({'y': 4}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) self.assertEqual({'y': 4}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r RETRYING', 'task1.t PENDING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_states_retry_failure_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(2, 'r1', provides='x')).add( utils.NastyTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) 
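        # NastyTask fails while reverting, so the retry cannot unwind its
        # scope and the flow finishes in FAILURE instead of being retried or
        # reverted.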
engine.storage.inject({'y': 4}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Gotcha', engine.run) self.assertEqual({'y': 4, 'x': 1}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERT_FAILURE(Failure: RuntimeError: Gotcha!)', 'flow-1.f FAILURE'] self.assertEqual(expected, capturer.values) def test_states_retry_failure_nested_flow_fails(self): flow = lf.Flow('flow-1', utils.retry.AlwaysRevert('r1')).add( utils.TaskNoRequiresNoReturns("task1"), lf.Flow('flow-2', retry.Times(3, 'r2', provides='x')).add( utils.TaskNoRequiresNoReturns("task2"), utils.ConditionalTask("task3") ), utils.TaskNoRequiresNoReturns("task4") ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(None)', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'r2.r RUNNING', 'r2.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r2.r RETRYING', 'task2.t PENDING', 'task3.t PENDING', 'r2.r RUNNING', 'r2.r SUCCESS(2)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t SUCCESS(None)', 'task4.t RUNNING', 'task4.t SUCCESS(None)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_new_revert_vs_old(self): flow = lf.Flow('flow-1').add( utils.TaskNoRequiresNoReturns("task1"), lf.Flow('flow-2', retry.Times(1, 'r1', provides='x')).add( utils.TaskNoRequiresNoReturns("task2"), utils.ConditionalTask("task3") ), utils.TaskNoRequiresNoReturns("task4") ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: try: engine.run() except Exception: pass expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) engine = self._make_engine(flow, defer_reverts=True) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: try: engine.run() except Exception: pass expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_states_retry_failure_parent_flow_fails(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x1')).add( utils.TaskNoRequiresNoReturns("task1"), lf.Flow('flow-2', retry.Times(3, 'r2', provides='x2')).add( utils.TaskNoRequiresNoReturns("task2"), utils.TaskNoRequiresNoReturns("task3") ), 
utils.ConditionalTask("task4", rebind={'x': 'x1'}) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() self.assertEqual({'y': 2, 'x1': 2, 'x2': 1}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'r2.r RUNNING', 'r2.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t SUCCESS(None)', 'task4.t RUNNING', 'task4.t FAILURE(Failure: RuntimeError: Woot!)', 'task4.t REVERTING', 'task4.t REVERTED(None)', 'task3.t REVERTING', 'task3.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r2.r REVERTING', 'r2.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r RETRYING', 'task1.t PENDING', 'r2.r PENDING', 'task2.t PENDING', 'task3.t PENDING', 'task4.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(None)', 'r2.r RUNNING', 'r2.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task3.t RUNNING', 'task3.t SUCCESS(None)', 'task4.t RUNNING', 'task4.t SUCCESS(None)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_unordered_flow_task_fails_parallel_tasks_should_be_reverted(self): flow = uf.Flow('flow-1', retry.Times(3, 'r', provides='x')).add( utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r.r RUNNING', 'r.r SUCCESS(1)', 'task1.t RUNNING', 'task2.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task1.t REVERTING', 'task2.t REVERTED(None)', 'task1.t REVERTED(None)', 'r.r RETRYING', 'task1.t PENDING', 'task2.t PENDING', 'r.r RUNNING', 'r.r SUCCESS(2)', 'task1.t RUNNING', 'task2.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t SUCCESS(None)', 'flow-1.f SUCCESS'] self.assertCountEqual(capturer.values, expected) def test_nested_flow_reverts_parent_retries(self): retry1 = retry.Times(3, 'r1', provides='x') retry2 = retry.Times(0, 'r2', provides='x2') flow = lf.Flow('flow-1', retry1).add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry2).add(utils.ConditionalTask("task2")) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2, 'x2': 1}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r2.r RUNNING', 'r2.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r2.r REVERTING', 'r2.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r RETRYING', 'task1.t PENDING', 'r2.r PENDING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r2.r RUNNING', 'r2.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_nested_flow_with_retry_revert(self): retry1 = retry.Times(0, 'r1', provides='x2') flow = lf.Flow('flow-1').add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry1).add( utils.ConditionalTask("task2", inject={'x': 1})) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as 
capturer: try: engine.run() except Exception: pass self.assertEqual({'y': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_nested_flow_with_retry_revert_all(self): retry1 = retry.Times(0, 'r1', provides='x2', revert_all=True) flow = lf.Flow('flow-1').add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry1).add( utils.ConditionalTask("task2", inject={'x': 1})) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: try: engine.run() except Exception: pass self.assertEqual({'y': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_revert_all_retry(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x')).add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry.AlwaysRevertAll('r2')).add( utils.ConditionalTask("task2")) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) self.assertEqual({'y': 2}, engine.storage.fetch_all()) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r2.r RUNNING', 'r2.r SUCCESS(None)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r2.r REVERTING', 'r2.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_restart_reverted_flow_with_retry(self): flow = lf.Flow('test', retry=utils.OneReturnRetry(provides='x')).add( utils.FailingTask('fail')) engine = self._make_engine(flow) self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) def test_run_just_retry(self): flow = utils.OneReturnRetry(provides='x') engine = self._make_engine(flow) self.assertRaises(TypeError, engine.run) def test_use_retry_as_a_task(self): flow = lf.Flow('test').add(utils.OneReturnRetry(provides='x')) engine = self._make_engine(flow) self.assertRaises(TypeError, engine.run) def test_resume_flow_that_had_been_interrupted_during_retrying(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( utils.ProgressingTask('t1'), utils.ProgressingTask('t2'), utils.ProgressingTask('t3') ) engine = self._make_engine(flow) engine.compile() engine.prepare() with utils.CaptureListener(engine) as capturer: engine.storage.set_atom_state('r1', st.RETRYING) engine.storage.set_atom_state('t1', st.PENDING) engine.storage.set_atom_state('t2', st.REVERTED) engine.storage.set_atom_state('t3', st.REVERTED) engine.run() expected = ['flow-1.f RUNNING', 't2.t PENDING', 't3.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 't1.t RUNNING', 't1.t SUCCESS(5)', 't2.t RUNNING', 't2.t SUCCESS(5)', 't3.t RUNNING', 't3.t SUCCESS(5)', 'flow-1.f SUCCESS'] 
self.assertEqual(expected, capturer.values) def test_resume_flow_that_should_be_retried(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( utils.ProgressingTask('t1'), utils.ProgressingTask('t2') ) engine = self._make_engine(flow) engine.compile() engine.prepare() with utils.CaptureListener(engine) as capturer: engine.storage.set_atom_intention('r1', st.RETRY) engine.storage.set_atom_state('r1', st.SUCCESS) engine.storage.set_atom_state('t1', st.REVERTED) engine.storage.set_atom_state('t2', st.REVERTED) engine.run() expected = ['flow-1.f RUNNING', 'r1.r RETRYING', 't1.t PENDING', 't2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 't1.t RUNNING', 't1.t SUCCESS(5)', 't2.t RUNNING', 't2.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_retry_tasks_that_has_not_been_reverted(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x')).add( utils.ConditionalTask('c'), utils.ProgressingTask('t1') ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine) as capturer: engine.run() expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 'c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'c.t REVERTING', 'c.t REVERTED(None)', 'r1.r RETRYING', 'c.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'c.t RUNNING', 'c.t SUCCESS(None)', 't1.t RUNNING', 't1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_default_times_retry(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( utils.ProgressingTask('t1'), utils.FailingTask('t2')) engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(1)', 't1.t RUNNING', 't1.t SUCCESS(5)', 't2.t RUNNING', 't2.t FAILURE(Failure: RuntimeError: Woot!)', 't2.t REVERTING', 't2.t REVERTED(None)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 't2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 't1.t RUNNING', 't1.t SUCCESS(5)', 't2.t RUNNING', 't2.t FAILURE(Failure: RuntimeError: Woot!)', 't2.t REVERTING', 't2.t REVERTED(None)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 't2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t SUCCESS(5)', 't2.t RUNNING', 't2.t FAILURE(Failure: RuntimeError: Woot!)', 't2.t REVERTING', 't2.t REVERTED(None)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_for_each_with_list(self): collection = [3, 2, 3, 5] retry1 = retry.ForEach(collection, 'r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 't1.t RUNNING', 
't1.t FAILURE(Failure: RuntimeError: Woot with 5)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_for_each_with_set(self): collection = set([3, 2, 5]) retry1 = retry.ForEach(collection, 'r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertCountEqual(capturer.values, expected) def test_nested_for_each_revert(self): collection = [3, 2, 3, 5] retry1 = retry.ForEach(collection, 'r1', provides='x') flow = lf.Flow('flow-1').add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry1).add( utils.FailingTaskWithOneArg('task2') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 2)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 5)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_nested_for_each_revert_all(self): collection = [3, 2, 3, 5] retry1 = retry.ForEach(collection, 'r1', provides='x', revert_all=True) flow = lf.Flow('flow-1').add( utils.ProgressingTask("task1"), lf.Flow('flow-2', retry1).add( utils.FailingTaskWithOneArg('task2') ) ) engine = self._make_engine(flow) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 2)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r RETRYING', 'task2.t PENDING', 'r1.r RUNNING', 
'r1.r SUCCESS(5)', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot with 5)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_for_each_empty_collection(self): values = [] retry1 = retry.ForEach(values, 'r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.ConditionalTask('t1')) engine = self._make_engine(flow) engine.storage.inject({'y': 1}) self.assertRaisesRegex(exc.NotFound, '^No elements left', engine.run) def test_parameterized_for_each_with_list(self): values = [3, 2, 5] retry1 = retry.ParameterizedForEach('r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_parameterized_for_each_with_set(self): values = ([3, 2, 5]) retry1 = retry.ParameterizedForEach('r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r RETRYING', 't1.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 't1.t RUNNING', 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', 't1.t REVERTING', 't1.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertCountEqual(capturer.values, expected) def test_nested_parameterized_for_each_revert(self): values = [3, 2, 5] retry1 = retry.ParameterizedForEach('r1', provides='x') flow = lf.Flow('flow-1').add( utils.ProgressingTask('task-1'), lf.Flow('flow-2', retry1).add( utils.FailingTaskWithOneArg('task-2') ) ) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'task-1.t RUNNING', 'task-1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r RETRYING', 'task-2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 2)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r RETRYING', 
'task-2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 5)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_nested_parameterized_for_each_revert_all(self): values = [3, 2, 5] retry1 = retry.ParameterizedForEach('r1', provides='x', revert_all=True) flow = lf.Flow('flow-1').add( utils.ProgressingTask('task-1'), lf.Flow('flow-2', retry1).add( utils.FailingTaskWithOneArg('task-2') ) ) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) with utils.CaptureListener(engine) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['flow-1.f RUNNING', 'task-1.t RUNNING', 'task-1.t SUCCESS(5)', 'r1.r RUNNING', 'r1.r SUCCESS(3)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 3)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r RETRYING', 'task-2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(2)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 2)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r RETRYING', 'task-2.t PENDING', 'r1.r RUNNING', 'r1.r SUCCESS(5)', 'task-2.t RUNNING', 'task-2.t FAILURE(Failure: RuntimeError: Woot with 5)', 'task-2.t REVERTING', 'task-2.t REVERTED(None)', 'r1.r REVERTING', 'r1.r REVERTED(None)', 'task-1.t REVERTING', 'task-1.t REVERTED(None)', 'flow-1.f REVERTED'] self.assertEqual(expected, capturer.values) def test_parameterized_for_each_empty_collection(self): values = [] retry1 = retry.ParameterizedForEach('r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.ConditionalTask('t1')) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) self.assertRaisesRegex(exc.NotFound, '^No elements left', engine.run) def _pretend_to_run_a_flow_and_crash(self, when): flow = uf.Flow('flow-1', retry.Times(3, provides='x')).add( utils.ProgressingTask('task1')) engine = self._make_engine(flow) engine.compile() engine.prepare() # imagine we run engine engine.storage.set_flow_state(st.RUNNING) engine.storage.set_atom_intention('flow-1_retry', st.EXECUTE) engine.storage.set_atom_intention('task1', st.EXECUTE) # we execute retry engine.storage.save('flow-1_retry', 1) # task fails fail = failure.Failure.from_exception(RuntimeError('foo')) engine.storage.save('task1', fail, state=st.FAILURE) if when == 'task fails': return engine # we save it's failure to retry and ask what to do engine.storage.save_retry_failure('flow-1_retry', 'task1', fail) if when == 'retry queried': return engine # it returned 'RETRY', so we update it's intention engine.storage.set_atom_intention('flow-1_retry', st.RETRY) if when == 'retry updated': return engine # we set task1 intention to REVERT engine.storage.set_atom_intention('task1', st.REVERT) if when == 'task updated': return engine # we schedule task1 for reversion engine.storage.set_atom_state('task1', st.REVERTING) if when == 'revert scheduled': return engine raise ValueError('Invalid crash point: %s' % when) def test_resumption_on_crash_after_task_failure(self): engine = self._pretend_to_run_a_flow_and_crash('task fails') with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1_retry.r RETRYING', 'task1.t PENDING', 'flow-1_retry.r RUNNING', 'flow-1_retry.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, 
capturer.values) def test_resumption_on_crash_after_retry_queried(self): engine = self._pretend_to_run_a_flow_and_crash('retry queried') with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1_retry.r RETRYING', 'task1.t PENDING', 'flow-1_retry.r RUNNING', 'flow-1_retry.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_retry_updated(self): engine = self._pretend_to_run_a_flow_and_crash('retry updated') with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1_retry.r RETRYING', 'task1.t PENDING', 'flow-1_retry.r RUNNING', 'flow-1_retry.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_task_updated(self): engine = self._pretend_to_run_a_flow_and_crash('task updated') with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.t REVERTING', 'task1.t REVERTED(None)', 'flow-1_retry.r RETRYING', 'task1.t PENDING', 'flow-1_retry.r RUNNING', 'flow-1_retry.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_revert_scheduled(self): engine = self._pretend_to_run_a_flow_and_crash('revert scheduled') with utils.CaptureListener(engine) as capturer: engine.run() expected = ['task1.t REVERTED(None)', 'flow-1_retry.r RETRYING', 'task1.t PENDING', 'flow-1_retry.r RUNNING', 'flow-1_retry.r SUCCESS(2)', 'task1.t RUNNING', 'task1.t SUCCESS(5)', 'flow-1.f SUCCESS'] self.assertEqual(expected, capturer.values) def test_retry_fails(self): r = FailingRetry() flow = lf.Flow('testflow', r) engine = self._make_engine(flow) self.assertRaisesRegex(ValueError, '^OMG', engine.run) self.assertEqual(1, len(engine.storage.get_retry_histories())) self.assertEqual(0, len(r.history)) self.assertEqual([], list(r.history.outcomes_iter())) self.assertIsNotNone(r.history.failure) self.assertTrue(r.history.caused_by(ValueError, include_retry=True)) def test_retry_revert_fails(self): r = NastyFailingRetry() flow = lf.Flow('testflow', r) engine = self._make_engine(flow) self.assertRaisesRegex(ValueError, '^WOOT', engine.run) def test_nested_provides_graph_reverts_correctly(self): flow = gf.Flow("test").add( utils.ProgressingTask('a', requires=['x']), lf.Flow("test2", retry=retry.Times(2)).add( utils.ProgressingTask('b', provides='x'), utils.FailingTask('c'))) engine = self._make_engine(flow) engine.compile() engine.prepare() engine.storage.save('test2_retry', 1) engine.storage.save('b', 11) engine.storage.save('a', 10) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) expected = ['c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'a.t REVERTING', 'c.t REVERTING', 'a.t REVERTED(None)', 'c.t REVERTED(None)', 'b.t REVERTING', 'b.t REVERTED(None)'] self.assertCountEqual(capturer.values[:8], expected) # Task 'a' was or was not executed again, both cases are ok. 
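# Hence the assertion below only requires that these events appear, in
# order, as a subsequence of what was captured after the initial revert,
# rather than comparing against an exact event list.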
self.assertIsSuperAndSubsequence(capturer.values[8:], [ 'b.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'b.t REVERTED(None)', ]) self.assertEqual(st.REVERTED, engine.storage.get_flow_state()) def test_nested_provides_graph_retried_correctly(self): flow = gf.Flow("test").add( utils.ProgressingTask('a', requires=['x']), lf.Flow("test2", retry=retry.Times(2)).add( utils.ProgressingTask('b', provides='x'), utils.ProgressingTask('c'))) engine = self._make_engine(flow) engine.compile() engine.prepare() engine.storage.save('test2_retry', 1) engine.storage.save('b', 11) # pretend that 'c' failed fail = failure.Failure.from_exception(RuntimeError('Woot!')) engine.storage.save('c', fail, st.FAILURE) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() expected = ['c.t REVERTING', 'c.t REVERTED(None)', 'b.t REVERTING', 'b.t REVERTED(None)'] self.assertCountEqual(capturer.values[:4], expected) expected = ['test2_retry.r RETRYING', 'b.t PENDING', 'c.t PENDING', 'test2_retry.r RUNNING', 'test2_retry.r SUCCESS(2)', 'b.t RUNNING', 'b.t SUCCESS(5)', 'a.t RUNNING', 'c.t RUNNING', 'a.t SUCCESS(5)', 'c.t SUCCESS(5)'] self.assertCountEqual(expected, capturer.values[4:]) self.assertEqual(st.SUCCESS, engine.storage.get_flow_state()) class RetryParallelExecutionTest(utils.EngineTestBase): # FIXME(harlowja): fix this class so that it doesn't use events or uses # them in a way that works with more executors... def test_when_subflow_fails_revert_running_tasks(self): waiting_task = utils.WaitForOneFromTask('task1', 'task2', [st.SUCCESS, st.FAILURE]) flow = uf.Flow('flow-1', retry.Times(3, 'r', provides='x')).add( waiting_task, utils.ConditionalTask('task2') ) engine = self._make_engine(flow) engine.atom_notifier.register('*', waiting_task.callback) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2}, engine.storage.fetch_all()) expected = ['r.r RUNNING', 'r.r SUCCESS(1)', 'task1.t RUNNING', 'task2.t RUNNING', 'task2.t FAILURE(Failure: RuntimeError: Woot!)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'task1.t SUCCESS(5)', 'task1.t REVERTING', 'task1.t REVERTED(None)', 'r.r RETRYING', 'task1.t PENDING', 'task2.t PENDING', 'r.r RUNNING', 'r.r SUCCESS(2)', 'task1.t RUNNING', 'task2.t RUNNING', 'task2.t SUCCESS(None)', 'task1.t SUCCESS(5)'] self.assertCountEqual(capturer.values, expected) def test_when_subflow_fails_revert_success_tasks(self): waiting_task = utils.WaitForOneFromTask('task2', 'task1', [st.SUCCESS, st.FAILURE]) flow = uf.Flow('flow-1', retry.Times(3, 'r', provides='x')).add( utils.ProgressingTask('task1'), lf.Flow('flow-2').add( waiting_task, utils.ConditionalTask('task3')) ) engine = self._make_engine(flow) engine.atom_notifier.register('*', waiting_task.callback) engine.storage.inject({'y': 2}) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() self.assertEqual({'y': 2, 'x': 2}, engine.storage.fetch_all()) expected = ['r.r RUNNING', 'r.r SUCCESS(1)', 'task1.t RUNNING', 'task2.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 'task3.t FAILURE(Failure: RuntimeError: Woot!)', 'task3.t REVERTING', 'task1.t REVERTING', 'task3.t REVERTED(None)', 'task1.t REVERTED(None)', 'task2.t REVERTING', 'task2.t REVERTED(None)', 'r.r RETRYING', 'task1.t PENDING', 'task2.t PENDING', 'task3.t PENDING', 'r.r RUNNING', 'r.r SUCCESS(2)', 'task1.t RUNNING', 'task2.t RUNNING', 'task1.t SUCCESS(5)', 'task2.t SUCCESS(5)', 'task3.t RUNNING', 
'task3.t SUCCESS(None)'] self.assertCountEqual(capturer.values, expected) class SerialEngineTest(RetryTest, test.TestCase): def _make_engine(self, flow, defer_reverts=None, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine='serial', backend=self.backend, defer_reverts=defer_reverts) class ParallelEngineWithThreadsTest(RetryTest, RetryParallelExecutionTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, defer_reverts=None, flow_detail=None, executor=None): if executor is None: executor = 'threads' return taskflow.engines.load(flow, flow_detail=flow_detail, engine='parallel', backend=self.backend, executor=executor, max_workers=self._EXECUTOR_WORKERS, defer_reverts=defer_reverts) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(RetryTest, test.TestCase): def _make_engine(self, flow, defer_reverts=None, flow_detail=None, executor=None): if executor is None: executor = 'greenthreads' return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor, defer_reverts=defer_reverts) class ParallelEngineWithProcessTest(RetryTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, defer_reverts=None, flow_detail=None, executor=None): if executor is None: executor = 'processes' return taskflow.engines.load(flow, flow_detail=flow_detail, engine='parallel', backend=self.backend, executor=executor, max_workers=self._EXECUTOR_WORKERS, defer_reverts=defer_reverts) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_states.py0000664000175000017500000000711400000000000022274 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import exceptions as excp from taskflow import states from taskflow import test class TestStates(test.TestCase): def test_valid_flow_states(self): for start_state, end_state in states._ALLOWED_FLOW_TRANSITIONS: self.assertTrue(states.check_flow_transition(start_state, end_state)) def test_ignored_flow_states(self): for start_state, end_state in states._IGNORED_FLOW_TRANSITIONS: self.assertFalse(states.check_flow_transition(start_state, end_state)) def test_invalid_flow_states(self): invalids = [ # Not a comprehensive set/listing... 
(states.RUNNING, states.PENDING), (states.REVERTED, states.RUNNING), (states.RESUMING, states.RUNNING), ] for start_state, end_state in invalids: self.assertRaises(excp.InvalidState, states.check_flow_transition, start_state, end_state) def test_valid_job_states(self): for start_state, end_state in states._ALLOWED_JOB_TRANSITIONS: self.assertTrue(states.check_job_transition(start_state, end_state)) def test_ignored_job_states(self): ignored = [] for start_state, end_state in states._ALLOWED_JOB_TRANSITIONS: ignored.append((start_state, start_state)) ignored.append((end_state, end_state)) for start_state, end_state in ignored: self.assertFalse(states.check_job_transition(start_state, end_state)) def test_invalid_job_states(self): invalids = [ (states.COMPLETE, states.UNCLAIMED), (states.UNCLAIMED, states.COMPLETE), ] for start_state, end_state in invalids: self.assertRaises(excp.InvalidState, states.check_job_transition, start_state, end_state) def test_valid_task_states(self): for start_state, end_state in states._ALLOWED_TASK_TRANSITIONS: self.assertTrue(states.check_task_transition(start_state, end_state)) def test_invalid_task_states(self): invalids = [ # Not a comprehensive set/listing... (states.RUNNING, states.PENDING), (states.PENDING, states.REVERTED), (states.PENDING, states.SUCCESS), (states.PENDING, states.FAILURE), (states.RETRYING, states.PENDING), ] for start_state, end_state in invalids: # TODO(harlowja): fix this so that it raises instead of # returning false... self.assertFalse( states.check_task_transition(start_state, end_state)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_storage.py0000664000175000017500000005574000000000000022445 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib import threading from oslo_utils import uuidutils from taskflow import exceptions from taskflow.persistence import backends from taskflow.persistence import models from taskflow import states from taskflow import storage from taskflow import test from taskflow.tests import utils as test_utils from taskflow.types import failure from taskflow.utils import persistence_utils as p_utils class StorageTestMixin(object): def setUp(self): super(StorageTestMixin, self).setUp() self.backend = None self.thread_count = 50 def tearDown(self): with contextlib.closing(self.backend) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() super(StorageTestMixin, self).tearDown() @staticmethod def _run_many_threads(threads): for t in threads: t.start() for t in threads: t.join() def _get_storage(self, flow_detail=None): if flow_detail is None: _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) return storage.Storage(flow_detail=flow_detail, backend=self.backend) def test_non_saving_storage(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = storage.Storage(flow_detail=flow_detail) s.ensure_atom(test_utils.NoopTask('my_task')) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my_task'))) def test_flow_name_uuid_and_meta(self): flow_detail = models.FlowDetail(name='test-fd', uuid='aaaa') flow_detail.meta = {'a': 1} s = self._get_storage(flow_detail) self.assertEqual('test-fd', s.flow_name) self.assertEqual('aaaa', s.flow_uuid) self.assertEqual({'a': 1}, s.flow_meta) def test_ensure_task(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) self.assertEqual(states.PENDING, s.get_atom_state('my task')) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my task'))) def test_get_tasks_states(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.ensure_atom(test_utils.NoopTask('my task2')) s.save('my task', 'foo') expected = { 'my task': (states.SUCCESS, states.EXECUTE), 'my task2': (states.PENDING, states.EXECUTE), } self.assertEqual(expected, s.get_atoms_states(['my task', 'my task2'])) def test_ensure_task_flow_detail(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = self._get_storage(flow_detail) t = test_utils.NoopTask('my task') t.version = (3, 11) s.ensure_atom(t) td = flow_detail.find(s.get_atom_uuid('my task')) self.assertIsNotNone(td) self.assertEqual('my task', td.name) self.assertEqual('3.11', td.version) self.assertEqual(states.PENDING, td.state) def test_get_without_save(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) td = models.TaskDetail(name='my_task', uuid='42') flow_detail.add(td) s = self._get_storage(flow_detail) self.assertEqual('42', s.get_atom_uuid('my_task')) def test_ensure_existing_task(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) td = models.TaskDetail(name='my_task', uuid='42') flow_detail.add(td) s = self._get_storage(flow_detail) s.ensure_atom(test_utils.NoopTask('my_task')) self.assertEqual('42', s.get_atom_uuid('my_task')) def test_save_and_get(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) self.assertEqual(5, s.get('my task')) self.assertEqual({}, s.fetch_all()) self.assertEqual(states.SUCCESS, s.get_atom_state('my task')) def test_save_and_get_cached_failure(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', a_failure, 
states.FAILURE) self.assertEqual(a_failure, s.get('my task')) self.assertEqual(states.FAILURE, s.get_atom_state('my task')) self.assertTrue(s.has_failures()) self.assertEqual({'my task': a_failure}, s.get_failures()) def test_save_and_get_non_cached_failure(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', a_failure, states.FAILURE) self.assertEqual(a_failure, s.get('my task')) s._failures['my task'] = {} self.assertTrue(a_failure.matches(s.get('my task'))) def test_get_failure_from_reverted_task(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', a_failure, states.FAILURE) s.set_atom_state('my task', states.REVERTING) self.assertEqual(a_failure, s.get('my task')) s.set_atom_state('my task', states.REVERTED) self.assertEqual(a_failure, s.get('my task')) def test_get_failure_after_reload(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', a_failure, states.FAILURE) s2 = self._get_storage(s._flowdetail) self.assertTrue(s2.has_failures()) self.assertEqual(1, len(s2.get_failures())) self.assertTrue(a_failure.matches(s2.get('my task'))) self.assertEqual(states.FAILURE, s2.get_atom_state('my task')) def test_get_non_existing_var(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) self.assertRaises(exceptions.NotFound, s.get, 'my task') def test_reset(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) s.reset('my task') self.assertEqual(states.PENDING, s.get_atom_state('my task')) self.assertRaises(exceptions.NotFound, s.get, 'my task') def test_reset_unknown_task(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) self.assertIsNone(s.reset('my task')) def test_fetch_by_name(self): s = self._get_storage() name = 'my result' s.ensure_atom(test_utils.NoopTask('my task', provides=name)) s.save('my task', 5) self.assertEqual(5, s.fetch(name)) self.assertEqual({name: 5}, s.fetch_all()) def test_fetch_unknown_name(self): s = self._get_storage() self.assertRaisesRegex(exceptions.NotFound, "^Name 'xxx' is not mapped", s.fetch, 'xxx') def test_flow_metadata_update(self): s = self._get_storage() update_with = {'test_data': True} s.update_flow_metadata(update_with) self.assertTrue(s._flowdetail.meta['test_data']) def test_task_metadata_update_with_none(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.update_atom_metadata('my task', None) self.assertEqual(0.0, s.get_task_progress('my task')) s.set_task_progress('my task', 0.5) self.assertEqual(0.5, s.get_task_progress('my task')) s.update_atom_metadata('my task', None) self.assertEqual(0.5, s.get_task_progress('my task')) def test_default_task_progress(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) self.assertEqual(0.0, s.get_task_progress('my task')) self.assertIsNone(s.get_task_progress_details('my task')) def test_task_progress(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.set_task_progress('my task', 0.5, {'test_data': 11}) self.assertEqual(0.5, s.get_task_progress('my task')) self.assertEqual({ 'at_progress': 0.5, 'details': {'test_data': 11} }, s.get_task_progress_details('my task')) s.set_task_progress('my task', 0.7, {'test_data': 17}) 
self.assertEqual(0.7, s.get_task_progress('my task')) self.assertEqual({ 'at_progress': 0.7, 'details': {'test_data': 17} }, s.get_task_progress_details('my task')) s.set_task_progress('my task', 0.99) self.assertEqual(0.99, s.get_task_progress('my task')) self.assertEqual({ 'at_progress': 0.7, 'details': {'test_data': 17} }, s.get_task_progress_details('my task')) def test_task_progress_erase(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.set_task_progress('my task', 0.8, {}) self.assertEqual(0.8, s.get_task_progress('my task')) self.assertIsNone(s.get_task_progress_details('my task')) def test_fetch_result_not_ready(self): s = self._get_storage() name = 'my result' s.ensure_atom(test_utils.NoopTask('my task', provides=name)) self.assertRaises(exceptions.NotFound, s.get, name) self.assertEqual({}, s.fetch_all()) def test_save_multiple_results(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task', provides=['foo', 'bar'])) s.save('my task', ('spam', 'eggs')) self.assertEqual({ 'foo': 'spam', 'bar': 'eggs', }, s.fetch_all()) def test_mapping_none(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) self.assertEqual({}, s.fetch_all()) def test_inject(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual('eggs', s.fetch('spam')) self.assertEqual({ 'foo': 'bar', 'spam': 'eggs', }, s.fetch_all()) def test_inject_twice(self): s = self._get_storage() s.inject({'foo': 'bar'}) self.assertEqual({'foo': 'bar'}, s.fetch_all()) s.inject({'spam': 'eggs'}) self.assertEqual({ 'foo': 'bar', 'spam': 'eggs', }, s.fetch_all()) def test_inject_resumed(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) # verify it's there self.assertEqual({ 'foo': 'bar', 'spam': 'eggs', }, s.fetch_all()) # imagine we are resuming, so we need to make new # storage from same flow details s2 = self._get_storage(s._flowdetail) # injected data should still be there: self.assertEqual({ 'foo': 'bar', 'spam': 'eggs', }, s2.fetch_all()) def test_many_thread_ensure_same_task(self): s = self._get_storage() def ensure_my_task(): s.ensure_atom(test_utils.NoopTask('my_task')) threads = [] for i in range(0, self.thread_count): threads.append(threading.Thread(target=ensure_my_task)) self._run_many_threads(threads) # Only one task should have been made, no more. 
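# Storage.ensure_atom() is expected to behave idempotently under
# concurrent callers, so the threads started above should leave behind
# exactly one atom detail for 'my_task'.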
self.assertEqual(1, len(s._flowdetail)) def test_many_thread_inject(self): s = self._get_storage() def inject_values(values): s.inject(values) threads = [] for i in range(0, self.thread_count): values = { str(i): str(i), } threads.append(threading.Thread(target=inject_values, args=[values])) self._run_many_threads(threads) self.assertEqual(self.thread_count, len(s.fetch_all())) self.assertEqual(1, len(s._flowdetail)) def test_fetch_mapped_args(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual({'viking': 'eggs'}, s.fetch_mapped_args({'viking': 'spam'})) def test_fetch_not_found_args(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertRaises(exceptions.NotFound, s.fetch_mapped_args, {'viking': 'helmet'}) def test_fetch_optional_args_found(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual({'viking': 'eggs'}, s.fetch_mapped_args({'viking': 'spam'}, optional_args=set(['viking']))) def test_fetch_optional_args_not_found(self): s = self._get_storage() s.inject({'foo': 'bar', 'spam': 'eggs'}) self.assertEqual({}, s.fetch_mapped_args({'viking': 'helmet'}, optional_args=set(['viking']))) def test_set_and_get_task_state(self): s = self._get_storage() state = states.PENDING s.ensure_atom(test_utils.NoopTask('my task')) s.set_atom_state('my task', state) self.assertEqual(state, s.get_atom_state('my task')) def test_get_state_of_unknown_task(self): s = self._get_storage() self.assertRaisesRegex(exceptions.NotFound, '^Unknown', s.get_atom_state, 'my task') def test_task_by_name(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my task'))) def test_transient_storage_fetch_all(self): s = self._get_storage() s.inject([("a", "b")], transient=True) s.inject([("b", "c")]) results = s.fetch_all() self.assertEqual({"a": "b", "b": "c"}, results) def test_transient_storage_fetch_mapped(self): s = self._get_storage() s.inject([("a", "b")], transient=True) s.inject([("b", "c")]) desired = { 'y': 'a', 'z': 'b', } args = s.fetch_mapped_args(desired) self.assertEqual({'y': 'b', 'z': 'c'}, args) def test_transient_storage_restore(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = self._get_storage(flow_detail=flow_detail) s.inject([("a", "b")], transient=True) s.inject([("b", "c")]) s2 = self._get_storage(flow_detail=flow_detail) results = s2.fetch_all() self.assertEqual({"b": "c"}, results) def test_unknown_task_by_name(self): s = self._get_storage() self.assertRaisesRegex(exceptions.NotFound, '^Unknown atom', s.get_atom_uuid, '42') def test_initial_flow_state(self): s = self._get_storage() self.assertEqual(states.PENDING, s.get_flow_state()) def test_get_flow_state(self): _lb, flow_detail = p_utils.temporary_flow_detail(backend=self.backend) flow_detail.state = states.FAILURE with contextlib.closing(self.backend.get_connection()) as conn: flow_detail.update(conn.update_flow_details(flow_detail)) s = self._get_storage(flow_detail) self.assertEqual(states.FAILURE, s.get_flow_state()) def test_set_and_get_flow_state(self): s = self._get_storage() s.set_flow_state(states.SUCCESS) self.assertEqual(states.SUCCESS, s.get_flow_state()) def test_result_is_checked(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task', provides=set(['result']))) s.save('my task', {}) self.assertRaisesRegex(exceptions.NotFound, '^Unable to find result', s.fetch, 'result') def test_empty_result_is_checked(self): s = 
self._get_storage() s.ensure_atom(test_utils.NoopTask('my task', provides=['a'])) s.save('my task', ()) self.assertRaisesRegex(exceptions.NotFound, '^Unable to find result', s.fetch, 'a') def test_short_result_is_checked(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task', provides=['a', 'b'])) s.save('my task', ['result']) self.assertEqual('result', s.fetch('a')) self.assertRaisesRegex(exceptions.NotFound, '^Unable to find result', s.fetch, 'b') def test_ensure_retry(self): s = self._get_storage() s.ensure_atom(test_utils.NoopRetry('my retry')) history = s.get_retry_history('my retry') self.assertEqual([], list(history)) def test_ensure_retry_and_task_with_same_name(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my retry')) self.assertRaisesRegex(exceptions.Duplicate, '^Atom detail', s.ensure_atom, test_utils.NoopRetry('my retry')) def test_save_retry_results(self): s = self._get_storage() s.ensure_atom(test_utils.NoopRetry('my retry')) s.save('my retry', 'a') s.save('my retry', 'b') history = s.get_retry_history('my retry') self.assertEqual([('a', {}), ('b', {})], list(history)) self.assertEqual(['a', 'b'], list(history.provided_iter())) def test_save_retry_results_with_mapping(self): s = self._get_storage() s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') s.save('my retry', 'b') history = s.get_retry_history('my retry') self.assertEqual([('a', {}), ('b', {})], list(history)) self.assertEqual(['a', 'b'], list(history.provided_iter())) self.assertEqual({'x': 'b'}, s.fetch_all()) self.assertEqual('b', s.fetch('x')) def test_cleanup_retry_history(self): s = self._get_storage() s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') s.save('my retry', 'b') s.cleanup_retry_history('my retry', states.REVERTED) history = s.get_retry_history('my retry') self.assertEqual([], list(history)) self.assertEqual(0, len(history)) self.assertEqual({}, s.fetch_all()) def test_cached_retry_failure(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') s.save('my retry', a_failure, states.FAILURE) history = s.get_retry_history('my retry') self.assertEqual([('a', {})], list(history)) self.assertTrue(history.caused_by(RuntimeError, include_retry=True)) self.assertIsNotNone(history.failure) self.assertEqual(1, len(history)) self.assertTrue(s.has_failures()) self.assertEqual({'my retry': a_failure}, s.get_failures()) def test_logbook_get_unknown_atom_type(self): self.assertRaisesRegex(TypeError, 'Unknown atom', models.atom_detail_class, 'some_detail') def test_save_task_intention(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my task')) s.set_atom_intention('my task', states.REVERT) intention = s.get_atom_intention('my task') self.assertEqual(states.REVERT, intention) def test_save_retry_intention(self): s = self._get_storage() s.ensure_atom(test_utils.NoopTask('my retry')) s.set_atom_intention('my retry', states.RETRY) intention = s.get_atom_intention('my retry') self.assertEqual(states.RETRY, intention) def test_inject_persistent_missing(self): t = test_utils.ProgressingTask('my retry', requires=['x']) s = self._get_storage() s.ensure_atom(t) missing = s.fetch_unsatisfied_args(t.name, t.rebind) self.assertEqual(set(['x']), missing) s.inject_atom_args(t.name, {'x': 2}, transient=False) missing = s.fetch_unsatisfied_args(t.name, t.rebind) 
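# With a persistent value now injected for 'x', no required arguments
# should remain unsatisfied.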
self.assertEqual(set(), missing) args = s.fetch_mapped_args(t.rebind, atom_name=t.name) self.assertEqual(2, args['x']) def test_inject_persistent_and_transient_missing(self): t = test_utils.ProgressingTask('my retry', requires=['x']) s = self._get_storage() s.ensure_atom(t) missing = s.fetch_unsatisfied_args(t.name, t.rebind) self.assertEqual(set(['x']), missing) s.inject_atom_args(t.name, {'x': 2}, transient=False) s.inject_atom_args(t.name, {'x': 3}, transient=True) missing = s.fetch_unsatisfied_args(t.name, t.rebind) self.assertEqual(set(), missing) args = s.fetch_mapped_args(t.rebind, atom_name=t.name) self.assertEqual(3, args['x']) def test_save_fetch(self): t = test_utils.GiveBackRevert('my task') s = self._get_storage() s.ensure_atom(t) s.save('my task', 2) self.assertEqual(2, s.get('my task')) self.assertRaises(exceptions.NotFound, s.get_revert_result, 'my task') def test_save_fetch_revert(self): t = test_utils.GiveBackRevert('my task') s = self._get_storage() s.ensure_atom(t) s.set_atom_intention('my task', states.REVERT) s.save('my task', 2, state=states.REVERTED) self.assertRaises(exceptions.NotFound, s.get, 'my task') self.assertEqual(2, s.get_revert_result('my task')) def test_save_fail_fetch_revert(self): t = test_utils.GiveBackRevert('my task') s = self._get_storage() s.ensure_atom(t) s.set_atom_intention('my task', states.REVERT) a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s.save('my task', a_failure, state=states.REVERT_FAILURE) self.assertEqual(a_failure, s.get_revert_result('my task')) class StorageMemoryTest(StorageTestMixin, test.TestCase): def setUp(self): super(StorageMemoryTest, self).setUp() self.backend = backends.fetch({'connection': 'memory://'}) class StorageSQLTest(StorageTestMixin, test.TestCase): def setUp(self): super(StorageSQLTest, self).setUp() self.backend = backends.fetch({'connection': 'sqlite://'}) with contextlib.closing(self.backend.get_connection()) as conn: conn.upgrade() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_suspend.py0000664000175000017500000002325400000000000022455 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import futurist import testtools import taskflow.engines from taskflow import exceptions as exc from taskflow.patterns import linear_flow as lf from taskflow import states from taskflow import test from taskflow.tests import utils from taskflow.utils import eventlet_utils as eu class SuspendingListener(utils.CaptureListener): def __init__(self, engine, task_name, task_state, capture_flow=False): super(SuspendingListener, self).__init__( engine, capture_flow=capture_flow) self._revert_match = (task_name, task_state) def _task_receiver(self, state, details): super(SuspendingListener, self)._task_receiver(state, details) if (details['task_name'], state) == self._revert_match: self._engine.suspend() class SuspendTest(utils.EngineTestBase): def test_suspend_one_task(self): flow = utils.ProgressingTask('a') engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS) as capturer: engine.run() self.assertEqual(states.SUCCESS, engine.storage.get_flow_state()) expected = ['a.t RUNNING', 'a.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS) as capturer: engine.run() self.assertEqual(states.SUCCESS, engine.storage.get_flow_state()) expected = [] self.assertEqual(expected, capturer.values) def test_suspend_linear_flow(self): flow = lf.Flow('linear').add( utils.ProgressingTask('a'), utils.ProgressingTask('b'), utils.ProgressingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS) as capturer: engine.run() self.assertEqual(states.SUSPENDED, engine.storage.get_flow_state()) expected = ['a.t RUNNING', 'a.t SUCCESS(5)', 'b.t RUNNING', 'b.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) with utils.CaptureListener(engine, capture_flow=False) as capturer: engine.run() self.assertEqual(states.SUCCESS, engine.storage.get_flow_state()) expected = ['c.t RUNNING', 'c.t SUCCESS(5)'] self.assertEqual(expected, capturer.values) def test_suspend_linear_flow_on_revert(self): flow = lf.Flow('linear').add( utils.ProgressingTask('a'), utils.ProgressingTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED) as capturer: engine.run() self.assertEqual(states.SUSPENDED, engine.storage.get_flow_state()) expected = ['a.t RUNNING', 'a.t SUCCESS(5)', 'b.t RUNNING', 'b.t SUCCESS(5)', 'c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'c.t REVERTING', 'c.t REVERTED(None)', 'b.t REVERTING', 'b.t REVERTED(None)'] self.assertEqual(expected, capturer.values) with utils.CaptureListener(engine, capture_flow=False) as capturer: self.assertRaisesRegex(RuntimeError, '^Woot', engine.run) self.assertEqual(states.REVERTED, engine.storage.get_flow_state()) expected = ['a.t REVERTING', 'a.t REVERTED(None)'] self.assertEqual(expected, capturer.values) def test_suspend_and_resume_linear_flow_on_revert(self): flow = lf.Flow('linear').add( utils.ProgressingTask('a'), utils.ProgressingTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED) as capturer: engine.run() expected = ['a.t RUNNING', 'a.t SUCCESS(5)', 'b.t RUNNING', 'b.t SUCCESS(5)', 'c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'c.t REVERTING', 'c.t REVERTED(None)', 'b.t REVERTING', 'b.t REVERTED(None)'] self.assertEqual(expected, capturer.values) # pretend we are resuming engine2 = 
self._make_engine(flow, engine.storage._flowdetail) with utils.CaptureListener(engine2, capture_flow=False) as capturer2: self.assertRaisesRegex(RuntimeError, '^Woot', engine2.run) self.assertEqual(states.REVERTED, engine2.storage.get_flow_state()) expected = ['a.t REVERTING', 'a.t REVERTED(None)'] self.assertEqual(expected, capturer2.values) def test_suspend_and_revert_even_if_task_is_gone(self): flow = lf.Flow('linear').add( utils.ProgressingTask('a'), utils.ProgressingTask('b'), utils.FailingTask('c') ) engine = self._make_engine(flow) with SuspendingListener(engine, task_name='b', task_state=states.REVERTED) as capturer: engine.run() expected = ['a.t RUNNING', 'a.t SUCCESS(5)', 'b.t RUNNING', 'b.t SUCCESS(5)', 'c.t RUNNING', 'c.t FAILURE(Failure: RuntimeError: Woot!)', 'c.t REVERTING', 'c.t REVERTED(None)', 'b.t REVERTING', 'b.t REVERTED(None)'] self.assertEqual(expected, capturer.values) # pretend we are resuming, but task 'c' gone when flow got updated flow2 = lf.Flow('linear').add( utils.ProgressingTask('a'), utils.ProgressingTask('b'), ) engine2 = self._make_engine(flow2, engine.storage._flowdetail) with utils.CaptureListener(engine2, capture_flow=False) as capturer2: self.assertRaisesRegex(RuntimeError, '^Woot', engine2.run) self.assertEqual(states.REVERTED, engine2.storage.get_flow_state()) expected = ['a.t REVERTING', 'a.t REVERTED(None)'] self.assertEqual(expected, capturer2.values) def test_storage_is_rechecked(self): flow = lf.Flow('linear').add( utils.ProgressingTask('b', requires=['foo']), utils.ProgressingTask('c') ) engine = self._make_engine(flow) engine.storage.inject({'foo': 'bar'}) with SuspendingListener(engine, task_name='b', task_state=states.SUCCESS): engine.run() self.assertEqual(states.SUSPENDED, engine.storage.get_flow_state()) # uninject everything: engine.storage.save(engine.storage.injector_name, {}, states.SUCCESS) self.assertRaises(exc.MissingDependencies, engine.run) class SerialEngineTest(SuspendTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, engine='serial', backend=self.backend) class ParallelEngineWithThreadsTest(SuspendTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = 'threads' return taskflow.engines.load(flow, flow_detail=flow_detail, engine='parallel', backend=self.backend, executor=executor, max_workers=self._EXECUTOR_WORKERS) @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') class ParallelEngineWithEventletTest(SuspendTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = futurist.GreenThreadPoolExecutor() self.addCleanup(executor.shutdown) return taskflow.engines.load(flow, flow_detail=flow_detail, backend=self.backend, engine='parallel', executor=executor) class ParallelEngineWithProcessTest(SuspendTest, test.TestCase): _EXECUTOR_WORKERS = 2 def _make_engine(self, flow, flow_detail=None, executor=None): if executor is None: executor = 'processes' return taskflow.engines.load(flow, flow_detail=flow_detail, engine='parallel', backend=self.backend, executor=executor, max_workers=self._EXECUTOR_WORKERS) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_task.py0000664000175000017500000003710700000000000021740 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Hewlett-Packard 
Development Company, L.P. # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import task from taskflow import test from taskflow.test import mock from taskflow.types import notifier class MyTask(task.Task): def execute(self, context, spam, eggs): pass class KwargsTask(task.Task): def execute(self, spam, **kwargs): pass class DefaultArgTask(task.Task): def execute(self, spam, eggs=()): pass class DefaultProvidesTask(task.Task): default_provides = 'def' def execute(self): return None class ProgressTask(task.Task): def execute(self, values, **kwargs): for value in values: self.update_progress(value) class SeparateRevertTask(task.Task): def execute(self, execute_arg): pass def revert(self, revert_arg, result, flow_failures): pass class SeparateRevertOptionalTask(task.Task): def execute(self, execute_arg=None): pass def revert(self, result, flow_failures, revert_arg=None): pass class RevertKwargsTask(task.Task): def execute(self, execute_arg1, execute_arg2): pass def revert(self, execute_arg1, *args, **kwargs): pass class TaskTest(test.TestCase): def test_passed_name(self): my_task = MyTask(name='my name') self.assertEqual('my name', my_task.name) def test_generated_name(self): my_task = MyTask() self.assertEqual('%s.%s' % (__name__, 'MyTask'), my_task.name) def test_task_str(self): my_task = MyTask(name='my') self.assertEqual('my==1.0', str(my_task)) def test_task_repr(self): my_task = MyTask(name='my') self.assertEqual('<%s.MyTask my==1.0>' % __name__, repr(my_task)) def test_no_provides(self): my_task = MyTask() self.assertEqual({}, my_task.save_as) def test_provides(self): my_task = MyTask(provides='food') self.assertEqual({'food': None}, my_task.save_as) def test_multi_provides(self): my_task = MyTask(provides=('food', 'water')) self.assertEqual({'food': 0, 'water': 1}, my_task.save_as) def test_unpack(self): my_task = MyTask(provides=('food',)) self.assertEqual({'food': 0}, my_task.save_as) def test_bad_provides(self): self.assertRaisesRegex(TypeError, '^Atom provides', MyTask, provides=object()) def test_requires_by_default(self): my_task = MyTask() expected = { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['spam', 'eggs', 'context']), my_task.requires) def test_requires_amended(self): my_task = MyTask(requires=('spam', 'eggs')) expected = { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' } self.assertEqual(expected, my_task.rebind) def test_requires_explicit(self): my_task = MyTask(auto_extract=False, requires=('spam', 'eggs', 'context')) expected = { 'spam': 'spam', 'eggs': 'eggs', 'context': 'context' } self.assertEqual(expected, my_task.rebind) def test_requires_explicit_not_enough(self): self.assertRaisesRegex(ValueError, '^Missing arguments', MyTask, auto_extract=False, requires=('spam', 'eggs')) def test_requires_ignores_optional(self): my_task = DefaultArgTask() self.assertEqual(set(['spam']), my_task.requires) self.assertEqual(set(['eggs']), 
my_task.optional) def test_requires_allows_optional(self): my_task = DefaultArgTask(requires=('spam', 'eggs')) self.assertEqual(set(['spam', 'eggs']), my_task.requires) self.assertEqual(set(), my_task.optional) def test_rebind_includes_optional(self): my_task = DefaultArgTask() expected = { 'spam': 'spam', 'eggs': 'eggs', } self.assertEqual(expected, my_task.rebind) def test_rebind_all_args(self): my_task = MyTask(rebind={'spam': 'a', 'eggs': 'b', 'context': 'c'}) expected = { 'spam': 'a', 'eggs': 'b', 'context': 'c' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['a', 'b', 'c']), my_task.requires) def test_rebind_partial(self): my_task = MyTask(rebind={'spam': 'a', 'eggs': 'b'}) expected = { 'spam': 'a', 'eggs': 'b', 'context': 'context' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['a', 'b', 'context']), my_task.requires) def test_rebind_unknown(self): self.assertRaisesRegex(ValueError, '^Extra arguments', MyTask, rebind={'foo': 'bar'}) def test_rebind_unknown_kwargs(self): my_task = KwargsTask(rebind={'foo': 'bar'}) expected = { 'foo': 'bar', 'spam': 'spam' } self.assertEqual(expected, my_task.rebind) def test_rebind_list_all(self): my_task = MyTask(rebind=('a', 'b', 'c')) expected = { 'context': 'a', 'spam': 'b', 'eggs': 'c' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['a', 'b', 'c']), my_task.requires) def test_rebind_list_partial(self): my_task = MyTask(rebind=('a', 'b')) expected = { 'context': 'a', 'spam': 'b', 'eggs': 'eggs' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['a', 'b', 'eggs']), my_task.requires) def test_rebind_list_more(self): self.assertRaisesRegex(ValueError, '^Extra arguments', MyTask, rebind=('a', 'b', 'c', 'd')) def test_rebind_list_more_kwargs(self): my_task = KwargsTask(rebind=('a', 'b', 'c')) expected = { 'spam': 'a', 'b': 'b', 'c': 'c' } self.assertEqual(expected, my_task.rebind) self.assertEqual(set(['a', 'b', 'c']), my_task.requires) def test_rebind_list_bad_value(self): self.assertRaisesRegex(TypeError, '^Invalid rebind value', MyTask, rebind=object()) def test_default_provides(self): my_task = DefaultProvidesTask() self.assertEqual(set(['def']), my_task.provides) self.assertEqual({'def': None}, my_task.save_as) def test_default_provides_can_be_overridden(self): my_task = DefaultProvidesTask(provides=('spam', 'eggs')) self.assertEqual(set(['spam', 'eggs']), my_task.provides) self.assertEqual({'spam': 0, 'eggs': 1}, my_task.save_as) def test_update_progress_within_bounds(self): values = [0.0, 0.5, 1.0] result = [] def progress_callback(event_type, details): result.append(details.pop('progress')) a_task = ProgressTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback) a_task.execute(values) self.assertEqual(values, result) @mock.patch.object(task.LOG, 'warning') def test_update_progress_lower_bound(self, mocked_warning): result = [] def progress_callback(event_type, details): result.append(details.pop('progress')) a_task = ProgressTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback) a_task.execute([-1.0, -0.5, 0.0]) self.assertEqual([0.0, 0.0, 0.0], result) self.assertEqual(2, mocked_warning.call_count) @mock.patch.object(task.LOG, 'warning') def test_update_progress_upper_bound(self, mocked_warning): result = [] def progress_callback(event_type, details): result.append(details.pop('progress')) a_task = ProgressTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback) a_task.execute([1.0, 1.5, 2.0]) 
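# Progress values above 1.0 should be clamped to the upper bound, with a
# warning logged for each out-of-range value (1.5 and 2.0 here).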
self.assertEqual([1.0, 1.0, 1.0], result) self.assertEqual(2, mocked_warning.call_count) @mock.patch.object(notifier.LOG, 'warning') def test_update_progress_handler_failure(self, mocked_warning): def progress_callback(*args, **kwargs): raise Exception('Woot!') a_task = ProgressTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback) a_task.execute([0.5]) self.assertEqual(1, mocked_warning.call_count) def test_register_handler_is_none(self): a_task = MyTask() self.assertRaises(ValueError, a_task.notifier.register, task.EVENT_UPDATE_PROGRESS, None) self.assertEqual(0, len(a_task.notifier)) def test_deregister_any_handler(self): a_task = MyTask() self.assertEqual(0, len(a_task.notifier)) a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, lambda event_type, details: None) self.assertEqual(1, len(a_task.notifier)) a_task.notifier.deregister_event(task.EVENT_UPDATE_PROGRESS) self.assertEqual(0, len(a_task.notifier)) def test_deregister_any_handler_empty_listeners(self): a_task = MyTask() self.assertEqual(0, len(a_task.notifier)) self.assertFalse(a_task.notifier.deregister_event( task.EVENT_UPDATE_PROGRESS)) self.assertEqual(0, len(a_task.notifier)) def test_deregister_non_existent_listener(self): handler1 = lambda event_type, details: None handler2 = lambda event_type, details: None a_task = MyTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1) self.assertEqual(1, len(list(a_task.notifier.listeners_iter()))) a_task.notifier.deregister(task.EVENT_UPDATE_PROGRESS, handler2) self.assertEqual(1, len(list(a_task.notifier.listeners_iter()))) a_task.notifier.deregister(task.EVENT_UPDATE_PROGRESS, handler1) self.assertEqual(0, len(list(a_task.notifier.listeners_iter()))) def test_bind_not_callable(self): a_task = MyTask() self.assertRaises(ValueError, a_task.notifier.register, task.EVENT_UPDATE_PROGRESS, 2) def test_copy_no_listeners(self): handler1 = lambda event_type, details: None a_task = MyTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1) b_task = a_task.copy(retain_listeners=False) self.assertEqual(1, len(a_task.notifier)) self.assertEqual(0, len(b_task.notifier)) def test_copy_listeners(self): handler1 = lambda event_type, details: None handler2 = lambda event_type, details: None a_task = MyTask() a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1) b_task = a_task.copy() self.assertEqual(1, len(b_task.notifier)) self.assertTrue(a_task.notifier.deregister_event( task.EVENT_UPDATE_PROGRESS)) self.assertEqual(0, len(a_task.notifier)) self.assertEqual(1, len(b_task.notifier)) b_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler2) listeners = dict(list(b_task.notifier.listeners_iter())) self.assertEqual(2, len(listeners[task.EVENT_UPDATE_PROGRESS])) self.assertEqual(0, len(a_task.notifier)) def test_separate_revert_args(self): my_task = SeparateRevertTask(rebind=('a',), revert_rebind=('b',)) self.assertEqual({'execute_arg': 'a'}, my_task.rebind) self.assertEqual({'revert_arg': 'b'}, my_task.revert_rebind) self.assertEqual(set(['a', 'b']), my_task.requires) my_task = SeparateRevertTask(requires='execute_arg', revert_requires='revert_arg') self.assertEqual({'execute_arg': 'execute_arg'}, my_task.rebind) self.assertEqual({'revert_arg': 'revert_arg'}, my_task.revert_rebind) self.assertEqual(set(['execute_arg', 'revert_arg']), my_task.requires) def test_separate_revert_optional_args(self): my_task = SeparateRevertOptionalTask() self.assertEqual(set(['execute_arg']), my_task.optional) self.assertEqual(set(['revert_arg']), 
my_task.revert_optional) def test_revert_kwargs(self): my_task = RevertKwargsTask() expected_rebind = {'execute_arg1': 'execute_arg1', 'execute_arg2': 'execute_arg2'} self.assertEqual(expected_rebind, my_task.rebind) expected_rebind = {'execute_arg1': 'execute_arg1'} self.assertEqual(expected_rebind, my_task.revert_rebind) self.assertEqual(set(['execute_arg1', 'execute_arg2']), my_task.requires) class FunctorTaskTest(test.TestCase): def test_creation_with_version(self): version = (2, 0) f_task = task.FunctorTask(lambda: None, version=version) self.assertEqual(version, f_task.version) def test_execute_not_callable(self): self.assertRaises(ValueError, task.FunctorTask, 2) def test_revert_not_callable(self): self.assertRaises(ValueError, task.FunctorTask, lambda: None, revert=2) class ReduceFunctorTaskTest(test.TestCase): def test_invalid_functor(self): # Functor not callable self.assertRaises(ValueError, task.ReduceFunctorTask, 2, requires=5) # Functor takes no arguments self.assertRaises(ValueError, task.ReduceFunctorTask, lambda: None, requires=5) # Functor takes too few arguments self.assertRaises(ValueError, task.ReduceFunctorTask, lambda x: None, requires=5) def test_functor_invalid_requires(self): # Invalid type, requires is not iterable self.assertRaises(TypeError, task.ReduceFunctorTask, lambda x, y: None, requires=1) # Too few elements in requires self.assertRaises(ValueError, task.ReduceFunctorTask, lambda x, y: None, requires=[1]) class MapFunctorTaskTest(test.TestCase): def test_invalid_functor(self): # Functor not callable self.assertRaises(ValueError, task.MapFunctorTask, 2, requires=5) # Functor takes no arguments self.assertRaises(ValueError, task.MapFunctorTask, lambda: None, requires=5) # Functor takes too many arguments self.assertRaises(ValueError, task.MapFunctorTask, lambda x, y: None, requires=5) def test_functor_invalid_requires(self): # Invalid type, requires is not iterable self.assertRaises(TypeError, task.MapFunctorTask, lambda x: None, requires=1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_types.py0000664000175000017500000004743300000000000022145 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import networkx as nx from six.moves import cPickle as pickle from taskflow import test from taskflow.types import graph from taskflow.types import sets from taskflow.types import timing from taskflow.types import tree class TimingTest(test.TestCase): def test_convert_fail(self): for baddie in ["abc123", "-1", "", object()]: self.assertRaises(ValueError, timing.convert_to_timeout, baddie) def test_convert_noop(self): t = timing.convert_to_timeout(1.0) t2 = timing.convert_to_timeout(t) self.assertEqual(t, t2) def test_interrupt(self): t = timing.convert_to_timeout(1.0) self.assertFalse(t.is_stopped()) t.interrupt() self.assertTrue(t.is_stopped()) def test_reset(self): t = timing.convert_to_timeout(1.0) t.interrupt() self.assertTrue(t.is_stopped()) t.reset() self.assertFalse(t.is_stopped()) def test_values(self): for v, e_v in [("1.0", 1.0), (1, 1.0), ("2.0", 2.0)]: t = timing.convert_to_timeout(v) self.assertEqual(e_v, t.value) def test_fail(self): self.assertRaises(ValueError, timing.Timeout, -1) class GraphTest(test.TestCase): def test_no_successors_no_predecessors(self): g = graph.DiGraph() g.add_node("a") g.add_node("b") g.add_node("c") g.add_edge("b", "c") self.assertEqual(set(['a', 'b']), set(g.no_predecessors_iter())) self.assertEqual(set(['a', 'c']), set(g.no_successors_iter())) def test_directed(self): g = graph.DiGraph() g.add_node("a") g.add_node("b") g.add_edge("a", "b") self.assertTrue(g.is_directed_acyclic()) g.add_edge("b", "a") self.assertFalse(g.is_directed_acyclic()) def test_frozen(self): g = graph.DiGraph() self.assertFalse(g.frozen) g.add_node("b") g.freeze() self.assertRaises(nx.NetworkXError, g.add_node, "c") def test_merge(self): g = graph.DiGraph() g.add_node("a") g.add_node("b") g2 = graph.DiGraph() g2.add_node('c') g3 = graph.merge_graphs(g, g2) self.assertEqual(3, len(g3)) def test_pydot_output(self): # NOTE(harlowja): ensure we use the ordered types here, otherwise # the expected output will vary based on randomized hashing and then # the test will fail randomly... 
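# The ordered graph types preserve node/edge insertion order, which keeps
# the exported DOT text deterministic across runs.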
for graph_cls, kind, edge in [(graph.OrderedDiGraph, 'digraph', '->'), (graph.OrderedGraph, 'graph', '--')]: g = graph_cls(name='test') g.add_node("a") g.add_node("b") g.add_node("c") g.add_edge("a", "b") g.add_edge("b", "c") expected = """ strict %(kind)s "test" { a; b; c; a %(edge)s b; b %(edge)s c; } """ % ({'kind': kind, 'edge': edge}) self.assertEqual(expected.lstrip(), g.export_to_dot()) def test_merge_edges(self): g = graph.DiGraph() g.add_node("a") g.add_node("b") g.add_edge('a', 'b') g2 = graph.DiGraph() g2.add_node('c') g2.add_node('d') g2.add_edge('c', 'd') g3 = graph.merge_graphs(g, g2) self.assertEqual(4, len(g3)) self.assertTrue(g3.has_edge('c', 'd')) self.assertTrue(g3.has_edge('a', 'b')) def test_overlap_detector(self): g = graph.DiGraph() g.add_node("a") g.add_node("b") g.add_edge('a', 'b') g2 = graph.DiGraph() g2.add_node('a') g2.add_node('d') g2.add_edge('a', 'd') self.assertRaises(ValueError, graph.merge_graphs, g, g2) def occurrence_detector(to_graph, from_graph): return sum(1 for node in from_graph.nodes if node in to_graph) self.assertRaises(ValueError, graph.merge_graphs, g, g2, overlap_detector=occurrence_detector) g3 = graph.merge_graphs(g, g2, allow_overlaps=True) self.assertEqual(3, len(g3)) self.assertTrue(g3.has_edge('a', 'b')) self.assertTrue(g3.has_edge('a', 'd')) def test_invalid_detector(self): g = graph.DiGraph() g.add_node("a") g2 = graph.DiGraph() g2.add_node('c') self.assertRaises(ValueError, graph.merge_graphs, g, g2, overlap_detector='b') class TreeTest(test.TestCase): def _make_species(self): # This is the following tree: # # animal # |__mammal # | |__horse # | |__primate # | |__monkey # | |__human # |__reptile a = tree.Node("animal") m = tree.Node("mammal") r = tree.Node("reptile") a.add(m) a.add(r) m.add(tree.Node("horse")) p = tree.Node("primate") m.add(p) p.add(tree.Node("monkey")) p.add(tree.Node("human")) return a def test_pformat_species(self): root = self._make_species() expected = """ animal |__mammal | |__horse | |__primate | |__monkey | |__human |__reptile """ self.assertEqual(expected.strip(), root.pformat()) def test_pformat_flat(self): root = tree.Node("josh") root.add(tree.Node("josh.1")) expected = """ josh |__josh.1 """ self.assertEqual(expected.strip(), root.pformat()) root[0].add(tree.Node("josh.1.1")) expected = """ josh |__josh.1 |__josh.1.1 """ self.assertEqual(expected.strip(), root.pformat()) root[0][0].add(tree.Node("josh.1.1.1")) expected = """ josh |__josh.1 |__josh.1.1 |__josh.1.1.1 """ self.assertEqual(expected.strip(), root.pformat()) root[0][0][0].add(tree.Node("josh.1.1.1.1")) expected = """ josh |__josh.1 |__josh.1.1 |__josh.1.1.1 |__josh.1.1.1.1 """ self.assertEqual(expected.strip(), root.pformat()) def test_pformat_partial_species(self): root = self._make_species() expected = """ reptile """ self.assertEqual(expected.strip(), root[1].pformat()) expected = """ mammal |__horse |__primate |__monkey |__human """ self.assertEqual(expected.strip(), root[0].pformat()) expected = """ primate |__monkey |__human """ self.assertEqual(expected.strip(), root[0][1].pformat()) expected = """ monkey """ self.assertEqual(expected.strip(), root[0][1][0].pformat()) def test_pformat(self): root = tree.Node("CEO") expected = """ CEO """ self.assertEqual(expected.strip(), root.pformat()) root.add(tree.Node("Infra")) expected = """ CEO |__Infra """ self.assertEqual(expected.strip(), root.pformat()) root[0].add(tree.Node("Infra.1")) expected = """ CEO |__Infra |__Infra.1 """ self.assertEqual(expected.strip(), root.pformat()) 
root.add(tree.Node("Mail")) expected = """ CEO |__Infra | |__Infra.1 |__Mail """ self.assertEqual(expected.strip(), root.pformat()) root.add(tree.Node("Search")) expected = """ CEO |__Infra | |__Infra.1 |__Mail |__Search """ self.assertEqual(expected.strip(), root.pformat()) root[-1].add(tree.Node("Search.1")) expected = """ CEO |__Infra | |__Infra.1 |__Mail |__Search |__Search.1 """ self.assertEqual(expected.strip(), root.pformat()) root[-1].add(tree.Node("Search.2")) expected = """ CEO |__Infra | |__Infra.1 |__Mail |__Search |__Search.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[0].add(tree.Node("Infra.2")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 |__Mail |__Search |__Search.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[0].add(tree.Node("Infra.3")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 | |__Infra.3 |__Mail |__Search |__Search.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[0][-1].add(tree.Node("Infra.3.1")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 | |__Infra.3 | |__Infra.3.1 |__Mail |__Search |__Search.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[-1][0].add(tree.Node("Search.1.1")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 | |__Infra.3 | |__Infra.3.1 |__Mail |__Search |__Search.1 | |__Search.1.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[1].add(tree.Node("Mail.1")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 | |__Infra.3 | |__Infra.3.1 |__Mail | |__Mail.1 |__Search |__Search.1 | |__Search.1.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) root[1][0].add(tree.Node("Mail.1.1")) expected = """ CEO |__Infra | |__Infra.1 | |__Infra.2 | |__Infra.3 | |__Infra.3.1 |__Mail | |__Mail.1 | |__Mail.1.1 |__Search |__Search.1 | |__Search.1.1 |__Search.2 """ self.assertEqual(expected.strip(), root.pformat()) def test_path(self): root = self._make_species() human = root.find("human") self.assertIsNotNone(human) p = list([n.item for n in human.path_iter()]) self.assertEqual(['human', 'primate', 'mammal', 'animal'], p) def test_empty(self): root = tree.Node("josh") self.assertTrue(root.empty()) def test_after_frozen(self): root = tree.Node("josh") root.add(tree.Node("josh.1")) root.freeze() self.assertTrue( all(n.frozen for n in root.dfs_iter(include_self=True))) self.assertRaises(tree.FrozenNode, root.remove, "josh.1") self.assertRaises(tree.FrozenNode, root.disassociate) self.assertRaises(tree.FrozenNode, root.add, tree.Node("josh.2")) def test_removal(self): root = self._make_species() self.assertIsNotNone(root.remove('reptile')) self.assertRaises(ValueError, root.remove, 'reptile') self.assertIsNone(root.find('reptile')) def test_removal_direct(self): root = self._make_species() self.assertRaises(ValueError, root.remove, 'human', only_direct=True) def test_removal_self(self): root = self._make_species() n = root.find('horse') self.assertIsNotNone(n.parent) n.remove('horse', include_self=True) self.assertIsNone(n.parent) self.assertIsNone(root.find('horse')) def test_disassociate(self): root = self._make_species() n = root.find('horse') self.assertIsNotNone(n.parent) c = n.disassociate() self.assertEqual(1, c) self.assertIsNone(n.parent) self.assertIsNone(root.find('horse')) def test_disassociate_many(self): root = self._make_species() n = root.find('horse') n.parent.add(n) n.parent.add(n) c = n.disassociate() self.assertEqual(3, c) self.assertIsNone(n.parent) 
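# (Added note: disassociate() severs every parent->child link that still
# points at this node (the same node was deliberately re-added to its
# parent twice above) and returns how many links it removed, hence the
# expected count of 3.)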
self.assertIsNone(root.find('horse')) def test_not_empty(self): root = self._make_species() self.assertFalse(root.empty()) def test_node_count(self): root = self._make_species() self.assertEqual(7, 1 + root.child_count(only_direct=False)) def test_index(self): root = self._make_species() self.assertEqual(0, root.index("mammal")) self.assertEqual(1, root.index("reptile")) def test_contains(self): root = self._make_species() self.assertIn("monkey", root) self.assertNotIn("bird", root) def test_freeze(self): root = self._make_species() root.freeze() self.assertRaises(tree.FrozenNode, root.add, "bird") def test_find(self): root = self._make_species() self.assertIsNone(root.find('monkey', only_direct=True)) self.assertIsNotNone(root.find('monkey', only_direct=False)) self.assertIsNotNone(root.find('animal', only_direct=True)) self.assertIsNotNone(root.find('reptile', only_direct=True)) self.assertIsNone(root.find('animal', include_self=False)) self.assertIsNone(root.find('animal', include_self=False, only_direct=True)) def test_dfs_itr(self): root = self._make_species() things = list([n.item for n in root.dfs_iter(include_self=True)]) self.assertEqual(set(['animal', 'reptile', 'mammal', 'horse', 'primate', 'monkey', 'human']), set(things)) def test_dfs_itr_left_to_right(self): root = self._make_species() it = root.dfs_iter(include_self=False, right_to_left=False) things = list([n.item for n in it]) self.assertEqual(['reptile', 'mammal', 'primate', 'human', 'monkey', 'horse'], things) def test_dfs_itr_no_self(self): root = self._make_species() things = list([n.item for n in root.dfs_iter(include_self=False)]) self.assertEqual(['mammal', 'horse', 'primate', 'monkey', 'human', 'reptile'], things) def test_bfs_itr(self): root = self._make_species() things = list([n.item for n in root.bfs_iter(include_self=True)]) self.assertEqual(['animal', 'reptile', 'mammal', 'primate', 'horse', 'human', 'monkey'], things) def test_bfs_itr_no_self(self): root = self._make_species() things = list([n.item for n in root.bfs_iter(include_self=False)]) self.assertEqual(['reptile', 'mammal', 'primate', 'horse', 'human', 'monkey'], things) def test_bfs_itr_right_to_left(self): root = self._make_species() it = root.bfs_iter(include_self=False, right_to_left=True) things = list([n.item for n in it]) self.assertEqual(['mammal', 'reptile', 'horse', 'primate', 'monkey', 'human'], things) def test_to_diagraph(self): root = self._make_species() g = root.to_digraph() self.assertEqual(root.child_count(only_direct=False) + 1, len(g)) for node in root.dfs_iter(include_self=True): self.assertIn(node.item, g) self.assertEqual([], list(g.predecessors('animal'))) self.assertEqual(['animal'], list(g.predecessors('reptile'))) self.assertEqual(['primate'], list(g.predecessors('human'))) self.assertEqual(['mammal'], list(g.predecessors('primate'))) self.assertEqual(['animal'], list(g.predecessors('mammal'))) self.assertEqual(['mammal', 'reptile'], list(g.successors('animal'))) def test_to_digraph_retains_metadata(self): root = tree.Node("chickens", alive=True) dead_chicken = tree.Node("chicken.1", alive=False) root.add(dead_chicken) g = root.to_digraph() self.assertEqual(g.nodes['chickens'], {'alive': True}) self.assertEqual(g.nodes['chicken.1'], {'alive': False}) class OrderedSetTest(test.TestCase): def test_pickleable(self): items = [10, 9, 8, 7] s = sets.OrderedSet(items) self.assertEqual(items, list(s)) s_bin = pickle.dumps(s) s2 = pickle.loads(s_bin) self.assertEqual(s, s2) self.assertEqual(items, list(s2)) def 
test_retain_ordering(self): items = [10, 9, 8, 7] s = sets.OrderedSet(iter(items)) self.assertEqual(items, list(s)) def test_retain_duplicate_ordering(self): items = [10, 9, 10, 8, 9, 7, 8] s = sets.OrderedSet(iter(items)) self.assertEqual([10, 9, 8, 7], list(s)) def test_length(self): items = [10, 9, 8, 7] s = sets.OrderedSet(iter(items)) self.assertEqual(4, len(s)) def test_duplicate_length(self): items = [10, 9, 10, 8, 9, 7, 8] s = sets.OrderedSet(iter(items)) self.assertEqual(4, len(s)) def test_contains(self): items = [10, 9, 8, 7] s = sets.OrderedSet(iter(items)) for i in items: self.assertIn(i, s) def test_copy(self): items = [10, 9, 8, 7] s = sets.OrderedSet(iter(items)) s2 = s.copy() self.assertEqual(s, s2) self.assertEqual(items, list(s2)) def test_empty_intersection(self): s = sets.OrderedSet([1, 2, 3]) es = set(s) self.assertEqual(es.intersection(), s.intersection()) def test_intersection(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3, 4, 5]) es = set(s) es2 = set(s2) self.assertEqual(es.intersection(es2), s.intersection(s2)) self.assertEqual(es2.intersection(s), s2.intersection(s)) def test_multi_intersection(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3, 4, 5]) s3 = sets.OrderedSet([1, 2]) es = set(s) es2 = set(s2) es3 = set(s3) self.assertEqual(es.intersection(s2, s3), s.intersection(s2, s3)) self.assertEqual(es2.intersection(es3), s2.intersection(s3)) def test_superset(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3]) self.assertTrue(s.issuperset(s2)) self.assertFalse(s.issubset(s2)) def test_subset(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3]) self.assertTrue(s2.issubset(s)) self.assertFalse(s2.issuperset(s)) def test_empty_difference(self): s = sets.OrderedSet([1, 2, 3]) es = set(s) self.assertEqual(es.difference(), s.difference()) def test_difference(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3]) es = set(s) es2 = set(s2) self.assertEqual(es.difference(es2), s.difference(s2)) self.assertEqual(es2.difference(es), s2.difference(s)) def test_multi_difference(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3]) s3 = sets.OrderedSet([3, 4, 5]) es = set(s) es2 = set(s2) es3 = set(s3) self.assertEqual(es3.difference(es), s3.difference(s)) self.assertEqual(es.difference(es3), s.difference(s3)) self.assertEqual(es2.difference(es, es3), s2.difference(s, s3)) def test_empty_union(self): s = sets.OrderedSet([1, 2, 3]) es = set(s) self.assertEqual(es.union(), s.union()) def test_union(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3, 4]) es = set(s) es2 = set(s2) self.assertEqual(es.union(es2), s.union(s2)) self.assertEqual(es2.union(es), s2.union(s)) def test_multi_union(self): s = sets.OrderedSet([1, 2, 3]) s2 = sets.OrderedSet([2, 3, 4]) s3 = sets.OrderedSet([4, 5, 6]) es = set(s) es2 = set(s2) es3 = set(s3) self.assertEqual(es.union(es2, es3), s.union(s2, s3)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_utils.py0000664000175000017500000002544400000000000022137 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import inspect import random import string import time import six import testscenarios from taskflow import test from taskflow.utils import misc from taskflow.utils import threading_utils class CachedPropertyTest(test.TestCase): def test_attribute_caching(self): class A(object): def __init__(self): self.call_counter = 0 @misc.cachedproperty def b(self): self.call_counter += 1 return 'b' a = A() self.assertEqual('b', a.b) self.assertEqual('b', a.b) self.assertEqual(1, a.call_counter) def test_custom_property(self): class A(object): @misc.cachedproperty('_c') def b(self): return 'b' a = A() self.assertEqual('b', a.b) self.assertEqual('b', a._c) def test_no_delete(self): def try_del(a): del a.b class A(object): @misc.cachedproperty def b(self): return 'b' a = A() self.assertEqual('b', a.b) self.assertRaises(AttributeError, try_del, a) self.assertEqual('b', a.b) def test_set(self): def try_set(a): a.b = 'c' class A(object): @misc.cachedproperty def b(self): return 'b' a = A() self.assertEqual('b', a.b) self.assertRaises(AttributeError, try_set, a) self.assertEqual('b', a.b) def test_documented_property(self): class A(object): @misc.cachedproperty def b(self): """I like bees.""" return 'b' self.assertEqual("I like bees.", inspect.getdoc(A.b)) def test_undocumented_property(self): class A(object): @misc.cachedproperty def b(self): return 'b' self.assertIsNone(inspect.getdoc(A.b)) def test_threaded_access_property(self): called = collections.deque() class A(object): @misc.cachedproperty def b(self): called.append(1) # NOTE(harlowja): wait for a little and give some time for # another thread to potentially also get in this method to # also create the same property... 
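# (Added note: the sleep below deliberately widens that race window; if
# the cachedproperty descriptor were not thread safe, several of the 20
# threads started afterwards could each run this body, the `called` deque
# would grow past a single entry, and the final assertion would fail.)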
time.sleep(random.random() * 0.5) return 'b' a = A() threads = [] try: for _i in range(0, 20): t = threading_utils.daemon_thread(lambda: a.b) threads.append(t) for t in threads: t.start() finally: while threads: t = threads.pop() t.join() self.assertEqual(1, len(called)) self.assertEqual('b', a.b) class UriParseTest(test.TestCase): def test_parse(self): url = "zookeeper://192.168.0.1:2181/a/b/?c=d" parsed = misc.parse_uri(url) self.assertEqual('zookeeper', parsed.scheme) self.assertEqual(2181, parsed.port) self.assertEqual('192.168.0.1', parsed.hostname) self.assertEqual('', parsed.fragment) self.assertEqual('/a/b/', parsed.path) self.assertEqual({'c': 'd'}, parsed.params()) def test_port_provided(self): url = "rabbitmq://www.yahoo.com:5672" parsed = misc.parse_uri(url) self.assertEqual('rabbitmq', parsed.scheme) self.assertEqual('www.yahoo.com', parsed.hostname) self.assertEqual(5672, parsed.port) self.assertEqual('', parsed.path) def test_ipv6_host(self): url = "rsync://[2001:db8:0:1]:873" parsed = misc.parse_uri(url) self.assertEqual('rsync', parsed.scheme) self.assertEqual('2001:db8:0:1', parsed.hostname) self.assertEqual(873, parsed.port) def test_user_password(self): url = "rsync://test:test_pw@www.yahoo.com:873" parsed = misc.parse_uri(url) self.assertEqual('test', parsed.username) self.assertEqual('test_pw', parsed.password) self.assertEqual('www.yahoo.com', parsed.hostname) def test_user(self): url = "rsync://test@www.yahoo.com:873" parsed = misc.parse_uri(url) self.assertEqual('test', parsed.username) self.assertIsNone(parsed.password) class TestSequenceMinus(test.TestCase): def test_simple_case(self): result = misc.sequence_minus([1, 2, 3, 4], [2, 3]) self.assertEqual([1, 4], result) def test_subtrahend_has_extra_elements(self): result = misc.sequence_minus([1, 2, 3, 4], [2, 3, 5, 7, 13]) self.assertEqual([1, 4], result) def test_some_items_are_equal(self): result = misc.sequence_minus([1, 1, 1, 1], [1, 1, 3]) self.assertEqual([1, 1], result) def test_equal_items_not_continious(self): result = misc.sequence_minus([1, 2, 3, 1], [1, 3]) self.assertEqual([2, 1], result) class TestReversedEnumerate(testscenarios.TestWithScenarios, test.TestCase): scenarios = [ ('ten', {'sample': [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}), ('empty', {'sample': []}), ('negative', {'sample': [-1, -2, -3]}), ('one', {'sample': [1]}), ('abc', {'sample': ['a', 'b', 'c']}), ('ascii_letters', {'sample': list(string.ascii_letters)}), ] def test_sample_equivalence(self): expected = list(reversed(list(enumerate(self.sample)))) actual = list(misc.reverse_enumerate(self.sample)) self.assertEqual(expected, actual) class TestCountdownIter(test.TestCase): def test_expected_count(self): upper = 100 it = misc.countdown_iter(upper) items = [] for i in it: self.assertEqual(upper, i) upper -= 1 items.append(i) self.assertEqual(0, upper) self.assertEqual(100, len(items)) def test_no_count(self): it = misc.countdown_iter(0) self.assertEqual(0, len(list(it))) it = misc.countdown_iter(-1) self.assertEqual(0, len(list(it))) def test_expected_count_custom_decr(self): upper = 100 it = misc.countdown_iter(upper, decr=2) items = [] for i in it: self.assertEqual(upper, i) upper -= 2 items.append(i) self.assertEqual(0, upper) self.assertEqual(50, len(items)) def test_invalid_decr(self): it = misc.countdown_iter(10, -1) self.assertRaises(ValueError, six.next, it) class TestMergeUri(test.TestCase): def test_merge(self): url = "http://www.yahoo.com/?a=b&c=d" parsed = misc.parse_uri(url) joined = misc.merge_uri(parsed, {}) 
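# (Added note: merge_uri() folds the fields parsed out of the URI (query
# parameters, hostname, credentials) into the supplied dict; the tests
# that follow also show that keys already present in that dict are kept
# rather than overwritten.)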
self.assertEqual('b', joined.get('a')) self.assertEqual('d', joined.get('c')) self.assertEqual('www.yahoo.com', joined.get('hostname')) def test_merge_existing_hostname(self): url = "http://www.yahoo.com/" parsed = misc.parse_uri(url) joined = misc.merge_uri(parsed, {'hostname': 'b.com'}) self.assertEqual('b.com', joined.get('hostname')) def test_merge_user_password(self): url = "http://josh:harlow@www.yahoo.com/" parsed = misc.parse_uri(url) joined = misc.merge_uri(parsed, {}) self.assertEqual('www.yahoo.com', joined.get('hostname')) self.assertEqual('josh', joined.get('username')) self.assertEqual('harlow', joined.get('password')) def test_merge_user_password_existing(self): url = "http://josh:harlow@www.yahoo.com/" parsed = misc.parse_uri(url) existing = { 'username': 'joe', 'password': 'biggie', } joined = misc.merge_uri(parsed, existing) self.assertEqual('www.yahoo.com', joined.get('hostname')) self.assertEqual('joe', joined.get('username')) self.assertEqual('biggie', joined.get('password')) class TestClamping(test.TestCase): def test_simple_clamp(self): result = misc.clamp(1.0, 2.0, 3.0) self.assertEqual(2.0, result) result = misc.clamp(4.0, 2.0, 3.0) self.assertEqual(3.0, result) result = misc.clamp(3.0, 4.0, 4.0) self.assertEqual(4.0, result) def test_invalid_clamp(self): self.assertRaises(ValueError, misc.clamp, 0.0, 2.0, 1.0) def test_clamped_callback(self): calls = [] def on_clamped(): calls.append(True) misc.clamp(-1, 0.0, 1.0, on_clamped=on_clamped) self.assertEqual(1, len(calls)) calls.pop() misc.clamp(0.0, 0.0, 1.0, on_clamped=on_clamped) self.assertEqual(0, len(calls)) misc.clamp(2, 0.0, 1.0, on_clamped=on_clamped) self.assertEqual(1, len(calls)) class TestIterable(test.TestCase): def test_string_types(self): self.assertFalse(misc.is_iterable('string')) self.assertFalse(misc.is_iterable(u'string')) def test_list(self): self.assertTrue(misc.is_iterable(list())) def test_tuple(self): self.assertTrue(misc.is_iterable(tuple())) def test_dict(self): self.assertTrue(misc.is_iterable(dict())) class TestSafeCopyDict(testscenarios.TestWithScenarios): scenarios = [ ('none', {'original': None, 'expected': {}}), ('empty_dict', {'original': {}, 'expected': {}}), ('empty_list', {'original': [], 'expected': {}}), ('dict', {'original': {'a': 1, 'b': 2}, 'expected': {'a': 1, 'b': 2}}), ] def test_expected(self): self.assertEqual(self.expected, misc.safe_copy_dict(self.original)) self.assertFalse(self.expected is misc.safe_copy_dict(self.original)) def test_mutated_post_copy(self): a = {"a": "b"} a_2 = misc.safe_copy_dict(a) a['a'] = 'c' self.assertEqual("b", a_2['a']) self.assertEqual("c", a['a']) class TestSafeCopyDictRaises(testscenarios.TestWithScenarios): scenarios = [ ('list', {'original': [1, 2], 'exception': TypeError}), ('tuple', {'original': (1, 2), 'exception': TypeError}), ('set', {'original': set([1, 2]), 'exception': TypeError}), ] def test_exceptions(self): self.assertRaises(self.exception, misc.safe_copy_dict, self.original) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_utils_async_utils.py0000664000175000017500000000172500000000000024550 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow import test from taskflow.utils import async_utils as au class MakeCompletedFutureTest(test.TestCase): def test_make_completed_future(self): result = object() future = au.make_completed_future(result) self.assertTrue(future.done()) self.assertIs(future.result(), result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_utils_binary.py0000664000175000017500000000627300000000000023502 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from taskflow import test from taskflow.utils import misc def _bytes(data): if six.PY3: return data.encode(encoding='utf-8') else: return data class BinaryEncodeTest(test.TestCase): def _check(self, data, expected_result): result = misc.binary_encode(data) self.assertIsInstance(result, six.binary_type) self.assertEqual(expected_result, result) def test_simple_binary(self): data = _bytes('hello') self._check(data, data) def test_unicode_binary(self): data = _bytes('привет') self._check(data, data) def test_simple_text(self): self._check(u'hello', _bytes('hello')) def test_unicode_text(self): self._check(u'привет', _bytes('привет')) def test_unicode_other_encoding(self): result = misc.binary_encode(u'mañana', 'latin-1') self.assertIsInstance(result, six.binary_type) self.assertEqual(u'mañana'.encode('latin-1'), result) class BinaryDecodeTest(test.TestCase): def _check(self, data, expected_result): result = misc.binary_decode(data) self.assertIsInstance(result, six.text_type) self.assertEqual(expected_result, result) def test_simple_text(self): data = u'hello' self._check(data, data) def test_unicode_text(self): data = u'привет' self._check(data, data) def test_simple_binary(self): self._check(_bytes('hello'), u'hello') def test_unicode_binary(self): self._check(_bytes('привет'), u'привет') def test_unicode_other_encoding(self): data = u'mañana'.encode('latin-1') result = misc.binary_decode(data, 'latin-1') self.assertIsInstance(result, six.text_type) self.assertEqual(u'mañana', result) class DecodeJsonTest(test.TestCase): def test_it_works(self): self.assertEqual({"foo": 1}, misc.decode_json(_bytes('{"foo": 1}'))) def test_it_works_with_unicode(self): data = _bytes('{"foo": "фуу"}') self.assertEqual({"foo": u'фуу'}, misc.decode_json(data)) def test_handles_invalid_unicode(self): self.assertRaises(ValueError, misc.decode_json, six.b('{"\xf1": 1}')) def test_handles_bad_json(self): self.assertRaises(ValueError, misc.decode_json, _bytes('{"foo":')) def 
test_handles_wrong_types(self): self.assertRaises(ValueError, misc.decode_json, _bytes('42')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_utils_iter_utils.py0000664000175000017500000001307600000000000024400 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import string import six from six.moves import range as compat_range from taskflow import test from taskflow.utils import iter_utils def forever_it(): i = 0 while True: yield i i += 1 class IterUtilsTest(test.TestCase): def test_fill_empty(self): self.assertEqual([], list(iter_utils.fill([1, 2, 3], 0))) def test_bad_unique_seen(self): iters = [ ['a', 'b'], 2, None, object(), ] self.assertRaises(ValueError, iter_utils.unique_seen, iters) def test_generate_delays(self): it = iter_utils.generate_delays(1, 60) self.assertEqual(1, six.next(it)) self.assertEqual(2, six.next(it)) self.assertEqual(4, six.next(it)) self.assertEqual(8, six.next(it)) self.assertEqual(16, six.next(it)) self.assertEqual(32, six.next(it)) self.assertEqual(60, six.next(it)) self.assertEqual(60, six.next(it)) def test_generate_delays_custom_multiplier(self): it = iter_utils.generate_delays(1, 60, multiplier=4) self.assertEqual(1, six.next(it)) self.assertEqual(4, six.next(it)) self.assertEqual(16, six.next(it)) self.assertEqual(60, six.next(it)) self.assertEqual(60, six.next(it)) def test_generate_delays_bad(self): self.assertRaises(ValueError, iter_utils.generate_delays, -1, -1) self.assertRaises(ValueError, iter_utils.generate_delays, -1, 2) self.assertRaises(ValueError, iter_utils.generate_delays, 2, -1) self.assertRaises(ValueError, iter_utils.generate_delays, 1, 1, multiplier=0.5) def test_unique_seen(self): iters = [ ['a', 'b'], ['a', 'c', 'd'], ['a', 'e', 'f'], ['f', 'm', 'n'], ] self.assertEqual(['a', 'b', 'c', 'd', 'e', 'f', 'm', 'n'], list(iter_utils.unique_seen(iters))) def test_unique_seen_empty(self): iters = [] self.assertEqual([], list(iter_utils.unique_seen(iters))) def test_unique_seen_selector(self): iters = [ [(1, 'a'), (1, 'a')], [(2, 'b')], [(3, 'c')], [(1, 'a'), (3, 'c')], ] it = iter_utils.unique_seen(iters, seen_selector=lambda value: value[0]) self.assertEqual([(1, 'a'), (2, 'b'), (3, 'c')], list(it)) def test_bad_fill(self): self.assertRaises(ValueError, iter_utils.fill, 2, 2) def test_fill_many_empty(self): result = list(iter_utils.fill(compat_range(0, 50), 500)) self.assertEqual(450, sum(1 for x in result if x is None)) self.assertEqual(50, sum(1 for x in result if x is not None)) def test_fill_custom_filler(self): self.assertEqual("abcd", "".join(iter_utils.fill("abc", 4, filler='d'))) def test_fill_less_needed(self): self.assertEqual("ab", "".join(iter_utils.fill("abc", 2))) def test_fill(self): self.assertEqual([None, None], list(iter_utils.fill([], 2))) self.assertEqual((None, None), tuple(iter_utils.fill([], 2))) def 
test_bad_find_first_match(self): self.assertRaises(ValueError, iter_utils.find_first_match, 2, lambda v: False) def test_find_first_match(self): it = forever_it() self.assertEqual(100, iter_utils.find_first_match(it, lambda v: v == 100)) def test_find_first_match_not_found(self): it = iter(string.ascii_lowercase) self.assertIsNone(iter_utils.find_first_match(it, lambda v: v == '')) def test_bad_count(self): self.assertRaises(ValueError, iter_utils.count, 2) def test_count(self): self.assertEqual(0, iter_utils.count([])) self.assertEqual(1, iter_utils.count(['a'])) self.assertEqual(10, iter_utils.count(compat_range(0, 10))) self.assertEqual(1000, iter_utils.count(compat_range(0, 1000))) self.assertEqual(0, iter_utils.count(compat_range(0))) self.assertEqual(0, iter_utils.count(compat_range(-1))) def test_bad_while_is_not(self): self.assertRaises(ValueError, iter_utils.while_is_not, 2, 'a') def test_while_is_not(self): it = iter(string.ascii_lowercase) self.assertEqual(['a'], list(iter_utils.while_is_not(it, 'a'))) it = iter(string.ascii_lowercase) self.assertEqual(['a', 'b'], list(iter_utils.while_is_not(it, 'b'))) self.assertEqual(list(string.ascii_lowercase[2:]), list(iter_utils.while_is_not(it, 'zzz'))) it = iter(string.ascii_lowercase) self.assertEqual(list(string.ascii_lowercase), list(iter_utils.while_is_not(it, ''))) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/test_utils_threading_utils.py0000664000175000017500000001257700000000000025407 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
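# ---------------------------------------------------------------------
# Added illustrative sketch (not part of the original test suite): the
# iter_utils helpers exercised above are small building blocks for
# retry/backoff style loops. This is a minimal, hedged example of how
# generate_delays() could be sliced into a bounded schedule; the name
# _example_backoff_schedule is invented purely for illustration.
# ---------------------------------------------------------------------
import itertools

from taskflow.utils import iter_utils


def _example_backoff_schedule(max_attempts=5):
    # generate_delays(1, 60) yields 1, 2, 4, 8, ... and then repeats the
    # 60 second cap forever, so slice it down to the attempts we want.
    return list(itertools.islice(iter_utils.generate_delays(1, 60),
                                 max_attempts))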
import collections import functools import threading import time from taskflow import test from taskflow.utils import threading_utils as tu def _spinner(death): while not death.is_set(): time.sleep(0.1) class TestThreadHelpers(test.TestCase): def test_alive_thread_falsey(self): for v in [False, 0, None, ""]: self.assertFalse(tu.is_alive(v)) def test_alive_thread(self): death = threading.Event() t = tu.daemon_thread(_spinner, death) self.assertFalse(tu.is_alive(t)) t.start() self.assertTrue(tu.is_alive(t)) death.set() t.join() self.assertFalse(tu.is_alive(t)) def test_daemon_thread(self): death = threading.Event() t = tu.daemon_thread(_spinner, death) self.assertTrue(t.daemon) class TestThreadBundle(test.TestCase): thread_count = 5 def setUp(self): super(TestThreadBundle, self).setUp() self.bundle = tu.ThreadBundle() self.death = threading.Event() self.addCleanup(self.bundle.stop) self.addCleanup(self.death.set) def test_bind_invalid(self): self.assertRaises(ValueError, self.bundle.bind, 1) for k in ['after_start', 'before_start', 'before_join', 'after_join']: kwargs = { k: 1, } self.assertRaises(ValueError, self.bundle.bind, lambda: tu.daemon_thread(_spinner, self.death), **kwargs) def test_bundle_length(self): self.assertEqual(0, len(self.bundle)) for i in range(0, self.thread_count): self.bundle.bind(lambda: tu.daemon_thread(_spinner, self.death)) self.assertEqual(1, self.bundle.start()) self.assertEqual(i + 1, len(self.bundle)) self.death.set() self.assertEqual(self.thread_count, self.bundle.stop()) self.assertEqual(self.thread_count, len(self.bundle)) def test_start_stop_order(self): start_events = collections.deque() death_events = collections.deque() def before_start(i, t): start_events.append((i, 'bs')) def before_join(i, t): death_events.append((i, 'bj')) self.death.set() def after_start(i, t): start_events.append((i, 'as')) def after_join(i, t): death_events.append((i, 'aj')) for i in range(0, self.thread_count): self.bundle.bind(lambda: tu.daemon_thread(_spinner, self.death), before_join=functools.partial(before_join, i), after_join=functools.partial(after_join, i), before_start=functools.partial(before_start, i), after_start=functools.partial(after_start, i)) self.assertEqual(self.thread_count, self.bundle.start()) self.assertEqual(self.thread_count, len(self.bundle)) self.assertEqual(self.thread_count, self.bundle.stop()) self.assertEqual(0, self.bundle.stop()) self.assertTrue(self.death.is_set()) expected_start_events = [] for i in range(0, self.thread_count): expected_start_events.extend([ (i, 'bs'), (i, 'as'), ]) self.assertEqual(expected_start_events, list(start_events)) expected_death_events = [] j = self.thread_count - 1 for _i in range(0, self.thread_count): expected_death_events.extend([ (j, 'bj'), (j, 'aj'), ]) j -= 1 self.assertEqual(expected_death_events, list(death_events)) def test_start_stop(self): events = collections.deque() def before_start(t): events.append('bs') def before_join(t): events.append('bj') self.death.set() def after_start(t): events.append('as') def after_join(t): events.append('aj') for _i in range(0, self.thread_count): self.bundle.bind(lambda: tu.daemon_thread(_spinner, self.death), before_join=before_join, after_join=after_join, before_start=before_start, after_start=after_start) self.assertEqual(self.thread_count, self.bundle.start()) self.assertEqual(self.thread_count, len(self.bundle)) self.assertEqual(self.thread_count, self.bundle.stop()) for event in ['as', 'bs', 'bj', 'aj']: self.assertEqual(self.thread_count, len([e for e in events if 
e == event])) self.assertEqual(0, self.bundle.stop()) self.assertTrue(self.death.is_set()) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6520426 taskflow-4.6.4/taskflow/tests/unit/worker_based/0000775000175000017500000000000000000000000022024 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/__init__.py0000664000175000017500000000000000000000000024123 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_creation.py0000664000175000017500000000771500000000000025253 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from taskflow.engines.worker_based import engine from taskflow.engines.worker_based import executor from taskflow.patterns import linear_flow as lf from taskflow.persistence import backends from taskflow import test from taskflow.test import mock from taskflow.tests import utils from taskflow.utils import persistence_utils as pu class TestWorkerBasedActionEngine(test.MockTestCase): @staticmethod def _create_engine(**kwargs): flow = lf.Flow('test-flow').add(utils.DummyTask()) backend = backends.fetch({'connection': 'memory'}) flow_detail = pu.create_flow_detail(flow, backend=backend) options = kwargs.copy() return engine.WorkerBasedActionEngine(flow, flow_detail, backend, options) def _patch_in_executor(self): executor_mock, executor_inst_mock = self.patchClass( engine.executor, 'WorkerTaskExecutor', attach_as='executor') return executor_mock, executor_inst_mock def test_creation_default(self): executor_mock, executor_inst_mock = self._patch_in_executor() eng = self._create_engine() expected_calls = [ mock.call.executor_class(uuid=eng.storage.flow_uuid, url=None, exchange='default', topics=[], transport=None, transport_options=None, transition_timeout=mock.ANY, retry_options=None, worker_expiry=mock.ANY) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_creation_custom(self): executor_mock, executor_inst_mock = self._patch_in_executor() topics = ['test-topic1', 'test-topic2'] exchange = 'test-exchange' broker_url = 'test-url' eng = self._create_engine( url=broker_url, exchange=exchange, transport='memory', transport_options={}, transition_timeout=200, topics=topics, retry_options={}, worker_expiry=1) expected_calls = [ mock.call.executor_class(uuid=eng.storage.flow_uuid, url=broker_url, exchange=exchange, topics=topics, transport='memory', transport_options={}, transition_timeout=200, retry_options={}, worker_expiry=1) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_creation_custom_executor(self): ex = executor.WorkerTaskExecutor('a', 'test-exchange', ['test-topic']) eng = self._create_engine(executor=ex) 
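# (Added note: supplying a ready-made executor through the engine options
# means the engine adopts that exact WorkerTaskExecutor instance instead
# of building its own; the identity assertion that follows checks
# precisely that.)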
self.assertIs(eng._task_executor, ex) self.assertIsInstance(eng._task_executor, executor.WorkerTaskExecutor) def test_creation_invalid_custom_executor(self): self.assertRaises(TypeError, self._create_engine, executor=2) self.assertRaises(TypeError, self._create_engine, executor='blah') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_dispatcher.py0000664000175000017500000000554300000000000025572 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. try: from kombu import message # noqa except ImportError: from kombu.transport import base as message from taskflow.engines.worker_based import dispatcher from taskflow import test from taskflow.test import mock def mock_acked_message(ack_ok=True, **kwargs): msg = mock.create_autospec(message.Message, spec_set=True, instance=True, channel=None, **kwargs) def ack_side_effect(*args, **kwargs): msg.acknowledged = True if ack_ok: msg.ack_log_error.side_effect = ack_side_effect msg.acknowledged = False return msg class TestDispatcher(test.TestCase): def test_creation(self): on_hello = mock.MagicMock() handlers = {'hello': dispatcher.Handler(on_hello)} dispatcher.TypeDispatcher(type_handlers=handlers) def test_on_message(self): on_hello = mock.MagicMock() handlers = {'hello': dispatcher.Handler(on_hello)} d = dispatcher.TypeDispatcher(type_handlers=handlers) msg = mock_acked_message(properties={'type': 'hello'}) d.on_message("", msg) self.assertTrue(on_hello.called) self.assertTrue(msg.ack_log_error.called) self.assertTrue(msg.acknowledged) def test_on_rejected_message(self): d = dispatcher.TypeDispatcher() msg = mock_acked_message(properties={'type': 'hello'}) d.on_message("", msg) self.assertTrue(msg.reject_log_error.called) self.assertFalse(msg.acknowledged) def test_on_requeue_message(self): d = dispatcher.TypeDispatcher() d.requeue_filters.append(lambda data, message: True) msg = mock_acked_message() d.on_message("", msg) self.assertTrue(msg.requeue.called) self.assertFalse(msg.acknowledged) def test_failed_ack(self): on_hello = mock.MagicMock() handlers = {'hello': dispatcher.Handler(on_hello)} d = dispatcher.TypeDispatcher(type_handlers=handlers) msg = mock_acked_message(ack_ok=False, properties={'type': 'hello'}) d.on_message("", msg) self.assertTrue(msg.ack_log_error.called) self.assertFalse(msg.acknowledged) self.assertFalse(on_hello.called) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_endpoint.py0000664000175000017500000000607000000000000025260 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import reflection from taskflow.engines.worker_based import endpoint as ep from taskflow import task from taskflow import test from taskflow.tests import utils class Task(task.Task): def __init__(self, a, *args, **kwargs): super(Task, self).__init__(*args, **kwargs) def execute(self, *args, **kwargs): pass class TestEndpoint(test.TestCase): def setUp(self): super(TestEndpoint, self).setUp() self.task_cls = utils.TaskOneReturn self.task_uuid = 'task-uuid' self.task_args = {'context': 'context'} self.task_cls_name = reflection.get_class_name(self.task_cls) self.task_ep = ep.Endpoint(self.task_cls) self.task_result = 1 def test_creation(self): task = self.task_ep.generate() self.assertEqual(self.task_cls_name, self.task_ep.name) self.assertIsInstance(task, self.task_cls) self.assertEqual(self.task_cls_name, task.name) def test_creation_with_task_name(self): task_name = 'test' task = self.task_ep.generate(name=task_name) self.assertEqual(self.task_cls_name, self.task_ep.name) self.assertIsInstance(task, self.task_cls) self.assertEqual(task_name, task.name) def test_creation_task_with_constructor_args(self): # NOTE(skudriashev): Exception is expected here since task # is created without any arguments passing to its constructor. endpoint = ep.Endpoint(Task) self.assertRaises(TypeError, endpoint.generate) def test_to_str(self): self.assertEqual(self.task_cls_name, str(self.task_ep)) def test_execute(self): task = self.task_ep.generate(self.task_cls_name) result = self.task_ep.execute(task, task_uuid=self.task_uuid, arguments=self.task_args, progress_callback=None) self.assertEqual(self.task_result, result) def test_revert(self): task = self.task_ep.generate(self.task_cls_name) result = self.task_ep.revert(task, task_uuid=self.task_uuid, arguments=self.task_args, progress_callback=None, result=self.task_result, failures={}) self.assertIsNone(result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_executor.py0000664000175000017500000003246600000000000025306 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
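# ---------------------------------------------------------------------
# Added illustrative sketch (not part of the original test suite): the
# Endpoint wrapper exercised in test_endpoint.py above couples a task
# class with helpers to instantiate it and run it the way a worker
# server would. A minimal, hedged usage example with a throwaway task
# class; the names _ExampleNoop and _example_run_endpoint are invented
# purely for illustration, and how arguments reach execute() follows the
# calls shown in the tests above.
# ---------------------------------------------------------------------
from taskflow.engines.worker_based import endpoint as example_ep
from taskflow import task as example_task


class _ExampleNoop(example_task.Task):
    def execute(self):
        return 'done'


def _example_run_endpoint():
    e = example_ep.Endpoint(_ExampleNoop)
    t = e.generate(name='noop-1')
    # Endpoint.execute() mirrors what a worker does when a request
    # arrives: run the generated task and hand back its result.
    return e.execute(t, task_uuid='uuid-1', arguments={},
                     progress_callback=None)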
import threading import time from taskflow.engines.worker_based import executor from taskflow.engines.worker_based import protocol as pr from taskflow import task as task_atom from taskflow import test from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.types import failure class TestWorkerTaskExecutor(test.MockTestCase): def setUp(self): super(TestWorkerTaskExecutor, self).setUp() self.task = test_utils.DummyTask() self.task_uuid = 'task-uuid' self.task_args = {'a': 'a'} self.task_result = 'task-result' self.task_failures = {} self.timeout = 60 self.broker_url = 'broker-url' self.executor_uuid = 'executor-uuid' self.executor_exchange = 'executor-exchange' self.executor_topic = 'test-topic1' self.proxy_started_event = threading.Event() # patch classes self.proxy_mock, self.proxy_inst_mock = self.patchClass( executor.proxy, 'Proxy') self.request_mock, self.request_inst_mock = self.patchClass( executor.pr, 'Request', autospec=False) # other mocking self.proxy_inst_mock.start.side_effect = self._fake_proxy_start self.proxy_inst_mock.stop.side_effect = self._fake_proxy_stop self.request_inst_mock.uuid = self.task_uuid self.request_inst_mock.expired = False self.request_inst_mock.created_on = 0 self.request_inst_mock.task_cls = self.task.name self.message_mock = mock.MagicMock(name='message') self.message_mock.properties = {'correlation_id': self.task_uuid, 'type': pr.RESPONSE} def _fake_proxy_start(self): self.proxy_started_event.set() while self.proxy_started_event.is_set(): time.sleep(0.01) def _fake_proxy_stop(self): self.proxy_started_event.clear() def executor(self, reset_master_mock=True, **kwargs): executor_kwargs = dict(uuid=self.executor_uuid, exchange=self.executor_exchange, topics=[self.executor_topic], url=self.broker_url) executor_kwargs.update(kwargs) ex = executor.WorkerTaskExecutor(**executor_kwargs) if reset_master_mock: self.resetMasterMock() return ex def test_creation(self): ex = self.executor(reset_master_mock=False) master_mock_calls = [ mock.call.Proxy(self.executor_uuid, self.executor_exchange, on_wait=ex._on_wait, url=self.broker_url, transport=mock.ANY, transport_options=mock.ANY, retry_options=mock.ANY), mock.call.proxy.dispatcher.type_handlers.update(mock.ANY), ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_on_message_response_state_running(self): response = pr.Response(pr.RUNNING) ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) expected_calls = [ mock.call.transition_and_log_error(pr.RUNNING, logger=mock.ANY), ] self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_state_progress(self): response = pr.Response(pr.EVENT, event_type=task_atom.EVENT_UPDATE_PROGRESS, details={'progress': 1.0}) ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) expected_calls = [ mock.call.task.notifier.notify(task_atom.EVENT_UPDATE_PROGRESS, {'progress': 1.0}), ] self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_state_failure(self): a_failure = failure.Failure.from_exception(Exception('test')) failure_dict = a_failure.to_dict() response = pr.Response(pr.FAILURE, result=failure_dict) ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) self.assertEqual(0, 
len(ex._ongoing_requests)) expected_calls = [ mock.call.transition_and_log_error(pr.FAILURE, logger=mock.ANY), mock.call.set_result(result=test_utils.FailureMatcher(a_failure)) ] self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_state_success(self): response = pr.Response(pr.SUCCESS, result=self.task_result, event='executed') ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) expected_calls = [ mock.call.transition_and_log_error(pr.SUCCESS, logger=mock.ANY), mock.call.set_result(result=self.task_result) ] self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_unknown_state(self): response = pr.Response(state='') ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) self.assertEqual([], self.request_inst_mock.mock_calls) def test_on_message_response_unknown_task(self): self.message_mock.properties['correlation_id'] = '' response = pr.Response(pr.RUNNING) ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) self.assertEqual([], self.request_inst_mock.mock_calls) def test_on_message_response_no_correlation_id(self): self.message_mock.properties = {'type': pr.RESPONSE} response = pr.Response(pr.RUNNING) ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) self.assertEqual([], self.request_inst_mock.mock_calls) def test_on_wait_task_not_expired(self): ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock self.assertEqual(1, len(ex._ongoing_requests)) ex._on_wait() self.assertEqual(1, len(ex._ongoing_requests)) @mock.patch('oslo_utils.timeutils.now') def test_on_wait_task_expired(self, mock_now): mock_now.side_effect = [0, 120] self.request_inst_mock.expired = True self.request_inst_mock.created_on = 0 ex = self.executor() ex._ongoing_requests[self.task_uuid] = self.request_inst_mock self.assertEqual(1, len(ex._ongoing_requests)) ex._on_wait() self.assertEqual(0, len(ex._ongoing_requests)) def test_execute_task(self): ex = self.executor() ex._finder._add(self.executor_topic, [self.task.name]) ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', self.task_args, timeout=self.timeout, result=mock.ANY, failures=mock.ANY), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), mock.call.proxy.publish(self.request_inst_mock, self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_revert_task(self): ex = self.executor() ex._finder._add(self.executor_topic, [self.task.name]) ex.revert_task(self.task, self.task_uuid, self.task_args, self.task_result, self.task_failures) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'revert', self.task_args, timeout=self.timeout, failures=self.task_failures, result=self.task_result), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), mock.call.proxy.publish(self.request_inst_mock, self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_execute_task_topic_not_found(self): ex 
= self.executor() ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', self.task_args, timeout=self.timeout, result=mock.ANY, failures=mock.ANY), ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_execute_task_publish_error(self): self.proxy_inst_mock.publish.side_effect = Exception('Woot!') ex = self.executor() ex._finder._add(self.executor_topic, [self.task.name]) ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', self.task_args, timeout=self.timeout, result=mock.ANY, failures=mock.ANY), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), mock.call.proxy.publish(self.request_inst_mock, self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid), mock.call.request.transition_and_log_error(pr.FAILURE, logger=mock.ANY), mock.call.request.set_result(mock.ANY) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_start_stop(self): ex = self.executor() ex.start() # make sure proxy thread started self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # stop executor ex.stop() self.master_mock.assert_has_calls([ mock.call.proxy.start(), mock.call.proxy.wait(), mock.call.proxy.stop() ], any_order=True) def test_start_already_running(self): ex = self.executor() ex.start() # make sure proxy thread started self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # start executor again self.assertRaises(RuntimeError, ex.start) # stop executor ex.stop() self.master_mock.assert_has_calls([ mock.call.proxy.start(), mock.call.proxy.wait(), mock.call.proxy.stop() ], any_order=True) def test_stop_not_running(self): self.executor().stop() self.assertEqual([], self.master_mock.mock_calls) def test_stop_not_alive(self): self.proxy_inst_mock.start.side_effect = None # start executor ex = self.executor() ex.start() # stop executor ex.stop() # since proxy thread is already done - stop is not called self.master_mock.assert_has_calls([ mock.call.proxy.start(), mock.call.proxy.wait() ], any_order=True) def test_restart(self): ex = self.executor() ex.start() # make sure thread started self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # restart executor ex.stop() ex.start() # make sure thread started self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # stop executor ex.stop() self.master_mock.assert_has_calls([ mock.call.proxy.start(), mock.call.proxy.wait(), mock.call.proxy.stop(), mock.call.proxy.start(), mock.call.proxy.wait(), mock.call.proxy.stop() ], any_order=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_message_pump.py0000664000175000017500000001143600000000000026127 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import threading from oslo_utils import uuidutils from taskflow.engines.worker_based import dispatcher from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy from taskflow import test from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.types import latch from taskflow.utils import threading_utils TEST_EXCHANGE, TEST_TOPIC = ('test-exchange', 'test-topic') POLLING_INTERVAL = 0.01 class TestMessagePump(test.TestCase): def test_notify(self): barrier = threading.Event() on_notify = mock.MagicMock() on_notify.side_effect = lambda *args, **kwargs: barrier.set() handlers = {pr.NOTIFY: dispatcher.Handler(on_notify)} p = proxy.Proxy(TEST_TOPIC, TEST_EXCHANGE, handlers, transport='memory', transport_options={ 'polling_interval': POLLING_INTERVAL, }) t = threading_utils.daemon_thread(p.start) t.start() p.wait() p.publish(pr.Notify(), TEST_TOPIC) self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) p.stop() t.join() self.assertTrue(on_notify.called) on_notify.assert_called_with({}, mock.ANY) def test_response(self): barrier = threading.Event() on_response = mock.MagicMock() on_response.side_effect = lambda *args, **kwargs: barrier.set() handlers = {pr.RESPONSE: dispatcher.Handler(on_response)} p = proxy.Proxy(TEST_TOPIC, TEST_EXCHANGE, handlers, transport='memory', transport_options={ 'polling_interval': POLLING_INTERVAL, }) t = threading_utils.daemon_thread(p.start) t.start() p.wait() resp = pr.Response(pr.RUNNING) p.publish(resp, TEST_TOPIC) self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) self.assertTrue(barrier.is_set()) p.stop() t.join() self.assertTrue(on_response.called) on_response.assert_called_with(resp.to_dict(), mock.ANY) def test_multi_message(self): message_count = 30 barrier = latch.Latch(message_count) countdown = lambda data, message: barrier.countdown() on_notify = mock.MagicMock() on_notify.side_effect = countdown on_response = mock.MagicMock() on_response.side_effect = countdown on_request = mock.MagicMock() on_request.side_effect = countdown handlers = { pr.NOTIFY: dispatcher.Handler(on_notify), pr.RESPONSE: dispatcher.Handler(on_response), pr.REQUEST: dispatcher.Handler(on_request), } p = proxy.Proxy(TEST_TOPIC, TEST_EXCHANGE, handlers, transport='memory', transport_options={ 'polling_interval': POLLING_INTERVAL, }) t = threading_utils.daemon_thread(p.start) t.start() p.wait() for i in range(0, message_count): j = i % 3 if j == 0: p.publish(pr.Notify(), TEST_TOPIC) elif j == 1: p.publish(pr.Response(pr.RUNNING), TEST_TOPIC) else: p.publish(pr.Request(test_utils.DummyTask("dummy_%s" % i), uuidutils.generate_uuid(), pr.EXECUTE, [], None), TEST_TOPIC) self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) self.assertEqual(0, barrier.needed) p.stop() t.join() self.assertTrue(on_notify.called) self.assertTrue(on_response.called) self.assertTrue(on_request.called) self.assertEqual(10, on_notify.call_count) self.assertEqual(10, on_response.call_count) self.assertEqual(10, on_request.call_count) call_count = sum([ on_notify.call_count, on_response.call_count, on_request.call_count, ]) self.assertEqual(message_count, call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_pipeline.py0000664000175000017500000000734400000000000025252 0ustar00zuulzuul00000000000000# -*- coding: 
utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import futurist from futurist import waiters from oslo_utils import uuidutils from taskflow.engines.action_engine import executor as base_executor from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import executor as worker_executor from taskflow.engines.worker_based import server as worker_server from taskflow import test from taskflow.tests import utils as test_utils from taskflow.types import failure from taskflow.utils import threading_utils TEST_EXCHANGE, TEST_TOPIC = ('test-exchange', 'test-topic') WAIT_TIMEOUT = 1.0 POLLING_INTERVAL = 0.01 class TestPipeline(test.TestCase): def _fetch_server(self, task_classes): endpoints = [] for cls in task_classes: endpoints.append(endpoint.Endpoint(cls)) server = worker_server.Server( TEST_TOPIC, TEST_EXCHANGE, futurist.ThreadPoolExecutor(max_workers=1), endpoints, transport='memory', transport_options={ 'polling_interval': POLLING_INTERVAL, }) server_thread = threading_utils.daemon_thread(server.start) return (server, server_thread) def _fetch_executor(self): executor = worker_executor.WorkerTaskExecutor( uuidutils.generate_uuid(), TEST_EXCHANGE, [TEST_TOPIC], transport='memory', transport_options={ 'polling_interval': POLLING_INTERVAL, }) return executor def _start_components(self, task_classes): server, server_thread = self._fetch_server(task_classes) executor = self._fetch_executor() self.addCleanup(executor.stop) self.addCleanup(server_thread.join) self.addCleanup(server.stop) executor.start() server_thread.start() server.wait() return (executor, server) def test_execution_pipeline(self): executor, server = self._start_components([test_utils.TaskOneReturn]) self.assertEqual(0, executor.wait_for_workers(timeout=WAIT_TIMEOUT)) t = test_utils.TaskOneReturn() progress_callback = lambda *args, **kwargs: None f = executor.execute_task(t, uuidutils.generate_uuid(), {}, progress_callback=progress_callback) waiters.wait_for_any([f]) event, result = f.result() self.assertEqual(1, result) self.assertEqual(base_executor.EXECUTED, event) def test_execution_failure_pipeline(self): task_classes = [ test_utils.TaskWithFailure, ] executor, server = self._start_components(task_classes) t = test_utils.TaskWithFailure() progress_callback = lambda *args, **kwargs: None f = executor.execute_task(t, uuidutils.generate_uuid(), {}, progress_callback=progress_callback) waiters.wait_for_any([f]) action, result = f.result() self.assertIsInstance(result, failure.Failure) self.assertEqual(RuntimeError, result.check(RuntimeError)) self.assertEqual(base_executor.EXECUTED, action) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_protocol.py0000664000175000017500000001710100000000000025276 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import uuidutils from taskflow.engines.action_engine import executor from taskflow.engines.worker_based import protocol as pr from taskflow import exceptions as excp from taskflow import test from taskflow.test import mock from taskflow.tests import utils from taskflow.types import failure class Unserializable(object): pass class TestProtocolValidation(test.TestCase): def test_send_notify(self): msg = pr.Notify() pr.Notify.validate(msg.to_dict(), False) def test_send_notify_invalid(self): msg = { 'all your base': 'are belong to us', } self.assertRaises(excp.InvalidFormat, pr.Notify.validate, msg, False) def test_reply_notify(self): msg = pr.Notify(topic="bob", tasks=['a', 'b', 'c']) pr.Notify.validate(msg.to_dict(), True) def test_reply_notify_invalid(self): msg = { 'topic': {}, 'tasks': 'not yours', } self.assertRaises(excp.InvalidFormat, pr.Notify.validate, msg, True) def test_request(self): request = pr.Request(utils.DummyTask("hi"), uuidutils.generate_uuid(), pr.EXECUTE, {}, 1.0) pr.Request.validate(request.to_dict()) def test_request_invalid(self): msg = { 'task_name': 1, 'task_cls': False, 'arguments': [], } self.assertRaises(excp.InvalidFormat, pr.Request.validate, msg) def test_request_invalid_action(self): request = pr.Request(utils.DummyTask("hi"), uuidutils.generate_uuid(), pr.EXECUTE, {}, 1.0) request = request.to_dict() request['action'] = 'NOTHING' self.assertRaises(excp.InvalidFormat, pr.Request.validate, request) def test_response_progress(self): msg = pr.Response(pr.EVENT, details={'progress': 0.5}, event_type='blah') pr.Response.validate(msg.to_dict()) def test_response_completion(self): msg = pr.Response(pr.SUCCESS, result=1) pr.Response.validate(msg.to_dict()) def test_response_mixed_invalid(self): msg = pr.Response(pr.EVENT, details={'progress': 0.5}, event_type='blah', result=1) self.assertRaises(excp.InvalidFormat, pr.Response.validate, msg) def test_response_bad_state(self): msg = pr.Response('STUFF') self.assertRaises(excp.InvalidFormat, pr.Response.validate, msg) class TestProtocol(test.TestCase): def setUp(self): super(TestProtocol, self).setUp() self.task = utils.DummyTask() self.task_uuid = 'task-uuid' self.task_action = 'execute' self.task_args = {'a': 'a'} self.timeout = 60 def request(self, **kwargs): request_kwargs = dict(task=self.task, uuid=self.task_uuid, action=self.task_action, arguments=self.task_args, timeout=self.timeout) request_kwargs.update(kwargs) return pr.Request(**request_kwargs) def request_to_dict(self, **kwargs): to_dict = dict(task_cls=self.task.name, task_name=self.task.name, task_version=self.task.version, action=self.task_action, arguments=self.task_args) to_dict.update(kwargs) return to_dict def test_request_transitions(self): request = self.request() self.assertEqual(pr.WAITING, request.current_state) self.assertIn(request.current_state, pr.WAITING_STATES) self.assertRaises(excp.InvalidState, request.transition, pr.SUCCESS) self.assertFalse(request.transition(pr.WAITING)) 
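        # A request is expected to walk the WAITING -> PENDING -> RUNNING ->
        # SUCCESS lifecycle; re-entering WAITING above is a no-op (returns
        # False), while each forward transition below should succeed.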
self.assertTrue(request.transition(pr.PENDING)) self.assertTrue(request.transition(pr.RUNNING)) self.assertTrue(request.transition(pr.SUCCESS)) for s in (pr.PENDING, pr.WAITING): self.assertRaises(excp.InvalidState, request.transition, s) def test_creation(self): request = self.request() self.assertEqual(self.task_uuid, request.uuid) self.assertEqual(self.task, request.task) self.assertFalse(request.future.done()) def test_to_dict_default(self): request = self.request() self.assertEqual(self.request_to_dict(), request.to_dict()) def test_to_dict_with_result(self): request = self.request(result=333) self.assertEqual(self.request_to_dict(result=('success', 333)), request.to_dict()) def test_to_dict_with_result_none(self): request = self.request(result=None) self.assertEqual(self.request_to_dict(result=('success', None)), request.to_dict()) def test_to_dict_with_result_failure(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) expected = self.request_to_dict(result=('failure', a_failure.to_dict())) request = self.request(result=a_failure) self.assertEqual(expected, request.to_dict()) def test_to_dict_with_failures(self): a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) request = self.request(failures={self.task.name: a_failure}) expected = self.request_to_dict( failures={self.task.name: a_failure.to_dict()}) self.assertEqual(expected, request.to_dict()) def test_to_dict_with_invalid_json_failures(self): exc = RuntimeError(Unserializable()) a_failure = failure.Failure.from_exception(exc) request = self.request(failures={self.task.name: a_failure}) expected = self.request_to_dict( failures={self.task.name: a_failure.to_dict(include_args=False)}) self.assertEqual(expected, request.to_dict()) @mock.patch('oslo_utils.timeutils.now') def test_pending_not_expired(self, now): now.return_value = 0 request = self.request() now.return_value = self.timeout - 1 self.assertFalse(request.expired) @mock.patch('oslo_utils.timeutils.now') def test_pending_expired(self, now): now.return_value = 0 request = self.request() now.return_value = self.timeout + 1 self.assertTrue(request.expired) @mock.patch('oslo_utils.timeutils.now') def test_running_not_expired(self, now): now.return_value = 0 request = self.request() request.transition(pr.PENDING) request.transition(pr.RUNNING) now.return_value = self.timeout + 1 self.assertFalse(request.expired) def test_set_result(self): request = self.request() request.set_result(111) result = request.future.result() self.assertEqual((executor.EXECUTED, 111), result) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_proxy.py0000664000175000017500000002330000000000000024614 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
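# Note: these proxy tests run entirely against mocked kombu primitives
# (Connection, Exchange, Queue and Producer are patched in setUp below), so
# no real broker or transport is involved.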
import socket from unittest import mock from taskflow.engines.worker_based import proxy from taskflow import test from taskflow.utils import threading_utils class TestProxy(test.MockTestCase): def setUp(self): super(TestProxy, self).setUp() self.topic = 'test-topic' self.broker_url = 'test-url' self.exchange = 'test-exchange' self.timeout = 5 self.de_period = proxy.DRAIN_EVENTS_PERIOD # patch classes self.conn_mock, self.conn_inst_mock = self.patchClass( proxy.kombu, 'Connection') self.exchange_mock, self.exchange_inst_mock = self.patchClass( proxy.kombu, 'Exchange') self.queue_mock, self.queue_inst_mock = self.patchClass( proxy.kombu, 'Queue') self.producer_mock, self.producer_inst_mock = self.patchClass( proxy.kombu, 'Producer') # connection mocking def _ensure(obj, func, *args, **kwargs): return func self.conn_inst_mock.drain_events.side_effect = [ socket.timeout, socket.timeout, KeyboardInterrupt] self.conn_inst_mock.ensure = mock.MagicMock(side_effect=_ensure) # connections mocking self.connections_mock = self.patch( "taskflow.engines.worker_based.proxy.kombu.connections", attach_as='connections') self.connections_mock.__getitem__().acquire().__enter__.return_value =\ self.conn_inst_mock # producers mocking self.conn_inst_mock.Producer.return_value.__enter__ = mock.MagicMock() self.conn_inst_mock.Producer.return_value.__exit__ = mock.MagicMock() # consumer mocking self.conn_inst_mock.Consumer.return_value.__enter__ = mock.MagicMock() self.conn_inst_mock.Consumer.return_value.__exit__ = mock.MagicMock() # other mocking self.on_wait_mock = mock.MagicMock(name='on_wait') self.master_mock.attach_mock(self.on_wait_mock, 'on_wait') # reset master mock self.resetMasterMock() def _queue_name(self, topic): return "%s_%s" % (self.exchange, topic) def proxy_start_calls(self, calls, exc_type=mock.ANY): return [ mock.call.Queue(name=self._queue_name(self.topic), exchange=self.exchange_inst_mock, routing_key=self.topic, durable=False, auto_delete=True, channel=self.conn_inst_mock), mock.call.connection.Consumer(queues=self.queue_inst_mock, callbacks=[mock.ANY]), mock.call.connection.Consumer().__enter__(), mock.call.connection.ensure(mock.ANY, mock.ANY, interval_start=mock.ANY, interval_max=mock.ANY, max_retries=mock.ANY, interval_step=mock.ANY, errback=mock.ANY), ] + calls + [ mock.call.connection.Consumer().__exit__(exc_type, mock.ANY, mock.ANY) ] def proxy_publish_calls(self, calls, routing_key, exc_type=mock.ANY): return [ mock.call.connection.Producer(), mock.call.connection.Producer().__enter__(), mock.call.connection.ensure(mock.ANY, mock.ANY, interval_start=mock.ANY, interval_max=mock.ANY, max_retries=mock.ANY, interval_step=mock.ANY, errback=mock.ANY), mock.call.Queue(name=self._queue_name(routing_key), routing_key=routing_key, exchange=self.exchange_inst_mock, durable=False, auto_delete=True, channel=None), ] + calls + [ mock.call.connection.Producer().__exit__(exc_type, mock.ANY, mock.ANY) ] def proxy(self, reset_master_mock=False, **kwargs): proxy_kwargs = dict(topic=self.topic, exchange=self.exchange, url=self.broker_url, type_handlers={}) proxy_kwargs.update(kwargs) p = proxy.Proxy(**proxy_kwargs) if reset_master_mock: self.resetMasterMock() return p def test_creation(self): self.proxy() master_mock_calls = [ mock.call.Connection(self.broker_url, transport=None, transport_options=None), mock.call.Exchange(name=self.exchange, durable=False, auto_delete=True) ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_creation_custom(self): transport_opts = {'context': 
'context'} self.proxy(transport='memory', transport_options=transport_opts) master_mock_calls = [ mock.call.Connection(self.broker_url, transport='memory', transport_options=transport_opts), mock.call.Exchange(name=self.exchange, durable=False, auto_delete=True) ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_publish(self): msg_mock = mock.MagicMock() msg_data = 'msg-data' msg_mock.to_dict.return_value = msg_data routing_key = 'routing-key' task_uuid = 'task-uuid' p = self.proxy(reset_master_mock=True) p.publish(msg_mock, routing_key, correlation_id=task_uuid) mock_producer = mock.call.connection.Producer() master_mock_calls = self.proxy_publish_calls([ mock_producer.__enter__().publish(body=msg_data, routing_key=routing_key, exchange=self.exchange_inst_mock, correlation_id=task_uuid, declare=[self.queue_inst_mock], type=msg_mock.TYPE, reply_to=None) ], routing_key) self.master_mock.assert_has_calls(master_mock_calls) def test_start(self): try: # KeyboardInterrupt will be raised after two iterations self.proxy(reset_master_mock=True).start() except KeyboardInterrupt: pass master_calls = self.proxy_start_calls([ mock.call.connection.drain_events(timeout=self.de_period), mock.call.connection.drain_events(timeout=self.de_period), mock.call.connection.drain_events(timeout=self.de_period), ], exc_type=KeyboardInterrupt) self.master_mock.assert_has_calls(master_calls) def test_start_with_on_wait(self): try: # KeyboardInterrupt will be raised after two iterations self.proxy(reset_master_mock=True, on_wait=self.on_wait_mock).start() except KeyboardInterrupt: pass master_calls = self.proxy_start_calls([ mock.call.connection.drain_events(timeout=self.de_period), mock.call.on_wait(), mock.call.connection.drain_events(timeout=self.de_period), mock.call.on_wait(), mock.call.connection.drain_events(timeout=self.de_period), ], exc_type=KeyboardInterrupt) self.master_mock.assert_has_calls(master_calls) def test_start_with_on_wait_raises(self): self.on_wait_mock.side_effect = RuntimeError('Woot!') try: # KeyboardInterrupt will be raised after two iterations self.proxy(reset_master_mock=True, on_wait=self.on_wait_mock).start() except KeyboardInterrupt: pass master_calls = self.proxy_start_calls([ mock.call.connection.drain_events(timeout=self.de_period), mock.call.on_wait(), ], exc_type=RuntimeError) self.master_mock.assert_has_calls(master_calls) def test_stop(self): self.conn_inst_mock.drain_events.side_effect = socket.timeout # create proxy pr = self.proxy(reset_master_mock=True) # check that proxy is not running yes self.assertFalse(pr.is_running) # start proxy in separate thread t = threading_utils.daemon_thread(pr.start) t.start() # make sure proxy is started pr.wait() # check that proxy is running now self.assertTrue(pr.is_running) # stop proxy and wait for thread to finish pr.stop() # wait for thread to finish t.join() self.assertFalse(pr.is_running) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_server.py0000664000175000017500000003403300000000000024746 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from taskflow.engines.worker_based import endpoint as ep from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import server from taskflow import task as task_atom from taskflow import test from taskflow.test import mock from taskflow.tests import utils from taskflow.types import failure class TestServer(test.MockTestCase): def setUp(self): super(TestServer, self).setUp() self.server_topic = 'server-topic' self.server_exchange = 'server-exchange' self.broker_url = 'test-url' self.task = utils.TaskOneArgOneReturn() self.task_uuid = 'task-uuid' self.task_args = {'x': 1} self.task_action = 'execute' self.reply_to = 'reply-to' self.endpoints = [ep.Endpoint(task_cls=utils.TaskOneArgOneReturn), ep.Endpoint(task_cls=utils.TaskWithFailure), ep.Endpoint(task_cls=utils.ProgressingTask)] # patch classes self.proxy_mock, self.proxy_inst_mock = self.patchClass( server.proxy, 'Proxy') self.response_mock, self.response_inst_mock = self.patchClass( server.pr, 'Response') # other mocking self.proxy_inst_mock.is_running = True self.executor_mock = mock.MagicMock(name='executor') self.message_mock = mock.MagicMock(name='message') self.message_mock.properties = {'correlation_id': self.task_uuid, 'reply_to': self.reply_to, 'type': pr.REQUEST} self.master_mock.attach_mock(self.executor_mock, 'executor') self.master_mock.attach_mock(self.message_mock, 'message') def server(self, reset_master_mock=False, **kwargs): server_kwargs = dict(topic=self.server_topic, exchange=self.server_exchange, executor=self.executor_mock, endpoints=self.endpoints, url=self.broker_url) server_kwargs.update(kwargs) s = server.Server(**server_kwargs) if reset_master_mock: self.resetMasterMock() return s def make_request(self, **kwargs): request_kwargs = dict(task=self.task, uuid=self.task_uuid, action=self.task_action, arguments=self.task_args, timeout=60) request_kwargs.update(kwargs) request = pr.Request(**request_kwargs) return request.to_dict() def test_creation(self): s = self.server() # check calls master_mock_calls = [ mock.call.Proxy(self.server_topic, self.server_exchange, type_handlers=mock.ANY, url=self.broker_url, transport=mock.ANY, transport_options=mock.ANY, retry_options=mock.ANY) ] self.master_mock.assert_has_calls(master_mock_calls) self.assertEqual(3, len(s._endpoints)) def test_creation_with_endpoints(self): s = self.server(endpoints=self.endpoints) # check calls master_mock_calls = [ mock.call.Proxy(self.server_topic, self.server_exchange, type_handlers=mock.ANY, url=self.broker_url, transport=mock.ANY, transport_options=mock.ANY, retry_options=mock.ANY) ] self.master_mock.assert_has_calls(master_mock_calls) self.assertEqual(len(self.endpoints), len(s._endpoints)) def test_parse_request(self): request = self.make_request() bundle = pr.Request.from_dict(request) task_cls, task_name, action, task_args = bundle self.assertEqual((self.task.name, self.task.name, self.task_action, dict(arguments=self.task_args)), (task_cls, task_name, action, task_args)) def test_parse_request_with_success_result(self): request = self.make_request(action='revert', result=1) bundle = 
pr.Request.from_dict(request) task_cls, task_name, action, task_args = bundle self.assertEqual((self.task.name, self.task.name, 'revert', dict(arguments=self.task_args, result=1)), (task_cls, task_name, action, task_args)) def test_parse_request_with_failure_result(self): a_failure = failure.Failure.from_exception(Exception('test')) request = self.make_request(action='revert', result=a_failure) bundle = pr.Request.from_dict(request) task_cls, task_name, action, task_args = bundle self.assertEqual((self.task.name, self.task.name, 'revert', dict(arguments=self.task_args, result=utils.FailureMatcher(a_failure))), (task_cls, task_name, action, task_args)) def test_parse_request_with_failures(self): failures = {'0': failure.Failure.from_exception(Exception('test1')), '1': failure.Failure.from_exception(Exception('test2'))} request = self.make_request(action='revert', failures=failures) bundle = pr.Request.from_dict(request) task_cls, task_name, action, task_args = bundle self.assertEqual( (self.task.name, self.task.name, 'revert', dict(arguments=self.task_args, failures=dict((i, utils.FailureMatcher(f)) for i, f in six.iteritems(failures)))), (task_cls, task_name, action, task_args)) @mock.patch("taskflow.engines.worker_based.server.LOG.critical") def test_reply_publish_failure(self, mocked_exception): self.proxy_inst_mock.publish.side_effect = RuntimeError('Woot!') # create server and process request s = self.server(reset_master_mock=True) s._reply(True, self.reply_to, self.task_uuid) self.master_mock.assert_has_calls([ mock.call.Response(pr.FAILURE), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ]) self.assertTrue(mocked_exception.called) def test_on_run_reply_failure(self): request = self.make_request(task=utils.ProgressingTask(), arguments={}) self.proxy_inst_mock.publish.side_effect = RuntimeError('Woot!') # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) self.assertEqual(1, self.proxy_inst_mock.publish.call_count) def test_on_update_progress(self): request = self.make_request(task=utils.ProgressingTask(), arguments={}) # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.RUNNING), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.EVENT, details={'progress': 0.0}, event_type=task_atom.EVENT_UPDATE_PROGRESS), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.EVENT, details={'progress': 1.0}, event_type=task_atom.EVENT_UPDATE_PROGRESS), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.SUCCESS, result=5), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] self.master_mock.assert_has_calls(master_mock_calls) def test_process_request(self): # create server and process request s = self.server(reset_master_mock=True) s._process_request(self.make_request(), self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.RUNNING), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.SUCCESS, result=1), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] 
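        # The server should have announced RUNNING and then published the
        # SUCCESS result (1) back to the requester's reply queue.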
self.master_mock.assert_has_calls(master_mock_calls) @mock.patch("taskflow.engines.worker_based.server.LOG.warning") def test_process_request_parse_message_failure(self, mocked_exception): self.message_mock.properties = {} request = self.make_request() s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) self.assertTrue(mocked_exception.called) @mock.patch.object(failure.Failure, 'from_dict') @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_parse_request_failure(self, to_mock, from_mock): failure_dict = { 'failure': 'failure', } a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) to_mock.return_value = failure_dict from_mock.side_effect = ValueError('Woot!') request = self.make_request(result=a_failure) # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.FAILURE, result=failure_dict), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] self.master_mock.assert_has_calls(master_mock_calls) @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_endpoint_not_found(self, to_mock): failure_dict = { 'failure': 'failure', } to_mock.return_value = failure_dict request = self.make_request(task=mock.MagicMock(name='')) # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.FAILURE, result=failure_dict), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] self.master_mock.assert_has_calls(master_mock_calls) @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_execution_failure(self, to_mock): failure_dict = { 'failure': 'failure', } to_mock.return_value = failure_dict request = self.make_request() request['action'] = '' # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.FAILURE, result=failure_dict), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] self.master_mock.assert_has_calls(master_mock_calls) @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_task_failure(self, to_mock): failure_dict = { 'failure': 'failure', } to_mock.return_value = failure_dict request = self.make_request(task=utils.TaskWithFailure(), arguments={}) # create server and process request s = self.server(reset_master_mock=True) s._process_request(request, self.message_mock) # check calls master_mock_calls = [ mock.call.Response(pr.RUNNING), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.FAILURE, result=failure_dict), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] self.master_mock.assert_has_calls(master_mock_calls) def test_start(self): self.server(reset_master_mock=True).start() # check calls master_mock_calls = [ mock.call.proxy.start() ] self.master_mock.assert_has_calls(master_mock_calls) def test_wait(self): server = self.server(reset_master_mock=True) server.start() server.wait() # check calls master_mock_calls = [ mock.call.proxy.start(), mock.call.proxy.wait() ] self.master_mock.assert_has_calls(master_mock_calls) def test_stop(self): 
self.server(reset_master_mock=True).stop() # check calls master_mock_calls = [ mock.call.proxy.stop() ] self.master_mock.assert_has_calls(master_mock_calls) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_types.py0000664000175000017500000000645000000000000024606 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import reflection from taskflow.engines.worker_based import types as worker_types from taskflow import test from taskflow.test import mock from taskflow.tests import utils class TestTopicWorker(test.TestCase): def test_topic_worker(self): worker = worker_types.TopicWorker("dummy-topic", [utils.DummyTask], identity="dummy") self.assertTrue(worker.performs(utils.DummyTask)) self.assertFalse(worker.performs(utils.NastyTask)) self.assertEqual('dummy', worker.identity) self.assertEqual('dummy-topic', worker.topic) class TestProxyFinder(test.TestCase): @mock.patch("oslo_utils.timeutils.now") def test_expiry(self, mock_now): finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), [], worker_expiry=60) w, emit = finder._add('dummy-topic', [utils.DummyTask]) w.last_seen = 0 mock_now.side_effect = [120] gone = finder.clean() self.assertEqual(0, finder.total_workers) self.assertEqual(1, gone) def test_single_topic_worker(self): finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) w, emit = finder._add('dummy-topic', [utils.DummyTask]) self.assertIsNotNone(w) self.assertTrue(emit) self.assertEqual(1, finder.total_workers) w2 = finder.get_worker_for_task(utils.DummyTask) self.assertEqual(w.identity, w2.identity) def test_multi_same_topic_workers(self): finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) w, emit = finder._add('dummy-topic', [utils.DummyTask]) self.assertIsNotNone(w) self.assertTrue(emit) w2, emit = finder._add('dummy-topic-2', [utils.DummyTask]) self.assertIsNotNone(w2) self.assertTrue(emit) w3 = finder.get_worker_for_task( reflection.get_class_name(utils.DummyTask)) self.assertIn(w3.identity, [w.identity, w2.identity]) def test_multi_different_topic_workers(self): finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) added = [] added.append(finder._add('dummy-topic', [utils.DummyTask])) added.append(finder._add('dummy-topic-2', [utils.DummyTask])) added.append(finder._add('dummy-topic-3', [utils.NastyTask])) self.assertEqual(3, finder.total_workers) w = finder.get_worker_for_task(utils.NastyTask) self.assertEqual(added[-1][0].identity, w.identity) w = finder.get_worker_for_task(utils.DummyTask) self.assertIn(w.identity, [w_a[0].identity for w_a in added[0:2]]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/unit/worker_based/test_worker.py0000664000175000017500000001570700000000000024760 0ustar00zuulzuul00000000000000# -*- 
coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import reflection import six from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import worker from taskflow import test from taskflow.test import mock from taskflow.tests import utils class TestWorker(test.MockTestCase): def setUp(self): super(TestWorker, self).setUp() self.task_cls = utils.DummyTask self.task_name = reflection.get_class_name(self.task_cls) self.broker_url = 'test-url' self.exchange = 'test-exchange' self.topic = 'test-topic' # patch classes self.executor_mock, self.executor_inst_mock = self.patchClass( worker.futurist, 'ThreadPoolExecutor', attach_as='executor') self.server_mock, self.server_inst_mock = self.patchClass( worker.server, 'Server') def worker(self, reset_master_mock=False, **kwargs): worker_kwargs = dict(exchange=self.exchange, topic=self.topic, tasks=[], url=self.broker_url) worker_kwargs.update(kwargs) w = worker.Worker(**worker_kwargs) if reset_master_mock: self.resetMasterMock() return w def test_creation(self): self.worker() master_mock_calls = [ mock.call.executor_class(max_workers=None), mock.call.Server(self.topic, self.exchange, self.executor_inst_mock, [], url=self.broker_url, transport_options=mock.ANY, transport=mock.ANY, retry_options=mock.ANY) ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_banner_writing(self): buf = six.StringIO() w = self.worker() w.run(banner_writer=buf.write) w.wait() w.stop() self.assertGreater(0, len(buf.getvalue())) def test_creation_with_custom_threads_count(self): self.worker(threads_count=10) master_mock_calls = [ mock.call.executor_class(max_workers=10), mock.call.Server(self.topic, self.exchange, self.executor_inst_mock, [], url=self.broker_url, transport_options=mock.ANY, transport=mock.ANY, retry_options=mock.ANY) ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_creation_with_custom_executor(self): executor_mock = mock.MagicMock(name='executor') self.worker(executor=executor_mock) master_mock_calls = [ mock.call.Server(self.topic, self.exchange, executor_mock, [], url=self.broker_url, transport_options=mock.ANY, transport=mock.ANY, retry_options=mock.ANY) ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_run_with_no_tasks(self): self.worker(reset_master_mock=True).run() master_mock_calls = [ mock.call.server.start() ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_run_with_tasks(self): self.worker(reset_master_mock=True, tasks=['taskflow.tests.utils:DummyTask']).run() master_mock_calls = [ mock.call.server.start() ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_run_with_custom_executor(self): executor_mock = mock.MagicMock(name='executor') self.worker(reset_master_mock=True, executor=executor_mock).run() master_mock_calls = [ mock.call.server.start() ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def 
test_wait(self): w = self.worker(reset_master_mock=True) w.run() w.wait() master_mock_calls = [ mock.call.server.start(), mock.call.server.wait() ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_stop(self): self.worker(reset_master_mock=True).stop() master_mock_calls = [ mock.call.server.stop(), mock.call.executor.shutdown() ] self.assertEqual(master_mock_calls, self.master_mock.mock_calls) def test_derive_endpoints_from_string_tasks(self): endpoints = worker.Worker._derive_endpoints( ['taskflow.tests.utils:DummyTask']) self.assertEqual(1, len(endpoints)) self.assertIsInstance(endpoints[0], endpoint.Endpoint) self.assertEqual(self.task_name, endpoints[0].name) def test_derive_endpoints_from_string_modules(self): endpoints = worker.Worker._derive_endpoints(['taskflow.tests.utils']) assert any(e.name == self.task_name for e in endpoints) def test_derive_endpoints_from_string_non_existent_module(self): tasks = ['non.existent.module'] self.assertRaises(ImportError, worker.Worker._derive_endpoints, tasks) def test_derive_endpoints_from_string_non_existent_task(self): tasks = ['non.existent.module:Task'] self.assertRaises(ImportError, worker.Worker._derive_endpoints, tasks) def test_derive_endpoints_from_string_non_task_class(self): tasks = ['taskflow.tests.utils:FakeTask'] self.assertRaises(TypeError, worker.Worker._derive_endpoints, tasks) def test_derive_endpoints_from_tasks(self): endpoints = worker.Worker._derive_endpoints([self.task_cls]) self.assertEqual(1, len(endpoints)) self.assertIsInstance(endpoints[0], endpoint.Endpoint) self.assertEqual(self.task_name, endpoints[0].name) def test_derive_endpoints_from_non_task_class(self): self.assertRaises(TypeError, worker.Worker._derive_endpoints, [utils.FakeTask]) def test_derive_endpoints_from_modules(self): endpoints = worker.Worker._derive_endpoints([utils]) assert any(e.name == self.task_name for e in endpoints) def test_derive_endpoints_unexpected_task_type(self): self.assertRaises(TypeError, worker.Worker._derive_endpoints, [111]) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/tests/utils.py0000664000175000017500000002615300000000000020117 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib import string import threading import time from oslo_utils import timeutils import redis import six from taskflow import exceptions from taskflow.listeners import capturing from taskflow.persistence.backends import impl_memory from taskflow import retry from taskflow import task from taskflow.types import failure from taskflow.utils import kazoo_utils from taskflow.utils import redis_utils ARGS_KEY = '__args__' KWARGS_KEY = '__kwargs__' ORDER_KEY = '__order__' ZK_TEST_CONFIG = { 'timeout': 1.0, 'hosts': ["localhost:2181"], } # If latches/events take longer than this to become empty/set, something is # usually wrong and should be debugged instead of deadlocking... WAIT_TIMEOUT = 300 @contextlib.contextmanager def wrap_all_failures(): """Convert any exceptions to WrappedFailure. When you expect several failures, it may be convenient to wrap any exception with WrappedFailure in order to unify error handling. """ try: yield except Exception: raise exceptions.WrappedFailure([failure.Failure()]) def zookeeper_available(min_version, timeout=3): client = kazoo_utils.make_client(ZK_TEST_CONFIG.copy()) try: # NOTE(imelnikov): 3 seconds we should be enough for localhost client.start(timeout=float(timeout)) if min_version: zk_ver = client.server_version() if zk_ver >= min_version: return True else: return False else: return True except Exception: return False finally: kazoo_utils.finalize_client(client) def redis_available(min_version): client = redis.StrictRedis() try: client.ping() except Exception: return False else: ok, redis_version = redis_utils.is_server_new_enough(client, min_version) return ok class NoopRetry(retry.AlwaysRevert): pass class NoopTask(task.Task): def execute(self): pass class DummyTask(task.Task): def execute(self, context, *args, **kwargs): pass class EmittingTask(task.Task): TASK_EVENTS = (task.EVENT_UPDATE_PROGRESS, 'hi') def execute(self, *args, **kwargs): self.notifier.notify('hi', details={'sent_on': timeutils.utcnow(), 'args': args, 'kwargs': kwargs}) class AddOneSameProvidesRequires(task.Task): default_provides = 'value' def execute(self, value): return value + 1 class AddOne(task.Task): default_provides = 'result' def execute(self, source): return source + 1 class GiveBackRevert(task.Task): def execute(self, value): return value + 1 def revert(self, *args, **kwargs): result = kwargs.get('result') # If this somehow fails, timeout, or other don't send back a # valid result... if isinstance(result, six.integer_types): return result + 1 class FakeTask(object): def execute(self, **kwargs): pass class LongArgNameTask(task.Task): def execute(self, long_arg_name): return long_arg_name if six.PY3: RUNTIME_ERROR_CLASSES = ['RuntimeError', 'Exception', 'BaseException', 'object'] else: RUNTIME_ERROR_CLASSES = ['RuntimeError', 'StandardError', 'Exception', 'BaseException', 'object'] class ProvidesRequiresTask(task.Task): def __init__(self, name, provides, requires, return_tuple=True): super(ProvidesRequiresTask, self).__init__(name=name, provides=provides, requires=requires) self.return_tuple = isinstance(provides, (tuple, list)) def execute(self, *args, **kwargs): if self.return_tuple: return tuple(range(len(self.provides))) else: return dict((k, k) for k in self.provides) # Used to format the captured values into strings (which are easier to # check later in tests)... 
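# For example a task named 'a' is captured as 'a.t RUNNING' (no result
# present) or 'a.t SUCCESS(5)' (result present).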
LOOKUP_NAME_POSTFIX = { capturing.CaptureListener.TASK: ('.t', 'task_name'), capturing.CaptureListener.RETRY: ('.r', 'retry_name'), capturing.CaptureListener.FLOW: ('.f', 'flow_name'), } class CaptureListener(capturing.CaptureListener): @staticmethod def _format_capture(kind, state, details): name_postfix, name_key = LOOKUP_NAME_POSTFIX[kind] name = details[name_key] + name_postfix if 'result' in details: name += ' %s(%s)' % (state, details['result']) else: name += " %s" % state return name class MultiProgressingTask(task.Task): def execute(self, progress_chunks): for chunk in progress_chunks: self.update_progress(chunk) return len(progress_chunks) class ProgressingTask(task.Task): def execute(self, **kwargs): self.update_progress(0.0) self.update_progress(1.0) return 5 def revert(self, **kwargs): self.update_progress(0) self.update_progress(1.0) class FailingTask(ProgressingTask): def execute(self, **kwargs): self.update_progress(0) self.update_progress(0.99) raise RuntimeError('Woot!') class OptionalTask(task.Task): def execute(self, a, b=5): result = a * b return result class TaskWithFailure(task.Task): def execute(self, **kwargs): raise RuntimeError('Woot!') class FailingTaskWithOneArg(ProgressingTask): def execute(self, x, **kwargs): raise RuntimeError('Woot with %s' % x) class NastyTask(task.Task): def execute(self, **kwargs): pass def revert(self, **kwargs): raise RuntimeError('Gotcha!') class NastyFailingTask(NastyTask): def execute(self, **kwargs): raise RuntimeError('Woot!') class TaskNoRequiresNoReturns(task.Task): def execute(self, **kwargs): pass def revert(self, **kwargs): pass class TaskOneArg(task.Task): def execute(self, x, **kwargs): pass def revert(self, x, **kwargs): pass class TaskMultiArg(task.Task): def execute(self, x, y, z, **kwargs): pass def revert(self, x, y, z, **kwargs): pass class TaskOneReturn(task.Task): def execute(self, **kwargs): return 1 def revert(self, **kwargs): pass class TaskMultiReturn(task.Task): def execute(self, **kwargs): return 1, 3, 5 def revert(self, **kwargs): pass class TaskOneArgOneReturn(task.Task): def execute(self, x, **kwargs): return 1 def revert(self, x, **kwargs): pass class TaskMultiArgOneReturn(task.Task): def execute(self, x, y, z, **kwargs): return x + y + z def revert(self, x, y, z, **kwargs): pass class TaskMultiArgMultiReturn(task.Task): def execute(self, x, y, z, **kwargs): return 1, 3, 5 def revert(self, x, y, z, **kwargs): pass class TaskMultiDict(task.Task): def execute(self): output = {} for i, k in enumerate(sorted(self.provides)): output[k] = i return output class NeverRunningTask(task.Task): def execute(self, **kwargs): assert False, 'This method should not be called' def revert(self, **kwargs): assert False, 'This method should not be called' class TaskRevertExtraArgs(task.Task): def execute(self, **kwargs): raise exceptions.ExecutionFailure("We want to force a revert here") def revert(self, revert_arg, flow_failures, result, **kwargs): pass class SleepTask(task.Task): def execute(self, duration, **kwargs): time.sleep(duration) class EngineTestBase(object): def setUp(self): super(EngineTestBase, self).setUp() self.backend = impl_memory.MemoryBackend(conf={}) def tearDown(self): EngineTestBase.values = None with contextlib.closing(self.backend) as be: with contextlib.closing(be.get_connection()) as conn: conn.clear_all() super(EngineTestBase, self).tearDown() def _make_engine(self, flow, **kwargs): raise exceptions.NotImplementedError("_make_engine() must be" " overridden if an engine is" " desired") class 
FailureMatcher(object): """Needed for failure objects comparison.""" def __init__(self, failure): self._failure = failure def __repr__(self): return str(self._failure) def __eq__(self, other): return self._failure.matches(other) def __ne__(self, other): return not self.__eq__(other) class OneReturnRetry(retry.AlwaysRevert): def execute(self, **kwargs): return 1 def revert(self, **kwargs): pass class ConditionalTask(ProgressingTask): def execute(self, x, y): super(ConditionalTask, self).execute() if x != y: raise RuntimeError('Woot!') class WaitForOneFromTask(ProgressingTask): def __init__(self, name, wait_for, wait_states, **kwargs): super(WaitForOneFromTask, self).__init__(name, **kwargs) if isinstance(wait_for, six.string_types): self.wait_for = [wait_for] else: self.wait_for = wait_for if isinstance(wait_states, six.string_types): self.wait_states = [wait_states] else: self.wait_states = wait_states self.event = threading.Event() def execute(self): if not self.event.wait(WAIT_TIMEOUT): raise RuntimeError('%s second timeout occurred while waiting ' 'for %s to change state to %s' % (WAIT_TIMEOUT, self.wait_for, self.wait_states)) return super(WaitForOneFromTask, self).execute() def callback(self, state, details): name = details.get('task_name', None) if name not in self.wait_for or state not in self.wait_states: return self.event.set() def make_many(amount, task_cls=DummyTask, offset=0): name_pool = string.ascii_lowercase + string.ascii_uppercase tasks = [] while amount > 0: if offset >= len(name_pool): raise AssertionError('Name pool size to small (%s < %s)' % (len(name_pool), offset + 1)) tasks.append(task_cls(name=name_pool[offset])) offset += 1 amount -= 1 return tasks ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6520426 taskflow-4.6.4/taskflow/types/0000775000175000017500000000000000000000000016400 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/__init__.py0000664000175000017500000000000000000000000020477 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/entity.py0000664000175000017500000000306300000000000020270 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Rackspace Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class Entity(object): """Entity object that identifies some resource/item/other. 
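
    A rough usage sketch (the kind/name/metadata values shown here are
    purely illustrative)::

        entity = Entity('worker', 'my-worker-1', {'prop': 'value'})
        entity.to_dict()
        # => {'kind': 'worker', 'name': 'my-worker-1',
        #     'metadata': {'prop': 'value'}}
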
:ivar kind: **immutable** type/kind that identifies this entity (typically unique to a library/application) :type kind: string :ivar Entity.name: **immutable** name that can be used to uniquely identify this entity among many other entities :type name: string :ivar metadata: **immutable** dictionary of metadata that is associated with this entity (and typically has keys/values that further describe this entity) :type metadata: dict """ def __init__(self, kind, name, metadata): self.kind = kind self.name = name self.metadata = metadata def to_dict(self): return { 'kind': self.kind, 'name': self.name, 'metadata': self.metadata } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/failure.py0000664000175000017500000005114400000000000020406 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import os import sys import traceback from oslo_utils import encodeutils from oslo_utils import reflection import six from taskflow import exceptions as exc from taskflow.utils import iter_utils from taskflow.utils import mixins from taskflow.utils import schema_utils as su _exception_message = encodeutils.exception_to_unicode def _copy_exc_info(exc_info): if exc_info is None: return None exc_type, exc_value, tb = exc_info # NOTE(imelnikov): there is no need to copy the exception type, and # a shallow copy of the value is fine and we can't copy the traceback since # it contains reference to the internal stack frames... return (exc_type, copy.copy(exc_value), tb) def _are_equal_exc_info_tuples(ei1, ei2): if ei1 == ei2: return True if ei1 is None or ei2 is None: return False # if both are None, we returned True above # NOTE(imelnikov): we can't compare exceptions with '==' # because we want exc_info be equal to it's copy made with # copy_exc_info above. if ei1[0] is not ei2[0]: return False # NOTE(dhellmann): The flake8/pep8 error E721 does not apply here # because we want the types to be exactly the same, not just have # one be inherited from the other. if not all((type(ei1[1]) == type(ei2[1]), # noqa: E721 _exception_message(ei1[1]) == _exception_message(ei2[1]), repr(ei1[1]) == repr(ei2[1]))): return False if ei1[2] == ei2[2]: return True tb1 = traceback.format_tb(ei1[2]) tb2 = traceback.format_tb(ei2[2]) return tb1 == tb2 class Failure(mixins.StrMixin): """An immutable object that represents failure. Failure objects encapsulate exception information so that they can be re-used later to re-raise, inspect, examine, log, print, serialize, deserialize... One example where they are depended upon is in the WBE engine. When a remote worker throws an exception, the WBE based engine will receive that exception and desire to reraise it to the user/caller of the WBE based engine for appropriate handling (this matches the behavior of non-remote engines). 
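
    In rough pseudo-usage terms (a simplified sketch, not the engine's
    actual code path) that round trip looks like::

        try:
            raise RuntimeError('Woot!')
        except Exception:
            fail = Failure()            # captures sys.exc_info()

        data = fail.to_dict()           # plain dict, safe to transmit
        fail2 = Failure.from_dict(data)
        fail2.reraise()                 # raises a WrappedFailure carrying
                                        # the equivalent failure information
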
To accomplish this a failure object (or a :py:meth:`~.Failure.to_dict` form) would be sent over the WBE channel and the WBE based engine would deserialize it and use this object's :meth:`.reraise` method to cause an exception that contains similar/equivalent information as the original exception to be reraised, allowing the user (or the WBE engine itself) to then handle the worker failure/exception as they desire. For those who are curious, here are a few reasons why the original exception itself *may* not be reraised and a wrapped failure exception object will be reraised instead. These explanations are *only* applicable when a failure object is serialized and deserialized (when it is retained inside the python process that the exception was created in, the original exception can be reraised correctly without issue). * Traceback objects are not serializable/recreatable, since they contain references to stack frames at the location where the exception was raised. When a failure object is serialized and sent across a channel and recreated it is *not* possible to restore the original traceback and originating stack frames. * The original exception *type* can not be guaranteed to be found, workers can run code that is not accessible/available when the failure is being deserialized. Even if it was possible to use pickle safely it would not be possible to find the originating exception or associated code in this situation. * The original exception *type* can not be guaranteed to be constructed in a *correct* manner. At the time of failure object creation the exception has already been created and the failure object can not assume it has knowledge (or the ability) to recreate the original type of the captured exception (this is especially hard if the original exception was created via a complex process via some custom exception constructor). * The original exception *type* can not be guaranteed to be constructed in a *safe* manner. Importing *foreign* exception types dynamically can be problematic when not done correctly and in a safe manner; since failure objects can capture any exception it would be *unsafe* to try to import those exception types' namespaces and modules on the receiver side dynamically (this would create similar issues as the ``pickle`` module in python has where foreign modules can be imported, causing those modules to have code run when this happens, and this can cause issues and side-effects that the receiver would not have intended to have caused). TODO(harlowja): use parts of http://bugs.python.org/issue17911 and the backport at https://pypi.org/project/traceback2/ to (hopefully) simplify the methods and contents of this object... """ DICT_VERSION = 1 BASE_EXCEPTIONS = ('BaseException', 'Exception') """ Root exceptions of all other python exceptions. See: https://docs.python.org/2/library/exceptions.html """ #: Expected failure schema (in json schema format).
SCHEMA = { "$ref": "#/definitions/cause", "definitions": { "cause": { "type": "object", 'properties': { 'version': { "type": "integer", "minimum": 0, }, 'exc_args': { "type": "array", "minItems": 0, }, 'exception_str': { "type": "string", }, 'traceback_str': { "type": "string", }, 'exc_type_names': { "type": "array", "items": { "type": "string", }, "minItems": 1, }, 'causes': { "type": "array", "items": { "$ref": "#/definitions/cause", }, } }, "required": [ "exception_str", 'traceback_str', 'exc_type_names', ], "additionalProperties": True, }, }, } def __init__(self, exc_info=None, **kwargs): if not kwargs: if exc_info is None: exc_info = sys.exc_info() else: # This should always be the (type, value, traceback) tuple, # either from a prior sys.exc_info() call or from some other # creation... if len(exc_info) != 3: raise ValueError("Provided 'exc_info' must contain three" " elements") self._exc_info = exc_info self._exc_args = tuple(getattr(exc_info[1], 'args', [])) self._exc_type_names = tuple( reflection.get_all_class_names(exc_info[0], up_to=Exception)) if not self._exc_type_names: raise TypeError("Invalid exception type '%s' (%s)" % (exc_info[0], type(exc_info[0]))) self._exception_str = _exception_message(self._exc_info[1]) self._traceback_str = ''.join( traceback.format_tb(self._exc_info[2])) self._causes = kwargs.pop('causes', None) else: self._causes = kwargs.pop('causes', None) self._exc_info = exc_info self._exc_args = tuple(kwargs.pop('exc_args', [])) self._exception_str = kwargs.pop('exception_str') self._exc_type_names = tuple(kwargs.pop('exc_type_names', [])) self._traceback_str = kwargs.pop('traceback_str', None) if kwargs: raise TypeError( 'Failure.__init__ got unexpected keyword argument(s): %s' % ', '.join(six.iterkeys(kwargs))) @classmethod def from_exception(cls, exception): """Creates a failure object from a exception instance.""" exc_info = ( type(exception), exception, getattr(exception, '__traceback__', None) ) return cls(exc_info=exc_info) @classmethod def validate(cls, data): """Validate input data matches expected failure ``dict`` format.""" try: su.schema_validate(data, cls.SCHEMA) except su.ValidationError as e: raise exc.InvalidFormat("Failure data not of the" " expected format: %s" % (e.message), e) else: # Ensure that all 'exc_type_names' originate from one of # BASE_EXCEPTIONS, because those are the root exceptions that # python mandates/provides and anything else is invalid... causes = collections.deque([data]) while causes: cause = causes.popleft() root_exc_type = cause['exc_type_names'][-1] if root_exc_type not in cls.BASE_EXCEPTIONS: raise exc.InvalidFormat( "Failure data 'exc_type_names' must" " have an initial exception type that is one" " of %s types: '%s' is not one of those" " types" % (cls.BASE_EXCEPTIONS, root_exc_type)) sub_causes = cause.get('causes') if sub_causes: causes.extend(sub_causes) def _matches(self, other): if self is other: return True return (self._exc_type_names == other._exc_type_names and self.exception_args == other.exception_args and self.exception_str == other.exception_str and self.traceback_str == other.traceback_str and self.causes == other.causes) def matches(self, other): """Checks if another object is equivalent to this object. 
:returns: checks if another object is equivalent to this object :rtype: boolean """ if not isinstance(other, Failure): return False if self.exc_info is None or other.exc_info is None: return self._matches(other) else: return self == other def __eq__(self, other): if not isinstance(other, Failure): return NotImplemented return (self._matches(other) and _are_equal_exc_info_tuples(self.exc_info, other.exc_info)) def __ne__(self, other): return not (self == other) # NOTE(imelnikov): obj.__hash__() should return same values for equal # objects, so we should redefine __hash__. Failure equality semantics # is a bit complicated, so for now we just mark Failure objects as # unhashable. See python docs on object.__hash__ for more info: # http://docs.python.org/2/reference/datamodel.html#object.__hash__ __hash__ = None @property def exception(self): """Exception value, or none if exception value is not present. Exception value may be lost during serialization. """ if self._exc_info: return self._exc_info[1] else: return None @property def exception_str(self): """String representation of exception.""" return self._exception_str @property def exception_args(self): """Tuple of arguments given to the exception constructor.""" return self._exc_args @property def exc_info(self): """Exception info tuple or none. See: https://docs.python.org/2/library/sys.html#sys.exc_info for what the contents of this tuple are (if none, then no contents can be examined). """ return self._exc_info @property def traceback_str(self): """Exception traceback as string.""" return self._traceback_str @staticmethod def reraise_if_any(failures): """Re-raise exceptions if argument is not empty. If argument is empty list/tuple/iterator, this method returns None. If argument is converted into a list with a single ``Failure`` object in it, that failure is reraised. Else, a :class:`~taskflow.exceptions.WrappedFailure` exception is raised with the failure list as causes. """ if not isinstance(failures, (list, tuple)): # Convert generators/other into a list... failures = list(failures) if len(failures) == 1: failures[0].reraise() elif len(failures) > 1: raise exc.WrappedFailure(failures) def reraise(self): """Re-raise captured exception.""" if self._exc_info: six.reraise(*self._exc_info) else: raise exc.WrappedFailure([self]) def check(self, *exc_classes): """Check if any of ``exc_classes`` caused the failure. Arguments of this method can be exception types or type names (stings). If captured exception is instance of exception of given type, the corresponding argument is returned. Else, None is returned. """ for cls in exc_classes: if isinstance(cls, type): err = reflection.get_class_name(cls) else: err = cls if err in self._exc_type_names: return cls return None @classmethod def _extract_causes_iter(cls, exc_val): seen = [exc_val] causes = [exc_val] while causes: exc_val = causes.pop() if exc_val is None: continue # See: https://www.python.org/dev/peps/pep-3134/ for why/what # these are... # # '__cause__' attribute for explicitly chained exceptions # '__context__' attribute for implicitly chained exceptions # '__traceback__' attribute for the traceback # # See: https://www.python.org/dev/peps/pep-0415/ for why/what # the '__suppress_context__' is/means/implies... 
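            # As a small illustration (these statements are not executed
            # here): ``raise RuntimeError('outer') from err`` sets
            # ``__cause__`` (explicit chaining); raising a new exception
            # inside an ``except`` block without ``from`` sets
            # ``__context__`` (implicit chaining); and ``raise ... from
            # None`` sets ``__suppress_context__`` so the implicit context
            # is ignored (as done below).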
suppress_context = getattr(exc_val, '__suppress_context__', False) if suppress_context: attr_lookups = ['__cause__'] else: attr_lookups = ['__cause__', '__context__'] nested_exc_val = None for attr_name in attr_lookups: attr_val = getattr(exc_val, attr_name, None) if attr_val is None: continue if attr_val not in seen: nested_exc_val = attr_val break if nested_exc_val is not None: exc_info = ( type(nested_exc_val), nested_exc_val, getattr(nested_exc_val, '__traceback__', None), ) seen.append(nested_exc_val) causes.append(nested_exc_val) yield cls(exc_info=exc_info) @property def causes(self): """Tuple of all *inner* failure *causes* of this failure. NOTE(harlowja): Does **not** include the current failure (only returns connected causes of this failure, if any). This property is really only useful on 3.x or newer versions of python as older versions do **not** have associated causes (the tuple will **always** be empty on 2.x versions of python). Refer to :pep:`3134` and :pep:`409` and :pep:`415` for what this is examining to find failure causes. """ if self._causes is not None: return self._causes else: self._causes = tuple(self._extract_causes_iter(self.exception)) return self._causes def __unicode__(self): return self.pformat() def pformat(self, traceback=False): """Pretty formats the failure object into a string.""" buf = six.StringIO() if not self._exc_type_names: buf.write('Failure: %s' % (self._exception_str)) else: buf.write('Failure: %s: %s' % (self._exc_type_names[0], self._exception_str)) if traceback: if self._traceback_str is not None: traceback_str = self._traceback_str.rstrip() else: traceback_str = None if traceback_str: buf.write(os.linesep) buf.write('Traceback (most recent call last):') buf.write(os.linesep) buf.write(traceback_str) else: buf.write(os.linesep) buf.write('Traceback not available.') return buf.getvalue() def __iter__(self): """Iterate over exception type names.""" for et in self._exc_type_names: yield et def __getstate__(self): dct = self.to_dict() if self._exc_info: # Avoids 'TypeError: can't pickle traceback objects' dct['exc_info'] = self._exc_info[0:2] return dct def __setstate__(self, dct): self._exception_str = dct['exception_str'] if 'exc_args' in dct: self._exc_args = tuple(dct['exc_args']) else: # Guess we got an older version somehow, before this # was added, so at that point just set to an empty tuple... self._exc_args = () self._traceback_str = dct['traceback_str'] self._exc_type_names = dct['exc_type_names'] if 'exc_info' in dct: # Tracebacks can't be serialized/deserialized, but since we # provide a traceback string (and more) this should be # acceptable... # # TODO(harlowja): in the future we could do something like # what the twisted people have done, see for example # twisted-13.0.0/twisted/python/failure.py#L89 for how they # created a fake traceback object... self._exc_info = tuple(iter_utils.fill(dct['exc_info'], 3)) else: self._exc_info = None causes = dct.get('causes') if causes is not None: causes = tuple(self.from_dict(d) for d in causes) self._causes = causes @classmethod def from_dict(cls, data): """Converts this from a dictionary to a object.""" data = dict(data) version = data.pop('version', None) if version != cls.DICT_VERSION: raise ValueError('Invalid dict version of failure object: %r' % version) causes = data.get('causes') if causes is not None: data['causes'] = tuple(cls.from_dict(d) for d in causes) return cls(**data) def to_dict(self, include_args=True): """Converts this object to a dictionary. 
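        A small illustrative sketch of the keys the resulting dictionary
        carries (the exception used here is arbitrary)::

            >>> try:
            ...     raise ValueError("broken")
            ... except ValueError:
            ...     f = Failure()
            >>> sorted(f.to_dict().keys())
            ['causes', 'exc_args', 'exc_type_names', 'exception_str', 'traceback_str', 'version']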
:param include_args: boolean indicating whether to include the exception args in the output. """ return { 'exception_str': self.exception_str, 'traceback_str': self.traceback_str, 'exc_type_names': list(self), 'version': self.DICT_VERSION, 'exc_args': self.exception_args if include_args else tuple(), 'causes': [f.to_dict() for f in self.causes], } def copy(self): """Copies this object.""" return Failure(exc_info=_copy_exc_info(self.exc_info), exception_str=self.exception_str, traceback_str=self.traceback_str, exc_args=self.exception_args, exc_type_names=self._exc_type_names[:], causes=self._causes) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/graph.py0000664000175000017500000002441600000000000020062 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import os import networkx as nx from networkx.drawing import nx_pydot import six def _common_format(g, edge_notation): lines = [] lines.append("Name: %s" % g.name) lines.append("Type: %s" % type(g).__name__) lines.append("Frozen: %s" % nx.is_frozen(g)) lines.append("Density: %0.3f" % nx.density(g)) lines.append("Nodes: %s" % g.number_of_nodes()) for n, n_data in g.nodes(data=True): if n_data: lines.append(" - %s (%s)" % (n, n_data)) else: lines.append(" - %s" % n) lines.append("Edges: %s" % g.number_of_edges()) for (u, v, e_data) in g.edges(data=True): if e_data: lines.append(" %s %s %s (%s)" % (u, edge_notation, v, e_data)) else: lines.append(" %s %s %s" % (u, edge_notation, v)) return lines class Graph(nx.Graph): """A graph subclass with useful utility functions.""" def __init__(self, incoming_graph_data=None, name=''): super(Graph, self).__init__(incoming_graph_data=incoming_graph_data, name=name) self.frozen = False def freeze(self): """Freezes the graph so that no more mutations can occur.""" if not self.frozen: nx.freeze(self) return self def export_to_dot(self): """Exports the graph to a dot format (requires pydot library).""" return nx_pydot.to_pydot(self).to_string() def pformat(self): """Pretty formats your graph into a string.""" return os.linesep.join(_common_format(self, "<->")) def add_edge(self, u, v, attr_dict=None, **attr): """Add an edge between u and v.""" if attr_dict is not None: return super(Graph, self).add_edge(u, v, **attr_dict) return super(Graph, self).add_edge(u, v, **attr) def add_node(self, n, attr_dict=None, **attr): """Add a single node n and update node attributes.""" if attr_dict is not None: return super(Graph, self).add_node(n, **attr_dict) return super(Graph, self).add_node(n, **attr) def fresh_copy(self): """Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. 
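        For example (the node name is arbitrary)::

            >>> g = Graph(name='example')
            >>> g.add_node('a')
            >>> g.number_of_nodes()
            1
            >>> g.fresh_copy().number_of_nodes()
            0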
""" return Graph() class DiGraph(nx.DiGraph): """A directed graph subclass with useful utility functions.""" def __init__(self, incoming_graph_data=None, name=''): super(DiGraph, self).__init__(incoming_graph_data=incoming_graph_data, name=name) self.frozen = False def freeze(self): """Freezes the graph so that no more mutations can occur.""" if not self.frozen: nx.freeze(self) return self def get_edge_data(self, u, v, default=None): """Returns a *copy* of the edge attribute dictionary between (u, v). NOTE(harlowja): this differs from the networkx get_edge_data() as that function does not return a copy (but returns a reference to the actual edge data). """ try: return dict(self.adj[u][v]) except KeyError: return default def topological_sort(self): """Return a list of nodes in this graph in topological sort order.""" return nx.topological_sort(self) def pformat(self): """Pretty formats your graph into a string. This pretty formatted string representation includes many useful details about your graph, including; name, type, frozeness, node count, nodes, edge count, edges, graph density and graph cycles (if any). """ lines = _common_format(self, "->") cycles = list(nx.cycles.recursive_simple_cycles(self)) lines.append("Cycles: %s" % len(cycles)) for cycle in cycles: buf = six.StringIO() buf.write("%s" % (cycle[0])) for i in range(1, len(cycle)): buf.write(" --> %s" % (cycle[i])) buf.write(" --> %s" % (cycle[0])) lines.append(" %s" % buf.getvalue()) return os.linesep.join(lines) def export_to_dot(self): """Exports the graph to a dot format (requires pydot library).""" return nx_pydot.to_pydot(self).to_string() def is_directed_acyclic(self): """Returns if this graph is a DAG or not.""" return nx.is_directed_acyclic_graph(self) def no_successors_iter(self): """Returns an iterator for all nodes with no successors.""" for n in self.nodes: if not len(list(self.successors(n))): yield n def no_predecessors_iter(self): """Returns an iterator for all nodes with no predecessors.""" for n in self.nodes: if not len(list(self.predecessors(n))): yield n def bfs_predecessors_iter(self, n): """Iterates breadth first over *all* predecessors of a given node. This will go through the nodes predecessors, then the predecessor nodes predecessors and so on until no more predecessors are found. NOTE(harlowja): predecessor cycles (if they exist) will not be iterated over more than once (this prevents infinite iteration). """ visited = set([n]) queue = collections.deque(self.predecessors(n)) while queue: pred = queue.popleft() if pred not in visited: yield pred visited.add(pred) for pred_pred in self.predecessors(pred): if pred_pred not in visited: queue.append(pred_pred) def add_edge(self, u, v, attr_dict=None, **attr): """Add an edge between u and v.""" if attr_dict is not None: return super(DiGraph, self).add_edge(u, v, **attr_dict) return super(DiGraph, self).add_edge(u, v, **attr) def add_node(self, n, attr_dict=None, **attr): """Add a single node n and update node attributes.""" if attr_dict is not None: return super(DiGraph, self).add_node(n, **attr_dict) return super(DiGraph, self).add_node(n, **attr) def fresh_copy(self): """Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. """ return DiGraph() class OrderedDiGraph(DiGraph): """A directed graph subclass with useful utility functions. 
This derivative retains node, edge, insertion and iteration ordering (so that the iteration order matches the insertion order). """ node_dict_factory = collections.OrderedDict adjlist_outer_dict_factory = collections.OrderedDict adjlist_inner_dict_factory = collections.OrderedDict edge_attr_dict_factory = collections.OrderedDict def fresh_copy(self): """Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. """ return OrderedDiGraph() class OrderedGraph(Graph): """A graph subclass with useful utility functions. This derivative retains node, edge, insertion and iteration ordering (so that the iteration order matches the insertion order). """ node_dict_factory = collections.OrderedDict adjlist_outer_dict_factory = collections.OrderedDict adjlist_inner_dict_factory = collections.OrderedDict edge_attr_dict_factory = collections.OrderedDict def fresh_copy(self): """Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. """ return OrderedGraph() def merge_graphs(graph, *graphs, **kwargs): """Merges a bunch of graphs into a new graph. If no additional graphs are provided the first graph is returned unmodified otherwise the merged graph is returned. """ tmp_graph = graph allow_overlaps = kwargs.get('allow_overlaps', False) overlap_detector = kwargs.get('overlap_detector') if overlap_detector is not None and not six.callable(overlap_detector): raise ValueError("Overlap detection callback expected to be callable") elif overlap_detector is None: overlap_detector = (lambda to_graph, from_graph: len(to_graph.subgraph(from_graph.nodes))) for g in graphs: # This should ensure that the nodes to be merged do not already exist # in the graph that is to be merged into. This could be problematic if # there are duplicates. if not allow_overlaps: # Attempt to induce a subgraph using the to be merged graphs nodes # and see if any graph results. overlaps = overlap_detector(graph, g) if overlaps: raise ValueError("Can not merge graph %s into %s since there " "are %s overlapping nodes (and we do not " "support merging nodes)" % (g, graph, overlaps)) graph = nx.algorithms.compose(graph, g) # Keep the first graphs name. if graphs: graph.name = tmp_graph.name return graph ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/latch.py0000664000175000017500000000422100000000000020044 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import threading from oslo_utils import timeutils class Latch(object): """A class that ensures N-arrivals occur before unblocking. 
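    A minimal illustrative usage (the count and timeout values are
    arbitrary)::

        >>> latch = Latch(1)
        >>> latch.needed
        1
        >>> latch.countdown()
        >>> latch.wait(timeout=0.1)
        True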
TODO(harlowja): replace with http://bugs.python.org/issue8777 when we no longer have to support python 2.6 or 2.7 and we can only support 3.2 or later. """ def __init__(self, count): count = int(count) if count <= 0: raise ValueError("Count must be greater than zero") self._count = count self._cond = threading.Condition() @property def needed(self): """Returns how many decrements are needed before latch is released.""" return max(0, self._count) def countdown(self): """Decrements the internal counter due to an arrival.""" with self._cond: self._count -= 1 if self._count <= 0: self._cond.notify_all() def wait(self, timeout=None): """Waits until the latch is released. :param timeout: wait until the timeout expires :type timeout: number :returns: true if the latch has been released before the timeout expires otherwise false :rtype: boolean """ watch = timeutils.StopWatch(duration=timeout) watch.start() with self._cond: while self._count > 0: if watch.expired(): return False else: self._cond.wait(watch.leftover(return_none=True)) return True ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/notifier.py0000664000175000017500000003343700000000000020603 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import copy import logging from oslo_utils import reflection import six LOG = logging.getLogger(__name__) class Listener(object): """Immutable helper that represents a notification listener/target.""" def __init__(self, callback, args=None, kwargs=None, details_filter=None): """Initialize members :param callback: callback function :param details_filter: a callback that will be called before the actual callback that can be used to discard the event (thus avoiding the invocation of the actual callback) :param args: non-keyworded arguments :type args: list/iterable/tuple :param kwargs: key-value pair arguments :type kwargs: dictionary """ self._callback = callback self._details_filter = details_filter if not args: self._args = () else: if not isinstance(args, tuple): self._args = tuple(args) else: self._args = args if not kwargs: self._kwargs = {} else: self._kwargs = kwargs.copy() @property def callback(self): """Callback (can not be none) to call with event + details.""" return self._callback @property def details_filter(self): """Callback (may be none) to call to discard events + details.""" return self._details_filter @property def kwargs(self): """Dictionary of keyword arguments to use in future calls.""" return self._kwargs.copy() @property def args(self): """Tuple of positional arguments to use in future calls.""" return self._args def __call__(self, event_type, details): """Activate the target callback with the given event + details. 
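        A small illustrative usage (the callback, event type and details
        used here are arbitrary)::

            >>> def on_event(event_type, details):
            ...     print(event_type, details)
            >>> listener = Listener(on_event)
            >>> listener('started', {'uuid': 'XYZ'})
            started {'uuid': 'XYZ'}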
NOTE(harlowja): if a details filter callback exists and it returns a falsey value when called with the provided ``details``, then the target callback will **not** be called. """ if self._details_filter is not None: if not self._details_filter(details): return kwargs = self._kwargs.copy() kwargs['details'] = details self._callback(event_type, *self._args, **kwargs) def __repr__(self): repr_msg = "%s object at 0x%x calling into '%r'" % ( reflection.get_class_name(self, fully_qualified=False), id(self), self._callback) if self._details_filter is not None: repr_msg += " using details filter '%r'" % self._details_filter return "<%s>" % repr_msg def is_equivalent(self, callback, details_filter=None): """Check if the callback is same :param callback: callback used for comparison :param details_filter: callback used for comparison :returns: false if not the same callback, otherwise true :rtype: boolean """ if not reflection.is_same_callback(self._callback, callback): return False if details_filter is not None: if self._details_filter is None: return False else: return reflection.is_same_callback(self._details_filter, details_filter) else: return self._details_filter is None def __eq__(self, other): if isinstance(other, Listener): return self.is_equivalent(other._callback, details_filter=other._details_filter) else: return NotImplemented def __ne__(self, other): return not self.__eq__(other) class Notifier(object): """A notification (`pub/sub`_ *like*) helper class. It is intended to be used to subscribe to notifications of events occurring as well as allow a entity to post said notifications to any associated subscribers without having either entity care about how this notification occurs. **Not** thread-safe when a single notifier is mutated at the same time by multiple threads. For example having multiple threads call into :py:meth:`.register` or :py:meth:`.reset` at the same time could potentially end badly. It is thread-safe when only :py:meth:`.notify` calls or other read-only actions (like calling into :py:meth:`.is_registered`) are occurring at the same time. .. _pub/sub: http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern """ #: Keys that can *not* be used in callbacks arguments RESERVED_KEYS = ('details',) #: Kleene star constant that is used to receive all notifications ANY = '*' #: Events which can *not* be used to trigger notifications _DISALLOWED_NOTIFICATION_EVENTS = set([ANY]) def __init__(self): self._topics = collections.defaultdict(list) def __len__(self): """Returns how many callbacks are registered. :returns: count of how many callbacks are registered :rtype: number """ count = 0 for (_event_type, listeners) in six.iteritems(self._topics): count += len(listeners) return count def is_registered(self, event_type, callback, details_filter=None): """Check if a callback is registered. :returns: checks if the callback is registered :rtype: boolean """ for listener in self._topics.get(event_type, []): if listener.is_equivalent(callback, details_filter=details_filter): return True return False def reset(self): """Forget all previously registered callbacks.""" self._topics.clear() def notify(self, event_type, details): """Notify about event occurrence. All callbacks registered to receive notifications about given event type will be called. If the provided event type can not be used to emit notifications (this is checked via the :meth:`.can_be_registered` method) then it will silently be dropped (notification failures are not allowed to cause or raise exceptions). 
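        A small illustrative usage (the event name and details used here
        are arbitrary)::

            >>> def on_finish(event_type, details):
            ...     print("Got", event_type, details)
            >>> notifier = Notifier()
            >>> notifier.register('finished', on_finish)
            >>> notifier.notify('finished', {'result': 42})
            Got finished {'result': 42}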
:param event_type: event type that occurred :param details: additional event details *dictionary* passed to callback keyword argument with the same name :type details: dictionary """ if not self.can_trigger_notification(event_type): LOG.debug("Event type '%s' is not allowed to trigger" " notifications", event_type) return listeners = list(self._topics.get(self.ANY, [])) listeners.extend(self._topics.get(event_type, [])) if not listeners: return if not details: details = {} for listener in listeners: try: listener(event_type, details.copy()) except Exception: LOG.warning("Failure calling listener %s to notify about event" " %s, details: %s", listener, event_type, details, exc_info=True) def register(self, event_type, callback, args=None, kwargs=None, details_filter=None): """Register a callback to be called when event of a given type occurs. Callback will be called with provided ``args`` and ``kwargs`` and when event type occurs (or on any event if ``event_type`` equals to :attr:`.ANY`). It will also get additional keyword argument, ``details``, that will hold event details provided to the :meth:`.notify` method (if a details filter callback is provided then the target callback will *only* be triggered if the details filter callback returns a truthy value). :param event_type: event type input :param callback: function callback to be registered. :param args: non-keyworded arguments :type args: list :param kwargs: key-value pair arguments :type kwargs: dictionary """ if not six.callable(callback): raise ValueError("Event callback must be callable") if details_filter is not None: if not six.callable(details_filter): raise ValueError("Details filter must be callable") if not self.can_be_registered(event_type): raise ValueError("Disallowed event type '%s' can not have a" " callback registered" % event_type) if self.is_registered(event_type, callback, details_filter=details_filter): raise ValueError("Event callback already registered with" " equivalent details filter") if kwargs: for k in self.RESERVED_KEYS: if k in kwargs: raise KeyError("Reserved key '%s' not allowed in " "kwargs" % k) self._topics[event_type].append( Listener(callback, args=args, kwargs=kwargs, details_filter=details_filter)) def deregister(self, event_type, callback, details_filter=None): """Remove a single listener bound to event ``event_type``. :param event_type: deregister listener bound to event_type """ if event_type not in self._topics: return False for i, listener in enumerate(self._topics.get(event_type, [])): if listener.is_equivalent(callback, details_filter=details_filter): self._topics[event_type].pop(i) return True return False def deregister_event(self, event_type): """Remove a group of listeners bound to event ``event_type``. :param event_type: deregister listeners bound to event_type """ return len(self._topics.pop(event_type, [])) def copy(self): c = copy.copy(self) c._topics = collections.defaultdict(list) for (event_type, listeners) in six.iteritems(self._topics): c._topics[event_type] = listeners[:] return c def listeners_iter(self): """Return an iterator over the mapping of event => listeners bound. NOTE(harlowja): Each listener in the yielded (event, listeners) tuple is an instance of the :py:class:`~.Listener` type, which itself wraps a provided callback (and its details filter callback, if any). 
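        A small illustrative usage (the registered event and callback are
        arbitrary)::

            >>> notifier = Notifier()
            >>> notifier.register('started', lambda event_type, details: None)
            >>> [event for event, _listeners in notifier.listeners_iter()]
            ['started']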
""" for event_type, listeners in six.iteritems(self._topics): if listeners: yield (event_type, listeners) def can_be_registered(self, event_type): """Checks if the event can be registered/subscribed to.""" return True def can_trigger_notification(self, event_type): """Checks if the event can trigger a notification. :param event_type: event that needs to be verified :returns: whether the event can trigger a notification :rtype: boolean """ if event_type in self._DISALLOWED_NOTIFICATION_EVENTS: return False else: return True class RestrictedNotifier(Notifier): """A notification class that restricts events registered/triggered. NOTE(harlowja): This class unlike :class:`.Notifier` restricts and disallows registering callbacks for event types that are not declared when constructing the notifier. """ def __init__(self, watchable_events, allow_any=True): super(RestrictedNotifier, self).__init__() self._watchable_events = frozenset(watchable_events) self._allow_any = allow_any def events_iter(self): """Returns iterator of events that can be registered/subscribed to. NOTE(harlowja): does not include back the ``ANY`` event type as that meta-type is not a specific event but is a capture-all that does not imply the same meaning as specific event types. """ for event_type in self._watchable_events: yield event_type def can_be_registered(self, event_type): """Checks if the event can be registered/subscribed to. :param event_type: event that needs to be verified :returns: whether the event can be registered/subscribed to :rtype: boolean """ return (event_type in self._watchable_events or (event_type == self.ANY and self._allow_any)) @contextlib.contextmanager def register_deregister(notifier, event_type, callback=None, args=None, kwargs=None, details_filter=None): """Context manager that registers a callback, then deregisters on exit. NOTE(harlowja): if the callback is none, then this registers nothing, which is different from the behavior of the ``register`` method which will *not* accept none as it is not callable... """ if callback is None: yield else: notifier.register(event_type, callback, args=args, kwargs=kwargs, details_filter=details_filter) try: yield finally: notifier.deregister(event_type, callback, details_filter=details_filter) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/sets.py0000664000175000017500000000766700000000000017750 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from collections import abc import itertools import six # Used for values that don't matter in sets backed by dicts... 
_sentinel = object() def _merge_in(target, iterable=None, sentinel=_sentinel): """Merges iterable into the target and returns the target.""" if iterable is not None: for value in iterable: target.setdefault(value, sentinel) return target class OrderedSet(abc.Set, abc.Hashable): """A read-only hashable set that retains insertion/initial ordering. It should work in all existing places that ``frozenset`` is used. See: https://mail.python.org/pipermail/python-ideas/2009-May/004567.html for an idea thread that *may* eventually (*someday*) result in this (or similar) code being included in the mainline python codebase (although the end result of that thread is somewhat discouraging in that regard). """ __slots__ = ['_data'] def __init__(self, iterable=None): self._data = _merge_in(collections.OrderedDict(), iterable) def __hash__(self): return self._hash() def __contains__(self, value): return value in self._data def __len__(self): return len(self._data) def __iter__(self): for value in six.iterkeys(self._data): yield value def __setstate__(self, items): self.__init__(iterable=iter(items)) def __getstate__(self): return tuple(self) def __repr__(self): return "%s(%s)" % (type(self).__name__, list(self)) def copy(self): """Return a shallow copy of a set.""" return self._from_iterable(iter(self)) def intersection(self, *sets): """Return the intersection of two or more sets as a new set. (i.e. elements that are common to all of the sets.) """ def absorb_it(sets): for value in iter(self): matches = 0 for s in sets: if value in s: matches += 1 else: break if matches == len(sets): yield value return self._from_iterable(absorb_it(sets)) def issuperset(self, other): """Report whether this set contains another set.""" for value in other: if value not in self: return False return True def issubset(self, other): """Report whether another set contains this set.""" for value in iter(self): if value not in other: return False return True def difference(self, *sets): """Return the difference of two or more sets as a new set. (i.e. all elements that are in this set but not the others.) """ def absorb_it(sets): for value in iter(self): seen = False for s in sets: if value in s: seen = True break if not seen: yield value return self._from_iterable(absorb_it(sets)) def union(self, *sets): """Return the union of sets as a new set. (i.e. all elements that are in either set.) """ return self._from_iterable(itertools.chain(iter(self), *sets)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/timing.py0000664000175000017500000000444300000000000020246 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import threading import six class Timeout(object): """An object which represents a timeout. This object has the ability to be interrupted before the actual timeout is reached. 
""" def __init__(self, value, event_factory=threading.Event): if value < 0: raise ValueError("Timeout value must be greater or" " equal to zero and not '%s'" % (value)) self._value = value self._event = event_factory() @property def value(self): """Immutable value of the internally used timeout.""" return self._value def interrupt(self): """Forcefully set the timeout (releases any waiters).""" self._event.set() def is_stopped(self): """Returns if the timeout has been interrupted.""" return self._event.is_set() def wait(self): """Block current thread (up to timeout) and wait until interrupted.""" self._event.wait(self._value) def reset(self): """Reset so that interruption (and waiting) can happen again.""" self._event.clear() def convert_to_timeout(value=None, default_value=None, event_factory=threading.Event): """Converts a given value to a timeout instance (and returns it). Does nothing if the value provided is already a timeout instance. """ if value is None: value = default_value if isinstance(value, (int, float) + six.string_types): return Timeout(float(value), event_factory=event_factory) elif isinstance(value, Timeout): return value else: raise ValueError("Invalid timeout literal '%s'" % (value)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/types/tree.py0000664000175000017500000003673100000000000017723 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import itertools import os import six from taskflow.types import graph from taskflow.utils import iter_utils from taskflow.utils import misc class FrozenNode(Exception): """Exception raised when a frozen node is modified.""" def __init__(self): super(FrozenNode, self).__init__("Frozen node(s) can't be modified") class _DFSIter(object): """Depth first iterator (non-recursive) over the child nodes.""" def __init__(self, root, include_self=False, right_to_left=True): self.root = root self.right_to_left = bool(right_to_left) self.include_self = bool(include_self) def __iter__(self): stack = [] if self.include_self: stack.append(self.root) else: if self.right_to_left: stack.extend(self.root.reverse_iter()) else: # Traverse the left nodes first to the right nodes. stack.extend(iter(self.root)) while stack: # Visit the node. node = stack.pop() yield node if self.right_to_left: stack.extend(node.reverse_iter()) else: # Traverse the left nodes first to the right nodes. stack.extend(iter(node)) class _BFSIter(object): """Breadth first iterator (non-recursive) over the child nodes.""" def __init__(self, root, include_self=False, right_to_left=False): self.root = root self.right_to_left = bool(right_to_left) self.include_self = bool(include_self) def __iter__(self): q = collections.deque() if self.include_self: q.append(self.root) else: if self.right_to_left: q.extend(iter(self.root)) else: # Traverse the left nodes first to the right nodes. 
q.extend(self.root.reverse_iter()) while q: # Visit the node. node = q.popleft() yield node if self.right_to_left: q.extend(iter(node)) else: # Traverse the left nodes first to the right nodes. q.extend(node.reverse_iter()) class Node(object): """A n-ary node class that can be used to create tree structures.""" #: Default string prefix used in :py:meth:`.pformat`. STARTING_PREFIX = "" #: Default string used to create empty space used in :py:meth:`.pformat`. EMPTY_SPACE_SEP = " " HORIZONTAL_CONN = "__" """ Default string used to horizontally connect a node to its parent (used in :py:meth:`.pformat`.). """ VERTICAL_CONN = "|" """ Default string used to vertically connect a node to its parent (used in :py:meth:`.pformat`). """ #: Default line separator used in :py:meth:`.pformat`. LINE_SEP = os.linesep def __init__(self, item, **kwargs): self.item = item self.parent = None self.metadata = dict(kwargs) self.frozen = False self._children = [] def freeze(self): if not self.frozen: # This will DFS until all children are frozen as well, only # after that works do we freeze ourselves (this makes it so # that we don't become frozen if a child node fails to perform # the freeze operation). for n in self: n.freeze() self.frozen = True @misc.disallow_when_frozen(FrozenNode) def add(self, child): """Adds a child to this node (appends to left of existing children). NOTE(harlowja): this will also set the childs parent to be this node. """ child.parent = self self._children.append(child) def empty(self): """Returns if the node is a leaf node.""" return self.child_count() == 0 def path_iter(self, include_self=True): """Yields back the path from this node to the root node.""" if include_self: node = self else: node = self.parent while node is not None: yield node node = node.parent def find_first_match(self, matcher, only_direct=False, include_self=True): """Finds the *first* node that matching callback returns true. This will search not only this node but also any children nodes (in depth first order, from right to left) and finally if nothing is matched then ``None`` is returned instead of a node object. :param matcher: callback that takes one positional argument (a node) and returns true if it matches desired node or false if not. :param only_direct: only look at current node and its direct children (implies that this does not search using depth first). :param include_self: include the current node during searching. :returns: the node that matched (or ``None``) """ if only_direct: if include_self: it = itertools.chain([self], self.reverse_iter()) else: it = self.reverse_iter() else: it = self.dfs_iter(include_self=include_self) return iter_utils.find_first_match(it, matcher) def find(self, item, only_direct=False, include_self=True): """Returns the *first* node for an item if it exists in this node. This will search not only this node but also any children nodes (in depth first order, from right to left) and finally if nothing is matched then ``None`` is returned instead of a node object. :param item: item to look for. :param only_direct: only look at current node and its direct children (implies that this does not search using depth first). :param include_self: include the current node during searching. :returns: the node that matched provided item (or ``None``) """ return self.find_first_match(lambda n: n.item == item, only_direct=only_direct, include_self=include_self) @misc.disallow_when_frozen(FrozenNode) def disassociate(self): """Removes this node from its parent (if any). 
:returns: occurrences of this node that were removed from its parent. """ occurrences = 0 if self.parent is not None: p = self.parent self.parent = None # Remove all instances of this node from its parent. while True: try: p._children.remove(self) except ValueError: break else: occurrences += 1 return occurrences @misc.disallow_when_frozen(FrozenNode) def remove(self, item, only_direct=False, include_self=True): """Removes a item from this nodes children. This will search not only this node but also any children nodes and finally if nothing is found then a value error is raised instead of the normally returned *removed* node object. :param item: item to lookup. :param only_direct: only look at current node and its direct children (implies that this does not search using depth first). :param include_self: include the current node during searching. """ node = self.find(item, only_direct=only_direct, include_self=include_self) if node is None: raise ValueError("Item '%s' not found to remove" % item) else: node.disassociate() return node def __contains__(self, item): """Returns whether item exists in this node or this nodes children. :returns: if the item exists in this node or nodes children, true if the item exists, false otherwise :rtype: boolean """ return self.find(item) is not None def __getitem__(self, index): # NOTE(harlowja): 0 is the right most index, len - 1 is the left most return self._children[index] def pformat(self, stringify_node=None, linesep=LINE_SEP, vertical_conn=VERTICAL_CONN, horizontal_conn=HORIZONTAL_CONN, empty_space=EMPTY_SPACE_SEP, starting_prefix=STARTING_PREFIX): """Formats this node + children into a nice string representation. **Example**:: >>> from taskflow.types import tree >>> yahoo = tree.Node("CEO") >>> yahoo.add(tree.Node("Infra")) >>> yahoo[0].add(tree.Node("Boss")) >>> yahoo[0][0].add(tree.Node("Me")) >>> yahoo.add(tree.Node("Mobile")) >>> yahoo.add(tree.Node("Mail")) >>> print(yahoo.pformat()) CEO |__Infra | |__Boss | |__Me |__Mobile |__Mail """ if stringify_node is None: # Default to making a unicode string out of the nodes item... stringify_node = lambda node: six.text_type(node.item) expected_lines = self.child_count(only_direct=False) + 1 buff = six.StringIO() conn = vertical_conn + horizontal_conn stop_at_parent = self for i, node in enumerate(self.dfs_iter(include_self=True), 1): prefix = [] connected_to_parent = False last_node = node # Walk through *most* of this nodes parents, and form the expected # prefix that each parent should require, repeat this until we # hit the root node (self) and use that as our nodes prefix # string... parent_node_it = iter_utils.while_is_not( node.path_iter(include_self=True), stop_at_parent) for j, parent_node in enumerate(parent_node_it): if parent_node is stop_at_parent: if j > 0: if not connected_to_parent: prefix.append(conn) connected_to_parent = True else: # If the node was connected already then it must # have had more than one parent, so we want to put # the right final starting prefix on (which may be # a empty space or another vertical connector)... last_node = self._children[-1] m = last_node.find_first_match(lambda n: n is node, include_self=False, only_direct=False) if m is not None: prefix.append(empty_space) else: prefix.append(vertical_conn) elif parent_node is node: # Skip ourself... (we only include ourself so that # we can use the 'j' variable to determine if the only # node requested is ourself in the first place); used # in the first conditional here... 
pass else: if not connected_to_parent: prefix.append(conn) spaces = len(horizontal_conn) connected_to_parent = True else: # If we have already been connected to our parent # then determine if this current node is the last # node of its parent (and in that case just put # on more spaces), otherwise put a vertical connector # on and less spaces... if parent_node[-1] is not last_node: prefix.append(vertical_conn) spaces = len(horizontal_conn) else: spaces = len(conn) prefix.append(empty_space * spaces) last_node = parent_node prefix.append(starting_prefix) for prefix_piece in reversed(prefix): buff.write(prefix_piece) buff.write(stringify_node(node)) if i != expected_lines: buff.write(linesep) return buff.getvalue() def child_count(self, only_direct=True): """Returns how many children this node has. This can be either only the direct children of this node or inclusive of all children nodes of this node (children of children and so-on). NOTE(harlowja): it does not account for the current node in this count. """ if not only_direct: return iter_utils.count(self.dfs_iter()) return len(self._children) def __iter__(self): """Iterates over the direct children of this node (right->left).""" for c in self._children: yield c def reverse_iter(self): """Iterates over the direct children of this node (left->right).""" for c in reversed(self._children): yield c def index(self, item): """Finds the child index of a given item, searches in added order.""" index_at = None for (i, child) in enumerate(self._children): if child.item == item: index_at = i break if index_at is None: raise ValueError("%s is not contained in any child" % (item)) return index_at def dfs_iter(self, include_self=False, right_to_left=True): """Depth first iteration (non-recursive) over the child nodes.""" return _DFSIter(self, include_self=include_self, right_to_left=right_to_left) def bfs_iter(self, include_self=False, right_to_left=False): """Breadth first iteration (non-recursive) over the child nodes.""" return _BFSIter(self, include_self=include_self, right_to_left=right_to_left) def to_digraph(self): """Converts this node + its children into a ordered directed graph. The graph returned will have the same structure as the this node and its children (and tree node metadata will be translated into graph node metadata). :returns: a directed graph :rtype: :py:class:`taskflow.types.graph.OrderedDiGraph` """ g = graph.OrderedDiGraph() for node in self.bfs_iter(include_self=True, right_to_left=True): g.add_node(node.item, **node.metadata) if node is not self: g.add_edge(node.parent.item, node.item) return g ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6560426 taskflow-4.6.4/taskflow/utils/0000775000175000017500000000000000000000000016374 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/__init__.py0000664000175000017500000000000000000000000020473 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/async_utils.py0000664000175000017500000000152700000000000021310 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import futurist def make_completed_future(result): """Make and return a future completed with a given result.""" future = futurist.Future() future.set_result(result) return future ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/banner.py0000664000175000017500000000711500000000000020217 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2016 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import string import six from taskflow.utils import misc from taskflow import version BANNER_HEADER = string.Template(""" ___ __ | |_ |ask |low v$version """.strip()) BANNER_HEADER = BANNER_HEADER.substitute(version=version.version_string()) def make_banner(what, chapters): """Makes a taskflow banner string. For example:: >>> from taskflow.utils import banner >>> chapters = { 'Connection details': { 'Topic': 'hello', }, 'Powered by': { 'Executor': 'parallel', }, } >>> print(banner.make_banner('Worker', chapters)) This will output:: ___ __ | |_ |ask |low v1.26.1 *Worker* Connection details: Topic => hello Powered by: Executor => parallel """ buf = misc.StringIO() buf.write_nl(BANNER_HEADER) if chapters: buf.write_nl("*%s*" % what) chapter_names = sorted(six.iterkeys(chapters)) else: buf.write("*%s*" % what) chapter_names = [] for i, chapter_name in enumerate(chapter_names): chapter_contents = chapters[chapter_name] if chapter_contents: buf.write_nl("%s:" % (chapter_name)) else: buf.write("%s:" % (chapter_name)) if isinstance(chapter_contents, dict): section_names = sorted(six.iterkeys(chapter_contents)) for j, section_name in enumerate(section_names): if j + 1 < len(section_names): buf.write_nl(" %s => %s" % (section_name, chapter_contents[section_name])) else: buf.write(" %s => %s" % (section_name, chapter_contents[section_name])) elif isinstance(chapter_contents, (list, tuple, set)): if isinstance(chapter_contents, set): sections = sorted(chapter_contents) else: sections = chapter_contents for j, section in enumerate(sections): if j + 1 < len(sections): buf.write_nl(" %s. %s" % (j + 1, section)) else: buf.write(" %s. 
%s" % (j + 1, section)) else: raise TypeError("Unsupported chapter contents" " type: one of dict, list, tuple, set expected" " and not %s" % type(chapter_contents).__name__) if i + 1 < len(chapter_names): buf.write_nl("") # NOTE(harlowja): this is needed since the template in this file # will always have newlines that end with '\n' (even on different # platforms due to the way this source file is encoded) so we have # to do this little dance to make it platform neutral... if os.linesep != "\n": return misc.fix_newlines(buf.getvalue()) return buf.getvalue() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/eventlet_utils.py0000664000175000017500000000217500000000000022021 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import importutils _eventlet = importutils.try_import('eventlet') EVENTLET_AVAILABLE = bool(_eventlet) def check_for_eventlet(exc=None): """Check if eventlet is available and if not raise a runtime error. :param exc: exception to raise instead of raising a runtime error :type exc: exception """ if not EVENTLET_AVAILABLE: if exc is None: raise RuntimeError('Eventlet is not current available') else: raise exc ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/iter_utils.py0000664000175000017500000001115200000000000021131 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from collections import abc import itertools import six from six.moves import range as compat_range def _ensure_iterable(func): @six.wraps(func) def wrapper(it, *args, **kwargs): if not isinstance(it, abc.Iterable): raise ValueError("Iterable expected, but '%s' is not" " iterable" % it) return func(it, *args, **kwargs) return wrapper @_ensure_iterable def fill(it, desired_len, filler=None): """Iterates over a provided iterator up to the desired length. If the source iterator does not have enough values then the filler value is yielded until the desired length is reached. 
""" if desired_len > 0: count = 0 for value in it: yield value count += 1 if count >= desired_len: return while count < desired_len: yield filler count += 1 @_ensure_iterable def count(it): """Returns how many values in the iterator (depletes the iterator).""" return sum(1 for _value in it) def generate_delays(delay, max_delay, multiplier=2): """Generator/iterator that provides back delays values. The values it generates increments by a given multiple after each iteration (using the max delay as a upper bound). Negative values will never be generated... and it will iterate forever (ie it will never stop generating values). """ if max_delay < 0: raise ValueError("Provided delay (max) must be greater" " than or equal to zero") if delay < 0: raise ValueError("Provided delay must start off greater" " than or equal to zero") if multiplier < 1.0: raise ValueError("Provided multiplier must be greater than" " or equal to 1.0") def _gen_it(): # NOTE(harlowja): Generation is delayed so that validation # can happen before generation/iteration... (instead of # during generation/iteration) curr_delay = delay while True: curr_delay = max(0, min(max_delay, curr_delay)) yield curr_delay curr_delay = curr_delay * multiplier return _gen_it() def unique_seen(its, seen_selector=None): """Yields unique values from iterator(s) (and retains order).""" def _gen_it(all_its): # NOTE(harlowja): Generation is delayed so that validation # can happen before generation/iteration... (instead of # during generation/iteration) seen = set() for it in all_its: for value in it: if seen_selector is not None: maybe_seen_value = seen_selector(value) else: maybe_seen_value = value if maybe_seen_value not in seen: yield value seen.add(maybe_seen_value) all_its = list(its) for it in all_its: if not isinstance(it, abc.Iterable): raise ValueError("Iterable expected, but '%s' is" " not iterable" % it) return _gen_it(all_its) @_ensure_iterable def find_first_match(it, matcher, not_found_value=None): """Searches iterator for first value that matcher callback returns true.""" for value in it: if matcher(value): return value return not_found_value @_ensure_iterable def while_is_not(it, stop_value): """Yields given values from iterator until stop value is passed. This uses the ``is`` operator to determine equivalency (and not the ``==`` operator). """ for value in it: yield value if value is stop_value: break def iter_forever(limit): """Yields values from iterator until a limit is reached. if limit is negative, we iterate forever. """ if limit < 0: i = itertools.count() while True: yield next(i) else: for i in compat_range(0, limit): yield i ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/kazoo_utils.py0000664000175000017500000002261400000000000021316 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from kazoo import client from kazoo import exceptions as k_exc from oslo_utils import reflection import six from six.moves import zip as compat_zip from taskflow import exceptions as exc from taskflow import logging LOG = logging.getLogger(__name__) def _parse_hosts(hosts): if isinstance(hosts, six.string_types): return hosts.strip() if isinstance(hosts, (dict)): host_ports = [] for (k, v) in six.iteritems(hosts): host_ports.append("%s:%s" % (k, v)) hosts = host_ports if isinstance(hosts, (list, set, tuple)): return ",".join([str(h) for h in hosts]) return hosts def prettify_failures(failures, limit=-1): """Prettifies a checked commits failures (ignores sensitive data...).""" prettier = [] for (op, r) in failures: pretty_op = reflection.get_class_name(op, fully_qualified=False) # Pick off a few attributes that are meaningful (but one that don't # show actual data, which might not be desired to show...). selected_attrs = [ "path=%r" % op.path, ] try: if op.version != -1: selected_attrs.append("version=%s" % op.version) except AttributeError: pass pretty_op += "(%s)" % (", ".join(selected_attrs)) pretty_cause = reflection.get_class_name(r, fully_qualified=False) prettier.append("%s@%s" % (pretty_cause, pretty_op)) if limit <= 0 or len(prettier) <= limit: return ", ".join(prettier) else: leftover = prettier[limit:] prettier = prettier[0:limit] return ", ".join(prettier) + " and %s more..." % len(leftover) class KazooTransactionException(k_exc.KazooException): """Exception raised when a checked commit fails.""" def __init__(self, message, failures): super(KazooTransactionException, self).__init__(message) self._failures = tuple(failures) @property def failures(self): return self._failures def checked_commit(txn): """Commits a kazoo transcation and validates the result. NOTE(harlowja): Until https://github.com/python-zk/kazoo/pull/224 is fixed or a similar pull request is merged we have to workaround the transaction failing silently. """ if not txn.operations: return [] results = txn.commit() failures = [] for op, result in compat_zip(txn.operations, results): if isinstance(result, k_exc.KazooException): failures.append((op, result)) if len(results) < len(txn.operations): raise KazooTransactionException( "Transaction returned %s results, this is less than" " the number of expected transaction operations %s" % (len(results), len(txn.operations)), failures) if len(results) > len(txn.operations): raise KazooTransactionException( "Transaction returned %s results, this is greater than" " the number of expected transaction operations %s" % (len(results), len(txn.operations)), failures) if failures: raise KazooTransactionException( "Transaction with %s operations failed: %s" % (len(txn.operations), prettify_failures(failures, limit=1)), failures) return results def finalize_client(client): """Stops and closes a client, even if it wasn't started.""" client.stop() client.close() def check_compatible(client, min_version=None, max_version=None): """Checks if a kazoo client is backed by a zookeeper server version. This check will verify that the zookeeper server version that the client is connected to satisfies a given minimum version (inclusive) and maximum (inclusive) version range. If the server is not in the provided version range then a exception is raised indiciating this. 
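    For example (assuming ``client`` is an already started kazoo client and
    a zookeeper server of at least version 3.4.0 is required)::

        check_compatible(client, min_version=(3, 4, 0))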
""" server_version = None if min_version: server_version = tuple((int(a) for a in client.server_version())) min_version = tuple((int(a) for a in min_version)) if server_version < min_version: pretty_server_version = ".".join([str(a) for a in server_version]) min_version = ".".join([str(a) for a in min_version]) raise exc.IncompatibleVersion("Incompatible zookeeper version" " %s detected, zookeeper >= %s" " required" % (pretty_server_version, min_version)) if max_version: if server_version is None: server_version = tuple((int(a) for a in client.server_version())) max_version = tuple((int(a) for a in max_version)) if server_version > max_version: pretty_server_version = ".".join([str(a) for a in server_version]) max_version = ".".join([str(a) for a in max_version]) raise exc.IncompatibleVersion("Incompatible zookeeper version" " %s detected, zookeeper <= %s" " required" % (pretty_server_version, max_version)) def make_client(conf): """Creates a `kazoo`_ `client`_ given a configuration dictionary. :param conf: configuration dictionary that will be used to configure the created client :type conf: dict The keys that will be extracted are: - ``read_only``: boolean that specifies whether to allow connections to read only servers, defaults to ``False`` - ``randomize_hosts``: boolean that specifies whether to randomize host lists provided, defaults to ``False`` - ``command_retry``: a kazoo `retry`_ object (or dict of options which will be used for creating one) that will be used for retrying commands that are executed - ``connection_retry``: a kazoo `retry`_ object (or dict of options which will be used for creating one) that will be used for retrying connection failures that occur - ``hosts``: a string, list, set (or dict with host keys) that will specify the hosts the kazoo client should be connected to, if none is provided then ``localhost:2181`` will be used by default - ``timeout``: a float value that specifies the default timeout that the kazoo client will use - ``handler``: a kazoo handler object that can be used to provide the client with alternate async strategies (the default is `thread`_ based, but `gevent`_, or `eventlet`_ ones can be provided as needed) - ``keyfile`` : SSL keyfile to use for authentication - ``keyfile_password``: SSL keyfile password - ``certfile``: SSL certfile to use for authentication - ``ca``: SSL CA file to use for authentication - ``use_ssl``: argument to control whether SSL is used or not - ``verify_certs``: when using SSL, argument to bypass certs verification .. _client: https://kazoo.readthedocs.io/en/latest/api/client.html .. _kazoo: https://kazoo.readthedocs.io/ .. _retry: https://kazoo.readthedocs.io/en/latest/api/retry.html .. _gevent: https://kazoo.readthedocs.io/en/latest/api/\ handlers/gevent.html .. _eventlet: https://kazoo.readthedocs.io/en/latest/api/\ handlers/eventlet.html .. 
_thread: https://kazoo.readthedocs.io/en/latest/api/\ handlers/threading.html """ # See: https://kazoo.readthedocs.io/en/latest/api/client.html client_kwargs = { 'read_only': bool(conf.get('read_only')), 'randomize_hosts': bool(conf.get('randomize_hosts')), 'logger': LOG, 'keyfile': conf.get('keyfile', None), 'keyfile_password': conf.get('keyfile_password', None), 'certfile': conf.get('certfile', None), 'use_ssl': conf.get('use_ssl', False), 'verify_certs': conf.get('verify_certs', True), } # See: https://kazoo.readthedocs.io/en/latest/api/retry.html if 'command_retry' in conf: client_kwargs['command_retry'] = conf['command_retry'] if 'connection_retry' in conf: client_kwargs['connection_retry'] = conf['connection_retry'] hosts = _parse_hosts(conf.get("hosts", "localhost:2181")) if not hosts or not isinstance(hosts, six.string_types): raise TypeError("Invalid hosts format, expected " "non-empty string/list, not '%s' (%s)" % (hosts, type(hosts))) client_kwargs['hosts'] = hosts if 'timeout' in conf: client_kwargs['timeout'] = float(conf['timeout']) # Kazoo supports various handlers, gevent, threading, eventlet... # allow the user of this client object to optionally specify one to be # used. if 'handler' in conf: client_kwargs['handler'] = conf['handler'] return client.KazooClient(**client_kwargs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/kombu_utils.py0000664000175000017500000000474000000000000021310 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Keys extracted from the message properties when formatting... _MSG_PROPERTIES = tuple([ 'correlation_id', 'delivery_info/routing_key', 'type', ]) class DelayedPretty(object): """Wraps a message and delays prettifying it until requested. TODO(harlowja): remove this when https://github.com/celery/kombu/pull/454/ is merged and a release is made that contains it (since that pull request is equivalent and/or better than this). """ def __init__(self, message): self._message = message self._message_pretty = None def __str__(self): if self._message_pretty is None: self._message_pretty = _prettify_message(self._message) return self._message_pretty def _get_deep(properties, *keys): """Get a final key among a list of keys (each with its own sub-dict).""" for key in keys: properties = properties[key] return properties def _prettify_message(message): """Kombu doesn't currently have a useful ``__str__()`` or ``__repr__()``. This provides something decent(ish) for debugging (or other purposes) so that messages are more nice and understandable.... 
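For example, the prettified form looks roughly like the following (the
delivery tag and property values shown are illustrative only)::

    1: {'content_type': 'application/json', 'routing_key': 'reply', 'body_length': 42}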
""" if message.content_type is not None: properties = { 'content_type': message.content_type, } else: properties = {} for name in _MSG_PROPERTIES: segments = name.split("/") try: value = _get_deep(message.properties, *segments) except (KeyError, ValueError, TypeError): pass else: if value is not None: properties[segments[-1]] = value if message.body is not None: properties['body_length'] = len(message.body) return "%(delivery_tag)s: %(properties)s" % { 'delivery_tag': message.delivery_tag, 'properties': properties, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/misc.py0000664000175000017500000004424000000000000017705 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # Copyright (C) 2013 Rackspace Hosting All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections.abc import contextlib import datetime import inspect import os import re import socket import sys import threading import types import enum from oslo_serialization import jsonutils from oslo_serialization import msgpackutils from oslo_utils import encodeutils from oslo_utils import importutils from oslo_utils import netutils from oslo_utils import reflection import six from taskflow.types import failure UNKNOWN_HOSTNAME = "" NUMERIC_TYPES = six.integer_types + (float,) # NOTE(imelnikov): regular expression to get scheme from URI, # see RFC 3986 section 3.1 _SCHEME_REGEX = re.compile(r"^([A-Za-z][A-Za-z0-9+.-]*):") class StrEnum(str, enum.Enum): """An enumeration that is also a string and can be compared to strings.""" def __new__(cls, *args, **kwargs): for a in args: if not isinstance(a, str): raise TypeError("Enumeration '%s' (%s) is not" " a string" % (a, type(a).__name__)) return super(StrEnum, cls).__new__(cls, *args, **kwargs) class StringIO(six.StringIO): """String buffer with some small additions.""" def write_nl(self, value, linesep=os.linesep): self.write(value) self.write(linesep) class BytesIO(six.BytesIO): """Byte buffer with some small additions.""" def reset(self): self.seek(0) self.truncate() def get_hostname(unknown_hostname=UNKNOWN_HOSTNAME): """Gets the machines hostname; if not able to returns an invalid one.""" try: hostname = socket.getfqdn() if not hostname: return unknown_hostname else: return hostname except socket.error: return unknown_hostname def match_type(obj, matchers): """Matches a given object using the given matchers list/iterable. NOTE(harlowja): each element of the provided list/iterable must be tuple of (valid types, result). Returns the result (the second element of the provided tuple) if a type match occurs, otherwise none if no matches are found. """ for (match_types, match_result) in matchers: if isinstance(obj, match_types): return match_result else: return None def countdown_iter(start_at, decr=1): """Generator that decrements after each generation until <= zero. 
NOTE(harlowja): we can likely remove this when we can use an ``itertools.count`` that takes a step (on py2.6 which we still support that step parameter does **not** exist and therefore can't be used). """ if decr <= 0: raise ValueError("Decrement value must be greater" " than zero and not %s" % decr) while start_at > 0: yield start_at start_at -= decr def extract_driver_and_conf(conf, conf_key): """Common function to get a driver name and its configuration.""" if isinstance(conf, six.string_types): conf = {conf_key: conf} maybe_uri = conf[conf_key] try: uri = parse_uri(maybe_uri) except (TypeError, ValueError): return (maybe_uri, conf) else: return (uri.scheme, merge_uri(uri, conf.copy())) def reverse_enumerate(items): """Like reversed(enumerate(items)) but with less copying/cloning...""" for i in countdown_iter(len(items)): yield i - 1, items[i - 1] def merge_uri(uri, conf): """Merges a parsed uri into the given configuration dictionary. Merges the username, password, hostname, port, and query parameters of a URI into the given configuration dictionary (it does **not** overwrite existing configuration keys if they already exist) and returns the merged configuration. NOTE(harlowja): does not merge the path, scheme or fragment. """ uri_port = uri.port specials = [ ('username', uri.username, lambda v: bool(v)), ('password', uri.password, lambda v: bool(v)), # NOTE(harlowja): A different check function is used since 0 is # false (when bool(v) is applied), and that is a valid port... ('port', uri_port, lambda v: v is not None), ] hostname = uri.hostname if hostname: if uri_port is not None: hostname += ":%s" % (uri_port) specials.append(('hostname', hostname, lambda v: bool(v))) for (k, v, is_not_empty_value_func) in specials: if is_not_empty_value_func(v): conf.setdefault(k, v) for (k, v) in six.iteritems(uri.params()): conf.setdefault(k, v) return conf def find_subclasses(locations, base_cls, exclude_hidden=True): """Finds subclass types in the given locations. This will examines the given locations for types which are subclasses of the base class type provided and returns the found subclasses (or fails with exceptions if this introspection can not be accomplished). If a string is provided as one of the locations it will be imported and examined if it is a subclass of the base class. If a module is given, all of its members will be examined for attributes which are subclasses of the base class. If a type itself is given it will be examined for being a subclass of the base class. """ derived = set() for item in locations: module = None if isinstance(item, six.string_types): try: pkg, cls = item.split(':') except ValueError: module = importutils.import_module(item) else: obj = importutils.import_class('%s.%s' % (pkg, cls)) if not reflection.is_subclass(obj, base_cls): raise TypeError("Object '%s' (%s) is not a '%s' subclass" % (item, type(item), base_cls)) derived.add(obj) elif isinstance(item, types.ModuleType): module = item elif reflection.is_subclass(item, base_cls): derived.add(item) else: raise TypeError("Object '%s' (%s) is an unexpected type" % (item, type(item))) # If it's a module derive objects from it if we can. 
if module is not None: for (name, obj) in inspect.getmembers(module): if name.startswith("_") and exclude_hidden: continue if reflection.is_subclass(obj, base_cls): derived.add(obj) return derived def pick_first_not_none(*values): """Returns first of values that is *not* None (or None if all are/were).""" for val in values: if val is not None: return val return None def parse_uri(uri): """Parses a uri into its components.""" # Do some basic validation before continuing... if not isinstance(uri, six.string_types): raise TypeError("Can only parse string types to uri data, " "and not '%s' (%s)" % (uri, type(uri))) match = _SCHEME_REGEX.match(uri) if not match: raise ValueError("Uri '%s' does not start with a RFC 3986 compliant" " scheme" % (uri)) return netutils.urlsplit(uri) def disallow_when_frozen(excp_cls): """Frozen checking/raising method decorator.""" def decorator(f): @six.wraps(f) def wrapper(self, *args, **kwargs): if self.frozen: raise excp_cls() else: return f(self, *args, **kwargs) return wrapper return decorator def clamp(value, minimum, maximum, on_clamped=None): """Clamps a value to ensure its >= minimum and <= maximum.""" if minimum > maximum: raise ValueError("Provided minimum '%s' must be less than or equal to" " the provided maximum '%s'" % (minimum, maximum)) if value > maximum: value = maximum if on_clamped is not None: on_clamped() if value < minimum: value = minimum if on_clamped is not None: on_clamped() return value def fix_newlines(text, replacement=os.linesep): """Fixes text that *may* end with wrong nl by replacing with right nl.""" return replacement.join(text.splitlines()) def binary_encode(text, encoding='utf-8', errors='strict'): """Encodes a text string into a binary string using given encoding. Does nothing if data is already a binary string (raises on unknown types). """ if isinstance(text, six.binary_type): return text else: return encodeutils.safe_encode(text, encoding=encoding, errors=errors) def binary_decode(data, encoding='utf-8', errors='strict'): """Decodes a binary string into a text string using given encoding. Does nothing if data is already a text string (raises on unknown types). """ if isinstance(data, six.text_type): return data else: return encodeutils.safe_decode(data, incoming=encoding, errors=errors) def _check_decoded_type(data, root_types=(dict,)): if root_types: if not isinstance(root_types, tuple): root_types = tuple(root_types) if not isinstance(data, root_types): if len(root_types) == 1: root_type = root_types[0] raise ValueError("Expected '%s' root type not '%s'" % (root_type, type(data))) else: raise ValueError("Expected %s root types not '%s'" % (list(root_types), type(data))) return data def decode_msgpack(raw_data, root_types=(dict,)): """Parse raw data to get decoded object. Decodes a msgback encoded 'blob' from a given raw data binary string and checks that the root type of that decoded object is in the allowed set of types (by default a dict should be the root type). """ try: data = msgpackutils.loads(raw_data) except Exception as e: # TODO(harlowja): fix this when msgpackutils exposes the msgpack # exceptions so that we can avoid catching just exception... raise ValueError("Expected msgpack decodable data: %s" % e) else: return _check_decoded_type(data, root_types=root_types) def decode_json(raw_data, root_types=(dict,)): """Parse raw data to get decoded object. 
Decodes a JSON encoded 'blob' from a given raw data binary string and checks that the root type of that decoded object is in the allowed set of types (by default a dict should be the root type). """ try: data = jsonutils.loads(binary_decode(raw_data)) except UnicodeDecodeError as e: raise ValueError("Expected UTF-8 decodable data: %s" % e) except ValueError as e: raise ValueError("Expected JSON decodable data: %s" % e) else: return _check_decoded_type(data, root_types=root_types) class cachedproperty(object): """A *thread-safe* descriptor property that is only evaluated once. This caching descriptor can be placed on instance methods to translate those methods into properties that will be cached in the instance (avoiding repeated attribute checking logic to do the equivalent). NOTE(harlowja): by default the property that will be saved will be under the decorated methods name prefixed with an underscore. For example if we were to attach this descriptor to an instance method 'get_thing(self)' the cached property would be stored under '_get_thing' in the self object after the first call to 'get_thing' occurs. """ def __init__(self, fget=None, require_lock=True): if require_lock: self._lock = threading.RLock() else: self._lock = None # If a name is provided (as an argument) then this will be the string # to place the cached attribute under if not then it will be the # function itself to be wrapped into a property. if inspect.isfunction(fget): self._fget = fget self._attr_name = "_%s" % (fget.__name__) self.__doc__ = getattr(fget, '__doc__', None) else: self._attr_name = fget self._fget = None self.__doc__ = None def __call__(self, fget): # If __init__ received a string or a lock boolean then this will be # the function to be wrapped as a property (if __init__ got a # function then this will not be called). self._fget = fget if not self._attr_name: self._attr_name = "_%s" % (fget.__name__) self.__doc__ = getattr(fget, '__doc__', None) return self def __set__(self, instance, value): raise AttributeError("can't set attribute") def __delete__(self, instance): raise AttributeError("can't delete attribute") def __get__(self, instance, owner): if instance is None: return self # Quick check to see if this already has been made (before acquiring # the lock). This is safe to do since we don't allow deletion after # being created. if hasattr(instance, self._attr_name): return getattr(instance, self._attr_name) else: if self._lock is not None: self._lock.acquire() try: return getattr(instance, self._attr_name) except AttributeError: value = self._fget(instance) setattr(instance, self._attr_name, value) return value finally: if self._lock is not None: self._lock.release() def millis_to_datetime(milliseconds): """Converts number of milliseconds (from epoch) into a datetime object.""" return datetime.datetime.fromtimestamp(float(milliseconds) / 1000) def get_version_string(obj): """Gets a object's version as a string. Returns string representation of object's version taken from its 'version' attribute, or None if object does not have such attribute or its version is None. """ obj_version = getattr(obj, 'version', None) if isinstance(obj_version, (list, tuple)): obj_version = '.'.join(str(item) for item in obj_version) if obj_version is not None and not isinstance(obj_version, six.string_types): obj_version = str(obj_version) return obj_version def sequence_minus(seq1, seq2): """Calculate difference of two sequences. 
Result contains the elements from first sequence that are not present in second sequence, in original order. Works even if sequence elements are not hashable. """ result = list(seq1) for item in seq2: try: result.remove(item) except ValueError: pass return result def as_int(obj, quiet=False): """Converts an arbitrary value into a integer.""" # Try "2" -> 2 try: return int(obj) except (ValueError, TypeError): pass # Try "2.5" -> 2 try: return int(float(obj)) except (ValueError, TypeError): pass # Eck, not sure what this is then. if not quiet: raise TypeError("Can not translate '%s' (%s) to an integer" % (obj, type(obj))) return obj @contextlib.contextmanager def capture_failure(): """Captures the occurring exception and provides a failure object back. This will save the current exception information and yield back a failure object for the caller to use (it will raise a runtime error if no active exception is being handled). This is useful since in some cases the exception context can be cleared, resulting in None being attempted to be saved after an exception handler is run. This can happen when eventlet switches greenthreads or when running an exception handler, code raises and catches an exception. In both cases the exception context will be cleared. To work around this, we save the exception state, yield a failure and then run other code. For example:: >>> from taskflow.utils import misc >>> >>> def cleanup(): ... pass ... >>> >>> def save_failure(f): ... print("Saving %s" % f) ... >>> >>> try: ... raise IOError("Broken") ... except Exception: ... with misc.capture_failure() as fail: ... print("Activating cleanup") ... cleanup() ... save_failure(fail) ... Activating cleanup Saving Failure: IOError: Broken """ exc_info = sys.exc_info() if not any(exc_info): raise RuntimeError("No active exception is being handled") else: yield failure.Failure(exc_info=exc_info) def is_iterable(obj): """Tests an object to to determine whether it is iterable. This function will test the specified object to determine whether it is iterable. String types (both ``str`` and ``unicode``) are ignored and will return False. :param obj: object to be tested for iterable :return: True if object is iterable and is not a string """ return (not isinstance(obj, six.string_types) and isinstance(obj, collections.abc.Iterable)) def safe_copy_dict(obj): """Copy an existing dictionary or default to empty dict... This will return a empty dict if given object is falsey, otherwise it will create a dict of the given object (which if provided a dictionary object will make a shallow copy of that object). """ if not obj: return {} # default to a shallow copy to avoid most ownership issues return dict(obj) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/mixins.py0000664000175000017500000000217500000000000020262 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import six class StrMixin(object): """Mixin that helps deal with the PY2 and PY3 method differences. http://lucumr.pocoo.org/2011/1/22/forwards-compatible-python/ explains why this is quite useful... """ if six.PY2: def __str__(self): try: return self.__bytes__() except AttributeError: return self.__unicode__().encode('utf-8') else: def __str__(self): return self.__unicode__() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/persistence_utils.py0000664000175000017500000000747300000000000022525 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from oslo_utils import uuidutils from taskflow import logging from taskflow.persistence import models LOG = logging.getLogger(__name__) def temporary_log_book(backend=None): """Creates a temporary logbook for temporary usage in the given backend. Mainly useful for tests and other use cases where a temporary logbook is needed for a short-period of time. """ book = models.LogBook('tmp') if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) return book def temporary_flow_detail(backend=None, meta=None): """Creates a temporary flow detail and logbook in the given backend. Mainly useful for tests and other use cases where a temporary flow detail and a temporary logbook is needed for a short-period of time. """ flow_id = uuidutils.generate_uuid() book = temporary_log_book(backend) flow_detail = models.FlowDetail(name='tmp-flow-detail', uuid=flow_id) if meta is not None: if flow_detail.meta is None: flow_detail.meta = {} flow_detail.meta.update(meta) book.add(flow_detail) if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) # Return the one from the saved logbook instead of the local one so # that the freshest version is given back. return book, book.find(flow_id) def create_flow_detail(flow, book=None, backend=None, meta=None): """Creates a flow detail for a flow & adds & saves it in a logbook. This will create a flow detail for the given flow using the flow name, and add it to the provided logbook and then uses the given backend to save the logbook and then returns the created flow detail. If no book is provided a temporary one will be created automatically (no reference to the logbook will be returned, so this should nearly *always* be provided or only used in situations where no logbook is needed, for example in tests). If no backend is provided then no saving will occur and the created flow detail will not be persisted even if the flow detail was added to a given (or temporarily generated) logbook. 
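A short usage sketch (``my_flow`` and ``backend`` are assumed to already
exist and are illustrative only)::

    book = temporary_log_book(backend)
    flow_detail = create_flow_detail(my_flow, book=book, backend=backend)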
""" flow_id = uuidutils.generate_uuid() flow_name = getattr(flow, 'name', None) if flow_name is None: LOG.warning("No name provided for flow %s (id %s)", flow, flow_id) flow_name = flow_id flow_detail = models.FlowDetail(name=flow_name, uuid=flow_id) if meta is not None: if flow_detail.meta is None: flow_detail.meta = {} flow_detail.meta.update(meta) if backend is not None and book is None: LOG.warning("No logbook provided for flow %s, creating one.", flow) book = temporary_log_book(backend) if book is not None: book.add(flow_detail) if backend is not None: with contextlib.closing(backend.get_connection()) as conn: conn.save_logbook(book) # Return the one from the saved logbook instead of the local one so # that the freshest version is given back. return book.find(flow_id) else: return flow_detail ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/redis_utils.py0000664000175000017500000001040500000000000021274 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import enum import redis from redis import exceptions as redis_exceptions import six def _raise_on_closed(meth): @six.wraps(meth) def wrapper(self, *args, **kwargs): if self.closed: raise redis_exceptions.ConnectionError("Connection has been" " closed") return meth(self, *args, **kwargs) return wrapper class RedisClient(redis.StrictRedis): """A redis client that can be closed (and raises on-usage after closed). TODO(harlowja): if https://github.com/andymccurdy/redis-py/issues/613 ever gets resolved or merged or other then we can likely remove this. """ def __init__(self, *args, **kwargs): super(RedisClient, self).__init__(*args, **kwargs) self.closed = False def close(self): self.closed = True self.connection_pool.disconnect() execute_command = _raise_on_closed(redis.StrictRedis.execute_command) transaction = _raise_on_closed(redis.StrictRedis.transaction) pubsub = _raise_on_closed(redis.StrictRedis.pubsub) class UnknownExpire(enum.IntEnum): """Non-expiry (not ttls) results return from :func:`.get_expiry`. See: http://redis.io/commands/ttl or http://redis.io/commands/pttl """ DOES_NOT_EXPIRE = -1 """ The command returns ``-1`` if the key exists but has no associated expire. """ #: The command returns ``-2`` if the key does not exist. 
KEY_NOT_FOUND = -2 DOES_NOT_EXPIRE = UnknownExpire.DOES_NOT_EXPIRE KEY_NOT_FOUND = UnknownExpire.KEY_NOT_FOUND _UNKNOWN_EXPIRE_MAPPING = dict((e.value, e) for e in list(UnknownExpire)) def get_expiry(client, key, prior_version=None): """Gets an expiry for a key (using **best** determined ttl method).""" is_new_enough, _prior_version = is_server_new_enough( client, (2, 6), prior_version=prior_version) if is_new_enough: result = client.pttl(key) try: return _UNKNOWN_EXPIRE_MAPPING[result] except KeyError: return result / 1000.0 else: result = client.ttl(key) try: return _UNKNOWN_EXPIRE_MAPPING[result] except KeyError: return float(result) def apply_expiry(client, key, expiry, prior_version=None): """Applies an expiry to a key (using **best** determined expiry method).""" is_new_enough, _prior_version = is_server_new_enough( client, (2, 6), prior_version=prior_version) if is_new_enough: # Use milliseconds (as that is what pexpire uses/expects...) ms_expiry = expiry * 1000.0 ms_expiry = max(0, int(ms_expiry)) result = client.pexpire(key, ms_expiry) else: # Only supports seconds (not subseconds...) sec_expiry = int(expiry) sec_expiry = max(0, sec_expiry) result = client.expire(key, sec_expiry) return bool(result) def is_server_new_enough(client, min_version, default=False, prior_version=None): """Checks if a client is attached to a new enough redis server.""" if not prior_version: try: server_info = client.info() except redis_exceptions.ResponseError: server_info = {} version_text = server_info.get('redis_version', '') else: version_text = prior_version version_pieces = [] for p in version_text.split("."): try: version_pieces.append(int(p)) except ValueError: break if not version_pieces: return (default, version_text) else: version_pieces = tuple(version_pieces) return (version_pieces >= min_version, version_text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/schema_utils.py0000664000175000017500000000225700000000000021434 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import jsonschema from jsonschema import exceptions as schema_exc # Special jsonschema validation types/adjustments. _SCHEMA_TYPES = { # See: https://github.com/Julian/jsonschema/issues/148 'array': (list, tuple), } # Expose these types so that people don't have to import the same exceptions. ValidationError = schema_exc.ValidationError SchemaError = schema_exc.SchemaError def schema_validate(data, schema): """Validates given data using provided json schema.""" jsonschema.validate(data, schema, types=_SCHEMA_TYPES) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/utils/threading_utils.py0000664000175000017500000001337000000000000022137 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import multiprocessing import threading import six from six.moves import _thread from taskflow.utils import misc def is_alive(thread): """Helper to determine if a thread is alive (handles none safely).""" if not thread: return False return thread.is_alive() def get_ident(): """Return the 'thread identifier' of the current thread.""" return _thread.get_ident() def get_optimal_thread_count(default=2): """Try to guess optimal thread count for current system.""" try: return multiprocessing.cpu_count() + 1 except NotImplementedError: # NOTE(harlowja): apparently may raise so in this case we will # just setup two threads since it's hard to know what else we # should do in this situation. return default def daemon_thread(target, *args, **kwargs): """Makes a daemon thread that calls the given target when started.""" thread = threading.Thread(target=target, args=args, kwargs=kwargs) # NOTE(skudriashev): When the main thread is terminated unexpectedly # and thread is still alive - it will prevent main thread from exiting # unless the daemon property is set to True. thread.daemon = True return thread # Container for thread creator + associated callbacks. _ThreadBuilder = collections.namedtuple('_ThreadBuilder', ['thread_factory', 'before_start', 'after_start', 'before_join', 'after_join']) _ThreadBuilder.fields = tuple([ 'thread_factory', 'before_start', 'after_start', 'before_join', 'after_join', ]) def no_op(*args, **kwargs): """Function that does nothing.""" class ThreadBundle(object): """A group/bundle of threads that start/stop together.""" def __init__(self): self._threads = [] self._lock = threading.Lock() def bind(self, thread_factory, before_start=None, after_start=None, before_join=None, after_join=None): """Adds a thread (to-be) into this bundle (with given callbacks). NOTE(harlowja): callbacks provided should not attempt to call mutating methods (:meth:`.stop`, :meth:`.start`, :meth:`.bind` ...) on this object as that will result in dead-lock since the lock on this object is not meant to be (and is not) reentrant... """ if before_start is None: before_start = no_op if after_start is None: after_start = no_op if before_join is None: before_join = no_op if after_join is None: after_join = no_op builder = _ThreadBuilder(thread_factory, before_start, after_start, before_join, after_join) for attr_name in builder.fields: cb = getattr(builder, attr_name) if not six.callable(cb): raise ValueError("Provided callback for argument" " '%s' must be callable" % attr_name) with self._lock: self._threads.append([ builder, # The built thread. None, # Whether the built thread was started (and should have # ran or still be running). 
False, ]) def start(self): """Creates & starts all associated threads (that are not running).""" count = 0 with self._lock: it = enumerate(self._threads) for i, (builder, thread, started) in it: if thread and started: continue if not thread: self._threads[i][1] = thread = builder.thread_factory() builder.before_start(thread) thread.start() count += 1 try: builder.after_start(thread) finally: # Just incase the 'after_start' callback blows up make sure # we always set this... self._threads[i][2] = started = True return count def stop(self): """Stops & joins all associated threads (that have been started).""" count = 0 with self._lock: it = misc.reverse_enumerate(self._threads) for i, (builder, thread, started) in it: if not thread or not started: continue builder.before_join(thread) thread.join() count += 1 try: builder.after_join(thread) finally: # Just incase the 'after_join' callback blows up make sure # we always set/reset these... self._threads[i][1] = thread = None self._threads[i][2] = started = False return count def __len__(self): """Returns how many threads (to-be) are in this bundle.""" return len(self._threads) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/taskflow/version.py0000664000175000017500000000235700000000000017302 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pkg_resources TASK_VENDOR = "OpenStack Foundation" TASK_PRODUCT = "OpenStack TaskFlow" TASK_PACKAGE = None # OS distro package version suffix try: from pbr import version as pbr_version _version_info = pbr_version.VersionInfo('taskflow') version_string = _version_info.version_string except ImportError: _version_info = pkg_resources.get_distribution('taskflow') version_string = lambda: _version_info.version def version_string_with_package(): if TASK_PACKAGE is None: return version_string() else: return "%s-%s" % (version_string(), TASK_PACKAGE) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6040409 taskflow-4.6.4/taskflow.egg-info/0000775000175000017500000000000000000000000016726 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/PKG-INFO0000664000175000017500000001024600000000000020026 0ustar00zuulzuul00000000000000Metadata-Version: 2.1 Name: taskflow Version: 4.6.4 Summary: Taskflow structured state management library. Home-page: https://docs.openstack.org/taskflow/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/taskflow.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on TaskFlow ======== .. 
image:: https://img.shields.io/pypi/v/taskflow.svg :target: https://pypi.org/project/taskflow/ :alt: Latest Version A library to do [jobs, tasks, flows] in a highly available, easy to understand and declarative manner (and more!) to be used with OpenStack and other projects. * Free software: Apache license * Documentation: https://docs.openstack.org/taskflow/latest/ * Source: https://opendev.org/openstack/taskflow * Bugs: https://bugs.launchpad.net/taskflow/ * Release notes: https://docs.openstack.org/releasenotes/taskflow/ Join us ------- - https://launchpad.net/taskflow Testing and requirements ------------------------ Requirements ~~~~~~~~~~~~ Because this project has many optional (pluggable) parts like persistence backends and engines, we decided to split our requirements into two parts: - things that are absolutely required (you can't use the project without them) are put into ``requirements.txt``. The requirements that are required by some optional part of this project (you can use the project without them) are put into our ``test-requirements.txt`` file (so that we can still test the optional functionality works as expected). If you want to use the feature in question (`eventlet`_ or the worker based engine that uses `kombu`_ or the `sqlalchemy`_ persistence backend or jobboards which have an implementation built using `kazoo`_ ...), you should add that requirement(s) to your project or environment. Tox.ini ~~~~~~~ Our ``tox.ini`` file describes several test environments that allow to test TaskFlow with different python versions and sets of requirements installed. Please refer to the `tox`_ documentation to understand how to make these test environments work for you. Developer documentation ----------------------- We also have sphinx documentation in ``docs/source``. *To build it, run:* :: $ python setup.py build_sphinx .. _kazoo: https://kazoo.readthedocs.io/en/latest/ .. _sqlalchemy: https://www.sqlalchemy.org/ .. _kombu: https://kombu.readthedocs.io/en/latest/ .. _eventlet: http://eventlet.net/ .. 
_tox: https://tox.testrun.org/ Keywords: reliable,tasks,execution,parallel,dataflow,workflows,distributed Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: OpenStack Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: System :: Distributed Computing Requires-Python: >=3.6 Provides-Extra: database Provides-Extra: eventlet Provides-Extra: redis Provides-Extra: test Provides-Extra: workers Provides-Extra: zookeeper ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/SOURCES.txt0000664000175000017500000002776300000000000020631 0ustar00zuulzuul00000000000000.coveragerc .mailmap .pre-commit-config.yaml .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog LICENSE README.rst bindep.txt pylintrc requirements.txt run_tests.sh setup.cfg setup.py test-requirements.txt tox.ini doc/requirements.txt doc/diagrams/area_of_influence.graffle.tgz doc/diagrams/core.graffle.tgz doc/diagrams/jobboard.graffle.tgz doc/diagrams/tasks.graffle.tgz doc/diagrams/worker-engine.graffle.tgz doc/source/conf.py doc/source/index.rst doc/source/templates/layout.html doc/source/user/arguments_and_results.rst doc/source/user/atoms.rst doc/source/user/conductors.rst doc/source/user/engines.rst doc/source/user/examples.rst doc/source/user/exceptions.rst doc/source/user/history.rst doc/source/user/index.rst doc/source/user/inputs_and_outputs.rst doc/source/user/jobs.rst doc/source/user/notifications.rst doc/source/user/patterns.rst doc/source/user/persistence.rst doc/source/user/resumption.rst doc/source/user/shelf.rst doc/source/user/states.rst doc/source/user/types.rst doc/source/user/utils.rst doc/source/user/workers.rst doc/source/user/img/area_of_influence.svg doc/source/user/img/conductor.png doc/source/user/img/conductor_cycle.png doc/source/user/img/distributed_flow_rpc.png doc/source/user/img/engine_states.svg doc/source/user/img/flow_states.svg doc/source/user/img/job_states.svg doc/source/user/img/jobboard.png doc/source/user/img/mandelbrot.png doc/source/user/img/retry_states.svg doc/source/user/img/task_states.svg doc/source/user/img/tasks.png doc/source/user/img/wbe_request_states.svg doc/source/user/img/worker-engine.svg releasenotes/notes/.placeholder releasenotes/notes/add-sentinel-redis-support-9fd16e2a5dd5c0c9.yaml releasenotes/notes/drop-python-2-7-73d3113c69d724d6.yaml releasenotes/notes/fix-endless-loop-on-storage-error-dd4467f0bbc66abf.yaml releasenotes/notes/zookeeper-ssl-support-b9abf24a39096b62.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/ussuri.rst releasenotes/source/victoria.rst releasenotes/source/_static/.placeholder 
releasenotes/source/_templates/.placeholder taskflow/__init__.py taskflow/atom.py taskflow/deciders.py taskflow/exceptions.py taskflow/flow.py taskflow/formatters.py taskflow/logging.py taskflow/retry.py taskflow/states.py taskflow/storage.py taskflow/task.py taskflow/test.py taskflow/version.py taskflow.egg-info/PKG-INFO taskflow.egg-info/SOURCES.txt taskflow.egg-info/dependency_links.txt taskflow.egg-info/entry_points.txt taskflow.egg-info/not-zip-safe taskflow.egg-info/pbr.json taskflow.egg-info/requires.txt taskflow.egg-info/top_level.txt taskflow/conductors/__init__.py taskflow/conductors/base.py taskflow/conductors/backends/__init__.py taskflow/conductors/backends/impl_blocking.py taskflow/conductors/backends/impl_executor.py taskflow/conductors/backends/impl_nonblocking.py taskflow/contrib/__init__.py taskflow/engines/__init__.py taskflow/engines/base.py taskflow/engines/helpers.py taskflow/engines/action_engine/__init__.py taskflow/engines/action_engine/builder.py taskflow/engines/action_engine/compiler.py taskflow/engines/action_engine/completer.py taskflow/engines/action_engine/deciders.py taskflow/engines/action_engine/engine.py taskflow/engines/action_engine/executor.py taskflow/engines/action_engine/process_executor.py taskflow/engines/action_engine/runtime.py taskflow/engines/action_engine/scheduler.py taskflow/engines/action_engine/scopes.py taskflow/engines/action_engine/selector.py taskflow/engines/action_engine/traversal.py taskflow/engines/action_engine/actions/__init__.py taskflow/engines/action_engine/actions/base.py taskflow/engines/action_engine/actions/retry.py taskflow/engines/action_engine/actions/task.py taskflow/engines/worker_based/__init__.py taskflow/engines/worker_based/dispatcher.py taskflow/engines/worker_based/endpoint.py taskflow/engines/worker_based/engine.py taskflow/engines/worker_based/executor.py taskflow/engines/worker_based/protocol.py taskflow/engines/worker_based/proxy.py taskflow/engines/worker_based/server.py taskflow/engines/worker_based/types.py taskflow/engines/worker_based/worker.py taskflow/examples/99_bottles.py taskflow/examples/alphabet_soup.py taskflow/examples/build_a_car.py taskflow/examples/buildsystem.py taskflow/examples/calculate_in_parallel.py taskflow/examples/calculate_linear.py taskflow/examples/create_parallel_volume.py taskflow/examples/delayed_return.py taskflow/examples/distance_calculator.py taskflow/examples/dump_memory_backend.py taskflow/examples/echo_listener.py taskflow/examples/example_utils.py taskflow/examples/fake_billing.py taskflow/examples/graph_flow.py taskflow/examples/hello_world.py taskflow/examples/jobboard_produce_consume_colors.py taskflow/examples/parallel_table_multiply.py taskflow/examples/persistence_example.py taskflow/examples/pseudo_scoping.out.txt taskflow/examples/pseudo_scoping.py taskflow/examples/resume_from_backend.out.txt taskflow/examples/resume_from_backend.py taskflow/examples/resume_many_flows.out.txt taskflow/examples/resume_many_flows.py taskflow/examples/resume_vm_boot.py taskflow/examples/resume_volume_create.py taskflow/examples/retry_flow.out.txt taskflow/examples/retry_flow.py taskflow/examples/reverting_linear.out.txt taskflow/examples/reverting_linear.py taskflow/examples/run_by_iter.out.txt taskflow/examples/run_by_iter.py taskflow/examples/run_by_iter_enumerate.out.txt taskflow/examples/run_by_iter_enumerate.py taskflow/examples/share_engine_thread.py taskflow/examples/simple_linear.out.txt taskflow/examples/simple_linear.py 
taskflow/examples/simple_linear_listening.out.txt taskflow/examples/simple_linear_listening.py taskflow/examples/simple_linear_pass.out.txt taskflow/examples/simple_linear_pass.py taskflow/examples/simple_map_reduce.py taskflow/examples/switch_graph_flow.py taskflow/examples/timing_listener.py taskflow/examples/tox_conductor.py taskflow/examples/wbe_event_sender.py taskflow/examples/wbe_mandelbrot.out.txt taskflow/examples/wbe_mandelbrot.py taskflow/examples/wbe_simple_linear.out.txt taskflow/examples/wbe_simple_linear.py taskflow/examples/wrapped_exception.py taskflow/examples/resume_many_flows/my_flows.py taskflow/examples/resume_many_flows/resume_all.py taskflow/examples/resume_many_flows/run_flow.py taskflow/jobs/__init__.py taskflow/jobs/base.py taskflow/jobs/backends/__init__.py taskflow/jobs/backends/impl_redis.py taskflow/jobs/backends/impl_zookeeper.py taskflow/listeners/__init__.py taskflow/listeners/base.py taskflow/listeners/capturing.py taskflow/listeners/claims.py taskflow/listeners/logging.py taskflow/listeners/printing.py taskflow/listeners/timing.py taskflow/patterns/__init__.py taskflow/patterns/graph_flow.py taskflow/patterns/linear_flow.py taskflow/patterns/unordered_flow.py taskflow/persistence/__init__.py taskflow/persistence/base.py taskflow/persistence/models.py taskflow/persistence/path_based.py taskflow/persistence/backends/__init__.py taskflow/persistence/backends/impl_dir.py taskflow/persistence/backends/impl_memory.py taskflow/persistence/backends/impl_sqlalchemy.py taskflow/persistence/backends/impl_zookeeper.py taskflow/persistence/backends/sqlalchemy/__init__.py taskflow/persistence/backends/sqlalchemy/migration.py taskflow/persistence/backends/sqlalchemy/tables.py taskflow/persistence/backends/sqlalchemy/alembic/README taskflow/persistence/backends/sqlalchemy/alembic/alembic.ini taskflow/persistence/backends/sqlalchemy/alembic/env.py taskflow/persistence/backends/sqlalchemy/alembic/script.py.mako taskflow/persistence/backends/sqlalchemy/alembic/versions/0bc3e1a3c135_set_result_meduimtext_type.py taskflow/persistence/backends/sqlalchemy/alembic/versions/14b227d79a87_add_intention_column.py taskflow/persistence/backends/sqlalchemy/alembic/versions/1c783c0c2875_replace_exception_an.py taskflow/persistence/backends/sqlalchemy/alembic/versions/1cea328f0f65_initial_logbook_deta.py taskflow/persistence/backends/sqlalchemy/alembic/versions/2ad4984f2864_switch_postgres_to_json_native.py taskflow/persistence/backends/sqlalchemy/alembic/versions/3162c0f3f8e4_add_revert_results_and_revert_failure_.py taskflow/persistence/backends/sqlalchemy/alembic/versions/589dccdf2b6e_rename_taskdetails_to_atomdetails.py taskflow/persistence/backends/sqlalchemy/alembic/versions/6df9422fcb43_fix_flowdetails_meta_size.py taskflow/persistence/backends/sqlalchemy/alembic/versions/84d6e888850_add_task_detail_type.py taskflow/persistence/backends/sqlalchemy/alembic/versions/README taskflow/tests/__init__.py taskflow/tests/test_examples.py taskflow/tests/utils.py taskflow/tests/unit/__init__.py taskflow/tests/unit/test_arguments_passing.py taskflow/tests/unit/test_check_transition.py taskflow/tests/unit/test_conductors.py taskflow/tests/unit/test_deciders.py taskflow/tests/unit/test_engine_helpers.py taskflow/tests/unit/test_engines.py taskflow/tests/unit/test_exceptions.py taskflow/tests/unit/test_failure.py taskflow/tests/unit/test_flow_dependencies.py taskflow/tests/unit/test_formatters.py taskflow/tests/unit/test_functor_task.py taskflow/tests/unit/test_listeners.py 
taskflow/tests/unit/test_mapfunctor_task.py taskflow/tests/unit/test_notifier.py taskflow/tests/unit/test_progress.py taskflow/tests/unit/test_reducefunctor_task.py taskflow/tests/unit/test_retries.py taskflow/tests/unit/test_states.py taskflow/tests/unit/test_storage.py taskflow/tests/unit/test_suspend.py taskflow/tests/unit/test_task.py taskflow/tests/unit/test_types.py taskflow/tests/unit/test_utils.py taskflow/tests/unit/test_utils_async_utils.py taskflow/tests/unit/test_utils_binary.py taskflow/tests/unit/test_utils_iter_utils.py taskflow/tests/unit/test_utils_threading_utils.py taskflow/tests/unit/action_engine/__init__.py taskflow/tests/unit/action_engine/test_builder.py taskflow/tests/unit/action_engine/test_compile.py taskflow/tests/unit/action_engine/test_creation.py taskflow/tests/unit/action_engine/test_process_executor.py taskflow/tests/unit/action_engine/test_scoping.py taskflow/tests/unit/jobs/__init__.py taskflow/tests/unit/jobs/base.py taskflow/tests/unit/jobs/test_entrypoint.py taskflow/tests/unit/jobs/test_redis_job.py taskflow/tests/unit/jobs/test_zk_job.py taskflow/tests/unit/patterns/__init__.py taskflow/tests/unit/patterns/test_graph_flow.py taskflow/tests/unit/patterns/test_linear_flow.py taskflow/tests/unit/patterns/test_unordered_flow.py taskflow/tests/unit/persistence/__init__.py taskflow/tests/unit/persistence/base.py taskflow/tests/unit/persistence/test_dir_persistence.py taskflow/tests/unit/persistence/test_memory_persistence.py taskflow/tests/unit/persistence/test_sql_persistence.py taskflow/tests/unit/persistence/test_zk_persistence.py taskflow/tests/unit/worker_based/__init__.py taskflow/tests/unit/worker_based/test_creation.py taskflow/tests/unit/worker_based/test_dispatcher.py taskflow/tests/unit/worker_based/test_endpoint.py taskflow/tests/unit/worker_based/test_executor.py taskflow/tests/unit/worker_based/test_message_pump.py taskflow/tests/unit/worker_based/test_pipeline.py taskflow/tests/unit/worker_based/test_protocol.py taskflow/tests/unit/worker_based/test_proxy.py taskflow/tests/unit/worker_based/test_server.py taskflow/tests/unit/worker_based/test_types.py taskflow/tests/unit/worker_based/test_worker.py taskflow/types/__init__.py taskflow/types/entity.py taskflow/types/failure.py taskflow/types/graph.py taskflow/types/latch.py taskflow/types/notifier.py taskflow/types/sets.py taskflow/types/timing.py taskflow/types/tree.py taskflow/utils/__init__.py taskflow/utils/async_utils.py taskflow/utils/banner.py taskflow/utils/eventlet_utils.py taskflow/utils/iter_utils.py taskflow/utils/kazoo_utils.py taskflow/utils/kombu_utils.py taskflow/utils/misc.py taskflow/utils/mixins.py taskflow/utils/persistence_utils.py taskflow/utils/redis_utils.py taskflow/utils/schema_utils.py taskflow/utils/threading_utils.py tools/clear_zk.sh tools/env_builder.sh tools/pretty_tox.sh tools/schema_generator.py tools/speed_test.py tools/state_graph.py tools/subunit_trace.py tools/test-setup.sh tools/update_states.sh././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/dependency_links.txt0000664000175000017500000000000100000000000022774 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/entry_points.txt0000664000175000017500000000223700000000000022230 0ustar00zuulzuul00000000000000[taskflow.conductors] blocking = 
taskflow.conductors.backends.impl_blocking:BlockingConductor nonblocking = taskflow.conductors.backends.impl_nonblocking:NonBlockingConductor [taskflow.engines] default = taskflow.engines.action_engine.engine:SerialActionEngine parallel = taskflow.engines.action_engine.engine:ParallelActionEngine serial = taskflow.engines.action_engine.engine:SerialActionEngine worker-based = taskflow.engines.worker_based.engine:WorkerBasedActionEngine workers = taskflow.engines.worker_based.engine:WorkerBasedActionEngine [taskflow.jobboards] redis = taskflow.jobs.backends.impl_redis:RedisJobBoard zookeeper = taskflow.jobs.backends.impl_zookeeper:ZookeeperJobBoard [taskflow.persistence] dir = taskflow.persistence.backends.impl_dir:DirBackend file = taskflow.persistence.backends.impl_dir:DirBackend memory = taskflow.persistence.backends.impl_memory:MemoryBackend mysql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend postgresql = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend sqlite = taskflow.persistence.backends.impl_sqlalchemy:SQLAlchemyBackend zookeeper = taskflow.persistence.backends.impl_zookeeper:ZkBackend ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/not-zip-safe0000664000175000017500000000000100000000000021154 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/pbr.json0000664000175000017500000000005700000000000020406 0ustar00zuulzuul00000000000000{"git_version": "185e6c4a", "is_release": true}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/requires.txt0000664000175000017500000000117500000000000021332 0ustar00zuulzuul00000000000000automaton>=1.9.0 cachetools>=2.0.0 fasteners>=0.7.0 futurist>=1.2.0 jsonschema>=3.2.0 networkx>=2.1.0 oslo.serialization!=2.19.1,>=2.18.0 oslo.utils>=3.33.0 pbr!=2.1.0,>=2.0.0 pydot>=1.2.4 six>=1.10.0 stevedore>=1.20.0 tenacity>=6.0.0 [database] PyMySQL>=0.7.6 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 SQLAlchemy-Utils>=0.30.11 alembic>=0.8.10 psycopg2>=2.8.0 [eventlet] eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 [redis] redis>=2.10.0 [test] hacking<0.11,>=0.10.0 mock>=2.0.0 oslotest>=3.2.0 pydotplus>=2.0.2 stestr>=2.0.0 testscenarios>=0.4 testtools>=2.2.0 [workers] kombu>=4.3.0 [zookeeper] kazoo>=2.6.0 zake>=0.1.6 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397810.0 taskflow-4.6.4/taskflow.egg-info/top_level.txt0000664000175000017500000000001100000000000021450 0ustar00zuulzuul00000000000000taskflow ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/test-requirements.txt0000664000175000017500000000135300000000000017645 0ustar00zuulzuul00000000000000# NOTE(dhellmann): This file contains duplicate dependency information # that is also present in the "extras" section of setup.cfg, and the # entries need to be kept consistent. 
# zookeeper kazoo>=2.6.0 # Apache-2.0 zake>=0.1.6 # Apache-2.0 # redis redis>=2.10.0 # MIT # workers kombu>=4.3.0 # BSD # eventlet eventlet!=0.18.3,!=0.20.1,!=0.21.0,>=0.18.2 # MIT # database SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT alembic>=0.8.10 # MIT SQLAlchemy-Utils>=0.30.11 # BSD License PyMySQL>=0.7.6 # MIT License psycopg2>=2.8.0 # LGPL/ZPL # test pydotplus>=2.0.2 # MIT License hacking<2.1,>=2.0 oslotest>=3.2.0 # Apache-2.0 testtools>=2.2.0 # MIT testscenarios>=0.4 # Apache-2.0/BSD stestr>=2.0.0 # Apache-2.0 pre-commit>=2.6.0 # MIT ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1644397810.6560426 taskflow-4.6.4/tools/0000775000175000017500000000000000000000000014542 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/clear_zk.sh0000775000175000017500000000055700000000000016702 0ustar00zuulzuul00000000000000#!/bin/bash # This requires https://pypi.org/project/zk_shell/ to be installed... set -e ZK_HOSTS=${ZK_HOSTS:-localhost:2181} TF_PATH=${TF_PATH:-taskflow} for path in `zk-shell --run-once "ls" $ZK_HOSTS`; do if [[ $path == ${TF_PATH}* ]]; then echo "Removing (recursively) path \"$path\"" zk-shell --run-once "rmr $path" $ZK_HOSTS fi done ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/env_builder.sh0000664000175000017500000000754500000000000017407 0ustar00zuulzuul00000000000000#!/bin/bash # This sets up a developer testing environment that can be used with various # openstack projects (mainly for taskflow, but for others it should work # fine also). # # Some things to note: # # - The mysql server that is setup is *not* secured. # - The zookeeper server that is setup is *not* secured. # - The downloads from external services are *not* certificate verified. # # Overall it should only be used for testing/developer environments (it was # tested on ubuntu 14.04 and rhel 6.x, for other distributions some tweaking # may be required). set -e set -u # If on a debian environment this will make apt-get *not* prompt for passwords. export DEBIAN_FRONTEND=noninteractive # http://www.unixcl.com/2009/03/print-text-in-style-box-bash-scripting.html Box () { str="$@" len=$((${#str}+4)) for i in $(seq $len); do echo -n '*'; done; echo; echo "* "$str" *"; for i in $(seq $len); do echo -n '*'; done; echo } Box "Installing system packages..." if [ -f "/etc/redhat-release" ]; then yum install -y -q mysql-devel postgresql-devel mysql-server \ wget gcc make autoconf mysqld="mysqld" zookeeperd="zookeeper-server" elif [ -f "/etc/debian_version" ]; then apt-get -y -qq install libmysqlclient-dev mysql-server postgresql \ wget gcc make autoconf mysqld="mysql" zookeeperd="zookeeper" else echo "Unknown distribution!!" lsb_release -a exit 1 fi set +e python_27=`which python2.7` set -e build_dir=`mktemp -d` echo "Created build directory $build_dir..." cd $build_dir # Get python 2.7 installed (if it's not). if [ -z "$python_27" ]; then py_version="2.7.9" py_file="Python-$py_version.tgz" py_base_file=${py_file%.*} py_url="https://www.python.org/ftp/python/$py_version/$py_file" Box "Building python 2.7 (version $py_version)..." wget $py_url -O "$build_dir/$py_file" --no-check-certificate -nv tar -xf "$py_file" cd $build_dir/$py_base_file ./configure --disable-ipv6 -q make --quiet Box "Installing python 2.7 (version $py_version)..." 
make altinstall >/dev/null 2>&1 python_27=/usr/local/bin/python2.7 fi set +e pip_27=`which pip2.7` set -e if [ -z "$pip_27" ]; then Box "Installing pip..." wget "https://bootstrap.pypa.io/get-pip.py" \ -O "$build_dir/get-pip.py" --no-check-certificate -nv $python_27 "$build_dir/get-pip.py" >/dev/null 2>&1 pip_27=/usr/local/bin/pip2.7 fi Box "Installing tox..." $pip_27 install -q 'tox>=1.6.1,<1.7.0' Box "Setting up mysql..." service $mysqld restart /usr/bin/mysql --user="root" --execute='CREATE DATABASE 'openstack_citest'' cat << EOF > $build_dir/mysql.sql CREATE USER 'openstack_citest'@'localhost' IDENTIFIED BY 'openstack_citest'; CREATE USER 'openstack_citest' IDENTIFIED BY 'openstack_citest'; GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'localhost'; GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'; FLUSH PRIVILEGES; EOF /usr/bin/mysql --user="root" < $build_dir/mysql.sql # TODO(harlowja): configure/setup postgresql... Box "Installing zookeeper..." if [ -f "/etc/redhat-release" ]; then # RH doesn't ship zookeeper (still...) zk_file="cloudera-cdh-4-0.x86_64.rpm" zk_url="http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/$zk_file" wget $zk_url -O $build_dir/$zk_file --no-check-certificate -nv yum -y -q --nogpgcheck localinstall $build_dir/$zk_file yum -y -q install zookeeper-server java service zookeeper-server stop service zookeeper-server init --force mkdir -pv /var/lib/zookeeper python -c "import random; print random.randint(1, 16384)" > /var/lib/zookeeper/myid elif [ -f "/etc/debian_version" ]; then apt-get install -y -qq zookeeperd else echo "Unknown distribution!!" lsb_release -a exit 1 fi Box "Starting zookeeper..." service $zookeeperd restart service $zookeeperd status ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/pretty_tox.sh0000775000175000017500000000067400000000000017331 0ustar00zuulzuul00000000000000#!/usr/bin/env bash set -o pipefail TESTRARGS=$1 # --until-failure is not compatible with --subunit see: # # https://bugs.launchpad.net/testrepository/+bug/1411804 # # this work around exists until that is addressed if [[ "$TESTARGS" =~ "until-failure" ]]; then python setup.py testr --slowest --testr-args="$TESTRARGS" else python setup.py testr --slowest --testr-args="--subunit $TESTRARGS" | $(dirname $0)/subunit_trace.py -f fi ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/schema_generator.py0000775000175000017500000000574600000000000020441 0ustar00zuulzuul00000000000000#!/usr/bin/env python # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib import re import six import tabulate from taskflow.persistence.backends import impl_sqlalchemy NAME_MAPPING = { 'flowdetails': 'Flow details', 'atomdetails': 'Atom details', 'logbooks': 'Logbooks', } CONN_CONF = { # This uses an in-memory database (aka nothing is written) "connection": "sqlite://", } TABLE_QUERY = "SELECT name, sql FROM sqlite_master WHERE type='table'" SCHEMA_QUERY = "pragma table_info(%s)" def to_bool_string(val): if isinstance(val, (int, bool)): return six.text_type(bool(val)) if not isinstance(val, six.string_types): val = six.text_type(val) if val.lower() in ('0', 'false'): return 'False' if val.lower() in ('1', 'true'): return 'True' raise ValueError("Unknown boolean input '%s'" % (val)) def main(): backend = impl_sqlalchemy.SQLAlchemyBackend(CONN_CONF) with contextlib.closing(backend) as backend: # Make the schema exist... with contextlib.closing(backend.get_connection()) as conn: conn.upgrade() # Now make a prettier version of that schema... tables = backend.engine.execute(TABLE_QUERY) table_names = [r[0] for r in tables] for i, table_name in enumerate(table_names): pretty_name = NAME_MAPPING.get(table_name, table_name) print("*" + pretty_name + "*") # http://www.sqlite.org/faq.html#q24 table_name = table_name.replace("\"", "\"\"") rows = [] for r in backend.engine.execute(SCHEMA_QUERY % table_name): # Cut out the numbers from things like VARCHAR(12) since # this is not very useful to show users who just want to # see the basic schema... row_type = re.sub(r"\(.*?\)", "", r['type']).strip() if not row_type: raise ValueError("Row %s of table '%s' was empty after" " cleaning" % (r['cid'], table_name)) rows.append([r['name'], row_type, to_bool_string(r['pk'])]) contents = tabulate.tabulate( rows, headers=['Name', 'Type', 'Primary Key'], tablefmt="rst") print("\n%s" % contents.strip()) if i + 1 != len(table_names): print("") if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/speed_test.py0000664000175000017500000000771500000000000017265 0ustar00zuulzuul00000000000000# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Profile a simple engine build/load/compile/prepare/validate/run. 
""" import argparse import cProfile as profiler import pstats from oslo_utils import timeutils import six from six.moves import range as compat_range from taskflow import engines from taskflow.patterns import linear_flow as lf from taskflow import task def print_header(name): if name: header_footer = "-" * len(name) print(header_footer) print(name) print(header_footer) class ProfileIt(object): stats_ordering = ('cumulative', 'calls',) def __init__(self, name, args): self.name = name self.profile = profiler.Profile() self.args = args def __enter__(self): self.profile.enable() def __exit__(self, exc_tp, exc_v, exc_tb): self.profile.disable() buf = six.StringIO() ps = pstats.Stats(self.profile, stream=buf) ps = ps.sort_stats(*self.stats_ordering) percent_limit = max(0.0, max(1.0, self.args.limit / 100.0)) ps.print_stats(percent_limit) print_header(self.name) needs_newline = False for line in buf.getvalue().splitlines(): line = line.lstrip() if line: print(line) needs_newline = True if needs_newline: print("") class TimeIt(object): def __init__(self, name, args): self.watch = timeutils.StopWatch() self.name = name self.args = args def __enter__(self): self.watch.restart() def __exit__(self, exc_tp, exc_v, exc_tb): self.watch.stop() duration = self.watch.elapsed() print_header(self.name) print("- Took %0.3f seconds to run" % (duration)) class DummyTask(task.Task): def execute(self): pass def main(): parser = argparse.ArgumentParser(description=__doc__) parser.add_argument('--profile', "-p", dest='profile', action='store_true', default=False, help='profile instead of gather timing' ' (default: False)') parser.add_argument('--dummies', "-d", dest='dummies', action='store', type=int, default=100, metavar="", help='how many dummy/no-op tasks to inject' ' (default: 100)') parser.add_argument('--limit', '-l', dest='limit', action='store', type=float, default=100.0, metavar="", help='percentage of profiling output to show' ' (default: 100%%)') args = parser.parse_args() if args.profile: ctx_manager = ProfileIt else: ctx_manager = TimeIt dummy_am = max(0, args.dummies) with ctx_manager("Building linear flow with %s tasks" % dummy_am, args): f = lf.Flow("root") for i in compat_range(0, dummy_am): f.add(DummyTask(name="dummy_%s" % i)) with ctx_manager("Loading", args): e = engines.load(f) with ctx_manager("Compiling", args): e.compile() with ctx_manager("Preparing", args): e.prepare() with ctx_manager("Validating", args): e.validate() with ctx_manager("Running", args): e.run() if __name__ == "__main__": main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/state_graph.py0000775000175000017500000001601500000000000017423 0ustar00zuulzuul00000000000000#!/usr/bin/env python # Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from unittest import mock import optparse import os import sys top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) from automaton.converters import pydot from automaton import machines from taskflow.engines.action_engine import builder from taskflow.engines.worker_based import protocol from taskflow import states # This is just needed to get at the machine object (we will not # actually be running it...). class DummyRuntime(object): def __init__(self): self.analyzer = mock.MagicMock() self.completer = mock.MagicMock() self.scheduler = mock.MagicMock() self.storage = mock.MagicMock() def make_machine(start_state, transitions, event_name_cb): machine = machines.FiniteMachine() machine.add_state(start_state) machine.default_start_state = start_state for (start_state, end_state) in transitions: if start_state not in machine: machine.add_state(start_state) if end_state not in machine: machine.add_state(end_state) event = event_name_cb(start_state, end_state) machine.add_transition(start_state, end_state, event) return machine def main(): parser = optparse.OptionParser() parser.add_option("-f", "--file", dest="filename", help="write svg to FILE", metavar="FILE") parser.add_option("-t", "--tasks", dest="tasks", action='store_true', help="use task state transitions", default=False) parser.add_option("-r", "--retries", dest="retries", action='store_true', help="use retry state transitions", default=False) parser.add_option("-e", "--engines", dest="engines", action='store_true', help="use engine state transitions", default=False) parser.add_option("-w", "--wbe-requests", dest="wbe_requests", action='store_true', help="use wbe request transitions", default=False) parser.add_option("-j", "--jobs", dest="jobs", action='store_true', help="use job transitions", default=False) parser.add_option("--flow", dest="flow", action='store_true', help="use flow transitions", default=False) parser.add_option("-T", "--format", dest="format", help="output in given format", default='svg') (options, args) = parser.parse_args() if options.filename is None: options.filename = 'states.%s' % options.format types = [ options.engines, options.retries, options.tasks, options.wbe_requests, options.jobs, options.flow, ] provided = sum([int(i) for i in types]) if provided > 1: parser.error("Only one of task/retry/engines/wbe requests/jobs/flow" " may be specified.") if provided == 0: parser.error("One of task/retry/engines/wbe requests/jobs/flow" " must be specified.") event_name_cb = lambda start_state, end_state: "on_%s" % end_state.lower() internal_states = list() ordering = 'in' if options.tasks: source_type = "Tasks" source = make_machine(states.PENDING, list(states._ALLOWED_TASK_TRANSITIONS), event_name_cb) elif options.retries: source_type = "Retries" source = make_machine(states.PENDING, list(states._ALLOWED_RETRY_TRANSITIONS), event_name_cb) elif options.flow: source_type = "Flow" source = make_machine(states.PENDING, list(states._ALLOWED_FLOW_TRANSITIONS), event_name_cb) elif options.engines: source_type = "Engines" b = builder.MachineBuilder(DummyRuntime(), mock.MagicMock()) source, memory = b.build() internal_states.extend(builder.META_STATES) ordering = 'out' elif options.wbe_requests: source_type = "WBE requests" source = make_machine(protocol.WAITING, list(protocol._ALLOWED_TRANSITIONS), event_name_cb) elif options.jobs: source_type = "Jobs" source = make_machine(states.UNCLAIMED, list(states._ALLOWED_JOB_TRANSITIONS), event_name_cb) graph_attrs = { 'ordering': 
ordering, } graph_name = "%s states" % source_type def node_attrs_cb(state): node_color = None if state in internal_states: node_color = 'blue' if state in (states.FAILURE, states.REVERT_FAILURE): node_color = 'red' if state == states.REVERTED: node_color = 'darkorange' if state in (states.SUCCESS, states.COMPLETE): node_color = 'green' node_attrs = {} if node_color: node_attrs['fontcolor'] = node_color return node_attrs def edge_attrs_cb(start_state, on_event, end_state): edge_attrs = {} if options.engines: edge_attrs['label'] = on_event.replace("_", " ").strip() if 'reverted' in on_event: edge_attrs['fontcolor'] = 'darkorange' if 'fail' in on_event: edge_attrs['fontcolor'] = 'red' if 'success' in on_event: edge_attrs['fontcolor'] = 'green' return edge_attrs g = pydot.convert(source, graph_name, graph_attrs=graph_attrs, node_attrs_cb=node_attrs_cb, edge_attrs_cb=edge_attrs_cb) print("*" * len(graph_name)) print(graph_name) print("*" * len(graph_name)) print(source.pformat()) print(g.to_string().strip()) g.write(options.filename, format=options.format) print("Created %s at '%s'" % (options.format, options.filename)) # To make the svg more pretty use the following: # $ xsltproc ../diagram-tools/notugly.xsl ./states.svg > pretty-states.svg # Get diagram-tools from https://github.com/vidarh/diagram-tools.git if __name__ == '__main__': main() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/subunit_trace.py0000775000175000017500000002451400000000000017774 0ustar00zuulzuul00000000000000#!/usr/bin/env python # Copyright 2014 Hewlett-Packard Development Company, L.P. # Copyright 2014 Samsung Electronics # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
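# Normally used as a stream filter rather than run directly; for example
# tools/pretty_tox.sh pipes testr's subunit output through this script
# (python setup.py testr --testr-args="--subunit ..." | subunit_trace.py -f).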
"""Trace a subunit stream in reasonable detail and high accuracy.""" import argparse import functools import os import re import sys import mimeparse import subunit import testtools DAY_SECONDS = 60 * 60 * 24 FAILS = [] RESULTS = {} class Starts(testtools.StreamResult): def __init__(self, output): super(Starts, self).__init__() self._output = output def startTestRun(self): self._neednewline = False self._emitted = set() def status(self, test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None): super(Starts, self).status( test_id, test_status, test_tags=test_tags, runnable=runnable, file_name=file_name, file_bytes=file_bytes, eof=eof, mime_type=mime_type, route_code=route_code, timestamp=timestamp) if not test_id: if not file_bytes: return if not mime_type or mime_type == 'test/plain;charset=utf8': mime_type = 'text/plain; charset=utf-8' primary, sub, parameters = mimeparse.parse_mime_type(mime_type) content_type = testtools.content_type.ContentType( primary, sub, parameters) content = testtools.content.Content( content_type, lambda: [file_bytes]) text = content.as_text() if text and text[-1] not in '\r\n': self._neednewline = True self._output.write(text) elif test_status == 'inprogress' and test_id not in self._emitted: if self._neednewline: self._neednewline = False self._output.write('\n') worker = '' for tag in test_tags or (): if tag.startswith('worker-'): worker = '(' + tag[7:] + ') ' if timestamp: timestr = timestamp.isoformat() else: timestr = '' self._output.write('%s: %s%s [start]\n' % (timestr, worker, test_id)) self._emitted.add(test_id) def cleanup_test_name(name, strip_tags=True, strip_scenarios=False): """Clean up the test name for display. By default we strip out the tags in the test because they don't help us in identifying the test that is run to it's result. Make it possible to strip out the testscenarios information (not to be confused with tempest scenarios) however that's often needed to identify generated negative tests. """ if strip_tags: tags_start = name.find('[') tags_end = name.find(']') if tags_start > 0 and tags_end > tags_start: newname = name[:tags_start] newname += name[tags_end + 1:] name = newname if strip_scenarios: tags_start = name.find('(') tags_end = name.find(')') if tags_start > 0 and tags_end > tags_start: newname = name[:tags_start] newname += name[tags_end + 1:] name = newname return name def get_duration(timestamps): start, end = timestamps if not start or not end: duration = '' else: delta = end - start duration = '%d.%06ds' % ( delta.days * DAY_SECONDS + delta.seconds, delta.microseconds) return duration def find_worker(test): for tag in test['tags']: if tag.startswith('worker-'): return int(tag[7:]) return 'NaN' # Print out stdout/stderr if it exists, always def print_attachments(stream, test, all_channels=False): """Print out subunit attachments. Print out subunit attachments that contain content. This runs in 2 modes, one for successes where we print out just stdout and stderr, and an override that dumps all the attachments. 
""" channels = ('stdout', 'stderr') for name, detail in test['details'].items(): # NOTE(sdague): the subunit names are a little crazy, and actually # are in the form pythonlogging:'' (with the colon and quotes) name = name.split(':')[0] if detail.content_type.type == 'test': detail.content_type.type = 'text' if (all_channels or name in channels) and detail.as_text(): title = "Captured %s:" % name stream.write("\n%s\n%s\n" % (title, ('~' * len(title)))) # indent attachment lines 4 spaces to make them visually # offset for line in detail.as_text().split('\n'): stream.write(" %s\n" % line) def show_outcome(stream, test, print_failures=False, failonly=False): global RESULTS status = test['status'] # TODO(sdague): ask lifeless why on this? if status == 'exists': return worker = find_worker(test) name = cleanup_test_name(test['id']) duration = get_duration(test['timestamps']) if worker not in RESULTS: RESULTS[worker] = [] RESULTS[worker].append(test) # don't count the end of the return code as a fail if name == 'process-returncode': return if status == 'fail': FAILS.append(test) stream.write('{%s} %s [%s] ... FAILED\n' % ( worker, name, duration)) if not print_failures: print_attachments(stream, test, all_channels=True) elif not failonly: if status == 'success': stream.write('{%s} %s [%s] ... ok\n' % ( worker, name, duration)) print_attachments(stream, test) elif status == 'skip': stream.write('{%s} %s ... SKIPPED: %s\n' % ( worker, name, test['details']['reason'].as_text())) else: stream.write('{%s} %s [%s] ... %s\n' % ( worker, name, duration, test['status'])) if not print_failures: print_attachments(stream, test, all_channels=True) stream.flush() def print_fails(stream): """Print summary failure report. Currently unused, however there remains debate on inline vs. at end reporting, so leave the utility function for later use. """ if not FAILS: return stream.write("\n==============================\n") stream.write("Failed %s tests - output below:" % len(FAILS)) stream.write("\n==============================\n") for f in FAILS: stream.write("\n%s\n" % f['id']) stream.write("%s\n" % ('-' * len(f['id']))) print_attachments(stream, f, all_channels=True) stream.write('\n') def count_tests(key, value): count = 0 for k, v in RESULTS.items(): for item in v: if key in item: if re.search(value, item[key]): count += 1 return count def run_time(): runtime = 0.0 for k, v in RESULTS.items(): for test in v: runtime += float(get_duration(test['timestamps']).strip('s')) return runtime def worker_stats(worker): tests = RESULTS[worker] num_tests = len(tests) delta = tests[-1]['timestamps'][1] - tests[0]['timestamps'][0] return num_tests, delta def print_summary(stream): stream.write("\n======\nTotals\n======\n") stream.write("Run: %s in %s sec.\n" % (count_tests('status', '.*'), run_time())) stream.write(" - Passed: %s\n" % count_tests('status', 'success')) stream.write(" - Skipped: %s\n" % count_tests('status', 'skip')) stream.write(" - Failed: %s\n" % count_tests('status', 'fail')) # we could have no results, especially as we filter out the process-codes if RESULTS: stream.write("\n==============\nWorker Balance\n==============\n") for w in range(max(RESULTS.keys()) + 1): if w not in RESULTS: stream.write( " - WARNING: missing Worker %s! 
" "Race in testr accounting.\n" % w) else: num, time = worker_stats(w) stream.write(" - Worker %s (%s tests) => %ss\n" % (w, num, time)) def parse_args(): parser = argparse.ArgumentParser() parser.add_argument('--no-failure-debug', '-n', action='store_true', dest='print_failures', help='Disable printing failure ' 'debug information in realtime') parser.add_argument('--fails', '-f', action='store_true', dest='post_fails', help='Print failure debug ' 'information after the stream is proccesed') parser.add_argument('--failonly', action='store_true', dest='failonly', help="Don't print success items", default=( os.environ.get('TRACE_FAILONLY', False) is not False)) return parser.parse_args() def main(): args = parse_args() stream = subunit.ByteStreamToStreamResult( sys.stdin, non_subunit_name='stdout') starts = Starts(sys.stdout) outcomes = testtools.StreamToDict( functools.partial(show_outcome, sys.stdout, print_failures=args.print_failures, failonly=args.failonly)) summary = testtools.StreamSummary() result = testtools.CopyStreamResult([starts, outcomes, summary]) result.startTestRun() try: stream.run(result) finally: result.stopTestRun() if count_tests('status', '.*') == 0: print("The test run didn't actually run any tests") return 1 if args.post_fails: print_fails(sys.stdout) print_summary(sys.stdout) return (0 if summary.wasSuccessful() else 1) if __name__ == '__main__': sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/test-setup.sh0000775000175000017500000000353300000000000017222 0ustar00zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should setup their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # a anonymous user exists, then it matches first for connections and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; CREATE USER '$DB_USER'@'%' IDENTIFIED BY '$DB_PW'; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' WITH GRANT OPTION;" # Now create our database. 
mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tools/update_states.sh0000775000175000017500000000276100000000000017754 0ustar00zuulzuul00000000000000#!/bin/bash set -u xsltproc=`which xsltproc` if [ -z "$xsltproc" ]; then echo "Please install xsltproc before continuing." exit 1 fi set -e if [ ! -d "$PWD/.diagram-tools" ]; then git clone "https://github.com/vidarh/diagram-tools.git" "$PWD/.diagram-tools" fi script_dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ) img_dir="$script_dir/../doc/source/img" echo "---- Updating task state diagram ----" python $script_dir/state_graph.py -t -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/task_states.svg echo "---- Updating flow state diagram ----" python $script_dir/state_graph.py --flow -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/flow_states.svg echo "---- Updating engine state diagram ----" python $script_dir/state_graph.py -e -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/engine_states.svg echo "---- Updating retry state diagram ----" python $script_dir/state_graph.py -r -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/retry_states.svg echo "---- Updating wbe request state diagram ----" python $script_dir/state_graph.py -w -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/wbe_request_states.svg echo "---- Updating job state diagram ----" python $script_dir/state_graph.py -j -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/job_states.svg ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1644397774.0 taskflow-4.6.4/tox.ini0000664000175000017500000000435300000000000014722 0ustar00zuulzuul00000000000000[tox] minversion = 3.1.0 envlist = cover,docs,pep8,py3,pylint,update-states ignore_basepython_conflict = True [testenv] basepython = python3 setenv = # We need to install a bit more than just `test' because those drivers have # custom tests that we always run deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt commands = stestr run {posargs} [testenv:docs] deps = {[testenv]deps} -r{toxinidir}/doc/requirements.txt commands = sphinx-build -E -W -b html doc/source doc/build/html doc8 doc/source [testenv:update-states] deps = {[testenv]deps} pydot3 commands = {toxinidir}/tools/update_states.sh [testenv:pep8] commands = pre-commit run -a [testenv:pylint] deps = 
{[testenv]deps} pylint==0.26.0 commands = pylint --rcfile=pylintrc taskflow [testenv:cover] deps = {[testenv]deps} coverage>=3.6 setenv = {[testenv]setenv} PYTHON=coverage run --source taskflow --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml [testenv:venv] commands = {posargs} [flake8] builtins = _ exclude = .venv,.tox,dist,doc,*egg,.git,build,tools ignore = E305,E402,E721,E731,E741,W503,W504 [hacking] import_exceptions = six.moves taskflow.test.mock unittest.mock [doc8] # Settings for doc8: # Ignore doc/source/user/history.rst, it includes generated ChangeLog # file that fails with "D000 Inline emphasis start-string without # end-string." ignore-path = doc/*/target,doc/*/build* [testenv:releasenotes] deps = -r{toxinidir}/doc/requirements.txt commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [testenv:bindep] # Do not install any requirements. We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files, and develop mode disabled # explicitly to avoid unnecessarily installing the checked-out repo too (this # further relies on "tox.skipsdist = True" above). deps = bindep commands = bindep test usedevelop = False
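
# Typical local usage (assuming tox>=3.1.0 is installed, per "minversion"
# above):
#   tox -e pep8    # style checks via pre-commit
#   tox -e py3     # unit tests through stestr
#   tox -e cover   # unit tests plus HTML/XML coverage output under ./cover
#   tox -e docs    # build the Sphinx docs and run doc8 over doc/source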